Test Report: KVM_Linux_crio 19643

17d31f5d116bbb5d9ac8f4a1c2873ea47cdfa40f:2024-09-14:36211

Test fail (29/320)

Order  Failed test  Duration (s)
33 TestAddons/parallel/Registry 74.1
34 TestAddons/parallel/Ingress 154.39
36 TestAddons/parallel/MetricsServer 323.49
164 TestMultiControlPlane/serial/StopSecondaryNode 141.89
166 TestMultiControlPlane/serial/RestartSecondaryNode 53.26
168 TestMultiControlPlane/serial/RestartClusterKeepsNodes 349.8
171 TestMultiControlPlane/serial/StopCluster 141.85
231 TestMultiNode/serial/RestartKeepsNodes 323.88
233 TestMultiNode/serial/StopMultiNode 141.29
240 TestPreload 211.44
248 TestKubernetesUpgrade 359.64
290 TestStartStop/group/old-k8s-version/serial/FirstStart 269.13
298 TestStartStop/group/no-preload/serial/Stop 139.18
301 TestStartStop/group/embed-certs/serial/Stop 138.98
304 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 12.38
305 TestStartStop/group/old-k8s-version/serial/DeployApp 0.46
306 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 105.04
308 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 12.38
312 TestStartStop/group/default-k8s-diff-port/serial/Stop 138.96
315 TestStartStop/group/old-k8s-version/serial/SecondStart 709.35
316 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 12.38
318 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 544.38
319 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 544.3
320 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 544.44
321 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 543.59
322 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 442.31
323 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 542.69
324 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 342.6
325 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 171.47
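Any of these tests can be re-run in isolation against a local minikube checkout. A rough invocation is sketched below; the ./test/integration path and the -minikube-start-args flag follow the usual minikube repo layout and are assumptions on my part, not something recorded in this report:

    $ go test -v -timeout 90m ./test/integration \
        -run "TestAddons/parallel/Registry" \
        -minikube-start-args="--driver=kvm2 --container-runtime=crio"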
TestAddons/parallel/Registry (74.1s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:332: registry stabilized in 3.681542ms
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-66c9cd494c-jdr7n" [1fa84874-319a-4e4a-9126-b618e477b31e] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.004267479s
addons_test.go:337: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-b9ffc" [44b082a1-dd9e-4251-a141-6f0578d54a17] Running
addons_test.go:337: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.003614869s
addons_test.go:342: (dbg) Run:  kubectl --context addons-996992 delete po -l run=registry-test --now
addons_test.go:347: (dbg) Run:  kubectl --context addons-996992 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:347: (dbg) Non-zero exit: kubectl --context addons-996992 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": exit status 1 (1m0.089811864s)

-- stdout --
	pod "registry-test" deleted

                                                
                                                
-- /stdout --
** stderr ** 
	error: timed out waiting for the condition

** /stderr **
addons_test.go:349: failed to hit registry.kube-system.svc.cluster.local. args "kubectl --context addons-996992 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c \"wget --spider -S http://registry.kube-system.svc.cluster.local\"" failed: exit status 1
addons_test.go:353: expected curl response be "HTTP/1.1 200", but got *pod "registry-test" deleted
*
addons_test.go:361: (dbg) Run:  out/minikube-linux-amd64 -p addons-996992 ip
2024/09/14 16:55:59 [DEBUG] GET http://192.168.39.189:5000
addons_test.go:390: (dbg) Run:  out/minikube-linux-amd64 -p addons-996992 addons disable registry --alsologtostderr -v=1
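The failing step above is the in-cluster wget probe from the busybox pod. If the profile is still up, the same check can be repeated by hand; both commands below are reconstructed from the log lines above (the 192.168.39.189:5000 endpoint is the one polled in the DEBUG line, reachable via the profile IP) and are a sketch rather than part of the test output:

    $ kubectl --context addons-996992 run --rm registry-test --restart=Never \
        --image=gcr.io/k8s-minikube/busybox -it -- \
        sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
    $ curl -sI "http://$(out/minikube-linux-amd64 -p addons-996992 ip):5000"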
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-996992 -n addons-996992
helpers_test.go:244: <<< TestAddons/parallel/Registry FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Registry]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-996992 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-996992 logs -n 25: (1.37080269s)
helpers_test.go:252: TestAddons/parallel/Registry logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only                                                                     | download-only-119677 | jenkins | v1.34.0 | 14 Sep 24 16:43 UTC |                     |
	|         | -p download-only-119677                                                                     |                      |         |         |                     |                     |
	|         | --force --alsologtostderr                                                                   |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                                                                |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | --all                                                                                       | minikube             | jenkins | v1.34.0 | 14 Sep 24 16:44 UTC | 14 Sep 24 16:44 UTC |
	| delete  | -p download-only-119677                                                                     | download-only-119677 | jenkins | v1.34.0 | 14 Sep 24 16:44 UTC | 14 Sep 24 16:44 UTC |
	| start   | -o=json --download-only                                                                     | download-only-357716 | jenkins | v1.34.0 | 14 Sep 24 16:44 UTC |                     |
	|         | -p download-only-357716                                                                     |                      |         |         |                     |                     |
	|         | --force --alsologtostderr                                                                   |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                                                                |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | --all                                                                                       | minikube             | jenkins | v1.34.0 | 14 Sep 24 16:44 UTC | 14 Sep 24 16:44 UTC |
	| delete  | -p download-only-357716                                                                     | download-only-357716 | jenkins | v1.34.0 | 14 Sep 24 16:44 UTC | 14 Sep 24 16:44 UTC |
	| delete  | -p download-only-119677                                                                     | download-only-119677 | jenkins | v1.34.0 | 14 Sep 24 16:44 UTC | 14 Sep 24 16:44 UTC |
	| delete  | -p download-only-357716                                                                     | download-only-357716 | jenkins | v1.34.0 | 14 Sep 24 16:44 UTC | 14 Sep 24 16:44 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-539617 | jenkins | v1.34.0 | 14 Sep 24 16:44 UTC |                     |
	|         | binary-mirror-539617                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:35769                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-539617                                                                     | binary-mirror-539617 | jenkins | v1.34.0 | 14 Sep 24 16:44 UTC | 14 Sep 24 16:44 UTC |
	| addons  | disable dashboard -p                                                                        | addons-996992        | jenkins | v1.34.0 | 14 Sep 24 16:44 UTC |                     |
	|         | addons-996992                                                                               |                      |         |         |                     |                     |
	| addons  | enable dashboard -p                                                                         | addons-996992        | jenkins | v1.34.0 | 14 Sep 24 16:44 UTC |                     |
	|         | addons-996992                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-996992 --wait=true                                                                | addons-996992        | jenkins | v1.34.0 | 14 Sep 24 16:44 UTC | 14 Sep 24 16:46 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	|         | --addons=helm-tiller                                                                        |                      |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-996992        | jenkins | v1.34.0 | 14 Sep 24 16:54 UTC | 14 Sep 24 16:54 UTC |
	|         | addons-996992                                                                               |                      |         |         |                     |                     |
	| ssh     | addons-996992 ssh cat                                                                       | addons-996992        | jenkins | v1.34.0 | 14 Sep 24 16:55 UTC | 14 Sep 24 16:55 UTC |
	|         | /opt/local-path-provisioner/pvc-065cb3df-7fd3-4993-9a34-5c093c32d00a_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-996992 addons disable                                                                | addons-996992        | jenkins | v1.34.0 | 14 Sep 24 16:55 UTC | 14 Sep 24 16:55 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-996992 addons disable                                                                | addons-996992        | jenkins | v1.34.0 | 14 Sep 24 16:55 UTC | 14 Sep 24 16:55 UTC |
	|         | helm-tiller --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-996992 addons                                                                        | addons-996992        | jenkins | v1.34.0 | 14 Sep 24 16:55 UTC | 14 Sep 24 16:55 UTC |
	|         | disable csi-hostpath-driver                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-996992 addons                                                                        | addons-996992        | jenkins | v1.34.0 | 14 Sep 24 16:55 UTC | 14 Sep 24 16:55 UTC |
	|         | disable volumesnapshots                                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-996992        | jenkins | v1.34.0 | 14 Sep 24 16:55 UTC | 14 Sep 24 16:55 UTC |
	|         | addons-996992                                                                               |                      |         |         |                     |                     |
	| ssh     | addons-996992 ssh curl -s                                                                   | addons-996992        | jenkins | v1.34.0 | 14 Sep 24 16:55 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |         |                     |                     |
	| ip      | addons-996992 ip                                                                            | addons-996992        | jenkins | v1.34.0 | 14 Sep 24 16:55 UTC | 14 Sep 24 16:55 UTC |
	| addons  | addons-996992 addons disable                                                                | addons-996992        | jenkins | v1.34.0 | 14 Sep 24 16:55 UTC | 14 Sep 24 16:56 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
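For readability, the wrapped start entry in the Audit table above corresponds to a single invocation along these lines (reconstructed from the table cells; flag order is as listed in the table and otherwise unverified):

    $ out/minikube-linux-amd64 start -p addons-996992 --wait=true --memory=4000 \
        --alsologtostderr --addons=registry --addons=metrics-server \
        --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth \
        --addons=cloud-spanner --addons=inspektor-gadget \
        --addons=storage-provisioner-rancher --addons=nvidia-device-plugin \
        --addons=yakd --addons=volcano --driver=kvm2 --container-runtime=crio \
        --addons=ingress --addons=ingress-dns --addons=helm-tiller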
	
	
	==> Last Start <==
	Log file created at: 2024/09/14 16:44:27
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0914 16:44:27.658554   16725 out.go:345] Setting OutFile to fd 1 ...
	I0914 16:44:27.659049   16725 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 16:44:27.659100   16725 out.go:358] Setting ErrFile to fd 2...
	I0914 16:44:27.659118   16725 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 16:44:27.659608   16725 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19643-8806/.minikube/bin
	I0914 16:44:27.660666   16725 out.go:352] Setting JSON to false
	I0914 16:44:27.661546   16725 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":1612,"bootTime":1726330656,"procs":173,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0914 16:44:27.661646   16725 start.go:139] virtualization: kvm guest
	I0914 16:44:27.663699   16725 out.go:177] * [addons-996992] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0914 16:44:27.665028   16725 out.go:177]   - MINIKUBE_LOCATION=19643
	I0914 16:44:27.665051   16725 notify.go:220] Checking for updates...
	I0914 16:44:27.667815   16725 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0914 16:44:27.669277   16725 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19643-8806/kubeconfig
	I0914 16:44:27.670590   16725 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19643-8806/.minikube
	I0914 16:44:27.671878   16725 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0914 16:44:27.673058   16725 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0914 16:44:27.674650   16725 driver.go:394] Setting default libvirt URI to qemu:///system
	I0914 16:44:27.706805   16725 out.go:177] * Using the kvm2 driver based on user configuration
	I0914 16:44:27.708321   16725 start.go:297] selected driver: kvm2
	I0914 16:44:27.708336   16725 start.go:901] validating driver "kvm2" against <nil>
	I0914 16:44:27.708348   16725 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0914 16:44:27.709072   16725 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 16:44:27.709158   16725 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19643-8806/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0914 16:44:27.723953   16725 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0914 16:44:27.724008   16725 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0914 16:44:27.724241   16725 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0914 16:44:27.724270   16725 cni.go:84] Creating CNI manager for ""
	I0914 16:44:27.724306   16725 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0914 16:44:27.724316   16725 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0914 16:44:27.724367   16725 start.go:340] cluster config:
	{Name:addons-996992 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726281268-19643@sha256:cce8be0c1ac4e3d852132008ef1cc1dcf5b79f708d025db83f146ae65db32e8e Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-996992 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0914 16:44:27.724463   16725 iso.go:125] acquiring lock: {Name:mk538f7a4abc7956f82511183088cbfac1e66ebb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 16:44:27.726351   16725 out.go:177] * Starting "addons-996992" primary control-plane node in "addons-996992" cluster
	I0914 16:44:27.727435   16725 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0914 16:44:27.727477   16725 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19643-8806/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0914 16:44:27.727486   16725 cache.go:56] Caching tarball of preloaded images
	I0914 16:44:27.727583   16725 preload.go:172] Found /home/jenkins/minikube-integration/19643-8806/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0914 16:44:27.727595   16725 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0914 16:44:27.727895   16725 profile.go:143] Saving config to /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/addons-996992/config.json ...
	I0914 16:44:27.727914   16725 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/addons-996992/config.json: {Name:mk5b5d945e87f410628fe80d3ffbea824c8cc516 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 16:44:27.728052   16725 start.go:360] acquireMachinesLock for addons-996992: {Name:mk26748ac63472c3f8b6f3848c12e76160c2970c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0914 16:44:27.728097   16725 start.go:364] duration metric: took 32.087µs to acquireMachinesLock for "addons-996992"
	I0914 16:44:27.728117   16725 start.go:93] Provisioning new machine with config: &{Name:addons-996992 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19643/minikube-v1.34.0-1726281733-19643-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726281268-19643@sha256:cce8be0c1ac4e3d852132008ef1cc1dcf5b79f708d025db83f146ae65db32e8e Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-996992 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0914 16:44:27.728170   16725 start.go:125] createHost starting for "" (driver="kvm2")
	I0914 16:44:27.730533   16725 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0914 16:44:27.730741   16725 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 16:44:27.730798   16725 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 16:44:27.745035   16725 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45463
	I0914 16:44:27.745492   16725 main.go:141] libmachine: () Calling .GetVersion
	I0914 16:44:27.746094   16725 main.go:141] libmachine: Using API Version  1
	I0914 16:44:27.746115   16725 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 16:44:27.746439   16725 main.go:141] libmachine: () Calling .GetMachineName
	I0914 16:44:27.746641   16725 main.go:141] libmachine: (addons-996992) Calling .GetMachineName
	I0914 16:44:27.746794   16725 main.go:141] libmachine: (addons-996992) Calling .DriverName
	I0914 16:44:27.746933   16725 start.go:159] libmachine.API.Create for "addons-996992" (driver="kvm2")
	I0914 16:44:27.746958   16725 client.go:168] LocalClient.Create starting
	I0914 16:44:27.746993   16725 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca.pem
	I0914 16:44:27.859328   16725 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/cert.pem
	I0914 16:44:27.966294   16725 main.go:141] libmachine: Running pre-create checks...
	I0914 16:44:27.966316   16725 main.go:141] libmachine: (addons-996992) Calling .PreCreateCheck
	I0914 16:44:27.966771   16725 main.go:141] libmachine: (addons-996992) Calling .GetConfigRaw
	I0914 16:44:27.967192   16725 main.go:141] libmachine: Creating machine...
	I0914 16:44:27.967205   16725 main.go:141] libmachine: (addons-996992) Calling .Create
	I0914 16:44:27.967357   16725 main.go:141] libmachine: (addons-996992) Creating KVM machine...
	I0914 16:44:27.968635   16725 main.go:141] libmachine: (addons-996992) DBG | found existing default KVM network
	I0914 16:44:27.969364   16725 main.go:141] libmachine: (addons-996992) DBG | I0914 16:44:27.969186   16746 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002111f0}
	I0914 16:44:27.969389   16725 main.go:141] libmachine: (addons-996992) DBG | created network xml: 
	I0914 16:44:27.969403   16725 main.go:141] libmachine: (addons-996992) DBG | <network>
	I0914 16:44:27.969414   16725 main.go:141] libmachine: (addons-996992) DBG |   <name>mk-addons-996992</name>
	I0914 16:44:27.969476   16725 main.go:141] libmachine: (addons-996992) DBG |   <dns enable='no'/>
	I0914 16:44:27.969509   16725 main.go:141] libmachine: (addons-996992) DBG |   
	I0914 16:44:27.969524   16725 main.go:141] libmachine: (addons-996992) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0914 16:44:27.969537   16725 main.go:141] libmachine: (addons-996992) DBG |     <dhcp>
	I0914 16:44:27.969546   16725 main.go:141] libmachine: (addons-996992) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0914 16:44:27.969553   16725 main.go:141] libmachine: (addons-996992) DBG |     </dhcp>
	I0914 16:44:27.969560   16725 main.go:141] libmachine: (addons-996992) DBG |   </ip>
	I0914 16:44:27.969567   16725 main.go:141] libmachine: (addons-996992) DBG |   
	I0914 16:44:27.969572   16725 main.go:141] libmachine: (addons-996992) DBG | </network>
	I0914 16:44:27.969578   16725 main.go:141] libmachine: (addons-996992) DBG | 
	I0914 16:44:27.975466   16725 main.go:141] libmachine: (addons-996992) DBG | trying to create private KVM network mk-addons-996992 192.168.39.0/24...
	I0914 16:44:28.040012   16725 main.go:141] libmachine: (addons-996992) DBG | private KVM network mk-addons-996992 192.168.39.0/24 created
	I0914 16:44:28.040038   16725 main.go:141] libmachine: (addons-996992) Setting up store path in /home/jenkins/minikube-integration/19643-8806/.minikube/machines/addons-996992 ...
	I0914 16:44:28.040051   16725 main.go:141] libmachine: (addons-996992) DBG | I0914 16:44:28.039977   16746 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19643-8806/.minikube
	I0914 16:44:28.040070   16725 main.go:141] libmachine: (addons-996992) Building disk image from file:///home/jenkins/minikube-integration/19643-8806/.minikube/cache/iso/amd64/minikube-v1.34.0-1726281733-19643-amd64.iso
	I0914 16:44:28.040122   16725 main.go:141] libmachine: (addons-996992) Downloading /home/jenkins/minikube-integration/19643-8806/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19643-8806/.minikube/cache/iso/amd64/minikube-v1.34.0-1726281733-19643-amd64.iso...
	I0914 16:44:28.289089   16725 main.go:141] libmachine: (addons-996992) DBG | I0914 16:44:28.288934   16746 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19643-8806/.minikube/machines/addons-996992/id_rsa...
	I0914 16:44:28.557850   16725 main.go:141] libmachine: (addons-996992) DBG | I0914 16:44:28.557726   16746 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19643-8806/.minikube/machines/addons-996992/addons-996992.rawdisk...
	I0914 16:44:28.557884   16725 main.go:141] libmachine: (addons-996992) DBG | Writing magic tar header
	I0914 16:44:28.557899   16725 main.go:141] libmachine: (addons-996992) DBG | Writing SSH key tar header
	I0914 16:44:28.557913   16725 main.go:141] libmachine: (addons-996992) DBG | I0914 16:44:28.557851   16746 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19643-8806/.minikube/machines/addons-996992 ...
	I0914 16:44:28.557943   16725 main.go:141] libmachine: (addons-996992) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19643-8806/.minikube/machines/addons-996992
	I0914 16:44:28.557987   16725 main.go:141] libmachine: (addons-996992) Setting executable bit set on /home/jenkins/minikube-integration/19643-8806/.minikube/machines/addons-996992 (perms=drwx------)
	I0914 16:44:28.558007   16725 main.go:141] libmachine: (addons-996992) Setting executable bit set on /home/jenkins/minikube-integration/19643-8806/.minikube/machines (perms=drwxr-xr-x)
	I0914 16:44:28.558018   16725 main.go:141] libmachine: (addons-996992) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19643-8806/.minikube/machines
	I0914 16:44:28.558031   16725 main.go:141] libmachine: (addons-996992) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19643-8806/.minikube
	I0914 16:44:28.558047   16725 main.go:141] libmachine: (addons-996992) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19643-8806
	I0914 16:44:28.558057   16725 main.go:141] libmachine: (addons-996992) Setting executable bit set on /home/jenkins/minikube-integration/19643-8806/.minikube (perms=drwxr-xr-x)
	I0914 16:44:28.558068   16725 main.go:141] libmachine: (addons-996992) Setting executable bit set on /home/jenkins/minikube-integration/19643-8806 (perms=drwxrwxr-x)
	I0914 16:44:28.558078   16725 main.go:141] libmachine: (addons-996992) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0914 16:44:28.558086   16725 main.go:141] libmachine: (addons-996992) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0914 16:44:28.558098   16725 main.go:141] libmachine: (addons-996992) DBG | Checking permissions on dir: /home/jenkins
	I0914 16:44:28.558109   16725 main.go:141] libmachine: (addons-996992) DBG | Checking permissions on dir: /home
	I0914 16:44:28.558118   16725 main.go:141] libmachine: (addons-996992) DBG | Skipping /home - not owner
	I0914 16:44:28.558148   16725 main.go:141] libmachine: (addons-996992) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0914 16:44:28.558185   16725 main.go:141] libmachine: (addons-996992) Creating domain...
	I0914 16:44:28.559360   16725 main.go:141] libmachine: (addons-996992) define libvirt domain using xml: 
	I0914 16:44:28.559383   16725 main.go:141] libmachine: (addons-996992) <domain type='kvm'>
	I0914 16:44:28.559393   16725 main.go:141] libmachine: (addons-996992)   <name>addons-996992</name>
	I0914 16:44:28.559399   16725 main.go:141] libmachine: (addons-996992)   <memory unit='MiB'>4000</memory>
	I0914 16:44:28.559405   16725 main.go:141] libmachine: (addons-996992)   <vcpu>2</vcpu>
	I0914 16:44:28.559409   16725 main.go:141] libmachine: (addons-996992)   <features>
	I0914 16:44:28.559414   16725 main.go:141] libmachine: (addons-996992)     <acpi/>
	I0914 16:44:28.559420   16725 main.go:141] libmachine: (addons-996992)     <apic/>
	I0914 16:44:28.559425   16725 main.go:141] libmachine: (addons-996992)     <pae/>
	I0914 16:44:28.559431   16725 main.go:141] libmachine: (addons-996992)     
	I0914 16:44:28.559437   16725 main.go:141] libmachine: (addons-996992)   </features>
	I0914 16:44:28.559443   16725 main.go:141] libmachine: (addons-996992)   <cpu mode='host-passthrough'>
	I0914 16:44:28.559448   16725 main.go:141] libmachine: (addons-996992)   
	I0914 16:44:28.559462   16725 main.go:141] libmachine: (addons-996992)   </cpu>
	I0914 16:44:28.559469   16725 main.go:141] libmachine: (addons-996992)   <os>
	I0914 16:44:28.559475   16725 main.go:141] libmachine: (addons-996992)     <type>hvm</type>
	I0914 16:44:28.559489   16725 main.go:141] libmachine: (addons-996992)     <boot dev='cdrom'/>
	I0914 16:44:28.559500   16725 main.go:141] libmachine: (addons-996992)     <boot dev='hd'/>
	I0914 16:44:28.559505   16725 main.go:141] libmachine: (addons-996992)     <bootmenu enable='no'/>
	I0914 16:44:28.559525   16725 main.go:141] libmachine: (addons-996992)   </os>
	I0914 16:44:28.559531   16725 main.go:141] libmachine: (addons-996992)   <devices>
	I0914 16:44:28.559537   16725 main.go:141] libmachine: (addons-996992)     <disk type='file' device='cdrom'>
	I0914 16:44:28.559545   16725 main.go:141] libmachine: (addons-996992)       <source file='/home/jenkins/minikube-integration/19643-8806/.minikube/machines/addons-996992/boot2docker.iso'/>
	I0914 16:44:28.559550   16725 main.go:141] libmachine: (addons-996992)       <target dev='hdc' bus='scsi'/>
	I0914 16:44:28.559555   16725 main.go:141] libmachine: (addons-996992)       <readonly/>
	I0914 16:44:28.559560   16725 main.go:141] libmachine: (addons-996992)     </disk>
	I0914 16:44:28.559567   16725 main.go:141] libmachine: (addons-996992)     <disk type='file' device='disk'>
	I0914 16:44:28.559574   16725 main.go:141] libmachine: (addons-996992)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0914 16:44:28.559584   16725 main.go:141] libmachine: (addons-996992)       <source file='/home/jenkins/minikube-integration/19643-8806/.minikube/machines/addons-996992/addons-996992.rawdisk'/>
	I0914 16:44:28.559589   16725 main.go:141] libmachine: (addons-996992)       <target dev='hda' bus='virtio'/>
	I0914 16:44:28.559595   16725 main.go:141] libmachine: (addons-996992)     </disk>
	I0914 16:44:28.559604   16725 main.go:141] libmachine: (addons-996992)     <interface type='network'>
	I0914 16:44:28.559614   16725 main.go:141] libmachine: (addons-996992)       <source network='mk-addons-996992'/>
	I0914 16:44:28.559622   16725 main.go:141] libmachine: (addons-996992)       <model type='virtio'/>
	I0914 16:44:28.559630   16725 main.go:141] libmachine: (addons-996992)     </interface>
	I0914 16:44:28.559636   16725 main.go:141] libmachine: (addons-996992)     <interface type='network'>
	I0914 16:44:28.559648   16725 main.go:141] libmachine: (addons-996992)       <source network='default'/>
	I0914 16:44:28.559656   16725 main.go:141] libmachine: (addons-996992)       <model type='virtio'/>
	I0914 16:44:28.559660   16725 main.go:141] libmachine: (addons-996992)     </interface>
	I0914 16:44:28.559667   16725 main.go:141] libmachine: (addons-996992)     <serial type='pty'>
	I0914 16:44:28.559674   16725 main.go:141] libmachine: (addons-996992)       <target port='0'/>
	I0914 16:44:28.559684   16725 main.go:141] libmachine: (addons-996992)     </serial>
	I0914 16:44:28.559695   16725 main.go:141] libmachine: (addons-996992)     <console type='pty'>
	I0914 16:44:28.559713   16725 main.go:141] libmachine: (addons-996992)       <target type='serial' port='0'/>
	I0914 16:44:28.559728   16725 main.go:141] libmachine: (addons-996992)     </console>
	I0914 16:44:28.559768   16725 main.go:141] libmachine: (addons-996992)     <rng model='virtio'>
	I0914 16:44:28.559789   16725 main.go:141] libmachine: (addons-996992)       <backend model='random'>/dev/random</backend>
	I0914 16:44:28.559798   16725 main.go:141] libmachine: (addons-996992)     </rng>
	I0914 16:44:28.559805   16725 main.go:141] libmachine: (addons-996992)     
	I0914 16:44:28.559810   16725 main.go:141] libmachine: (addons-996992)     
	I0914 16:44:28.559815   16725 main.go:141] libmachine: (addons-996992)   </devices>
	I0914 16:44:28.559820   16725 main.go:141] libmachine: (addons-996992) </domain>
	I0914 16:44:28.559826   16725 main.go:141] libmachine: (addons-996992) 
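If the libvirt objects created from the XML above need to be inspected on the CI host, standard virsh commands can be used; the object names are taken from the log, and these commands are a suggestion rather than part of the test output:

    $ virsh net-dumpxml mk-addons-996992   # the private network created earlier in the log
    $ virsh dumpxml addons-996992          # the domain definition as libvirt stored it
    $ virsh domifaddr addons-996992        # the DHCP lease/IP that the retry loop below is polling for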
	I0914 16:44:28.565929   16725 main.go:141] libmachine: (addons-996992) DBG | domain addons-996992 has defined MAC address 52:54:00:0d:74:be in network default
	I0914 16:44:28.566532   16725 main.go:141] libmachine: (addons-996992) Ensuring networks are active...
	I0914 16:44:28.566561   16725 main.go:141] libmachine: (addons-996992) DBG | domain addons-996992 has defined MAC address 52:54:00:dd:8c:90 in network mk-addons-996992
	I0914 16:44:28.567152   16725 main.go:141] libmachine: (addons-996992) Ensuring network default is active
	I0914 16:44:28.567386   16725 main.go:141] libmachine: (addons-996992) Ensuring network mk-addons-996992 is active
	I0914 16:44:28.567808   16725 main.go:141] libmachine: (addons-996992) Getting domain xml...
	I0914 16:44:28.568374   16725 main.go:141] libmachine: (addons-996992) Creating domain...
	I0914 16:44:30.007186   16725 main.go:141] libmachine: (addons-996992) Waiting to get IP...
	I0914 16:44:30.007842   16725 main.go:141] libmachine: (addons-996992) DBG | domain addons-996992 has defined MAC address 52:54:00:dd:8c:90 in network mk-addons-996992
	I0914 16:44:30.008313   16725 main.go:141] libmachine: (addons-996992) DBG | unable to find current IP address of domain addons-996992 in network mk-addons-996992
	I0914 16:44:30.008349   16725 main.go:141] libmachine: (addons-996992) DBG | I0914 16:44:30.008249   16746 retry.go:31] will retry after 193.278123ms: waiting for machine to come up
	I0914 16:44:30.203743   16725 main.go:141] libmachine: (addons-996992) DBG | domain addons-996992 has defined MAC address 52:54:00:dd:8c:90 in network mk-addons-996992
	I0914 16:44:30.204360   16725 main.go:141] libmachine: (addons-996992) DBG | unable to find current IP address of domain addons-996992 in network mk-addons-996992
	I0914 16:44:30.204412   16725 main.go:141] libmachine: (addons-996992) DBG | I0914 16:44:30.204193   16746 retry.go:31] will retry after 245.945466ms: waiting for machine to come up
	I0914 16:44:30.451736   16725 main.go:141] libmachine: (addons-996992) DBG | domain addons-996992 has defined MAC address 52:54:00:dd:8c:90 in network mk-addons-996992
	I0914 16:44:30.452098   16725 main.go:141] libmachine: (addons-996992) DBG | unable to find current IP address of domain addons-996992 in network mk-addons-996992
	I0914 16:44:30.452129   16725 main.go:141] libmachine: (addons-996992) DBG | I0914 16:44:30.452044   16746 retry.go:31] will retry after 422.043703ms: waiting for machine to come up
	I0914 16:44:30.875457   16725 main.go:141] libmachine: (addons-996992) DBG | domain addons-996992 has defined MAC address 52:54:00:dd:8c:90 in network mk-addons-996992
	I0914 16:44:30.875934   16725 main.go:141] libmachine: (addons-996992) DBG | unable to find current IP address of domain addons-996992 in network mk-addons-996992
	I0914 16:44:30.875960   16725 main.go:141] libmachine: (addons-996992) DBG | I0914 16:44:30.875878   16746 retry.go:31] will retry after 473.34114ms: waiting for machine to come up
	I0914 16:44:31.350215   16725 main.go:141] libmachine: (addons-996992) DBG | domain addons-996992 has defined MAC address 52:54:00:dd:8c:90 in network mk-addons-996992
	I0914 16:44:31.350612   16725 main.go:141] libmachine: (addons-996992) DBG | unable to find current IP address of domain addons-996992 in network mk-addons-996992
	I0914 16:44:31.350631   16725 main.go:141] libmachine: (addons-996992) DBG | I0914 16:44:31.350576   16746 retry.go:31] will retry after 628.442164ms: waiting for machine to come up
	I0914 16:44:31.980705   16725 main.go:141] libmachine: (addons-996992) DBG | domain addons-996992 has defined MAC address 52:54:00:dd:8c:90 in network mk-addons-996992
	I0914 16:44:31.981327   16725 main.go:141] libmachine: (addons-996992) DBG | unable to find current IP address of domain addons-996992 in network mk-addons-996992
	I0914 16:44:31.981357   16725 main.go:141] libmachine: (addons-996992) DBG | I0914 16:44:31.981288   16746 retry.go:31] will retry after 929.748342ms: waiting for machine to come up
	I0914 16:44:32.912801   16725 main.go:141] libmachine: (addons-996992) DBG | domain addons-996992 has defined MAC address 52:54:00:dd:8c:90 in network mk-addons-996992
	I0914 16:44:32.913219   16725 main.go:141] libmachine: (addons-996992) DBG | unable to find current IP address of domain addons-996992 in network mk-addons-996992
	I0914 16:44:32.913246   16725 main.go:141] libmachine: (addons-996992) DBG | I0914 16:44:32.913169   16746 retry.go:31] will retry after 956.954722ms: waiting for machine to come up
	I0914 16:44:33.871239   16725 main.go:141] libmachine: (addons-996992) DBG | domain addons-996992 has defined MAC address 52:54:00:dd:8c:90 in network mk-addons-996992
	I0914 16:44:33.871624   16725 main.go:141] libmachine: (addons-996992) DBG | unable to find current IP address of domain addons-996992 in network mk-addons-996992
	I0914 16:44:33.871655   16725 main.go:141] libmachine: (addons-996992) DBG | I0914 16:44:33.871611   16746 retry.go:31] will retry after 1.433739833s: waiting for machine to come up
	I0914 16:44:35.307302   16725 main.go:141] libmachine: (addons-996992) DBG | domain addons-996992 has defined MAC address 52:54:00:dd:8c:90 in network mk-addons-996992
	I0914 16:44:35.307687   16725 main.go:141] libmachine: (addons-996992) DBG | unable to find current IP address of domain addons-996992 in network mk-addons-996992
	I0914 16:44:35.307721   16725 main.go:141] libmachine: (addons-996992) DBG | I0914 16:44:35.307633   16746 retry.go:31] will retry after 1.515973944s: waiting for machine to come up
	I0914 16:44:36.826018   16725 main.go:141] libmachine: (addons-996992) DBG | domain addons-996992 has defined MAC address 52:54:00:dd:8c:90 in network mk-addons-996992
	I0914 16:44:36.826451   16725 main.go:141] libmachine: (addons-996992) DBG | unable to find current IP address of domain addons-996992 in network mk-addons-996992
	I0914 16:44:36.826473   16725 main.go:141] libmachine: (addons-996992) DBG | I0914 16:44:36.826405   16746 retry.go:31] will retry after 1.946747568s: waiting for machine to come up
	I0914 16:44:38.775169   16725 main.go:141] libmachine: (addons-996992) DBG | domain addons-996992 has defined MAC address 52:54:00:dd:8c:90 in network mk-addons-996992
	I0914 16:44:38.775648   16725 main.go:141] libmachine: (addons-996992) DBG | unable to find current IP address of domain addons-996992 in network mk-addons-996992
	I0914 16:44:38.775676   16725 main.go:141] libmachine: (addons-996992) DBG | I0914 16:44:38.775602   16746 retry.go:31] will retry after 2.771653383s: waiting for machine to come up
	I0914 16:44:41.550519   16725 main.go:141] libmachine: (addons-996992) DBG | domain addons-996992 has defined MAC address 52:54:00:dd:8c:90 in network mk-addons-996992
	I0914 16:44:41.550927   16725 main.go:141] libmachine: (addons-996992) DBG | unable to find current IP address of domain addons-996992 in network mk-addons-996992
	I0914 16:44:41.550947   16725 main.go:141] libmachine: (addons-996992) DBG | I0914 16:44:41.550892   16746 retry.go:31] will retry after 2.637789254s: waiting for machine to come up
	I0914 16:44:44.190450   16725 main.go:141] libmachine: (addons-996992) DBG | domain addons-996992 has defined MAC address 52:54:00:dd:8c:90 in network mk-addons-996992
	I0914 16:44:44.190859   16725 main.go:141] libmachine: (addons-996992) DBG | unable to find current IP address of domain addons-996992 in network mk-addons-996992
	I0914 16:44:44.190881   16725 main.go:141] libmachine: (addons-996992) DBG | I0914 16:44:44.190814   16746 retry.go:31] will retry after 3.734364168s: waiting for machine to come up
	I0914 16:44:47.926668   16725 main.go:141] libmachine: (addons-996992) DBG | domain addons-996992 has defined MAC address 52:54:00:dd:8c:90 in network mk-addons-996992
	I0914 16:44:47.927158   16725 main.go:141] libmachine: (addons-996992) Found IP for machine: 192.168.39.189
	I0914 16:44:47.927179   16725 main.go:141] libmachine: (addons-996992) Reserving static IP address...
	I0914 16:44:47.927192   16725 main.go:141] libmachine: (addons-996992) DBG | domain addons-996992 has current primary IP address 192.168.39.189 and MAC address 52:54:00:dd:8c:90 in network mk-addons-996992
	I0914 16:44:47.927576   16725 main.go:141] libmachine: (addons-996992) DBG | unable to find host DHCP lease matching {name: "addons-996992", mac: "52:54:00:dd:8c:90", ip: "192.168.39.189"} in network mk-addons-996992
	I0914 16:44:48.085073   16725 main.go:141] libmachine: (addons-996992) DBG | Getting to WaitForSSH function...
	I0914 16:44:48.085105   16725 main.go:141] libmachine: (addons-996992) Reserved static IP address: 192.168.39.189
	I0914 16:44:48.085119   16725 main.go:141] libmachine: (addons-996992) Waiting for SSH to be available...
	I0914 16:44:48.087828   16725 main.go:141] libmachine: (addons-996992) DBG | domain addons-996992 has defined MAC address 52:54:00:dd:8c:90 in network mk-addons-996992
	I0914 16:44:48.088171   16725 main.go:141] libmachine: (addons-996992) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:8c:90", ip: ""} in network mk-addons-996992: {Iface:virbr1 ExpiryTime:2024-09-14 17:44:42 +0000 UTC Type:0 Mac:52:54:00:dd:8c:90 Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:minikube Clientid:01:52:54:00:dd:8c:90}
	I0914 16:44:48.088203   16725 main.go:141] libmachine: (addons-996992) DBG | domain addons-996992 has defined IP address 192.168.39.189 and MAC address 52:54:00:dd:8c:90 in network mk-addons-996992
	I0914 16:44:48.088326   16725 main.go:141] libmachine: (addons-996992) DBG | Using SSH client type: external
	I0914 16:44:48.088342   16725 main.go:141] libmachine: (addons-996992) DBG | Using SSH private key: /home/jenkins/minikube-integration/19643-8806/.minikube/machines/addons-996992/id_rsa (-rw-------)
	I0914 16:44:48.088390   16725 main.go:141] libmachine: (addons-996992) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.189 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19643-8806/.minikube/machines/addons-996992/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0914 16:44:48.088422   16725 main.go:141] libmachine: (addons-996992) DBG | About to run SSH command:
	I0914 16:44:48.088437   16725 main.go:141] libmachine: (addons-996992) DBG | exit 0
	I0914 16:44:48.222175   16725 main.go:141] libmachine: (addons-996992) DBG | SSH cmd err, output: <nil>: 
	I0914 16:44:48.222479   16725 main.go:141] libmachine: (addons-996992) KVM machine creation complete!
	I0914 16:44:48.222803   16725 main.go:141] libmachine: (addons-996992) Calling .GetConfigRaw
	I0914 16:44:48.250845   16725 main.go:141] libmachine: (addons-996992) Calling .DriverName
	I0914 16:44:48.251150   16725 main.go:141] libmachine: (addons-996992) Calling .DriverName
	I0914 16:44:48.251340   16725 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0914 16:44:48.251369   16725 main.go:141] libmachine: (addons-996992) Calling .GetState
	I0914 16:44:48.253045   16725 main.go:141] libmachine: Detecting operating system of created instance...
	I0914 16:44:48.253064   16725 main.go:141] libmachine: Waiting for SSH to be available...
	I0914 16:44:48.253072   16725 main.go:141] libmachine: Getting to WaitForSSH function...
	I0914 16:44:48.253081   16725 main.go:141] libmachine: (addons-996992) Calling .GetSSHHostname
	I0914 16:44:48.255661   16725 main.go:141] libmachine: (addons-996992) DBG | domain addons-996992 has defined MAC address 52:54:00:dd:8c:90 in network mk-addons-996992
	I0914 16:44:48.256049   16725 main.go:141] libmachine: (addons-996992) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:8c:90", ip: ""} in network mk-addons-996992: {Iface:virbr1 ExpiryTime:2024-09-14 17:44:42 +0000 UTC Type:0 Mac:52:54:00:dd:8c:90 Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:addons-996992 Clientid:01:52:54:00:dd:8c:90}
	I0914 16:44:48.256068   16725 main.go:141] libmachine: (addons-996992) DBG | domain addons-996992 has defined IP address 192.168.39.189 and MAC address 52:54:00:dd:8c:90 in network mk-addons-996992
	I0914 16:44:48.256226   16725 main.go:141] libmachine: (addons-996992) Calling .GetSSHPort
	I0914 16:44:48.256426   16725 main.go:141] libmachine: (addons-996992) Calling .GetSSHKeyPath
	I0914 16:44:48.256654   16725 main.go:141] libmachine: (addons-996992) Calling .GetSSHKeyPath
	I0914 16:44:48.256795   16725 main.go:141] libmachine: (addons-996992) Calling .GetSSHUsername
	I0914 16:44:48.256982   16725 main.go:141] libmachine: Using SSH client type: native
	I0914 16:44:48.257155   16725 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.189 22 <nil> <nil>}
	I0914 16:44:48.257164   16725 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0914 16:44:48.365411   16725 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0914 16:44:48.365433   16725 main.go:141] libmachine: Detecting the provisioner...
	I0914 16:44:48.365440   16725 main.go:141] libmachine: (addons-996992) Calling .GetSSHHostname
	I0914 16:44:48.368483   16725 main.go:141] libmachine: (addons-996992) DBG | domain addons-996992 has defined MAC address 52:54:00:dd:8c:90 in network mk-addons-996992
	I0914 16:44:48.368906   16725 main.go:141] libmachine: (addons-996992) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:8c:90", ip: ""} in network mk-addons-996992: {Iface:virbr1 ExpiryTime:2024-09-14 17:44:42 +0000 UTC Type:0 Mac:52:54:00:dd:8c:90 Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:addons-996992 Clientid:01:52:54:00:dd:8c:90}
	I0914 16:44:48.368927   16725 main.go:141] libmachine: (addons-996992) DBG | domain addons-996992 has defined IP address 192.168.39.189 and MAC address 52:54:00:dd:8c:90 in network mk-addons-996992
	I0914 16:44:48.369091   16725 main.go:141] libmachine: (addons-996992) Calling .GetSSHPort
	I0914 16:44:48.369277   16725 main.go:141] libmachine: (addons-996992) Calling .GetSSHKeyPath
	I0914 16:44:48.369448   16725 main.go:141] libmachine: (addons-996992) Calling .GetSSHKeyPath
	I0914 16:44:48.369560   16725 main.go:141] libmachine: (addons-996992) Calling .GetSSHUsername
	I0914 16:44:48.369706   16725 main.go:141] libmachine: Using SSH client type: native
	I0914 16:44:48.369917   16725 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.189 22 <nil> <nil>}
	I0914 16:44:48.369928   16725 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0914 16:44:48.478560   16725 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0914 16:44:48.478635   16725 main.go:141] libmachine: found compatible host: buildroot
	I0914 16:44:48.478650   16725 main.go:141] libmachine: Provisioning with buildroot...
	I0914 16:44:48.478673   16725 main.go:141] libmachine: (addons-996992) Calling .GetMachineName
	I0914 16:44:48.478938   16725 buildroot.go:166] provisioning hostname "addons-996992"
	I0914 16:44:48.478968   16725 main.go:141] libmachine: (addons-996992) Calling .GetMachineName
	I0914 16:44:48.479154   16725 main.go:141] libmachine: (addons-996992) Calling .GetSSHHostname
	I0914 16:44:48.481754   16725 main.go:141] libmachine: (addons-996992) DBG | domain addons-996992 has defined MAC address 52:54:00:dd:8c:90 in network mk-addons-996992
	I0914 16:44:48.482027   16725 main.go:141] libmachine: (addons-996992) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:8c:90", ip: ""} in network mk-addons-996992: {Iface:virbr1 ExpiryTime:2024-09-14 17:44:42 +0000 UTC Type:0 Mac:52:54:00:dd:8c:90 Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:addons-996992 Clientid:01:52:54:00:dd:8c:90}
	I0914 16:44:48.482055   16725 main.go:141] libmachine: (addons-996992) DBG | domain addons-996992 has defined IP address 192.168.39.189 and MAC address 52:54:00:dd:8c:90 in network mk-addons-996992
	I0914 16:44:48.482238   16725 main.go:141] libmachine: (addons-996992) Calling .GetSSHPort
	I0914 16:44:48.482421   16725 main.go:141] libmachine: (addons-996992) Calling .GetSSHKeyPath
	I0914 16:44:48.482594   16725 main.go:141] libmachine: (addons-996992) Calling .GetSSHKeyPath
	I0914 16:44:48.482715   16725 main.go:141] libmachine: (addons-996992) Calling .GetSSHUsername
	I0914 16:44:48.482893   16725 main.go:141] libmachine: Using SSH client type: native
	I0914 16:44:48.483075   16725 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.189 22 <nil> <nil>}
	I0914 16:44:48.483090   16725 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-996992 && echo "addons-996992" | sudo tee /etc/hostname
	I0914 16:44:48.603822   16725 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-996992
	
	I0914 16:44:48.603851   16725 main.go:141] libmachine: (addons-996992) Calling .GetSSHHostname
	I0914 16:44:48.606556   16725 main.go:141] libmachine: (addons-996992) DBG | domain addons-996992 has defined MAC address 52:54:00:dd:8c:90 in network mk-addons-996992
	I0914 16:44:48.606910   16725 main.go:141] libmachine: (addons-996992) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:8c:90", ip: ""} in network mk-addons-996992: {Iface:virbr1 ExpiryTime:2024-09-14 17:44:42 +0000 UTC Type:0 Mac:52:54:00:dd:8c:90 Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:addons-996992 Clientid:01:52:54:00:dd:8c:90}
	I0914 16:44:48.606934   16725 main.go:141] libmachine: (addons-996992) DBG | domain addons-996992 has defined IP address 192.168.39.189 and MAC address 52:54:00:dd:8c:90 in network mk-addons-996992
	I0914 16:44:48.607103   16725 main.go:141] libmachine: (addons-996992) Calling .GetSSHPort
	I0914 16:44:48.607290   16725 main.go:141] libmachine: (addons-996992) Calling .GetSSHKeyPath
	I0914 16:44:48.607488   16725 main.go:141] libmachine: (addons-996992) Calling .GetSSHKeyPath
	I0914 16:44:48.607658   16725 main.go:141] libmachine: (addons-996992) Calling .GetSSHUsername
	I0914 16:44:48.607848   16725 main.go:141] libmachine: Using SSH client type: native
	I0914 16:44:48.608066   16725 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.189 22 <nil> <nil>}
	I0914 16:44:48.608093   16725 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-996992' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-996992/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-996992' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0914 16:44:48.722348   16725 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0914 16:44:48.722378   16725 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19643-8806/.minikube CaCertPath:/home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19643-8806/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19643-8806/.minikube}
	I0914 16:44:48.722396   16725 buildroot.go:174] setting up certificates
	I0914 16:44:48.722422   16725 provision.go:84] configureAuth start
	I0914 16:44:48.722433   16725 main.go:141] libmachine: (addons-996992) Calling .GetMachineName
	I0914 16:44:48.722689   16725 main.go:141] libmachine: (addons-996992) Calling .GetIP
	I0914 16:44:48.725429   16725 main.go:141] libmachine: (addons-996992) DBG | domain addons-996992 has defined MAC address 52:54:00:dd:8c:90 in network mk-addons-996992
	I0914 16:44:48.725795   16725 main.go:141] libmachine: (addons-996992) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:8c:90", ip: ""} in network mk-addons-996992: {Iface:virbr1 ExpiryTime:2024-09-14 17:44:42 +0000 UTC Type:0 Mac:52:54:00:dd:8c:90 Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:addons-996992 Clientid:01:52:54:00:dd:8c:90}
	I0914 16:44:48.725827   16725 main.go:141] libmachine: (addons-996992) DBG | domain addons-996992 has defined IP address 192.168.39.189 and MAC address 52:54:00:dd:8c:90 in network mk-addons-996992
	I0914 16:44:48.725999   16725 main.go:141] libmachine: (addons-996992) Calling .GetSSHHostname
	I0914 16:44:48.728098   16725 main.go:141] libmachine: (addons-996992) DBG | domain addons-996992 has defined MAC address 52:54:00:dd:8c:90 in network mk-addons-996992
	I0914 16:44:48.728440   16725 main.go:141] libmachine: (addons-996992) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:8c:90", ip: ""} in network mk-addons-996992: {Iface:virbr1 ExpiryTime:2024-09-14 17:44:42 +0000 UTC Type:0 Mac:52:54:00:dd:8c:90 Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:addons-996992 Clientid:01:52:54:00:dd:8c:90}
	I0914 16:44:48.728459   16725 main.go:141] libmachine: (addons-996992) DBG | domain addons-996992 has defined IP address 192.168.39.189 and MAC address 52:54:00:dd:8c:90 in network mk-addons-996992
	I0914 16:44:48.728608   16725 provision.go:143] copyHostCerts
	I0914 16:44:48.728683   16725 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19643-8806/.minikube/ca.pem (1082 bytes)
	I0914 16:44:48.728797   16725 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19643-8806/.minikube/cert.pem (1123 bytes)
	I0914 16:44:48.728852   16725 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19643-8806/.minikube/key.pem (1675 bytes)
	I0914 16:44:48.728919   16725 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19643-8806/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca-key.pem org=jenkins.addons-996992 san=[127.0.0.1 192.168.39.189 addons-996992 localhost minikube]
	I0914 16:44:48.792378   16725 provision.go:177] copyRemoteCerts
	I0914 16:44:48.792464   16725 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0914 16:44:48.792493   16725 main.go:141] libmachine: (addons-996992) Calling .GetSSHHostname
	I0914 16:44:48.795239   16725 main.go:141] libmachine: (addons-996992) DBG | domain addons-996992 has defined MAC address 52:54:00:dd:8c:90 in network mk-addons-996992
	I0914 16:44:48.795658   16725 main.go:141] libmachine: (addons-996992) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:8c:90", ip: ""} in network mk-addons-996992: {Iface:virbr1 ExpiryTime:2024-09-14 17:44:42 +0000 UTC Type:0 Mac:52:54:00:dd:8c:90 Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:addons-996992 Clientid:01:52:54:00:dd:8c:90}
	I0914 16:44:48.795697   16725 main.go:141] libmachine: (addons-996992) DBG | domain addons-996992 has defined IP address 192.168.39.189 and MAC address 52:54:00:dd:8c:90 in network mk-addons-996992
	I0914 16:44:48.795972   16725 main.go:141] libmachine: (addons-996992) Calling .GetSSHPort
	I0914 16:44:48.796149   16725 main.go:141] libmachine: (addons-996992) Calling .GetSSHKeyPath
	I0914 16:44:48.796365   16725 main.go:141] libmachine: (addons-996992) Calling .GetSSHUsername
	I0914 16:44:48.796523   16725 sshutil.go:53] new ssh client: &{IP:192.168.39.189 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/addons-996992/id_rsa Username:docker}
	I0914 16:44:48.880497   16725 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0914 16:44:48.905386   16725 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0914 16:44:48.927284   16725 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0914 16:44:48.949470   16725 provision.go:87] duration metric: took 227.034076ms to configureAuth
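The server certificate generated a few lines above (provision.go:117) is signed for the SANs the log lists: 127.0.0.1, 192.168.39.189, addons-996992, localhost and minikube, and it is copied to /etc/docker/server.pem on the node. A minimal sketch for double-checking those SANs on the guest, assuming openssl is present in the buildroot image:

    # Print the Subject Alternative Names of the provisioned server certificate
    sudo openssl x509 -in /etc/docker/server.pem -noout -text | grep -A1 'Subject Alternative Name'
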
	I0914 16:44:48.949496   16725 buildroot.go:189] setting minikube options for container-runtime
	I0914 16:44:48.949667   16725 config.go:182] Loaded profile config "addons-996992": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0914 16:44:48.949749   16725 main.go:141] libmachine: (addons-996992) Calling .GetSSHHostname
	I0914 16:44:48.952388   16725 main.go:141] libmachine: (addons-996992) DBG | domain addons-996992 has defined MAC address 52:54:00:dd:8c:90 in network mk-addons-996992
	I0914 16:44:48.952770   16725 main.go:141] libmachine: (addons-996992) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:8c:90", ip: ""} in network mk-addons-996992: {Iface:virbr1 ExpiryTime:2024-09-14 17:44:42 +0000 UTC Type:0 Mac:52:54:00:dd:8c:90 Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:addons-996992 Clientid:01:52:54:00:dd:8c:90}
	I0914 16:44:48.952792   16725 main.go:141] libmachine: (addons-996992) DBG | domain addons-996992 has defined IP address 192.168.39.189 and MAC address 52:54:00:dd:8c:90 in network mk-addons-996992
	I0914 16:44:48.953000   16725 main.go:141] libmachine: (addons-996992) Calling .GetSSHPort
	I0914 16:44:48.953189   16725 main.go:141] libmachine: (addons-996992) Calling .GetSSHKeyPath
	I0914 16:44:48.953319   16725 main.go:141] libmachine: (addons-996992) Calling .GetSSHKeyPath
	I0914 16:44:48.953445   16725 main.go:141] libmachine: (addons-996992) Calling .GetSSHUsername
	I0914 16:44:48.953626   16725 main.go:141] libmachine: Using SSH client type: native
	I0914 16:44:48.953785   16725 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.189 22 <nil> <nil>}
	I0914 16:44:48.953798   16725 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0914 16:44:49.180693   16725 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0914 16:44:49.180719   16725 main.go:141] libmachine: Checking connection to Docker...
	I0914 16:44:49.180727   16725 main.go:141] libmachine: (addons-996992) Calling .GetURL
	I0914 16:44:49.182000   16725 main.go:141] libmachine: (addons-996992) DBG | Using libvirt version 6000000
	I0914 16:44:49.184271   16725 main.go:141] libmachine: (addons-996992) DBG | domain addons-996992 has defined MAC address 52:54:00:dd:8c:90 in network mk-addons-996992
	I0914 16:44:49.184718   16725 main.go:141] libmachine: (addons-996992) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:8c:90", ip: ""} in network mk-addons-996992: {Iface:virbr1 ExpiryTime:2024-09-14 17:44:42 +0000 UTC Type:0 Mac:52:54:00:dd:8c:90 Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:addons-996992 Clientid:01:52:54:00:dd:8c:90}
	I0914 16:44:49.184747   16725 main.go:141] libmachine: (addons-996992) DBG | domain addons-996992 has defined IP address 192.168.39.189 and MAC address 52:54:00:dd:8c:90 in network mk-addons-996992
	I0914 16:44:49.184859   16725 main.go:141] libmachine: Docker is up and running!
	I0914 16:44:49.184872   16725 main.go:141] libmachine: Reticulating splines...
	I0914 16:44:49.184879   16725 client.go:171] duration metric: took 21.437913259s to LocalClient.Create
	I0914 16:44:49.184951   16725 start.go:167] duration metric: took 21.438013433s to libmachine.API.Create "addons-996992"
	I0914 16:44:49.184967   16725 start.go:293] postStartSetup for "addons-996992" (driver="kvm2")
	I0914 16:44:49.184983   16725 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0914 16:44:49.185012   16725 main.go:141] libmachine: (addons-996992) Calling .DriverName
	I0914 16:44:49.185343   16725 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0914 16:44:49.185366   16725 main.go:141] libmachine: (addons-996992) Calling .GetSSHHostname
	I0914 16:44:49.187583   16725 main.go:141] libmachine: (addons-996992) DBG | domain addons-996992 has defined MAC address 52:54:00:dd:8c:90 in network mk-addons-996992
	I0914 16:44:49.187883   16725 main.go:141] libmachine: (addons-996992) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:8c:90", ip: ""} in network mk-addons-996992: {Iface:virbr1 ExpiryTime:2024-09-14 17:44:42 +0000 UTC Type:0 Mac:52:54:00:dd:8c:90 Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:addons-996992 Clientid:01:52:54:00:dd:8c:90}
	I0914 16:44:49.187924   16725 main.go:141] libmachine: (addons-996992) DBG | domain addons-996992 has defined IP address 192.168.39.189 and MAC address 52:54:00:dd:8c:90 in network mk-addons-996992
	I0914 16:44:49.188038   16725 main.go:141] libmachine: (addons-996992) Calling .GetSSHPort
	I0914 16:44:49.188258   16725 main.go:141] libmachine: (addons-996992) Calling .GetSSHKeyPath
	I0914 16:44:49.188488   16725 main.go:141] libmachine: (addons-996992) Calling .GetSSHUsername
	I0914 16:44:49.188629   16725 sshutil.go:53] new ssh client: &{IP:192.168.39.189 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/addons-996992/id_rsa Username:docker}
	I0914 16:44:49.274153   16725 ssh_runner.go:195] Run: cat /etc/os-release
	I0914 16:44:49.278523   16725 info.go:137] Remote host: Buildroot 2023.02.9
	I0914 16:44:49.278558   16725 filesync.go:126] Scanning /home/jenkins/minikube-integration/19643-8806/.minikube/addons for local assets ...
	I0914 16:44:49.278639   16725 filesync.go:126] Scanning /home/jenkins/minikube-integration/19643-8806/.minikube/files for local assets ...
	I0914 16:44:49.278670   16725 start.go:296] duration metric: took 93.694384ms for postStartSetup
	I0914 16:44:49.278701   16725 main.go:141] libmachine: (addons-996992) Calling .GetConfigRaw
	I0914 16:44:49.279309   16725 main.go:141] libmachine: (addons-996992) Calling .GetIP
	I0914 16:44:49.281961   16725 main.go:141] libmachine: (addons-996992) DBG | domain addons-996992 has defined MAC address 52:54:00:dd:8c:90 in network mk-addons-996992
	I0914 16:44:49.282293   16725 main.go:141] libmachine: (addons-996992) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:8c:90", ip: ""} in network mk-addons-996992: {Iface:virbr1 ExpiryTime:2024-09-14 17:44:42 +0000 UTC Type:0 Mac:52:54:00:dd:8c:90 Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:addons-996992 Clientid:01:52:54:00:dd:8c:90}
	I0914 16:44:49.282334   16725 main.go:141] libmachine: (addons-996992) DBG | domain addons-996992 has defined IP address 192.168.39.189 and MAC address 52:54:00:dd:8c:90 in network mk-addons-996992
	I0914 16:44:49.282507   16725 profile.go:143] Saving config to /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/addons-996992/config.json ...
	I0914 16:44:49.282702   16725 start.go:128] duration metric: took 21.554522556s to createHost
	I0914 16:44:49.282723   16725 main.go:141] libmachine: (addons-996992) Calling .GetSSHHostname
	I0914 16:44:49.284816   16725 main.go:141] libmachine: (addons-996992) DBG | domain addons-996992 has defined MAC address 52:54:00:dd:8c:90 in network mk-addons-996992
	I0914 16:44:49.285125   16725 main.go:141] libmachine: (addons-996992) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:8c:90", ip: ""} in network mk-addons-996992: {Iface:virbr1 ExpiryTime:2024-09-14 17:44:42 +0000 UTC Type:0 Mac:52:54:00:dd:8c:90 Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:addons-996992 Clientid:01:52:54:00:dd:8c:90}
	I0914 16:44:49.285161   16725 main.go:141] libmachine: (addons-996992) DBG | domain addons-996992 has defined IP address 192.168.39.189 and MAC address 52:54:00:dd:8c:90 in network mk-addons-996992
	I0914 16:44:49.285299   16725 main.go:141] libmachine: (addons-996992) Calling .GetSSHPort
	I0914 16:44:49.285489   16725 main.go:141] libmachine: (addons-996992) Calling .GetSSHKeyPath
	I0914 16:44:49.285616   16725 main.go:141] libmachine: (addons-996992) Calling .GetSSHKeyPath
	I0914 16:44:49.285768   16725 main.go:141] libmachine: (addons-996992) Calling .GetSSHUsername
	I0914 16:44:49.285889   16725 main.go:141] libmachine: Using SSH client type: native
	I0914 16:44:49.286051   16725 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.189 22 <nil> <nil>}
	I0914 16:44:49.286060   16725 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0914 16:44:49.394658   16725 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726332289.368573436
	
	I0914 16:44:49.394680   16725 fix.go:216] guest clock: 1726332289.368573436
	I0914 16:44:49.394687   16725 fix.go:229] Guest: 2024-09-14 16:44:49.368573436 +0000 UTC Remote: 2024-09-14 16:44:49.28271319 +0000 UTC m=+21.657617847 (delta=85.860246ms)
	I0914 16:44:49.394705   16725 fix.go:200] guest clock delta is within tolerance: 85.860246ms
	I0914 16:44:49.394710   16725 start.go:83] releasing machines lock for "addons-996992", held for 21.66660282s
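The clock check above is plain subtraction: fix.go compares the guest's `date +%s.%N` output with the host wall clock, and 1726332289.368573436 - 1726332289.282713190 ≈ 0.085860 s, which is the 85.860246ms delta reported and is well inside tolerance. A one-liner reproducing the arithmetic (illustrative only, not part of the run):

    # Recompute the guest-clock delta from the two timestamps in the log (expect ~0.085860 s)
    awk 'BEGIN { printf "%.6f\n", 1726332289.368573436 - 1726332289.282713190 }'
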
	I0914 16:44:49.394730   16725 main.go:141] libmachine: (addons-996992) Calling .DriverName
	I0914 16:44:49.394985   16725 main.go:141] libmachine: (addons-996992) Calling .GetIP
	I0914 16:44:49.397445   16725 main.go:141] libmachine: (addons-996992) DBG | domain addons-996992 has defined MAC address 52:54:00:dd:8c:90 in network mk-addons-996992
	I0914 16:44:49.397817   16725 main.go:141] libmachine: (addons-996992) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:8c:90", ip: ""} in network mk-addons-996992: {Iface:virbr1 ExpiryTime:2024-09-14 17:44:42 +0000 UTC Type:0 Mac:52:54:00:dd:8c:90 Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:addons-996992 Clientid:01:52:54:00:dd:8c:90}
	I0914 16:44:49.397843   16725 main.go:141] libmachine: (addons-996992) DBG | domain addons-996992 has defined IP address 192.168.39.189 and MAC address 52:54:00:dd:8c:90 in network mk-addons-996992
	I0914 16:44:49.398094   16725 main.go:141] libmachine: (addons-996992) Calling .DriverName
	I0914 16:44:49.398597   16725 main.go:141] libmachine: (addons-996992) Calling .DriverName
	I0914 16:44:49.398755   16725 main.go:141] libmachine: (addons-996992) Calling .DriverName
	I0914 16:44:49.398864   16725 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0914 16:44:49.398917   16725 main.go:141] libmachine: (addons-996992) Calling .GetSSHHostname
	I0914 16:44:49.398947   16725 ssh_runner.go:195] Run: cat /version.json
	I0914 16:44:49.398966   16725 main.go:141] libmachine: (addons-996992) Calling .GetSSHHostname
	I0914 16:44:49.401354   16725 main.go:141] libmachine: (addons-996992) DBG | domain addons-996992 has defined MAC address 52:54:00:dd:8c:90 in network mk-addons-996992
	I0914 16:44:49.401636   16725 main.go:141] libmachine: (addons-996992) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:8c:90", ip: ""} in network mk-addons-996992: {Iface:virbr1 ExpiryTime:2024-09-14 17:44:42 +0000 UTC Type:0 Mac:52:54:00:dd:8c:90 Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:addons-996992 Clientid:01:52:54:00:dd:8c:90}
	I0914 16:44:49.401658   16725 main.go:141] libmachine: (addons-996992) DBG | domain addons-996992 has defined IP address 192.168.39.189 and MAC address 52:54:00:dd:8c:90 in network mk-addons-996992
	I0914 16:44:49.401728   16725 main.go:141] libmachine: (addons-996992) DBG | domain addons-996992 has defined MAC address 52:54:00:dd:8c:90 in network mk-addons-996992
	I0914 16:44:49.401838   16725 main.go:141] libmachine: (addons-996992) Calling .GetSSHPort
	I0914 16:44:49.402091   16725 main.go:141] libmachine: (addons-996992) Calling .GetSSHKeyPath
	I0914 16:44:49.402285   16725 main.go:141] libmachine: (addons-996992) Calling .GetSSHUsername
	I0914 16:44:49.402338   16725 main.go:141] libmachine: (addons-996992) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:8c:90", ip: ""} in network mk-addons-996992: {Iface:virbr1 ExpiryTime:2024-09-14 17:44:42 +0000 UTC Type:0 Mac:52:54:00:dd:8c:90 Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:addons-996992 Clientid:01:52:54:00:dd:8c:90}
	I0914 16:44:49.402362   16725 main.go:141] libmachine: (addons-996992) DBG | domain addons-996992 has defined IP address 192.168.39.189 and MAC address 52:54:00:dd:8c:90 in network mk-addons-996992
	I0914 16:44:49.402400   16725 sshutil.go:53] new ssh client: &{IP:192.168.39.189 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/addons-996992/id_rsa Username:docker}
	I0914 16:44:49.402603   16725 main.go:141] libmachine: (addons-996992) Calling .GetSSHPort
	I0914 16:44:49.402786   16725 main.go:141] libmachine: (addons-996992) Calling .GetSSHKeyPath
	I0914 16:44:49.402964   16725 main.go:141] libmachine: (addons-996992) Calling .GetSSHUsername
	I0914 16:44:49.403097   16725 sshutil.go:53] new ssh client: &{IP:192.168.39.189 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/addons-996992/id_rsa Username:docker}
	I0914 16:44:49.519392   16725 ssh_runner.go:195] Run: systemctl --version
	I0914 16:44:49.525764   16725 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0914 16:44:49.694011   16725 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0914 16:44:49.699486   16725 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0914 16:44:49.699547   16725 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0914 16:44:49.714748   16725 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0914 16:44:49.714768   16725 start.go:495] detecting cgroup driver to use...
	I0914 16:44:49.714822   16725 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0914 16:44:49.729936   16725 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0914 16:44:49.743531   16725 docker.go:217] disabling cri-docker service (if available) ...
	I0914 16:44:49.743604   16725 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0914 16:44:49.756964   16725 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0914 16:44:49.770590   16725 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0914 16:44:49.893965   16725 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0914 16:44:50.044352   16725 docker.go:233] disabling docker service ...
	I0914 16:44:50.044415   16725 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0914 16:44:50.059044   16725 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0914 16:44:50.073286   16725 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0914 16:44:50.194594   16725 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0914 16:44:50.308467   16725 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0914 16:44:50.322485   16725 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0914 16:44:50.339320   16725 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0914 16:44:50.339388   16725 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 16:44:50.348795   16725 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0914 16:44:50.348884   16725 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 16:44:50.358384   16725 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 16:44:50.367798   16725 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 16:44:50.377342   16725 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0914 16:44:50.387564   16725 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 16:44:50.397380   16725 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 16:44:50.414038   16725 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 16:44:50.424719   16725 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0914 16:44:50.433951   16725 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0914 16:44:50.434029   16725 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0914 16:44:50.446639   16725 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
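The sysctl probe above fails with status 255 simply because br_netfilter is not loaded yet, so /proc/sys/net/bridge does not exist; after the modprobe and the ip_forward write the kernel state can be confirmed directly. A small sketch of that check (assumed follow-up commands, not taken from this run):

    # Confirm bridge netfilter and IPv4 forwarding after the modprobe / echo above
    lsmod | grep br_netfilter
    sudo sysctl net.bridge.bridge-nf-call-iptables   # should now exist (typically = 1)
    cat /proc/sys/net/ipv4/ip_forward                # expect 1
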
	I0914 16:44:50.456388   16725 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 16:44:50.574976   16725 ssh_runner.go:195] Run: sudo systemctl restart crio
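Taken together, the sed edits above pin the pause image, switch CRI-O's cgroup manager to cgroupfs, put conmon in the pod cgroup, and open unprivileged ports via default_sysctls, all in /etc/crio/crio.conf.d/02-crio.conf. A hedged sketch for verifying the drop-in and the restarted runtime (the crictl endpoint flag matches the crictl.yaml written earlier):

    # Show the fields the provisioner just rewrote in the CRI-O drop-in
    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf
    # Confirm the restarted runtime answers on its socket
    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version
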
	I0914 16:44:50.661035   16725 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0914 16:44:50.661113   16725 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0914 16:44:50.665670   16725 start.go:563] Will wait 60s for crictl version
	I0914 16:44:50.665731   16725 ssh_runner.go:195] Run: which crictl
	I0914 16:44:50.669237   16725 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0914 16:44:50.707163   16725 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0914 16:44:50.707267   16725 ssh_runner.go:195] Run: crio --version
	I0914 16:44:50.732866   16725 ssh_runner.go:195] Run: crio --version
	I0914 16:44:50.760540   16725 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0914 16:44:50.761520   16725 main.go:141] libmachine: (addons-996992) Calling .GetIP
	I0914 16:44:50.764201   16725 main.go:141] libmachine: (addons-996992) DBG | domain addons-996992 has defined MAC address 52:54:00:dd:8c:90 in network mk-addons-996992
	I0914 16:44:50.764600   16725 main.go:141] libmachine: (addons-996992) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:8c:90", ip: ""} in network mk-addons-996992: {Iface:virbr1 ExpiryTime:2024-09-14 17:44:42 +0000 UTC Type:0 Mac:52:54:00:dd:8c:90 Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:addons-996992 Clientid:01:52:54:00:dd:8c:90}
	I0914 16:44:50.764627   16725 main.go:141] libmachine: (addons-996992) DBG | domain addons-996992 has defined IP address 192.168.39.189 and MAC address 52:54:00:dd:8c:90 in network mk-addons-996992
	I0914 16:44:50.764836   16725 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0914 16:44:50.768563   16725 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0914 16:44:50.780282   16725 kubeadm.go:883] updating cluster {Name:addons-996992 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19643/minikube-v1.34.0-1726281733-19643-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726281268-19643@sha256:cce8be0c1ac4e3d852132008ef1cc1dcf5b79f708d025db83f146ae65db32e8e Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.
1 ClusterName:addons-996992 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.189 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountT
ype:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0914 16:44:50.780403   16725 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0914 16:44:50.780449   16725 ssh_runner.go:195] Run: sudo crictl images --output json
	I0914 16:44:50.811100   16725 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0914 16:44:50.811171   16725 ssh_runner.go:195] Run: which lz4
	I0914 16:44:50.815020   16725 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0914 16:44:50.818901   16725 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0914 16:44:50.818932   16725 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I0914 16:44:51.986671   16725 crio.go:462] duration metric: took 1.171676547s to copy over tarball
	I0914 16:44:51.986742   16725 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0914 16:44:54.089407   16725 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.102639006s)
	I0914 16:44:54.089436   16725 crio.go:469] duration metric: took 2.102736316s to extract the tarball
	I0914 16:44:54.089444   16725 ssh_runner.go:146] rm: /preloaded.tar.lz4
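The preload step copies a roughly 388 MB lz4-compressed image tarball into the guest and unpacks it under /var before deleting it. If such a tarball needed checking outside the test, a sketch along these lines would do (lz4's -t/-dc flags assumed available; path taken from the log):

    # Test the compressed preload archive and peek at its contents without extracting
    lz4 -t /home/jenkins/minikube-integration/19643-8806/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
    lz4 -dc /home/jenkins/minikube-integration/19643-8806/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 | tar -t | head
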
	I0914 16:44:54.127982   16725 ssh_runner.go:195] Run: sudo crictl images --output json
	I0914 16:44:54.168690   16725 crio.go:514] all images are preloaded for cri-o runtime.
	I0914 16:44:54.168718   16725 cache_images.go:84] Images are preloaded, skipping loading
	I0914 16:44:54.168726   16725 kubeadm.go:934] updating node { 192.168.39.189 8443 v1.31.1 crio true true} ...
	I0914 16:44:54.168840   16725 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-996992 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.189
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:addons-996992 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
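The unit shown above is rendered by kubeadm.go and lands on the node as a kubelet drop-in (the 10-kubeadm.conf scp a few lines below). A minimal sketch for inspecting what the node actually runs, using the drop-in path from this log:

    # Show the effective kubelet unit plus the minikube drop-in carrying the flags above
    systemctl cat kubelet
    cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
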
	I0914 16:44:54.168921   16725 ssh_runner.go:195] Run: crio config
	I0914 16:44:54.213151   16725 cni.go:84] Creating CNI manager for ""
	I0914 16:44:54.213177   16725 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0914 16:44:54.213187   16725 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0914 16:44:54.213208   16725 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.189 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-996992 NodeName:addons-996992 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.189"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.189 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/k
ubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0914 16:44:54.213406   16725 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.189
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-996992"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.189
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.189"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0914 16:44:54.213473   16725 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0914 16:44:54.223204   16725 binaries.go:44] Found k8s binaries, skipping transfer
	I0914 16:44:54.223288   16725 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0914 16:44:54.233103   16725 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0914 16:44:54.248690   16725 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0914 16:44:54.264306   16725 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2157 bytes)
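At this point the rendered kubeadm config has been copied to /var/tmp/minikube/kubeadm.yaml.new on the node. One hedged way to exercise it without creating a cluster is a dry run with the same version-pinned binary; note the real init below passes a long --ignore-preflight-errors list, which a dry run on this host may also need:

    # Dry-run the generated kubeadm config with the pinned binary (no cluster changes)
    sudo /var/lib/minikube/binaries/v1.31.1/kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run
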
	I0914 16:44:54.280174   16725 ssh_runner.go:195] Run: grep 192.168.39.189	control-plane.minikube.internal$ /etc/hosts
	I0914 16:44:54.283808   16725 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.189	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0914 16:44:54.295236   16725 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 16:44:54.407554   16725 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0914 16:44:54.423857   16725 certs.go:68] Setting up /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/addons-996992 for IP: 192.168.39.189
	I0914 16:44:54.423885   16725 certs.go:194] generating shared ca certs ...
	I0914 16:44:54.423899   16725 certs.go:226] acquiring lock for ca certs: {Name:mkb663a3180967f5f94f0c355b2cd55067394331 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 16:44:54.424055   16725 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19643-8806/.minikube/ca.key
	I0914 16:44:54.653328   16725 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19643-8806/.minikube/ca.crt ...
	I0914 16:44:54.653357   16725 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19643-8806/.minikube/ca.crt: {Name:mk83d7136889857d4ed25b0dba1b2df29c745e1d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 16:44:54.653511   16725 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19643-8806/.minikube/ca.key ...
	I0914 16:44:54.653521   16725 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19643-8806/.minikube/ca.key: {Name:mkf6a9abc7e34a97c99f2a5ec51dc983ba6352f9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 16:44:54.653592   16725 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19643-8806/.minikube/proxy-client-ca.key
	I0914 16:44:54.763073   16725 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19643-8806/.minikube/proxy-client-ca.crt ...
	I0914 16:44:54.763103   16725 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19643-8806/.minikube/proxy-client-ca.crt: {Name:mk4ef09caad655cf68088badaf279bd208978abd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 16:44:54.763267   16725 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19643-8806/.minikube/proxy-client-ca.key ...
	I0914 16:44:54.763279   16725 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19643-8806/.minikube/proxy-client-ca.key: {Name:mk3a507b5dffcb94432777f7f3e5733be1c0f3d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 16:44:54.763357   16725 certs.go:256] generating profile certs ...
	I0914 16:44:54.763409   16725 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/addons-996992/client.key
	I0914 16:44:54.763424   16725 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/addons-996992/client.crt with IP's: []
	I0914 16:44:54.910505   16725 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/addons-996992/client.crt ...
	I0914 16:44:54.910543   16725 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/addons-996992/client.crt: {Name:mk09179ed269a97b87aa12bc79284cfddef8c5e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 16:44:54.910700   16725 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/addons-996992/client.key ...
	I0914 16:44:54.910712   16725 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/addons-996992/client.key: {Name:mk74eedc746dd9fd7a750c2f3d02305cb8619c8f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 16:44:54.910777   16725 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/addons-996992/apiserver.key.75aeddca
	I0914 16:44:54.910796   16725 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/addons-996992/apiserver.crt.75aeddca with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.189]
	I0914 16:44:55.208240   16725 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/addons-996992/apiserver.crt.75aeddca ...
	I0914 16:44:55.208270   16725 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/addons-996992/apiserver.crt.75aeddca: {Name:mka09606e42dd1ecc4ea29944564740a07d14b63 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 16:44:55.208415   16725 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/addons-996992/apiserver.key.75aeddca ...
	I0914 16:44:55.208427   16725 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/addons-996992/apiserver.key.75aeddca: {Name:mkbcdd45d86dc41d397758dcbac5534936ad83b3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 16:44:55.208527   16725 certs.go:381] copying /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/addons-996992/apiserver.crt.75aeddca -> /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/addons-996992/apiserver.crt
	I0914 16:44:55.208613   16725 certs.go:385] copying /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/addons-996992/apiserver.key.75aeddca -> /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/addons-996992/apiserver.key
	I0914 16:44:55.208661   16725 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/addons-996992/proxy-client.key
	I0914 16:44:55.208677   16725 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/addons-996992/proxy-client.crt with IP's: []
	I0914 16:44:55.276375   16725 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/addons-996992/proxy-client.crt ...
	I0914 16:44:55.276402   16725 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/addons-996992/proxy-client.crt: {Name:mkf139a671d75a23c54568782300fb890e1af9cc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 16:44:55.276575   16725 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/addons-996992/proxy-client.key ...
	I0914 16:44:55.276588   16725 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/addons-996992/proxy-client.key: {Name:mkf3356386ba33ec54d5db11fd3dfe25bd2233d6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 16:44:55.276748   16725 certs.go:484] found cert: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca-key.pem (1679 bytes)
	I0914 16:44:55.276779   16725 certs.go:484] found cert: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca.pem (1082 bytes)
	I0914 16:44:55.276803   16725 certs.go:484] found cert: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/cert.pem (1123 bytes)
	I0914 16:44:55.276825   16725 certs.go:484] found cert: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/key.pem (1675 bytes)
	I0914 16:44:55.277400   16725 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0914 16:44:55.303836   16725 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0914 16:44:55.325577   16725 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0914 16:44:55.348012   16725 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0914 16:44:55.371496   16725 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/addons-996992/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0914 16:44:55.393703   16725 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/addons-996992/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0914 16:44:55.416084   16725 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/addons-996992/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0914 16:44:55.438231   16725 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/addons-996992/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0914 16:44:55.461207   16725 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0914 16:44:55.484035   16725 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0914 16:44:55.499790   16725 ssh_runner.go:195] Run: openssl version
	I0914 16:44:55.505113   16725 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0914 16:44:55.515170   16725 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0914 16:44:55.519587   16725 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 14 16:44 /usr/share/ca-certificates/minikubeCA.pem
	I0914 16:44:55.519665   16725 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0914 16:44:55.525286   16725 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
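The two openssl/ln steps above compute the subject hash of minikubeCA.pem and expose it in /etc/ssl/certs under that hash (b5213941.0 here) so standard TLS lookups can find the CA. A quick sketch to confirm the hash and the symlink agree:

    # The printed hash should match the b5213941.0 symlink created above
    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
    ls -l /etc/ssl/certs/b5213941.0
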
	I0914 16:44:55.535581   16725 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0914 16:44:55.539357   16725 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0914 16:44:55.539419   16725 kubeadm.go:392] StartCluster: {Name:addons-996992 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19643/minikube-v1.34.0-1726281733-19643-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726281268-19643@sha256:cce8be0c1ac4e3d852132008ef1cc1dcf5b79f708d025db83f146ae65db32e8e Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 C
lusterName:addons-996992 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.189 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType
:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0914 16:44:55.539594   16725 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0914 16:44:55.539672   16725 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0914 16:44:55.575978   16725 cri.go:89] found id: ""
	I0914 16:44:55.576057   16725 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0914 16:44:55.585788   16725 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0914 16:44:55.595409   16725 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0914 16:44:55.604391   16725 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0914 16:44:55.604417   16725 kubeadm.go:157] found existing configuration files:
	
	I0914 16:44:55.604464   16725 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0914 16:44:55.612932   16725 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0914 16:44:55.613006   16725 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0914 16:44:55.621580   16725 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0914 16:44:55.629773   16725 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0914 16:44:55.629834   16725 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0914 16:44:55.638432   16725 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0914 16:44:55.646743   16725 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0914 16:44:55.646820   16725 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0914 16:44:55.655625   16725 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0914 16:44:55.663901   16725 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0914 16:44:55.663966   16725 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
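The cleanup above follows a simple pattern: each kubeconfig under /etc/kubernetes is grepped for the expected control-plane endpoint, and any file that is missing or does not contain it is removed so kubeadm can regenerate it. A minimal shell sketch of that loop, using the same endpoint and file names shown in the log:

    # Remove kubeconfigs that do not point at the expected control-plane endpoint;
    # kubeadm init will write fresh ones.
    endpoint="https://control-plane.minikube.internal:8443"
    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      if ! sudo grep -q "$endpoint" "/etc/kubernetes/$f" 2>/dev/null; then
        sudo rm -f "/etc/kubernetes/$f"
      fi
    done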
	I0914 16:44:55.672657   16725 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0914 16:44:55.725872   16725 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0914 16:44:55.725960   16725 kubeadm.go:310] [preflight] Running pre-flight checks
	I0914 16:44:55.830107   16725 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0914 16:44:55.830268   16725 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0914 16:44:55.830418   16725 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0914 16:44:55.839067   16725 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0914 16:44:55.872082   16725 out.go:235]   - Generating certificates and keys ...
	I0914 16:44:55.872184   16725 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0914 16:44:55.872270   16725 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0914 16:44:56.094669   16725 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0914 16:44:56.228851   16725 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0914 16:44:56.361198   16725 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0914 16:44:56.439341   16725 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0914 16:44:56.528538   16725 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0914 16:44:56.528694   16725 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-996992 localhost] and IPs [192.168.39.189 127.0.0.1 ::1]
	I0914 16:44:56.706339   16725 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0914 16:44:56.706543   16725 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-996992 localhost] and IPs [192.168.39.189 127.0.0.1 ::1]
	I0914 16:44:56.783275   16725 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0914 16:44:56.956298   16725 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0914 16:44:57.088304   16725 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0914 16:44:57.088427   16725 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0914 16:44:57.464241   16725 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0914 16:44:57.635302   16725 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0914 16:44:57.910383   16725 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0914 16:44:58.013201   16725 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0914 16:44:58.248188   16725 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0914 16:44:58.250774   16725 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0914 16:44:58.253067   16725 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0914 16:44:58.254997   16725 out.go:235]   - Booting up control plane ...
	I0914 16:44:58.255104   16725 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0914 16:44:58.255191   16725 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0914 16:44:58.255668   16725 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0914 16:44:58.271031   16725 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0914 16:44:58.280477   16725 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0914 16:44:58.280530   16725 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0914 16:44:58.407134   16725 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0914 16:44:58.407301   16725 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0914 16:44:58.908397   16725 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.392958ms
	I0914 16:44:58.908509   16725 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0914 16:45:04.906474   16725 kubeadm.go:310] [api-check] The API server is healthy after 6.002177937s
	I0914 16:45:04.924613   16725 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0914 16:45:04.939822   16725 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0914 16:45:04.973453   16725 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0914 16:45:04.973676   16725 kubeadm.go:310] [mark-control-plane] Marking the node addons-996992 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0914 16:45:04.986235   16725 kubeadm.go:310] [bootstrap-token] Using token: shp2dh.uruxonhtmw8h7ze1
	I0914 16:45:04.987488   16725 out.go:235]   - Configuring RBAC rules ...
	I0914 16:45:04.987689   16725 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0914 16:45:04.996042   16725 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0914 16:45:05.007370   16725 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0914 16:45:05.010610   16725 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0914 16:45:05.017711   16725 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0914 16:45:05.022294   16725 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0914 16:45:05.314010   16725 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0914 16:45:05.751385   16725 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0914 16:45:06.313096   16725 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0914 16:45:06.313132   16725 kubeadm.go:310] 
	I0914 16:45:06.313225   16725 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0914 16:45:06.313238   16725 kubeadm.go:310] 
	I0914 16:45:06.313395   16725 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0914 16:45:06.313413   16725 kubeadm.go:310] 
	I0914 16:45:06.313440   16725 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0914 16:45:06.313497   16725 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0914 16:45:06.313558   16725 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0914 16:45:06.313572   16725 kubeadm.go:310] 
	I0914 16:45:06.313771   16725 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0914 16:45:06.313800   16725 kubeadm.go:310] 
	I0914 16:45:06.313867   16725 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0914 16:45:06.313881   16725 kubeadm.go:310] 
	I0914 16:45:06.313921   16725 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0914 16:45:06.314006   16725 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0914 16:45:06.314098   16725 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0914 16:45:06.314108   16725 kubeadm.go:310] 
	I0914 16:45:06.314233   16725 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0914 16:45:06.314351   16725 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0914 16:45:06.314360   16725 kubeadm.go:310] 
	I0914 16:45:06.314447   16725 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token shp2dh.uruxonhtmw8h7ze1 \
	I0914 16:45:06.314568   16725 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:30a8b6e98108730ec92a246ad3f4e746211e79f910581e46cd69c4cd78384600 \
	I0914 16:45:06.314616   16725 kubeadm.go:310] 	--control-plane 
	I0914 16:45:06.314625   16725 kubeadm.go:310] 
	I0914 16:45:06.314722   16725 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0914 16:45:06.314730   16725 kubeadm.go:310] 
	I0914 16:45:06.314828   16725 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token shp2dh.uruxonhtmw8h7ze1 \
	I0914 16:45:06.314969   16725 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:30a8b6e98108730ec92a246ad3f4e746211e79f910581e46cd69c4cd78384600 
	I0914 16:45:06.315496   16725 kubeadm.go:310] W0914 16:44:55.704880     823 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0914 16:45:06.315862   16725 kubeadm.go:310] W0914 16:44:55.705784     823 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0914 16:45:06.315978   16725 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
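The join commands printed above embed a bootstrap token and a CA certificate hash. Should that hash ever need to be recomputed on the control plane (for example, to hand out a fresh join command later), the standard kubeadm recipe is to hash the CA public key; a sketch, assuming the CA certificate lives in the certificateDir reported earlier in the log (/var/lib/minikube/certs):

    # Recompute the --discovery-token-ca-cert-hash value from the cluster CA.
    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'
    # Alternatively, kubeadm can print a complete, fresh join command:
    #   kubeadm token create --print-join-command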
	I0914 16:45:06.315991   16725 cni.go:84] Creating CNI manager for ""
	I0914 16:45:06.315997   16725 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0914 16:45:06.317740   16725 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0914 16:45:06.319057   16725 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0914 16:45:06.331920   16725 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0914 16:45:06.353277   16725 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0914 16:45:06.353350   16725 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 16:45:06.353388   16725 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-996992 minikube.k8s.io/updated_at=2024_09_14T16_45_06_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=fbeeb744274463b05401c917e5ab21bbaf5ef95a minikube.k8s.io/name=addons-996992 minikube.k8s.io/primary=true
	I0914 16:45:06.375471   16725 ops.go:34] apiserver oom_adj: -16
	I0914 16:45:06.504882   16725 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 16:45:07.005141   16725 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 16:45:07.505774   16725 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 16:45:08.005050   16725 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 16:45:08.505830   16725 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 16:45:09.005575   16725 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 16:45:09.505807   16725 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 16:45:10.005492   16725 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 16:45:10.504986   16725 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 16:45:10.621672   16725 kubeadm.go:1113] duration metric: took 4.268383123s to wait for elevateKubeSystemPrivileges
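The repeated "kubectl get sa default" runs above are a readiness poll: minikube keeps retrying until the default ServiceAccount exists in the default namespace, and elevateKubeSystemPrivileges is reported done once it does. The same poll, written explicitly as a loop around the command from the log:

    # Poll until the default ServiceAccount appears (the loop the log above performs step by step).
    until sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default \
          --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5
    done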
	I0914 16:45:10.621717   16725 kubeadm.go:394] duration metric: took 15.082301818s to StartCluster
	I0914 16:45:10.621740   16725 settings.go:142] acquiring lock: {Name:mkee31cd57c3a91eff581c48ba7961460fb3e0b1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 16:45:10.621915   16725 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19643-8806/kubeconfig
	I0914 16:45:10.622431   16725 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19643-8806/kubeconfig: {Name:mk850f3e2f3c2ef1e81c795ae72e22e459e27140 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 16:45:10.622689   16725 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0914 16:45:10.622711   16725 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.189 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0914 16:45:10.622769   16725 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
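The toEnable map above is the effective addon selection for this profile: entries set to true (ingress, registry, metrics-server, csi-hostpath-driver, and so on) are what this run turns on. From the minikube CLI the same selection is normally expressed per addon; a sketch, with the profile name taken from the log and standard minikube flag usage:

    # Enable individual addons for the profile seen in the log.
    minikube -p addons-996992 addons enable registry
    minikube -p addons-996992 addons enable ingress
    minikube -p addons-996992 addons enable metrics-server
    # Or request them at start time:
    #   minikube start -p addons-996992 --addons=registry --addons=ingress --addons=metrics-server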
	I0914 16:45:10.622896   16725 config.go:182] Loaded profile config "addons-996992": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0914 16:45:10.622926   16725 addons.go:69] Setting helm-tiller=true in profile "addons-996992"
	I0914 16:45:10.622941   16725 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-996992"
	I0914 16:45:10.622950   16725 addons.go:69] Setting cloud-spanner=true in profile "addons-996992"
	I0914 16:45:10.622957   16725 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-996992"
	I0914 16:45:10.622897   16725 addons.go:69] Setting yakd=true in profile "addons-996992"
	I0914 16:45:10.622964   16725 addons.go:234] Setting addon cloud-spanner=true in "addons-996992"
	I0914 16:45:10.622970   16725 addons.go:69] Setting ingress-dns=true in profile "addons-996992"
	I0914 16:45:10.622976   16725 addons.go:234] Setting addon yakd=true in "addons-996992"
	I0914 16:45:10.622983   16725 addons.go:234] Setting addon ingress-dns=true in "addons-996992"
	I0914 16:45:10.622996   16725 host.go:66] Checking if "addons-996992" exists ...
	I0914 16:45:10.623004   16725 host.go:66] Checking if "addons-996992" exists ...
	I0914 16:45:10.623021   16725 host.go:66] Checking if "addons-996992" exists ...
	I0914 16:45:10.622933   16725 addons.go:69] Setting storage-provisioner=true in profile "addons-996992"
	I0914 16:45:10.623123   16725 addons.go:234] Setting addon storage-provisioner=true in "addons-996992"
	I0914 16:45:10.623142   16725 host.go:66] Checking if "addons-996992" exists ...
	I0914 16:45:10.623344   16725 addons.go:69] Setting volumesnapshots=true in profile "addons-996992"
	I0914 16:45:10.623366   16725 addons.go:234] Setting addon volumesnapshots=true in "addons-996992"
	I0914 16:45:10.623392   16725 host.go:66] Checking if "addons-996992" exists ...
	I0914 16:45:10.623393   16725 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 16:45:10.623426   16725 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 16:45:10.623459   16725 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 16:45:10.623483   16725 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 16:45:10.623506   16725 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 16:45:10.623518   16725 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 16:45:10.622951   16725 addons.go:234] Setting addon helm-tiller=true in "addons-996992"
	I0914 16:45:10.622917   16725 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-996992"
	I0914 16:45:10.623622   16725 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-996992"
	I0914 16:45:10.622926   16725 addons.go:69] Setting registry=true in profile "addons-996992"
	I0914 16:45:10.623646   16725 addons.go:234] Setting addon registry=true in "addons-996992"
	I0914 16:45:10.622961   16725 addons.go:69] Setting ingress=true in profile "addons-996992"
	I0914 16:45:10.623658   16725 addons.go:234] Setting addon ingress=true in "addons-996992"
	I0914 16:45:10.623672   16725 addons.go:69] Setting volcano=true in profile "addons-996992"
	I0914 16:45:10.623683   16725 addons.go:234] Setting addon volcano=true in "addons-996992"
	I0914 16:45:10.622914   16725 addons.go:69] Setting inspektor-gadget=true in profile "addons-996992"
	I0914 16:45:10.623704   16725 addons.go:69] Setting default-storageclass=true in profile "addons-996992"
	I0914 16:45:10.623713   16725 addons.go:234] Setting addon inspektor-gadget=true in "addons-996992"
	I0914 16:45:10.623717   16725 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-996992"
	I0914 16:45:10.622909   16725 addons.go:69] Setting metrics-server=true in profile "addons-996992"
	I0914 16:45:10.623726   16725 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-996992"
	I0914 16:45:10.622926   16725 addons.go:69] Setting gcp-auth=true in profile "addons-996992"
	I0914 16:45:10.623734   16725 addons.go:234] Setting addon metrics-server=true in "addons-996992"
	I0914 16:45:10.623757   16725 mustload.go:65] Loading cluster: addons-996992
	I0914 16:45:10.623769   16725 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-996992"
	I0914 16:45:10.623852   16725 host.go:66] Checking if "addons-996992" exists ...
	I0914 16:45:10.623914   16725 host.go:66] Checking if "addons-996992" exists ...
	I0914 16:45:10.623984   16725 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 16:45:10.624008   16725 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 16:45:10.624067   16725 host.go:66] Checking if "addons-996992" exists ...
	I0914 16:45:10.624232   16725 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 16:45:10.624260   16725 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 16:45:10.624329   16725 host.go:66] Checking if "addons-996992" exists ...
	I0914 16:45:10.624403   16725 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 16:45:10.624403   16725 host.go:66] Checking if "addons-996992" exists ...
	I0914 16:45:10.624463   16725 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 16:45:10.624746   16725 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 16:45:10.624786   16725 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 16:45:10.624834   16725 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 16:45:10.624904   16725 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 16:45:10.625011   16725 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 16:45:10.625036   16725 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 16:45:10.625228   16725 config.go:182] Loaded profile config "addons-996992": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0914 16:45:10.625249   16725 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 16:45:10.625262   16725 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 16:45:10.625277   16725 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 16:45:10.625297   16725 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 16:45:10.625391   16725 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 16:45:10.625433   16725 host.go:66] Checking if "addons-996992" exists ...
	I0914 16:45:10.625392   16725 host.go:66] Checking if "addons-996992" exists ...
	I0914 16:45:10.625866   16725 host.go:66] Checking if "addons-996992" exists ...
	I0914 16:45:10.625912   16725 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 16:45:10.625973   16725 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 16:45:10.626017   16725 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 16:45:10.626051   16725 out.go:177] * Verifying Kubernetes components...
	I0914 16:45:10.626257   16725 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 16:45:10.626289   16725 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 16:45:10.626630   16725 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 16:45:10.626698   16725 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 16:45:10.631422   16725 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 16:45:10.643737   16725 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34453
	I0914 16:45:10.644067   16725 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39747
	I0914 16:45:10.644260   16725 main.go:141] libmachine: () Calling .GetVersion
	I0914 16:45:10.643976   16725 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44929
	I0914 16:45:10.644937   16725 main.go:141] libmachine: Using API Version  1
	I0914 16:45:10.644959   16725 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 16:45:10.645032   16725 main.go:141] libmachine: () Calling .GetVersion
	I0914 16:45:10.645109   16725 main.go:141] libmachine: () Calling .GetVersion
	I0914 16:45:10.645308   16725 main.go:141] libmachine: () Calling .GetMachineName
	I0914 16:45:10.645466   16725 main.go:141] libmachine: Using API Version  1
	I0914 16:45:10.645486   16725 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 16:45:10.645661   16725 main.go:141] libmachine: Using API Version  1
	I0914 16:45:10.645674   16725 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 16:45:10.645856   16725 main.go:141] libmachine: () Calling .GetMachineName
	I0914 16:45:10.645968   16725 main.go:141] libmachine: () Calling .GetMachineName
	I0914 16:45:10.646318   16725 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 16:45:10.646363   16725 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 16:45:10.646410   16725 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 16:45:10.646443   16725 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 16:45:10.658785   16725 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 16:45:10.658848   16725 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 16:45:10.659642   16725 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 16:45:10.659689   16725 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 16:45:10.668950   16725 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37795
	I0914 16:45:10.669202   16725 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42407
	I0914 16:45:10.673147   16725 main.go:141] libmachine: () Calling .GetVersion
	I0914 16:45:10.673249   16725 main.go:141] libmachine: () Calling .GetVersion
	I0914 16:45:10.674307   16725 main.go:141] libmachine: Using API Version  1
	I0914 16:45:10.674330   16725 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 16:45:10.674658   16725 main.go:141] libmachine: Using API Version  1
	I0914 16:45:10.674677   16725 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 16:45:10.674857   16725 main.go:141] libmachine: () Calling .GetMachineName
	I0914 16:45:10.675190   16725 main.go:141] libmachine: () Calling .GetMachineName
	I0914 16:45:10.675403   16725 main.go:141] libmachine: (addons-996992) Calling .GetState
	I0914 16:45:10.675458   16725 main.go:141] libmachine: (addons-996992) Calling .GetState
	I0914 16:45:10.680254   16725 addons.go:234] Setting addon default-storageclass=true in "addons-996992"
	I0914 16:45:10.680332   16725 host.go:66] Checking if "addons-996992" exists ...
	I0914 16:45:10.680709   16725 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 16:45:10.680747   16725 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 16:45:10.681169   16725 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-996992"
	I0914 16:45:10.681215   16725 host.go:66] Checking if "addons-996992" exists ...
	I0914 16:45:10.681572   16725 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 16:45:10.681620   16725 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 16:45:10.688239   16725 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44591
	I0914 16:45:10.688935   16725 main.go:141] libmachine: () Calling .GetVersion
	I0914 16:45:10.689788   16725 main.go:141] libmachine: Using API Version  1
	I0914 16:45:10.689818   16725 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 16:45:10.690304   16725 main.go:141] libmachine: () Calling .GetMachineName
	I0914 16:45:10.691113   16725 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 16:45:10.691159   16725 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 16:45:10.695403   16725 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41083
	I0914 16:45:10.695859   16725 main.go:141] libmachine: () Calling .GetVersion
	I0914 16:45:10.696143   16725 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41857
	I0914 16:45:10.697034   16725 main.go:141] libmachine: Using API Version  1
	I0914 16:45:10.697057   16725 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 16:45:10.697432   16725 main.go:141] libmachine: () Calling .GetMachineName
	I0914 16:45:10.698006   16725 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 16:45:10.698052   16725 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 16:45:10.698627   16725 main.go:141] libmachine: () Calling .GetVersion
	I0914 16:45:10.699204   16725 main.go:141] libmachine: Using API Version  1
	I0914 16:45:10.699227   16725 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 16:45:10.699701   16725 main.go:141] libmachine: () Calling .GetMachineName
	I0914 16:45:10.699944   16725 main.go:141] libmachine: (addons-996992) Calling .GetState
	I0914 16:45:10.700002   16725 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40407
	I0914 16:45:10.700177   16725 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35041
	I0914 16:45:10.701070   16725 main.go:141] libmachine: () Calling .GetVersion
	I0914 16:45:10.701617   16725 main.go:141] libmachine: Using API Version  1
	I0914 16:45:10.701642   16725 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 16:45:10.701707   16725 main.go:141] libmachine: () Calling .GetVersion
	I0914 16:45:10.702279   16725 main.go:141] libmachine: () Calling .GetMachineName
	I0914 16:45:10.702857   16725 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 16:45:10.702896   16725 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 16:45:10.703130   16725 main.go:141] libmachine: (addons-996992) Calling .DriverName
	I0914 16:45:10.703659   16725 main.go:141] libmachine: Using API Version  1
	I0914 16:45:10.703682   16725 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 16:45:10.704625   16725 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0914 16:45:10.705330   16725 main.go:141] libmachine: () Calling .GetMachineName
	I0914 16:45:10.706061   16725 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0914 16:45:10.706078   16725 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0914 16:45:10.706100   16725 main.go:141] libmachine: (addons-996992) Calling .GetSSHHostname
	I0914 16:45:10.706896   16725 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 16:45:10.706941   16725 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 16:45:10.709826   16725 main.go:141] libmachine: (addons-996992) DBG | domain addons-996992 has defined MAC address 52:54:00:dd:8c:90 in network mk-addons-996992
	I0914 16:45:10.710025   16725 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45181
	I0914 16:45:10.710585   16725 main.go:141] libmachine: (addons-996992) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:8c:90", ip: ""} in network mk-addons-996992: {Iface:virbr1 ExpiryTime:2024-09-14 17:44:42 +0000 UTC Type:0 Mac:52:54:00:dd:8c:90 Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:addons-996992 Clientid:01:52:54:00:dd:8c:90}
	I0914 16:45:10.710610   16725 main.go:141] libmachine: (addons-996992) DBG | domain addons-996992 has defined IP address 192.168.39.189 and MAC address 52:54:00:dd:8c:90 in network mk-addons-996992
	I0914 16:45:10.710663   16725 main.go:141] libmachine: () Calling .GetVersion
	I0914 16:45:10.710948   16725 main.go:141] libmachine: (addons-996992) Calling .GetSSHPort
	I0914 16:45:10.711126   16725 main.go:141] libmachine: (addons-996992) Calling .GetSSHKeyPath
	I0914 16:45:10.711257   16725 main.go:141] libmachine: (addons-996992) Calling .GetSSHUsername
	I0914 16:45:10.711463   16725 sshutil.go:53] new ssh client: &{IP:192.168.39.189 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/addons-996992/id_rsa Username:docker}
	I0914 16:45:10.712334   16725 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45145
	I0914 16:45:10.712445   16725 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44519
	I0914 16:45:10.712635   16725 main.go:141] libmachine: () Calling .GetVersion
	I0914 16:45:10.713188   16725 main.go:141] libmachine: Using API Version  1
	I0914 16:45:10.713212   16725 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 16:45:10.713557   16725 main.go:141] libmachine: () Calling .GetMachineName
	I0914 16:45:10.714114   16725 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 16:45:10.714187   16725 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 16:45:10.714670   16725 main.go:141] libmachine: () Calling .GetVersion
	I0914 16:45:10.715212   16725 main.go:141] libmachine: Using API Version  1
	I0914 16:45:10.715229   16725 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 16:45:10.715594   16725 main.go:141] libmachine: () Calling .GetMachineName
	I0914 16:45:10.716145   16725 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 16:45:10.716181   16725 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 16:45:10.718969   16725 main.go:141] libmachine: Using API Version  1
	I0914 16:45:10.718990   16725 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 16:45:10.719432   16725 main.go:141] libmachine: () Calling .GetMachineName
	I0914 16:45:10.721094   16725 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35323
	I0914 16:45:10.721588   16725 main.go:141] libmachine: () Calling .GetVersion
	I0914 16:45:10.722010   16725 main.go:141] libmachine: Using API Version  1
	I0914 16:45:10.722031   16725 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 16:45:10.723638   16725 main.go:141] libmachine: () Calling .GetMachineName
	I0914 16:45:10.724834   16725 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36847
	I0914 16:45:10.724994   16725 main.go:141] libmachine: (addons-996992) Calling .GetState
	I0914 16:45:10.725170   16725 main.go:141] libmachine: () Calling .GetVersion
	I0914 16:45:10.725465   16725 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44461
	I0914 16:45:10.727414   16725 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45239
	I0914 16:45:10.727417   16725 main.go:141] libmachine: (addons-996992) Calling .DriverName
	I0914 16:45:10.727415   16725 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44971
	I0914 16:45:10.727546   16725 main.go:141] libmachine: Using API Version  1
	I0914 16:45:10.727570   16725 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 16:45:10.727636   16725 main.go:141] libmachine: Making call to close driver server
	I0914 16:45:10.727648   16725 main.go:141] libmachine: (addons-996992) Calling .Close
	I0914 16:45:10.727899   16725 main.go:141] libmachine: (addons-996992) DBG | Closing plugin on server side
	I0914 16:45:10.727912   16725 main.go:141] libmachine: () Calling .GetVersion
	I0914 16:45:10.727934   16725 main.go:141] libmachine: Successfully made call to close driver server
	I0914 16:45:10.727946   16725 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 16:45:10.727954   16725 main.go:141] libmachine: Making call to close driver server
	I0914 16:45:10.727962   16725 main.go:141] libmachine: (addons-996992) Calling .Close
	I0914 16:45:10.728003   16725 main.go:141] libmachine: () Calling .GetVersion
	I0914 16:45:10.728073   16725 main.go:141] libmachine: () Calling .GetMachineName
	I0914 16:45:10.728123   16725 main.go:141] libmachine: () Calling .GetVersion
	I0914 16:45:10.728189   16725 main.go:141] libmachine: (addons-996992) DBG | Closing plugin on server side
	I0914 16:45:10.728222   16725 main.go:141] libmachine: Successfully made call to close driver server
	I0914 16:45:10.728238   16725 main.go:141] libmachine: Making call to close connection to plugin binary
	W0914 16:45:10.728338   16725 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
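The warning above is expected on this job: the volcano addon declares no support for the crio runtime, so enabling it fails and the run continues without it. If volcano was requested explicitly, it can simply be turned off for the profile with the standard minikube CLI:

    # Volcano is incompatible with crio per the warning above; drop it from the profile.
    minikube -p addons-996992 addons disable volcano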
	I0914 16:45:10.728897   16725 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 16:45:10.728950   16725 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 16:45:10.729209   16725 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40193
	I0914 16:45:10.729478   16725 main.go:141] libmachine: Using API Version  1
	I0914 16:45:10.729509   16725 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 16:45:10.729637   16725 main.go:141] libmachine: () Calling .GetVersion
	I0914 16:45:10.729966   16725 main.go:141] libmachine: Using API Version  1
	I0914 16:45:10.729987   16725 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 16:45:10.730120   16725 main.go:141] libmachine: Using API Version  1
	I0914 16:45:10.730139   16725 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 16:45:10.730398   16725 main.go:141] libmachine: () Calling .GetMachineName
	I0914 16:45:10.730596   16725 main.go:141] libmachine: (addons-996992) Calling .GetState
	I0914 16:45:10.730665   16725 main.go:141] libmachine: () Calling .GetMachineName
	I0914 16:45:10.731392   16725 main.go:141] libmachine: (addons-996992) Calling .GetState
	I0914 16:45:10.731538   16725 main.go:141] libmachine: Using API Version  1
	I0914 16:45:10.731557   16725 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 16:45:10.731611   16725 main.go:141] libmachine: () Calling .GetMachineName
	I0914 16:45:10.732178   16725 main.go:141] libmachine: (addons-996992) Calling .GetState
	I0914 16:45:10.732245   16725 main.go:141] libmachine: () Calling .GetMachineName
	I0914 16:45:10.732295   16725 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45633
	I0914 16:45:10.734574   16725 main.go:141] libmachine: (addons-996992) Calling .DriverName
	I0914 16:45:10.734579   16725 main.go:141] libmachine: () Calling .GetVersion
	I0914 16:45:10.734688   16725 main.go:141] libmachine: (addons-996992) Calling .DriverName
	I0914 16:45:10.734744   16725 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34311
	I0914 16:45:10.735001   16725 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 16:45:10.735046   16725 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 16:45:10.735825   16725 host.go:66] Checking if "addons-996992" exists ...
	I0914 16:45:10.736192   16725 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 16:45:10.736223   16725 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 16:45:10.736395   16725 main.go:141] libmachine: () Calling .GetVersion
	I0914 16:45:10.736576   16725 main.go:141] libmachine: Using API Version  1
	I0914 16:45:10.736592   16725 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 16:45:10.736948   16725 main.go:141] libmachine: () Calling .GetMachineName
	I0914 16:45:10.737179   16725 main.go:141] libmachine: Using API Version  1
	I0914 16:45:10.737197   16725 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 16:45:10.737562   16725 main.go:141] libmachine: () Calling .GetMachineName
	I0914 16:45:10.737591   16725 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 16:45:10.737664   16725 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0914 16:45:10.738728   16725 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0914 16:45:10.738746   16725 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0914 16:45:10.738765   16725 main.go:141] libmachine: (addons-996992) Calling .GetSSHHostname
	I0914 16:45:10.739421   16725 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0914 16:45:10.739440   16725 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0914 16:45:10.739456   16725 main.go:141] libmachine: (addons-996992) Calling .GetSSHHostname
	I0914 16:45:10.742843   16725 main.go:141] libmachine: (addons-996992) DBG | domain addons-996992 has defined MAC address 52:54:00:dd:8c:90 in network mk-addons-996992
	I0914 16:45:10.743195   16725 main.go:141] libmachine: (addons-996992) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:8c:90", ip: ""} in network mk-addons-996992: {Iface:virbr1 ExpiryTime:2024-09-14 17:44:42 +0000 UTC Type:0 Mac:52:54:00:dd:8c:90 Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:addons-996992 Clientid:01:52:54:00:dd:8c:90}
	I0914 16:45:10.743228   16725 main.go:141] libmachine: (addons-996992) DBG | domain addons-996992 has defined IP address 192.168.39.189 and MAC address 52:54:00:dd:8c:90 in network mk-addons-996992
	I0914 16:45:10.743515   16725 main.go:141] libmachine: (addons-996992) Calling .GetSSHPort
	I0914 16:45:10.743739   16725 main.go:141] libmachine: (addons-996992) Calling .GetSSHKeyPath
	I0914 16:45:10.743928   16725 main.go:141] libmachine: (addons-996992) Calling .GetSSHUsername
	I0914 16:45:10.744098   16725 sshutil.go:53] new ssh client: &{IP:192.168.39.189 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/addons-996992/id_rsa Username:docker}
	I0914 16:45:10.744454   16725 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42957
	I0914 16:45:10.744602   16725 main.go:141] libmachine: (addons-996992) DBG | domain addons-996992 has defined MAC address 52:54:00:dd:8c:90 in network mk-addons-996992
	I0914 16:45:10.744871   16725 main.go:141] libmachine: (addons-996992) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:8c:90", ip: ""} in network mk-addons-996992: {Iface:virbr1 ExpiryTime:2024-09-14 17:44:42 +0000 UTC Type:0 Mac:52:54:00:dd:8c:90 Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:addons-996992 Clientid:01:52:54:00:dd:8c:90}
	I0914 16:45:10.744902   16725 main.go:141] libmachine: (addons-996992) DBG | domain addons-996992 has defined IP address 192.168.39.189 and MAC address 52:54:00:dd:8c:90 in network mk-addons-996992
	I0914 16:45:10.745182   16725 main.go:141] libmachine: (addons-996992) Calling .GetSSHPort
	I0914 16:45:10.745420   16725 main.go:141] libmachine: (addons-996992) Calling .GetSSHKeyPath
	I0914 16:45:10.745569   16725 main.go:141] libmachine: (addons-996992) Calling .GetSSHUsername
	I0914 16:45:10.745740   16725 sshutil.go:53] new ssh client: &{IP:192.168.39.189 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/addons-996992/id_rsa Username:docker}
	I0914 16:45:10.746637   16725 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 16:45:10.746670   16725 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 16:45:10.746699   16725 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 16:45:10.746715   16725 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 16:45:10.747176   16725 main.go:141] libmachine: () Calling .GetVersion
	I0914 16:45:10.748001   16725 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44723
	I0914 16:45:10.748265   16725 main.go:141] libmachine: Using API Version  1
	I0914 16:45:10.748278   16725 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 16:45:10.748857   16725 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 16:45:10.748894   16725 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 16:45:10.749102   16725 main.go:141] libmachine: () Calling .GetMachineName
	I0914 16:45:10.749338   16725 main.go:141] libmachine: (addons-996992) Calling .GetState
	I0914 16:45:10.749619   16725 main.go:141] libmachine: () Calling .GetVersion
	I0914 16:45:10.750242   16725 main.go:141] libmachine: Using API Version  1
	I0914 16:45:10.750258   16725 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 16:45:10.750658   16725 main.go:141] libmachine: () Calling .GetMachineName
	I0914 16:45:10.751280   16725 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 16:45:10.751315   16725 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 16:45:10.751558   16725 main.go:141] libmachine: (addons-996992) Calling .DriverName
	I0914 16:45:10.753110   16725 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42829
	I0914 16:45:10.753540   16725 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0914 16:45:10.753566   16725 main.go:141] libmachine: () Calling .GetVersion
	I0914 16:45:10.754094   16725 main.go:141] libmachine: Using API Version  1
	I0914 16:45:10.754112   16725 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 16:45:10.754480   16725 main.go:141] libmachine: () Calling .GetMachineName
	I0914 16:45:10.754671   16725 main.go:141] libmachine: (addons-996992) Calling .GetState
	I0914 16:45:10.755075   16725 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0914 16:45:10.755092   16725 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0914 16:45:10.755111   16725 main.go:141] libmachine: (addons-996992) Calling .GetSSHHostname
	I0914 16:45:10.757604   16725 main.go:141] libmachine: (addons-996992) Calling .DriverName
	I0914 16:45:10.758799   16725 main.go:141] libmachine: (addons-996992) DBG | domain addons-996992 has defined MAC address 52:54:00:dd:8c:90 in network mk-addons-996992
	I0914 16:45:10.759063   16725 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0914 16:45:10.759379   16725 main.go:141] libmachine: (addons-996992) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:8c:90", ip: ""} in network mk-addons-996992: {Iface:virbr1 ExpiryTime:2024-09-14 17:44:42 +0000 UTC Type:0 Mac:52:54:00:dd:8c:90 Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:addons-996992 Clientid:01:52:54:00:dd:8c:90}
	I0914 16:45:10.759413   16725 main.go:141] libmachine: (addons-996992) DBG | domain addons-996992 has defined IP address 192.168.39.189 and MAC address 52:54:00:dd:8c:90 in network mk-addons-996992
	I0914 16:45:10.759591   16725 main.go:141] libmachine: (addons-996992) Calling .GetSSHPort
	I0914 16:45:10.759777   16725 main.go:141] libmachine: (addons-996992) Calling .GetSSHKeyPath
	I0914 16:45:10.759925   16725 main.go:141] libmachine: (addons-996992) Calling .GetSSHUsername
	I0914 16:45:10.760043   16725 sshutil.go:53] new ssh client: &{IP:192.168.39.189 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/addons-996992/id_rsa Username:docker}
	I0914 16:45:10.761413   16725 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0914 16:45:10.764401   16725 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33395
	I0914 16:45:10.764486   16725 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43943
	I0914 16:45:10.764653   16725 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0914 16:45:10.764874   16725 main.go:141] libmachine: () Calling .GetVersion
	I0914 16:45:10.765386   16725 main.go:141] libmachine: Using API Version  1
	I0914 16:45:10.765410   16725 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 16:45:10.765758   16725 main.go:141] libmachine: () Calling .GetMachineName
	I0914 16:45:10.765983   16725 main.go:141] libmachine: (addons-996992) Calling .GetState
	I0914 16:45:10.767246   16725 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0914 16:45:10.767268   16725 main.go:141] libmachine: () Calling .GetVersion
	I0914 16:45:10.768228   16725 main.go:141] libmachine: Using API Version  1
	I0914 16:45:10.768265   16725 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 16:45:10.768284   16725 main.go:141] libmachine: (addons-996992) Calling .DriverName
	I0914 16:45:10.768810   16725 main.go:141] libmachine: () Calling .GetMachineName
	I0914 16:45:10.769040   16725 main.go:141] libmachine: (addons-996992) Calling .GetState
	I0914 16:45:10.769522   16725 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I0914 16:45:10.769526   16725 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0914 16:45:10.770047   16725 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37075
	I0914 16:45:10.770470   16725 main.go:141] libmachine: () Calling .GetVersion
	I0914 16:45:10.770948   16725 main.go:141] libmachine: Using API Version  1
	I0914 16:45:10.770965   16725 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 16:45:10.771278   16725 main.go:141] libmachine: () Calling .GetMachineName
	I0914 16:45:10.771438   16725 main.go:141] libmachine: (addons-996992) Calling .DriverName
	I0914 16:45:10.772503   16725 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0914 16:45:10.772561   16725 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0914 16:45:10.773645   16725 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0914 16:45:10.773685   16725 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0914 16:45:10.773697   16725 main.go:141] libmachine: (addons-996992) Calling .DriverName
	I0914 16:45:10.774893   16725 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0914 16:45:10.775085   16725 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0914 16:45:10.775109   16725 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0914 16:45:10.775128   16725 main.go:141] libmachine: (addons-996992) Calling .GetSSHHostname
	I0914 16:45:10.775683   16725 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0914 16:45:10.775853   16725 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36781
	I0914 16:45:10.775979   16725 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0914 16:45:10.776073   16725 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0914 16:45:10.776095   16725 main.go:141] libmachine: (addons-996992) Calling .GetSSHHostname
	I0914 16:45:10.776267   16725 main.go:141] libmachine: () Calling .GetVersion
	I0914 16:45:10.776399   16725 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45253
	I0914 16:45:10.776756   16725 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0914 16:45:10.776773   16725 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0914 16:45:10.776776   16725 main.go:141] libmachine: () Calling .GetVersion
	I0914 16:45:10.776797   16725 main.go:141] libmachine: (addons-996992) Calling .GetSSHHostname
	I0914 16:45:10.777646   16725 main.go:141] libmachine: Using API Version  1
	I0914 16:45:10.777664   16725 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 16:45:10.778321   16725 main.go:141] libmachine: Using API Version  1
	I0914 16:45:10.778341   16725 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 16:45:10.778636   16725 main.go:141] libmachine: () Calling .GetMachineName
	I0914 16:45:10.779063   16725 main.go:141] libmachine: (addons-996992) Calling .GetState
	I0914 16:45:10.780072   16725 main.go:141] libmachine: (addons-996992) DBG | domain addons-996992 has defined MAC address 52:54:00:dd:8c:90 in network mk-addons-996992
	I0914 16:45:10.780437   16725 main.go:141] libmachine: (addons-996992) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:8c:90", ip: ""} in network mk-addons-996992: {Iface:virbr1 ExpiryTime:2024-09-14 17:44:42 +0000 UTC Type:0 Mac:52:54:00:dd:8c:90 Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:addons-996992 Clientid:01:52:54:00:dd:8c:90}
	I0914 16:45:10.780455   16725 main.go:141] libmachine: (addons-996992) DBG | domain addons-996992 has defined IP address 192.168.39.189 and MAC address 52:54:00:dd:8c:90 in network mk-addons-996992
	I0914 16:45:10.780479   16725 main.go:141] libmachine: () Calling .GetMachineName
	I0914 16:45:10.780653   16725 main.go:141] libmachine: (addons-996992) Calling .GetSSHPort
	I0914 16:45:10.780703   16725 main.go:141] libmachine: (addons-996992) Calling .GetState
	I0914 16:45:10.780834   16725 main.go:141] libmachine: (addons-996992) Calling .GetSSHKeyPath
	I0914 16:45:10.780938   16725 main.go:141] libmachine: (addons-996992) Calling .GetSSHUsername
	I0914 16:45:10.781043   16725 sshutil.go:53] new ssh client: &{IP:192.168.39.189 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/addons-996992/id_rsa Username:docker}
	I0914 16:45:10.781324   16725 main.go:141] libmachine: (addons-996992) Calling .DriverName
	I0914 16:45:10.782798   16725 out.go:177]   - Using image docker.io/registry:2.8.3
	I0914 16:45:10.784596   16725 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0914 16:45:10.784747   16725 main.go:141] libmachine: (addons-996992) DBG | domain addons-996992 has defined MAC address 52:54:00:dd:8c:90 in network mk-addons-996992
	I0914 16:45:10.784942   16725 main.go:141] libmachine: (addons-996992) Calling .DriverName
	I0914 16:45:10.785509   16725 main.go:141] libmachine: (addons-996992) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:8c:90", ip: ""} in network mk-addons-996992: {Iface:virbr1 ExpiryTime:2024-09-14 17:44:42 +0000 UTC Type:0 Mac:52:54:00:dd:8c:90 Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:addons-996992 Clientid:01:52:54:00:dd:8c:90}
	I0914 16:45:10.785544   16725 main.go:141] libmachine: (addons-996992) DBG | domain addons-996992 has defined IP address 192.168.39.189 and MAC address 52:54:00:dd:8c:90 in network mk-addons-996992
	I0914 16:45:10.785572   16725 main.go:141] libmachine: (addons-996992) DBG | domain addons-996992 has defined MAC address 52:54:00:dd:8c:90 in network mk-addons-996992
	I0914 16:45:10.785798   16725 main.go:141] libmachine: (addons-996992) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:8c:90", ip: ""} in network mk-addons-996992: {Iface:virbr1 ExpiryTime:2024-09-14 17:44:42 +0000 UTC Type:0 Mac:52:54:00:dd:8c:90 Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:addons-996992 Clientid:01:52:54:00:dd:8c:90}
	I0914 16:45:10.785836   16725 main.go:141] libmachine: (addons-996992) DBG | domain addons-996992 has defined IP address 192.168.39.189 and MAC address 52:54:00:dd:8c:90 in network mk-addons-996992
	I0914 16:45:10.786069   16725 main.go:141] libmachine: (addons-996992) Calling .GetSSHPort
	I0914 16:45:10.786108   16725 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0914 16:45:10.786123   16725 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0914 16:45:10.786130   16725 main.go:141] libmachine: (addons-996992) Calling .GetSSHPort
	I0914 16:45:10.786141   16725 main.go:141] libmachine: (addons-996992) Calling .GetSSHHostname
	I0914 16:45:10.786311   16725 main.go:141] libmachine: (addons-996992) Calling .GetSSHKeyPath
	I0914 16:45:10.786443   16725 main.go:141] libmachine: (addons-996992) Calling .GetSSHUsername
	I0914 16:45:10.786567   16725 sshutil.go:53] new ssh client: &{IP:192.168.39.189 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/addons-996992/id_rsa Username:docker}
	I0914 16:45:10.786865   16725 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38023
	I0914 16:45:10.786927   16725 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33139
	I0914 16:45:10.787442   16725 main.go:141] libmachine: () Calling .GetVersion
	I0914 16:45:10.787449   16725 main.go:141] libmachine: () Calling .GetVersion
	I0914 16:45:10.787928   16725 main.go:141] libmachine: Using API Version  1
	I0914 16:45:10.787944   16725 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 16:45:10.788067   16725 main.go:141] libmachine: Using API Version  1
	I0914 16:45:10.788078   16725 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 16:45:10.788460   16725 main.go:141] libmachine: () Calling .GetMachineName
	I0914 16:45:10.788499   16725 main.go:141] libmachine: () Calling .GetMachineName
	I0914 16:45:10.788727   16725 main.go:141] libmachine: (addons-996992) Calling .GetState
	I0914 16:45:10.788782   16725 main.go:141] libmachine: (addons-996992) Calling .GetState
	I0914 16:45:10.789352   16725 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0
	I0914 16:45:10.789703   16725 main.go:141] libmachine: (addons-996992) Calling .GetSSHKeyPath
	I0914 16:45:10.790004   16725 main.go:141] libmachine: (addons-996992) DBG | domain addons-996992 has defined MAC address 52:54:00:dd:8c:90 in network mk-addons-996992
	I0914 16:45:10.790285   16725 main.go:141] libmachine: (addons-996992) Calling .GetSSHUsername
	I0914 16:45:10.790558   16725 sshutil.go:53] new ssh client: &{IP:192.168.39.189 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/addons-996992/id_rsa Username:docker}
	I0914 16:45:10.790863   16725 main.go:141] libmachine: (addons-996992) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:8c:90", ip: ""} in network mk-addons-996992: {Iface:virbr1 ExpiryTime:2024-09-14 17:44:42 +0000 UTC Type:0 Mac:52:54:00:dd:8c:90 Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:addons-996992 Clientid:01:52:54:00:dd:8c:90}
	I0914 16:45:10.790882   16725 main.go:141] libmachine: (addons-996992) DBG | domain addons-996992 has defined IP address 192.168.39.189 and MAC address 52:54:00:dd:8c:90 in network mk-addons-996992
	I0914 16:45:10.791031   16725 main.go:141] libmachine: (addons-996992) Calling .GetSSHPort
	I0914 16:45:10.791217   16725 main.go:141] libmachine: (addons-996992) Calling .GetSSHKeyPath
	I0914 16:45:10.791288   16725 main.go:141] libmachine: (addons-996992) Calling .DriverName
	I0914 16:45:10.791539   16725 main.go:141] libmachine: (addons-996992) Calling .GetSSHUsername
	I0914 16:45:10.791700   16725 sshutil.go:53] new ssh client: &{IP:192.168.39.189 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/addons-996992/id_rsa Username:docker}
	I0914 16:45:10.791780   16725 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0914 16:45:10.791796   16725 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0914 16:45:10.791815   16725 main.go:141] libmachine: (addons-996992) Calling .GetSSHHostname
	I0914 16:45:10.793587   16725 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0914 16:45:10.793606   16725 main.go:141] libmachine: (addons-996992) Calling .DriverName
	I0914 16:45:10.794824   16725 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0914 16:45:10.794856   16725 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0914 16:45:10.794874   16725 main.go:141] libmachine: (addons-996992) Calling .GetSSHHostname
	I0914 16:45:10.795591   16725 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.23
	I0914 16:45:10.796399   16725 main.go:141] libmachine: (addons-996992) DBG | domain addons-996992 has defined MAC address 52:54:00:dd:8c:90 in network mk-addons-996992
	I0914 16:45:10.796850   16725 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0914 16:45:10.796866   16725 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0914 16:45:10.796869   16725 main.go:141] libmachine: (addons-996992) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:8c:90", ip: ""} in network mk-addons-996992: {Iface:virbr1 ExpiryTime:2024-09-14 17:44:42 +0000 UTC Type:0 Mac:52:54:00:dd:8c:90 Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:addons-996992 Clientid:01:52:54:00:dd:8c:90}
	I0914 16:45:10.796884   16725 main.go:141] libmachine: (addons-996992) Calling .GetSSHHostname
	I0914 16:45:10.796884   16725 main.go:141] libmachine: (addons-996992) DBG | domain addons-996992 has defined IP address 192.168.39.189 and MAC address 52:54:00:dd:8c:90 in network mk-addons-996992
	I0914 16:45:10.797475   16725 main.go:141] libmachine: (addons-996992) Calling .GetSSHPort
	I0914 16:45:10.797658   16725 main.go:141] libmachine: (addons-996992) Calling .GetSSHKeyPath
	I0914 16:45:10.797852   16725 main.go:141] libmachine: (addons-996992) Calling .GetSSHUsername
	I0914 16:45:10.798050   16725 sshutil.go:53] new ssh client: &{IP:192.168.39.189 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/addons-996992/id_rsa Username:docker}
	I0914 16:45:10.798253   16725 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36037
	I0914 16:45:10.798969   16725 main.go:141] libmachine: (addons-996992) DBG | domain addons-996992 has defined MAC address 52:54:00:dd:8c:90 in network mk-addons-996992
	I0914 16:45:10.799185   16725 main.go:141] libmachine: () Calling .GetVersion
	I0914 16:45:10.799677   16725 main.go:141] libmachine: Using API Version  1
	I0914 16:45:10.799700   16725 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 16:45:10.799747   16725 main.go:141] libmachine: (addons-996992) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:8c:90", ip: ""} in network mk-addons-996992: {Iface:virbr1 ExpiryTime:2024-09-14 17:44:42 +0000 UTC Type:0 Mac:52:54:00:dd:8c:90 Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:addons-996992 Clientid:01:52:54:00:dd:8c:90}
	I0914 16:45:10.799773   16725 main.go:141] libmachine: (addons-996992) DBG | domain addons-996992 has defined IP address 192.168.39.189 and MAC address 52:54:00:dd:8c:90 in network mk-addons-996992
	I0914 16:45:10.800030   16725 main.go:141] libmachine: () Calling .GetMachineName
	I0914 16:45:10.800161   16725 main.go:141] libmachine: (addons-996992) Calling .GetSSHPort
	I0914 16:45:10.800242   16725 main.go:141] libmachine: (addons-996992) Calling .GetState
	I0914 16:45:10.800507   16725 main.go:141] libmachine: (addons-996992) DBG | domain addons-996992 has defined MAC address 52:54:00:dd:8c:90 in network mk-addons-996992
	I0914 16:45:10.800594   16725 main.go:141] libmachine: (addons-996992) Calling .GetSSHKeyPath
	I0914 16:45:10.800777   16725 main.go:141] libmachine: (addons-996992) Calling .GetSSHUsername
	I0914 16:45:10.800785   16725 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40381
	I0914 16:45:10.800907   16725 sshutil.go:53] new ssh client: &{IP:192.168.39.189 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/addons-996992/id_rsa Username:docker}
	I0914 16:45:10.801232   16725 main.go:141] libmachine: (addons-996992) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:8c:90", ip: ""} in network mk-addons-996992: {Iface:virbr1 ExpiryTime:2024-09-14 17:44:42 +0000 UTC Type:0 Mac:52:54:00:dd:8c:90 Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:addons-996992 Clientid:01:52:54:00:dd:8c:90}
	I0914 16:45:10.801253   16725 main.go:141] libmachine: (addons-996992) DBG | domain addons-996992 has defined IP address 192.168.39.189 and MAC address 52:54:00:dd:8c:90 in network mk-addons-996992
	I0914 16:45:10.801443   16725 main.go:141] libmachine: (addons-996992) Calling .GetSSHPort
	I0914 16:45:10.801712   16725 main.go:141] libmachine: (addons-996992) Calling .GetSSHKeyPath
	I0914 16:45:10.801851   16725 main.go:141] libmachine: (addons-996992) Calling .GetSSHUsername
	I0914 16:45:10.801916   16725 main.go:141] libmachine: (addons-996992) Calling .DriverName
	I0914 16:45:10.802030   16725 sshutil.go:53] new ssh client: &{IP:192.168.39.189 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/addons-996992/id_rsa Username:docker}
	I0914 16:45:10.802669   16725 main.go:141] libmachine: () Calling .GetVersion
	I0914 16:45:10.803212   16725 main.go:141] libmachine: Using API Version  1
	I0914 16:45:10.803239   16725 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 16:45:10.803521   16725 main.go:141] libmachine: () Calling .GetMachineName
	I0914 16:45:10.803699   16725 main.go:141] libmachine: (addons-996992) Calling .GetState
	I0914 16:45:10.803742   16725 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0914 16:45:10.804878   16725 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0914 16:45:10.804895   16725 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0914 16:45:10.804911   16725 main.go:141] libmachine: (addons-996992) Calling .GetSSHHostname
	I0914 16:45:10.805093   16725 main.go:141] libmachine: (addons-996992) Calling .DriverName
	I0914 16:45:10.806643   16725 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0914 16:45:10.807521   16725 main.go:141] libmachine: (addons-996992) DBG | domain addons-996992 has defined MAC address 52:54:00:dd:8c:90 in network mk-addons-996992
	I0914 16:45:10.807876   16725 main.go:141] libmachine: (addons-996992) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:8c:90", ip: ""} in network mk-addons-996992: {Iface:virbr1 ExpiryTime:2024-09-14 17:44:42 +0000 UTC Type:0 Mac:52:54:00:dd:8c:90 Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:addons-996992 Clientid:01:52:54:00:dd:8c:90}
	I0914 16:45:10.807899   16725 main.go:141] libmachine: (addons-996992) DBG | domain addons-996992 has defined IP address 192.168.39.189 and MAC address 52:54:00:dd:8c:90 in network mk-addons-996992
	I0914 16:45:10.808075   16725 main.go:141] libmachine: (addons-996992) Calling .GetSSHPort
	I0914 16:45:10.808222   16725 main.go:141] libmachine: (addons-996992) Calling .GetSSHKeyPath
	I0914 16:45:10.808318   16725 main.go:141] libmachine: (addons-996992) Calling .GetSSHUsername
	I0914 16:45:10.808404   16725 sshutil.go:53] new ssh client: &{IP:192.168.39.189 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/addons-996992/id_rsa Username:docker}
	I0914 16:45:10.808855   16725 out.go:177]   - Using image docker.io/busybox:stable
	I0914 16:45:10.809861   16725 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0914 16:45:10.809873   16725 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0914 16:45:10.809885   16725 main.go:141] libmachine: (addons-996992) Calling .GetSSHHostname
	I0914 16:45:10.812131   16725 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35713
	I0914 16:45:10.812590   16725 main.go:141] libmachine: () Calling .GetVersion
	I0914 16:45:10.812888   16725 main.go:141] libmachine: (addons-996992) DBG | domain addons-996992 has defined MAC address 52:54:00:dd:8c:90 in network mk-addons-996992
	I0914 16:45:10.813075   16725 main.go:141] libmachine: Using API Version  1
	I0914 16:45:10.813094   16725 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 16:45:10.813367   16725 main.go:141] libmachine: (addons-996992) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:8c:90", ip: ""} in network mk-addons-996992: {Iface:virbr1 ExpiryTime:2024-09-14 17:44:42 +0000 UTC Type:0 Mac:52:54:00:dd:8c:90 Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:addons-996992 Clientid:01:52:54:00:dd:8c:90}
	I0914 16:45:10.813384   16725 main.go:141] libmachine: (addons-996992) DBG | domain addons-996992 has defined IP address 192.168.39.189 and MAC address 52:54:00:dd:8c:90 in network mk-addons-996992
	I0914 16:45:10.813580   16725 main.go:141] libmachine: (addons-996992) Calling .GetSSHPort
	I0914 16:45:10.813714   16725 main.go:141] libmachine: () Calling .GetMachineName
	I0914 16:45:10.813818   16725 main.go:141] libmachine: (addons-996992) Calling .GetSSHKeyPath
	I0914 16:45:10.813904   16725 main.go:141] libmachine: (addons-996992) Calling .GetState
	I0914 16:45:10.813982   16725 main.go:141] libmachine: (addons-996992) Calling .GetSSHUsername
	I0914 16:45:10.814121   16725 sshutil.go:53] new ssh client: &{IP:192.168.39.189 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/addons-996992/id_rsa Username:docker}
	I0914 16:45:10.815554   16725 main.go:141] libmachine: (addons-996992) Calling .DriverName
	I0914 16:45:10.815750   16725 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0914 16:45:10.815759   16725 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0914 16:45:10.815769   16725 main.go:141] libmachine: (addons-996992) Calling .GetSSHHostname
	I0914 16:45:10.819041   16725 main.go:141] libmachine: (addons-996992) DBG | domain addons-996992 has defined MAC address 52:54:00:dd:8c:90 in network mk-addons-996992
	I0914 16:45:10.819420   16725 main.go:141] libmachine: (addons-996992) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:8c:90", ip: ""} in network mk-addons-996992: {Iface:virbr1 ExpiryTime:2024-09-14 17:44:42 +0000 UTC Type:0 Mac:52:54:00:dd:8c:90 Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:addons-996992 Clientid:01:52:54:00:dd:8c:90}
	I0914 16:45:10.819448   16725 main.go:141] libmachine: (addons-996992) DBG | domain addons-996992 has defined IP address 192.168.39.189 and MAC address 52:54:00:dd:8c:90 in network mk-addons-996992
	I0914 16:45:10.819588   16725 main.go:141] libmachine: (addons-996992) Calling .GetSSHPort
	I0914 16:45:10.819749   16725 main.go:141] libmachine: (addons-996992) Calling .GetSSHKeyPath
	I0914 16:45:10.819895   16725 main.go:141] libmachine: (addons-996992) Calling .GetSSHUsername
	I0914 16:45:10.820000   16725 sshutil.go:53] new ssh client: &{IP:192.168.39.189 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/addons-996992/id_rsa Username:docker}
	I0914 16:45:11.053496   16725 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0914 16:45:11.053527   16725 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0914 16:45:11.097975   16725 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0914 16:45:11.098000   16725 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0914 16:45:11.124289   16725 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0914 16:45:11.124318   16725 ssh_runner.go:362] scp helm-tiller/helm-tiller-rbac.yaml --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0914 16:45:11.154793   16725 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0914 16:45:11.154823   16725 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0914 16:45:11.167635   16725 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0914 16:45:11.167664   16725 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0914 16:45:11.184834   16725 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0914 16:45:11.184857   16725 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0914 16:45:11.195055   16725 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0914 16:45:11.210697   16725 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0914 16:45:11.248543   16725 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0914 16:45:11.248570   16725 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0914 16:45:11.259633   16725 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0914 16:45:11.260194   16725 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0914 16:45:11.260211   16725 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0914 16:45:11.270256   16725 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0914 16:45:11.270287   16725 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0914 16:45:11.323366   16725 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0914 16:45:11.328598   16725 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0914 16:45:11.337140   16725 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0914 16:45:11.337159   16725 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0914 16:45:11.338365   16725 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0914 16:45:11.338383   16725 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0914 16:45:11.341295   16725 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0914 16:45:11.348260   16725 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0914 16:45:11.367015   16725 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0914 16:45:11.367039   16725 ssh_runner.go:362] scp helm-tiller/helm-tiller-svc.yaml --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0914 16:45:11.367119   16725 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0914 16:45:11.367130   16725 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0914 16:45:11.373728   16725 node_ready.go:35] waiting up to 6m0s for node "addons-996992" to be "Ready" ...
	I0914 16:45:11.378694   16725 node_ready.go:49] node "addons-996992" has status "Ready":"True"
	I0914 16:45:11.378721   16725 node_ready.go:38] duration metric: took 4.969428ms for node "addons-996992" to be "Ready" ...
	I0914 16:45:11.378733   16725 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 16:45:11.384893   16725 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-2m4xb" in "kube-system" namespace to be "Ready" ...
	I0914 16:45:11.413618   16725 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0914 16:45:11.413646   16725 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0914 16:45:11.437356   16725 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0914 16:45:11.437390   16725 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0914 16:45:11.454900   16725 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0914 16:45:11.454926   16725 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0914 16:45:11.476373   16725 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0914 16:45:11.486849   16725 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0914 16:45:11.516082   16725 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0914 16:45:11.516112   16725 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0914 16:45:11.529228   16725 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0914 16:45:11.529258   16725 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0914 16:45:11.532615   16725 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0914 16:45:11.532647   16725 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0914 16:45:11.572481   16725 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0914 16:45:11.572521   16725 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0914 16:45:11.615905   16725 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0914 16:45:11.615938   16725 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0914 16:45:11.665213   16725 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0914 16:45:11.685127   16725 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0914 16:45:11.685162   16725 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0914 16:45:11.707538   16725 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0914 16:45:11.707569   16725 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0914 16:45:11.735433   16725 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0914 16:45:11.795975   16725 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0914 16:45:11.796003   16725 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0914 16:45:11.860384   16725 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0914 16:45:11.860415   16725 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0914 16:45:11.885579   16725 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0914 16:45:11.885602   16725 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0914 16:45:11.939398   16725 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0914 16:45:11.939428   16725 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0914 16:45:12.071279   16725 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0914 16:45:12.076177   16725 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0914 16:45:12.076212   16725 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0914 16:45:12.193047   16725 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0914 16:45:12.193067   16725 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0914 16:45:12.350531   16725 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0914 16:45:12.350553   16725 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0914 16:45:12.571518   16725 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0914 16:45:12.589231   16725 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0914 16:45:12.589261   16725 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0914 16:45:12.822425   16725 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0914 16:45:12.822449   16725 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0914 16:45:12.981922   16725 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0914 16:45:12.981946   16725 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0914 16:45:13.289971   16725 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0914 16:45:13.289994   16725 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0914 16:45:13.432574   16725 pod_ready.go:103] pod "coredns-7c65d6cfc9-2m4xb" in "kube-system" namespace has status "Ready":"False"
	I0914 16:45:13.662491   16725 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0914 16:45:13.691925   16725 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.507036024s)
	I0914 16:45:13.691964   16725 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0914 16:45:13.983899   16725 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (2.788807127s)
	I0914 16:45:13.983965   16725 main.go:141] libmachine: Making call to close driver server
	I0914 16:45:13.983978   16725 main.go:141] libmachine: (addons-996992) Calling .Close
	I0914 16:45:13.984306   16725 main.go:141] libmachine: Successfully made call to close driver server
	I0914 16:45:13.984324   16725 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 16:45:13.984333   16725 main.go:141] libmachine: Making call to close driver server
	I0914 16:45:13.984341   16725 main.go:141] libmachine: (addons-996992) Calling .Close
	I0914 16:45:13.984593   16725 main.go:141] libmachine: Successfully made call to close driver server
	I0914 16:45:13.984610   16725 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 16:45:14.263792   16725 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-996992" context rescaled to 1 replicas
	I0914 16:45:15.107060   16725 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (3.896323905s)
	I0914 16:45:15.107126   16725 main.go:141] libmachine: Making call to close driver server
	I0914 16:45:15.107142   16725 main.go:141] libmachine: (addons-996992) Calling .Close
	I0914 16:45:15.107451   16725 main.go:141] libmachine: Successfully made call to close driver server
	I0914 16:45:15.107471   16725 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 16:45:15.107471   16725 main.go:141] libmachine: (addons-996992) DBG | Closing plugin on server side
	I0914 16:45:15.107483   16725 main.go:141] libmachine: Making call to close driver server
	I0914 16:45:15.107491   16725 main.go:141] libmachine: (addons-996992) Calling .Close
	I0914 16:45:15.107708   16725 main.go:141] libmachine: Successfully made call to close driver server
	I0914 16:45:15.107721   16725 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 16:45:15.448055   16725 pod_ready.go:103] pod "coredns-7c65d6cfc9-2m4xb" in "kube-system" namespace has status "Ready":"False"
	I0914 16:45:15.802644   16725 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.542973946s)
	I0914 16:45:15.802658   16725 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (4.479250603s)
	I0914 16:45:15.802693   16725 main.go:141] libmachine: Making call to close driver server
	I0914 16:45:15.802710   16725 main.go:141] libmachine: (addons-996992) Calling .Close
	I0914 16:45:15.802698   16725 main.go:141] libmachine: Making call to close driver server
	I0914 16:45:15.802765   16725 main.go:141] libmachine: (addons-996992) Calling .Close
	I0914 16:45:15.803023   16725 main.go:141] libmachine: (addons-996992) DBG | Closing plugin on server side
	I0914 16:45:15.803044   16725 main.go:141] libmachine: Successfully made call to close driver server
	I0914 16:45:15.803090   16725 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 16:45:15.803101   16725 main.go:141] libmachine: Making call to close driver server
	I0914 16:45:15.803112   16725 main.go:141] libmachine: (addons-996992) Calling .Close
	I0914 16:45:15.803052   16725 main.go:141] libmachine: (addons-996992) DBG | Closing plugin on server side
	I0914 16:45:15.803049   16725 main.go:141] libmachine: Successfully made call to close driver server
	I0914 16:45:15.803183   16725 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 16:45:15.803193   16725 main.go:141] libmachine: Making call to close driver server
	I0914 16:45:15.803200   16725 main.go:141] libmachine: (addons-996992) Calling .Close
	I0914 16:45:15.803427   16725 main.go:141] libmachine: (addons-996992) DBG | Closing plugin on server side
	I0914 16:45:15.803495   16725 main.go:141] libmachine: (addons-996992) DBG | Closing plugin on server side
	I0914 16:45:15.803536   16725 main.go:141] libmachine: Successfully made call to close driver server
	I0914 16:45:15.803549   16725 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 16:45:15.804919   16725 main.go:141] libmachine: Successfully made call to close driver server
	I0914 16:45:15.804939   16725 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 16:45:17.807492   16725 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0914 16:45:17.807535   16725 main.go:141] libmachine: (addons-996992) Calling .GetSSHHostname
	I0914 16:45:17.810372   16725 main.go:141] libmachine: (addons-996992) DBG | domain addons-996992 has defined MAC address 52:54:00:dd:8c:90 in network mk-addons-996992
	I0914 16:45:17.810780   16725 main.go:141] libmachine: (addons-996992) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:8c:90", ip: ""} in network mk-addons-996992: {Iface:virbr1 ExpiryTime:2024-09-14 17:44:42 +0000 UTC Type:0 Mac:52:54:00:dd:8c:90 Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:addons-996992 Clientid:01:52:54:00:dd:8c:90}
	I0914 16:45:17.810816   16725 main.go:141] libmachine: (addons-996992) DBG | domain addons-996992 has defined IP address 192.168.39.189 and MAC address 52:54:00:dd:8c:90 in network mk-addons-996992
	I0914 16:45:17.810957   16725 main.go:141] libmachine: (addons-996992) Calling .GetSSHPort
	I0914 16:45:17.811136   16725 main.go:141] libmachine: (addons-996992) Calling .GetSSHKeyPath
	I0914 16:45:17.811330   16725 main.go:141] libmachine: (addons-996992) Calling .GetSSHUsername
	I0914 16:45:17.811482   16725 sshutil.go:53] new ssh client: &{IP:192.168.39.189 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/addons-996992/id_rsa Username:docker}
	I0914 16:45:17.922407   16725 pod_ready.go:103] pod "coredns-7c65d6cfc9-2m4xb" in "kube-system" namespace has status "Ready":"False"
	I0914 16:45:18.212498   16725 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0914 16:45:18.361996   16725 addons.go:234] Setting addon gcp-auth=true in "addons-996992"
	I0914 16:45:18.362064   16725 host.go:66] Checking if "addons-996992" exists ...
	I0914 16:45:18.362615   16725 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 16:45:18.362669   16725 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 16:45:18.378887   16725 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37503
	I0914 16:45:18.379466   16725 main.go:141] libmachine: () Calling .GetVersion
	I0914 16:45:18.380023   16725 main.go:141] libmachine: Using API Version  1
	I0914 16:45:18.380052   16725 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 16:45:18.380398   16725 main.go:141] libmachine: () Calling .GetMachineName
	I0914 16:45:18.380840   16725 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 16:45:18.380878   16725 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 16:45:18.397216   16725 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42849
	I0914 16:45:18.397733   16725 main.go:141] libmachine: () Calling .GetVersion
	I0914 16:45:18.398249   16725 main.go:141] libmachine: Using API Version  1
	I0914 16:45:18.398279   16725 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 16:45:18.398627   16725 main.go:141] libmachine: () Calling .GetMachineName
	I0914 16:45:18.398815   16725 main.go:141] libmachine: (addons-996992) Calling .GetState
	I0914 16:45:18.400541   16725 main.go:141] libmachine: (addons-996992) Calling .DriverName
	I0914 16:45:18.400765   16725 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0914 16:45:18.400791   16725 main.go:141] libmachine: (addons-996992) Calling .GetSSHHostname
	I0914 16:45:18.403800   16725 main.go:141] libmachine: (addons-996992) DBG | domain addons-996992 has defined MAC address 52:54:00:dd:8c:90 in network mk-addons-996992
	I0914 16:45:18.404197   16725 main.go:141] libmachine: (addons-996992) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:8c:90", ip: ""} in network mk-addons-996992: {Iface:virbr1 ExpiryTime:2024-09-14 17:44:42 +0000 UTC Type:0 Mac:52:54:00:dd:8c:90 Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:addons-996992 Clientid:01:52:54:00:dd:8c:90}
	I0914 16:45:18.404228   16725 main.go:141] libmachine: (addons-996992) DBG | domain addons-996992 has defined IP address 192.168.39.189 and MAC address 52:54:00:dd:8c:90 in network mk-addons-996992
	I0914 16:45:18.404369   16725 main.go:141] libmachine: (addons-996992) Calling .GetSSHPort
	I0914 16:45:18.404558   16725 main.go:141] libmachine: (addons-996992) Calling .GetSSHKeyPath
	I0914 16:45:18.404701   16725 main.go:141] libmachine: (addons-996992) Calling .GetSSHUsername
	I0914 16:45:18.404877   16725 sshutil.go:53] new ssh client: &{IP:192.168.39.189 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/addons-996992/id_rsa Username:docker}
	I0914 16:45:19.293405   16725 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (7.964772062s)
	I0914 16:45:19.293466   16725 main.go:141] libmachine: Making call to close driver server
	I0914 16:45:19.293469   16725 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (7.952144922s)
	I0914 16:45:19.293515   16725 main.go:141] libmachine: Making call to close driver server
	I0914 16:45:19.293537   16725 main.go:141] libmachine: (addons-996992) Calling .Close
	I0914 16:45:19.293479   16725 main.go:141] libmachine: (addons-996992) Calling .Close
	I0914 16:45:19.293535   16725 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (7.945253778s)
	I0914 16:45:19.293646   16725 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (7.817244353s)
	I0914 16:45:19.293653   16725 main.go:141] libmachine: Making call to close driver server
	I0914 16:45:19.293667   16725 main.go:141] libmachine: (addons-996992) Calling .Close
	I0914 16:45:19.293671   16725 main.go:141] libmachine: Making call to close driver server
	I0914 16:45:19.293682   16725 main.go:141] libmachine: (addons-996992) Calling .Close
	I0914 16:45:19.293679   16725 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (7.806797648s)
	I0914 16:45:19.293729   16725 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (7.628484398s)
	I0914 16:45:19.293741   16725 main.go:141] libmachine: Making call to close driver server
	I0914 16:45:19.293749   16725 main.go:141] libmachine: Making call to close driver server
	I0914 16:45:19.293760   16725 main.go:141] libmachine: (addons-996992) Calling .Close
	I0914 16:45:19.293762   16725 main.go:141] libmachine: (addons-996992) Calling .Close
	I0914 16:45:19.293784   16725 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (7.558321173s)
	I0914 16:45:19.293801   16725 main.go:141] libmachine: Making call to close driver server
	I0914 16:45:19.293811   16725 main.go:141] libmachine: (addons-996992) Calling .Close
	I0914 16:45:19.293887   16725 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (7.222576723s)
	W0914 16:45:19.293930   16725 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0914 16:45:19.293976   16725 retry.go:31] will retry after 361.189184ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0914 16:45:19.294023   16725 main.go:141] libmachine: (addons-996992) DBG | Closing plugin on server side
	I0914 16:45:19.294024   16725 main.go:141] libmachine: Successfully made call to close driver server
	I0914 16:45:19.294035   16725 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 16:45:19.294042   16725 main.go:141] libmachine: Making call to close driver server
	I0914 16:45:19.294038   16725 main.go:141] libmachine: (addons-996992) DBG | Closing plugin on server side
	I0914 16:45:19.294048   16725 main.go:141] libmachine: (addons-996992) Calling .Close
	I0914 16:45:19.294054   16725 main.go:141] libmachine: Successfully made call to close driver server
	I0914 16:45:19.294066   16725 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 16:45:19.294075   16725 main.go:141] libmachine: Making call to close driver server
	I0914 16:45:19.294081   16725 main.go:141] libmachine: (addons-996992) Calling .Close
	I0914 16:45:19.294098   16725 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (6.722532317s)
	I0914 16:45:19.294126   16725 main.go:141] libmachine: Making call to close driver server
	I0914 16:45:19.294139   16725 main.go:141] libmachine: (addons-996992) Calling .Close
	I0914 16:45:19.294145   16725 main.go:141] libmachine: (addons-996992) DBG | Closing plugin on server side
	I0914 16:45:19.294181   16725 main.go:141] libmachine: Successfully made call to close driver server
	I0914 16:45:19.294190   16725 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 16:45:19.294128   16725 main.go:141] libmachine: Successfully made call to close driver server
	I0914 16:45:19.294211   16725 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 16:45:19.294219   16725 main.go:141] libmachine: Making call to close driver server
	I0914 16:45:19.294225   16725 main.go:141] libmachine: (addons-996992) Calling .Close
	I0914 16:45:19.294243   16725 main.go:141] libmachine: (addons-996992) DBG | Closing plugin on server side
	I0914 16:45:19.294268   16725 main.go:141] libmachine: (addons-996992) DBG | Closing plugin on server side
	I0914 16:45:19.294198   16725 main.go:141] libmachine: Making call to close driver server
	I0914 16:45:19.294284   16725 main.go:141] libmachine: (addons-996992) Calling .Close
	I0914 16:45:19.294288   16725 main.go:141] libmachine: Successfully made call to close driver server
	I0914 16:45:19.294296   16725 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 16:45:19.294304   16725 main.go:141] libmachine: Making call to close driver server
	I0914 16:45:19.294311   16725 main.go:141] libmachine: (addons-996992) Calling .Close
	I0914 16:45:19.294338   16725 main.go:141] libmachine: Successfully made call to close driver server
	I0914 16:45:19.294352   16725 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 16:45:19.294363   16725 addons.go:475] Verifying addon metrics-server=true in "addons-996992"
	I0914 16:45:19.294368   16725 main.go:141] libmachine: (addons-996992) DBG | Closing plugin on server side
	I0914 16:45:19.294386   16725 main.go:141] libmachine: Successfully made call to close driver server
	I0914 16:45:19.294392   16725 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 16:45:19.294399   16725 main.go:141] libmachine: Making call to close driver server
	I0914 16:45:19.294405   16725 main.go:141] libmachine: (addons-996992) Calling .Close
	I0914 16:45:19.294869   16725 main.go:141] libmachine: (addons-996992) DBG | Closing plugin on server side
	I0914 16:45:19.294897   16725 main.go:141] libmachine: Successfully made call to close driver server
	I0914 16:45:19.294903   16725 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 16:45:19.294910   16725 main.go:141] libmachine: Making call to close driver server
	I0914 16:45:19.294916   16725 main.go:141] libmachine: (addons-996992) Calling .Close
	I0914 16:45:19.294965   16725 main.go:141] libmachine: (addons-996992) DBG | Closing plugin on server side
	I0914 16:45:19.294985   16725 main.go:141] libmachine: Successfully made call to close driver server
	I0914 16:45:19.294993   16725 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 16:45:19.295199   16725 main.go:141] libmachine: (addons-996992) DBG | Closing plugin on server side
	I0914 16:45:19.295218   16725 main.go:141] libmachine: (addons-996992) DBG | Closing plugin on server side
	I0914 16:45:19.295240   16725 main.go:141] libmachine: Successfully made call to close driver server
	I0914 16:45:19.295246   16725 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 16:45:19.297056   16725 main.go:141] libmachine: (addons-996992) DBG | Closing plugin on server side
	I0914 16:45:19.297087   16725 main.go:141] libmachine: Successfully made call to close driver server
	I0914 16:45:19.297093   16725 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 16:45:19.297100   16725 main.go:141] libmachine: Making call to close driver server
	I0914 16:45:19.297106   16725 main.go:141] libmachine: (addons-996992) Calling .Close
	I0914 16:45:19.297194   16725 main.go:141] libmachine: (addons-996992) DBG | Closing plugin on server side
	I0914 16:45:19.297214   16725 main.go:141] libmachine: Successfully made call to close driver server
	I0914 16:45:19.297221   16725 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 16:45:19.297458   16725 main.go:141] libmachine: Successfully made call to close driver server
	I0914 16:45:19.297469   16725 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 16:45:19.297479   16725 addons.go:475] Verifying addon ingress=true in "addons-996992"
	I0914 16:45:19.297608   16725 main.go:141] libmachine: (addons-996992) DBG | Closing plugin on server side
	I0914 16:45:19.297828   16725 main.go:141] libmachine: (addons-996992) DBG | Closing plugin on server side
	I0914 16:45:19.297852   16725 main.go:141] libmachine: Successfully made call to close driver server
	I0914 16:45:19.297858   16725 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 16:45:19.297867   16725 addons.go:475] Verifying addon registry=true in "addons-996992"
	I0914 16:45:19.297564   16725 main.go:141] libmachine: Successfully made call to close driver server
	I0914 16:45:19.297990   16725 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 16:45:19.297592   16725 main.go:141] libmachine: Successfully made call to close driver server
	I0914 16:45:19.298014   16725 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 16:45:19.299712   16725 out.go:177] * Verifying ingress addon...
	I0914 16:45:19.300586   16725 out.go:177] * Verifying registry addon...
	I0914 16:45:19.300595   16725 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-996992 service yakd-dashboard -n yakd-dashboard
	
	I0914 16:45:19.302049   16725 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0914 16:45:19.302931   16725 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0914 16:45:19.344991   16725 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0914 16:45:19.345020   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:45:19.345383   16725 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0914 16:45:19.345406   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:19.372208   16725 main.go:141] libmachine: Making call to close driver server
	I0914 16:45:19.372232   16725 main.go:141] libmachine: (addons-996992) Calling .Close
	I0914 16:45:19.372506   16725 main.go:141] libmachine: Successfully made call to close driver server
	I0914 16:45:19.372522   16725 main.go:141] libmachine: Making call to close connection to plugin binary
	W0914 16:45:19.372615   16725 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
	I0914 16:45:19.383702   16725 main.go:141] libmachine: Making call to close driver server
	I0914 16:45:19.383730   16725 main.go:141] libmachine: (addons-996992) Calling .Close
	I0914 16:45:19.384014   16725 main.go:141] libmachine: Successfully made call to close driver server
	I0914 16:45:19.384038   16725 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 16:45:19.655329   16725 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0914 16:45:20.045338   16725 pod_ready.go:103] pod "coredns-7c65d6cfc9-2m4xb" in "kube-system" namespace has status "Ready":"False"
	I0914 16:45:20.050206   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:20.055704   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:45:20.310964   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:45:20.311082   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:20.682921   16725 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (7.020377333s)
	I0914 16:45:20.682968   16725 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.282185443s)
	I0914 16:45:20.682969   16725 main.go:141] libmachine: Making call to close driver server
	I0914 16:45:20.682986   16725 main.go:141] libmachine: (addons-996992) Calling .Close
	I0914 16:45:20.683282   16725 main.go:141] libmachine: Successfully made call to close driver server
	I0914 16:45:20.683301   16725 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 16:45:20.683311   16725 main.go:141] libmachine: Making call to close driver server
	I0914 16:45:20.683320   16725 main.go:141] libmachine: (addons-996992) Calling .Close
	I0914 16:45:20.683581   16725 main.go:141] libmachine: (addons-996992) DBG | Closing plugin on server side
	I0914 16:45:20.683592   16725 main.go:141] libmachine: Successfully made call to close driver server
	I0914 16:45:20.683609   16725 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 16:45:20.683625   16725 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-996992"
	I0914 16:45:20.684836   16725 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0914 16:45:20.685652   16725 out.go:177] * Verifying csi-hostpath-driver addon...
	I0914 16:45:20.687381   16725 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0914 16:45:20.688045   16725 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0914 16:45:20.688683   16725 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0914 16:45:20.688704   16725 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0914 16:45:20.699808   16725 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0914 16:45:20.699830   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:20.760828   16725 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0914 16:45:20.760854   16725 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0914 16:45:20.806360   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:20.808190   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:45:20.876308   16725 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0914 16:45:20.876331   16725 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0914 16:45:20.962823   16725 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0914 16:45:21.194364   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:21.308241   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:45:21.308330   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:21.459476   16725 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.804100826s)
	I0914 16:45:21.459541   16725 main.go:141] libmachine: Making call to close driver server
	I0914 16:45:21.459563   16725 main.go:141] libmachine: (addons-996992) Calling .Close
	I0914 16:45:21.459818   16725 main.go:141] libmachine: Successfully made call to close driver server
	I0914 16:45:21.459856   16725 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 16:45:21.459870   16725 main.go:141] libmachine: Making call to close driver server
	I0914 16:45:21.459878   16725 main.go:141] libmachine: (addons-996992) Calling .Close
	I0914 16:45:21.460217   16725 main.go:141] libmachine: (addons-996992) DBG | Closing plugin on server side
	I0914 16:45:21.460243   16725 main.go:141] libmachine: Successfully made call to close driver server
	I0914 16:45:21.460259   16725 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 16:45:21.692747   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:21.824936   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:45:21.825463   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:22.037036   16725 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.074172157s)
	I0914 16:45:22.037089   16725 main.go:141] libmachine: Making call to close driver server
	I0914 16:45:22.037108   16725 main.go:141] libmachine: (addons-996992) Calling .Close
	I0914 16:45:22.037385   16725 main.go:141] libmachine: (addons-996992) DBG | Closing plugin on server side
	I0914 16:45:22.037437   16725 main.go:141] libmachine: Successfully made call to close driver server
	I0914 16:45:22.037456   16725 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 16:45:22.037470   16725 main.go:141] libmachine: Making call to close driver server
	I0914 16:45:22.037478   16725 main.go:141] libmachine: (addons-996992) Calling .Close
	I0914 16:45:22.037812   16725 main.go:141] libmachine: Successfully made call to close driver server
	I0914 16:45:22.037826   16725 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 16:45:22.039855   16725 addons.go:475] Verifying addon gcp-auth=true in "addons-996992"
	I0914 16:45:22.041190   16725 out.go:177] * Verifying gcp-auth addon...
	I0914 16:45:22.043315   16725 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0914 16:45:22.062131   16725 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0914 16:45:22.062174   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:45:22.206114   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:22.305919   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:22.307902   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:45:22.397413   16725 pod_ready.go:103] pod "coredns-7c65d6cfc9-2m4xb" in "kube-system" namespace has status "Ready":"False"
	I0914 16:45:22.548345   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:45:22.692725   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:22.829322   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:22.829369   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:45:23.047052   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:45:23.193924   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:23.306209   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:45:23.307371   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:23.547918   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:45:23.693915   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:23.806505   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:23.808215   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:45:24.047225   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:45:24.195089   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:24.311883   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:45:24.312000   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:24.547845   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:45:24.693213   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:24.807438   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:45:24.807893   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:24.892150   16725 pod_ready.go:103] pod "coredns-7c65d6cfc9-2m4xb" in "kube-system" namespace has status "Ready":"False"
	I0914 16:45:25.047378   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:45:25.193183   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:25.308297   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:45:25.308656   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:25.547425   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:45:25.695489   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:25.807000   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:25.807151   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:45:26.047297   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:45:26.192551   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:26.306770   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:45:26.307157   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:26.548995   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:45:26.692772   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:26.807385   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:45:26.808205   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:27.052696   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:45:27.195215   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:27.307090   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:27.307252   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:45:27.392113   16725 pod_ready.go:98] pod "coredns-7c65d6cfc9-2m4xb" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-14 16:45:27 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-14 16:45:10 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-14 16:45:10 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-14 16:45:10 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-14 16:45:10 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.39.189 HostIPs:[{IP:192.168.39.189}] PodIP:10.244.0.2 PodIPs:[{IP:10.244.0.2}] StartTime:2024-09-14 16:45:10 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-09-14 16:45:14 +0000 UTC,FinishedAt:2024-09-14 16:45:24 +0000 UTC,ContainerID:cri-o://d8ccb9ffdaa78f4f6b1f3a7ed75959532f0d411be1e2cde7aafffc2ed35e4c0c,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.3 ImageID:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6 ContainerID:cri-o://d8ccb9ffdaa78f4f6b1f3a7ed75959532f0d411be1e2cde7aafffc2ed35e4c0c Started:0xc0029481a0 AllocatedResources:map[] Resources:nil VolumeMounts:[{Name:config-volume MountPath:/etc/coredns ReadOnly:true RecursiveReadOnly:0xc001d01430} {Name:kube-api-access-gv6ld MountPath:/var/run/secrets/kubernetes.io/serviceaccount ReadOnly:true RecursiveReadOnly:0xc001d01440}] User:nil AllocatedResourcesStatus:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0914 16:45:27.392141   16725 pod_ready.go:82] duration metric: took 16.007223581s for pod "coredns-7c65d6cfc9-2m4xb" in "kube-system" namespace to be "Ready" ...
	E0914 16:45:27.392157   16725 pod_ready.go:67] WaitExtra: waitPodCondition: pod "coredns-7c65d6cfc9-2m4xb" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-14 16:45:27 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-14 16:45:10 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-14 16:45:10 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-14 16:45:10 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-14 16:45:10 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.39.189 HostIPs:[{IP:192.168.39.189}] PodIP:10.244.0.2 PodIPs:[{IP:10.244.0.2}] StartTime:2024-09-14 16:45:10 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-09-14 16:45:14 +0000 UTC,FinishedAt:2024-09-14 16:45:24 +0000 UTC,ContainerID:cri-o://d8ccb9ffdaa78f4f6b1f3a7ed75959532f0d411be1e2cde7aafffc2ed35e4c0c,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.3 ImageID:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6 ContainerID:cri-o://d8ccb9ffdaa78f4f6b1f3a7ed75959532f0d411be1e2cde7aafffc2ed35e4c0c Started:0xc0029481a0 AllocatedResources:map[] Resources:nil VolumeMounts:[{Name:config-volume MountPath:/etc/coredns ReadOnly:true RecursiveReadOnly:0xc001d01430} {Name:kube-api-access-gv6ld MountPath:/var/run/secrets/kubernetes.io/serviceaccount ReadOnly:true RecursiveReadOnly:0xc001d01440}] User:nil AllocatedResourcesStatus:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0914 16:45:27.392172   16725 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-9p6z9" in "kube-system" namespace to be "Ready" ...
	I0914 16:45:27.547236   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:45:27.692797   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:27.805927   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:27.808529   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:45:28.046967   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:45:28.193365   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:28.306453   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:28.306996   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:45:28.547515   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:45:28.692136   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:28.805564   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:28.808148   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:45:29.047966   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:45:29.192746   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:29.306293   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:45:29.307762   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:29.397654   16725 pod_ready.go:103] pod "coredns-7c65d6cfc9-9p6z9" in "kube-system" namespace has status "Ready":"False"
	I0914 16:45:29.546652   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:45:29.692992   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:29.806654   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:29.807372   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:45:30.048650   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:45:30.200286   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:30.307076   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:30.307351   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:45:30.547222   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:45:30.692129   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:30.806326   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:30.806696   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:45:31.047541   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:45:31.193463   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:31.306316   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:45:31.306957   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:31.400132   16725 pod_ready.go:103] pod "coredns-7c65d6cfc9-9p6z9" in "kube-system" namespace has status "Ready":"False"
	I0914 16:45:31.547554   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:45:31.691976   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:31.806039   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:31.807935   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:45:32.046311   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:45:32.193223   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:32.305895   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:32.306116   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:45:32.547547   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:45:32.693274   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:32.806864   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:45:32.807025   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:33.046675   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:45:33.192788   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:33.307118   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:33.307576   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:45:33.547264   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:45:33.691956   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:33.805950   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:33.807272   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:45:33.898447   16725 pod_ready.go:103] pod "coredns-7c65d6cfc9-9p6z9" in "kube-system" namespace has status "Ready":"False"
	I0914 16:45:34.046538   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:45:34.193111   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:34.306594   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:45:34.306780   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:34.547534   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:45:34.693573   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:34.806532   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:34.807796   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:45:35.049173   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:45:35.193341   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:35.306957   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:45:35.307826   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:35.547124   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:45:35.693884   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:35.813240   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:45:35.813472   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:35.898771   16725 pod_ready.go:103] pod "coredns-7c65d6cfc9-9p6z9" in "kube-system" namespace has status "Ready":"False"
	I0914 16:45:36.046736   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:45:36.192647   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:36.307028   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:45:36.307153   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:36.550055   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:45:36.692268   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:36.808196   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:45:36.808552   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:37.047345   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:45:37.192191   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:37.306427   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:45:37.306615   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:37.546905   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:45:37.693413   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:37.806415   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:45:37.806625   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:37.906344   16725 pod_ready.go:103] pod "coredns-7c65d6cfc9-9p6z9" in "kube-system" namespace has status "Ready":"False"
	I0914 16:45:38.047348   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:45:38.192226   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:38.307259   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:38.308416   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:45:38.549806   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:45:38.693516   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:38.806779   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:38.807117   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:45:39.047166   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:45:39.193398   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:39.305796   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:39.306965   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:45:39.546569   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:45:39.692192   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:39.807726   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:39.809337   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:45:40.047029   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:45:40.198177   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:40.306487   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:45:40.306759   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:40.398546   16725 pod_ready.go:103] pod "coredns-7c65d6cfc9-9p6z9" in "kube-system" namespace has status "Ready":"False"
	I0914 16:45:40.546426   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:45:40.692436   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:40.807118   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:40.808125   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:45:41.048639   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:45:41.193023   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:41.306385   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:45:41.307022   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:41.546832   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:45:41.692299   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:41.806619   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:45:41.807745   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:42.051127   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:45:42.193235   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:42.306207   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:42.307023   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:45:42.547148   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:45:42.692114   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:42.807237   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:42.807551   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:45:42.898978   16725 pod_ready.go:103] pod "coredns-7c65d6cfc9-9p6z9" in "kube-system" namespace has status "Ready":"False"
	I0914 16:45:43.047443   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:45:43.192717   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:43.306429   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:43.307536   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:45:43.547361   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:45:43.692472   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:43.806328   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:43.806544   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:45:44.047256   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:45:44.193079   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:44.307376   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:45:44.307539   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:44.546600   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:45:44.947832   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:45:44.948674   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:44.949499   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:44.954329   16725 pod_ready.go:103] pod "coredns-7c65d6cfc9-9p6z9" in "kube-system" namespace has status "Ready":"False"
	I0914 16:45:45.047207   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:45:45.192019   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:45.307059   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:45.307388   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:45:45.546442   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:45:45.693013   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:45.807362   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:45:45.808026   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:46.049098   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:45:46.193102   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:46.307108   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:45:46.307421   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:46.548460   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:45:46.692457   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:46.807661   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:45:46.807813   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:47.048241   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:45:47.192214   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:47.306248   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:47.306671   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:45:47.398101   16725 pod_ready.go:103] pod "coredns-7c65d6cfc9-9p6z9" in "kube-system" namespace has status "Ready":"False"
	I0914 16:45:47.547639   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:45:47.693105   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:47.806345   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:45:47.806838   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:47.898498   16725 pod_ready.go:93] pod "coredns-7c65d6cfc9-9p6z9" in "kube-system" namespace has status "Ready":"True"
	I0914 16:45:47.898523   16725 pod_ready.go:82] duration metric: took 20.506341334s for pod "coredns-7c65d6cfc9-9p6z9" in "kube-system" namespace to be "Ready" ...
	I0914 16:45:47.898537   16725 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-996992" in "kube-system" namespace to be "Ready" ...
	I0914 16:45:47.903604   16725 pod_ready.go:93] pod "etcd-addons-996992" in "kube-system" namespace has status "Ready":"True"
	I0914 16:45:47.903629   16725 pod_ready.go:82] duration metric: took 5.083745ms for pod "etcd-addons-996992" in "kube-system" namespace to be "Ready" ...
	I0914 16:45:47.903640   16725 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-996992" in "kube-system" namespace to be "Ready" ...
	I0914 16:45:47.908397   16725 pod_ready.go:93] pod "kube-apiserver-addons-996992" in "kube-system" namespace has status "Ready":"True"
	I0914 16:45:47.908426   16725 pod_ready.go:82] duration metric: took 4.777526ms for pod "kube-apiserver-addons-996992" in "kube-system" namespace to be "Ready" ...
	I0914 16:45:47.908439   16725 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-996992" in "kube-system" namespace to be "Ready" ...
	I0914 16:45:47.918027   16725 pod_ready.go:93] pod "kube-controller-manager-addons-996992" in "kube-system" namespace has status "Ready":"True"
	I0914 16:45:47.918048   16725 pod_ready.go:82] duration metric: took 9.601319ms for pod "kube-controller-manager-addons-996992" in "kube-system" namespace to be "Ready" ...
	I0914 16:45:47.918056   16725 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-ll2cd" in "kube-system" namespace to be "Ready" ...
	I0914 16:45:47.923629   16725 pod_ready.go:93] pod "kube-proxy-ll2cd" in "kube-system" namespace has status "Ready":"True"
	I0914 16:45:47.923659   16725 pod_ready.go:82] duration metric: took 5.594635ms for pod "kube-proxy-ll2cd" in "kube-system" namespace to be "Ready" ...
	I0914 16:45:47.923671   16725 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-996992" in "kube-system" namespace to be "Ready" ...
	I0914 16:45:48.047579   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:45:48.193569   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:48.296378   16725 pod_ready.go:93] pod "kube-scheduler-addons-996992" in "kube-system" namespace has status "Ready":"True"
	I0914 16:45:48.296405   16725 pod_ready.go:82] duration metric: took 372.727475ms for pod "kube-scheduler-addons-996992" in "kube-system" namespace to be "Ready" ...
	I0914 16:45:48.296414   16725 pod_ready.go:39] duration metric: took 36.917662966s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 16:45:48.296429   16725 api_server.go:52] waiting for apiserver process to appear ...
	I0914 16:45:48.296474   16725 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 16:45:48.307319   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:45:48.308769   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:48.333952   16725 api_server.go:72] duration metric: took 37.711200096s to wait for apiserver process to appear ...
	I0914 16:45:48.333977   16725 api_server.go:88] waiting for apiserver healthz status ...
	I0914 16:45:48.333995   16725 api_server.go:253] Checking apiserver healthz at https://192.168.39.189:8443/healthz ...
	I0914 16:45:48.338947   16725 api_server.go:279] https://192.168.39.189:8443/healthz returned 200:
	ok
	I0914 16:45:48.340137   16725 api_server.go:141] control plane version: v1.31.1
	I0914 16:45:48.340167   16725 api_server.go:131] duration metric: took 6.183106ms to wait for apiserver health ...
	I0914 16:45:48.340177   16725 system_pods.go:43] waiting for kube-system pods to appear ...
	I0914 16:45:48.504689   16725 system_pods.go:59] 18 kube-system pods found
	I0914 16:45:48.504742   16725 system_pods.go:61] "coredns-7c65d6cfc9-9p6z9" [8b60a487-876e-49a1-9a02-ff29269e6cd9] Running
	I0914 16:45:48.504756   16725 system_pods.go:61] "csi-hostpath-attacher-0" [fc163c87-b3c1-44fb-b23a-daf71f2476fd] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0914 16:45:48.504781   16725 system_pods.go:61] "csi-hostpath-resizer-0" [cb3dc269-4b68-41cc-8dac-f4e4cac02923] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0914 16:45:48.504800   16725 system_pods.go:61] "csi-hostpathplugin-j8fzx" [4c687703-e40a-48df-9dbf-ef6c5b71f2c9] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0914 16:45:48.504806   16725 system_pods.go:61] "etcd-addons-996992" [51dddf60-7bb8-4d07-b593-4841d49d04c6] Running
	I0914 16:45:48.504812   16725 system_pods.go:61] "kube-apiserver-addons-996992" [df7a9746-e613-42b3-99ae-376c32e5c9c5] Running
	I0914 16:45:48.504818   16725 system_pods.go:61] "kube-controller-manager-addons-996992" [d0f2e301-3365-4b32-8aa6-583d2794b9d1] Running
	I0914 16:45:48.504829   16725 system_pods.go:61] "kube-ingress-dns-minikube" [9bd2d610-b1b0-4b09-a8f9-29d24b8f3e18] Running
	I0914 16:45:48.504835   16725 system_pods.go:61] "kube-proxy-ll2cd" [77c4fbce-cceb-4918-871f-5d17932941f1] Running
	I0914 16:45:48.504840   16725 system_pods.go:61] "kube-scheduler-addons-996992" [e9922ffd-3c61-47c3-a0d0-2063f8e8484d] Running
	I0914 16:45:48.504848   16725 system_pods.go:61] "metrics-server-84c5f94fbc-zpthv" [5adc8bfb-2fb3-4e13-8b04-98e98afe35a9] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 16:45:48.504854   16725 system_pods.go:61] "nvidia-device-plugin-daemonset-v9pgt" [3f1896cc-99c7-4c98-8b64-9e40965c553b] Running
	I0914 16:45:48.504866   16725 system_pods.go:61] "registry-66c9cd494c-jdr7n" [1fa84874-319a-4e4a-9126-b618e477b31e] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0914 16:45:48.504876   16725 system_pods.go:61] "registry-proxy-b9ffc" [44b082a1-dd9e-4251-a141-6f0578d54a17] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0914 16:45:48.504890   16725 system_pods.go:61] "snapshot-controller-56fcc65765-cc2vz" [4663132f-a286-4aed-8845-8c2fb27ac546] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0914 16:45:48.504900   16725 system_pods.go:61] "snapshot-controller-56fcc65765-l6fxq" [719471e2-a6ad-4742-92a5-2ca1874e373c] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0914 16:45:48.504906   16725 system_pods.go:61] "storage-provisioner" [042983c1-0076-46d0-8022-ff8afde6de61] Running
	I0914 16:45:48.504920   16725 system_pods.go:61] "tiller-deploy-b48cc5f79-z2hbn" [62ae1fe8-58f5-422e-b2b8-abcdaf2e7693] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0914 16:45:48.504928   16725 system_pods.go:74] duration metric: took 164.743813ms to wait for pod list to return data ...
	I0914 16:45:48.504942   16725 default_sa.go:34] waiting for default service account to be created ...
	I0914 16:45:48.546545   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:45:48.692466   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:48.696319   16725 default_sa.go:45] found service account: "default"
	I0914 16:45:48.696367   16725 default_sa.go:55] duration metric: took 191.418164ms for default service account to be created ...
	I0914 16:45:48.696376   16725 system_pods.go:116] waiting for k8s-apps to be running ...
	I0914 16:45:48.808682   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:45:48.808951   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:48.920544   16725 system_pods.go:86] 18 kube-system pods found
	I0914 16:45:48.920575   16725 system_pods.go:89] "coredns-7c65d6cfc9-9p6z9" [8b60a487-876e-49a1-9a02-ff29269e6cd9] Running
	I0914 16:45:48.920585   16725 system_pods.go:89] "csi-hostpath-attacher-0" [fc163c87-b3c1-44fb-b23a-daf71f2476fd] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0914 16:45:48.920592   16725 system_pods.go:89] "csi-hostpath-resizer-0" [cb3dc269-4b68-41cc-8dac-f4e4cac02923] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0914 16:45:48.920600   16725 system_pods.go:89] "csi-hostpathplugin-j8fzx" [4c687703-e40a-48df-9dbf-ef6c5b71f2c9] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0914 16:45:48.920604   16725 system_pods.go:89] "etcd-addons-996992" [51dddf60-7bb8-4d07-b593-4841d49d04c6] Running
	I0914 16:45:48.920608   16725 system_pods.go:89] "kube-apiserver-addons-996992" [df7a9746-e613-42b3-99ae-376c32e5c9c5] Running
	I0914 16:45:48.920612   16725 system_pods.go:89] "kube-controller-manager-addons-996992" [d0f2e301-3365-4b32-8aa6-583d2794b9d1] Running
	I0914 16:45:48.920616   16725 system_pods.go:89] "kube-ingress-dns-minikube" [9bd2d610-b1b0-4b09-a8f9-29d24b8f3e18] Running
	I0914 16:45:48.920619   16725 system_pods.go:89] "kube-proxy-ll2cd" [77c4fbce-cceb-4918-871f-5d17932941f1] Running
	I0914 16:45:48.920623   16725 system_pods.go:89] "kube-scheduler-addons-996992" [e9922ffd-3c61-47c3-a0d0-2063f8e8484d] Running
	I0914 16:45:48.920629   16725 system_pods.go:89] "metrics-server-84c5f94fbc-zpthv" [5adc8bfb-2fb3-4e13-8b04-98e98afe35a9] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 16:45:48.920633   16725 system_pods.go:89] "nvidia-device-plugin-daemonset-v9pgt" [3f1896cc-99c7-4c98-8b64-9e40965c553b] Running
	I0914 16:45:48.920640   16725 system_pods.go:89] "registry-66c9cd494c-jdr7n" [1fa84874-319a-4e4a-9126-b618e477b31e] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0914 16:45:48.920645   16725 system_pods.go:89] "registry-proxy-b9ffc" [44b082a1-dd9e-4251-a141-6f0578d54a17] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0914 16:45:48.920652   16725 system_pods.go:89] "snapshot-controller-56fcc65765-cc2vz" [4663132f-a286-4aed-8845-8c2fb27ac546] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0914 16:45:48.920660   16725 system_pods.go:89] "snapshot-controller-56fcc65765-l6fxq" [719471e2-a6ad-4742-92a5-2ca1874e373c] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0914 16:45:48.920664   16725 system_pods.go:89] "storage-provisioner" [042983c1-0076-46d0-8022-ff8afde6de61] Running
	I0914 16:45:48.920669   16725 system_pods.go:89] "tiller-deploy-b48cc5f79-z2hbn" [62ae1fe8-58f5-422e-b2b8-abcdaf2e7693] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0914 16:45:48.920677   16725 system_pods.go:126] duration metric: took 224.295642ms to wait for k8s-apps to be running ...
	I0914 16:45:48.920684   16725 system_svc.go:44] waiting for kubelet service to be running ....
	I0914 16:45:48.920724   16725 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 16:45:48.937847   16725 system_svc.go:56] duration metric: took 17.154195ms WaitForService to wait for kubelet
	I0914 16:45:48.937878   16725 kubeadm.go:582] duration metric: took 38.315130323s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0914 16:45:48.937899   16725 node_conditions.go:102] verifying NodePressure condition ...
	I0914 16:45:49.048228   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:45:49.098325   16725 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0914 16:45:49.098385   16725 node_conditions.go:123] node cpu capacity is 2
	I0914 16:45:49.098398   16725 node_conditions.go:105] duration metric: took 160.494508ms to run NodePressure ...
	I0914 16:45:49.098410   16725 start.go:241] waiting for startup goroutines ...
	I0914 16:45:49.192082   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:49.306218   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:49.307323   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:45:49.547409   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:45:49.692860   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:49.807027   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:49.813086   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:45:50.047555   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:45:50.192775   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:50.306264   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:45:50.306398   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:50.547544   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:45:50.692765   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:50.806990   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:45:50.807136   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:51.047419   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:45:51.192036   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:51.306859   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:45:51.307240   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:51.546636   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:45:51.692296   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:51.807294   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:51.807691   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:45:52.046611   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:45:52.193349   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:52.306306   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:52.307173   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:45:52.547079   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:45:52.691900   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:52.806428   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:52.807573   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:45:53.046699   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:45:53.192419   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:53.306755   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:45:53.307712   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:53.552730   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:45:53.693022   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:53.805998   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:53.807006   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:45:54.047063   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:45:54.195701   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:54.308158   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:54.308170   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:45:54.547515   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:45:54.693931   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:54.806765   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:45:54.807175   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:55.047742   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:45:55.194005   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:55.306209   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:55.307788   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:45:55.546984   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:45:55.693279   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:55.807163   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:45:55.807663   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:56.052639   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:45:56.193934   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:56.317185   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:45:56.322650   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:56.547946   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:45:56.692907   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:56.812014   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:45:56.812358   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:57.047127   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:45:57.193740   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:57.307143   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:45:57.307407   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:57.547562   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:45:57.693212   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:57.806535   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:45:57.806710   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:58.046520   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:45:58.197798   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:58.307070   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:45:58.307765   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:58.547433   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:45:58.692299   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:58.806831   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:45:58.807481   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:59.046934   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:45:59.193174   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:59.307443   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:45:59.307669   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:59.548010   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:45:59.693092   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:59.807151   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:45:59.808268   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:46:00.047359   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:46:00.478614   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:46:00.479137   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:46:00.479508   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:46:00.547104   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:46:00.692282   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:46:00.806824   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:46:00.807536   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:46:01.047697   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:46:01.193726   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:46:01.307966   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:46:01.308014   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:46:01.547201   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:46:01.695313   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:46:01.806792   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:46:01.807383   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:46:02.047607   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:46:02.192475   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:46:02.306347   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:46:02.306833   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:46:02.547377   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:46:02.692730   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:46:02.807047   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:46:02.807463   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:46:03.047309   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:46:03.195015   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:46:03.307647   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:46:03.307817   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:46:03.547787   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:46:03.692947   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:46:03.807157   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:46:03.807344   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:46:04.048006   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:46:04.192987   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:46:04.318549   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:46:04.318994   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:46:04.547383   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:46:04.693036   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:46:04.805898   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:46:04.807705   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:46:05.047059   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:46:05.193631   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:46:05.306513   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:46:05.306799   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:46:05.546629   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:46:05.692830   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:46:05.806493   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:46:05.806880   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:46:06.046580   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:46:06.192054   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:46:06.306131   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:46:06.307575   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:46:06.547492   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:46:06.692615   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:46:06.806368   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:46:06.806725   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:46:07.046496   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:46:07.192627   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:46:07.311557   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:46:07.311733   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:46:07.547642   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:46:07.693080   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:46:07.806770   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:46:07.807306   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:46:08.047553   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:46:08.193062   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:46:08.306216   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:46:08.306825   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:46:08.547432   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:46:08.693198   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:46:08.806659   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:46:08.807567   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:46:09.046856   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:46:09.193443   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:46:09.306323   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:46:09.308192   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:46:09.547245   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:46:09.692407   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:46:09.807106   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:46:09.809300   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:46:10.050073   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:46:10.192821   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:46:10.307140   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:46:10.307386   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:46:10.547008   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:46:10.692575   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:46:10.806819   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:46:10.808404   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:46:11.047532   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:46:11.194303   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:46:11.306378   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:46:11.306880   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:46:11.547761   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:46:11.692624   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:46:11.811199   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:46:11.811447   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:46:12.047345   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:46:12.193374   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:46:12.306143   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:46:12.308049   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:46:12.546681   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:46:12.693001   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:46:12.806422   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:46:12.806748   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:46:13.046519   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:46:13.632563   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:46:13.632569   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:46:13.633214   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:46:13.633245   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:46:13.692680   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:46:13.806502   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:46:13.808264   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:46:14.047109   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:46:14.193313   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:46:14.305768   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:46:14.307495   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:46:14.547099   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:46:14.693347   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:46:14.806645   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:46:14.807536   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:46:15.046459   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:46:15.192401   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:46:15.307521   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:46:15.307739   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:46:15.548447   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:46:15.693811   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:46:15.805918   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:46:15.806859   16725 kapi.go:107] duration metric: took 56.503923107s to wait for kubernetes.io/minikube-addons=registry ...
	I0914 16:46:16.046482   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:46:16.192234   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:46:16.306338   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:46:16.547377   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:46:17.214224   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:46:17.214920   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:46:17.218540   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:46:17.221430   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:46:17.315378   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:46:17.551452   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:46:17.694597   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:46:17.806145   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:46:18.046558   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:46:18.192092   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:46:18.305661   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:46:18.547539   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:46:18.692638   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:46:18.806657   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:46:19.053521   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:46:19.193880   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:46:19.311277   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:46:19.546622   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:46:19.693339   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:46:19.806264   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:46:20.046500   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:46:20.192998   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:46:20.306067   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:46:20.547197   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:46:20.692597   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:46:20.807811   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:46:21.047801   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:46:21.192778   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:46:21.306452   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:46:21.547311   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:46:21.693049   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:46:21.827840   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:46:22.047273   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:46:22.192310   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:46:22.311209   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:46:22.838565   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:46:22.838932   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:46:22.839032   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:46:23.047177   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:46:23.193709   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:46:23.306794   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:46:23.547596   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:46:23.692382   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:46:23.807214   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:46:24.046485   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:46:24.192341   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:46:24.307183   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:46:24.546672   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:46:24.693935   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:46:24.810550   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:46:25.050252   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:46:25.195092   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:46:25.307161   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:46:25.549697   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:46:25.697541   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:46:25.806080   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:46:26.046708   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:46:26.192705   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:46:26.306674   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:46:26.547507   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:46:26.693182   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:46:26.806532   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:46:27.049050   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:46:27.196252   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:46:27.308707   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:46:27.547747   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:46:27.692965   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:46:27.807158   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:46:28.048325   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:46:28.193153   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:46:28.306290   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:46:28.546673   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:46:28.692592   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:46:28.806423   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:46:29.047119   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:46:29.193334   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:46:29.306364   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:46:29.547235   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:46:29.697436   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:46:29.807863   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:46:30.055007   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:46:30.193621   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:46:30.306752   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:46:30.547587   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:46:30.693117   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:46:30.806296   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:46:31.046378   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:46:31.193611   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:46:31.306059   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:46:31.546599   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:46:31.692393   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:46:31.806618   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:46:32.047197   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:46:32.199989   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:46:32.658958   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:46:32.659665   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:46:32.693594   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:46:32.813854   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:46:33.046793   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:46:33.194323   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:46:33.306864   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:46:33.547559   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:46:33.693855   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:46:33.808730   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:46:34.048970   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:46:34.194651   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:46:34.307090   16725 kapi.go:107] duration metric: took 1m15.005037262s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0914 16:46:34.546875   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:46:34.694388   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:46:35.083057   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:46:35.193569   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:46:35.549326   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:46:35.692860   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:46:36.047852   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:46:36.192896   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:46:36.547520   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:46:36.693004   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:46:37.047621   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:46:37.192802   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:46:37.547115   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:46:37.707625   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:46:38.047500   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:46:38.192485   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:46:38.547359   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:46:38.692532   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:46:39.048815   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:46:39.192850   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:46:39.547858   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:46:39.693239   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:46:40.048117   16725 kapi.go:107] duration metric: took 1m18.00480647s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0914 16:46:40.049808   16725 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-996992 cluster.
	I0914 16:46:40.050997   16725 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0914 16:46:40.052104   16725 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0914 16:46:40.193221   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:46:40.693480   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:46:41.192757   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:46:41.707864   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:46:42.193577   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:46:42.693176   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:46:43.192560   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:46:44.006023   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:46:44.193094   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:46:44.693734   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:46:45.193109   16725 kapi.go:107] duration metric: took 1m24.505060721s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0914 16:46:45.194961   16725 out.go:177] * Enabled addons: cloud-spanner, ingress-dns, storage-provisioner, nvidia-device-plugin, metrics-server, helm-tiller, inspektor-gadget, yakd, storage-provisioner-rancher, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I0914 16:46:45.196167   16725 addons.go:510] duration metric: took 1m34.573399474s for enable addons: enabled=[cloud-spanner ingress-dns storage-provisioner nvidia-device-plugin metrics-server helm-tiller inspektor-gadget yakd storage-provisioner-rancher volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I0914 16:46:45.196214   16725 start.go:246] waiting for cluster config update ...
	I0914 16:46:45.196250   16725 start.go:255] writing updated cluster config ...
	I0914 16:46:45.196519   16725 ssh_runner.go:195] Run: rm -f paused
	I0914 16:46:45.248928   16725 start.go:600] kubectl: 1.31.0, cluster: 1.31.1 (minor skew: 0)
	I0914 16:46:45.250609   16725 out.go:177] * Done! kubectl is now configured to use "addons-996992" cluster and "default" namespace by default
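
The gcp-auth advisory earlier in this log notes that a pod can opt out of credential mounting by carrying a label with the gcp-auth-skip-secret key. As a rough illustration only (not part of the test output), a pod spec carrying that label might look like the sketch below; the pod name and the label value "true" are assumptions, since the addon output above only names the label key.

    apiVersion: v1
    kind: Pod
    metadata:
      name: skip-gcp-auth-demo          # hypothetical name, for illustration only
      labels:
        gcp-auth-skip-secret: "true"    # assumed value; the log only mentions the label key
    spec:
      containers:
      - name: app
        image: gcr.io/k8s-minikube/busybox   # image reused from the registry test in this report
        command: ["sleep", "3600"]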
	
	
	==> CRI-O <==
	Sep 14 16:56:01 addons-996992 crio[669]: time="2024-09-14 16:56:01.005208300Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726332961005179323,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:557374,},InodesUsed:&UInt64Value{Value:190,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3ef1220a-a095-40fa-8d52-e3c7f5298a70 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 14 16:56:01 addons-996992 crio[669]: time="2024-09-14 16:56:01.005776846Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=329c75f0-9309-4276-b281-75b7a844d1ce name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 16:56:01 addons-996992 crio[669]: time="2024-09-14 16:56:01.005832825Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=329c75f0-9309-4276-b281-75b7a844d1ce name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 16:56:01 addons-996992 crio[669]: time="2024-09-14 16:56:01.006362398Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a2c842e27b9de8f75edb92b055573943c47a606fe40f848399f1053c007349d9,PodSandboxId:8164a72938eecad1dd7f3da09097d7f0e2eb28f6662a0578bf01cfed126c0a5d,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:074604130336e3c431b7c6b5b551b5a6ae5b67db13b3d223c6db638f85c7ff56,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7b4f26a7d93f4f1f276c51adb03ef0df54a82de89f254a9aec5c18bf0e45ee9,State:CONTAINER_RUNNING,CreatedAt:1726332950251160999,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e9aa988e-e59a-44dd-84f7-753b4db11866,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de840614e522a3e09ac2101bfdf233c22694b241d0330ae9b6b380e57712f528,PodSandboxId:482fe6f4bf589ab62482d5cf79482fa58399b1979690163964ed555474c3697a,Metadata:&ContainerMetadata{Name:helm-test,Attempt:0,},Image:&ImageSpec{Image:docker.io/alpine/helm@sha256:9d9fab00e0680f1328924429925595dfe96a68531c8a9c1518d05ee2ad45c36f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:98f6c3b32d565299b035cc773a15cee165942450c44e11cdcaaf370d2c26dc31,State:CONTAINER_EXITED,CreatedAt:1726332903528893715,Labels:map[string]string{io.kubernetes.container.name: helm-test,io.kubernetes.pod.name: helm-test,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1cdb3f93-4cf1-490b-96ad-b97d53f51435,},Annotations:map[string]string{io.kubernetes.container.hash: a6a7e31c,io.kubernetes.
container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a5f5c128449d5b555e8175a0ae8a4e90a5dc7e3e94ab57fac96369cd26b152d7,PodSandboxId:a60ea84d6864a79ae3b20da095944410520b21095a613f589e28b7606d32e62b,Metadata:&ContainerMetadata{Name:helper-pod,Attempt:0,},Image:&ImageSpec{Image:a416a98b71e224a31ee99cff8e16063554498227d2b696152a9c3e0aa65e5824,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a416a98b71e224a31ee99cff8e16063554498227d2b696152a9c3e0aa65e5824,State:CONTAINER_EXITED,CreatedAt:1726332902046816859,Labels:map[string]string{io.kubernetes.container.name: helper-pod,io.kubernetes.pod.name: helper-pod-delete-pvc-065cb3df-7fd3-4993-9a34-5c093c32d00a,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 2d63fd01-0720-4db9-8d9f-72f4224779b4,},Annotations:map[string]string{io.kubernetes.container.hash: 973dbf55
,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b1fc29dced5ee78fdeef5ed32bd7516882b6f3d25bf63964b7313e5b1a180188,PodSandboxId:5ac3aa6b762eab56fc54827e80aa928dfdebefbe7ea497526be6db2fe05f6299,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1726332399300943119,Labels:map[string]string{io.kubernetes.container.name: gcp-auth,io.kubernetes.pod.name: gcp-auth-89d5ffd79-smf6s,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: d9222c8a-bc84-4c20-b546-3034abbe136c,},Annotations:map[string]string{io.kubernetes.container.ha
sh: 91308b2f,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dc19f66cd0016b566ce0b53728eeefd352d3b2bfd9d0ed9494d6f451c2a389b9,PodSandboxId:544f7ca779a956ad3b90666d8695284754fe898ca5666e34c80a680bb6338b4c,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:401d25a09ee8fe9fd9d33c5051531e8ebfa4ded95ff09830af8cc48c8e5aeaa6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a80c8fd6e52292d38d4e58453f310d612da59d802a3b62f4b88a21c50178f7ab,State:CONTAINER_RUNNING,CreatedAt:1726332393453741266,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-bc57996ff-hxnf6,io.kubernetes.pod.namespace: ingress-ngin
x,io.kubernetes.pod.uid: d7be8055-0e55-4f2c-8b12-4eb662eb1f12,},Annotations:map[string]string{io.kubernetes.container.hash: bbf80d3,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:22ac3510c9f6c1dea06ada5fbab155f33ab2f7e362c024a53f9eb549848d590d,PodSandboxId:8895260d1ecf95cc546e0a3a5fe468cc59c47341f20b34592818f6875324ebe4,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fada
ef628db3e6457aa012,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1726332379159744959,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-8zsm9,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 6f9dfddf-322e-4827-aabc-f4ce5421023d,},Annotations:map[string]string{io.kubernetes.container.hash: eb970c83,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e14ff6cfbdcb2b0b82ee5e2e8f2ee08b0aa163afbc9804a1b4c2d4e6a1fb1901,PodSandboxId:d3b092d9b1ce5c50aa70440b30436322c5cd6f32a9540386f94dfb977e9c1f68,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0
e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1726332378999567483,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-5rv5k,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 60f1a90d-ea04-408a-a27a-bd202e3b8875,},Annotations:map[string]string{io.kubernetes.container.hash: c5cfc092,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac43b158f15f6458d8100d369eec4b21fe21270d1da088979f2dec49b7bf6be9,PodSandboxId:b34408bfe46509036c9fd74c816e83c32927e242f8c879cfdd0662de374b7438,Metadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&ImageSpec{Image:docker.io/marcnuri/yakd@sha256:8
ebd1692ed5271719f13b728d9af7acb839aa04821e931c8993d908ad68b69fd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7e3a3eeaf5ed4edd2279898cff978fbf31e46891773de4113e6437fa6d73fe6,State:CONTAINER_RUNNING,CreatedAt:1726332360873788416,Labels:map[string]string{io.kubernetes.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-67d98fc6b-6w892,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: 345e6c36-623a-477e-9c8c-38b577dc887d,},Annotations:map[string]string{io.kubernetes.container.hash: e656c288,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e8c78f14b17e7a8079ad435e0d378a510d0f1a1d0e67507457d582229b0dc7f8,PodSandboxId:1aa3f3cb51004f04eaa4495d7b5529a80931a47297b10c4983a9dd325aac62e6,Metad
ata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1726332355816734887,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-zpthv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5adc8bfb-2fb3-4e13-8b04-98e98afe35a9,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:50b4273
0d0ba5c470ef249db0273bf7acdbb0cea0427d5e6c484849d04ab46a3,PodSandboxId:e5f45121454c3a27b0095a2b55c991da6c1e65ac2c0d89d9e2f8ba2459376c34,Metadata:&ContainerMetadata{Name:nvidia-device-plugin-ctr,Attempt:0,},Image:&ImageSpec{Image:nvcr.io/nvidia/k8s-device-plugin@sha256:ed39e22c8b71343fb996737741a99da88ce6c75dd83b5c520e0b3d8e8a884c47,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:159abe21a6880acafcba64b5e25c48b3e74134ca6823dc553a29c127693ace3e,State:CONTAINER_RUNNING,CreatedAt:1726332345705459151,Labels:map[string]string{io.kubernetes.container.name: nvidia-device-plugin-ctr,io.kubernetes.pod.name: nvidia-device-plugin-daemonset-v9pgt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3f1896cc-99c7-4c98-8b64-9e40965c553b,},Annotations:map[string]string{io.kubernetes.container.hash: 7c4b2818,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.p
od.terminationGracePeriod: 30,},},&Container{Id:4617d458fc0fa49e9385acca016dcbf772360248792914098be67dcd9bdd1ee4,PodSandboxId:bbe08984edd14a3061e218776d791d5d60bbd70e0559e18a4b490daad6b022eb,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1726332327114052376,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9bd2d610-b1b0-4b09-a8f9-29d24b8f3e18,},Annotations:map[string]string{io.kubernetes.container.hash: 8778d474,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7f90cf12b43136552be3a9facb2fe5b7e9005a06d15f676d8f77c8fc53ddcb5a,PodSandboxId:5527e3f395706db407371bb2608d8833aefcda9f876b3d2d02c803c0c8b8952d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726332317456742589,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 042983c1-0076-46d0-8022-ff8afde6de61,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.
container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b39fe7c77bdab915cba2003e37d8a932f93c89d2816b66c662cd2b87f189195f,PodSandboxId:1c0a11c1d7f7c5b63da3f4f21f46979730ced4066e78410da9748fd11ab03ced,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726332314086297257,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-9p6z9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b60a487-876e-49a1-9a02-ff29269e6cd9,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\
"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7636b49f23d35f5a9fc075a76348fc31ad18da955d4e73fbad3d24534ef9282a,PodSandboxId:816f86f6b29aba5b03d04c7c4802bafefe963ba526eeadfef60a9836d272fa1f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726332313248815620,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ll2cd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
77c4fbce-cceb-4918-871f-5d17932941f1,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e180103456d123610ca2b41dad9814f3b54f68e8eaaea458cbea9621834f9f2,PodSandboxId:25abc346c251645814f2dd057edf9854dd7da6fc11dfd725d187a5f4ca0cae6d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726332300335194420,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-996992,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 72d75e1235958
269ae116b8dd976eed9,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:62ccf13035320215b184582540baa2f498de301a8e3c9d4e3ad1b2a3595c2c8d,PodSandboxId:ce41d60ed0525314b5cc45be11a88a4496c36c13e3962371b7f675683952dd36,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726332300342876293,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-996992,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b56682c56107b7e0cb4de8d53e660
eb,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:244c994b666b95b76ae5dd25d00b91d997db1974a4a83407b48f9d78d71cf309,PodSandboxId:15fa01d2627fb717b0a294d421654f46fec23ec17c8a8fcaaf18285b094f7812,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726332300313273026,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-996992,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82e0846c71461dab7faa1185c43b9171,},Annotations:map[string]string{io.kubernete
s.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6da48572a3f2741574a1da7660f53f17bb817d3e49e25fd9f41ecf486aa65d5,PodSandboxId:476c6d893727453e5ceaf6f071c23932b5a6bb6e8630053bd70d7c0e05079db0,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726332300190893967,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-996992,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2361d2ae563c8cc1b1f997c60c5996b9,},Annotations:map[string]string{io
.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=329c75f0-9309-4276-b281-75b7a844d1ce name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 16:56:01 addons-996992 crio[669]: time="2024-09-14 16:56:01.042998108Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=4b383929-9160-44eb-afcc-e036bcfc2ee4 name=/runtime.v1.RuntimeService/Version
	Sep 14 16:56:01 addons-996992 crio[669]: time="2024-09-14 16:56:01.043169549Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=4b383929-9160-44eb-afcc-e036bcfc2ee4 name=/runtime.v1.RuntimeService/Version
	Sep 14 16:56:01 addons-996992 crio[669]: time="2024-09-14 16:56:01.044359140Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=2112fddf-dece-48b6-b064-a12740f9279e name=/runtime.v1.ImageService/ImageFsInfo
	Sep 14 16:56:01 addons-996992 crio[669]: time="2024-09-14 16:56:01.045419699Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726332961045391484,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:557374,},InodesUsed:&UInt64Value{Value:190,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2112fddf-dece-48b6-b064-a12740f9279e name=/runtime.v1.ImageService/ImageFsInfo
	Sep 14 16:56:01 addons-996992 crio[669]: time="2024-09-14 16:56:01.046034732Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=08d568f4-0652-4a79-93f1-4122f59ecacc name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 16:56:01 addons-996992 crio[669]: time="2024-09-14 16:56:01.046136026Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=08d568f4-0652-4a79-93f1-4122f59ecacc name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 16:56:01 addons-996992 crio[669]: time="2024-09-14 16:56:01.046573261Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a2c842e27b9de8f75edb92b055573943c47a606fe40f848399f1053c007349d9,PodSandboxId:8164a72938eecad1dd7f3da09097d7f0e2eb28f6662a0578bf01cfed126c0a5d,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:074604130336e3c431b7c6b5b551b5a6ae5b67db13b3d223c6db638f85c7ff56,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7b4f26a7d93f4f1f276c51adb03ef0df54a82de89f254a9aec5c18bf0e45ee9,State:CONTAINER_RUNNING,CreatedAt:1726332950251160999,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e9aa988e-e59a-44dd-84f7-753b4db11866,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de840614e522a3e09ac2101bfdf233c22694b241d0330ae9b6b380e57712f528,PodSandboxId:482fe6f4bf589ab62482d5cf79482fa58399b1979690163964ed555474c3697a,Metadata:&ContainerMetadata{Name:helm-test,Attempt:0,},Image:&ImageSpec{Image:docker.io/alpine/helm@sha256:9d9fab00e0680f1328924429925595dfe96a68531c8a9c1518d05ee2ad45c36f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:98f6c3b32d565299b035cc773a15cee165942450c44e11cdcaaf370d2c26dc31,State:CONTAINER_EXITED,CreatedAt:1726332903528893715,Labels:map[string]string{io.kubernetes.container.name: helm-test,io.kubernetes.pod.name: helm-test,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1cdb3f93-4cf1-490b-96ad-b97d53f51435,},Annotations:map[string]string{io.kubernetes.container.hash: a6a7e31c,io.kubernetes.
container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a5f5c128449d5b555e8175a0ae8a4e90a5dc7e3e94ab57fac96369cd26b152d7,PodSandboxId:a60ea84d6864a79ae3b20da095944410520b21095a613f589e28b7606d32e62b,Metadata:&ContainerMetadata{Name:helper-pod,Attempt:0,},Image:&ImageSpec{Image:a416a98b71e224a31ee99cff8e16063554498227d2b696152a9c3e0aa65e5824,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a416a98b71e224a31ee99cff8e16063554498227d2b696152a9c3e0aa65e5824,State:CONTAINER_EXITED,CreatedAt:1726332902046816859,Labels:map[string]string{io.kubernetes.container.name: helper-pod,io.kubernetes.pod.name: helper-pod-delete-pvc-065cb3df-7fd3-4993-9a34-5c093c32d00a,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 2d63fd01-0720-4db9-8d9f-72f4224779b4,},Annotations:map[string]string{io.kubernetes.container.hash: 973dbf55
,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b1fc29dced5ee78fdeef5ed32bd7516882b6f3d25bf63964b7313e5b1a180188,PodSandboxId:5ac3aa6b762eab56fc54827e80aa928dfdebefbe7ea497526be6db2fe05f6299,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1726332399300943119,Labels:map[string]string{io.kubernetes.container.name: gcp-auth,io.kubernetes.pod.name: gcp-auth-89d5ffd79-smf6s,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: d9222c8a-bc84-4c20-b546-3034abbe136c,},Annotations:map[string]string{io.kubernetes.container.ha
sh: 91308b2f,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dc19f66cd0016b566ce0b53728eeefd352d3b2bfd9d0ed9494d6f451c2a389b9,PodSandboxId:544f7ca779a956ad3b90666d8695284754fe898ca5666e34c80a680bb6338b4c,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:401d25a09ee8fe9fd9d33c5051531e8ebfa4ded95ff09830af8cc48c8e5aeaa6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a80c8fd6e52292d38d4e58453f310d612da59d802a3b62f4b88a21c50178f7ab,State:CONTAINER_RUNNING,CreatedAt:1726332393453741266,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-bc57996ff-hxnf6,io.kubernetes.pod.namespace: ingress-ngin
x,io.kubernetes.pod.uid: d7be8055-0e55-4f2c-8b12-4eb662eb1f12,},Annotations:map[string]string{io.kubernetes.container.hash: bbf80d3,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:22ac3510c9f6c1dea06ada5fbab155f33ab2f7e362c024a53f9eb549848d590d,PodSandboxId:8895260d1ecf95cc546e0a3a5fe468cc59c47341f20b34592818f6875324ebe4,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fada
ef628db3e6457aa012,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1726332379159744959,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-8zsm9,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 6f9dfddf-322e-4827-aabc-f4ce5421023d,},Annotations:map[string]string{io.kubernetes.container.hash: eb970c83,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e14ff6cfbdcb2b0b82ee5e2e8f2ee08b0aa163afbc9804a1b4c2d4e6a1fb1901,PodSandboxId:d3b092d9b1ce5c50aa70440b30436322c5cd6f32a9540386f94dfb977e9c1f68,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0
e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1726332378999567483,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-5rv5k,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 60f1a90d-ea04-408a-a27a-bd202e3b8875,},Annotations:map[string]string{io.kubernetes.container.hash: c5cfc092,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac43b158f15f6458d8100d369eec4b21fe21270d1da088979f2dec49b7bf6be9,PodSandboxId:b34408bfe46509036c9fd74c816e83c32927e242f8c879cfdd0662de374b7438,Metadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&ImageSpec{Image:docker.io/marcnuri/yakd@sha256:8
ebd1692ed5271719f13b728d9af7acb839aa04821e931c8993d908ad68b69fd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7e3a3eeaf5ed4edd2279898cff978fbf31e46891773de4113e6437fa6d73fe6,State:CONTAINER_RUNNING,CreatedAt:1726332360873788416,Labels:map[string]string{io.kubernetes.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-67d98fc6b-6w892,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: 345e6c36-623a-477e-9c8c-38b577dc887d,},Annotations:map[string]string{io.kubernetes.container.hash: e656c288,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e8c78f14b17e7a8079ad435e0d378a510d0f1a1d0e67507457d582229b0dc7f8,PodSandboxId:1aa3f3cb51004f04eaa4495d7b5529a80931a47297b10c4983a9dd325aac62e6,Metad
ata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1726332355816734887,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-zpthv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5adc8bfb-2fb3-4e13-8b04-98e98afe35a9,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:50b4273
0d0ba5c470ef249db0273bf7acdbb0cea0427d5e6c484849d04ab46a3,PodSandboxId:e5f45121454c3a27b0095a2b55c991da6c1e65ac2c0d89d9e2f8ba2459376c34,Metadata:&ContainerMetadata{Name:nvidia-device-plugin-ctr,Attempt:0,},Image:&ImageSpec{Image:nvcr.io/nvidia/k8s-device-plugin@sha256:ed39e22c8b71343fb996737741a99da88ce6c75dd83b5c520e0b3d8e8a884c47,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:159abe21a6880acafcba64b5e25c48b3e74134ca6823dc553a29c127693ace3e,State:CONTAINER_RUNNING,CreatedAt:1726332345705459151,Labels:map[string]string{io.kubernetes.container.name: nvidia-device-plugin-ctr,io.kubernetes.pod.name: nvidia-device-plugin-daemonset-v9pgt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3f1896cc-99c7-4c98-8b64-9e40965c553b,},Annotations:map[string]string{io.kubernetes.container.hash: 7c4b2818,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.p
od.terminationGracePeriod: 30,},},&Container{Id:4617d458fc0fa49e9385acca016dcbf772360248792914098be67dcd9bdd1ee4,PodSandboxId:bbe08984edd14a3061e218776d791d5d60bbd70e0559e18a4b490daad6b022eb,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1726332327114052376,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9bd2d610-b1b0-4b09-a8f9-29d24b8f3e18,},Annotations:map[string]string{io.kubernetes.container.hash: 8778d474,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7f90cf12b43136552be3a9facb2fe5b7e9005a06d15f676d8f77c8fc53ddcb5a,PodSandboxId:5527e3f395706db407371bb2608d8833aefcda9f876b3d2d02c803c0c8b8952d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726332317456742589,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 042983c1-0076-46d0-8022-ff8afde6de61,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.
container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b39fe7c77bdab915cba2003e37d8a932f93c89d2816b66c662cd2b87f189195f,PodSandboxId:1c0a11c1d7f7c5b63da3f4f21f46979730ced4066e78410da9748fd11ab03ced,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726332314086297257,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-9p6z9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b60a487-876e-49a1-9a02-ff29269e6cd9,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\
"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7636b49f23d35f5a9fc075a76348fc31ad18da955d4e73fbad3d24534ef9282a,PodSandboxId:816f86f6b29aba5b03d04c7c4802bafefe963ba526eeadfef60a9836d272fa1f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726332313248815620,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ll2cd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
77c4fbce-cceb-4918-871f-5d17932941f1,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e180103456d123610ca2b41dad9814f3b54f68e8eaaea458cbea9621834f9f2,PodSandboxId:25abc346c251645814f2dd057edf9854dd7da6fc11dfd725d187a5f4ca0cae6d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726332300335194420,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-996992,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 72d75e1235958
269ae116b8dd976eed9,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:62ccf13035320215b184582540baa2f498de301a8e3c9d4e3ad1b2a3595c2c8d,PodSandboxId:ce41d60ed0525314b5cc45be11a88a4496c36c13e3962371b7f675683952dd36,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726332300342876293,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-996992,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b56682c56107b7e0cb4de8d53e660
eb,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:244c994b666b95b76ae5dd25d00b91d997db1974a4a83407b48f9d78d71cf309,PodSandboxId:15fa01d2627fb717b0a294d421654f46fec23ec17c8a8fcaaf18285b094f7812,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726332300313273026,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-996992,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82e0846c71461dab7faa1185c43b9171,},Annotations:map[string]string{io.kubernete
s.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6da48572a3f2741574a1da7660f53f17bb817d3e49e25fd9f41ecf486aa65d5,PodSandboxId:476c6d893727453e5ceaf6f071c23932b5a6bb6e8630053bd70d7c0e05079db0,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726332300190893967,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-996992,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2361d2ae563c8cc1b1f997c60c5996b9,},Annotations:map[string]string{io
.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=08d568f4-0652-4a79-93f1-4122f59ecacc name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 16:56:01 addons-996992 crio[669]: time="2024-09-14 16:56:01.094190322Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=3af3382f-1d7e-4a69-b414-8a89585d244d name=/runtime.v1.RuntimeService/Version
	Sep 14 16:56:01 addons-996992 crio[669]: time="2024-09-14 16:56:01.094305102Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=3af3382f-1d7e-4a69-b414-8a89585d244d name=/runtime.v1.RuntimeService/Version
	Sep 14 16:56:01 addons-996992 crio[669]: time="2024-09-14 16:56:01.095594588Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=29a75a9c-15ea-4120-afd1-3b36f60de824 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 14 16:56:01 addons-996992 crio[669]: time="2024-09-14 16:56:01.096731525Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726332961096701948,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:557374,},InodesUsed:&UInt64Value{Value:190,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=29a75a9c-15ea-4120-afd1-3b36f60de824 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 14 16:56:01 addons-996992 crio[669]: time="2024-09-14 16:56:01.097400094Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=fa5bfb83-eb64-40c1-b713-8fc7bf30d90b name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 16:56:01 addons-996992 crio[669]: time="2024-09-14 16:56:01.097456765Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=fa5bfb83-eb64-40c1-b713-8fc7bf30d90b name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 16:56:01 addons-996992 crio[669]: time="2024-09-14 16:56:01.097816448Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a2c842e27b9de8f75edb92b055573943c47a606fe40f848399f1053c007349d9,PodSandboxId:8164a72938eecad1dd7f3da09097d7f0e2eb28f6662a0578bf01cfed126c0a5d,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:074604130336e3c431b7c6b5b551b5a6ae5b67db13b3d223c6db638f85c7ff56,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7b4f26a7d93f4f1f276c51adb03ef0df54a82de89f254a9aec5c18bf0e45ee9,State:CONTAINER_RUNNING,CreatedAt:1726332950251160999,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e9aa988e-e59a-44dd-84f7-753b4db11866,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de840614e522a3e09ac2101bfdf233c22694b241d0330ae9b6b380e57712f528,PodSandboxId:482fe6f4bf589ab62482d5cf79482fa58399b1979690163964ed555474c3697a,Metadata:&ContainerMetadata{Name:helm-test,Attempt:0,},Image:&ImageSpec{Image:docker.io/alpine/helm@sha256:9d9fab00e0680f1328924429925595dfe96a68531c8a9c1518d05ee2ad45c36f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:98f6c3b32d565299b035cc773a15cee165942450c44e11cdcaaf370d2c26dc31,State:CONTAINER_EXITED,CreatedAt:1726332903528893715,Labels:map[string]string{io.kubernetes.container.name: helm-test,io.kubernetes.pod.name: helm-test,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1cdb3f93-4cf1-490b-96ad-b97d53f51435,},Annotations:map[string]string{io.kubernetes.container.hash: a6a7e31c,io.kubernetes.
container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a5f5c128449d5b555e8175a0ae8a4e90a5dc7e3e94ab57fac96369cd26b152d7,PodSandboxId:a60ea84d6864a79ae3b20da095944410520b21095a613f589e28b7606d32e62b,Metadata:&ContainerMetadata{Name:helper-pod,Attempt:0,},Image:&ImageSpec{Image:a416a98b71e224a31ee99cff8e16063554498227d2b696152a9c3e0aa65e5824,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a416a98b71e224a31ee99cff8e16063554498227d2b696152a9c3e0aa65e5824,State:CONTAINER_EXITED,CreatedAt:1726332902046816859,Labels:map[string]string{io.kubernetes.container.name: helper-pod,io.kubernetes.pod.name: helper-pod-delete-pvc-065cb3df-7fd3-4993-9a34-5c093c32d00a,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 2d63fd01-0720-4db9-8d9f-72f4224779b4,},Annotations:map[string]string{io.kubernetes.container.hash: 973dbf55
,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b1fc29dced5ee78fdeef5ed32bd7516882b6f3d25bf63964b7313e5b1a180188,PodSandboxId:5ac3aa6b762eab56fc54827e80aa928dfdebefbe7ea497526be6db2fe05f6299,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1726332399300943119,Labels:map[string]string{io.kubernetes.container.name: gcp-auth,io.kubernetes.pod.name: gcp-auth-89d5ffd79-smf6s,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: d9222c8a-bc84-4c20-b546-3034abbe136c,},Annotations:map[string]string{io.kubernetes.container.ha
sh: 91308b2f,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dc19f66cd0016b566ce0b53728eeefd352d3b2bfd9d0ed9494d6f451c2a389b9,PodSandboxId:544f7ca779a956ad3b90666d8695284754fe898ca5666e34c80a680bb6338b4c,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:401d25a09ee8fe9fd9d33c5051531e8ebfa4ded95ff09830af8cc48c8e5aeaa6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a80c8fd6e52292d38d4e58453f310d612da59d802a3b62f4b88a21c50178f7ab,State:CONTAINER_RUNNING,CreatedAt:1726332393453741266,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-bc57996ff-hxnf6,io.kubernetes.pod.namespace: ingress-ngin
x,io.kubernetes.pod.uid: d7be8055-0e55-4f2c-8b12-4eb662eb1f12,},Annotations:map[string]string{io.kubernetes.container.hash: bbf80d3,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:22ac3510c9f6c1dea06ada5fbab155f33ab2f7e362c024a53f9eb549848d590d,PodSandboxId:8895260d1ecf95cc546e0a3a5fe468cc59c47341f20b34592818f6875324ebe4,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fada
ef628db3e6457aa012,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1726332379159744959,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-8zsm9,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 6f9dfddf-322e-4827-aabc-f4ce5421023d,},Annotations:map[string]string{io.kubernetes.container.hash: eb970c83,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e14ff6cfbdcb2b0b82ee5e2e8f2ee08b0aa163afbc9804a1b4c2d4e6a1fb1901,PodSandboxId:d3b092d9b1ce5c50aa70440b30436322c5cd6f32a9540386f94dfb977e9c1f68,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0
e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1726332378999567483,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-5rv5k,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 60f1a90d-ea04-408a-a27a-bd202e3b8875,},Annotations:map[string]string{io.kubernetes.container.hash: c5cfc092,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac43b158f15f6458d8100d369eec4b21fe21270d1da088979f2dec49b7bf6be9,PodSandboxId:b34408bfe46509036c9fd74c816e83c32927e242f8c879cfdd0662de374b7438,Metadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&ImageSpec{Image:docker.io/marcnuri/yakd@sha256:8
ebd1692ed5271719f13b728d9af7acb839aa04821e931c8993d908ad68b69fd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7e3a3eeaf5ed4edd2279898cff978fbf31e46891773de4113e6437fa6d73fe6,State:CONTAINER_RUNNING,CreatedAt:1726332360873788416,Labels:map[string]string{io.kubernetes.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-67d98fc6b-6w892,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: 345e6c36-623a-477e-9c8c-38b577dc887d,},Annotations:map[string]string{io.kubernetes.container.hash: e656c288,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e8c78f14b17e7a8079ad435e0d378a510d0f1a1d0e67507457d582229b0dc7f8,PodSandboxId:1aa3f3cb51004f04eaa4495d7b5529a80931a47297b10c4983a9dd325aac62e6,Metad
ata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1726332355816734887,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-zpthv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5adc8bfb-2fb3-4e13-8b04-98e98afe35a9,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:50b4273
0d0ba5c470ef249db0273bf7acdbb0cea0427d5e6c484849d04ab46a3,PodSandboxId:e5f45121454c3a27b0095a2b55c991da6c1e65ac2c0d89d9e2f8ba2459376c34,Metadata:&ContainerMetadata{Name:nvidia-device-plugin-ctr,Attempt:0,},Image:&ImageSpec{Image:nvcr.io/nvidia/k8s-device-plugin@sha256:ed39e22c8b71343fb996737741a99da88ce6c75dd83b5c520e0b3d8e8a884c47,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:159abe21a6880acafcba64b5e25c48b3e74134ca6823dc553a29c127693ace3e,State:CONTAINER_RUNNING,CreatedAt:1726332345705459151,Labels:map[string]string{io.kubernetes.container.name: nvidia-device-plugin-ctr,io.kubernetes.pod.name: nvidia-device-plugin-daemonset-v9pgt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3f1896cc-99c7-4c98-8b64-9e40965c553b,},Annotations:map[string]string{io.kubernetes.container.hash: 7c4b2818,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.p
od.terminationGracePeriod: 30,},},&Container{Id:4617d458fc0fa49e9385acca016dcbf772360248792914098be67dcd9bdd1ee4,PodSandboxId:bbe08984edd14a3061e218776d791d5d60bbd70e0559e18a4b490daad6b022eb,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1726332327114052376,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9bd2d610-b1b0-4b09-a8f9-29d24b8f3e18,},Annotations:map[string]string{io.kubernetes.container.hash: 8778d474,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7f90cf12b43136552be3a9facb2fe5b7e9005a06d15f676d8f77c8fc53ddcb5a,PodSandboxId:5527e3f395706db407371bb2608d8833aefcda9f876b3d2d02c803c0c8b8952d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726332317456742589,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 042983c1-0076-46d0-8022-ff8afde6de61,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.
container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b39fe7c77bdab915cba2003e37d8a932f93c89d2816b66c662cd2b87f189195f,PodSandboxId:1c0a11c1d7f7c5b63da3f4f21f46979730ced4066e78410da9748fd11ab03ced,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726332314086297257,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-9p6z9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b60a487-876e-49a1-9a02-ff29269e6cd9,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\
"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7636b49f23d35f5a9fc075a76348fc31ad18da955d4e73fbad3d24534ef9282a,PodSandboxId:816f86f6b29aba5b03d04c7c4802bafefe963ba526eeadfef60a9836d272fa1f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726332313248815620,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ll2cd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
77c4fbce-cceb-4918-871f-5d17932941f1,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e180103456d123610ca2b41dad9814f3b54f68e8eaaea458cbea9621834f9f2,PodSandboxId:25abc346c251645814f2dd057edf9854dd7da6fc11dfd725d187a5f4ca0cae6d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726332300335194420,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-996992,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 72d75e1235958
269ae116b8dd976eed9,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:62ccf13035320215b184582540baa2f498de301a8e3c9d4e3ad1b2a3595c2c8d,PodSandboxId:ce41d60ed0525314b5cc45be11a88a4496c36c13e3962371b7f675683952dd36,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726332300342876293,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-996992,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b56682c56107b7e0cb4de8d53e660
eb,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:244c994b666b95b76ae5dd25d00b91d997db1974a4a83407b48f9d78d71cf309,PodSandboxId:15fa01d2627fb717b0a294d421654f46fec23ec17c8a8fcaaf18285b094f7812,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726332300313273026,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-996992,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82e0846c71461dab7faa1185c43b9171,},Annotations:map[string]string{io.kubernete
s.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6da48572a3f2741574a1da7660f53f17bb817d3e49e25fd9f41ecf486aa65d5,PodSandboxId:476c6d893727453e5ceaf6f071c23932b5a6bb6e8630053bd70d7c0e05079db0,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726332300190893967,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-996992,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2361d2ae563c8cc1b1f997c60c5996b9,},Annotations:map[string]string{io
.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=fa5bfb83-eb64-40c1-b713-8fc7bf30d90b name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 16:56:01 addons-996992 crio[669]: time="2024-09-14 16:56:01.132072794Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=525e8abb-5455-4c10-928c-363fe6676535 name=/runtime.v1.RuntimeService/Version
	Sep 14 16:56:01 addons-996992 crio[669]: time="2024-09-14 16:56:01.132190814Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=525e8abb-5455-4c10-928c-363fe6676535 name=/runtime.v1.RuntimeService/Version
	Sep 14 16:56:01 addons-996992 crio[669]: time="2024-09-14 16:56:01.133318718Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=9cf1d249-b800-47b5-99e8-f4829e3f8432 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 14 16:56:01 addons-996992 crio[669]: time="2024-09-14 16:56:01.134404980Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726332961134379767,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:557374,},InodesUsed:&UInt64Value{Value:190,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9cf1d249-b800-47b5-99e8-f4829e3f8432 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 14 16:56:01 addons-996992 crio[669]: time="2024-09-14 16:56:01.134973998Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f6fbb191-aced-4231-87ca-6ff59ff9b6fc name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 16:56:01 addons-996992 crio[669]: time="2024-09-14 16:56:01.135030931Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f6fbb191-aced-4231-87ca-6ff59ff9b6fc name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 16:56:01 addons-996992 crio[669]: time="2024-09-14 16:56:01.135527957Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a2c842e27b9de8f75edb92b055573943c47a606fe40f848399f1053c007349d9,PodSandboxId:8164a72938eecad1dd7f3da09097d7f0e2eb28f6662a0578bf01cfed126c0a5d,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:074604130336e3c431b7c6b5b551b5a6ae5b67db13b3d223c6db638f85c7ff56,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7b4f26a7d93f4f1f276c51adb03ef0df54a82de89f254a9aec5c18bf0e45ee9,State:CONTAINER_RUNNING,CreatedAt:1726332950251160999,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e9aa988e-e59a-44dd-84f7-753b4db11866,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de840614e522a3e09ac2101bfdf233c22694b241d0330ae9b6b380e57712f528,PodSandboxId:482fe6f4bf589ab62482d5cf79482fa58399b1979690163964ed555474c3697a,Metadata:&ContainerMetadata{Name:helm-test,Attempt:0,},Image:&ImageSpec{Image:docker.io/alpine/helm@sha256:9d9fab00e0680f1328924429925595dfe96a68531c8a9c1518d05ee2ad45c36f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:98f6c3b32d565299b035cc773a15cee165942450c44e11cdcaaf370d2c26dc31,State:CONTAINER_EXITED,CreatedAt:1726332903528893715,Labels:map[string]string{io.kubernetes.container.name: helm-test,io.kubernetes.pod.name: helm-test,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1cdb3f93-4cf1-490b-96ad-b97d53f51435,},Annotations:map[string]string{io.kubernetes.container.hash: a6a7e31c,io.kubernetes.
container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a5f5c128449d5b555e8175a0ae8a4e90a5dc7e3e94ab57fac96369cd26b152d7,PodSandboxId:a60ea84d6864a79ae3b20da095944410520b21095a613f589e28b7606d32e62b,Metadata:&ContainerMetadata{Name:helper-pod,Attempt:0,},Image:&ImageSpec{Image:a416a98b71e224a31ee99cff8e16063554498227d2b696152a9c3e0aa65e5824,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a416a98b71e224a31ee99cff8e16063554498227d2b696152a9c3e0aa65e5824,State:CONTAINER_EXITED,CreatedAt:1726332902046816859,Labels:map[string]string{io.kubernetes.container.name: helper-pod,io.kubernetes.pod.name: helper-pod-delete-pvc-065cb3df-7fd3-4993-9a34-5c093c32d00a,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 2d63fd01-0720-4db9-8d9f-72f4224779b4,},Annotations:map[string]string{io.kubernetes.container.hash: 973dbf55
,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b1fc29dced5ee78fdeef5ed32bd7516882b6f3d25bf63964b7313e5b1a180188,PodSandboxId:5ac3aa6b762eab56fc54827e80aa928dfdebefbe7ea497526be6db2fe05f6299,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1726332399300943119,Labels:map[string]string{io.kubernetes.container.name: gcp-auth,io.kubernetes.pod.name: gcp-auth-89d5ffd79-smf6s,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: d9222c8a-bc84-4c20-b546-3034abbe136c,},Annotations:map[string]string{io.kubernetes.container.ha
sh: 91308b2f,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dc19f66cd0016b566ce0b53728eeefd352d3b2bfd9d0ed9494d6f451c2a389b9,PodSandboxId:544f7ca779a956ad3b90666d8695284754fe898ca5666e34c80a680bb6338b4c,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:401d25a09ee8fe9fd9d33c5051531e8ebfa4ded95ff09830af8cc48c8e5aeaa6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a80c8fd6e52292d38d4e58453f310d612da59d802a3b62f4b88a21c50178f7ab,State:CONTAINER_RUNNING,CreatedAt:1726332393453741266,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-bc57996ff-hxnf6,io.kubernetes.pod.namespace: ingress-ngin
x,io.kubernetes.pod.uid: d7be8055-0e55-4f2c-8b12-4eb662eb1f12,},Annotations:map[string]string{io.kubernetes.container.hash: bbf80d3,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:22ac3510c9f6c1dea06ada5fbab155f33ab2f7e362c024a53f9eb549848d590d,PodSandboxId:8895260d1ecf95cc546e0a3a5fe468cc59c47341f20b34592818f6875324ebe4,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fada
ef628db3e6457aa012,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1726332379159744959,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-8zsm9,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 6f9dfddf-322e-4827-aabc-f4ce5421023d,},Annotations:map[string]string{io.kubernetes.container.hash: eb970c83,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e14ff6cfbdcb2b0b82ee5e2e8f2ee08b0aa163afbc9804a1b4c2d4e6a1fb1901,PodSandboxId:d3b092d9b1ce5c50aa70440b30436322c5cd6f32a9540386f94dfb977e9c1f68,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0
e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1726332378999567483,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-5rv5k,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 60f1a90d-ea04-408a-a27a-bd202e3b8875,},Annotations:map[string]string{io.kubernetes.container.hash: c5cfc092,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac43b158f15f6458d8100d369eec4b21fe21270d1da088979f2dec49b7bf6be9,PodSandboxId:b34408bfe46509036c9fd74c816e83c32927e242f8c879cfdd0662de374b7438,Metadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&ImageSpec{Image:docker.io/marcnuri/yakd@sha256:8
ebd1692ed5271719f13b728d9af7acb839aa04821e931c8993d908ad68b69fd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7e3a3eeaf5ed4edd2279898cff978fbf31e46891773de4113e6437fa6d73fe6,State:CONTAINER_RUNNING,CreatedAt:1726332360873788416,Labels:map[string]string{io.kubernetes.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-67d98fc6b-6w892,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: 345e6c36-623a-477e-9c8c-38b577dc887d,},Annotations:map[string]string{io.kubernetes.container.hash: e656c288,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e8c78f14b17e7a8079ad435e0d378a510d0f1a1d0e67507457d582229b0dc7f8,PodSandboxId:1aa3f3cb51004f04eaa4495d7b5529a80931a47297b10c4983a9dd325aac62e6,Metad
ata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1726332355816734887,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-zpthv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5adc8bfb-2fb3-4e13-8b04-98e98afe35a9,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:50b4273
0d0ba5c470ef249db0273bf7acdbb0cea0427d5e6c484849d04ab46a3,PodSandboxId:e5f45121454c3a27b0095a2b55c991da6c1e65ac2c0d89d9e2f8ba2459376c34,Metadata:&ContainerMetadata{Name:nvidia-device-plugin-ctr,Attempt:0,},Image:&ImageSpec{Image:nvcr.io/nvidia/k8s-device-plugin@sha256:ed39e22c8b71343fb996737741a99da88ce6c75dd83b5c520e0b3d8e8a884c47,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:159abe21a6880acafcba64b5e25c48b3e74134ca6823dc553a29c127693ace3e,State:CONTAINER_RUNNING,CreatedAt:1726332345705459151,Labels:map[string]string{io.kubernetes.container.name: nvidia-device-plugin-ctr,io.kubernetes.pod.name: nvidia-device-plugin-daemonset-v9pgt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3f1896cc-99c7-4c98-8b64-9e40965c553b,},Annotations:map[string]string{io.kubernetes.container.hash: 7c4b2818,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.p
od.terminationGracePeriod: 30,},},&Container{Id:4617d458fc0fa49e9385acca016dcbf772360248792914098be67dcd9bdd1ee4,PodSandboxId:bbe08984edd14a3061e218776d791d5d60bbd70e0559e18a4b490daad6b022eb,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1726332327114052376,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9bd2d610-b1b0-4b09-a8f9-29d24b8f3e18,},Annotations:map[string]string{io.kubernetes.container.hash: 8778d474,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7f90cf12b43136552be3a9facb2fe5b7e9005a06d15f676d8f77c8fc53ddcb5a,PodSandboxId:5527e3f395706db407371bb2608d8833aefcda9f876b3d2d02c803c0c8b8952d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726332317456742589,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 042983c1-0076-46d0-8022-ff8afde6de61,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.
container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b39fe7c77bdab915cba2003e37d8a932f93c89d2816b66c662cd2b87f189195f,PodSandboxId:1c0a11c1d7f7c5b63da3f4f21f46979730ced4066e78410da9748fd11ab03ced,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726332314086297257,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-9p6z9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b60a487-876e-49a1-9a02-ff29269e6cd9,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\
"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7636b49f23d35f5a9fc075a76348fc31ad18da955d4e73fbad3d24534ef9282a,PodSandboxId:816f86f6b29aba5b03d04c7c4802bafefe963ba526eeadfef60a9836d272fa1f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726332313248815620,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ll2cd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
77c4fbce-cceb-4918-871f-5d17932941f1,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e180103456d123610ca2b41dad9814f3b54f68e8eaaea458cbea9621834f9f2,PodSandboxId:25abc346c251645814f2dd057edf9854dd7da6fc11dfd725d187a5f4ca0cae6d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726332300335194420,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-996992,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 72d75e1235958
269ae116b8dd976eed9,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:62ccf13035320215b184582540baa2f498de301a8e3c9d4e3ad1b2a3595c2c8d,PodSandboxId:ce41d60ed0525314b5cc45be11a88a4496c36c13e3962371b7f675683952dd36,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726332300342876293,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-996992,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b56682c56107b7e0cb4de8d53e660
eb,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:244c994b666b95b76ae5dd25d00b91d997db1974a4a83407b48f9d78d71cf309,PodSandboxId:15fa01d2627fb717b0a294d421654f46fec23ec17c8a8fcaaf18285b094f7812,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726332300313273026,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-996992,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82e0846c71461dab7faa1185c43b9171,},Annotations:map[string]string{io.kubernete
s.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6da48572a3f2741574a1da7660f53f17bb817d3e49e25fd9f41ecf486aa65d5,PodSandboxId:476c6d893727453e5ceaf6f071c23932b5a6bb6e8630053bd70d7c0e05079db0,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726332300190893967,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-996992,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2361d2ae563c8cc1b1f997c60c5996b9,},Annotations:map[string]string{io
.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f6fbb191-aced-4231-87ca-6ff59ff9b6fc name=/runtime.v1.RuntimeService/ListContainers
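
The RuntimeService/ListContainers requests and responses captured above are ordinary CRI calls against the node's crio socket. A minimal sketch of issuing the same request from Go, assuming the k8s.io/cri-api and google.golang.org/grpc modules are available and that the socket path from the node's cri-socket annotation further down in this report (unix:///var/run/crio/crio.sock) applies:

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	"google.golang.org/grpc"
    	"google.golang.org/grpc/credentials/insecure"
    	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
    	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
    	defer cancel()

    	// Socket path is an assumption taken from the node annotations in this
    	// report; adjust if the runtime is configured differently.
    	conn, err := grpc.DialContext(ctx, "unix:///var/run/crio/crio.sock",
    		grpc.WithTransportCredentials(insecure.NewCredentials()))
    	if err != nil {
    		panic(err)
    	}
    	defer conn.Close()

    	client := runtimeapi.NewRuntimeServiceClient(conn)

    	// An empty filter reproduces the "No filters were applied, returning
    	// full container list" behaviour seen in the crio debug entries above.
    	resp, err := client.ListContainers(ctx, &runtimeapi.ListContainersRequest{
    		Filter: &runtimeapi.ContainerFilter{},
    	})
    	if err != nil {
    		panic(err)
    	}
    	for _, c := range resp.Containers {
    		fmt.Printf("%s  %-25s  %s\n", c.Id[:13], c.Metadata.GetName(), c.State)
    	}
    }

This is only a sketch for cross-checking the dump; the container status table that follows is the same listing rendered by the report itself.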
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                       ATTEMPT             POD ID              POD
	a2c842e27b9de       docker.io/library/nginx@sha256:074604130336e3c431b7c6b5b551b5a6ae5b67db13b3d223c6db638f85c7ff56                              10 seconds ago      Running             nginx                      0                   8164a72938eec       nginx
	de840614e522a       docker.io/alpine/helm@sha256:9d9fab00e0680f1328924429925595dfe96a68531c8a9c1518d05ee2ad45c36f                                57 seconds ago      Exited              helm-test                  0                   482fe6f4bf589       helm-test
	a5f5c128449d5       a416a98b71e224a31ee99cff8e16063554498227d2b696152a9c3e0aa65e5824                                                             59 seconds ago      Exited              helper-pod                 0                   a60ea84d6864a       helper-pod-delete-pvc-065cb3df-7fd3-4993-9a34-5c093c32d00a
	b1fc29dced5ee       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b                 9 minutes ago       Running             gcp-auth                   0                   5ac3aa6b762ea       gcp-auth-89d5ffd79-smf6s
	dc19f66cd0016       registry.k8s.io/ingress-nginx/controller@sha256:401d25a09ee8fe9fd9d33c5051531e8ebfa4ded95ff09830af8cc48c8e5aeaa6             9 minutes ago       Running             controller                 0                   544f7ca779a95       ingress-nginx-controller-bc57996ff-hxnf6
	22ac3510c9f6c       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012   9 minutes ago       Exited              patch                      0                   8895260d1ecf9       ingress-nginx-admission-patch-8zsm9
	e14ff6cfbdcb2       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012   9 minutes ago       Exited              create                     0                   d3b092d9b1ce5       ingress-nginx-admission-create-5rv5k
	ac43b158f15f6       docker.io/marcnuri/yakd@sha256:8ebd1692ed5271719f13b728d9af7acb839aa04821e931c8993d908ad68b69fd                              10 minutes ago      Running             yakd                       0                   b34408bfe4650       yakd-dashboard-67d98fc6b-6w892
	e8c78f14b17e7       registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a        10 minutes ago      Running             metrics-server             0                   1aa3f3cb51004       metrics-server-84c5f94fbc-zpthv
	50b42730d0ba5       nvcr.io/nvidia/k8s-device-plugin@sha256:ed39e22c8b71343fb996737741a99da88ce6c75dd83b5c520e0b3d8e8a884c47                     10 minutes ago      Running             nvidia-device-plugin-ctr   0                   e5f45121454c3       nvidia-device-plugin-daemonset-v9pgt
	4617d458fc0fa       gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab             10 minutes ago      Running             minikube-ingress-dns       0                   bbe08984edd14       kube-ingress-dns-minikube
	7f90cf12b4313       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             10 minutes ago      Running             storage-provisioner        0                   5527e3f395706       storage-provisioner
	b39fe7c77bdab       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                                             10 minutes ago      Running             coredns                    0                   1c0a11c1d7f7c       coredns-7c65d6cfc9-9p6z9
	7636b49f23d35       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                                             10 minutes ago      Running             kube-proxy                 0                   816f86f6b29ab       kube-proxy-ll2cd
	62ccf13035320       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                                             11 minutes ago      Running             kube-scheduler             0                   ce41d60ed0525       kube-scheduler-addons-996992
	9e180103456d1       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                                             11 minutes ago      Running             kube-apiserver             0                   25abc346c2516       kube-apiserver-addons-996992
	244c994b666b9       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                                             11 minutes ago      Running             etcd                       0                   15fa01d2627fb       etcd-addons-996992
	b6da48572a3f2       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                                             11 minutes ago      Running             kube-controller-manager    0                   476c6d8937274       kube-controller-manager-addons-996992
	
	
	==> coredns [b39fe7c77bdab915cba2003e37d8a932f93c89d2816b66c662cd2b87f189195f] <==
	[INFO] 127.0.0.1:41202 - 28347 "HINFO IN 1673696776001178715.7846265792048933670. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.013145705s
	[INFO] 10.244.0.6:33528 - 34854 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000861082s
	[INFO] 10.244.0.6:33528 - 56874 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000509102s
	[INFO] 10.244.0.6:49882 - 44252 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000179055s
	[INFO] 10.244.0.6:49882 - 26330 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000086967s
	[INFO] 10.244.0.6:56229 - 8877 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000096878s
	[INFO] 10.244.0.6:56229 - 29867 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000094082s
	[INFO] 10.244.0.6:60530 - 59893 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000128321s
	[INFO] 10.244.0.6:60530 - 13042 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000157038s
	[INFO] 10.244.0.6:59365 - 64212 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000145076s
	[INFO] 10.244.0.6:59365 - 23496 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000053277s
	[INFO] 10.244.0.6:38693 - 47172 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000089079s
	[INFO] 10.244.0.6:38693 - 34881 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000266922s
	[INFO] 10.244.0.6:57815 - 40259 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000061127s
	[INFO] 10.244.0.6:57815 - 21061 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000054151s
	[INFO] 10.244.0.6:54487 - 49983 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000049761s
	[INFO] 10.244.0.6:54487 - 43833 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000105815s
	[INFO] 10.244.0.22:49719 - 23493 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000476893s
	[INFO] 10.244.0.22:58157 - 28044 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000101631s
	[INFO] 10.244.0.22:49755 - 34273 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000139903s
	[INFO] 10.244.0.22:34695 - 62237 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000115272s
	[INFO] 10.244.0.22:38487 - 8705 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000122294s
	[INFO] 10.244.0.22:34286 - 15998 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.00008471s
	[INFO] 10.244.0.22:36588 - 36023 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.002660038s
	[INFO] 10.244.0.22:43999 - 38790 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 610 0.000715506s
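
The repeated NXDOMAIN answers for registry.kube-system.svc.cluster.local.* earlier in this log are ordinary search-path expansion by the pod's resolver before the fully qualified name answers NOERROR. A minimal sketch of triggering the same lookup from Go inside a pod, assuming the standard kubelet-generated /etc/resolv.conf (cluster search suffixes plus ndots:5), which this report does not include verbatim:

    package main

    import (
    	"context"
    	"fmt"
    	"net"
    	"time"
    )

    func main() {
    	// With the usual pod resolv.conf, the name below has fewer dots than
    	// ndots:5, so the resolver walks the search suffixes first -- the
    	// NXDOMAIN lines in the coredns log above -- before the fully
    	// qualified form returns NOERROR.
    	ctx, cancel := context.WithTimeout(context.Background(), 3*time.Second)
    	defer cancel()

    	addrs, err := net.DefaultResolver.LookupIPAddr(ctx, "registry.kube-system.svc.cluster.local")
    	if err != nil {
    		fmt.Println("lookup failed:", err)
    		return
    	}
    	for _, a := range addrs {
    		fmt.Println(a.IP)
    	}
    }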
	
	
	==> describe nodes <==
	Name:               addons-996992
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-996992
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fbeeb744274463b05401c917e5ab21bbaf5ef95a
	                    minikube.k8s.io/name=addons-996992
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_14T16_45_06_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-996992
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 14 Sep 2024 16:45:02 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-996992
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 14 Sep 2024 16:55:57 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 14 Sep 2024 16:55:38 +0000   Sat, 14 Sep 2024 16:45:01 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 14 Sep 2024 16:55:38 +0000   Sat, 14 Sep 2024 16:45:01 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 14 Sep 2024 16:55:38 +0000   Sat, 14 Sep 2024 16:45:01 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 14 Sep 2024 16:55:38 +0000   Sat, 14 Sep 2024 16:45:06 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.189
	  Hostname:    addons-996992
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	System Info:
	  Machine ID:                 5e2b58bc38a04bd6877d6321c8c25636
	  System UUID:                5e2b58bc-38a0-4bd6-877d-6321c8c25636
	  Boot ID:                    bc515e37-5984-41bc-90ff-4a341c7992e5
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (15 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m16s
	  default                     nginx                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         15s
	  gcp-auth                    gcp-auth-89d5ffd79-smf6s                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  ingress-nginx               ingress-nginx-controller-bc57996ff-hxnf6    100m (5%)     0 (0%)      90Mi (2%)        0 (0%)         10m
	  kube-system                 coredns-7c65d6cfc9-9p6z9                    100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     10m
	  kube-system                 etcd-addons-996992                          100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         10m
	  kube-system                 kube-apiserver-addons-996992                250m (12%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-addons-996992       200m (10%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-proxy-ll2cd                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-scheduler-addons-996992                100m (5%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 metrics-server-84c5f94fbc-zpthv             100m (5%)     0 (0%)      200Mi (5%)       0 (0%)         10m
	  kube-system                 nvidia-device-plugin-daemonset-v9pgt        0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  yakd-dashboard              yakd-dashboard-67d98fc6b-6w892              0 (0%)        0 (0%)      128Mi (3%)       256Mi (6%)     10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             588Mi (15%)  426Mi (11%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 10m   kube-proxy       
	  Normal  Starting                 10m   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  10m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  10m   kubelet          Node addons-996992 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m   kubelet          Node addons-996992 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10m   kubelet          Node addons-996992 status is now: NodeHasSufficientPID
	  Normal  NodeReady                10m   kubelet          Node addons-996992 status is now: NodeReady
	  Normal  RegisteredNode           10m   node-controller  Node addons-996992 event: Registered Node addons-996992 in Controller
	
	
	==> dmesg <==
	[  +5.326354] systemd-fstab-generator[1339]: Ignoring "noauto" option for root device
	[  +0.146373] kauditd_printk_skb: 18 callbacks suppressed
	[  +5.018894] kauditd_printk_skb: 96 callbacks suppressed
	[  +5.067659] kauditd_printk_skb: 163 callbacks suppressed
	[  +6.057467] kauditd_printk_skb: 65 callbacks suppressed
	[ +26.543298] kauditd_printk_skb: 4 callbacks suppressed
	[Sep14 16:46] kauditd_printk_skb: 34 callbacks suppressed
	[  +5.726173] kauditd_printk_skb: 6 callbacks suppressed
	[ +13.858248] kauditd_printk_skb: 17 callbacks suppressed
	[  +5.366113] kauditd_printk_skb: 49 callbacks suppressed
	[  +7.648867] kauditd_printk_skb: 56 callbacks suppressed
	[  +5.829438] kauditd_printk_skb: 6 callbacks suppressed
	[  +5.753456] kauditd_printk_skb: 16 callbacks suppressed
	[Sep14 16:47] kauditd_printk_skb: 40 callbacks suppressed
	[Sep14 16:48] kauditd_printk_skb: 28 callbacks suppressed
	[Sep14 16:49] kauditd_printk_skb: 28 callbacks suppressed
	[Sep14 16:52] kauditd_printk_skb: 28 callbacks suppressed
	[Sep14 16:54] kauditd_printk_skb: 28 callbacks suppressed
	[  +5.088825] kauditd_printk_skb: 19 callbacks suppressed
	[  +5.292544] kauditd_printk_skb: 15 callbacks suppressed
	[Sep14 16:55] kauditd_printk_skb: 62 callbacks suppressed
	[  +6.127400] kauditd_printk_skb: 12 callbacks suppressed
	[ +26.498820] kauditd_printk_skb: 15 callbacks suppressed
	[  +8.490747] kauditd_printk_skb: 33 callbacks suppressed
	[  +6.865254] kauditd_printk_skb: 29 callbacks suppressed
	
	
	==> etcd [244c994b666b95b76ae5dd25d00b91d997db1974a4a83407b48f9d78d71cf309] <==
	{"level":"warn","ts":"2024-09-14T16:46:32.640655Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-14T16:46:32.248295Z","time spent":"392.243784ms","remote":"127.0.0.1:39862","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":764,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/events/gadget/gadget-h8s8m.17f52a2909e15faf\" mod_revision:1097 > success:<request_put:<key:\"/registry/events/gadget/gadget-h8s8m.17f52a2909e15faf\" value_size:693 lease:394788383440565305 >> failure:<request_range:<key:\"/registry/events/gadget/gadget-h8s8m.17f52a2909e15faf\" > >"}
	{"level":"info","ts":"2024-09-14T16:46:32.640798Z","caller":"traceutil/trace.go:171","msg":"trace[441675241] linearizableReadLoop","detail":"{readStateIndex:1136; appliedIndex:1136; }","duration":"349.386504ms","start":"2024-09-14T16:46:32.291358Z","end":"2024-09-14T16:46:32.640744Z","steps":["trace[441675241] 'read index received'  (duration: 349.380192ms)","trace[441675241] 'applied index is now lower than readState.Index'  (duration: 5.398µs)"],"step_count":2}
	{"level":"warn","ts":"2024-09-14T16:46:32.641033Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"349.667066ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-14T16:46:32.641978Z","caller":"traceutil/trace.go:171","msg":"trace[1819571857] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1100; }","duration":"350.610857ms","start":"2024-09-14T16:46:32.291353Z","end":"2024-09-14T16:46:32.641964Z","steps":["trace[1819571857] 'agreement among raft nodes before linearized reading'  (duration: 349.591549ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-14T16:46:32.642026Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-14T16:46:32.291311Z","time spent":"350.697996ms","remote":"127.0.0.1:39968","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":28,"request content":"key:\"/registry/pods\" limit:1 "}
	{"level":"warn","ts":"2024-09-14T16:46:32.642952Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"110.614889ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-14T16:46:32.643050Z","caller":"traceutil/trace.go:171","msg":"trace[537034687] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1100; }","duration":"110.720841ms","start":"2024-09-14T16:46:32.532320Z","end":"2024-09-14T16:46:32.643041Z","steps":["trace[537034687] 'agreement among raft nodes before linearized reading'  (duration: 110.540885ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-14T16:46:32.643209Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"334.253854ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-14T16:46:32.643434Z","caller":"traceutil/trace.go:171","msg":"trace[994433346] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1100; }","duration":"334.684033ms","start":"2024-09-14T16:46:32.308725Z","end":"2024-09-14T16:46:32.643410Z","steps":["trace[994433346] 'agreement among raft nodes before linearized reading'  (duration: 333.442918ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-14T16:46:32.643688Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-14T16:46:32.308692Z","time spent":"334.981095ms","remote":"127.0.0.1:39776","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":28,"request content":"key:\"/registry/health\" "}
	{"level":"warn","ts":"2024-09-14T16:46:43.987795Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"310.329366ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-14T16:46:43.987957Z","caller":"traceutil/trace.go:171","msg":"trace[82175678] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1170; }","duration":"310.494191ms","start":"2024-09-14T16:46:43.677445Z","end":"2024-09-14T16:46:43.987939Z","steps":["trace[82175678] 'range keys from in-memory index tree'  (duration: 310.27123ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-14T16:46:43.988032Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-14T16:46:43.677409Z","time spent":"310.610725ms","remote":"127.0.0.1:39968","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":28,"request content":"key:\"/registry/pods\" limit:1 "}
	{"level":"warn","ts":"2024-09-14T16:46:43.988534Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"100.452534ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1113"}
	{"level":"info","ts":"2024-09-14T16:46:43.988944Z","caller":"traceutil/trace.go:171","msg":"trace[925455638] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:1170; }","duration":"100.863861ms","start":"2024-09-14T16:46:43.888062Z","end":"2024-09-14T16:46:43.988926Z","steps":["trace[925455638] 'range keys from in-memory index tree'  (duration: 100.282057ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-14T16:55:01.401239Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1531}
	{"level":"info","ts":"2024-09-14T16:55:01.453343Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1531,"took":"51.605059ms","hash":2194584676,"current-db-size-bytes":6504448,"current-db-size":"6.5 MB","current-db-size-in-use-bytes":3567616,"current-db-size-in-use":"3.6 MB"}
	{"level":"info","ts":"2024-09-14T16:55:01.453889Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2194584676,"revision":1531,"compact-revision":-1}
	{"level":"info","ts":"2024-09-14T16:55:06.791539Z","caller":"traceutil/trace.go:171","msg":"trace[1480773213] linearizableReadLoop","detail":"{readStateIndex:2215; appliedIndex:2214; }","duration":"121.517894ms","start":"2024-09-14T16:55:06.669994Z","end":"2024-09-14T16:55:06.791512Z","steps":["trace[1480773213] 'read index received'  (duration: 121.338219ms)","trace[1480773213] 'applied index is now lower than readState.Index'  (duration: 179.198µs)"],"step_count":2}
	{"level":"warn","ts":"2024-09-14T16:55:06.791772Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"121.728846ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/deployments/kube-system/tiller-deploy\" ","response":"range_response_count:1 size:5015"}
	{"level":"info","ts":"2024-09-14T16:55:06.791805Z","caller":"traceutil/trace.go:171","msg":"trace[783623875] range","detail":"{range_begin:/registry/deployments/kube-system/tiller-deploy; range_end:; response_count:1; response_revision:2062; }","duration":"121.808688ms","start":"2024-09-14T16:55:06.669990Z","end":"2024-09-14T16:55:06.791799Z","steps":["trace[783623875] 'agreement among raft nodes before linearized reading'  (duration: 121.605717ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-14T16:55:06.792045Z","caller":"traceutil/trace.go:171","msg":"trace[1054613937] transaction","detail":"{read_only:false; response_revision:2062; number_of_response:1; }","duration":"147.610829ms","start":"2024-09-14T16:55:06.644423Z","end":"2024-09-14T16:55:06.792034Z","steps":["trace[1054613937] 'process raft request'  (duration: 146.958073ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-14T16:55:10.248913Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"171.394646ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-14T16:55:10.249062Z","caller":"traceutil/trace.go:171","msg":"trace[2140105305] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:2098; }","duration":"171.569728ms","start":"2024-09-14T16:55:10.077481Z","end":"2024-09-14T16:55:10.249051Z","steps":["trace[2140105305] 'agreement among raft nodes before linearized reading'  (duration: 171.37003ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-14T16:55:10.248791Z","caller":"traceutil/trace.go:171","msg":"trace[1932753122] linearizableReadLoop","detail":"{readStateIndex:2253; appliedIndex:2252; }","duration":"171.2342ms","start":"2024-09-14T16:55:10.077485Z","end":"2024-09-14T16:55:10.248719Z","steps":["trace[1932753122] 'read index received'  (duration: 72.664081ms)","trace[1932753122] 'applied index is now lower than readState.Index'  (duration: 98.569685ms)"],"step_count":2}
	
	
	==> gcp-auth [b1fc29dced5ee78fdeef5ed32bd7516882b6f3d25bf63964b7313e5b1a180188] <==
	2024/09/14 16:46:39 GCP Auth Webhook started!
	2024/09/14 16:46:45 Ready to marshal response ...
	2024/09/14 16:46:45 Ready to write response ...
	2024/09/14 16:46:45 Ready to marshal response ...
	2024/09/14 16:46:45 Ready to write response ...
	2024/09/14 16:46:45 Ready to marshal response ...
	2024/09/14 16:46:45 Ready to write response ...
	2024/09/14 16:54:48 Ready to marshal response ...
	2024/09/14 16:54:48 Ready to write response ...
	2024/09/14 16:54:48 Ready to marshal response ...
	2024/09/14 16:54:48 Ready to write response ...
	2024/09/14 16:54:58 Ready to marshal response ...
	2024/09/14 16:54:58 Ready to write response ...
	2024/09/14 16:54:59 Ready to marshal response ...
	2024/09/14 16:54:59 Ready to write response ...
	2024/09/14 16:55:00 Ready to marshal response ...
	2024/09/14 16:55:00 Ready to write response ...
	2024/09/14 16:55:02 Ready to marshal response ...
	2024/09/14 16:55:02 Ready to write response ...
	2024/09/14 16:55:27 Ready to marshal response ...
	2024/09/14 16:55:27 Ready to write response ...
	2024/09/14 16:55:45 Ready to marshal response ...
	2024/09/14 16:55:45 Ready to write response ...
	
	
	==> kernel <==
	 16:56:01 up 11 min,  0 users,  load average: 1.13, 0.82, 0.58
	Linux addons-996992 5.10.207 #1 SMP Sat Sep 14 07:35:46 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [9e180103456d123610ca2b41dad9814f3b54f68e8eaaea458cbea9621834f9f2] <==
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E0914 16:47:05.677339       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.103.47.80:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.103.47.80:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.103.47.80:443: connect: connection refused" logger="UnhandledError"
	E0914 16:47:05.687323       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.103.47.80:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.103.47.80:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.103.47.80:443: connect: connection refused" logger="UnhandledError"
	I0914 16:47:05.827073       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E0914 16:55:17.021558       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I0914 16:55:17.639974       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0914 16:55:45.238956       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0914 16:55:45.239007       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0914 16:55:45.263430       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0914 16:55:45.263481       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0914 16:55:45.291930       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0914 16:55:45.291979       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0914 16:55:45.299265       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0914 16:55:45.299310       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0914 16:55:45.371396       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0914 16:55:45.371502       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0914 16:55:45.847374       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0914 16:55:46.052267       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.97.33.252"}
	W0914 16:55:46.300335       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0914 16:55:46.379369       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0914 16:55:46.416041       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I0914 16:55:51.311005       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0914 16:55:52.345185       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	
	
	==> kube-controller-manager [b6da48572a3f2741574a1da7660f53f17bb817d3e49e25fd9f41ecf486aa65d5] <==
	E0914 16:55:47.909712       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0914 16:55:49.420699       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0914 16:55:49.420752       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0914 16:55:49.744331       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="local-path-storage"
	W0914 16:55:49.749627       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0914 16:55:49.749662       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0914 16:55:50.079322       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0914 16:55:50.079375       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	E0914 16:55:52.346666       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0914 16:55:53.570235       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0914 16:55:53.570273       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0914 16:55:53.876387       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0914 16:55:53.876529       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0914 16:55:54.042565       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0914 16:55:54.042625       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0914 16:55:55.547913       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0914 16:55:55.547952       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0914 16:55:56.690040       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0914 16:55:56.690226       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0914 16:55:59.993858       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/registry-66c9cd494c" duration="7.064µs"
	W0914 16:56:00.847328       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0914 16:56:00.847381       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0914 16:56:01.050567       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0914 16:56:01.050618       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0914 16:56:01.431014       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="gadget"
	
	
	==> kube-proxy [7636b49f23d35f5a9fc075a76348fc31ad18da955d4e73fbad3d24534ef9282a] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0914 16:45:15.590221       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0914 16:45:15.599785       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.189"]
	E0914 16:45:15.599893       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0914 16:45:15.658278       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0914 16:45:15.658320       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0914 16:45:15.658346       1 server_linux.go:169] "Using iptables Proxier"
	I0914 16:45:15.663334       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0914 16:45:15.663614       1 server.go:483] "Version info" version="v1.31.1"
	I0914 16:45:15.663626       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0914 16:45:15.666732       1 config.go:199] "Starting service config controller"
	I0914 16:45:15.666758       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0914 16:45:15.666776       1 config.go:105] "Starting endpoint slice config controller"
	I0914 16:45:15.666780       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0914 16:45:15.667288       1 config.go:328] "Starting node config controller"
	I0914 16:45:15.667296       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0914 16:45:15.768165       1 shared_informer.go:320] Caches are synced for node config
	I0914 16:45:15.768221       1 shared_informer.go:320] Caches are synced for service config
	I0914 16:45:15.768261       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [62ccf13035320215b184582540baa2f498de301a8e3c9d4e3ad1b2a3595c2c8d] <==
	W0914 16:45:03.820736       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0914 16:45:03.820857       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0914 16:45:03.832104       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0914 16:45:03.832138       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0914 16:45:03.843716       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0914 16:45:03.843762       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0914 16:45:03.866418       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0914 16:45:03.866491       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0914 16:45:03.875513       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0914 16:45:03.875608       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0914 16:45:03.916659       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0914 16:45:03.917144       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0914 16:45:03.954059       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0914 16:45:03.954146       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0914 16:45:04.032670       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0914 16:45:04.032716       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0914 16:45:04.080506       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0914 16:45:04.080598       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0914 16:45:04.114758       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0914 16:45:04.115807       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0914 16:45:04.126730       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0914 16:45:04.126899       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0914 16:45:04.178995       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0914 16:45:04.179383       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0914 16:45:06.562975       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 14 16:55:57 addons-996992 kubelet[1212]: E0914 16:55:57.607455    1212 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="9262e4af-385c-4c58-a62e-b55a378ea465"
	Sep 14 16:55:59 addons-996992 kubelet[1212]: I0914 16:55:59.533749    1212 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4b4fce6b94abb94b20879864fbeb8396863c654a964164f59d2b2994d81dda54"
	Sep 14 16:55:59 addons-996992 kubelet[1212]: I0914 16:55:59.588996    1212 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qbjlf\" (UniqueName: \"kubernetes.io/projected/1cdb6f95-7b79-43af-9d66-f22ca111afb4-kube-api-access-qbjlf\") pod \"1cdb6f95-7b79-43af-9d66-f22ca111afb4\" (UID: \"1cdb6f95-7b79-43af-9d66-f22ca111afb4\") "
	Sep 14 16:55:59 addons-996992 kubelet[1212]: I0914 16:55:59.589305    1212 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/1cdb6f95-7b79-43af-9d66-f22ca111afb4-gcp-creds\") pod \"1cdb6f95-7b79-43af-9d66-f22ca111afb4\" (UID: \"1cdb6f95-7b79-43af-9d66-f22ca111afb4\") "
	Sep 14 16:55:59 addons-996992 kubelet[1212]: I0914 16:55:59.589408    1212 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1cdb6f95-7b79-43af-9d66-f22ca111afb4-gcp-creds" (OuterVolumeSpecName: "gcp-creds") pod "1cdb6f95-7b79-43af-9d66-f22ca111afb4" (UID: "1cdb6f95-7b79-43af-9d66-f22ca111afb4"). InnerVolumeSpecName "gcp-creds". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Sep 14 16:55:59 addons-996992 kubelet[1212]: I0914 16:55:59.596509    1212 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1cdb6f95-7b79-43af-9d66-f22ca111afb4-kube-api-access-qbjlf" (OuterVolumeSpecName: "kube-api-access-qbjlf") pod "1cdb6f95-7b79-43af-9d66-f22ca111afb4" (UID: "1cdb6f95-7b79-43af-9d66-f22ca111afb4"). InnerVolumeSpecName "kube-api-access-qbjlf". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 14 16:55:59 addons-996992 kubelet[1212]: I0914 16:55:59.689792    1212 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-qbjlf\" (UniqueName: \"kubernetes.io/projected/1cdb6f95-7b79-43af-9d66-f22ca111afb4-kube-api-access-qbjlf\") on node \"addons-996992\" DevicePath \"\""
	Sep 14 16:55:59 addons-996992 kubelet[1212]: I0914 16:55:59.689826    1212 reconciler_common.go:288] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/1cdb6f95-7b79-43af-9d66-f22ca111afb4-gcp-creds\") on node \"addons-996992\" DevicePath \"\""
	Sep 14 16:56:00 addons-996992 kubelet[1212]: I0914 16:56:00.394809    1212 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zqz64\" (UniqueName: \"kubernetes.io/projected/1fa84874-319a-4e4a-9126-b618e477b31e-kube-api-access-zqz64\") pod \"1fa84874-319a-4e4a-9126-b618e477b31e\" (UID: \"1fa84874-319a-4e4a-9126-b618e477b31e\") "
	Sep 14 16:56:00 addons-996992 kubelet[1212]: I0914 16:56:00.397206    1212 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1fa84874-319a-4e4a-9126-b618e477b31e-kube-api-access-zqz64" (OuterVolumeSpecName: "kube-api-access-zqz64") pod "1fa84874-319a-4e4a-9126-b618e477b31e" (UID: "1fa84874-319a-4e4a-9126-b618e477b31e"). InnerVolumeSpecName "kube-api-access-zqz64". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 14 16:56:00 addons-996992 kubelet[1212]: I0914 16:56:00.496158    1212 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6bmbm\" (UniqueName: \"kubernetes.io/projected/44b082a1-dd9e-4251-a141-6f0578d54a17-kube-api-access-6bmbm\") pod \"44b082a1-dd9e-4251-a141-6f0578d54a17\" (UID: \"44b082a1-dd9e-4251-a141-6f0578d54a17\") "
	Sep 14 16:56:00 addons-996992 kubelet[1212]: I0914 16:56:00.496238    1212 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-zqz64\" (UniqueName: \"kubernetes.io/projected/1fa84874-319a-4e4a-9126-b618e477b31e-kube-api-access-zqz64\") on node \"addons-996992\" DevicePath \"\""
	Sep 14 16:56:00 addons-996992 kubelet[1212]: I0914 16:56:00.501140    1212 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44b082a1-dd9e-4251-a141-6f0578d54a17-kube-api-access-6bmbm" (OuterVolumeSpecName: "kube-api-access-6bmbm") pod "44b082a1-dd9e-4251-a141-6f0578d54a17" (UID: "44b082a1-dd9e-4251-a141-6f0578d54a17"). InnerVolumeSpecName "kube-api-access-6bmbm". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 14 16:56:00 addons-996992 kubelet[1212]: I0914 16:56:00.547306    1212 scope.go:117] "RemoveContainer" containerID="d3d907dff82095b80fe1611a6a555ffea5e3b195cd2818a749af80226711fe76"
	Sep 14 16:56:00 addons-996992 kubelet[1212]: I0914 16:56:00.596784    1212 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-6bmbm\" (UniqueName: \"kubernetes.io/projected/44b082a1-dd9e-4251-a141-6f0578d54a17-kube-api-access-6bmbm\") on node \"addons-996992\" DevicePath \"\""
	Sep 14 16:56:00 addons-996992 kubelet[1212]: I0914 16:56:00.600812    1212 scope.go:117] "RemoveContainer" containerID="d3d907dff82095b80fe1611a6a555ffea5e3b195cd2818a749af80226711fe76"
	Sep 14 16:56:00 addons-996992 kubelet[1212]: E0914 16:56:00.603922    1212 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d3d907dff82095b80fe1611a6a555ffea5e3b195cd2818a749af80226711fe76\": container with ID starting with d3d907dff82095b80fe1611a6a555ffea5e3b195cd2818a749af80226711fe76 not found: ID does not exist" containerID="d3d907dff82095b80fe1611a6a555ffea5e3b195cd2818a749af80226711fe76"
	Sep 14 16:56:00 addons-996992 kubelet[1212]: I0914 16:56:00.604047    1212 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d3d907dff82095b80fe1611a6a555ffea5e3b195cd2818a749af80226711fe76"} err="failed to get container status \"d3d907dff82095b80fe1611a6a555ffea5e3b195cd2818a749af80226711fe76\": rpc error: code = NotFound desc = could not find container \"d3d907dff82095b80fe1611a6a555ffea5e3b195cd2818a749af80226711fe76\": container with ID starting with d3d907dff82095b80fe1611a6a555ffea5e3b195cd2818a749af80226711fe76 not found: ID does not exist"
	Sep 14 16:56:00 addons-996992 kubelet[1212]: I0914 16:56:00.604164    1212 scope.go:117] "RemoveContainer" containerID="d905a76fde881f803097559170b2a56dc6eec8d9e86bfb58a71c1863e778240a"
	Sep 14 16:56:00 addons-996992 kubelet[1212]: I0914 16:56:00.641710    1212 scope.go:117] "RemoveContainer" containerID="d905a76fde881f803097559170b2a56dc6eec8d9e86bfb58a71c1863e778240a"
	Sep 14 16:56:00 addons-996992 kubelet[1212]: E0914 16:56:00.642501    1212 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d905a76fde881f803097559170b2a56dc6eec8d9e86bfb58a71c1863e778240a\": container with ID starting with d905a76fde881f803097559170b2a56dc6eec8d9e86bfb58a71c1863e778240a not found: ID does not exist" containerID="d905a76fde881f803097559170b2a56dc6eec8d9e86bfb58a71c1863e778240a"
	Sep 14 16:56:00 addons-996992 kubelet[1212]: I0914 16:56:00.642562    1212 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d905a76fde881f803097559170b2a56dc6eec8d9e86bfb58a71c1863e778240a"} err="failed to get container status \"d905a76fde881f803097559170b2a56dc6eec8d9e86bfb58a71c1863e778240a\": rpc error: code = NotFound desc = could not find container \"d905a76fde881f803097559170b2a56dc6eec8d9e86bfb58a71c1863e778240a\": container with ID starting with d905a76fde881f803097559170b2a56dc6eec8d9e86bfb58a71c1863e778240a not found: ID does not exist"
	Sep 14 16:56:01 addons-996992 kubelet[1212]: I0914 16:56:01.611262    1212 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1cdb6f95-7b79-43af-9d66-f22ca111afb4" path="/var/lib/kubelet/pods/1cdb6f95-7b79-43af-9d66-f22ca111afb4/volumes"
	Sep 14 16:56:01 addons-996992 kubelet[1212]: I0914 16:56:01.611541    1212 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1fa84874-319a-4e4a-9126-b618e477b31e" path="/var/lib/kubelet/pods/1fa84874-319a-4e4a-9126-b618e477b31e/volumes"
	Sep 14 16:56:01 addons-996992 kubelet[1212]: I0914 16:56:01.611886    1212 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44b082a1-dd9e-4251-a141-6f0578d54a17" path="/var/lib/kubelet/pods/44b082a1-dd9e-4251-a141-6f0578d54a17/volumes"
	
	
	==> storage-provisioner [7f90cf12b43136552be3a9facb2fe5b7e9005a06d15f676d8f77c8fc53ddcb5a] <==
	I0914 16:45:18.537690       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0914 16:45:18.556796       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0914 16:45:18.556868       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0914 16:45:18.586989       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0914 16:45:18.587718       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"89c4a434-eabc-4a8a-9f14-9375f68755f8", APIVersion:"v1", ResourceVersion:"660", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-996992_e9eca151-6b6c-4161-b461-f6f0cd55060d became leader
	I0914 16:45:18.587761       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-996992_e9eca151-6b6c-4161-b461-f6f0cd55060d!
	I0914 16:45:18.789501       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-996992_e9eca151-6b6c-4161-b461-f6f0cd55060d!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-996992 -n addons-996992
helpers_test.go:261: (dbg) Run:  kubectl --context addons-996992 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox ingress-nginx-admission-create-5rv5k ingress-nginx-admission-patch-8zsm9
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Registry]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-996992 describe pod busybox ingress-nginx-admission-create-5rv5k ingress-nginx-admission-patch-8zsm9
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-996992 describe pod busybox ingress-nginx-admission-create-5rv5k ingress-nginx-admission-patch-8zsm9: exit status 1 (80.873693ms)

                                                
                                                
-- stdout --
	Name:             busybox
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-996992/192.168.39.189
	Start Time:       Sat, 14 Sep 2024 16:46:45 +0000
	Labels:           integration-test=busybox
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.23
	IPs:
	  IP:  10.244.0.23
	Containers:
	  busybox:
	    Container ID:  
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      sleep
	      3600
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-6dtsq (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-6dtsq:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                     From               Message
	  ----     ------     ----                    ----               -------
	  Normal   Scheduled  9m17s                   default-scheduler  Successfully assigned default/busybox to addons-996992
	  Normal   Pulling    7m43s (x4 over 9m16s)   kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Warning  Failed     7m43s (x4 over 9m16s)   kubelet            Failed to pull image "gcr.io/k8s-minikube/busybox:1.28.4-glibc": unable to retrieve auth token: invalid username/password: unauthorized: authentication failed
	  Warning  Failed     7m43s (x4 over 9m16s)   kubelet            Error: ErrImagePull
	  Warning  Failed     7m31s (x6 over 9m16s)   kubelet            Error: ImagePullBackOff
	  Normal   BackOff    4m14s (x20 over 9m16s)  kubelet            Back-off pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-5rv5k" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-8zsm9" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context addons-996992 describe pod busybox ingress-nginx-admission-create-5rv5k ingress-nginx-admission-patch-8zsm9: exit status 1
--- FAIL: TestAddons/parallel/Registry (74.10s)
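The describe output above points at the root cause of this failure: the busybox test pod never starts because every pull of gcr.io/k8s-minikube/busybox:1.28.4-glibc fails with an auth error and the kubelet backs off. Below is a minimal sketch of a standalone diagnostic helper (hypothetical, not part of the test suite; it assumes kubectl and the addons-996992 kubeconfig context used above are available on PATH) that polls the container's waiting reason the same way the harness shells out to kubectl:

// pollpod.go - hypothetical diagnostic helper; context and pod names are taken
// from the run above, the polling cadence is an arbitrary choice.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	// JSONPath that extracts the waiting reason of the first container,
	// e.g. "ImagePullBackOff" for the busybox pod in this run.
	jsonpath := `{.status.containerStatuses[0].state.waiting.reason}`
	for attempt := 1; attempt <= 10; attempt++ {
		out, err := exec.Command("kubectl", "--context", "addons-996992",
			"get", "pod", "busybox", "-o", "jsonpath="+jsonpath).CombinedOutput()
		if err != nil {
			fmt.Printf("attempt %d: kubectl failed: %v: %s\n", attempt, err, out)
		} else {
			fmt.Printf("attempt %d: waiting reason=%q\n", attempt, string(out))
		}
		time.Sleep(30 * time.Second)
	}
}

If the reason stays ImagePullBackOff alongside the "invalid username/password" pull error recorded in the events, the image pull credentials injected for gcr.io (for example by the gcp-auth addon enabled in this run) would be the first thing to inspect.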

                                                
                                    
TestAddons/parallel/Ingress (154.39s)

                                                
                                                
=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-996992 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-996992 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-996992 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [e9aa988e-e59a-44dd-84f7-753b4db11866] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [e9aa988e-e59a-44dd-84f7-753b4db11866] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 11.004512903s
addons_test.go:264: (dbg) Run:  out/minikube-linux-amd64 -p addons-996992 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:264: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-996992 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m11.190745791s)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 28

                                                
                                                
** /stderr **
addons_test.go:280: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:288: (dbg) Run:  kubectl --context addons-996992 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-amd64 -p addons-996992 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.39.189
addons_test.go:308: (dbg) Run:  out/minikube-linux-amd64 -p addons-996992 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:308: (dbg) Done: out/minikube-linux-amd64 -p addons-996992 addons disable ingress-dns --alsologtostderr -v=1: (1.426016211s)
addons_test.go:313: (dbg) Run:  out/minikube-linux-amd64 -p addons-996992 addons disable ingress --alsologtostderr -v=1
addons_test.go:313: (dbg) Done: out/minikube-linux-amd64 -p addons-996992 addons disable ingress --alsologtostderr -v=1: (7.669785503s)
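The Ingress failure above is a timeout: the `curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'` run through `out/minikube-linux-amd64 ssh` exits with status 28 (curl's operation-timed-out code), so nothing answered on port 80 inside the VM within the two-minute window. Below is a minimal sketch of the same probe as a standalone Go program; the target address is an assumption (127.0.0.1 when run inside the VM, or the node IP 192.168.39.189 reported by `addons-996992 ip` above when run from the host):

// probe.go - hypothetical reproduction of the failing ingress check; only the
// Host header and node IP come from the run above, everything else is a sketch.
package main

import (
	"fmt"
	"net/http"
	"time"
)

func main() {
	target := "http://127.0.0.1/" // swap in http://192.168.39.189/ when probing from the host

	client := &http.Client{Timeout: 15 * time.Second}
	req, err := http.NewRequest(http.MethodGet, target, nil)
	if err != nil {
		panic(err)
	}
	// The nginx Ingress rule matches on the virtual host, so send the same
	// Host header the test's curl invocation uses.
	req.Host = "nginx.example.com"

	for attempt := 1; attempt <= 5; attempt++ {
		resp, err := client.Do(req)
		if err != nil {
			fmt.Printf("attempt %d: %v\n", attempt, err)
			time.Sleep(10 * time.Second)
			continue
		}
		fmt.Printf("attempt %d: HTTP %d\n", attempt, resp.StatusCode)
		resp.Body.Close()
		return
	}
}

A connection timeout here (as opposed to a 404 from the controller's default backend) usually means traffic is not reaching the controller's host ports at all, which matches the ssh status-28 result rather than an ingress rule mismatch.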
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-996992 -n addons-996992
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-996992 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-996992 logs -n 25: (1.22359435s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-only-357716                                                                     | download-only-357716 | jenkins | v1.34.0 | 14 Sep 24 16:44 UTC | 14 Sep 24 16:44 UTC |
	| delete  | -p download-only-119677                                                                     | download-only-119677 | jenkins | v1.34.0 | 14 Sep 24 16:44 UTC | 14 Sep 24 16:44 UTC |
	| delete  | -p download-only-357716                                                                     | download-only-357716 | jenkins | v1.34.0 | 14 Sep 24 16:44 UTC | 14 Sep 24 16:44 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-539617 | jenkins | v1.34.0 | 14 Sep 24 16:44 UTC |                     |
	|         | binary-mirror-539617                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:35769                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-539617                                                                     | binary-mirror-539617 | jenkins | v1.34.0 | 14 Sep 24 16:44 UTC | 14 Sep 24 16:44 UTC |
	| addons  | disable dashboard -p                                                                        | addons-996992        | jenkins | v1.34.0 | 14 Sep 24 16:44 UTC |                     |
	|         | addons-996992                                                                               |                      |         |         |                     |                     |
	| addons  | enable dashboard -p                                                                         | addons-996992        | jenkins | v1.34.0 | 14 Sep 24 16:44 UTC |                     |
	|         | addons-996992                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-996992 --wait=true                                                                | addons-996992        | jenkins | v1.34.0 | 14 Sep 24 16:44 UTC | 14 Sep 24 16:46 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	|         | --addons=helm-tiller                                                                        |                      |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-996992        | jenkins | v1.34.0 | 14 Sep 24 16:54 UTC | 14 Sep 24 16:54 UTC |
	|         | addons-996992                                                                               |                      |         |         |                     |                     |
	| ssh     | addons-996992 ssh cat                                                                       | addons-996992        | jenkins | v1.34.0 | 14 Sep 24 16:55 UTC | 14 Sep 24 16:55 UTC |
	|         | /opt/local-path-provisioner/pvc-065cb3df-7fd3-4993-9a34-5c093c32d00a_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-996992 addons disable                                                                | addons-996992        | jenkins | v1.34.0 | 14 Sep 24 16:55 UTC | 14 Sep 24 16:55 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-996992 addons disable                                                                | addons-996992        | jenkins | v1.34.0 | 14 Sep 24 16:55 UTC | 14 Sep 24 16:55 UTC |
	|         | helm-tiller --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-996992 addons                                                                        | addons-996992        | jenkins | v1.34.0 | 14 Sep 24 16:55 UTC | 14 Sep 24 16:55 UTC |
	|         | disable csi-hostpath-driver                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-996992 addons                                                                        | addons-996992        | jenkins | v1.34.0 | 14 Sep 24 16:55 UTC | 14 Sep 24 16:55 UTC |
	|         | disable volumesnapshots                                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-996992        | jenkins | v1.34.0 | 14 Sep 24 16:55 UTC | 14 Sep 24 16:55 UTC |
	|         | addons-996992                                                                               |                      |         |         |                     |                     |
	| ssh     | addons-996992 ssh curl -s                                                                   | addons-996992        | jenkins | v1.34.0 | 14 Sep 24 16:55 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |         |                     |                     |
	| ip      | addons-996992 ip                                                                            | addons-996992        | jenkins | v1.34.0 | 14 Sep 24 16:55 UTC | 14 Sep 24 16:55 UTC |
	| addons  | addons-996992 addons disable                                                                | addons-996992        | jenkins | v1.34.0 | 14 Sep 24 16:55 UTC | 14 Sep 24 16:56 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-996992        | jenkins | v1.34.0 | 14 Sep 24 16:56 UTC | 14 Sep 24 16:56 UTC |
	|         | -p addons-996992                                                                            |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-996992        | jenkins | v1.34.0 | 14 Sep 24 16:56 UTC | 14 Sep 24 16:56 UTC |
	|         | -p addons-996992                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-996992 addons disable                                                                | addons-996992        | jenkins | v1.34.0 | 14 Sep 24 16:56 UTC | 14 Sep 24 16:56 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                      |         |         |                     |                     |
	| addons  | addons-996992 addons disable                                                                | addons-996992        | jenkins | v1.34.0 | 14 Sep 24 16:56 UTC | 14 Sep 24 16:56 UTC |
	|         | headlamp --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| ip      | addons-996992 ip                                                                            | addons-996992        | jenkins | v1.34.0 | 14 Sep 24 16:58 UTC | 14 Sep 24 16:58 UTC |
	| addons  | addons-996992 addons disable                                                                | addons-996992        | jenkins | v1.34.0 | 14 Sep 24 16:58 UTC | 14 Sep 24 16:58 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-996992 addons disable                                                                | addons-996992        | jenkins | v1.34.0 | 14 Sep 24 16:58 UTC | 14 Sep 24 16:58 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/14 16:44:27
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0914 16:44:27.658554   16725 out.go:345] Setting OutFile to fd 1 ...
	I0914 16:44:27.659049   16725 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 16:44:27.659100   16725 out.go:358] Setting ErrFile to fd 2...
	I0914 16:44:27.659118   16725 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 16:44:27.659608   16725 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19643-8806/.minikube/bin
	I0914 16:44:27.660666   16725 out.go:352] Setting JSON to false
	I0914 16:44:27.661546   16725 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":1612,"bootTime":1726330656,"procs":173,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0914 16:44:27.661646   16725 start.go:139] virtualization: kvm guest
	I0914 16:44:27.663699   16725 out.go:177] * [addons-996992] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0914 16:44:27.665028   16725 out.go:177]   - MINIKUBE_LOCATION=19643
	I0914 16:44:27.665051   16725 notify.go:220] Checking for updates...
	I0914 16:44:27.667815   16725 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0914 16:44:27.669277   16725 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19643-8806/kubeconfig
	I0914 16:44:27.670590   16725 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19643-8806/.minikube
	I0914 16:44:27.671878   16725 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0914 16:44:27.673058   16725 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0914 16:44:27.674650   16725 driver.go:394] Setting default libvirt URI to qemu:///system
	I0914 16:44:27.706805   16725 out.go:177] * Using the kvm2 driver based on user configuration
	I0914 16:44:27.708321   16725 start.go:297] selected driver: kvm2
	I0914 16:44:27.708336   16725 start.go:901] validating driver "kvm2" against <nil>
	I0914 16:44:27.708348   16725 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0914 16:44:27.709072   16725 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 16:44:27.709158   16725 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19643-8806/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0914 16:44:27.723953   16725 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0914 16:44:27.724008   16725 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0914 16:44:27.724241   16725 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0914 16:44:27.724270   16725 cni.go:84] Creating CNI manager for ""
	I0914 16:44:27.724306   16725 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0914 16:44:27.724316   16725 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0914 16:44:27.724367   16725 start.go:340] cluster config:
	{Name:addons-996992 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726281268-19643@sha256:cce8be0c1ac4e3d852132008ef1cc1dcf5b79f708d025db83f146ae65db32e8e Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-996992 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:c
rio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAg
entPID:0 GPUs: AutoPauseInterval:1m0s}
	I0914 16:44:27.724463   16725 iso.go:125] acquiring lock: {Name:mk538f7a4abc7956f82511183088cbfac1e66ebb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 16:44:27.726351   16725 out.go:177] * Starting "addons-996992" primary control-plane node in "addons-996992" cluster
	I0914 16:44:27.727435   16725 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0914 16:44:27.727477   16725 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19643-8806/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0914 16:44:27.727486   16725 cache.go:56] Caching tarball of preloaded images
	I0914 16:44:27.727583   16725 preload.go:172] Found /home/jenkins/minikube-integration/19643-8806/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0914 16:44:27.727595   16725 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0914 16:44:27.727895   16725 profile.go:143] Saving config to /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/addons-996992/config.json ...
	I0914 16:44:27.727914   16725 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/addons-996992/config.json: {Name:mk5b5d945e87f410628fe80d3ffbea824c8cc516 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 16:44:27.728052   16725 start.go:360] acquireMachinesLock for addons-996992: {Name:mk26748ac63472c3f8b6f3848c12e76160c2970c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0914 16:44:27.728097   16725 start.go:364] duration metric: took 32.087µs to acquireMachinesLock for "addons-996992"
	I0914 16:44:27.728117   16725 start.go:93] Provisioning new machine with config: &{Name:addons-996992 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19643/minikube-v1.34.0-1726281733-19643-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726281268-19643@sha256:cce8be0c1ac4e3d852132008ef1cc1dcf5b79f708d025db83f146ae65db32e8e Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-996992 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0914 16:44:27.728170   16725 start.go:125] createHost starting for "" (driver="kvm2")
	I0914 16:44:27.730533   16725 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0914 16:44:27.730741   16725 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 16:44:27.730798   16725 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 16:44:27.745035   16725 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45463
	I0914 16:44:27.745492   16725 main.go:141] libmachine: () Calling .GetVersion
	I0914 16:44:27.746094   16725 main.go:141] libmachine: Using API Version  1
	I0914 16:44:27.746115   16725 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 16:44:27.746439   16725 main.go:141] libmachine: () Calling .GetMachineName
	I0914 16:44:27.746641   16725 main.go:141] libmachine: (addons-996992) Calling .GetMachineName
	I0914 16:44:27.746794   16725 main.go:141] libmachine: (addons-996992) Calling .DriverName
	I0914 16:44:27.746933   16725 start.go:159] libmachine.API.Create for "addons-996992" (driver="kvm2")
	I0914 16:44:27.746958   16725 client.go:168] LocalClient.Create starting
	I0914 16:44:27.746993   16725 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca.pem
	I0914 16:44:27.859328   16725 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/cert.pem
	I0914 16:44:27.966294   16725 main.go:141] libmachine: Running pre-create checks...
	I0914 16:44:27.966316   16725 main.go:141] libmachine: (addons-996992) Calling .PreCreateCheck
	I0914 16:44:27.966771   16725 main.go:141] libmachine: (addons-996992) Calling .GetConfigRaw
	I0914 16:44:27.967192   16725 main.go:141] libmachine: Creating machine...
	I0914 16:44:27.967205   16725 main.go:141] libmachine: (addons-996992) Calling .Create
	I0914 16:44:27.967357   16725 main.go:141] libmachine: (addons-996992) Creating KVM machine...
	I0914 16:44:27.968635   16725 main.go:141] libmachine: (addons-996992) DBG | found existing default KVM network
	I0914 16:44:27.969364   16725 main.go:141] libmachine: (addons-996992) DBG | I0914 16:44:27.969186   16746 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002111f0}
	I0914 16:44:27.969389   16725 main.go:141] libmachine: (addons-996992) DBG | created network xml: 
	I0914 16:44:27.969403   16725 main.go:141] libmachine: (addons-996992) DBG | <network>
	I0914 16:44:27.969414   16725 main.go:141] libmachine: (addons-996992) DBG |   <name>mk-addons-996992</name>
	I0914 16:44:27.969476   16725 main.go:141] libmachine: (addons-996992) DBG |   <dns enable='no'/>
	I0914 16:44:27.969509   16725 main.go:141] libmachine: (addons-996992) DBG |   
	I0914 16:44:27.969524   16725 main.go:141] libmachine: (addons-996992) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0914 16:44:27.969537   16725 main.go:141] libmachine: (addons-996992) DBG |     <dhcp>
	I0914 16:44:27.969546   16725 main.go:141] libmachine: (addons-996992) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0914 16:44:27.969553   16725 main.go:141] libmachine: (addons-996992) DBG |     </dhcp>
	I0914 16:44:27.969560   16725 main.go:141] libmachine: (addons-996992) DBG |   </ip>
	I0914 16:44:27.969567   16725 main.go:141] libmachine: (addons-996992) DBG |   
	I0914 16:44:27.969572   16725 main.go:141] libmachine: (addons-996992) DBG | </network>
	I0914 16:44:27.969578   16725 main.go:141] libmachine: (addons-996992) DBG | 
	I0914 16:44:27.975466   16725 main.go:141] libmachine: (addons-996992) DBG | trying to create private KVM network mk-addons-996992 192.168.39.0/24...
	I0914 16:44:28.040012   16725 main.go:141] libmachine: (addons-996992) DBG | private KVM network mk-addons-996992 192.168.39.0/24 created
	I0914 16:44:28.040038   16725 main.go:141] libmachine: (addons-996992) Setting up store path in /home/jenkins/minikube-integration/19643-8806/.minikube/machines/addons-996992 ...
	I0914 16:44:28.040051   16725 main.go:141] libmachine: (addons-996992) DBG | I0914 16:44:28.039977   16746 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19643-8806/.minikube
	I0914 16:44:28.040070   16725 main.go:141] libmachine: (addons-996992) Building disk image from file:///home/jenkins/minikube-integration/19643-8806/.minikube/cache/iso/amd64/minikube-v1.34.0-1726281733-19643-amd64.iso
	I0914 16:44:28.040122   16725 main.go:141] libmachine: (addons-996992) Downloading /home/jenkins/minikube-integration/19643-8806/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19643-8806/.minikube/cache/iso/amd64/minikube-v1.34.0-1726281733-19643-amd64.iso...
	I0914 16:44:28.289089   16725 main.go:141] libmachine: (addons-996992) DBG | I0914 16:44:28.288934   16746 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19643-8806/.minikube/machines/addons-996992/id_rsa...
	I0914 16:44:28.557850   16725 main.go:141] libmachine: (addons-996992) DBG | I0914 16:44:28.557726   16746 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19643-8806/.minikube/machines/addons-996992/addons-996992.rawdisk...
	I0914 16:44:28.557884   16725 main.go:141] libmachine: (addons-996992) DBG | Writing magic tar header
	I0914 16:44:28.557899   16725 main.go:141] libmachine: (addons-996992) DBG | Writing SSH key tar header
	I0914 16:44:28.557913   16725 main.go:141] libmachine: (addons-996992) DBG | I0914 16:44:28.557851   16746 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19643-8806/.minikube/machines/addons-996992 ...
	I0914 16:44:28.557943   16725 main.go:141] libmachine: (addons-996992) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19643-8806/.minikube/machines/addons-996992
	I0914 16:44:28.557987   16725 main.go:141] libmachine: (addons-996992) Setting executable bit set on /home/jenkins/minikube-integration/19643-8806/.minikube/machines/addons-996992 (perms=drwx------)
	I0914 16:44:28.558007   16725 main.go:141] libmachine: (addons-996992) Setting executable bit set on /home/jenkins/minikube-integration/19643-8806/.minikube/machines (perms=drwxr-xr-x)
	I0914 16:44:28.558018   16725 main.go:141] libmachine: (addons-996992) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19643-8806/.minikube/machines
	I0914 16:44:28.558031   16725 main.go:141] libmachine: (addons-996992) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19643-8806/.minikube
	I0914 16:44:28.558047   16725 main.go:141] libmachine: (addons-996992) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19643-8806
	I0914 16:44:28.558057   16725 main.go:141] libmachine: (addons-996992) Setting executable bit set on /home/jenkins/minikube-integration/19643-8806/.minikube (perms=drwxr-xr-x)
	I0914 16:44:28.558068   16725 main.go:141] libmachine: (addons-996992) Setting executable bit set on /home/jenkins/minikube-integration/19643-8806 (perms=drwxrwxr-x)
	I0914 16:44:28.558078   16725 main.go:141] libmachine: (addons-996992) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0914 16:44:28.558086   16725 main.go:141] libmachine: (addons-996992) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0914 16:44:28.558098   16725 main.go:141] libmachine: (addons-996992) DBG | Checking permissions on dir: /home/jenkins
	I0914 16:44:28.558109   16725 main.go:141] libmachine: (addons-996992) DBG | Checking permissions on dir: /home
	I0914 16:44:28.558118   16725 main.go:141] libmachine: (addons-996992) DBG | Skipping /home - not owner
	I0914 16:44:28.558148   16725 main.go:141] libmachine: (addons-996992) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0914 16:44:28.558185   16725 main.go:141] libmachine: (addons-996992) Creating domain...
	I0914 16:44:28.559360   16725 main.go:141] libmachine: (addons-996992) define libvirt domain using xml: 
	I0914 16:44:28.559383   16725 main.go:141] libmachine: (addons-996992) <domain type='kvm'>
	I0914 16:44:28.559393   16725 main.go:141] libmachine: (addons-996992)   <name>addons-996992</name>
	I0914 16:44:28.559399   16725 main.go:141] libmachine: (addons-996992)   <memory unit='MiB'>4000</memory>
	I0914 16:44:28.559405   16725 main.go:141] libmachine: (addons-996992)   <vcpu>2</vcpu>
	I0914 16:44:28.559409   16725 main.go:141] libmachine: (addons-996992)   <features>
	I0914 16:44:28.559414   16725 main.go:141] libmachine: (addons-996992)     <acpi/>
	I0914 16:44:28.559420   16725 main.go:141] libmachine: (addons-996992)     <apic/>
	I0914 16:44:28.559425   16725 main.go:141] libmachine: (addons-996992)     <pae/>
	I0914 16:44:28.559431   16725 main.go:141] libmachine: (addons-996992)     
	I0914 16:44:28.559437   16725 main.go:141] libmachine: (addons-996992)   </features>
	I0914 16:44:28.559443   16725 main.go:141] libmachine: (addons-996992)   <cpu mode='host-passthrough'>
	I0914 16:44:28.559448   16725 main.go:141] libmachine: (addons-996992)   
	I0914 16:44:28.559462   16725 main.go:141] libmachine: (addons-996992)   </cpu>
	I0914 16:44:28.559469   16725 main.go:141] libmachine: (addons-996992)   <os>
	I0914 16:44:28.559475   16725 main.go:141] libmachine: (addons-996992)     <type>hvm</type>
	I0914 16:44:28.559489   16725 main.go:141] libmachine: (addons-996992)     <boot dev='cdrom'/>
	I0914 16:44:28.559500   16725 main.go:141] libmachine: (addons-996992)     <boot dev='hd'/>
	I0914 16:44:28.559505   16725 main.go:141] libmachine: (addons-996992)     <bootmenu enable='no'/>
	I0914 16:44:28.559525   16725 main.go:141] libmachine: (addons-996992)   </os>
	I0914 16:44:28.559531   16725 main.go:141] libmachine: (addons-996992)   <devices>
	I0914 16:44:28.559537   16725 main.go:141] libmachine: (addons-996992)     <disk type='file' device='cdrom'>
	I0914 16:44:28.559545   16725 main.go:141] libmachine: (addons-996992)       <source file='/home/jenkins/minikube-integration/19643-8806/.minikube/machines/addons-996992/boot2docker.iso'/>
	I0914 16:44:28.559550   16725 main.go:141] libmachine: (addons-996992)       <target dev='hdc' bus='scsi'/>
	I0914 16:44:28.559555   16725 main.go:141] libmachine: (addons-996992)       <readonly/>
	I0914 16:44:28.559560   16725 main.go:141] libmachine: (addons-996992)     </disk>
	I0914 16:44:28.559567   16725 main.go:141] libmachine: (addons-996992)     <disk type='file' device='disk'>
	I0914 16:44:28.559574   16725 main.go:141] libmachine: (addons-996992)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0914 16:44:28.559584   16725 main.go:141] libmachine: (addons-996992)       <source file='/home/jenkins/minikube-integration/19643-8806/.minikube/machines/addons-996992/addons-996992.rawdisk'/>
	I0914 16:44:28.559589   16725 main.go:141] libmachine: (addons-996992)       <target dev='hda' bus='virtio'/>
	I0914 16:44:28.559595   16725 main.go:141] libmachine: (addons-996992)     </disk>
	I0914 16:44:28.559604   16725 main.go:141] libmachine: (addons-996992)     <interface type='network'>
	I0914 16:44:28.559614   16725 main.go:141] libmachine: (addons-996992)       <source network='mk-addons-996992'/>
	I0914 16:44:28.559622   16725 main.go:141] libmachine: (addons-996992)       <model type='virtio'/>
	I0914 16:44:28.559630   16725 main.go:141] libmachine: (addons-996992)     </interface>
	I0914 16:44:28.559636   16725 main.go:141] libmachine: (addons-996992)     <interface type='network'>
	I0914 16:44:28.559648   16725 main.go:141] libmachine: (addons-996992)       <source network='default'/>
	I0914 16:44:28.559656   16725 main.go:141] libmachine: (addons-996992)       <model type='virtio'/>
	I0914 16:44:28.559660   16725 main.go:141] libmachine: (addons-996992)     </interface>
	I0914 16:44:28.559667   16725 main.go:141] libmachine: (addons-996992)     <serial type='pty'>
	I0914 16:44:28.559674   16725 main.go:141] libmachine: (addons-996992)       <target port='0'/>
	I0914 16:44:28.559684   16725 main.go:141] libmachine: (addons-996992)     </serial>
	I0914 16:44:28.559695   16725 main.go:141] libmachine: (addons-996992)     <console type='pty'>
	I0914 16:44:28.559713   16725 main.go:141] libmachine: (addons-996992)       <target type='serial' port='0'/>
	I0914 16:44:28.559728   16725 main.go:141] libmachine: (addons-996992)     </console>
	I0914 16:44:28.559768   16725 main.go:141] libmachine: (addons-996992)     <rng model='virtio'>
	I0914 16:44:28.559789   16725 main.go:141] libmachine: (addons-996992)       <backend model='random'>/dev/random</backend>
	I0914 16:44:28.559798   16725 main.go:141] libmachine: (addons-996992)     </rng>
	I0914 16:44:28.559805   16725 main.go:141] libmachine: (addons-996992)     
	I0914 16:44:28.559810   16725 main.go:141] libmachine: (addons-996992)     
	I0914 16:44:28.559815   16725 main.go:141] libmachine: (addons-996992)   </devices>
	I0914 16:44:28.559820   16725 main.go:141] libmachine: (addons-996992) </domain>
	I0914 16:44:28.559826   16725 main.go:141] libmachine: (addons-996992) 
	I0914 16:44:28.565929   16725 main.go:141] libmachine: (addons-996992) DBG | domain addons-996992 has defined MAC address 52:54:00:0d:74:be in network default
	I0914 16:44:28.566532   16725 main.go:141] libmachine: (addons-996992) Ensuring networks are active...
	I0914 16:44:28.566561   16725 main.go:141] libmachine: (addons-996992) DBG | domain addons-996992 has defined MAC address 52:54:00:dd:8c:90 in network mk-addons-996992
	I0914 16:44:28.567152   16725 main.go:141] libmachine: (addons-996992) Ensuring network default is active
	I0914 16:44:28.567386   16725 main.go:141] libmachine: (addons-996992) Ensuring network mk-addons-996992 is active
	I0914 16:44:28.567808   16725 main.go:141] libmachine: (addons-996992) Getting domain xml...
	I0914 16:44:28.568374   16725 main.go:141] libmachine: (addons-996992) Creating domain...
	I0914 16:44:30.007186   16725 main.go:141] libmachine: (addons-996992) Waiting to get IP...
	I0914 16:44:30.007842   16725 main.go:141] libmachine: (addons-996992) DBG | domain addons-996992 has defined MAC address 52:54:00:dd:8c:90 in network mk-addons-996992
	I0914 16:44:30.008313   16725 main.go:141] libmachine: (addons-996992) DBG | unable to find current IP address of domain addons-996992 in network mk-addons-996992
	I0914 16:44:30.008349   16725 main.go:141] libmachine: (addons-996992) DBG | I0914 16:44:30.008249   16746 retry.go:31] will retry after 193.278123ms: waiting for machine to come up
	I0914 16:44:30.203743   16725 main.go:141] libmachine: (addons-996992) DBG | domain addons-996992 has defined MAC address 52:54:00:dd:8c:90 in network mk-addons-996992
	I0914 16:44:30.204360   16725 main.go:141] libmachine: (addons-996992) DBG | unable to find current IP address of domain addons-996992 in network mk-addons-996992
	I0914 16:44:30.204412   16725 main.go:141] libmachine: (addons-996992) DBG | I0914 16:44:30.204193   16746 retry.go:31] will retry after 245.945466ms: waiting for machine to come up
	I0914 16:44:30.451736   16725 main.go:141] libmachine: (addons-996992) DBG | domain addons-996992 has defined MAC address 52:54:00:dd:8c:90 in network mk-addons-996992
	I0914 16:44:30.452098   16725 main.go:141] libmachine: (addons-996992) DBG | unable to find current IP address of domain addons-996992 in network mk-addons-996992
	I0914 16:44:30.452129   16725 main.go:141] libmachine: (addons-996992) DBG | I0914 16:44:30.452044   16746 retry.go:31] will retry after 422.043703ms: waiting for machine to come up
	I0914 16:44:30.875457   16725 main.go:141] libmachine: (addons-996992) DBG | domain addons-996992 has defined MAC address 52:54:00:dd:8c:90 in network mk-addons-996992
	I0914 16:44:30.875934   16725 main.go:141] libmachine: (addons-996992) DBG | unable to find current IP address of domain addons-996992 in network mk-addons-996992
	I0914 16:44:30.875960   16725 main.go:141] libmachine: (addons-996992) DBG | I0914 16:44:30.875878   16746 retry.go:31] will retry after 473.34114ms: waiting for machine to come up
	I0914 16:44:31.350215   16725 main.go:141] libmachine: (addons-996992) DBG | domain addons-996992 has defined MAC address 52:54:00:dd:8c:90 in network mk-addons-996992
	I0914 16:44:31.350612   16725 main.go:141] libmachine: (addons-996992) DBG | unable to find current IP address of domain addons-996992 in network mk-addons-996992
	I0914 16:44:31.350631   16725 main.go:141] libmachine: (addons-996992) DBG | I0914 16:44:31.350576   16746 retry.go:31] will retry after 628.442164ms: waiting for machine to come up
	I0914 16:44:31.980705   16725 main.go:141] libmachine: (addons-996992) DBG | domain addons-996992 has defined MAC address 52:54:00:dd:8c:90 in network mk-addons-996992
	I0914 16:44:31.981327   16725 main.go:141] libmachine: (addons-996992) DBG | unable to find current IP address of domain addons-996992 in network mk-addons-996992
	I0914 16:44:31.981357   16725 main.go:141] libmachine: (addons-996992) DBG | I0914 16:44:31.981288   16746 retry.go:31] will retry after 929.748342ms: waiting for machine to come up
	I0914 16:44:32.912801   16725 main.go:141] libmachine: (addons-996992) DBG | domain addons-996992 has defined MAC address 52:54:00:dd:8c:90 in network mk-addons-996992
	I0914 16:44:32.913219   16725 main.go:141] libmachine: (addons-996992) DBG | unable to find current IP address of domain addons-996992 in network mk-addons-996992
	I0914 16:44:32.913246   16725 main.go:141] libmachine: (addons-996992) DBG | I0914 16:44:32.913169   16746 retry.go:31] will retry after 956.954722ms: waiting for machine to come up
	I0914 16:44:33.871239   16725 main.go:141] libmachine: (addons-996992) DBG | domain addons-996992 has defined MAC address 52:54:00:dd:8c:90 in network mk-addons-996992
	I0914 16:44:33.871624   16725 main.go:141] libmachine: (addons-996992) DBG | unable to find current IP address of domain addons-996992 in network mk-addons-996992
	I0914 16:44:33.871655   16725 main.go:141] libmachine: (addons-996992) DBG | I0914 16:44:33.871611   16746 retry.go:31] will retry after 1.433739833s: waiting for machine to come up
	I0914 16:44:35.307302   16725 main.go:141] libmachine: (addons-996992) DBG | domain addons-996992 has defined MAC address 52:54:00:dd:8c:90 in network mk-addons-996992
	I0914 16:44:35.307687   16725 main.go:141] libmachine: (addons-996992) DBG | unable to find current IP address of domain addons-996992 in network mk-addons-996992
	I0914 16:44:35.307721   16725 main.go:141] libmachine: (addons-996992) DBG | I0914 16:44:35.307633   16746 retry.go:31] will retry after 1.515973944s: waiting for machine to come up
	I0914 16:44:36.826018   16725 main.go:141] libmachine: (addons-996992) DBG | domain addons-996992 has defined MAC address 52:54:00:dd:8c:90 in network mk-addons-996992
	I0914 16:44:36.826451   16725 main.go:141] libmachine: (addons-996992) DBG | unable to find current IP address of domain addons-996992 in network mk-addons-996992
	I0914 16:44:36.826473   16725 main.go:141] libmachine: (addons-996992) DBG | I0914 16:44:36.826405   16746 retry.go:31] will retry after 1.946747568s: waiting for machine to come up
	I0914 16:44:38.775169   16725 main.go:141] libmachine: (addons-996992) DBG | domain addons-996992 has defined MAC address 52:54:00:dd:8c:90 in network mk-addons-996992
	I0914 16:44:38.775648   16725 main.go:141] libmachine: (addons-996992) DBG | unable to find current IP address of domain addons-996992 in network mk-addons-996992
	I0914 16:44:38.775676   16725 main.go:141] libmachine: (addons-996992) DBG | I0914 16:44:38.775602   16746 retry.go:31] will retry after 2.771653383s: waiting for machine to come up
	I0914 16:44:41.550519   16725 main.go:141] libmachine: (addons-996992) DBG | domain addons-996992 has defined MAC address 52:54:00:dd:8c:90 in network mk-addons-996992
	I0914 16:44:41.550927   16725 main.go:141] libmachine: (addons-996992) DBG | unable to find current IP address of domain addons-996992 in network mk-addons-996992
	I0914 16:44:41.550947   16725 main.go:141] libmachine: (addons-996992) DBG | I0914 16:44:41.550892   16746 retry.go:31] will retry after 2.637789254s: waiting for machine to come up
	I0914 16:44:44.190450   16725 main.go:141] libmachine: (addons-996992) DBG | domain addons-996992 has defined MAC address 52:54:00:dd:8c:90 in network mk-addons-996992
	I0914 16:44:44.190859   16725 main.go:141] libmachine: (addons-996992) DBG | unable to find current IP address of domain addons-996992 in network mk-addons-996992
	I0914 16:44:44.190881   16725 main.go:141] libmachine: (addons-996992) DBG | I0914 16:44:44.190814   16746 retry.go:31] will retry after 3.734364168s: waiting for machine to come up
	I0914 16:44:47.926668   16725 main.go:141] libmachine: (addons-996992) DBG | domain addons-996992 has defined MAC address 52:54:00:dd:8c:90 in network mk-addons-996992
	I0914 16:44:47.927158   16725 main.go:141] libmachine: (addons-996992) Found IP for machine: 192.168.39.189
	I0914 16:44:47.927179   16725 main.go:141] libmachine: (addons-996992) Reserving static IP address...
	I0914 16:44:47.927192   16725 main.go:141] libmachine: (addons-996992) DBG | domain addons-996992 has current primary IP address 192.168.39.189 and MAC address 52:54:00:dd:8c:90 in network mk-addons-996992
	I0914 16:44:47.927576   16725 main.go:141] libmachine: (addons-996992) DBG | unable to find host DHCP lease matching {name: "addons-996992", mac: "52:54:00:dd:8c:90", ip: "192.168.39.189"} in network mk-addons-996992
	I0914 16:44:48.085073   16725 main.go:141] libmachine: (addons-996992) DBG | Getting to WaitForSSH function...
	I0914 16:44:48.085105   16725 main.go:141] libmachine: (addons-996992) Reserved static IP address: 192.168.39.189
	I0914 16:44:48.085119   16725 main.go:141] libmachine: (addons-996992) Waiting for SSH to be available...
	I0914 16:44:48.087828   16725 main.go:141] libmachine: (addons-996992) DBG | domain addons-996992 has defined MAC address 52:54:00:dd:8c:90 in network mk-addons-996992
	I0914 16:44:48.088171   16725 main.go:141] libmachine: (addons-996992) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:8c:90", ip: ""} in network mk-addons-996992: {Iface:virbr1 ExpiryTime:2024-09-14 17:44:42 +0000 UTC Type:0 Mac:52:54:00:dd:8c:90 Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:minikube Clientid:01:52:54:00:dd:8c:90}
	I0914 16:44:48.088203   16725 main.go:141] libmachine: (addons-996992) DBG | domain addons-996992 has defined IP address 192.168.39.189 and MAC address 52:54:00:dd:8c:90 in network mk-addons-996992
	I0914 16:44:48.088326   16725 main.go:141] libmachine: (addons-996992) DBG | Using SSH client type: external
	I0914 16:44:48.088342   16725 main.go:141] libmachine: (addons-996992) DBG | Using SSH private key: /home/jenkins/minikube-integration/19643-8806/.minikube/machines/addons-996992/id_rsa (-rw-------)
	I0914 16:44:48.088390   16725 main.go:141] libmachine: (addons-996992) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.189 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19643-8806/.minikube/machines/addons-996992/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0914 16:44:48.088422   16725 main.go:141] libmachine: (addons-996992) DBG | About to run SSH command:
	I0914 16:44:48.088437   16725 main.go:141] libmachine: (addons-996992) DBG | exit 0
	I0914 16:44:48.222175   16725 main.go:141] libmachine: (addons-996992) DBG | SSH cmd err, output: <nil>: 
	I0914 16:44:48.222479   16725 main.go:141] libmachine: (addons-996992) KVM machine creation complete!
	I0914 16:44:48.222803   16725 main.go:141] libmachine: (addons-996992) Calling .GetConfigRaw
	I0914 16:44:48.250845   16725 main.go:141] libmachine: (addons-996992) Calling .DriverName
	I0914 16:44:48.251150   16725 main.go:141] libmachine: (addons-996992) Calling .DriverName
	I0914 16:44:48.251340   16725 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0914 16:44:48.251369   16725 main.go:141] libmachine: (addons-996992) Calling .GetState
	I0914 16:44:48.253045   16725 main.go:141] libmachine: Detecting operating system of created instance...
	I0914 16:44:48.253064   16725 main.go:141] libmachine: Waiting for SSH to be available...
	I0914 16:44:48.253072   16725 main.go:141] libmachine: Getting to WaitForSSH function...
	I0914 16:44:48.253081   16725 main.go:141] libmachine: (addons-996992) Calling .GetSSHHostname
	I0914 16:44:48.255661   16725 main.go:141] libmachine: (addons-996992) DBG | domain addons-996992 has defined MAC address 52:54:00:dd:8c:90 in network mk-addons-996992
	I0914 16:44:48.256049   16725 main.go:141] libmachine: (addons-996992) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:8c:90", ip: ""} in network mk-addons-996992: {Iface:virbr1 ExpiryTime:2024-09-14 17:44:42 +0000 UTC Type:0 Mac:52:54:00:dd:8c:90 Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:addons-996992 Clientid:01:52:54:00:dd:8c:90}
	I0914 16:44:48.256068   16725 main.go:141] libmachine: (addons-996992) DBG | domain addons-996992 has defined IP address 192.168.39.189 and MAC address 52:54:00:dd:8c:90 in network mk-addons-996992
	I0914 16:44:48.256226   16725 main.go:141] libmachine: (addons-996992) Calling .GetSSHPort
	I0914 16:44:48.256426   16725 main.go:141] libmachine: (addons-996992) Calling .GetSSHKeyPath
	I0914 16:44:48.256654   16725 main.go:141] libmachine: (addons-996992) Calling .GetSSHKeyPath
	I0914 16:44:48.256795   16725 main.go:141] libmachine: (addons-996992) Calling .GetSSHUsername
	I0914 16:44:48.256982   16725 main.go:141] libmachine: Using SSH client type: native
	I0914 16:44:48.257155   16725 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.189 22 <nil> <nil>}
	I0914 16:44:48.257164   16725 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0914 16:44:48.365411   16725 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0914 16:44:48.365433   16725 main.go:141] libmachine: Detecting the provisioner...
	I0914 16:44:48.365440   16725 main.go:141] libmachine: (addons-996992) Calling .GetSSHHostname
	I0914 16:44:48.368483   16725 main.go:141] libmachine: (addons-996992) DBG | domain addons-996992 has defined MAC address 52:54:00:dd:8c:90 in network mk-addons-996992
	I0914 16:44:48.368906   16725 main.go:141] libmachine: (addons-996992) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:8c:90", ip: ""} in network mk-addons-996992: {Iface:virbr1 ExpiryTime:2024-09-14 17:44:42 +0000 UTC Type:0 Mac:52:54:00:dd:8c:90 Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:addons-996992 Clientid:01:52:54:00:dd:8c:90}
	I0914 16:44:48.368927   16725 main.go:141] libmachine: (addons-996992) DBG | domain addons-996992 has defined IP address 192.168.39.189 and MAC address 52:54:00:dd:8c:90 in network mk-addons-996992
	I0914 16:44:48.369091   16725 main.go:141] libmachine: (addons-996992) Calling .GetSSHPort
	I0914 16:44:48.369277   16725 main.go:141] libmachine: (addons-996992) Calling .GetSSHKeyPath
	I0914 16:44:48.369448   16725 main.go:141] libmachine: (addons-996992) Calling .GetSSHKeyPath
	I0914 16:44:48.369560   16725 main.go:141] libmachine: (addons-996992) Calling .GetSSHUsername
	I0914 16:44:48.369706   16725 main.go:141] libmachine: Using SSH client type: native
	I0914 16:44:48.369917   16725 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.189 22 <nil> <nil>}
	I0914 16:44:48.369928   16725 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0914 16:44:48.478560   16725 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0914 16:44:48.478635   16725 main.go:141] libmachine: found compatible host: buildroot
	I0914 16:44:48.478650   16725 main.go:141] libmachine: Provisioning with buildroot...
	I0914 16:44:48.478673   16725 main.go:141] libmachine: (addons-996992) Calling .GetMachineName
	I0914 16:44:48.478938   16725 buildroot.go:166] provisioning hostname "addons-996992"
	I0914 16:44:48.478968   16725 main.go:141] libmachine: (addons-996992) Calling .GetMachineName
	I0914 16:44:48.479154   16725 main.go:141] libmachine: (addons-996992) Calling .GetSSHHostname
	I0914 16:44:48.481754   16725 main.go:141] libmachine: (addons-996992) DBG | domain addons-996992 has defined MAC address 52:54:00:dd:8c:90 in network mk-addons-996992
	I0914 16:44:48.482027   16725 main.go:141] libmachine: (addons-996992) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:8c:90", ip: ""} in network mk-addons-996992: {Iface:virbr1 ExpiryTime:2024-09-14 17:44:42 +0000 UTC Type:0 Mac:52:54:00:dd:8c:90 Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:addons-996992 Clientid:01:52:54:00:dd:8c:90}
	I0914 16:44:48.482055   16725 main.go:141] libmachine: (addons-996992) DBG | domain addons-996992 has defined IP address 192.168.39.189 and MAC address 52:54:00:dd:8c:90 in network mk-addons-996992
	I0914 16:44:48.482238   16725 main.go:141] libmachine: (addons-996992) Calling .GetSSHPort
	I0914 16:44:48.482421   16725 main.go:141] libmachine: (addons-996992) Calling .GetSSHKeyPath
	I0914 16:44:48.482594   16725 main.go:141] libmachine: (addons-996992) Calling .GetSSHKeyPath
	I0914 16:44:48.482715   16725 main.go:141] libmachine: (addons-996992) Calling .GetSSHUsername
	I0914 16:44:48.482893   16725 main.go:141] libmachine: Using SSH client type: native
	I0914 16:44:48.483075   16725 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.189 22 <nil> <nil>}
	I0914 16:44:48.483090   16725 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-996992 && echo "addons-996992" | sudo tee /etc/hostname
	I0914 16:44:48.603822   16725 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-996992
	
	I0914 16:44:48.603851   16725 main.go:141] libmachine: (addons-996992) Calling .GetSSHHostname
	I0914 16:44:48.606556   16725 main.go:141] libmachine: (addons-996992) DBG | domain addons-996992 has defined MAC address 52:54:00:dd:8c:90 in network mk-addons-996992
	I0914 16:44:48.606910   16725 main.go:141] libmachine: (addons-996992) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:8c:90", ip: ""} in network mk-addons-996992: {Iface:virbr1 ExpiryTime:2024-09-14 17:44:42 +0000 UTC Type:0 Mac:52:54:00:dd:8c:90 Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:addons-996992 Clientid:01:52:54:00:dd:8c:90}
	I0914 16:44:48.606934   16725 main.go:141] libmachine: (addons-996992) DBG | domain addons-996992 has defined IP address 192.168.39.189 and MAC address 52:54:00:dd:8c:90 in network mk-addons-996992
	I0914 16:44:48.607103   16725 main.go:141] libmachine: (addons-996992) Calling .GetSSHPort
	I0914 16:44:48.607290   16725 main.go:141] libmachine: (addons-996992) Calling .GetSSHKeyPath
	I0914 16:44:48.607488   16725 main.go:141] libmachine: (addons-996992) Calling .GetSSHKeyPath
	I0914 16:44:48.607658   16725 main.go:141] libmachine: (addons-996992) Calling .GetSSHUsername
	I0914 16:44:48.607848   16725 main.go:141] libmachine: Using SSH client type: native
	I0914 16:44:48.608066   16725 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.189 22 <nil> <nil>}
	I0914 16:44:48.608093   16725 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-996992' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-996992/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-996992' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0914 16:44:48.722348   16725 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0914 16:44:48.722378   16725 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19643-8806/.minikube CaCertPath:/home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19643-8806/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19643-8806/.minikube}
	I0914 16:44:48.722396   16725 buildroot.go:174] setting up certificates
	I0914 16:44:48.722422   16725 provision.go:84] configureAuth start
	I0914 16:44:48.722433   16725 main.go:141] libmachine: (addons-996992) Calling .GetMachineName
	I0914 16:44:48.722689   16725 main.go:141] libmachine: (addons-996992) Calling .GetIP
	I0914 16:44:48.725429   16725 main.go:141] libmachine: (addons-996992) DBG | domain addons-996992 has defined MAC address 52:54:00:dd:8c:90 in network mk-addons-996992
	I0914 16:44:48.725795   16725 main.go:141] libmachine: (addons-996992) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:8c:90", ip: ""} in network mk-addons-996992: {Iface:virbr1 ExpiryTime:2024-09-14 17:44:42 +0000 UTC Type:0 Mac:52:54:00:dd:8c:90 Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:addons-996992 Clientid:01:52:54:00:dd:8c:90}
	I0914 16:44:48.725827   16725 main.go:141] libmachine: (addons-996992) DBG | domain addons-996992 has defined IP address 192.168.39.189 and MAC address 52:54:00:dd:8c:90 in network mk-addons-996992
	I0914 16:44:48.725999   16725 main.go:141] libmachine: (addons-996992) Calling .GetSSHHostname
	I0914 16:44:48.728098   16725 main.go:141] libmachine: (addons-996992) DBG | domain addons-996992 has defined MAC address 52:54:00:dd:8c:90 in network mk-addons-996992
	I0914 16:44:48.728440   16725 main.go:141] libmachine: (addons-996992) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:8c:90", ip: ""} in network mk-addons-996992: {Iface:virbr1 ExpiryTime:2024-09-14 17:44:42 +0000 UTC Type:0 Mac:52:54:00:dd:8c:90 Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:addons-996992 Clientid:01:52:54:00:dd:8c:90}
	I0914 16:44:48.728459   16725 main.go:141] libmachine: (addons-996992) DBG | domain addons-996992 has defined IP address 192.168.39.189 and MAC address 52:54:00:dd:8c:90 in network mk-addons-996992
	I0914 16:44:48.728608   16725 provision.go:143] copyHostCerts
	I0914 16:44:48.728683   16725 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19643-8806/.minikube/ca.pem (1082 bytes)
	I0914 16:44:48.728797   16725 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19643-8806/.minikube/cert.pem (1123 bytes)
	I0914 16:44:48.728852   16725 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19643-8806/.minikube/key.pem (1675 bytes)
	I0914 16:44:48.728919   16725 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19643-8806/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca-key.pem org=jenkins.addons-996992 san=[127.0.0.1 192.168.39.189 addons-996992 localhost minikube]
	I0914 16:44:48.792378   16725 provision.go:177] copyRemoteCerts
	I0914 16:44:48.792464   16725 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0914 16:44:48.792493   16725 main.go:141] libmachine: (addons-996992) Calling .GetSSHHostname
	I0914 16:44:48.795239   16725 main.go:141] libmachine: (addons-996992) DBG | domain addons-996992 has defined MAC address 52:54:00:dd:8c:90 in network mk-addons-996992
	I0914 16:44:48.795658   16725 main.go:141] libmachine: (addons-996992) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:8c:90", ip: ""} in network mk-addons-996992: {Iface:virbr1 ExpiryTime:2024-09-14 17:44:42 +0000 UTC Type:0 Mac:52:54:00:dd:8c:90 Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:addons-996992 Clientid:01:52:54:00:dd:8c:90}
	I0914 16:44:48.795697   16725 main.go:141] libmachine: (addons-996992) DBG | domain addons-996992 has defined IP address 192.168.39.189 and MAC address 52:54:00:dd:8c:90 in network mk-addons-996992
	I0914 16:44:48.795972   16725 main.go:141] libmachine: (addons-996992) Calling .GetSSHPort
	I0914 16:44:48.796149   16725 main.go:141] libmachine: (addons-996992) Calling .GetSSHKeyPath
	I0914 16:44:48.796365   16725 main.go:141] libmachine: (addons-996992) Calling .GetSSHUsername
	I0914 16:44:48.796523   16725 sshutil.go:53] new ssh client: &{IP:192.168.39.189 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/addons-996992/id_rsa Username:docker}
	I0914 16:44:48.880497   16725 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0914 16:44:48.905386   16725 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0914 16:44:48.927284   16725 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0914 16:44:48.949470   16725 provision.go:87] duration metric: took 227.034076ms to configureAuth
	I0914 16:44:48.949496   16725 buildroot.go:189] setting minikube options for container-runtime
	I0914 16:44:48.949667   16725 config.go:182] Loaded profile config "addons-996992": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0914 16:44:48.949749   16725 main.go:141] libmachine: (addons-996992) Calling .GetSSHHostname
	I0914 16:44:48.952388   16725 main.go:141] libmachine: (addons-996992) DBG | domain addons-996992 has defined MAC address 52:54:00:dd:8c:90 in network mk-addons-996992
	I0914 16:44:48.952770   16725 main.go:141] libmachine: (addons-996992) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:8c:90", ip: ""} in network mk-addons-996992: {Iface:virbr1 ExpiryTime:2024-09-14 17:44:42 +0000 UTC Type:0 Mac:52:54:00:dd:8c:90 Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:addons-996992 Clientid:01:52:54:00:dd:8c:90}
	I0914 16:44:48.952792   16725 main.go:141] libmachine: (addons-996992) DBG | domain addons-996992 has defined IP address 192.168.39.189 and MAC address 52:54:00:dd:8c:90 in network mk-addons-996992
	I0914 16:44:48.953000   16725 main.go:141] libmachine: (addons-996992) Calling .GetSSHPort
	I0914 16:44:48.953189   16725 main.go:141] libmachine: (addons-996992) Calling .GetSSHKeyPath
	I0914 16:44:48.953319   16725 main.go:141] libmachine: (addons-996992) Calling .GetSSHKeyPath
	I0914 16:44:48.953445   16725 main.go:141] libmachine: (addons-996992) Calling .GetSSHUsername
	I0914 16:44:48.953626   16725 main.go:141] libmachine: Using SSH client type: native
	I0914 16:44:48.953785   16725 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.189 22 <nil> <nil>}
	I0914 16:44:48.953798   16725 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0914 16:44:49.180693   16725 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0914 16:44:49.180719   16725 main.go:141] libmachine: Checking connection to Docker...
	I0914 16:44:49.180727   16725 main.go:141] libmachine: (addons-996992) Calling .GetURL
	I0914 16:44:49.182000   16725 main.go:141] libmachine: (addons-996992) DBG | Using libvirt version 6000000
	I0914 16:44:49.184271   16725 main.go:141] libmachine: (addons-996992) DBG | domain addons-996992 has defined MAC address 52:54:00:dd:8c:90 in network mk-addons-996992
	I0914 16:44:49.184718   16725 main.go:141] libmachine: (addons-996992) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:8c:90", ip: ""} in network mk-addons-996992: {Iface:virbr1 ExpiryTime:2024-09-14 17:44:42 +0000 UTC Type:0 Mac:52:54:00:dd:8c:90 Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:addons-996992 Clientid:01:52:54:00:dd:8c:90}
	I0914 16:44:49.184747   16725 main.go:141] libmachine: (addons-996992) DBG | domain addons-996992 has defined IP address 192.168.39.189 and MAC address 52:54:00:dd:8c:90 in network mk-addons-996992
	I0914 16:44:49.184859   16725 main.go:141] libmachine: Docker is up and running!
	I0914 16:44:49.184872   16725 main.go:141] libmachine: Reticulating splines...
	I0914 16:44:49.184879   16725 client.go:171] duration metric: took 21.437913259s to LocalClient.Create
	I0914 16:44:49.184951   16725 start.go:167] duration metric: took 21.438013433s to libmachine.API.Create "addons-996992"
	I0914 16:44:49.184967   16725 start.go:293] postStartSetup for "addons-996992" (driver="kvm2")
	I0914 16:44:49.184983   16725 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0914 16:44:49.185012   16725 main.go:141] libmachine: (addons-996992) Calling .DriverName
	I0914 16:44:49.185343   16725 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0914 16:44:49.185366   16725 main.go:141] libmachine: (addons-996992) Calling .GetSSHHostname
	I0914 16:44:49.187583   16725 main.go:141] libmachine: (addons-996992) DBG | domain addons-996992 has defined MAC address 52:54:00:dd:8c:90 in network mk-addons-996992
	I0914 16:44:49.187883   16725 main.go:141] libmachine: (addons-996992) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:8c:90", ip: ""} in network mk-addons-996992: {Iface:virbr1 ExpiryTime:2024-09-14 17:44:42 +0000 UTC Type:0 Mac:52:54:00:dd:8c:90 Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:addons-996992 Clientid:01:52:54:00:dd:8c:90}
	I0914 16:44:49.187924   16725 main.go:141] libmachine: (addons-996992) DBG | domain addons-996992 has defined IP address 192.168.39.189 and MAC address 52:54:00:dd:8c:90 in network mk-addons-996992
	I0914 16:44:49.188038   16725 main.go:141] libmachine: (addons-996992) Calling .GetSSHPort
	I0914 16:44:49.188258   16725 main.go:141] libmachine: (addons-996992) Calling .GetSSHKeyPath
	I0914 16:44:49.188488   16725 main.go:141] libmachine: (addons-996992) Calling .GetSSHUsername
	I0914 16:44:49.188629   16725 sshutil.go:53] new ssh client: &{IP:192.168.39.189 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/addons-996992/id_rsa Username:docker}
	I0914 16:44:49.274153   16725 ssh_runner.go:195] Run: cat /etc/os-release
	I0914 16:44:49.278523   16725 info.go:137] Remote host: Buildroot 2023.02.9
	I0914 16:44:49.278558   16725 filesync.go:126] Scanning /home/jenkins/minikube-integration/19643-8806/.minikube/addons for local assets ...
	I0914 16:44:49.278639   16725 filesync.go:126] Scanning /home/jenkins/minikube-integration/19643-8806/.minikube/files for local assets ...
	I0914 16:44:49.278670   16725 start.go:296] duration metric: took 93.694384ms for postStartSetup
	I0914 16:44:49.278701   16725 main.go:141] libmachine: (addons-996992) Calling .GetConfigRaw
	I0914 16:44:49.279309   16725 main.go:141] libmachine: (addons-996992) Calling .GetIP
	I0914 16:44:49.281961   16725 main.go:141] libmachine: (addons-996992) DBG | domain addons-996992 has defined MAC address 52:54:00:dd:8c:90 in network mk-addons-996992
	I0914 16:44:49.282293   16725 main.go:141] libmachine: (addons-996992) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:8c:90", ip: ""} in network mk-addons-996992: {Iface:virbr1 ExpiryTime:2024-09-14 17:44:42 +0000 UTC Type:0 Mac:52:54:00:dd:8c:90 Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:addons-996992 Clientid:01:52:54:00:dd:8c:90}
	I0914 16:44:49.282334   16725 main.go:141] libmachine: (addons-996992) DBG | domain addons-996992 has defined IP address 192.168.39.189 and MAC address 52:54:00:dd:8c:90 in network mk-addons-996992
	I0914 16:44:49.282507   16725 profile.go:143] Saving config to /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/addons-996992/config.json ...
	I0914 16:44:49.282702   16725 start.go:128] duration metric: took 21.554522556s to createHost
	I0914 16:44:49.282723   16725 main.go:141] libmachine: (addons-996992) Calling .GetSSHHostname
	I0914 16:44:49.284816   16725 main.go:141] libmachine: (addons-996992) DBG | domain addons-996992 has defined MAC address 52:54:00:dd:8c:90 in network mk-addons-996992
	I0914 16:44:49.285125   16725 main.go:141] libmachine: (addons-996992) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:8c:90", ip: ""} in network mk-addons-996992: {Iface:virbr1 ExpiryTime:2024-09-14 17:44:42 +0000 UTC Type:0 Mac:52:54:00:dd:8c:90 Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:addons-996992 Clientid:01:52:54:00:dd:8c:90}
	I0914 16:44:49.285161   16725 main.go:141] libmachine: (addons-996992) DBG | domain addons-996992 has defined IP address 192.168.39.189 and MAC address 52:54:00:dd:8c:90 in network mk-addons-996992
	I0914 16:44:49.285299   16725 main.go:141] libmachine: (addons-996992) Calling .GetSSHPort
	I0914 16:44:49.285489   16725 main.go:141] libmachine: (addons-996992) Calling .GetSSHKeyPath
	I0914 16:44:49.285616   16725 main.go:141] libmachine: (addons-996992) Calling .GetSSHKeyPath
	I0914 16:44:49.285768   16725 main.go:141] libmachine: (addons-996992) Calling .GetSSHUsername
	I0914 16:44:49.285889   16725 main.go:141] libmachine: Using SSH client type: native
	I0914 16:44:49.286051   16725 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.189 22 <nil> <nil>}
	I0914 16:44:49.286060   16725 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0914 16:44:49.394658   16725 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726332289.368573436
	
	I0914 16:44:49.394680   16725 fix.go:216] guest clock: 1726332289.368573436
	I0914 16:44:49.394687   16725 fix.go:229] Guest: 2024-09-14 16:44:49.368573436 +0000 UTC Remote: 2024-09-14 16:44:49.28271319 +0000 UTC m=+21.657617847 (delta=85.860246ms)
	I0914 16:44:49.394705   16725 fix.go:200] guest clock delta is within tolerance: 85.860246ms
	I0914 16:44:49.394710   16725 start.go:83] releasing machines lock for "addons-996992", held for 21.66660282s
	I0914 16:44:49.394730   16725 main.go:141] libmachine: (addons-996992) Calling .DriverName
	I0914 16:44:49.394985   16725 main.go:141] libmachine: (addons-996992) Calling .GetIP
	I0914 16:44:49.397445   16725 main.go:141] libmachine: (addons-996992) DBG | domain addons-996992 has defined MAC address 52:54:00:dd:8c:90 in network mk-addons-996992
	I0914 16:44:49.397817   16725 main.go:141] libmachine: (addons-996992) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:8c:90", ip: ""} in network mk-addons-996992: {Iface:virbr1 ExpiryTime:2024-09-14 17:44:42 +0000 UTC Type:0 Mac:52:54:00:dd:8c:90 Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:addons-996992 Clientid:01:52:54:00:dd:8c:90}
	I0914 16:44:49.397843   16725 main.go:141] libmachine: (addons-996992) DBG | domain addons-996992 has defined IP address 192.168.39.189 and MAC address 52:54:00:dd:8c:90 in network mk-addons-996992
	I0914 16:44:49.398094   16725 main.go:141] libmachine: (addons-996992) Calling .DriverName
	I0914 16:44:49.398597   16725 main.go:141] libmachine: (addons-996992) Calling .DriverName
	I0914 16:44:49.398755   16725 main.go:141] libmachine: (addons-996992) Calling .DriverName
	I0914 16:44:49.398864   16725 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0914 16:44:49.398917   16725 main.go:141] libmachine: (addons-996992) Calling .GetSSHHostname
	I0914 16:44:49.398947   16725 ssh_runner.go:195] Run: cat /version.json
	I0914 16:44:49.398966   16725 main.go:141] libmachine: (addons-996992) Calling .GetSSHHostname
	I0914 16:44:49.401354   16725 main.go:141] libmachine: (addons-996992) DBG | domain addons-996992 has defined MAC address 52:54:00:dd:8c:90 in network mk-addons-996992
	I0914 16:44:49.401636   16725 main.go:141] libmachine: (addons-996992) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:8c:90", ip: ""} in network mk-addons-996992: {Iface:virbr1 ExpiryTime:2024-09-14 17:44:42 +0000 UTC Type:0 Mac:52:54:00:dd:8c:90 Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:addons-996992 Clientid:01:52:54:00:dd:8c:90}
	I0914 16:44:49.401658   16725 main.go:141] libmachine: (addons-996992) DBG | domain addons-996992 has defined IP address 192.168.39.189 and MAC address 52:54:00:dd:8c:90 in network mk-addons-996992
	I0914 16:44:49.401728   16725 main.go:141] libmachine: (addons-996992) DBG | domain addons-996992 has defined MAC address 52:54:00:dd:8c:90 in network mk-addons-996992
	I0914 16:44:49.401838   16725 main.go:141] libmachine: (addons-996992) Calling .GetSSHPort
	I0914 16:44:49.402091   16725 main.go:141] libmachine: (addons-996992) Calling .GetSSHKeyPath
	I0914 16:44:49.402285   16725 main.go:141] libmachine: (addons-996992) Calling .GetSSHUsername
	I0914 16:44:49.402338   16725 main.go:141] libmachine: (addons-996992) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:8c:90", ip: ""} in network mk-addons-996992: {Iface:virbr1 ExpiryTime:2024-09-14 17:44:42 +0000 UTC Type:0 Mac:52:54:00:dd:8c:90 Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:addons-996992 Clientid:01:52:54:00:dd:8c:90}
	I0914 16:44:49.402362   16725 main.go:141] libmachine: (addons-996992) DBG | domain addons-996992 has defined IP address 192.168.39.189 and MAC address 52:54:00:dd:8c:90 in network mk-addons-996992
	I0914 16:44:49.402400   16725 sshutil.go:53] new ssh client: &{IP:192.168.39.189 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/addons-996992/id_rsa Username:docker}
	I0914 16:44:49.402603   16725 main.go:141] libmachine: (addons-996992) Calling .GetSSHPort
	I0914 16:44:49.402786   16725 main.go:141] libmachine: (addons-996992) Calling .GetSSHKeyPath
	I0914 16:44:49.402964   16725 main.go:141] libmachine: (addons-996992) Calling .GetSSHUsername
	I0914 16:44:49.403097   16725 sshutil.go:53] new ssh client: &{IP:192.168.39.189 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/addons-996992/id_rsa Username:docker}
	I0914 16:44:49.519392   16725 ssh_runner.go:195] Run: systemctl --version
	I0914 16:44:49.525764   16725 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0914 16:44:49.694011   16725 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0914 16:44:49.699486   16725 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0914 16:44:49.699547   16725 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0914 16:44:49.714748   16725 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0914 16:44:49.714768   16725 start.go:495] detecting cgroup driver to use...
	I0914 16:44:49.714822   16725 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0914 16:44:49.729936   16725 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0914 16:44:49.743531   16725 docker.go:217] disabling cri-docker service (if available) ...
	I0914 16:44:49.743604   16725 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0914 16:44:49.756964   16725 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0914 16:44:49.770590   16725 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0914 16:44:49.893965   16725 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0914 16:44:50.044352   16725 docker.go:233] disabling docker service ...
	I0914 16:44:50.044415   16725 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0914 16:44:50.059044   16725 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0914 16:44:50.073286   16725 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0914 16:44:50.194594   16725 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0914 16:44:50.308467   16725 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0914 16:44:50.322485   16725 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0914 16:44:50.339320   16725 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0914 16:44:50.339388   16725 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 16:44:50.348795   16725 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0914 16:44:50.348884   16725 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 16:44:50.358384   16725 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 16:44:50.367798   16725 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 16:44:50.377342   16725 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0914 16:44:50.387564   16725 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 16:44:50.397380   16725 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 16:44:50.414038   16725 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 16:44:50.424719   16725 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0914 16:44:50.433951   16725 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0914 16:44:50.434029   16725 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0914 16:44:50.446639   16725 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
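The modprobe and the echo into /proc above make sure bridged pod traffic is seen by iptables and that the node forwards IPv4, both prerequisites for kube-proxy and the bridge CNI. A persistent variant, which the log does not need because the VM is provisioned fresh each run (a sketch):

	# Load br_netfilter now and on every boot.
	sudo modprobe br_netfilter
	echo br_netfilter | sudo tee /etc/modules-load.d/br_netfilter.conf
	# Make the sysctls survive a reboot.
	sudo tee /etc/sysctl.d/99-kubernetes.conf >/dev/null <<-'EOF'
	net.bridge.bridge-nf-call-iptables = 1
	net.ipv4.ip_forward = 1
	EOF
	sudo sysctl --system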
	I0914 16:44:50.456388   16725 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 16:44:50.574976   16725 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0914 16:44:50.661035   16725 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0914 16:44:50.661113   16725 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0914 16:44:50.665670   16725 start.go:563] Will wait 60s for crictl version
	I0914 16:44:50.665731   16725 ssh_runner.go:195] Run: which crictl
	I0914 16:44:50.669237   16725 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0914 16:44:50.707163   16725 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0914 16:44:50.707267   16725 ssh_runner.go:195] Run: crio --version
	I0914 16:44:50.732866   16725 ssh_runner.go:195] Run: crio --version
	I0914 16:44:50.760540   16725 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0914 16:44:50.761520   16725 main.go:141] libmachine: (addons-996992) Calling .GetIP
	I0914 16:44:50.764201   16725 main.go:141] libmachine: (addons-996992) DBG | domain addons-996992 has defined MAC address 52:54:00:dd:8c:90 in network mk-addons-996992
	I0914 16:44:50.764600   16725 main.go:141] libmachine: (addons-996992) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:8c:90", ip: ""} in network mk-addons-996992: {Iface:virbr1 ExpiryTime:2024-09-14 17:44:42 +0000 UTC Type:0 Mac:52:54:00:dd:8c:90 Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:addons-996992 Clientid:01:52:54:00:dd:8c:90}
	I0914 16:44:50.764627   16725 main.go:141] libmachine: (addons-996992) DBG | domain addons-996992 has defined IP address 192.168.39.189 and MAC address 52:54:00:dd:8c:90 in network mk-addons-996992
	I0914 16:44:50.764836   16725 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0914 16:44:50.768563   16725 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
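The /etc/hosts edit above is a small idempotent pattern: drop any existing host.minikube.internal line, append the current mapping, and copy the temp file back with sudo so the redirect itself does not need root. Spelled out with comments (a sketch; address and hostname as in the log):

	# Remove a stale entry (if any) and re-add the current one in a single pass.
	{ grep -v $'\thost.minikube.internal$' /etc/hosts; \
	  printf '192.168.39.1\thost.minikube.internal\n'; } > /tmp/hosts.$$
	sudo cp /tmp/hosts.$$ /etc/hosts && rm -f /tmp/hosts.$$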
	I0914 16:44:50.780282   16725 kubeadm.go:883] updating cluster {Name:addons-996992 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19643/minikube-v1.34.0-1726281733-19643-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726281268-19643@sha256:cce8be0c1ac4e3d852132008ef1cc1dcf5b79f708d025db83f146ae65db32e8e Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.
1 ClusterName:addons-996992 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.189 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountT
ype:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0914 16:44:50.780403   16725 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0914 16:44:50.780449   16725 ssh_runner.go:195] Run: sudo crictl images --output json
	I0914 16:44:50.811100   16725 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0914 16:44:50.811171   16725 ssh_runner.go:195] Run: which lz4
	I0914 16:44:50.815020   16725 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0914 16:44:50.818901   16725 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0914 16:44:50.818932   16725 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I0914 16:44:51.986671   16725 crio.go:462] duration metric: took 1.171676547s to copy over tarball
	I0914 16:44:51.986742   16725 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0914 16:44:54.089407   16725 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.102639006s)
	I0914 16:44:54.089436   16725 crio.go:469] duration metric: took 2.102736316s to extract the tarball
	I0914 16:44:54.089444   16725 ssh_runner.go:146] rm: /preloaded.tar.lz4
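Unpacking the preload tarball into /var is what lets the follow-up "crictl images" call find every kube image already present, so nothing has to be pulled during kubeadm init. The same step by hand (a sketch; flags copied from the command above):

	# Extract the preloaded images into /var (populates /var/lib/containers for CRI-O),
	# keeping extended attributes such as security.capability intact.
	sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	sudo rm -f /preloaded.tar.lz4
	sudo crictl images --output json   # should now report the v1.31.1 control-plane images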
	I0914 16:44:54.127982   16725 ssh_runner.go:195] Run: sudo crictl images --output json
	I0914 16:44:54.168690   16725 crio.go:514] all images are preloaded for cri-o runtime.
	I0914 16:44:54.168718   16725 cache_images.go:84] Images are preloaded, skipping loading
	I0914 16:44:54.168726   16725 kubeadm.go:934] updating node { 192.168.39.189 8443 v1.31.1 crio true true} ...
	I0914 16:44:54.168840   16725 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-996992 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.189
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:addons-996992 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0914 16:44:54.168921   16725 ssh_runner.go:195] Run: crio config
	I0914 16:44:54.213151   16725 cni.go:84] Creating CNI manager for ""
	I0914 16:44:54.213177   16725 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0914 16:44:54.213187   16725 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0914 16:44:54.213208   16725 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.189 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-996992 NodeName:addons-996992 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.189"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.189 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/k
ubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0914 16:44:54.213406   16725 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.189
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-996992"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.189
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.189"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0914 16:44:54.213473   16725 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0914 16:44:54.223204   16725 binaries.go:44] Found k8s binaries, skipping transfer
	I0914 16:44:54.223288   16725 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0914 16:44:54.233103   16725 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0914 16:44:54.248690   16725 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0914 16:44:54.264306   16725 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2157 bytes)
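With the rendered kubeadm config staged on the node, it can be sanity-checked before the real init below; a sketch using the staged binary and kubeadm's standard --dry-run flag:

	# Validate the generated config without persisting any manifests or certificates.
	sudo /var/lib/minikube/binaries/v1.31.1/kubeadm init \
	  --config /var/tmp/minikube/kubeadm.yaml.new --dry-run >/dev/null && echo "config OK"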
	I0914 16:44:54.280174   16725 ssh_runner.go:195] Run: grep 192.168.39.189	control-plane.minikube.internal$ /etc/hosts
	I0914 16:44:54.283808   16725 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.189	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0914 16:44:54.295236   16725 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 16:44:54.407554   16725 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0914 16:44:54.423857   16725 certs.go:68] Setting up /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/addons-996992 for IP: 192.168.39.189
	I0914 16:44:54.423885   16725 certs.go:194] generating shared ca certs ...
	I0914 16:44:54.423899   16725 certs.go:226] acquiring lock for ca certs: {Name:mkb663a3180967f5f94f0c355b2cd55067394331 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 16:44:54.424055   16725 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19643-8806/.minikube/ca.key
	I0914 16:44:54.653328   16725 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19643-8806/.minikube/ca.crt ...
	I0914 16:44:54.653357   16725 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19643-8806/.minikube/ca.crt: {Name:mk83d7136889857d4ed25b0dba1b2df29c745e1d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 16:44:54.653511   16725 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19643-8806/.minikube/ca.key ...
	I0914 16:44:54.653521   16725 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19643-8806/.minikube/ca.key: {Name:mkf6a9abc7e34a97c99f2a5ec51dc983ba6352f9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 16:44:54.653592   16725 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19643-8806/.minikube/proxy-client-ca.key
	I0914 16:44:54.763073   16725 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19643-8806/.minikube/proxy-client-ca.crt ...
	I0914 16:44:54.763103   16725 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19643-8806/.minikube/proxy-client-ca.crt: {Name:mk4ef09caad655cf68088badaf279bd208978abd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 16:44:54.763267   16725 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19643-8806/.minikube/proxy-client-ca.key ...
	I0914 16:44:54.763279   16725 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19643-8806/.minikube/proxy-client-ca.key: {Name:mk3a507b5dffcb94432777f7f3e5733be1c0f3d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 16:44:54.763357   16725 certs.go:256] generating profile certs ...
	I0914 16:44:54.763409   16725 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/addons-996992/client.key
	I0914 16:44:54.763424   16725 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/addons-996992/client.crt with IP's: []
	I0914 16:44:54.910505   16725 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/addons-996992/client.crt ...
	I0914 16:44:54.910543   16725 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/addons-996992/client.crt: {Name:mk09179ed269a97b87aa12bc79284cfddef8c5e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 16:44:54.910700   16725 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/addons-996992/client.key ...
	I0914 16:44:54.910712   16725 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/addons-996992/client.key: {Name:mk74eedc746dd9fd7a750c2f3d02305cb8619c8f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 16:44:54.910777   16725 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/addons-996992/apiserver.key.75aeddca
	I0914 16:44:54.910796   16725 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/addons-996992/apiserver.crt.75aeddca with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.189]
	I0914 16:44:55.208240   16725 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/addons-996992/apiserver.crt.75aeddca ...
	I0914 16:44:55.208270   16725 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/addons-996992/apiserver.crt.75aeddca: {Name:mka09606e42dd1ecc4ea29944564740a07d14b63 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 16:44:55.208415   16725 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/addons-996992/apiserver.key.75aeddca ...
	I0914 16:44:55.208427   16725 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/addons-996992/apiserver.key.75aeddca: {Name:mkbcdd45d86dc41d397758dcbac5534936ad83b3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 16:44:55.208527   16725 certs.go:381] copying /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/addons-996992/apiserver.crt.75aeddca -> /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/addons-996992/apiserver.crt
	I0914 16:44:55.208613   16725 certs.go:385] copying /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/addons-996992/apiserver.key.75aeddca -> /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/addons-996992/apiserver.key
	I0914 16:44:55.208661   16725 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/addons-996992/proxy-client.key
	I0914 16:44:55.208677   16725 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/addons-996992/proxy-client.crt with IP's: []
	I0914 16:44:55.276375   16725 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/addons-996992/proxy-client.crt ...
	I0914 16:44:55.276402   16725 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/addons-996992/proxy-client.crt: {Name:mkf139a671d75a23c54568782300fb890e1af9cc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 16:44:55.276575   16725 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/addons-996992/proxy-client.key ...
	I0914 16:44:55.276588   16725 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/addons-996992/proxy-client.key: {Name:mkf3356386ba33ec54d5db11fd3dfe25bd2233d6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 16:44:55.276748   16725 certs.go:484] found cert: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca-key.pem (1679 bytes)
	I0914 16:44:55.276779   16725 certs.go:484] found cert: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca.pem (1082 bytes)
	I0914 16:44:55.276803   16725 certs.go:484] found cert: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/cert.pem (1123 bytes)
	I0914 16:44:55.276825   16725 certs.go:484] found cert: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/key.pem (1675 bytes)
	I0914 16:44:55.277400   16725 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0914 16:44:55.303836   16725 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0914 16:44:55.325577   16725 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0914 16:44:55.348012   16725 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0914 16:44:55.371496   16725 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/addons-996992/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0914 16:44:55.393703   16725 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/addons-996992/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0914 16:44:55.416084   16725 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/addons-996992/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0914 16:44:55.438231   16725 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/addons-996992/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0914 16:44:55.461207   16725 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0914 16:44:55.484035   16725 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0914 16:44:55.499790   16725 ssh_runner.go:195] Run: openssl version
	I0914 16:44:55.505113   16725 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0914 16:44:55.515170   16725 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0914 16:44:55.519587   16725 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 14 16:44 /usr/share/ca-certificates/minikubeCA.pem
	I0914 16:44:55.519665   16725 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0914 16:44:55.525286   16725 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
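The b5213941.0 link name above is not arbitrary: it is the OpenSSL subject hash of minikubeCA.pem, which is how OpenSSL looks CA files up in /etc/ssl/certs. Recomputing it makes the two commands above easier to read (a sketch; paths as in the log):

	# The hash-named symlink is what OpenSSL's CA lookup expects: <subject-hash>.0 -> certificate.
	sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"   # yields b5213941.0 here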
	I0914 16:44:55.535581   16725 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0914 16:44:55.539357   16725 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0914 16:44:55.539419   16725 kubeadm.go:392] StartCluster: {Name:addons-996992 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19643/minikube-v1.34.0-1726281733-19643-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726281268-19643@sha256:cce8be0c1ac4e3d852132008ef1cc1dcf5b79f708d025db83f146ae65db32e8e Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 C
lusterName:addons-996992 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.189 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType
:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0914 16:44:55.539594   16725 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0914 16:44:55.539672   16725 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0914 16:44:55.575978   16725 cri.go:89] found id: ""
	I0914 16:44:55.576057   16725 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0914 16:44:55.585788   16725 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0914 16:44:55.595409   16725 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0914 16:44:55.604391   16725 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0914 16:44:55.604417   16725 kubeadm.go:157] found existing configuration files:
	
	I0914 16:44:55.604464   16725 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0914 16:44:55.612932   16725 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0914 16:44:55.613006   16725 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0914 16:44:55.621580   16725 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0914 16:44:55.629773   16725 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0914 16:44:55.629834   16725 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0914 16:44:55.638432   16725 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0914 16:44:55.646743   16725 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0914 16:44:55.646820   16725 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0914 16:44:55.655625   16725 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0914 16:44:55.663901   16725 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0914 16:44:55.663966   16725 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0914 16:44:55.672657   16725 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0914 16:44:55.725872   16725 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0914 16:44:55.725960   16725 kubeadm.go:310] [preflight] Running pre-flight checks
	I0914 16:44:55.830107   16725 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0914 16:44:55.830268   16725 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0914 16:44:55.830418   16725 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0914 16:44:55.839067   16725 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0914 16:44:55.872082   16725 out.go:235]   - Generating certificates and keys ...
	I0914 16:44:55.872184   16725 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0914 16:44:55.872270   16725 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0914 16:44:56.094669   16725 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0914 16:44:56.228851   16725 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0914 16:44:56.361198   16725 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0914 16:44:56.439341   16725 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0914 16:44:56.528538   16725 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0914 16:44:56.528694   16725 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-996992 localhost] and IPs [192.168.39.189 127.0.0.1 ::1]
	I0914 16:44:56.706339   16725 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0914 16:44:56.706543   16725 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-996992 localhost] and IPs [192.168.39.189 127.0.0.1 ::1]
	I0914 16:44:56.783275   16725 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0914 16:44:56.956298   16725 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0914 16:44:57.088304   16725 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0914 16:44:57.088427   16725 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0914 16:44:57.464241   16725 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0914 16:44:57.635302   16725 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0914 16:44:57.910383   16725 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0914 16:44:58.013201   16725 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0914 16:44:58.248188   16725 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0914 16:44:58.250774   16725 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0914 16:44:58.253067   16725 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0914 16:44:58.254997   16725 out.go:235]   - Booting up control plane ...
	I0914 16:44:58.255104   16725 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0914 16:44:58.255191   16725 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0914 16:44:58.255668   16725 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0914 16:44:58.271031   16725 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0914 16:44:58.280477   16725 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0914 16:44:58.280530   16725 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0914 16:44:58.407134   16725 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0914 16:44:58.407301   16725 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0914 16:44:58.908397   16725 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.392958ms
	I0914 16:44:58.908509   16725 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0914 16:45:04.906474   16725 kubeadm.go:310] [api-check] The API server is healthy after 6.002177937s
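Both gates above are plain HTTP probes and can be reproduced by hand on the node; a sketch (the kubelet healthz URL is the one kubeadm prints above, while the API server endpoint is an assumption based on the advertise address and bindPort in the config earlier):

	# kubelet health, as polled by [kubelet-check]
	curl -sf http://127.0.0.1:10248/healthz && echo "kubelet healthy"
	# API server health on the advertise address/port from the kubeadm config
	curl -skf https://192.168.39.189:8443/healthz && echo "apiserver healthy"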
	I0914 16:45:04.924613   16725 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0914 16:45:04.939822   16725 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0914 16:45:04.973453   16725 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0914 16:45:04.973676   16725 kubeadm.go:310] [mark-control-plane] Marking the node addons-996992 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0914 16:45:04.986235   16725 kubeadm.go:310] [bootstrap-token] Using token: shp2dh.uruxonhtmw8h7ze1
	I0914 16:45:04.987488   16725 out.go:235]   - Configuring RBAC rules ...
	I0914 16:45:04.987689   16725 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0914 16:45:04.996042   16725 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0914 16:45:05.007370   16725 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0914 16:45:05.010610   16725 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0914 16:45:05.017711   16725 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0914 16:45:05.022294   16725 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0914 16:45:05.314010   16725 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0914 16:45:05.751385   16725 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0914 16:45:06.313096   16725 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0914 16:45:06.313132   16725 kubeadm.go:310] 
	I0914 16:45:06.313225   16725 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0914 16:45:06.313238   16725 kubeadm.go:310] 
	I0914 16:45:06.313395   16725 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0914 16:45:06.313413   16725 kubeadm.go:310] 
	I0914 16:45:06.313440   16725 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0914 16:45:06.313497   16725 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0914 16:45:06.313558   16725 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0914 16:45:06.313572   16725 kubeadm.go:310] 
	I0914 16:45:06.313771   16725 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0914 16:45:06.313800   16725 kubeadm.go:310] 
	I0914 16:45:06.313867   16725 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0914 16:45:06.313881   16725 kubeadm.go:310] 
	I0914 16:45:06.313921   16725 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0914 16:45:06.314006   16725 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0914 16:45:06.314098   16725 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0914 16:45:06.314108   16725 kubeadm.go:310] 
	I0914 16:45:06.314233   16725 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0914 16:45:06.314351   16725 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0914 16:45:06.314360   16725 kubeadm.go:310] 
	I0914 16:45:06.314447   16725 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token shp2dh.uruxonhtmw8h7ze1 \
	I0914 16:45:06.314568   16725 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:30a8b6e98108730ec92a246ad3f4e746211e79f910581e46cd69c4cd78384600 \
	I0914 16:45:06.314616   16725 kubeadm.go:310] 	--control-plane 
	I0914 16:45:06.314625   16725 kubeadm.go:310] 
	I0914 16:45:06.314722   16725 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0914 16:45:06.314730   16725 kubeadm.go:310] 
	I0914 16:45:06.314828   16725 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token shp2dh.uruxonhtmw8h7ze1 \
	I0914 16:45:06.314969   16725 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:30a8b6e98108730ec92a246ad3f4e746211e79f910581e46cd69c4cd78384600 
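The --discovery-token-ca-cert-hash in the join commands above is the SHA-256 of the cluster CA's public key. It can be recomputed from the CA under the certificatesDir used in this run (/var/lib/minikube/certs, per the config printed earlier) with the standard kubeadm recipe; a sketch:

	# Recompute the discovery token CA cert hash; it should match sha256:30a8b6e9... above.
	openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	  | openssl rsa -pubin -outform der 2>/dev/null \
	  | openssl dgst -sha256 -hex | sed 's/^.* //'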
	I0914 16:45:06.315496   16725 kubeadm.go:310] W0914 16:44:55.704880     823 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0914 16:45:06.315862   16725 kubeadm.go:310] W0914 16:44:55.705784     823 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0914 16:45:06.315978   16725 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0914 16:45:06.315991   16725 cni.go:84] Creating CNI manager for ""
	I0914 16:45:06.315997   16725 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0914 16:45:06.317740   16725 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0914 16:45:06.319057   16725 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0914 16:45:06.331920   16725 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
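The 496-byte /etc/cni/net.d/1-k8s.conflist scp'd above is minikube's bridge CNI configuration for the 10.244.0.0/16 pod CIDR chosen earlier. Roughly what such a conflist looks like (an illustrative sketch, not a byte-for-byte copy of the file minikube writes; the plugin fields are the standard bridge/host-local/portmap ones):

	# Illustration only: a minimal bridge CNI conflist for the pod CIDR used in this run.
	sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<-'EOF'
	{
	  "cniVersion": "1.0.0",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "isGateway": true,
	      "ipMasq": true,
	      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	    },
	    { "type": "portmap", "capabilities": { "portMappings": true } }
	  ]
	}
	EOF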
	I0914 16:45:06.353277   16725 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0914 16:45:06.353350   16725 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 16:45:06.353388   16725 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-996992 minikube.k8s.io/updated_at=2024_09_14T16_45_06_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=fbeeb744274463b05401c917e5ab21bbaf5ef95a minikube.k8s.io/name=addons-996992 minikube.k8s.io/primary=true
	I0914 16:45:06.375471   16725 ops.go:34] apiserver oom_adj: -16
	I0914 16:45:06.504882   16725 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 16:45:07.005141   16725 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 16:45:07.505774   16725 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 16:45:08.005050   16725 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 16:45:08.505830   16725 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 16:45:09.005575   16725 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 16:45:09.505807   16725 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 16:45:10.005492   16725 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 16:45:10.504986   16725 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 16:45:10.621672   16725 kubeadm.go:1113] duration metric: took 4.268383123s to wait for elevateKubeSystemPrivileges
	I0914 16:45:10.621717   16725 kubeadm.go:394] duration metric: took 15.082301818s to StartCluster
	I0914 16:45:10.621740   16725 settings.go:142] acquiring lock: {Name:mkee31cd57c3a91eff581c48ba7961460fb3e0b1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 16:45:10.621915   16725 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19643-8806/kubeconfig
	I0914 16:45:10.622431   16725 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19643-8806/kubeconfig: {Name:mk850f3e2f3c2ef1e81c795ae72e22e459e27140 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 16:45:10.622689   16725 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0914 16:45:10.622711   16725 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.189 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0914 16:45:10.622769   16725 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0914 16:45:10.622896   16725 config.go:182] Loaded profile config "addons-996992": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0914 16:45:10.622926   16725 addons.go:69] Setting helm-tiller=true in profile "addons-996992"
	I0914 16:45:10.622941   16725 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-996992"
	I0914 16:45:10.622950   16725 addons.go:69] Setting cloud-spanner=true in profile "addons-996992"
	I0914 16:45:10.622957   16725 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-996992"
	I0914 16:45:10.622897   16725 addons.go:69] Setting yakd=true in profile "addons-996992"
	I0914 16:45:10.622964   16725 addons.go:234] Setting addon cloud-spanner=true in "addons-996992"
	I0914 16:45:10.622970   16725 addons.go:69] Setting ingress-dns=true in profile "addons-996992"
	I0914 16:45:10.622976   16725 addons.go:234] Setting addon yakd=true in "addons-996992"
	I0914 16:45:10.622983   16725 addons.go:234] Setting addon ingress-dns=true in "addons-996992"
	I0914 16:45:10.622996   16725 host.go:66] Checking if "addons-996992" exists ...
	I0914 16:45:10.623004   16725 host.go:66] Checking if "addons-996992" exists ...
	I0914 16:45:10.623021   16725 host.go:66] Checking if "addons-996992" exists ...
	I0914 16:45:10.622933   16725 addons.go:69] Setting storage-provisioner=true in profile "addons-996992"
	I0914 16:45:10.623123   16725 addons.go:234] Setting addon storage-provisioner=true in "addons-996992"
	I0914 16:45:10.623142   16725 host.go:66] Checking if "addons-996992" exists ...
	I0914 16:45:10.623344   16725 addons.go:69] Setting volumesnapshots=true in profile "addons-996992"
	I0914 16:45:10.623366   16725 addons.go:234] Setting addon volumesnapshots=true in "addons-996992"
	I0914 16:45:10.623392   16725 host.go:66] Checking if "addons-996992" exists ...
	I0914 16:45:10.623393   16725 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 16:45:10.623426   16725 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 16:45:10.623459   16725 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 16:45:10.623483   16725 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 16:45:10.623506   16725 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 16:45:10.623518   16725 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 16:45:10.622951   16725 addons.go:234] Setting addon helm-tiller=true in "addons-996992"
	I0914 16:45:10.622917   16725 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-996992"
	I0914 16:45:10.623622   16725 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-996992"
	I0914 16:45:10.622926   16725 addons.go:69] Setting registry=true in profile "addons-996992"
	I0914 16:45:10.623646   16725 addons.go:234] Setting addon registry=true in "addons-996992"
	I0914 16:45:10.622961   16725 addons.go:69] Setting ingress=true in profile "addons-996992"
	I0914 16:45:10.623658   16725 addons.go:234] Setting addon ingress=true in "addons-996992"
	I0914 16:45:10.623672   16725 addons.go:69] Setting volcano=true in profile "addons-996992"
	I0914 16:45:10.623683   16725 addons.go:234] Setting addon volcano=true in "addons-996992"
	I0914 16:45:10.622914   16725 addons.go:69] Setting inspektor-gadget=true in profile "addons-996992"
	I0914 16:45:10.623704   16725 addons.go:69] Setting default-storageclass=true in profile "addons-996992"
	I0914 16:45:10.623713   16725 addons.go:234] Setting addon inspektor-gadget=true in "addons-996992"
	I0914 16:45:10.623717   16725 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-996992"
	I0914 16:45:10.622909   16725 addons.go:69] Setting metrics-server=true in profile "addons-996992"
	I0914 16:45:10.623726   16725 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-996992"
	I0914 16:45:10.622926   16725 addons.go:69] Setting gcp-auth=true in profile "addons-996992"
	I0914 16:45:10.623734   16725 addons.go:234] Setting addon metrics-server=true in "addons-996992"
	I0914 16:45:10.623757   16725 mustload.go:65] Loading cluster: addons-996992
	I0914 16:45:10.623769   16725 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-996992"
	I0914 16:45:10.623852   16725 host.go:66] Checking if "addons-996992" exists ...
	I0914 16:45:10.623914   16725 host.go:66] Checking if "addons-996992" exists ...
	I0914 16:45:10.623984   16725 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 16:45:10.624008   16725 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 16:45:10.624067   16725 host.go:66] Checking if "addons-996992" exists ...
	I0914 16:45:10.624232   16725 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 16:45:10.624260   16725 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 16:45:10.624329   16725 host.go:66] Checking if "addons-996992" exists ...
	I0914 16:45:10.624403   16725 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 16:45:10.624403   16725 host.go:66] Checking if "addons-996992" exists ...
	I0914 16:45:10.624463   16725 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 16:45:10.624746   16725 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 16:45:10.624786   16725 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 16:45:10.624834   16725 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 16:45:10.624904   16725 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 16:45:10.625011   16725 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 16:45:10.625036   16725 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 16:45:10.625228   16725 config.go:182] Loaded profile config "addons-996992": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0914 16:45:10.625249   16725 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 16:45:10.625262   16725 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 16:45:10.625277   16725 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 16:45:10.625297   16725 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 16:45:10.625391   16725 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 16:45:10.625433   16725 host.go:66] Checking if "addons-996992" exists ...
	I0914 16:45:10.625392   16725 host.go:66] Checking if "addons-996992" exists ...
	I0914 16:45:10.625866   16725 host.go:66] Checking if "addons-996992" exists ...
	I0914 16:45:10.625912   16725 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 16:45:10.625973   16725 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 16:45:10.626017   16725 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 16:45:10.626051   16725 out.go:177] * Verifying Kubernetes components...
	I0914 16:45:10.626257   16725 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 16:45:10.626289   16725 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 16:45:10.626630   16725 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 16:45:10.626698   16725 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 16:45:10.631422   16725 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 16:45:10.643737   16725 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34453
	I0914 16:45:10.644067   16725 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39747
	I0914 16:45:10.644260   16725 main.go:141] libmachine: () Calling .GetVersion
	I0914 16:45:10.643976   16725 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44929
	I0914 16:45:10.644937   16725 main.go:141] libmachine: Using API Version  1
	I0914 16:45:10.644959   16725 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 16:45:10.645032   16725 main.go:141] libmachine: () Calling .GetVersion
	I0914 16:45:10.645109   16725 main.go:141] libmachine: () Calling .GetVersion
	I0914 16:45:10.645308   16725 main.go:141] libmachine: () Calling .GetMachineName
	I0914 16:45:10.645466   16725 main.go:141] libmachine: Using API Version  1
	I0914 16:45:10.645486   16725 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 16:45:10.645661   16725 main.go:141] libmachine: Using API Version  1
	I0914 16:45:10.645674   16725 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 16:45:10.645856   16725 main.go:141] libmachine: () Calling .GetMachineName
	I0914 16:45:10.645968   16725 main.go:141] libmachine: () Calling .GetMachineName
	I0914 16:45:10.646318   16725 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 16:45:10.646363   16725 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 16:45:10.646410   16725 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 16:45:10.646443   16725 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 16:45:10.658785   16725 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 16:45:10.658848   16725 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 16:45:10.659642   16725 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 16:45:10.659689   16725 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 16:45:10.668950   16725 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37795
	I0914 16:45:10.669202   16725 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42407
	I0914 16:45:10.673147   16725 main.go:141] libmachine: () Calling .GetVersion
	I0914 16:45:10.673249   16725 main.go:141] libmachine: () Calling .GetVersion
	I0914 16:45:10.674307   16725 main.go:141] libmachine: Using API Version  1
	I0914 16:45:10.674330   16725 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 16:45:10.674658   16725 main.go:141] libmachine: Using API Version  1
	I0914 16:45:10.674677   16725 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 16:45:10.674857   16725 main.go:141] libmachine: () Calling .GetMachineName
	I0914 16:45:10.675190   16725 main.go:141] libmachine: () Calling .GetMachineName
	I0914 16:45:10.675403   16725 main.go:141] libmachine: (addons-996992) Calling .GetState
	I0914 16:45:10.675458   16725 main.go:141] libmachine: (addons-996992) Calling .GetState
	I0914 16:45:10.680254   16725 addons.go:234] Setting addon default-storageclass=true in "addons-996992"
	I0914 16:45:10.680332   16725 host.go:66] Checking if "addons-996992" exists ...
	I0914 16:45:10.680709   16725 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 16:45:10.680747   16725 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 16:45:10.681169   16725 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-996992"
	I0914 16:45:10.681215   16725 host.go:66] Checking if "addons-996992" exists ...
	I0914 16:45:10.681572   16725 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 16:45:10.681620   16725 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 16:45:10.688239   16725 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44591
	I0914 16:45:10.688935   16725 main.go:141] libmachine: () Calling .GetVersion
	I0914 16:45:10.689788   16725 main.go:141] libmachine: Using API Version  1
	I0914 16:45:10.689818   16725 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 16:45:10.690304   16725 main.go:141] libmachine: () Calling .GetMachineName
	I0914 16:45:10.691113   16725 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 16:45:10.691159   16725 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 16:45:10.695403   16725 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41083
	I0914 16:45:10.695859   16725 main.go:141] libmachine: () Calling .GetVersion
	I0914 16:45:10.696143   16725 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41857
	I0914 16:45:10.697034   16725 main.go:141] libmachine: Using API Version  1
	I0914 16:45:10.697057   16725 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 16:45:10.697432   16725 main.go:141] libmachine: () Calling .GetMachineName
	I0914 16:45:10.698006   16725 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 16:45:10.698052   16725 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 16:45:10.698627   16725 main.go:141] libmachine: () Calling .GetVersion
	I0914 16:45:10.699204   16725 main.go:141] libmachine: Using API Version  1
	I0914 16:45:10.699227   16725 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 16:45:10.699701   16725 main.go:141] libmachine: () Calling .GetMachineName
	I0914 16:45:10.699944   16725 main.go:141] libmachine: (addons-996992) Calling .GetState
	I0914 16:45:10.700002   16725 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40407
	I0914 16:45:10.700177   16725 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35041
	I0914 16:45:10.701070   16725 main.go:141] libmachine: () Calling .GetVersion
	I0914 16:45:10.701617   16725 main.go:141] libmachine: Using API Version  1
	I0914 16:45:10.701642   16725 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 16:45:10.701707   16725 main.go:141] libmachine: () Calling .GetVersion
	I0914 16:45:10.702279   16725 main.go:141] libmachine: () Calling .GetMachineName
	I0914 16:45:10.702857   16725 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 16:45:10.702896   16725 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 16:45:10.703130   16725 main.go:141] libmachine: (addons-996992) Calling .DriverName
	I0914 16:45:10.703659   16725 main.go:141] libmachine: Using API Version  1
	I0914 16:45:10.703682   16725 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 16:45:10.704625   16725 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0914 16:45:10.705330   16725 main.go:141] libmachine: () Calling .GetMachineName
	I0914 16:45:10.706061   16725 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0914 16:45:10.706078   16725 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0914 16:45:10.706100   16725 main.go:141] libmachine: (addons-996992) Calling .GetSSHHostname
	I0914 16:45:10.706896   16725 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 16:45:10.706941   16725 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 16:45:10.709826   16725 main.go:141] libmachine: (addons-996992) DBG | domain addons-996992 has defined MAC address 52:54:00:dd:8c:90 in network mk-addons-996992
	I0914 16:45:10.710025   16725 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45181
	I0914 16:45:10.710585   16725 main.go:141] libmachine: (addons-996992) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:8c:90", ip: ""} in network mk-addons-996992: {Iface:virbr1 ExpiryTime:2024-09-14 17:44:42 +0000 UTC Type:0 Mac:52:54:00:dd:8c:90 Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:addons-996992 Clientid:01:52:54:00:dd:8c:90}
	I0914 16:45:10.710610   16725 main.go:141] libmachine: (addons-996992) DBG | domain addons-996992 has defined IP address 192.168.39.189 and MAC address 52:54:00:dd:8c:90 in network mk-addons-996992
	I0914 16:45:10.710663   16725 main.go:141] libmachine: () Calling .GetVersion
	I0914 16:45:10.710948   16725 main.go:141] libmachine: (addons-996992) Calling .GetSSHPort
	I0914 16:45:10.711126   16725 main.go:141] libmachine: (addons-996992) Calling .GetSSHKeyPath
	I0914 16:45:10.711257   16725 main.go:141] libmachine: (addons-996992) Calling .GetSSHUsername
	I0914 16:45:10.711463   16725 sshutil.go:53] new ssh client: &{IP:192.168.39.189 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/addons-996992/id_rsa Username:docker}
	I0914 16:45:10.712334   16725 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45145
	I0914 16:45:10.712445   16725 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44519
	I0914 16:45:10.712635   16725 main.go:141] libmachine: () Calling .GetVersion
	I0914 16:45:10.713188   16725 main.go:141] libmachine: Using API Version  1
	I0914 16:45:10.713212   16725 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 16:45:10.713557   16725 main.go:141] libmachine: () Calling .GetMachineName
	I0914 16:45:10.714114   16725 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 16:45:10.714187   16725 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 16:45:10.714670   16725 main.go:141] libmachine: () Calling .GetVersion
	I0914 16:45:10.715212   16725 main.go:141] libmachine: Using API Version  1
	I0914 16:45:10.715229   16725 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 16:45:10.715594   16725 main.go:141] libmachine: () Calling .GetMachineName
	I0914 16:45:10.716145   16725 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 16:45:10.716181   16725 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 16:45:10.718969   16725 main.go:141] libmachine: Using API Version  1
	I0914 16:45:10.718990   16725 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 16:45:10.719432   16725 main.go:141] libmachine: () Calling .GetMachineName
	I0914 16:45:10.721094   16725 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35323
	I0914 16:45:10.721588   16725 main.go:141] libmachine: () Calling .GetVersion
	I0914 16:45:10.722010   16725 main.go:141] libmachine: Using API Version  1
	I0914 16:45:10.722031   16725 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 16:45:10.723638   16725 main.go:141] libmachine: () Calling .GetMachineName
	I0914 16:45:10.724834   16725 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36847
	I0914 16:45:10.724994   16725 main.go:141] libmachine: (addons-996992) Calling .GetState
	I0914 16:45:10.725170   16725 main.go:141] libmachine: () Calling .GetVersion
	I0914 16:45:10.725465   16725 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44461
	I0914 16:45:10.727414   16725 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45239
	I0914 16:45:10.727417   16725 main.go:141] libmachine: (addons-996992) Calling .DriverName
	I0914 16:45:10.727415   16725 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44971
	I0914 16:45:10.727546   16725 main.go:141] libmachine: Using API Version  1
	I0914 16:45:10.727570   16725 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 16:45:10.727636   16725 main.go:141] libmachine: Making call to close driver server
	I0914 16:45:10.727648   16725 main.go:141] libmachine: (addons-996992) Calling .Close
	I0914 16:45:10.727899   16725 main.go:141] libmachine: (addons-996992) DBG | Closing plugin on server side
	I0914 16:45:10.727912   16725 main.go:141] libmachine: () Calling .GetVersion
	I0914 16:45:10.727934   16725 main.go:141] libmachine: Successfully made call to close driver server
	I0914 16:45:10.727946   16725 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 16:45:10.727954   16725 main.go:141] libmachine: Making call to close driver server
	I0914 16:45:10.727962   16725 main.go:141] libmachine: (addons-996992) Calling .Close
	I0914 16:45:10.728003   16725 main.go:141] libmachine: () Calling .GetVersion
	I0914 16:45:10.728073   16725 main.go:141] libmachine: () Calling .GetMachineName
	I0914 16:45:10.728123   16725 main.go:141] libmachine: () Calling .GetVersion
	I0914 16:45:10.728189   16725 main.go:141] libmachine: (addons-996992) DBG | Closing plugin on server side
	I0914 16:45:10.728222   16725 main.go:141] libmachine: Successfully made call to close driver server
	I0914 16:45:10.728238   16725 main.go:141] libmachine: Making call to close connection to plugin binary
	W0914 16:45:10.728338   16725 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0914 16:45:10.728897   16725 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 16:45:10.728950   16725 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 16:45:10.729209   16725 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40193
	I0914 16:45:10.729478   16725 main.go:141] libmachine: Using API Version  1
	I0914 16:45:10.729509   16725 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 16:45:10.729637   16725 main.go:141] libmachine: () Calling .GetVersion
	I0914 16:45:10.729966   16725 main.go:141] libmachine: Using API Version  1
	I0914 16:45:10.729987   16725 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 16:45:10.730120   16725 main.go:141] libmachine: Using API Version  1
	I0914 16:45:10.730139   16725 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 16:45:10.730398   16725 main.go:141] libmachine: () Calling .GetMachineName
	I0914 16:45:10.730596   16725 main.go:141] libmachine: (addons-996992) Calling .GetState
	I0914 16:45:10.730665   16725 main.go:141] libmachine: () Calling .GetMachineName
	I0914 16:45:10.731392   16725 main.go:141] libmachine: (addons-996992) Calling .GetState
	I0914 16:45:10.731538   16725 main.go:141] libmachine: Using API Version  1
	I0914 16:45:10.731557   16725 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 16:45:10.731611   16725 main.go:141] libmachine: () Calling .GetMachineName
	I0914 16:45:10.732178   16725 main.go:141] libmachine: (addons-996992) Calling .GetState
	I0914 16:45:10.732245   16725 main.go:141] libmachine: () Calling .GetMachineName
	I0914 16:45:10.732295   16725 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45633
	I0914 16:45:10.734574   16725 main.go:141] libmachine: (addons-996992) Calling .DriverName
	I0914 16:45:10.734579   16725 main.go:141] libmachine: () Calling .GetVersion
	I0914 16:45:10.734688   16725 main.go:141] libmachine: (addons-996992) Calling .DriverName
	I0914 16:45:10.734744   16725 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34311
	I0914 16:45:10.735001   16725 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 16:45:10.735046   16725 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 16:45:10.735825   16725 host.go:66] Checking if "addons-996992" exists ...
	I0914 16:45:10.736192   16725 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 16:45:10.736223   16725 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 16:45:10.736395   16725 main.go:141] libmachine: () Calling .GetVersion
	I0914 16:45:10.736576   16725 main.go:141] libmachine: Using API Version  1
	I0914 16:45:10.736592   16725 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 16:45:10.736948   16725 main.go:141] libmachine: () Calling .GetMachineName
	I0914 16:45:10.737179   16725 main.go:141] libmachine: Using API Version  1
	I0914 16:45:10.737197   16725 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 16:45:10.737562   16725 main.go:141] libmachine: () Calling .GetMachineName
	I0914 16:45:10.737591   16725 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 16:45:10.737664   16725 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0914 16:45:10.738728   16725 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0914 16:45:10.738746   16725 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0914 16:45:10.738765   16725 main.go:141] libmachine: (addons-996992) Calling .GetSSHHostname
	I0914 16:45:10.739421   16725 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0914 16:45:10.739440   16725 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0914 16:45:10.739456   16725 main.go:141] libmachine: (addons-996992) Calling .GetSSHHostname
	I0914 16:45:10.742843   16725 main.go:141] libmachine: (addons-996992) DBG | domain addons-996992 has defined MAC address 52:54:00:dd:8c:90 in network mk-addons-996992
	I0914 16:45:10.743195   16725 main.go:141] libmachine: (addons-996992) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:8c:90", ip: ""} in network mk-addons-996992: {Iface:virbr1 ExpiryTime:2024-09-14 17:44:42 +0000 UTC Type:0 Mac:52:54:00:dd:8c:90 Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:addons-996992 Clientid:01:52:54:00:dd:8c:90}
	I0914 16:45:10.743228   16725 main.go:141] libmachine: (addons-996992) DBG | domain addons-996992 has defined IP address 192.168.39.189 and MAC address 52:54:00:dd:8c:90 in network mk-addons-996992
	I0914 16:45:10.743515   16725 main.go:141] libmachine: (addons-996992) Calling .GetSSHPort
	I0914 16:45:10.743739   16725 main.go:141] libmachine: (addons-996992) Calling .GetSSHKeyPath
	I0914 16:45:10.743928   16725 main.go:141] libmachine: (addons-996992) Calling .GetSSHUsername
	I0914 16:45:10.744098   16725 sshutil.go:53] new ssh client: &{IP:192.168.39.189 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/addons-996992/id_rsa Username:docker}
	I0914 16:45:10.744454   16725 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42957
	I0914 16:45:10.744602   16725 main.go:141] libmachine: (addons-996992) DBG | domain addons-996992 has defined MAC address 52:54:00:dd:8c:90 in network mk-addons-996992
	I0914 16:45:10.744871   16725 main.go:141] libmachine: (addons-996992) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:8c:90", ip: ""} in network mk-addons-996992: {Iface:virbr1 ExpiryTime:2024-09-14 17:44:42 +0000 UTC Type:0 Mac:52:54:00:dd:8c:90 Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:addons-996992 Clientid:01:52:54:00:dd:8c:90}
	I0914 16:45:10.744902   16725 main.go:141] libmachine: (addons-996992) DBG | domain addons-996992 has defined IP address 192.168.39.189 and MAC address 52:54:00:dd:8c:90 in network mk-addons-996992
	I0914 16:45:10.745182   16725 main.go:141] libmachine: (addons-996992) Calling .GetSSHPort
	I0914 16:45:10.745420   16725 main.go:141] libmachine: (addons-996992) Calling .GetSSHKeyPath
	I0914 16:45:10.745569   16725 main.go:141] libmachine: (addons-996992) Calling .GetSSHUsername
	I0914 16:45:10.745740   16725 sshutil.go:53] new ssh client: &{IP:192.168.39.189 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/addons-996992/id_rsa Username:docker}
	I0914 16:45:10.746637   16725 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 16:45:10.746670   16725 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 16:45:10.746699   16725 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 16:45:10.746715   16725 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 16:45:10.747176   16725 main.go:141] libmachine: () Calling .GetVersion
	I0914 16:45:10.748001   16725 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44723
	I0914 16:45:10.748265   16725 main.go:141] libmachine: Using API Version  1
	I0914 16:45:10.748278   16725 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 16:45:10.748857   16725 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 16:45:10.748894   16725 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 16:45:10.749102   16725 main.go:141] libmachine: () Calling .GetMachineName
	I0914 16:45:10.749338   16725 main.go:141] libmachine: (addons-996992) Calling .GetState
	I0914 16:45:10.749619   16725 main.go:141] libmachine: () Calling .GetVersion
	I0914 16:45:10.750242   16725 main.go:141] libmachine: Using API Version  1
	I0914 16:45:10.750258   16725 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 16:45:10.750658   16725 main.go:141] libmachine: () Calling .GetMachineName
	I0914 16:45:10.751280   16725 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 16:45:10.751315   16725 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 16:45:10.751558   16725 main.go:141] libmachine: (addons-996992) Calling .DriverName
	I0914 16:45:10.753110   16725 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42829
	I0914 16:45:10.753540   16725 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0914 16:45:10.753566   16725 main.go:141] libmachine: () Calling .GetVersion
	I0914 16:45:10.754094   16725 main.go:141] libmachine: Using API Version  1
	I0914 16:45:10.754112   16725 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 16:45:10.754480   16725 main.go:141] libmachine: () Calling .GetMachineName
	I0914 16:45:10.754671   16725 main.go:141] libmachine: (addons-996992) Calling .GetState
	I0914 16:45:10.755075   16725 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0914 16:45:10.755092   16725 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0914 16:45:10.755111   16725 main.go:141] libmachine: (addons-996992) Calling .GetSSHHostname
	I0914 16:45:10.757604   16725 main.go:141] libmachine: (addons-996992) Calling .DriverName
	I0914 16:45:10.758799   16725 main.go:141] libmachine: (addons-996992) DBG | domain addons-996992 has defined MAC address 52:54:00:dd:8c:90 in network mk-addons-996992
	I0914 16:45:10.759063   16725 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0914 16:45:10.759379   16725 main.go:141] libmachine: (addons-996992) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:8c:90", ip: ""} in network mk-addons-996992: {Iface:virbr1 ExpiryTime:2024-09-14 17:44:42 +0000 UTC Type:0 Mac:52:54:00:dd:8c:90 Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:addons-996992 Clientid:01:52:54:00:dd:8c:90}
	I0914 16:45:10.759413   16725 main.go:141] libmachine: (addons-996992) DBG | domain addons-996992 has defined IP address 192.168.39.189 and MAC address 52:54:00:dd:8c:90 in network mk-addons-996992
	I0914 16:45:10.759591   16725 main.go:141] libmachine: (addons-996992) Calling .GetSSHPort
	I0914 16:45:10.759777   16725 main.go:141] libmachine: (addons-996992) Calling .GetSSHKeyPath
	I0914 16:45:10.759925   16725 main.go:141] libmachine: (addons-996992) Calling .GetSSHUsername
	I0914 16:45:10.760043   16725 sshutil.go:53] new ssh client: &{IP:192.168.39.189 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/addons-996992/id_rsa Username:docker}
	I0914 16:45:10.761413   16725 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0914 16:45:10.764401   16725 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33395
	I0914 16:45:10.764486   16725 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43943
	I0914 16:45:10.764653   16725 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0914 16:45:10.764874   16725 main.go:141] libmachine: () Calling .GetVersion
	I0914 16:45:10.765386   16725 main.go:141] libmachine: Using API Version  1
	I0914 16:45:10.765410   16725 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 16:45:10.765758   16725 main.go:141] libmachine: () Calling .GetMachineName
	I0914 16:45:10.765983   16725 main.go:141] libmachine: (addons-996992) Calling .GetState
	I0914 16:45:10.767246   16725 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0914 16:45:10.767268   16725 main.go:141] libmachine: () Calling .GetVersion
	I0914 16:45:10.768228   16725 main.go:141] libmachine: Using API Version  1
	I0914 16:45:10.768265   16725 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 16:45:10.768284   16725 main.go:141] libmachine: (addons-996992) Calling .DriverName
	I0914 16:45:10.768810   16725 main.go:141] libmachine: () Calling .GetMachineName
	I0914 16:45:10.769040   16725 main.go:141] libmachine: (addons-996992) Calling .GetState
	I0914 16:45:10.769522   16725 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I0914 16:45:10.769526   16725 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0914 16:45:10.770047   16725 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37075
	I0914 16:45:10.770470   16725 main.go:141] libmachine: () Calling .GetVersion
	I0914 16:45:10.770948   16725 main.go:141] libmachine: Using API Version  1
	I0914 16:45:10.770965   16725 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 16:45:10.771278   16725 main.go:141] libmachine: () Calling .GetMachineName
	I0914 16:45:10.771438   16725 main.go:141] libmachine: (addons-996992) Calling .DriverName
	I0914 16:45:10.772503   16725 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0914 16:45:10.772561   16725 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0914 16:45:10.773645   16725 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0914 16:45:10.773685   16725 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0914 16:45:10.773697   16725 main.go:141] libmachine: (addons-996992) Calling .DriverName
	I0914 16:45:10.774893   16725 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0914 16:45:10.775085   16725 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0914 16:45:10.775109   16725 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0914 16:45:10.775128   16725 main.go:141] libmachine: (addons-996992) Calling .GetSSHHostname
	I0914 16:45:10.775683   16725 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0914 16:45:10.775853   16725 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36781
	I0914 16:45:10.775979   16725 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0914 16:45:10.776073   16725 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0914 16:45:10.776095   16725 main.go:141] libmachine: (addons-996992) Calling .GetSSHHostname
	I0914 16:45:10.776267   16725 main.go:141] libmachine: () Calling .GetVersion
	I0914 16:45:10.776399   16725 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45253
	I0914 16:45:10.776756   16725 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0914 16:45:10.776773   16725 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0914 16:45:10.776776   16725 main.go:141] libmachine: () Calling .GetVersion
	I0914 16:45:10.776797   16725 main.go:141] libmachine: (addons-996992) Calling .GetSSHHostname
	I0914 16:45:10.777646   16725 main.go:141] libmachine: Using API Version  1
	I0914 16:45:10.777664   16725 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 16:45:10.778321   16725 main.go:141] libmachine: Using API Version  1
	I0914 16:45:10.778341   16725 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 16:45:10.778636   16725 main.go:141] libmachine: () Calling .GetMachineName
	I0914 16:45:10.779063   16725 main.go:141] libmachine: (addons-996992) Calling .GetState
	I0914 16:45:10.780072   16725 main.go:141] libmachine: (addons-996992) DBG | domain addons-996992 has defined MAC address 52:54:00:dd:8c:90 in network mk-addons-996992
	I0914 16:45:10.780437   16725 main.go:141] libmachine: (addons-996992) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:8c:90", ip: ""} in network mk-addons-996992: {Iface:virbr1 ExpiryTime:2024-09-14 17:44:42 +0000 UTC Type:0 Mac:52:54:00:dd:8c:90 Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:addons-996992 Clientid:01:52:54:00:dd:8c:90}
	I0914 16:45:10.780455   16725 main.go:141] libmachine: (addons-996992) DBG | domain addons-996992 has defined IP address 192.168.39.189 and MAC address 52:54:00:dd:8c:90 in network mk-addons-996992
	I0914 16:45:10.780479   16725 main.go:141] libmachine: () Calling .GetMachineName
	I0914 16:45:10.780653   16725 main.go:141] libmachine: (addons-996992) Calling .GetSSHPort
	I0914 16:45:10.780703   16725 main.go:141] libmachine: (addons-996992) Calling .GetState
	I0914 16:45:10.780834   16725 main.go:141] libmachine: (addons-996992) Calling .GetSSHKeyPath
	I0914 16:45:10.780938   16725 main.go:141] libmachine: (addons-996992) Calling .GetSSHUsername
	I0914 16:45:10.781043   16725 sshutil.go:53] new ssh client: &{IP:192.168.39.189 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/addons-996992/id_rsa Username:docker}
	I0914 16:45:10.781324   16725 main.go:141] libmachine: (addons-996992) Calling .DriverName
	I0914 16:45:10.782798   16725 out.go:177]   - Using image docker.io/registry:2.8.3
	I0914 16:45:10.784596   16725 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0914 16:45:10.784747   16725 main.go:141] libmachine: (addons-996992) DBG | domain addons-996992 has defined MAC address 52:54:00:dd:8c:90 in network mk-addons-996992
	I0914 16:45:10.784942   16725 main.go:141] libmachine: (addons-996992) Calling .DriverName
	I0914 16:45:10.785509   16725 main.go:141] libmachine: (addons-996992) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:8c:90", ip: ""} in network mk-addons-996992: {Iface:virbr1 ExpiryTime:2024-09-14 17:44:42 +0000 UTC Type:0 Mac:52:54:00:dd:8c:90 Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:addons-996992 Clientid:01:52:54:00:dd:8c:90}
	I0914 16:45:10.785544   16725 main.go:141] libmachine: (addons-996992) DBG | domain addons-996992 has defined IP address 192.168.39.189 and MAC address 52:54:00:dd:8c:90 in network mk-addons-996992
	I0914 16:45:10.785572   16725 main.go:141] libmachine: (addons-996992) DBG | domain addons-996992 has defined MAC address 52:54:00:dd:8c:90 in network mk-addons-996992
	I0914 16:45:10.785798   16725 main.go:141] libmachine: (addons-996992) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:8c:90", ip: ""} in network mk-addons-996992: {Iface:virbr1 ExpiryTime:2024-09-14 17:44:42 +0000 UTC Type:0 Mac:52:54:00:dd:8c:90 Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:addons-996992 Clientid:01:52:54:00:dd:8c:90}
	I0914 16:45:10.785836   16725 main.go:141] libmachine: (addons-996992) DBG | domain addons-996992 has defined IP address 192.168.39.189 and MAC address 52:54:00:dd:8c:90 in network mk-addons-996992
	I0914 16:45:10.786069   16725 main.go:141] libmachine: (addons-996992) Calling .GetSSHPort
	I0914 16:45:10.786108   16725 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0914 16:45:10.786123   16725 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0914 16:45:10.786130   16725 main.go:141] libmachine: (addons-996992) Calling .GetSSHPort
	I0914 16:45:10.786141   16725 main.go:141] libmachine: (addons-996992) Calling .GetSSHHostname
	I0914 16:45:10.786311   16725 main.go:141] libmachine: (addons-996992) Calling .GetSSHKeyPath
	I0914 16:45:10.786443   16725 main.go:141] libmachine: (addons-996992) Calling .GetSSHUsername
	I0914 16:45:10.786567   16725 sshutil.go:53] new ssh client: &{IP:192.168.39.189 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/addons-996992/id_rsa Username:docker}
	I0914 16:45:10.786865   16725 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38023
	I0914 16:45:10.786927   16725 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33139
	I0914 16:45:10.787442   16725 main.go:141] libmachine: () Calling .GetVersion
	I0914 16:45:10.787449   16725 main.go:141] libmachine: () Calling .GetVersion
	I0914 16:45:10.787928   16725 main.go:141] libmachine: Using API Version  1
	I0914 16:45:10.787944   16725 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 16:45:10.788067   16725 main.go:141] libmachine: Using API Version  1
	I0914 16:45:10.788078   16725 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 16:45:10.788460   16725 main.go:141] libmachine: () Calling .GetMachineName
	I0914 16:45:10.788499   16725 main.go:141] libmachine: () Calling .GetMachineName
	I0914 16:45:10.788727   16725 main.go:141] libmachine: (addons-996992) Calling .GetState
	I0914 16:45:10.788782   16725 main.go:141] libmachine: (addons-996992) Calling .GetState
	I0914 16:45:10.789352   16725 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0
	I0914 16:45:10.789703   16725 main.go:141] libmachine: (addons-996992) Calling .GetSSHKeyPath
	I0914 16:45:10.790004   16725 main.go:141] libmachine: (addons-996992) DBG | domain addons-996992 has defined MAC address 52:54:00:dd:8c:90 in network mk-addons-996992
	I0914 16:45:10.790285   16725 main.go:141] libmachine: (addons-996992) Calling .GetSSHUsername
	I0914 16:45:10.790558   16725 sshutil.go:53] new ssh client: &{IP:192.168.39.189 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/addons-996992/id_rsa Username:docker}
	I0914 16:45:10.790863   16725 main.go:141] libmachine: (addons-996992) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:8c:90", ip: ""} in network mk-addons-996992: {Iface:virbr1 ExpiryTime:2024-09-14 17:44:42 +0000 UTC Type:0 Mac:52:54:00:dd:8c:90 Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:addons-996992 Clientid:01:52:54:00:dd:8c:90}
	I0914 16:45:10.790882   16725 main.go:141] libmachine: (addons-996992) DBG | domain addons-996992 has defined IP address 192.168.39.189 and MAC address 52:54:00:dd:8c:90 in network mk-addons-996992
	I0914 16:45:10.791031   16725 main.go:141] libmachine: (addons-996992) Calling .GetSSHPort
	I0914 16:45:10.791217   16725 main.go:141] libmachine: (addons-996992) Calling .GetSSHKeyPath
	I0914 16:45:10.791288   16725 main.go:141] libmachine: (addons-996992) Calling .DriverName
	I0914 16:45:10.791539   16725 main.go:141] libmachine: (addons-996992) Calling .GetSSHUsername
	I0914 16:45:10.791700   16725 sshutil.go:53] new ssh client: &{IP:192.168.39.189 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/addons-996992/id_rsa Username:docker}
	I0914 16:45:10.791780   16725 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0914 16:45:10.791796   16725 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0914 16:45:10.791815   16725 main.go:141] libmachine: (addons-996992) Calling .GetSSHHostname
	I0914 16:45:10.793587   16725 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0914 16:45:10.793606   16725 main.go:141] libmachine: (addons-996992) Calling .DriverName
	I0914 16:45:10.794824   16725 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0914 16:45:10.794856   16725 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0914 16:45:10.794874   16725 main.go:141] libmachine: (addons-996992) Calling .GetSSHHostname
	I0914 16:45:10.795591   16725 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.23
	I0914 16:45:10.796399   16725 main.go:141] libmachine: (addons-996992) DBG | domain addons-996992 has defined MAC address 52:54:00:dd:8c:90 in network mk-addons-996992
	I0914 16:45:10.796850   16725 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0914 16:45:10.796866   16725 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0914 16:45:10.796869   16725 main.go:141] libmachine: (addons-996992) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:8c:90", ip: ""} in network mk-addons-996992: {Iface:virbr1 ExpiryTime:2024-09-14 17:44:42 +0000 UTC Type:0 Mac:52:54:00:dd:8c:90 Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:addons-996992 Clientid:01:52:54:00:dd:8c:90}
	I0914 16:45:10.796884   16725 main.go:141] libmachine: (addons-996992) Calling .GetSSHHostname
	I0914 16:45:10.796884   16725 main.go:141] libmachine: (addons-996992) DBG | domain addons-996992 has defined IP address 192.168.39.189 and MAC address 52:54:00:dd:8c:90 in network mk-addons-996992
	I0914 16:45:10.797475   16725 main.go:141] libmachine: (addons-996992) Calling .GetSSHPort
	I0914 16:45:10.797658   16725 main.go:141] libmachine: (addons-996992) Calling .GetSSHKeyPath
	I0914 16:45:10.797852   16725 main.go:141] libmachine: (addons-996992) Calling .GetSSHUsername
	I0914 16:45:10.798050   16725 sshutil.go:53] new ssh client: &{IP:192.168.39.189 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/addons-996992/id_rsa Username:docker}
	I0914 16:45:10.798253   16725 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36037
	I0914 16:45:10.798969   16725 main.go:141] libmachine: (addons-996992) DBG | domain addons-996992 has defined MAC address 52:54:00:dd:8c:90 in network mk-addons-996992
	I0914 16:45:10.799185   16725 main.go:141] libmachine: () Calling .GetVersion
	I0914 16:45:10.799677   16725 main.go:141] libmachine: Using API Version  1
	I0914 16:45:10.799700   16725 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 16:45:10.799747   16725 main.go:141] libmachine: (addons-996992) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:8c:90", ip: ""} in network mk-addons-996992: {Iface:virbr1 ExpiryTime:2024-09-14 17:44:42 +0000 UTC Type:0 Mac:52:54:00:dd:8c:90 Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:addons-996992 Clientid:01:52:54:00:dd:8c:90}
	I0914 16:45:10.799773   16725 main.go:141] libmachine: (addons-996992) DBG | domain addons-996992 has defined IP address 192.168.39.189 and MAC address 52:54:00:dd:8c:90 in network mk-addons-996992
	I0914 16:45:10.800030   16725 main.go:141] libmachine: () Calling .GetMachineName
	I0914 16:45:10.800161   16725 main.go:141] libmachine: (addons-996992) Calling .GetSSHPort
	I0914 16:45:10.800242   16725 main.go:141] libmachine: (addons-996992) Calling .GetState
	I0914 16:45:10.800507   16725 main.go:141] libmachine: (addons-996992) DBG | domain addons-996992 has defined MAC address 52:54:00:dd:8c:90 in network mk-addons-996992
	I0914 16:45:10.800594   16725 main.go:141] libmachine: (addons-996992) Calling .GetSSHKeyPath
	I0914 16:45:10.800777   16725 main.go:141] libmachine: (addons-996992) Calling .GetSSHUsername
	I0914 16:45:10.800785   16725 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40381
	I0914 16:45:10.800907   16725 sshutil.go:53] new ssh client: &{IP:192.168.39.189 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/addons-996992/id_rsa Username:docker}
	I0914 16:45:10.801232   16725 main.go:141] libmachine: (addons-996992) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:8c:90", ip: ""} in network mk-addons-996992: {Iface:virbr1 ExpiryTime:2024-09-14 17:44:42 +0000 UTC Type:0 Mac:52:54:00:dd:8c:90 Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:addons-996992 Clientid:01:52:54:00:dd:8c:90}
	I0914 16:45:10.801253   16725 main.go:141] libmachine: (addons-996992) DBG | domain addons-996992 has defined IP address 192.168.39.189 and MAC address 52:54:00:dd:8c:90 in network mk-addons-996992
	I0914 16:45:10.801443   16725 main.go:141] libmachine: (addons-996992) Calling .GetSSHPort
	I0914 16:45:10.801712   16725 main.go:141] libmachine: (addons-996992) Calling .GetSSHKeyPath
	I0914 16:45:10.801851   16725 main.go:141] libmachine: (addons-996992) Calling .GetSSHUsername
	I0914 16:45:10.801916   16725 main.go:141] libmachine: (addons-996992) Calling .DriverName
	I0914 16:45:10.802030   16725 sshutil.go:53] new ssh client: &{IP:192.168.39.189 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/addons-996992/id_rsa Username:docker}
	I0914 16:45:10.802669   16725 main.go:141] libmachine: () Calling .GetVersion
	I0914 16:45:10.803212   16725 main.go:141] libmachine: Using API Version  1
	I0914 16:45:10.803239   16725 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 16:45:10.803521   16725 main.go:141] libmachine: () Calling .GetMachineName
	I0914 16:45:10.803699   16725 main.go:141] libmachine: (addons-996992) Calling .GetState
	I0914 16:45:10.803742   16725 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0914 16:45:10.804878   16725 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0914 16:45:10.804895   16725 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0914 16:45:10.804911   16725 main.go:141] libmachine: (addons-996992) Calling .GetSSHHostname
	I0914 16:45:10.805093   16725 main.go:141] libmachine: (addons-996992) Calling .DriverName
	I0914 16:45:10.806643   16725 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0914 16:45:10.807521   16725 main.go:141] libmachine: (addons-996992) DBG | domain addons-996992 has defined MAC address 52:54:00:dd:8c:90 in network mk-addons-996992
	I0914 16:45:10.807876   16725 main.go:141] libmachine: (addons-996992) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:8c:90", ip: ""} in network mk-addons-996992: {Iface:virbr1 ExpiryTime:2024-09-14 17:44:42 +0000 UTC Type:0 Mac:52:54:00:dd:8c:90 Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:addons-996992 Clientid:01:52:54:00:dd:8c:90}
	I0914 16:45:10.807899   16725 main.go:141] libmachine: (addons-996992) DBG | domain addons-996992 has defined IP address 192.168.39.189 and MAC address 52:54:00:dd:8c:90 in network mk-addons-996992
	I0914 16:45:10.808075   16725 main.go:141] libmachine: (addons-996992) Calling .GetSSHPort
	I0914 16:45:10.808222   16725 main.go:141] libmachine: (addons-996992) Calling .GetSSHKeyPath
	I0914 16:45:10.808318   16725 main.go:141] libmachine: (addons-996992) Calling .GetSSHUsername
	I0914 16:45:10.808404   16725 sshutil.go:53] new ssh client: &{IP:192.168.39.189 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/addons-996992/id_rsa Username:docker}
	I0914 16:45:10.808855   16725 out.go:177]   - Using image docker.io/busybox:stable
	I0914 16:45:10.809861   16725 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0914 16:45:10.809873   16725 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0914 16:45:10.809885   16725 main.go:141] libmachine: (addons-996992) Calling .GetSSHHostname
	I0914 16:45:10.812131   16725 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35713
	I0914 16:45:10.812590   16725 main.go:141] libmachine: () Calling .GetVersion
	I0914 16:45:10.812888   16725 main.go:141] libmachine: (addons-996992) DBG | domain addons-996992 has defined MAC address 52:54:00:dd:8c:90 in network mk-addons-996992
	I0914 16:45:10.813075   16725 main.go:141] libmachine: Using API Version  1
	I0914 16:45:10.813094   16725 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 16:45:10.813367   16725 main.go:141] libmachine: (addons-996992) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:8c:90", ip: ""} in network mk-addons-996992: {Iface:virbr1 ExpiryTime:2024-09-14 17:44:42 +0000 UTC Type:0 Mac:52:54:00:dd:8c:90 Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:addons-996992 Clientid:01:52:54:00:dd:8c:90}
	I0914 16:45:10.813384   16725 main.go:141] libmachine: (addons-996992) DBG | domain addons-996992 has defined IP address 192.168.39.189 and MAC address 52:54:00:dd:8c:90 in network mk-addons-996992
	I0914 16:45:10.813580   16725 main.go:141] libmachine: (addons-996992) Calling .GetSSHPort
	I0914 16:45:10.813714   16725 main.go:141] libmachine: () Calling .GetMachineName
	I0914 16:45:10.813818   16725 main.go:141] libmachine: (addons-996992) Calling .GetSSHKeyPath
	I0914 16:45:10.813904   16725 main.go:141] libmachine: (addons-996992) Calling .GetState
	I0914 16:45:10.813982   16725 main.go:141] libmachine: (addons-996992) Calling .GetSSHUsername
	I0914 16:45:10.814121   16725 sshutil.go:53] new ssh client: &{IP:192.168.39.189 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/addons-996992/id_rsa Username:docker}
	I0914 16:45:10.815554   16725 main.go:141] libmachine: (addons-996992) Calling .DriverName
	I0914 16:45:10.815750   16725 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0914 16:45:10.815759   16725 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0914 16:45:10.815769   16725 main.go:141] libmachine: (addons-996992) Calling .GetSSHHostname
	I0914 16:45:10.819041   16725 main.go:141] libmachine: (addons-996992) DBG | domain addons-996992 has defined MAC address 52:54:00:dd:8c:90 in network mk-addons-996992
	I0914 16:45:10.819420   16725 main.go:141] libmachine: (addons-996992) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:8c:90", ip: ""} in network mk-addons-996992: {Iface:virbr1 ExpiryTime:2024-09-14 17:44:42 +0000 UTC Type:0 Mac:52:54:00:dd:8c:90 Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:addons-996992 Clientid:01:52:54:00:dd:8c:90}
	I0914 16:45:10.819448   16725 main.go:141] libmachine: (addons-996992) DBG | domain addons-996992 has defined IP address 192.168.39.189 and MAC address 52:54:00:dd:8c:90 in network mk-addons-996992
	I0914 16:45:10.819588   16725 main.go:141] libmachine: (addons-996992) Calling .GetSSHPort
	I0914 16:45:10.819749   16725 main.go:141] libmachine: (addons-996992) Calling .GetSSHKeyPath
	I0914 16:45:10.819895   16725 main.go:141] libmachine: (addons-996992) Calling .GetSSHUsername
	I0914 16:45:10.820000   16725 sshutil.go:53] new ssh client: &{IP:192.168.39.189 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/addons-996992/id_rsa Username:docker}
	I0914 16:45:11.053496   16725 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0914 16:45:11.053527   16725 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0914 16:45:11.097975   16725 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0914 16:45:11.098000   16725 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0914 16:45:11.124289   16725 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0914 16:45:11.124318   16725 ssh_runner.go:362] scp helm-tiller/helm-tiller-rbac.yaml --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0914 16:45:11.154793   16725 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0914 16:45:11.154823   16725 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0914 16:45:11.167635   16725 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0914 16:45:11.167664   16725 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0914 16:45:11.184834   16725 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0914 16:45:11.184857   16725 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0914 16:45:11.195055   16725 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0914 16:45:11.210697   16725 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0914 16:45:11.248543   16725 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0914 16:45:11.248570   16725 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0914 16:45:11.259633   16725 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0914 16:45:11.260194   16725 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0914 16:45:11.260211   16725 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0914 16:45:11.270256   16725 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0914 16:45:11.270287   16725 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0914 16:45:11.323366   16725 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0914 16:45:11.328598   16725 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0914 16:45:11.337140   16725 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0914 16:45:11.337159   16725 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0914 16:45:11.338365   16725 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0914 16:45:11.338383   16725 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0914 16:45:11.341295   16725 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0914 16:45:11.348260   16725 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0914 16:45:11.367015   16725 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0914 16:45:11.367039   16725 ssh_runner.go:362] scp helm-tiller/helm-tiller-svc.yaml --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0914 16:45:11.367119   16725 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0914 16:45:11.367130   16725 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0914 16:45:11.373728   16725 node_ready.go:35] waiting up to 6m0s for node "addons-996992" to be "Ready" ...
	I0914 16:45:11.378694   16725 node_ready.go:49] node "addons-996992" has status "Ready":"True"
	I0914 16:45:11.378721   16725 node_ready.go:38] duration metric: took 4.969428ms for node "addons-996992" to be "Ready" ...
	I0914 16:45:11.378733   16725 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 16:45:11.384893   16725 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-2m4xb" in "kube-system" namespace to be "Ready" ...
	I0914 16:45:11.413618   16725 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0914 16:45:11.413646   16725 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0914 16:45:11.437356   16725 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0914 16:45:11.437390   16725 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0914 16:45:11.454900   16725 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0914 16:45:11.454926   16725 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0914 16:45:11.476373   16725 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0914 16:45:11.486849   16725 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0914 16:45:11.516082   16725 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0914 16:45:11.516112   16725 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0914 16:45:11.529228   16725 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0914 16:45:11.529258   16725 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0914 16:45:11.532615   16725 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0914 16:45:11.532647   16725 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0914 16:45:11.572481   16725 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0914 16:45:11.572521   16725 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0914 16:45:11.615905   16725 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0914 16:45:11.615938   16725 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0914 16:45:11.665213   16725 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0914 16:45:11.685127   16725 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0914 16:45:11.685162   16725 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0914 16:45:11.707538   16725 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0914 16:45:11.707569   16725 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0914 16:45:11.735433   16725 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0914 16:45:11.795975   16725 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0914 16:45:11.796003   16725 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0914 16:45:11.860384   16725 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0914 16:45:11.860415   16725 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0914 16:45:11.885579   16725 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0914 16:45:11.885602   16725 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0914 16:45:11.939398   16725 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0914 16:45:11.939428   16725 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0914 16:45:12.071279   16725 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0914 16:45:12.076177   16725 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0914 16:45:12.076212   16725 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0914 16:45:12.193047   16725 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0914 16:45:12.193067   16725 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0914 16:45:12.350531   16725 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0914 16:45:12.350553   16725 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0914 16:45:12.571518   16725 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0914 16:45:12.589231   16725 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0914 16:45:12.589261   16725 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0914 16:45:12.822425   16725 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0914 16:45:12.822449   16725 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0914 16:45:12.981922   16725 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0914 16:45:12.981946   16725 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0914 16:45:13.289971   16725 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0914 16:45:13.289994   16725 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0914 16:45:13.432574   16725 pod_ready.go:103] pod "coredns-7c65d6cfc9-2m4xb" in "kube-system" namespace has status "Ready":"False"
	I0914 16:45:13.662491   16725 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0914 16:45:13.691925   16725 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.507036024s)
	I0914 16:45:13.691964   16725 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
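	The two lines above are the CoreDNS host-record step: minikube dumps the coredns ConfigMap, uses sed to splice a hosts block (mapping host.minikube.internal to the host-side IP 192.168.39.1 from this run) in front of the forward directive, then replaces the object. Stripped of the sudo and minikube binary paths, the same edit can be reproduced with plain kubectl; a minimal sketch, assuming the default Corefile layout shown in the logged command:

	    # fetch the live Corefile, insert a hosts{} stanza before "forward . /etc/resolv.conf", replace it
	    kubectl -n kube-system get configmap coredns -o yaml \
	      | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' \
	      | kubectl -n kube-system replace -f -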
	I0914 16:45:13.983899   16725 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (2.788807127s)
	I0914 16:45:13.983965   16725 main.go:141] libmachine: Making call to close driver server
	I0914 16:45:13.983978   16725 main.go:141] libmachine: (addons-996992) Calling .Close
	I0914 16:45:13.984306   16725 main.go:141] libmachine: Successfully made call to close driver server
	I0914 16:45:13.984324   16725 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 16:45:13.984333   16725 main.go:141] libmachine: Making call to close driver server
	I0914 16:45:13.984341   16725 main.go:141] libmachine: (addons-996992) Calling .Close
	I0914 16:45:13.984593   16725 main.go:141] libmachine: Successfully made call to close driver server
	I0914 16:45:13.984610   16725 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 16:45:14.263792   16725 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-996992" context rescaled to 1 replicas
	I0914 16:45:15.107060   16725 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (3.896323905s)
	I0914 16:45:15.107126   16725 main.go:141] libmachine: Making call to close driver server
	I0914 16:45:15.107142   16725 main.go:141] libmachine: (addons-996992) Calling .Close
	I0914 16:45:15.107451   16725 main.go:141] libmachine: Successfully made call to close driver server
	I0914 16:45:15.107471   16725 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 16:45:15.107471   16725 main.go:141] libmachine: (addons-996992) DBG | Closing plugin on server side
	I0914 16:45:15.107483   16725 main.go:141] libmachine: Making call to close driver server
	I0914 16:45:15.107491   16725 main.go:141] libmachine: (addons-996992) Calling .Close
	I0914 16:45:15.107708   16725 main.go:141] libmachine: Successfully made call to close driver server
	I0914 16:45:15.107721   16725 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 16:45:15.448055   16725 pod_ready.go:103] pod "coredns-7c65d6cfc9-2m4xb" in "kube-system" namespace has status "Ready":"False"
	I0914 16:45:15.802644   16725 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.542973946s)
	I0914 16:45:15.802658   16725 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (4.479250603s)
	I0914 16:45:15.802693   16725 main.go:141] libmachine: Making call to close driver server
	I0914 16:45:15.802710   16725 main.go:141] libmachine: (addons-996992) Calling .Close
	I0914 16:45:15.802698   16725 main.go:141] libmachine: Making call to close driver server
	I0914 16:45:15.802765   16725 main.go:141] libmachine: (addons-996992) Calling .Close
	I0914 16:45:15.803023   16725 main.go:141] libmachine: (addons-996992) DBG | Closing plugin on server side
	I0914 16:45:15.803044   16725 main.go:141] libmachine: Successfully made call to close driver server
	I0914 16:45:15.803090   16725 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 16:45:15.803101   16725 main.go:141] libmachine: Making call to close driver server
	I0914 16:45:15.803112   16725 main.go:141] libmachine: (addons-996992) Calling .Close
	I0914 16:45:15.803052   16725 main.go:141] libmachine: (addons-996992) DBG | Closing plugin on server side
	I0914 16:45:15.803049   16725 main.go:141] libmachine: Successfully made call to close driver server
	I0914 16:45:15.803183   16725 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 16:45:15.803193   16725 main.go:141] libmachine: Making call to close driver server
	I0914 16:45:15.803200   16725 main.go:141] libmachine: (addons-996992) Calling .Close
	I0914 16:45:15.803427   16725 main.go:141] libmachine: (addons-996992) DBG | Closing plugin on server side
	I0914 16:45:15.803495   16725 main.go:141] libmachine: (addons-996992) DBG | Closing plugin on server side
	I0914 16:45:15.803536   16725 main.go:141] libmachine: Successfully made call to close driver server
	I0914 16:45:15.803549   16725 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 16:45:15.804919   16725 main.go:141] libmachine: Successfully made call to close driver server
	I0914 16:45:15.804939   16725 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 16:45:17.807492   16725 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0914 16:45:17.807535   16725 main.go:141] libmachine: (addons-996992) Calling .GetSSHHostname
	I0914 16:45:17.810372   16725 main.go:141] libmachine: (addons-996992) DBG | domain addons-996992 has defined MAC address 52:54:00:dd:8c:90 in network mk-addons-996992
	I0914 16:45:17.810780   16725 main.go:141] libmachine: (addons-996992) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:8c:90", ip: ""} in network mk-addons-996992: {Iface:virbr1 ExpiryTime:2024-09-14 17:44:42 +0000 UTC Type:0 Mac:52:54:00:dd:8c:90 Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:addons-996992 Clientid:01:52:54:00:dd:8c:90}
	I0914 16:45:17.810816   16725 main.go:141] libmachine: (addons-996992) DBG | domain addons-996992 has defined IP address 192.168.39.189 and MAC address 52:54:00:dd:8c:90 in network mk-addons-996992
	I0914 16:45:17.810957   16725 main.go:141] libmachine: (addons-996992) Calling .GetSSHPort
	I0914 16:45:17.811136   16725 main.go:141] libmachine: (addons-996992) Calling .GetSSHKeyPath
	I0914 16:45:17.811330   16725 main.go:141] libmachine: (addons-996992) Calling .GetSSHUsername
	I0914 16:45:17.811482   16725 sshutil.go:53] new ssh client: &{IP:192.168.39.189 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/addons-996992/id_rsa Username:docker}
	I0914 16:45:17.922407   16725 pod_ready.go:103] pod "coredns-7c65d6cfc9-2m4xb" in "kube-system" namespace has status "Ready":"False"
	I0914 16:45:18.212498   16725 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0914 16:45:18.361996   16725 addons.go:234] Setting addon gcp-auth=true in "addons-996992"
	I0914 16:45:18.362064   16725 host.go:66] Checking if "addons-996992" exists ...
	I0914 16:45:18.362615   16725 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 16:45:18.362669   16725 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 16:45:18.378887   16725 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37503
	I0914 16:45:18.379466   16725 main.go:141] libmachine: () Calling .GetVersion
	I0914 16:45:18.380023   16725 main.go:141] libmachine: Using API Version  1
	I0914 16:45:18.380052   16725 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 16:45:18.380398   16725 main.go:141] libmachine: () Calling .GetMachineName
	I0914 16:45:18.380840   16725 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 16:45:18.380878   16725 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 16:45:18.397216   16725 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42849
	I0914 16:45:18.397733   16725 main.go:141] libmachine: () Calling .GetVersion
	I0914 16:45:18.398249   16725 main.go:141] libmachine: Using API Version  1
	I0914 16:45:18.398279   16725 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 16:45:18.398627   16725 main.go:141] libmachine: () Calling .GetMachineName
	I0914 16:45:18.398815   16725 main.go:141] libmachine: (addons-996992) Calling .GetState
	I0914 16:45:18.400541   16725 main.go:141] libmachine: (addons-996992) Calling .DriverName
	I0914 16:45:18.400765   16725 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0914 16:45:18.400791   16725 main.go:141] libmachine: (addons-996992) Calling .GetSSHHostname
	I0914 16:45:18.403800   16725 main.go:141] libmachine: (addons-996992) DBG | domain addons-996992 has defined MAC address 52:54:00:dd:8c:90 in network mk-addons-996992
	I0914 16:45:18.404197   16725 main.go:141] libmachine: (addons-996992) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:8c:90", ip: ""} in network mk-addons-996992: {Iface:virbr1 ExpiryTime:2024-09-14 17:44:42 +0000 UTC Type:0 Mac:52:54:00:dd:8c:90 Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:addons-996992 Clientid:01:52:54:00:dd:8c:90}
	I0914 16:45:18.404228   16725 main.go:141] libmachine: (addons-996992) DBG | domain addons-996992 has defined IP address 192.168.39.189 and MAC address 52:54:00:dd:8c:90 in network mk-addons-996992
	I0914 16:45:18.404369   16725 main.go:141] libmachine: (addons-996992) Calling .GetSSHPort
	I0914 16:45:18.404558   16725 main.go:141] libmachine: (addons-996992) Calling .GetSSHKeyPath
	I0914 16:45:18.404701   16725 main.go:141] libmachine: (addons-996992) Calling .GetSSHUsername
	I0914 16:45:18.404877   16725 sshutil.go:53] new ssh client: &{IP:192.168.39.189 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/addons-996992/id_rsa Username:docker}
	I0914 16:45:19.293405   16725 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (7.964772062s)
	I0914 16:45:19.293466   16725 main.go:141] libmachine: Making call to close driver server
	I0914 16:45:19.293469   16725 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (7.952144922s)
	I0914 16:45:19.293515   16725 main.go:141] libmachine: Making call to close driver server
	I0914 16:45:19.293537   16725 main.go:141] libmachine: (addons-996992) Calling .Close
	I0914 16:45:19.293479   16725 main.go:141] libmachine: (addons-996992) Calling .Close
	I0914 16:45:19.293535   16725 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (7.945253778s)
	I0914 16:45:19.293646   16725 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (7.817244353s)
	I0914 16:45:19.293653   16725 main.go:141] libmachine: Making call to close driver server
	I0914 16:45:19.293667   16725 main.go:141] libmachine: (addons-996992) Calling .Close
	I0914 16:45:19.293671   16725 main.go:141] libmachine: Making call to close driver server
	I0914 16:45:19.293682   16725 main.go:141] libmachine: (addons-996992) Calling .Close
	I0914 16:45:19.293679   16725 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (7.806797648s)
	I0914 16:45:19.293729   16725 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (7.628484398s)
	I0914 16:45:19.293741   16725 main.go:141] libmachine: Making call to close driver server
	I0914 16:45:19.293749   16725 main.go:141] libmachine: Making call to close driver server
	I0914 16:45:19.293760   16725 main.go:141] libmachine: (addons-996992) Calling .Close
	I0914 16:45:19.293762   16725 main.go:141] libmachine: (addons-996992) Calling .Close
	I0914 16:45:19.293784   16725 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (7.558321173s)
	I0914 16:45:19.293801   16725 main.go:141] libmachine: Making call to close driver server
	I0914 16:45:19.293811   16725 main.go:141] libmachine: (addons-996992) Calling .Close
	I0914 16:45:19.293887   16725 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (7.222576723s)
	W0914 16:45:19.293930   16725 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0914 16:45:19.293976   16725 retry.go:31] will retry after 361.189184ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
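	The retry above is an ordering race rather than a broken manifest: csi-hostpath-snapshotclass.yaml creates a VolumeSnapshotClass in the same kubectl apply that installs the snapshot.storage.k8s.io CRDs, so the first attempt can reach the API server before the new kinds are registered ("ensure CRDs are installed first"); the later apply --force pass at 16:45:19.655 is what lets it converge. When applying these manifests by hand, the race can be avoided by waiting for the CRDs to become established before creating objects of the new kinds; a sketch, assuming the same addon YAMLs are available locally:

	    # create the snapshot CRDs first and wait for the API server to register them
	    kubectl apply -f snapshot.storage.k8s.io_volumesnapshotclasses.yaml \
	                  -f snapshot.storage.k8s.io_volumesnapshotcontents.yaml \
	                  -f snapshot.storage.k8s.io_volumesnapshots.yaml
	    kubectl wait --for=condition=established --timeout=60s \
	      crd/volumesnapshotclasses.snapshot.storage.k8s.io \
	      crd/volumesnapshotcontents.snapshot.storage.k8s.io \
	      crd/volumesnapshots.snapshot.storage.k8s.io
	    # only then apply the objects that depend on those CRDs
	    kubectl apply -f csi-hostpath-snapshotclass.yaml \
	                  -f rbac-volume-snapshot-controller.yaml \
	                  -f volume-snapshot-controller-deployment.yaml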
	I0914 16:45:19.294023   16725 main.go:141] libmachine: (addons-996992) DBG | Closing plugin on server side
	I0914 16:45:19.294024   16725 main.go:141] libmachine: Successfully made call to close driver server
	I0914 16:45:19.294035   16725 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 16:45:19.294042   16725 main.go:141] libmachine: Making call to close driver server
	I0914 16:45:19.294038   16725 main.go:141] libmachine: (addons-996992) DBG | Closing plugin on server side
	I0914 16:45:19.294048   16725 main.go:141] libmachine: (addons-996992) Calling .Close
	I0914 16:45:19.294054   16725 main.go:141] libmachine: Successfully made call to close driver server
	I0914 16:45:19.294066   16725 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 16:45:19.294075   16725 main.go:141] libmachine: Making call to close driver server
	I0914 16:45:19.294081   16725 main.go:141] libmachine: (addons-996992) Calling .Close
	I0914 16:45:19.294098   16725 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (6.722532317s)
	I0914 16:45:19.294126   16725 main.go:141] libmachine: Making call to close driver server
	I0914 16:45:19.294139   16725 main.go:141] libmachine: (addons-996992) Calling .Close
	I0914 16:45:19.294145   16725 main.go:141] libmachine: (addons-996992) DBG | Closing plugin on server side
	I0914 16:45:19.294181   16725 main.go:141] libmachine: Successfully made call to close driver server
	I0914 16:45:19.294190   16725 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 16:45:19.294128   16725 main.go:141] libmachine: Successfully made call to close driver server
	I0914 16:45:19.294211   16725 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 16:45:19.294219   16725 main.go:141] libmachine: Making call to close driver server
	I0914 16:45:19.294225   16725 main.go:141] libmachine: (addons-996992) Calling .Close
	I0914 16:45:19.294243   16725 main.go:141] libmachine: (addons-996992) DBG | Closing plugin on server side
	I0914 16:45:19.294268   16725 main.go:141] libmachine: (addons-996992) DBG | Closing plugin on server side
	I0914 16:45:19.294198   16725 main.go:141] libmachine: Making call to close driver server
	I0914 16:45:19.294284   16725 main.go:141] libmachine: (addons-996992) Calling .Close
	I0914 16:45:19.294288   16725 main.go:141] libmachine: Successfully made call to close driver server
	I0914 16:45:19.294296   16725 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 16:45:19.294304   16725 main.go:141] libmachine: Making call to close driver server
	I0914 16:45:19.294311   16725 main.go:141] libmachine: (addons-996992) Calling .Close
	I0914 16:45:19.294338   16725 main.go:141] libmachine: Successfully made call to close driver server
	I0914 16:45:19.294352   16725 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 16:45:19.294363   16725 addons.go:475] Verifying addon metrics-server=true in "addons-996992"
	I0914 16:45:19.294368   16725 main.go:141] libmachine: (addons-996992) DBG | Closing plugin on server side
	I0914 16:45:19.294386   16725 main.go:141] libmachine: Successfully made call to close driver server
	I0914 16:45:19.294392   16725 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 16:45:19.294399   16725 main.go:141] libmachine: Making call to close driver server
	I0914 16:45:19.294405   16725 main.go:141] libmachine: (addons-996992) Calling .Close
	I0914 16:45:19.294869   16725 main.go:141] libmachine: (addons-996992) DBG | Closing plugin on server side
	I0914 16:45:19.294897   16725 main.go:141] libmachine: Successfully made call to close driver server
	I0914 16:45:19.294903   16725 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 16:45:19.294910   16725 main.go:141] libmachine: Making call to close driver server
	I0914 16:45:19.294916   16725 main.go:141] libmachine: (addons-996992) Calling .Close
	I0914 16:45:19.294965   16725 main.go:141] libmachine: (addons-996992) DBG | Closing plugin on server side
	I0914 16:45:19.294985   16725 main.go:141] libmachine: Successfully made call to close driver server
	I0914 16:45:19.294993   16725 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 16:45:19.295199   16725 main.go:141] libmachine: (addons-996992) DBG | Closing plugin on server side
	I0914 16:45:19.295218   16725 main.go:141] libmachine: (addons-996992) DBG | Closing plugin on server side
	I0914 16:45:19.295240   16725 main.go:141] libmachine: Successfully made call to close driver server
	I0914 16:45:19.295246   16725 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 16:45:19.297056   16725 main.go:141] libmachine: (addons-996992) DBG | Closing plugin on server side
	I0914 16:45:19.297087   16725 main.go:141] libmachine: Successfully made call to close driver server
	I0914 16:45:19.297093   16725 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 16:45:19.297100   16725 main.go:141] libmachine: Making call to close driver server
	I0914 16:45:19.297106   16725 main.go:141] libmachine: (addons-996992) Calling .Close
	I0914 16:45:19.297194   16725 main.go:141] libmachine: (addons-996992) DBG | Closing plugin on server side
	I0914 16:45:19.297214   16725 main.go:141] libmachine: Successfully made call to close driver server
	I0914 16:45:19.297221   16725 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 16:45:19.297458   16725 main.go:141] libmachine: Successfully made call to close driver server
	I0914 16:45:19.297469   16725 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 16:45:19.297479   16725 addons.go:475] Verifying addon ingress=true in "addons-996992"
	I0914 16:45:19.297608   16725 main.go:141] libmachine: (addons-996992) DBG | Closing plugin on server side
	I0914 16:45:19.297828   16725 main.go:141] libmachine: (addons-996992) DBG | Closing plugin on server side
	I0914 16:45:19.297852   16725 main.go:141] libmachine: Successfully made call to close driver server
	I0914 16:45:19.297858   16725 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 16:45:19.297867   16725 addons.go:475] Verifying addon registry=true in "addons-996992"
	I0914 16:45:19.297564   16725 main.go:141] libmachine: Successfully made call to close driver server
	I0914 16:45:19.297990   16725 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 16:45:19.297592   16725 main.go:141] libmachine: Successfully made call to close driver server
	I0914 16:45:19.298014   16725 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 16:45:19.299712   16725 out.go:177] * Verifying ingress addon...
	I0914 16:45:19.300586   16725 out.go:177] * Verifying registry addon...
	I0914 16:45:19.300595   16725 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-996992 service yakd-dashboard -n yakd-dashboard
	
	I0914 16:45:19.302049   16725 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0914 16:45:19.302931   16725 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0914 16:45:19.344991   16725 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0914 16:45:19.345020   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:45:19.345383   16725 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0914 16:45:19.345406   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
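	The kapi.go:75/86/96 lines above (and the long run of them that follows) are minikube polling pods by label selector until they leave Pending and report Ready. Outside the test harness, roughly the same check can be done with kubectl wait; a sketch, assuming the addons-996992 context from this run and the conventional ingress-nginx controller label:

	    # registry pods carry the kubernetes.io/minikube-addons=registry label in kube-system
	    kubectl --context addons-996992 -n kube-system wait pod \
	      -l kubernetes.io/minikube-addons=registry --for=condition=Ready --timeout=10m
	    # for ingress-nginx, restrict to the controller so completed admission-job pods do not stall the wait
	    kubectl --context addons-996992 -n ingress-nginx wait pod \
	      -l app.kubernetes.io/name=ingress-nginx,app.kubernetes.io/component=controller \
	      --for=condition=Ready --timeout=6m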
	I0914 16:45:19.372208   16725 main.go:141] libmachine: Making call to close driver server
	I0914 16:45:19.372232   16725 main.go:141] libmachine: (addons-996992) Calling .Close
	I0914 16:45:19.372506   16725 main.go:141] libmachine: Successfully made call to close driver server
	I0914 16:45:19.372522   16725 main.go:141] libmachine: Making call to close connection to plugin binary
	W0914 16:45:19.372615   16725 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
	I0914 16:45:19.383702   16725 main.go:141] libmachine: Making call to close driver server
	I0914 16:45:19.383730   16725 main.go:141] libmachine: (addons-996992) Calling .Close
	I0914 16:45:19.384014   16725 main.go:141] libmachine: Successfully made call to close driver server
	I0914 16:45:19.384038   16725 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 16:45:19.655329   16725 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0914 16:45:20.045338   16725 pod_ready.go:103] pod "coredns-7c65d6cfc9-2m4xb" in "kube-system" namespace has status "Ready":"False"
	I0914 16:45:20.050206   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:20.055704   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:45:20.310964   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:45:20.311082   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:20.682921   16725 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (7.020377333s)
	I0914 16:45:20.682968   16725 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.282185443s)
	I0914 16:45:20.682969   16725 main.go:141] libmachine: Making call to close driver server
	I0914 16:45:20.682986   16725 main.go:141] libmachine: (addons-996992) Calling .Close
	I0914 16:45:20.683282   16725 main.go:141] libmachine: Successfully made call to close driver server
	I0914 16:45:20.683301   16725 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 16:45:20.683311   16725 main.go:141] libmachine: Making call to close driver server
	I0914 16:45:20.683320   16725 main.go:141] libmachine: (addons-996992) Calling .Close
	I0914 16:45:20.683581   16725 main.go:141] libmachine: (addons-996992) DBG | Closing plugin on server side
	I0914 16:45:20.683592   16725 main.go:141] libmachine: Successfully made call to close driver server
	I0914 16:45:20.683609   16725 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 16:45:20.683625   16725 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-996992"
	I0914 16:45:20.684836   16725 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0914 16:45:20.685652   16725 out.go:177] * Verifying csi-hostpath-driver addon...
	I0914 16:45:20.687381   16725 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0914 16:45:20.688045   16725 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0914 16:45:20.688683   16725 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0914 16:45:20.688704   16725 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0914 16:45:20.699808   16725 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0914 16:45:20.699830   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:20.760828   16725 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0914 16:45:20.760854   16725 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0914 16:45:20.806360   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:20.808190   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:45:20.876308   16725 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0914 16:45:20.876331   16725 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0914 16:45:20.962823   16725 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0914 16:45:21.194364   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:21.308241   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:45:21.308330   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:21.459476   16725 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.804100826s)
	I0914 16:45:21.459541   16725 main.go:141] libmachine: Making call to close driver server
	I0914 16:45:21.459563   16725 main.go:141] libmachine: (addons-996992) Calling .Close
	I0914 16:45:21.459818   16725 main.go:141] libmachine: Successfully made call to close driver server
	I0914 16:45:21.459856   16725 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 16:45:21.459870   16725 main.go:141] libmachine: Making call to close driver server
	I0914 16:45:21.459878   16725 main.go:141] libmachine: (addons-996992) Calling .Close
	I0914 16:45:21.460217   16725 main.go:141] libmachine: (addons-996992) DBG | Closing plugin on server side
	I0914 16:45:21.460243   16725 main.go:141] libmachine: Successfully made call to close driver server
	I0914 16:45:21.460259   16725 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 16:45:21.692747   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:21.824936   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:45:21.825463   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:22.037036   16725 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.074172157s)
	I0914 16:45:22.037089   16725 main.go:141] libmachine: Making call to close driver server
	I0914 16:45:22.037108   16725 main.go:141] libmachine: (addons-996992) Calling .Close
	I0914 16:45:22.037385   16725 main.go:141] libmachine: (addons-996992) DBG | Closing plugin on server side
	I0914 16:45:22.037437   16725 main.go:141] libmachine: Successfully made call to close driver server
	I0914 16:45:22.037456   16725 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 16:45:22.037470   16725 main.go:141] libmachine: Making call to close driver server
	I0914 16:45:22.037478   16725 main.go:141] libmachine: (addons-996992) Calling .Close
	I0914 16:45:22.037812   16725 main.go:141] libmachine: Successfully made call to close driver server
	I0914 16:45:22.037826   16725 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 16:45:22.039855   16725 addons.go:475] Verifying addon gcp-auth=true in "addons-996992"
	I0914 16:45:22.041190   16725 out.go:177] * Verifying gcp-auth addon...
	I0914 16:45:22.043315   16725 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0914 16:45:22.062131   16725 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0914 16:45:22.062174   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:45:22.206114   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:22.305919   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:22.307902   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:45:22.397413   16725 pod_ready.go:103] pod "coredns-7c65d6cfc9-2m4xb" in "kube-system" namespace has status "Ready":"False"
	I0914 16:45:22.548345   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:45:22.692725   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:22.829322   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:22.829369   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:45:23.047052   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:45:23.193924   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:23.306209   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:45:23.307371   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:23.547918   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:45:23.693915   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:23.806505   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:23.808215   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:45:24.047225   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:45:24.195089   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:24.311883   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:45:24.312000   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:24.547845   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:45:24.693213   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:24.807438   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:45:24.807893   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:24.892150   16725 pod_ready.go:103] pod "coredns-7c65d6cfc9-2m4xb" in "kube-system" namespace has status "Ready":"False"
	I0914 16:45:25.047378   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:45:25.193183   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:25.308297   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:45:25.308656   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:25.547425   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:45:25.695489   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:25.807000   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:25.807151   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:45:26.047297   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:45:26.192551   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:26.306770   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:45:26.307157   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:26.548995   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:45:26.692772   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:26.807385   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:45:26.808205   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:27.052696   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:45:27.195215   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:27.307090   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:27.307252   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:45:27.392113   16725 pod_ready.go:98] pod "coredns-7c65d6cfc9-2m4xb" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-14 16:45:27 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-14 16:45:10 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-14 16:45:10 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-14 16:45:10 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-14 16:45:10 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.39.189 HostIPs:[{IP:192.168.39.189}] PodIP:10.244.0.2 PodIPs:[{IP:10.244.0.2}] StartTime:2024-09-14 16:45:10 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-09-14 16:45:14 +0000 UTC,FinishedAt:2024-09-14 16:45:24 +0000 UTC,ContainerID:cri-o://d8ccb9ffdaa78f4f6b1f3a7ed75959532f0d411be1e2cde7aafffc2ed35e4c0c,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.3 ImageID:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6 ContainerID:cri-o://d8ccb9ffdaa78f4f6b1f3a7ed75959532f0d411be1e2cde7aafffc2ed35e4c0c Started:0xc0029481a0 AllocatedResources:map[] Resources:nil VolumeMounts:[{Name:config-volume MountPath:/etc/coredns ReadOnly:true RecursiveReadOnly:0xc001d01430} {Name:kube-api-access-gv6ld MountPath:/var/run/secrets/kubernetes.io/serviceaccount ReadOnly:true RecursiveReadOnly:0xc001d01440}] User:nil AllocatedResourcesStatus:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0914 16:45:27.392141   16725 pod_ready.go:82] duration metric: took 16.007223581s for pod "coredns-7c65d6cfc9-2m4xb" in "kube-system" namespace to be "Ready" ...
	E0914 16:45:27.392157   16725 pod_ready.go:67] WaitExtra: waitPodCondition: pod "coredns-7c65d6cfc9-2m4xb" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-14 16:45:27 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-14 16:45:10 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-14 16:45:10 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-14 16:45:10 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-14 16:45:10 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.39.189 HostIPs:[{IP:192.168.39.189}] PodIP:10.244.0.2 PodIPs:[{IP:10.244.0.2}] StartTime:2024-09-14 16:45:10 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-09-14 16:45:14 +0000 UTC,FinishedAt:2024-09-14 16:45:24 +0000 UTC,ContainerID:cri-o://d8ccb9ffdaa78f4f6b1f3a7ed75959532f0d411be1e2cde7aafffc2ed35e4c0c,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.3 ImageID:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6 ContainerID:cri-o://d8ccb9ffdaa78f4f6b1f3a7ed75959532f0d411be1e2cde7aafffc2ed35e4c0c Started:0xc0029481a0 AllocatedResources:map[] Resources:nil VolumeMounts:[{Name:config-volume MountPath:/etc/coredns ReadOnly:true RecursiveReadOnly:0xc001d01430} {Name:kube-api-access-gv6ld MountPath:/var/run/secrets/kubernetes.io/serviceaccount ReadOnly:true RecursiveReadOnly:0xc001d01440}] User:nil AllocatedResourcesStatus:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0914 16:45:27.392172   16725 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-9p6z9" in "kube-system" namespace to be "Ready" ...
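	The switch from coredns-7c65d6cfc9-2m4xb to coredns-7c65d6cfc9-9p6z9 follows from the earlier rescale of the coredns deployment to 1 replica (16:45:14.263792): the surplus pod terminates with phase Succeeded, pod_ready skips it, and the wait restarts against the surviving replica. The same picture can be checked by hand, assuming the same context and the standard CoreDNS label:

	    # list the coredns pods and their phases; only one should remain and eventually report Ready
	    kubectl --context addons-996992 -n kube-system get pods -l k8s-app=kube-dns -o wide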
	I0914 16:45:27.547236   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:45:27.692797   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:27.805927   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:27.808529   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:45:28.046967   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:45:28.193365   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:28.306453   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:28.306996   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:45:28.547515   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:45:28.692136   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:28.805564   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:28.808148   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:45:29.047966   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:45:29.192746   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:29.306293   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:45:29.307762   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:29.397654   16725 pod_ready.go:103] pod "coredns-7c65d6cfc9-9p6z9" in "kube-system" namespace has status "Ready":"False"
	I0914 16:45:29.546652   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:45:29.692992   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:29.806654   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:29.807372   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:45:30.048650   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:45:30.200286   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:30.307076   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:30.307351   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:45:30.547222   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:45:30.692129   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:30.806326   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:30.806696   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:45:31.047541   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:45:31.193463   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:31.306316   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:45:31.306957   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:31.400132   16725 pod_ready.go:103] pod "coredns-7c65d6cfc9-9p6z9" in "kube-system" namespace has status "Ready":"False"
	I0914 16:45:31.547554   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:45:31.691976   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:31.806039   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:31.807935   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:45:32.046311   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:45:32.193223   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:32.305895   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:32.306116   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:45:32.547547   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:45:32.693274   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:32.806864   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:45:32.807025   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:33.046675   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:45:33.192788   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:33.307118   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:33.307576   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:45:33.547264   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:45:33.691956   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:33.805950   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:33.807272   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:45:33.898447   16725 pod_ready.go:103] pod "coredns-7c65d6cfc9-9p6z9" in "kube-system" namespace has status "Ready":"False"
	I0914 16:45:34.046538   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:45:34.193111   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:34.306594   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:45:34.306780   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:34.547534   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:45:34.693573   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:34.806532   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:34.807796   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:45:35.049173   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:45:35.193341   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:35.306957   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:45:35.307826   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:35.547124   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:45:35.693884   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:35.813240   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:45:35.813472   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:35.898771   16725 pod_ready.go:103] pod "coredns-7c65d6cfc9-9p6z9" in "kube-system" namespace has status "Ready":"False"
	I0914 16:45:36.046736   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:45:36.192647   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:36.307028   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:45:36.307153   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:36.550055   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:45:36.692268   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:36.808196   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:45:36.808552   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:37.047345   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:45:37.192191   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:37.306427   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:45:37.306615   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:37.546905   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:45:37.693413   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:37.806415   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:45:37.806625   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:37.906344   16725 pod_ready.go:103] pod "coredns-7c65d6cfc9-9p6z9" in "kube-system" namespace has status "Ready":"False"
	I0914 16:45:38.047348   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:45:38.192226   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:38.307259   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:38.308416   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:45:38.549806   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:45:38.693516   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:38.806779   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:38.807117   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:45:39.047166   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:45:39.193398   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:39.305796   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:39.306965   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:45:39.546569   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:45:39.692192   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:39.807726   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:39.809337   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:45:40.047029   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:45:40.198177   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:40.306487   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:45:40.306759   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:40.398546   16725 pod_ready.go:103] pod "coredns-7c65d6cfc9-9p6z9" in "kube-system" namespace has status "Ready":"False"
	I0914 16:45:40.546426   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:45:40.692436   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:40.807118   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:40.808125   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:45:41.048639   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:45:41.193023   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:41.306385   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:45:41.307022   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:41.546832   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:45:41.692299   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:41.806619   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:45:41.807745   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:42.051127   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:45:42.193235   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:42.306207   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:42.307023   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:45:42.547148   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:45:42.692114   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:42.807237   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:42.807551   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:45:42.898978   16725 pod_ready.go:103] pod "coredns-7c65d6cfc9-9p6z9" in "kube-system" namespace has status "Ready":"False"
	I0914 16:45:43.047443   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:45:43.192717   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:43.306429   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:43.307536   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:45:43.547361   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:45:43.692472   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:43.806328   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:43.806544   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:45:44.047256   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:45:44.193079   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:44.307376   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:45:44.307539   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:44.546600   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:45:44.947832   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:45:44.948674   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:44.949499   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:44.954329   16725 pod_ready.go:103] pod "coredns-7c65d6cfc9-9p6z9" in "kube-system" namespace has status "Ready":"False"
	I0914 16:45:45.047207   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:45:45.192019   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:45.307059   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:45.307388   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:45:45.546442   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:45:45.693013   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:45.807362   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:45:45.808026   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:46.049098   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:45:46.193102   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:46.307108   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:45:46.307421   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:46.548460   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:45:46.692457   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:46.807661   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:45:46.807813   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:47.048241   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:45:47.192214   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:47.306248   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:47.306671   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:45:47.398101   16725 pod_ready.go:103] pod "coredns-7c65d6cfc9-9p6z9" in "kube-system" namespace has status "Ready":"False"
	I0914 16:45:47.547639   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:45:47.693105   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:47.806345   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:45:47.806838   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:47.898498   16725 pod_ready.go:93] pod "coredns-7c65d6cfc9-9p6z9" in "kube-system" namespace has status "Ready":"True"
	I0914 16:45:47.898523   16725 pod_ready.go:82] duration metric: took 20.506341334s for pod "coredns-7c65d6cfc9-9p6z9" in "kube-system" namespace to be "Ready" ...
	I0914 16:45:47.898537   16725 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-996992" in "kube-system" namespace to be "Ready" ...
	I0914 16:45:47.903604   16725 pod_ready.go:93] pod "etcd-addons-996992" in "kube-system" namespace has status "Ready":"True"
	I0914 16:45:47.903629   16725 pod_ready.go:82] duration metric: took 5.083745ms for pod "etcd-addons-996992" in "kube-system" namespace to be "Ready" ...
	I0914 16:45:47.903640   16725 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-996992" in "kube-system" namespace to be "Ready" ...
	I0914 16:45:47.908397   16725 pod_ready.go:93] pod "kube-apiserver-addons-996992" in "kube-system" namespace has status "Ready":"True"
	I0914 16:45:47.908426   16725 pod_ready.go:82] duration metric: took 4.777526ms for pod "kube-apiserver-addons-996992" in "kube-system" namespace to be "Ready" ...
	I0914 16:45:47.908439   16725 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-996992" in "kube-system" namespace to be "Ready" ...
	I0914 16:45:47.918027   16725 pod_ready.go:93] pod "kube-controller-manager-addons-996992" in "kube-system" namespace has status "Ready":"True"
	I0914 16:45:47.918048   16725 pod_ready.go:82] duration metric: took 9.601319ms for pod "kube-controller-manager-addons-996992" in "kube-system" namespace to be "Ready" ...
	I0914 16:45:47.918056   16725 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-ll2cd" in "kube-system" namespace to be "Ready" ...
	I0914 16:45:47.923629   16725 pod_ready.go:93] pod "kube-proxy-ll2cd" in "kube-system" namespace has status "Ready":"True"
	I0914 16:45:47.923659   16725 pod_ready.go:82] duration metric: took 5.594635ms for pod "kube-proxy-ll2cd" in "kube-system" namespace to be "Ready" ...
	I0914 16:45:47.923671   16725 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-996992" in "kube-system" namespace to be "Ready" ...
	I0914 16:45:48.047579   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:45:48.193569   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:48.296378   16725 pod_ready.go:93] pod "kube-scheduler-addons-996992" in "kube-system" namespace has status "Ready":"True"
	I0914 16:45:48.296405   16725 pod_ready.go:82] duration metric: took 372.727475ms for pod "kube-scheduler-addons-996992" in "kube-system" namespace to be "Ready" ...
	I0914 16:45:48.296414   16725 pod_ready.go:39] duration metric: took 36.917662966s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 16:45:48.296429   16725 api_server.go:52] waiting for apiserver process to appear ...
	I0914 16:45:48.296474   16725 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 16:45:48.307319   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:45:48.308769   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:48.333952   16725 api_server.go:72] duration metric: took 37.711200096s to wait for apiserver process to appear ...
	I0914 16:45:48.333977   16725 api_server.go:88] waiting for apiserver healthz status ...
	I0914 16:45:48.333995   16725 api_server.go:253] Checking apiserver healthz at https://192.168.39.189:8443/healthz ...
	I0914 16:45:48.338947   16725 api_server.go:279] https://192.168.39.189:8443/healthz returned 200:
	ok
	I0914 16:45:48.340137   16725 api_server.go:141] control plane version: v1.31.1
	I0914 16:45:48.340167   16725 api_server.go:131] duration metric: took 6.183106ms to wait for apiserver health ...
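The api_server.go lines above amount to a plain HTTPS GET against the apiserver's /healthz endpoint, with a 200 response whose body is "ok" treated as healthy. A minimal sketch of that kind of probe is below; the InsecureSkipVerify shortcut and the 5s timeout are illustrative assumptions made to keep the sketch self-contained, whereas minikube's own check authenticates with the cluster's client certificates.

// healthz_sketch.go - probe an apiserver /healthz endpoint and print the result.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// Skipping certificate verification keeps the sketch self-contained;
	// a real check would load the cluster CA and client certificates instead.
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://192.168.39.189:8443/healthz")
	if err != nil {
		fmt.Println("healthz check failed:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	// The log above records exactly this outcome: status 200 with body "ok".
	fmt.Printf("status=%d body=%q\n", resp.StatusCode, string(body))
}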
	I0914 16:45:48.340177   16725 system_pods.go:43] waiting for kube-system pods to appear ...
	I0914 16:45:48.504689   16725 system_pods.go:59] 18 kube-system pods found
	I0914 16:45:48.504742   16725 system_pods.go:61] "coredns-7c65d6cfc9-9p6z9" [8b60a487-876e-49a1-9a02-ff29269e6cd9] Running
	I0914 16:45:48.504756   16725 system_pods.go:61] "csi-hostpath-attacher-0" [fc163c87-b3c1-44fb-b23a-daf71f2476fd] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0914 16:45:48.504781   16725 system_pods.go:61] "csi-hostpath-resizer-0" [cb3dc269-4b68-41cc-8dac-f4e4cac02923] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0914 16:45:48.504800   16725 system_pods.go:61] "csi-hostpathplugin-j8fzx" [4c687703-e40a-48df-9dbf-ef6c5b71f2c9] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0914 16:45:48.504806   16725 system_pods.go:61] "etcd-addons-996992" [51dddf60-7bb8-4d07-b593-4841d49d04c6] Running
	I0914 16:45:48.504812   16725 system_pods.go:61] "kube-apiserver-addons-996992" [df7a9746-e613-42b3-99ae-376c32e5c9c5] Running
	I0914 16:45:48.504818   16725 system_pods.go:61] "kube-controller-manager-addons-996992" [d0f2e301-3365-4b32-8aa6-583d2794b9d1] Running
	I0914 16:45:48.504829   16725 system_pods.go:61] "kube-ingress-dns-minikube" [9bd2d610-b1b0-4b09-a8f9-29d24b8f3e18] Running
	I0914 16:45:48.504835   16725 system_pods.go:61] "kube-proxy-ll2cd" [77c4fbce-cceb-4918-871f-5d17932941f1] Running
	I0914 16:45:48.504840   16725 system_pods.go:61] "kube-scheduler-addons-996992" [e9922ffd-3c61-47c3-a0d0-2063f8e8484d] Running
	I0914 16:45:48.504848   16725 system_pods.go:61] "metrics-server-84c5f94fbc-zpthv" [5adc8bfb-2fb3-4e13-8b04-98e98afe35a9] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 16:45:48.504854   16725 system_pods.go:61] "nvidia-device-plugin-daemonset-v9pgt" [3f1896cc-99c7-4c98-8b64-9e40965c553b] Running
	I0914 16:45:48.504866   16725 system_pods.go:61] "registry-66c9cd494c-jdr7n" [1fa84874-319a-4e4a-9126-b618e477b31e] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0914 16:45:48.504876   16725 system_pods.go:61] "registry-proxy-b9ffc" [44b082a1-dd9e-4251-a141-6f0578d54a17] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0914 16:45:48.504890   16725 system_pods.go:61] "snapshot-controller-56fcc65765-cc2vz" [4663132f-a286-4aed-8845-8c2fb27ac546] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0914 16:45:48.504900   16725 system_pods.go:61] "snapshot-controller-56fcc65765-l6fxq" [719471e2-a6ad-4742-92a5-2ca1874e373c] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0914 16:45:48.504906   16725 system_pods.go:61] "storage-provisioner" [042983c1-0076-46d0-8022-ff8afde6de61] Running
	I0914 16:45:48.504920   16725 system_pods.go:61] "tiller-deploy-b48cc5f79-z2hbn" [62ae1fe8-58f5-422e-b2b8-abcdaf2e7693] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0914 16:45:48.504928   16725 system_pods.go:74] duration metric: took 164.743813ms to wait for pod list to return data ...
	I0914 16:45:48.504942   16725 default_sa.go:34] waiting for default service account to be created ...
	I0914 16:45:48.546545   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:45:48.692466   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:48.696319   16725 default_sa.go:45] found service account: "default"
	I0914 16:45:48.696367   16725 default_sa.go:55] duration metric: took 191.418164ms for default service account to be created ...
	I0914 16:45:48.696376   16725 system_pods.go:116] waiting for k8s-apps to be running ...
	I0914 16:45:48.808682   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:45:48.808951   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:48.920544   16725 system_pods.go:86] 18 kube-system pods found
	I0914 16:45:48.920575   16725 system_pods.go:89] "coredns-7c65d6cfc9-9p6z9" [8b60a487-876e-49a1-9a02-ff29269e6cd9] Running
	I0914 16:45:48.920585   16725 system_pods.go:89] "csi-hostpath-attacher-0" [fc163c87-b3c1-44fb-b23a-daf71f2476fd] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0914 16:45:48.920592   16725 system_pods.go:89] "csi-hostpath-resizer-0" [cb3dc269-4b68-41cc-8dac-f4e4cac02923] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0914 16:45:48.920600   16725 system_pods.go:89] "csi-hostpathplugin-j8fzx" [4c687703-e40a-48df-9dbf-ef6c5b71f2c9] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0914 16:45:48.920604   16725 system_pods.go:89] "etcd-addons-996992" [51dddf60-7bb8-4d07-b593-4841d49d04c6] Running
	I0914 16:45:48.920608   16725 system_pods.go:89] "kube-apiserver-addons-996992" [df7a9746-e613-42b3-99ae-376c32e5c9c5] Running
	I0914 16:45:48.920612   16725 system_pods.go:89] "kube-controller-manager-addons-996992" [d0f2e301-3365-4b32-8aa6-583d2794b9d1] Running
	I0914 16:45:48.920616   16725 system_pods.go:89] "kube-ingress-dns-minikube" [9bd2d610-b1b0-4b09-a8f9-29d24b8f3e18] Running
	I0914 16:45:48.920619   16725 system_pods.go:89] "kube-proxy-ll2cd" [77c4fbce-cceb-4918-871f-5d17932941f1] Running
	I0914 16:45:48.920623   16725 system_pods.go:89] "kube-scheduler-addons-996992" [e9922ffd-3c61-47c3-a0d0-2063f8e8484d] Running
	I0914 16:45:48.920629   16725 system_pods.go:89] "metrics-server-84c5f94fbc-zpthv" [5adc8bfb-2fb3-4e13-8b04-98e98afe35a9] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 16:45:48.920633   16725 system_pods.go:89] "nvidia-device-plugin-daemonset-v9pgt" [3f1896cc-99c7-4c98-8b64-9e40965c553b] Running
	I0914 16:45:48.920640   16725 system_pods.go:89] "registry-66c9cd494c-jdr7n" [1fa84874-319a-4e4a-9126-b618e477b31e] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0914 16:45:48.920645   16725 system_pods.go:89] "registry-proxy-b9ffc" [44b082a1-dd9e-4251-a141-6f0578d54a17] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0914 16:45:48.920652   16725 system_pods.go:89] "snapshot-controller-56fcc65765-cc2vz" [4663132f-a286-4aed-8845-8c2fb27ac546] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0914 16:45:48.920660   16725 system_pods.go:89] "snapshot-controller-56fcc65765-l6fxq" [719471e2-a6ad-4742-92a5-2ca1874e373c] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0914 16:45:48.920664   16725 system_pods.go:89] "storage-provisioner" [042983c1-0076-46d0-8022-ff8afde6de61] Running
	I0914 16:45:48.920669   16725 system_pods.go:89] "tiller-deploy-b48cc5f79-z2hbn" [62ae1fe8-58f5-422e-b2b8-abcdaf2e7693] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0914 16:45:48.920677   16725 system_pods.go:126] duration metric: took 224.295642ms to wait for k8s-apps to be running ...
	I0914 16:45:48.920684   16725 system_svc.go:44] waiting for kubelet service to be running ....
	I0914 16:45:48.920724   16725 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 16:45:48.937847   16725 system_svc.go:56] duration metric: took 17.154195ms WaitForService to wait for kubelet
	I0914 16:45:48.937878   16725 kubeadm.go:582] duration metric: took 38.315130323s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
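The WaitForService step just above runs `sudo systemctl is-active --quiet service kubelet` over SSH and treats exit status 0 as "running". A local approximation of that check is sketched below; running systemctl directly with os/exec instead of through minikube's SSH runner, and passing only the unit name, are simplifying assumptions for illustration.

// kubelet_active_sketch.go - check whether the kubelet systemd unit is active.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// systemctl exits 0 when the unit is active; --quiet suppresses its output.
	cmd := exec.Command("systemctl", "is-active", "--quiet", "kubelet")
	if err := cmd.Run(); err != nil {
		fmt.Println("kubelet is not active:", err)
		return
	}
	fmt.Println("kubelet is active")
}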
	I0914 16:45:48.937899   16725 node_conditions.go:102] verifying NodePressure condition ...
	I0914 16:45:49.048228   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:45:49.098325   16725 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0914 16:45:49.098385   16725 node_conditions.go:123] node cpu capacity is 2
	I0914 16:45:49.098398   16725 node_conditions.go:105] duration metric: took 160.494508ms to run NodePressure ...
	I0914 16:45:49.098410   16725 start.go:241] waiting for startup goroutines ...
	I0914 16:45:49.192082   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:49.306218   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:49.307323   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:45:49.547409   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:45:49.692860   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:49.807027   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:49.813086   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:45:50.047555   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:45:50.192775   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:50.306264   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:45:50.306398   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:50.547544   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:45:50.692765   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:50.806990   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:45:50.807136   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:51.047419   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:45:51.192036   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:51.306859   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:45:51.307240   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:51.546636   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:45:51.692296   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:51.807294   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:51.807691   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:45:52.046611   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:45:52.193349   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:52.306306   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:52.307173   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:45:52.547079   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:45:52.691900   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:52.806428   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:52.807573   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:45:53.046699   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:45:53.192419   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:53.306755   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:45:53.307712   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:53.552730   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:45:53.693022   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:53.805998   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:53.807006   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:45:54.047063   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:45:54.195701   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:54.308158   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:54.308170   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:45:54.547515   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:45:54.693931   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:54.806765   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:45:54.807175   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:55.047742   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:45:55.194005   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:55.306209   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:55.307788   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:45:55.546984   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:45:55.693279   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:55.807163   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:45:55.807663   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:56.052639   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:45:56.193934   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:56.317185   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:45:56.322650   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:56.547946   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:45:56.692907   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:56.812014   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:45:56.812358   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:57.047127   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:45:57.193740   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:57.307143   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:45:57.307407   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:57.547562   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:45:57.693212   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:57.806535   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:45:57.806710   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:58.046520   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:45:58.197798   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:58.307070   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:45:58.307765   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:58.547433   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:45:58.692299   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:58.806831   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:45:58.807481   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:59.046934   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:45:59.193174   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:59.307443   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:45:59.307669   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:59.548010   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:45:59.693092   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:59.807151   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:45:59.808268   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:46:00.047359   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:46:00.478614   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:46:00.479137   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:46:00.479508   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:46:00.547104   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:46:00.692282   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:46:00.806824   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:46:00.807536   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:46:01.047697   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:46:01.193726   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:46:01.307966   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:46:01.308014   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:46:01.547201   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:46:01.695313   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:46:01.806792   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:46:01.807383   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:46:02.047607   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:46:02.192475   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:46:02.306347   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:46:02.306833   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:46:02.547377   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:46:02.692730   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:46:02.807047   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:46:02.807463   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:46:03.047309   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:46:03.195015   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:46:03.307647   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:46:03.307817   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:46:03.547787   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:46:03.692947   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:46:03.807157   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:46:03.807344   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:46:04.048006   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:46:04.192987   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:46:04.318549   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:46:04.318994   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:46:04.547383   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:46:04.693036   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:46:04.805898   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:46:04.807705   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:46:05.047059   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:46:05.193631   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:46:05.306513   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:46:05.306799   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:46:05.546629   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:46:05.692830   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:46:05.806493   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:46:05.806880   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:46:06.046580   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:46:06.192054   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:46:06.306131   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:46:06.307575   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:46:06.547492   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:46:06.692615   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:46:06.806368   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:46:06.806725   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:46:07.046496   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:46:07.192627   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:46:07.311557   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:46:07.311733   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:46:07.547642   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:46:07.693080   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:46:07.806770   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:46:07.807306   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:46:08.047553   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:46:08.193062   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:46:08.306216   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:46:08.306825   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:46:08.547432   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:46:08.693198   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:46:08.806659   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:46:08.807567   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:46:09.046856   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:46:09.193443   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:46:09.306323   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:46:09.308192   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:46:09.547245   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:46:09.692407   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:46:09.807106   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:46:09.809300   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:46:10.050073   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:46:10.192821   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:46:10.307140   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:46:10.307386   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:46:10.547008   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:46:10.692575   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:46:10.806819   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:46:10.808404   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:46:11.047532   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:46:11.194303   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:46:11.306378   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:46:11.306880   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:46:11.547761   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:46:11.692624   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:46:11.811199   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:46:11.811447   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:46:12.047345   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:46:12.193374   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:46:12.306143   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:46:12.308049   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:46:12.546681   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:46:12.693001   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:46:12.806422   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:46:12.806748   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:46:13.046519   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:46:13.632563   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:46:13.632569   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:46:13.633214   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:46:13.633245   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:46:13.692680   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:46:13.806502   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:46:13.808264   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:46:14.047109   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:46:14.193313   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:46:14.305768   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:46:14.307495   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:46:14.547099   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:46:14.693347   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:46:14.806645   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:46:14.807536   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:46:15.046459   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:46:15.192401   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:46:15.307521   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:46:15.307739   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:46:15.548447   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:46:15.693811   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:46:15.805918   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:46:15.806859   16725 kapi.go:107] duration metric: took 56.503923107s to wait for kubernetes.io/minikube-addons=registry ...
	I0914 16:46:16.046482   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:46:16.192234   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:46:16.306338   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:46:16.547377   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:46:17.214224   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:46:17.214920   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:46:17.218540   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:46:17.221430   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:46:17.315378   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:46:17.551452   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:46:17.694597   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:46:17.806145   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:46:18.046558   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:46:18.192092   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:46:18.305661   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:46:18.547539   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:46:18.692638   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:46:18.806657   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:46:19.053521   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:46:19.193880   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:46:19.311277   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:46:19.546622   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:46:19.693339   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:46:19.806264   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:46:20.046500   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:46:20.192998   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:46:20.306067   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:46:20.547197   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:46:20.692597   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:46:20.807811   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:46:21.047801   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:46:21.192778   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:46:21.306452   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:46:21.547311   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:46:21.693049   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:46:21.827840   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:46:22.047273   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:46:22.192310   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:46:22.311209   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:46:22.838565   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:46:22.838932   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:46:22.839032   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:46:23.047177   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:46:23.193709   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:46:23.306794   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:46:23.547596   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:46:23.692382   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:46:23.807214   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:46:24.046485   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:46:24.192341   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:46:24.307183   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:46:24.546672   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:46:24.693935   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:46:24.810550   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:46:25.050252   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:46:25.195092   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:46:25.307161   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:46:25.549697   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:46:25.697541   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:46:25.806080   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:46:26.046708   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:46:26.192705   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:46:26.306674   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:46:26.547507   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:46:26.693182   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:46:26.806532   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:46:27.049050   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:46:27.196252   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:46:27.308707   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:46:27.547747   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:46:27.692965   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:46:27.807158   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:46:28.048325   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:46:28.193153   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:46:28.306290   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:46:28.546673   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:46:28.692592   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:46:28.806423   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:46:29.047119   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:46:29.193334   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:46:29.306364   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:46:29.547235   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:46:29.697436   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:46:29.807863   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:46:30.055007   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:46:30.193621   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:46:30.306752   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:46:30.547587   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:46:30.693117   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:46:30.806296   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:46:31.046378   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:46:31.193611   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:46:31.306059   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:46:31.546599   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:46:31.692393   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:46:31.806618   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:46:32.047197   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:46:32.199989   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:46:32.658958   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:46:32.659665   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:46:32.693594   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:46:32.813854   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:46:33.046793   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:46:33.194323   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:46:33.306864   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:46:33.547559   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:46:33.693855   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:46:33.808730   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:46:34.048970   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:46:34.194651   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:46:34.307090   16725 kapi.go:107] duration metric: took 1m15.005037262s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0914 16:46:34.546875   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:46:34.694388   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:46:35.083057   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:46:35.193569   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:46:35.549326   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:46:35.692860   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:46:36.047852   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:46:36.192896   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:46:36.547520   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:46:36.693004   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:46:37.047621   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:46:37.192802   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:46:37.547115   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:46:37.707625   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:46:38.047500   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:46:38.192485   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:46:38.547359   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:46:38.692532   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:46:39.048815   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:46:39.192850   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:46:39.547858   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:46:39.693239   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:46:40.048117   16725 kapi.go:107] duration metric: took 1m18.00480647s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0914 16:46:40.049808   16725 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-996992 cluster.
	I0914 16:46:40.050997   16725 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0914 16:46:40.052104   16725 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0914 16:46:40.193221   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:46:40.693480   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:46:41.192757   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:46:41.707864   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:46:42.193577   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:46:42.693176   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:46:43.192560   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:46:44.006023   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:46:44.193094   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:46:44.693734   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:46:45.193109   16725 kapi.go:107] duration metric: took 1m24.505060721s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0914 16:46:45.194961   16725 out.go:177] * Enabled addons: cloud-spanner, ingress-dns, storage-provisioner, nvidia-device-plugin, metrics-server, helm-tiller, inspektor-gadget, yakd, storage-provisioner-rancher, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I0914 16:46:45.196167   16725 addons.go:510] duration metric: took 1m34.573399474s for enable addons: enabled=[cloud-spanner ingress-dns storage-provisioner nvidia-device-plugin metrics-server helm-tiller inspektor-gadget yakd storage-provisioner-rancher volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I0914 16:46:45.196214   16725 start.go:246] waiting for cluster config update ...
	I0914 16:46:45.196250   16725 start.go:255] writing updated cluster config ...
	I0914 16:46:45.196519   16725 ssh_runner.go:195] Run: rm -f paused
	I0914 16:46:45.248928   16725 start.go:600] kubectl: 1.31.0, cluster: 1.31.1 (minor skew: 0)
	I0914 16:46:45.250609   16725 out.go:177] * Done! kubectl is now configured to use "addons-996992" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Sep 14 16:58:18 addons-996992 crio[669]: time="2024-09-14 16:58:18.686459331Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726333098686432805,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:579802,},InodesUsed:&UInt64Value{Value:200,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9b3eb465-bc8a-4d2a-b43d-ce4c4556e23c name=/runtime.v1.ImageService/ImageFsInfo
	Sep 14 16:58:18 addons-996992 crio[669]: time="2024-09-14 16:58:18.686931698Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=33c7b474-bb82-4cd9-bc2e-710b56b32df2 name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 16:58:18 addons-996992 crio[669]: time="2024-09-14 16:58:18.686990644Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=33c7b474-bb82-4cd9-bc2e-710b56b32df2 name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 16:58:18 addons-996992 crio[669]: time="2024-09-14 16:58:18.687537779Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:70d1675e8bf6137dbed4c2c8ba1ede0a600a3f0ce9709b8d818063a813154f29,PodSandboxId:5c1407129f05aa2651ba95000da59da45faa9159fc388ccf76ab45c67c52a2fb,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1726333091273345411,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-lf7nc,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 68e73e62-5b8c-43a1-b47c-fe3aac3fc269,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a2c842e27b9de8f75edb92b055573943c47a606fe40f848399f1053c007349d9,PodSandboxId:8164a72938eecad1dd7f3da09097d7f0e2eb28f6662a0578bf01cfed126c0a5d,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:074604130336e3c431b7c6b5b551b5a6ae5b67db13b3d223c6db638f85c7ff56,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7b4f26a7d93f4f1f276c51adb03ef0df54a82de89f254a9aec5c18bf0e45ee9,State:CONTAINER_RUNNING,CreatedAt:1726332950251160999,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e9aa988e-e59a-44dd-84f7-753b4db11866,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b1fc29dced5ee78fdeef5ed32bd7516882b6f3d25bf63964b7313e5b1a180188,PodSandboxId:5ac3aa6b762eab56fc54827e80aa928dfdebefbe7ea497526be6db2fe05f6299,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1726332399300943119,Labels:map[string]string{io.kubernetes.container.name: gcp-auth,io.kubernetes.pod.name: gcp-auth-89d5ffd79-smf6s,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.
pod.uid: d9222c8a-bc84-4c20-b546-3034abbe136c,},Annotations:map[string]string{io.kubernetes.container.hash: 91308b2f,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:22ac3510c9f6c1dea06ada5fbab155f33ab2f7e362c024a53f9eb549848d590d,PodSandboxId:8895260d1ecf95cc546e0a3a5fe468cc59c47341f20b34592818f6875324ebe4,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1726332379159744959,Labels:map[string]string{io.kubernetes.container.name: patch,io.
kubernetes.pod.name: ingress-nginx-admission-patch-8zsm9,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 6f9dfddf-322e-4827-aabc-f4ce5421023d,},Annotations:map[string]string{io.kubernetes.container.hash: eb970c83,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e14ff6cfbdcb2b0b82ee5e2e8f2ee08b0aa163afbc9804a1b4c2d4e6a1fb1901,PodSandboxId:d3b092d9b1ce5c50aa70440b30436322c5cd6f32a9540386f94dfb977e9c1f68,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1726332378999567483,Labels:map[string]string{io.
kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-5rv5k,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 60f1a90d-ea04-408a-a27a-bd202e3b8875,},Annotations:map[string]string{io.kubernetes.container.hash: c5cfc092,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e8c78f14b17e7a8079ad435e0d378a510d0f1a1d0e67507457d582229b0dc7f8,PodSandboxId:1aa3f3cb51004f04eaa4495d7b5529a80931a47297b10c4983a9dd325aac62e6,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:172633
2355816734887,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-zpthv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5adc8bfb-2fb3-4e13-8b04-98e98afe35a9,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7f90cf12b43136552be3a9facb2fe5b7e9005a06d15f676d8f77c8fc53ddcb5a,PodSandboxId:5527e3f395706db407371bb2608d8833aefcda9f876b3d2d02c803c0c8b8952d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628d
b3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726332317456742589,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 042983c1-0076-46d0-8022-ff8afde6de61,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b39fe7c77bdab915cba2003e37d8a932f93c89d2816b66c662cd2b87f189195f,PodSandboxId:1c0a11c1d7f7c5b63da3f4f21f46979730ced4066e78410da9748fd11ab03ced,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724
c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726332314086297257,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-9p6z9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b60a487-876e-49a1-9a02-ff29269e6cd9,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7636b49f23d35f5a9fc075a76348fc31ad18da955d4e73fbad3d24534ef9282a,PodSandboxId:816f86f6b29aba5b03d04c7c4802bafefe963ba526eeadfef60a9836d272fa1f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image
:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726332313248815620,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ll2cd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 77c4fbce-cceb-4918-871f-5d17932941f1,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e180103456d123610ca2b41dad9814f3b54f68e8eaaea458cbea9621834f9f2,PodSandboxId:25abc346c251645814f2dd057edf9854dd7da6fc11dfd725d187a5f4ca0cae6d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1
001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726332300335194420,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-996992,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 72d75e1235958269ae116b8dd976eed9,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:62ccf13035320215b184582540baa2f498de301a8e3c9d4e3ad1b2a3595c2c8d,PodSandboxId:ce41d60ed0525314b5cc45be11a88a4496c36c13e3962371b7f675683952dd36,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e
4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726332300342876293,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-996992,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b56682c56107b7e0cb4de8d53e660eb,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:244c994b666b95b76ae5dd25d00b91d997db1974a4a83407b48f9d78d71cf309,PodSandboxId:15fa01d2627fb717b0a294d421654f46fec23ec17c8a8fcaaf18285b094f7812,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792c
bf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726332300313273026,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-996992,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82e0846c71461dab7faa1185c43b9171,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6da48572a3f2741574a1da7660f53f17bb817d3e49e25fd9f41ecf486aa65d5,PodSandboxId:476c6d893727453e5ceaf6f071c23932b5a6bb6e8630053bd70d7c0e05079db0,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[st
ring]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726332300190893967,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-996992,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2361d2ae563c8cc1b1f997c60c5996b9,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=33c7b474-bb82-4cd9-bc2e-710b56b32df2 name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 16:58:18 addons-996992 crio[669]: time="2024-09-14 16:58:18.726330338Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a2a68583-a1f5-444c-b3e3-12b1e1bd813a name=/runtime.v1.RuntimeService/Version
	Sep 14 16:58:18 addons-996992 crio[669]: time="2024-09-14 16:58:18.726411497Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a2a68583-a1f5-444c-b3e3-12b1e1bd813a name=/runtime.v1.RuntimeService/Version
	Sep 14 16:58:18 addons-996992 crio[669]: time="2024-09-14 16:58:18.727318912Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=27124a16-f34e-461e-9929-565086b6992e name=/runtime.v1.ImageService/ImageFsInfo
	Sep 14 16:58:18 addons-996992 crio[669]: time="2024-09-14 16:58:18.728890377Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726333098728864003,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:579802,},InodesUsed:&UInt64Value{Value:200,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=27124a16-f34e-461e-9929-565086b6992e name=/runtime.v1.ImageService/ImageFsInfo
	Sep 14 16:58:18 addons-996992 crio[669]: time="2024-09-14 16:58:18.729673607Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1ea436df-651d-4844-b8e8-44f9dad45e9c name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 16:58:18 addons-996992 crio[669]: time="2024-09-14 16:58:18.729741932Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1ea436df-651d-4844-b8e8-44f9dad45e9c name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 16:58:18 addons-996992 crio[669]: time="2024-09-14 16:58:18.730208822Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:70d1675e8bf6137dbed4c2c8ba1ede0a600a3f0ce9709b8d818063a813154f29,PodSandboxId:5c1407129f05aa2651ba95000da59da45faa9159fc388ccf76ab45c67c52a2fb,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1726333091273345411,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-lf7nc,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 68e73e62-5b8c-43a1-b47c-fe3aac3fc269,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a2c842e27b9de8f75edb92b055573943c47a606fe40f848399f1053c007349d9,PodSandboxId:8164a72938eecad1dd7f3da09097d7f0e2eb28f6662a0578bf01cfed126c0a5d,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:074604130336e3c431b7c6b5b551b5a6ae5b67db13b3d223c6db638f85c7ff56,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7b4f26a7d93f4f1f276c51adb03ef0df54a82de89f254a9aec5c18bf0e45ee9,State:CONTAINER_RUNNING,CreatedAt:1726332950251160999,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e9aa988e-e59a-44dd-84f7-753b4db11866,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b1fc29dced5ee78fdeef5ed32bd7516882b6f3d25bf63964b7313e5b1a180188,PodSandboxId:5ac3aa6b762eab56fc54827e80aa928dfdebefbe7ea497526be6db2fe05f6299,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1726332399300943119,Labels:map[string]string{io.kubernetes.container.name: gcp-auth,io.kubernetes.pod.name: gcp-auth-89d5ffd79-smf6s,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.
pod.uid: d9222c8a-bc84-4c20-b546-3034abbe136c,},Annotations:map[string]string{io.kubernetes.container.hash: 91308b2f,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:22ac3510c9f6c1dea06ada5fbab155f33ab2f7e362c024a53f9eb549848d590d,PodSandboxId:8895260d1ecf95cc546e0a3a5fe468cc59c47341f20b34592818f6875324ebe4,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1726332379159744959,Labels:map[string]string{io.kubernetes.container.name: patch,io.
kubernetes.pod.name: ingress-nginx-admission-patch-8zsm9,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 6f9dfddf-322e-4827-aabc-f4ce5421023d,},Annotations:map[string]string{io.kubernetes.container.hash: eb970c83,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e14ff6cfbdcb2b0b82ee5e2e8f2ee08b0aa163afbc9804a1b4c2d4e6a1fb1901,PodSandboxId:d3b092d9b1ce5c50aa70440b30436322c5cd6f32a9540386f94dfb977e9c1f68,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1726332378999567483,Labels:map[string]string{io.
kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-5rv5k,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 60f1a90d-ea04-408a-a27a-bd202e3b8875,},Annotations:map[string]string{io.kubernetes.container.hash: c5cfc092,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e8c78f14b17e7a8079ad435e0d378a510d0f1a1d0e67507457d582229b0dc7f8,PodSandboxId:1aa3f3cb51004f04eaa4495d7b5529a80931a47297b10c4983a9dd325aac62e6,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:172633
2355816734887,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-zpthv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5adc8bfb-2fb3-4e13-8b04-98e98afe35a9,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7f90cf12b43136552be3a9facb2fe5b7e9005a06d15f676d8f77c8fc53ddcb5a,PodSandboxId:5527e3f395706db407371bb2608d8833aefcda9f876b3d2d02c803c0c8b8952d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628d
b3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726332317456742589,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 042983c1-0076-46d0-8022-ff8afde6de61,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b39fe7c77bdab915cba2003e37d8a932f93c89d2816b66c662cd2b87f189195f,PodSandboxId:1c0a11c1d7f7c5b63da3f4f21f46979730ced4066e78410da9748fd11ab03ced,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724
c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726332314086297257,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-9p6z9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b60a487-876e-49a1-9a02-ff29269e6cd9,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7636b49f23d35f5a9fc075a76348fc31ad18da955d4e73fbad3d24534ef9282a,PodSandboxId:816f86f6b29aba5b03d04c7c4802bafefe963ba526eeadfef60a9836d272fa1f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image
:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726332313248815620,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ll2cd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 77c4fbce-cceb-4918-871f-5d17932941f1,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e180103456d123610ca2b41dad9814f3b54f68e8eaaea458cbea9621834f9f2,PodSandboxId:25abc346c251645814f2dd057edf9854dd7da6fc11dfd725d187a5f4ca0cae6d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1
001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726332300335194420,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-996992,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 72d75e1235958269ae116b8dd976eed9,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:62ccf13035320215b184582540baa2f498de301a8e3c9d4e3ad1b2a3595c2c8d,PodSandboxId:ce41d60ed0525314b5cc45be11a88a4496c36c13e3962371b7f675683952dd36,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e
4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726332300342876293,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-996992,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b56682c56107b7e0cb4de8d53e660eb,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:244c994b666b95b76ae5dd25d00b91d997db1974a4a83407b48f9d78d71cf309,PodSandboxId:15fa01d2627fb717b0a294d421654f46fec23ec17c8a8fcaaf18285b094f7812,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792c
bf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726332300313273026,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-996992,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82e0846c71461dab7faa1185c43b9171,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6da48572a3f2741574a1da7660f53f17bb817d3e49e25fd9f41ecf486aa65d5,PodSandboxId:476c6d893727453e5ceaf6f071c23932b5a6bb6e8630053bd70d7c0e05079db0,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[st
ring]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726332300190893967,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-996992,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2361d2ae563c8cc1b1f997c60c5996b9,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=1ea436df-651d-4844-b8e8-44f9dad45e9c name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 16:58:18 addons-996992 crio[669]: time="2024-09-14 16:58:18.763678942Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e981472b-5db8-4e4a-a9d0-e2eb47510ff0 name=/runtime.v1.RuntimeService/Version
	Sep 14 16:58:18 addons-996992 crio[669]: time="2024-09-14 16:58:18.763778136Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e981472b-5db8-4e4a-a9d0-e2eb47510ff0 name=/runtime.v1.RuntimeService/Version
	Sep 14 16:58:18 addons-996992 crio[669]: time="2024-09-14 16:58:18.765147159Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=7ada9acc-afd7-47df-9dc6-84fe7c013de8 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 14 16:58:18 addons-996992 crio[669]: time="2024-09-14 16:58:18.766718882Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726333098766690216,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:579802,},InodesUsed:&UInt64Value{Value:200,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7ada9acc-afd7-47df-9dc6-84fe7c013de8 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 14 16:58:18 addons-996992 crio[669]: time="2024-09-14 16:58:18.767369259Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=36ea9783-6c78-4e15-bf8d-17fdc4774426 name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 16:58:18 addons-996992 crio[669]: time="2024-09-14 16:58:18.767441682Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=36ea9783-6c78-4e15-bf8d-17fdc4774426 name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 16:58:18 addons-996992 crio[669]: time="2024-09-14 16:58:18.768537434Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:70d1675e8bf6137dbed4c2c8ba1ede0a600a3f0ce9709b8d818063a813154f29,PodSandboxId:5c1407129f05aa2651ba95000da59da45faa9159fc388ccf76ab45c67c52a2fb,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1726333091273345411,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-lf7nc,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 68e73e62-5b8c-43a1-b47c-fe3aac3fc269,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a2c842e27b9de8f75edb92b055573943c47a606fe40f848399f1053c007349d9,PodSandboxId:8164a72938eecad1dd7f3da09097d7f0e2eb28f6662a0578bf01cfed126c0a5d,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:074604130336e3c431b7c6b5b551b5a6ae5b67db13b3d223c6db638f85c7ff56,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7b4f26a7d93f4f1f276c51adb03ef0df54a82de89f254a9aec5c18bf0e45ee9,State:CONTAINER_RUNNING,CreatedAt:1726332950251160999,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e9aa988e-e59a-44dd-84f7-753b4db11866,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b1fc29dced5ee78fdeef5ed32bd7516882b6f3d25bf63964b7313e5b1a180188,PodSandboxId:5ac3aa6b762eab56fc54827e80aa928dfdebefbe7ea497526be6db2fe05f6299,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1726332399300943119,Labels:map[string]string{io.kubernetes.container.name: gcp-auth,io.kubernetes.pod.name: gcp-auth-89d5ffd79-smf6s,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.
pod.uid: d9222c8a-bc84-4c20-b546-3034abbe136c,},Annotations:map[string]string{io.kubernetes.container.hash: 91308b2f,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:22ac3510c9f6c1dea06ada5fbab155f33ab2f7e362c024a53f9eb549848d590d,PodSandboxId:8895260d1ecf95cc546e0a3a5fe468cc59c47341f20b34592818f6875324ebe4,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1726332379159744959,Labels:map[string]string{io.kubernetes.container.name: patch,io.
kubernetes.pod.name: ingress-nginx-admission-patch-8zsm9,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 6f9dfddf-322e-4827-aabc-f4ce5421023d,},Annotations:map[string]string{io.kubernetes.container.hash: eb970c83,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e14ff6cfbdcb2b0b82ee5e2e8f2ee08b0aa163afbc9804a1b4c2d4e6a1fb1901,PodSandboxId:d3b092d9b1ce5c50aa70440b30436322c5cd6f32a9540386f94dfb977e9c1f68,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1726332378999567483,Labels:map[string]string{io.
kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-5rv5k,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 60f1a90d-ea04-408a-a27a-bd202e3b8875,},Annotations:map[string]string{io.kubernetes.container.hash: c5cfc092,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e8c78f14b17e7a8079ad435e0d378a510d0f1a1d0e67507457d582229b0dc7f8,PodSandboxId:1aa3f3cb51004f04eaa4495d7b5529a80931a47297b10c4983a9dd325aac62e6,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:172633
2355816734887,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-zpthv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5adc8bfb-2fb3-4e13-8b04-98e98afe35a9,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7f90cf12b43136552be3a9facb2fe5b7e9005a06d15f676d8f77c8fc53ddcb5a,PodSandboxId:5527e3f395706db407371bb2608d8833aefcda9f876b3d2d02c803c0c8b8952d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628d
b3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726332317456742589,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 042983c1-0076-46d0-8022-ff8afde6de61,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b39fe7c77bdab915cba2003e37d8a932f93c89d2816b66c662cd2b87f189195f,PodSandboxId:1c0a11c1d7f7c5b63da3f4f21f46979730ced4066e78410da9748fd11ab03ced,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724
c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726332314086297257,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-9p6z9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b60a487-876e-49a1-9a02-ff29269e6cd9,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7636b49f23d35f5a9fc075a76348fc31ad18da955d4e73fbad3d24534ef9282a,PodSandboxId:816f86f6b29aba5b03d04c7c4802bafefe963ba526eeadfef60a9836d272fa1f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image
:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726332313248815620,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ll2cd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 77c4fbce-cceb-4918-871f-5d17932941f1,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e180103456d123610ca2b41dad9814f3b54f68e8eaaea458cbea9621834f9f2,PodSandboxId:25abc346c251645814f2dd057edf9854dd7da6fc11dfd725d187a5f4ca0cae6d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1
001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726332300335194420,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-996992,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 72d75e1235958269ae116b8dd976eed9,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:62ccf13035320215b184582540baa2f498de301a8e3c9d4e3ad1b2a3595c2c8d,PodSandboxId:ce41d60ed0525314b5cc45be11a88a4496c36c13e3962371b7f675683952dd36,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e
4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726332300342876293,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-996992,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b56682c56107b7e0cb4de8d53e660eb,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:244c994b666b95b76ae5dd25d00b91d997db1974a4a83407b48f9d78d71cf309,PodSandboxId:15fa01d2627fb717b0a294d421654f46fec23ec17c8a8fcaaf18285b094f7812,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792c
bf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726332300313273026,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-996992,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82e0846c71461dab7faa1185c43b9171,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6da48572a3f2741574a1da7660f53f17bb817d3e49e25fd9f41ecf486aa65d5,PodSandboxId:476c6d893727453e5ceaf6f071c23932b5a6bb6e8630053bd70d7c0e05079db0,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[st
ring]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726332300190893967,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-996992,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2361d2ae563c8cc1b1f997c60c5996b9,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=36ea9783-6c78-4e15-bf8d-17fdc4774426 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	70d1675e8bf61       docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6                        7 seconds ago       Running             hello-world-app           0                   5c1407129f05a       hello-world-app-55bf9c44b4-lf7nc
	a2c842e27b9de       docker.io/library/nginx@sha256:074604130336e3c431b7c6b5b551b5a6ae5b67db13b3d223c6db638f85c7ff56                              2 minutes ago       Running             nginx                     0                   8164a72938eec       nginx
	b1fc29dced5ee       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b                 11 minutes ago      Running             gcp-auth                  0                   5ac3aa6b762ea       gcp-auth-89d5ffd79-smf6s
	22ac3510c9f6c       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012   11 minutes ago      Exited              patch                     0                   8895260d1ecf9       ingress-nginx-admission-patch-8zsm9
	e14ff6cfbdcb2       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012   11 minutes ago      Exited              create                    0                   d3b092d9b1ce5       ingress-nginx-admission-create-5rv5k
	e8c78f14b17e7       registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a        12 minutes ago      Running             metrics-server            0                   1aa3f3cb51004       metrics-server-84c5f94fbc-zpthv
	7f90cf12b4313       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             13 minutes ago      Running             storage-provisioner       0                   5527e3f395706       storage-provisioner
	b39fe7c77bdab       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                                             13 minutes ago      Running             coredns                   0                   1c0a11c1d7f7c       coredns-7c65d6cfc9-9p6z9
	7636b49f23d35       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                                             13 minutes ago      Running             kube-proxy                0                   816f86f6b29ab       kube-proxy-ll2cd
	62ccf13035320       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                                             13 minutes ago      Running             kube-scheduler            0                   ce41d60ed0525       kube-scheduler-addons-996992
	9e180103456d1       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                                             13 minutes ago      Running             kube-apiserver            0                   25abc346c2516       kube-apiserver-addons-996992
	244c994b666b9       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                                             13 minutes ago      Running             etcd                      0                   15fa01d2627fb       etcd-addons-996992
	b6da48572a3f2       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                                             13 minutes ago      Running             kube-controller-manager   0                   476c6d8937274       kube-controller-manager-addons-996992
	
	
	==> coredns [b39fe7c77bdab915cba2003e37d8a932f93c89d2816b66c662cd2b87f189195f] <==
	[INFO] 127.0.0.1:41202 - 28347 "HINFO IN 1673696776001178715.7846265792048933670. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.013145705s
	[INFO] 10.244.0.6:33528 - 34854 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000861082s
	[INFO] 10.244.0.6:33528 - 56874 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000509102s
	[INFO] 10.244.0.6:49882 - 44252 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000179055s
	[INFO] 10.244.0.6:49882 - 26330 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000086967s
	[INFO] 10.244.0.6:56229 - 8877 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000096878s
	[INFO] 10.244.0.6:56229 - 29867 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000094082s
	[INFO] 10.244.0.6:60530 - 59893 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000128321s
	[INFO] 10.244.0.6:60530 - 13042 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000157038s
	[INFO] 10.244.0.6:59365 - 64212 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000145076s
	[INFO] 10.244.0.6:59365 - 23496 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000053277s
	[INFO] 10.244.0.6:38693 - 47172 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000089079s
	[INFO] 10.244.0.6:38693 - 34881 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000266922s
	[INFO] 10.244.0.6:57815 - 40259 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000061127s
	[INFO] 10.244.0.6:57815 - 21061 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000054151s
	[INFO] 10.244.0.6:54487 - 49983 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000049761s
	[INFO] 10.244.0.6:54487 - 43833 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000105815s
	[INFO] 10.244.0.22:49719 - 23493 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000476893s
	[INFO] 10.244.0.22:58157 - 28044 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000101631s
	[INFO] 10.244.0.22:49755 - 34273 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000139903s
	[INFO] 10.244.0.22:34695 - 62237 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000115272s
	[INFO] 10.244.0.22:38487 - 8705 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000122294s
	[INFO] 10.244.0.22:34286 - 15998 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.00008471s
	[INFO] 10.244.0.22:36588 - 36023 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.002660038s
	[INFO] 10.244.0.22:43999 - 38790 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 610 0.000715506s
	
	
	==> describe nodes <==
	Name:               addons-996992
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-996992
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fbeeb744274463b05401c917e5ab21bbaf5ef95a
	                    minikube.k8s.io/name=addons-996992
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_14T16_45_06_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-996992
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 14 Sep 2024 16:45:02 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-996992
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 14 Sep 2024 16:58:09 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 14 Sep 2024 16:56:39 +0000   Sat, 14 Sep 2024 16:45:01 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 14 Sep 2024 16:56:39 +0000   Sat, 14 Sep 2024 16:45:01 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 14 Sep 2024 16:56:39 +0000   Sat, 14 Sep 2024 16:45:01 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 14 Sep 2024 16:56:39 +0000   Sat, 14 Sep 2024 16:45:06 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.189
	  Hostname:    addons-996992
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	System Info:
	  Machine ID:                 5e2b58bc38a04bd6877d6321c8c25636
	  System UUID:                5e2b58bc-38a0-4bd6-877d-6321c8c25636
	  Boot ID:                    bc515e37-5984-41bc-90ff-4a341c7992e5
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  default                     hello-world-app-55bf9c44b4-lf7nc         0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  default                     nginx                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m33s
	  gcp-auth                    gcp-auth-89d5ffd79-smf6s                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 coredns-7c65d6cfc9-9p6z9                 100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     13m
	  kube-system                 etcd-addons-996992                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         13m
	  kube-system                 kube-apiserver-addons-996992             250m (12%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-controller-manager-addons-996992    200m (10%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-proxy-ll2cd                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-addons-996992             100m (5%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 metrics-server-84c5f94fbc-zpthv          100m (5%)     0 (0%)      200Mi (5%)       0 (0%)         13m
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             370Mi (9%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 13m   kube-proxy       
	  Normal  Starting                 13m   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  13m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  13m   kubelet          Node addons-996992 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m   kubelet          Node addons-996992 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m   kubelet          Node addons-996992 status is now: NodeHasSufficientPID
	  Normal  NodeReady                13m   kubelet          Node addons-996992 status is now: NodeReady
	  Normal  RegisteredNode           13m   node-controller  Node addons-996992 event: Registered Node addons-996992 in Controller
	
	
	==> dmesg <==
	[  +6.057467] kauditd_printk_skb: 65 callbacks suppressed
	[ +26.543298] kauditd_printk_skb: 4 callbacks suppressed
	[Sep14 16:46] kauditd_printk_skb: 34 callbacks suppressed
	[  +5.726173] kauditd_printk_skb: 6 callbacks suppressed
	[ +13.858248] kauditd_printk_skb: 17 callbacks suppressed
	[  +5.366113] kauditd_printk_skb: 49 callbacks suppressed
	[  +7.648867] kauditd_printk_skb: 56 callbacks suppressed
	[  +5.829438] kauditd_printk_skb: 6 callbacks suppressed
	[  +5.753456] kauditd_printk_skb: 16 callbacks suppressed
	[Sep14 16:47] kauditd_printk_skb: 40 callbacks suppressed
	[Sep14 16:48] kauditd_printk_skb: 28 callbacks suppressed
	[Sep14 16:49] kauditd_printk_skb: 28 callbacks suppressed
	[Sep14 16:52] kauditd_printk_skb: 28 callbacks suppressed
	[Sep14 16:54] kauditd_printk_skb: 28 callbacks suppressed
	[  +5.088825] kauditd_printk_skb: 19 callbacks suppressed
	[  +5.292544] kauditd_printk_skb: 15 callbacks suppressed
	[Sep14 16:55] kauditd_printk_skb: 62 callbacks suppressed
	[  +6.127400] kauditd_printk_skb: 12 callbacks suppressed
	[ +26.498820] kauditd_printk_skb: 15 callbacks suppressed
	[  +8.490747] kauditd_printk_skb: 33 callbacks suppressed
	[  +6.865254] kauditd_printk_skb: 29 callbacks suppressed
	[Sep14 16:56] kauditd_printk_skb: 30 callbacks suppressed
	[  +5.046455] kauditd_printk_skb: 17 callbacks suppressed
	[Sep14 16:58] kauditd_printk_skb: 2 callbacks suppressed
	[  +5.308277] kauditd_printk_skb: 21 callbacks suppressed
	
	
	==> etcd [244c994b666b95b76ae5dd25d00b91d997db1974a4a83407b48f9d78d71cf309] <==
	{"level":"info","ts":"2024-09-14T16:46:32.643434Z","caller":"traceutil/trace.go:171","msg":"trace[994433346] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1100; }","duration":"334.684033ms","start":"2024-09-14T16:46:32.308725Z","end":"2024-09-14T16:46:32.643410Z","steps":["trace[994433346] 'agreement among raft nodes before linearized reading'  (duration: 333.442918ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-14T16:46:32.643688Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-14T16:46:32.308692Z","time spent":"334.981095ms","remote":"127.0.0.1:39776","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":28,"request content":"key:\"/registry/health\" "}
	{"level":"warn","ts":"2024-09-14T16:46:43.987795Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"310.329366ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-14T16:46:43.987957Z","caller":"traceutil/trace.go:171","msg":"trace[82175678] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1170; }","duration":"310.494191ms","start":"2024-09-14T16:46:43.677445Z","end":"2024-09-14T16:46:43.987939Z","steps":["trace[82175678] 'range keys from in-memory index tree'  (duration: 310.27123ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-14T16:46:43.988032Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-14T16:46:43.677409Z","time spent":"310.610725ms","remote":"127.0.0.1:39968","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":28,"request content":"key:\"/registry/pods\" limit:1 "}
	{"level":"warn","ts":"2024-09-14T16:46:43.988534Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"100.452534ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1113"}
	{"level":"info","ts":"2024-09-14T16:46:43.988944Z","caller":"traceutil/trace.go:171","msg":"trace[925455638] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:1170; }","duration":"100.863861ms","start":"2024-09-14T16:46:43.888062Z","end":"2024-09-14T16:46:43.988926Z","steps":["trace[925455638] 'range keys from in-memory index tree'  (duration: 100.282057ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-14T16:55:01.401239Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1531}
	{"level":"info","ts":"2024-09-14T16:55:01.453343Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1531,"took":"51.605059ms","hash":2194584676,"current-db-size-bytes":6504448,"current-db-size":"6.5 MB","current-db-size-in-use-bytes":3567616,"current-db-size-in-use":"3.6 MB"}
	{"level":"info","ts":"2024-09-14T16:55:01.453889Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2194584676,"revision":1531,"compact-revision":-1}
	{"level":"info","ts":"2024-09-14T16:55:06.791539Z","caller":"traceutil/trace.go:171","msg":"trace[1480773213] linearizableReadLoop","detail":"{readStateIndex:2215; appliedIndex:2214; }","duration":"121.517894ms","start":"2024-09-14T16:55:06.669994Z","end":"2024-09-14T16:55:06.791512Z","steps":["trace[1480773213] 'read index received'  (duration: 121.338219ms)","trace[1480773213] 'applied index is now lower than readState.Index'  (duration: 179.198µs)"],"step_count":2}
	{"level":"warn","ts":"2024-09-14T16:55:06.791772Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"121.728846ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/deployments/kube-system/tiller-deploy\" ","response":"range_response_count:1 size:5015"}
	{"level":"info","ts":"2024-09-14T16:55:06.791805Z","caller":"traceutil/trace.go:171","msg":"trace[783623875] range","detail":"{range_begin:/registry/deployments/kube-system/tiller-deploy; range_end:; response_count:1; response_revision:2062; }","duration":"121.808688ms","start":"2024-09-14T16:55:06.669990Z","end":"2024-09-14T16:55:06.791799Z","steps":["trace[783623875] 'agreement among raft nodes before linearized reading'  (duration: 121.605717ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-14T16:55:06.792045Z","caller":"traceutil/trace.go:171","msg":"trace[1054613937] transaction","detail":"{read_only:false; response_revision:2062; number_of_response:1; }","duration":"147.610829ms","start":"2024-09-14T16:55:06.644423Z","end":"2024-09-14T16:55:06.792034Z","steps":["trace[1054613937] 'process raft request'  (duration: 146.958073ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-14T16:55:10.248913Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"171.394646ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-14T16:55:10.249062Z","caller":"traceutil/trace.go:171","msg":"trace[2140105305] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:2098; }","duration":"171.569728ms","start":"2024-09-14T16:55:10.077481Z","end":"2024-09-14T16:55:10.249051Z","steps":["trace[2140105305] 'agreement among raft nodes before linearized reading'  (duration: 171.37003ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-14T16:55:10.248791Z","caller":"traceutil/trace.go:171","msg":"trace[1932753122] linearizableReadLoop","detail":"{readStateIndex:2253; appliedIndex:2252; }","duration":"171.2342ms","start":"2024-09-14T16:55:10.077485Z","end":"2024-09-14T16:55:10.248719Z","steps":["trace[1932753122] 'read index received'  (duration: 72.664081ms)","trace[1932753122] 'applied index is now lower than readState.Index'  (duration: 98.569685ms)"],"step_count":2}
	{"level":"info","ts":"2024-09-14T16:56:09.031263Z","caller":"traceutil/trace.go:171","msg":"trace[968631491] transaction","detail":"{read_only:false; response_revision:2462; number_of_response:1; }","duration":"194.954715ms","start":"2024-09-14T16:56:08.836251Z","end":"2024-09-14T16:56:09.031206Z","steps":["trace[968631491] 'process raft request'  (duration: 194.591625ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-14T16:56:14.161056Z","caller":"traceutil/trace.go:171","msg":"trace[1645145771] transaction","detail":"{read_only:false; number_of_response:1; response_revision:2492; }","duration":"117.987268ms","start":"2024-09-14T16:56:14.043018Z","end":"2024-09-14T16:56:14.161005Z","steps":["trace[1645145771] 'process raft request'  (duration: 117.843678ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-14T16:56:39.509607Z","caller":"traceutil/trace.go:171","msg":"trace[1438413927] linearizableReadLoop","detail":"{readStateIndex:2741; appliedIndex:2740; }","duration":"218.528286ms","start":"2024-09-14T16:56:39.291061Z","end":"2024-09-14T16:56:39.509590Z","steps":["trace[1438413927] 'read index received'  (duration: 218.376973ms)","trace[1438413927] 'applied index is now lower than readState.Index'  (duration: 150.861µs)"],"step_count":2}
	{"level":"info","ts":"2024-09-14T16:56:39.509718Z","caller":"traceutil/trace.go:171","msg":"trace[559074960] transaction","detail":"{read_only:false; response_revision:2557; number_of_response:1; }","duration":"218.760615ms","start":"2024-09-14T16:56:39.290951Z","end":"2024-09-14T16:56:39.509711Z","steps":["trace[559074960] 'process raft request'  (duration: 218.52238ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-14T16:56:39.509952Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"200.330431ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-14T16:56:39.511494Z","caller":"traceutil/trace.go:171","msg":"trace[1897118407] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:2557; }","duration":"201.920066ms","start":"2024-09-14T16:56:39.309561Z","end":"2024-09-14T16:56:39.511481Z","steps":["trace[1897118407] 'agreement among raft nodes before linearized reading'  (duration: 200.2988ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-14T16:56:39.510033Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"218.954739ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/csinodes/\" range_end:\"/registry/csinodes0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2024-09-14T16:56:39.511680Z","caller":"traceutil/trace.go:171","msg":"trace[1514496080] range","detail":"{range_begin:/registry/csinodes/; range_end:/registry/csinodes0; response_count:0; response_revision:2557; }","duration":"220.581328ms","start":"2024-09-14T16:56:39.291058Z","end":"2024-09-14T16:56:39.511639Z","steps":["trace[1514496080] 'agreement among raft nodes before linearized reading'  (duration: 218.930568ms)"],"step_count":1}
	
	
	==> gcp-auth [b1fc29dced5ee78fdeef5ed32bd7516882b6f3d25bf63964b7313e5b1a180188] <==
	2024/09/14 16:46:45 Ready to write response ...
	2024/09/14 16:54:48 Ready to marshal response ...
	2024/09/14 16:54:48 Ready to write response ...
	2024/09/14 16:54:48 Ready to marshal response ...
	2024/09/14 16:54:48 Ready to write response ...
	2024/09/14 16:54:58 Ready to marshal response ...
	2024/09/14 16:54:58 Ready to write response ...
	2024/09/14 16:54:59 Ready to marshal response ...
	2024/09/14 16:54:59 Ready to write response ...
	2024/09/14 16:55:00 Ready to marshal response ...
	2024/09/14 16:55:00 Ready to write response ...
	2024/09/14 16:55:02 Ready to marshal response ...
	2024/09/14 16:55:02 Ready to write response ...
	2024/09/14 16:55:27 Ready to marshal response ...
	2024/09/14 16:55:27 Ready to write response ...
	2024/09/14 16:55:45 Ready to marshal response ...
	2024/09/14 16:55:45 Ready to write response ...
	2024/09/14 16:56:03 Ready to marshal response ...
	2024/09/14 16:56:03 Ready to write response ...
	2024/09/14 16:56:03 Ready to marshal response ...
	2024/09/14 16:56:03 Ready to write response ...
	2024/09/14 16:56:03 Ready to marshal response ...
	2024/09/14 16:56:03 Ready to write response ...
	2024/09/14 16:58:08 Ready to marshal response ...
	2024/09/14 16:58:08 Ready to write response ...
	
	
	==> kernel <==
	 16:58:19 up 13 min,  0 users,  load average: 0.45, 0.70, 0.57
	Linux addons-996992 5.10.207 #1 SMP Sat Sep 14 07:35:46 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [9e180103456d123610ca2b41dad9814f3b54f68e8eaaea458cbea9621834f9f2] <==
	E0914 16:47:05.677339       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.103.47.80:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.103.47.80:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.103.47.80:443: connect: connection refused" logger="UnhandledError"
	E0914 16:47:05.687323       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.103.47.80:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.103.47.80:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.103.47.80:443: connect: connection refused" logger="UnhandledError"
	I0914 16:47:05.827073       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E0914 16:55:17.021558       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I0914 16:55:17.639974       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0914 16:55:45.238956       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0914 16:55:45.239007       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0914 16:55:45.263430       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0914 16:55:45.263481       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0914 16:55:45.291930       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0914 16:55:45.291979       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0914 16:55:45.299265       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0914 16:55:45.299310       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0914 16:55:45.371396       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0914 16:55:45.371502       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0914 16:55:45.847374       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0914 16:55:46.052267       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.97.33.252"}
	W0914 16:55:46.300335       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0914 16:55:46.379369       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0914 16:55:46.416041       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I0914 16:55:51.311005       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0914 16:55:52.345185       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0914 16:56:03.180773       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.111.186.11"}
	I0914 16:58:08.607446       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.111.24.185"}
	E0914 16:58:10.257344       1 watch.go:250] "Unhandled Error" err="http2: stream closed" logger="UnhandledError"
	
	
	==> kube-controller-manager [b6da48572a3f2741574a1da7660f53f17bb817d3e49e25fd9f41ecf486aa65d5] <==
	E0914 16:56:51.125386       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0914 16:56:53.899520       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0914 16:56:53.899578       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0914 16:57:02.192160       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0914 16:57:02.192283       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0914 16:57:11.237902       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0914 16:57:11.238008       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0914 16:57:36.658141       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0914 16:57:36.658324       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0914 16:57:47.920640       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0914 16:57:47.920772       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0914 16:57:52.079392       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0914 16:57:52.079441       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0914 16:57:52.959863       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0914 16:57:52.959918       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0914 16:58:08.424810       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="49.02781ms"
	I0914 16:58:08.444162       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="19.233322ms"
	I0914 16:58:08.444332       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="87.625µs"
	I0914 16:58:08.444442       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="17.83µs"
	I0914 16:58:08.447640       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="30.35µs"
	I0914 16:58:10.867595       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-create" delay="0s"
	I0914 16:58:10.871312       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-bc57996ff" duration="4.896µs"
	I0914 16:58:10.897754       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-patch" delay="0s"
	I0914 16:58:12.261618       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="12.173983ms"
	I0914 16:58:12.261774       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="50.57µs"
	
	
	==> kube-proxy [7636b49f23d35f5a9fc075a76348fc31ad18da955d4e73fbad3d24534ef9282a] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0914 16:45:15.590221       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0914 16:45:15.599785       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.189"]
	E0914 16:45:15.599893       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0914 16:45:15.658278       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0914 16:45:15.658320       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0914 16:45:15.658346       1 server_linux.go:169] "Using iptables Proxier"
	I0914 16:45:15.663334       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0914 16:45:15.663614       1 server.go:483] "Version info" version="v1.31.1"
	I0914 16:45:15.663626       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0914 16:45:15.666732       1 config.go:199] "Starting service config controller"
	I0914 16:45:15.666758       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0914 16:45:15.666776       1 config.go:105] "Starting endpoint slice config controller"
	I0914 16:45:15.666780       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0914 16:45:15.667288       1 config.go:328] "Starting node config controller"
	I0914 16:45:15.667296       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0914 16:45:15.768165       1 shared_informer.go:320] Caches are synced for node config
	I0914 16:45:15.768221       1 shared_informer.go:320] Caches are synced for service config
	I0914 16:45:15.768261       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [62ccf13035320215b184582540baa2f498de301a8e3c9d4e3ad1b2a3595c2c8d] <==
	W0914 16:45:03.820736       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0914 16:45:03.820857       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0914 16:45:03.832104       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0914 16:45:03.832138       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0914 16:45:03.843716       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0914 16:45:03.843762       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0914 16:45:03.866418       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0914 16:45:03.866491       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0914 16:45:03.875513       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0914 16:45:03.875608       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0914 16:45:03.916659       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0914 16:45:03.917144       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0914 16:45:03.954059       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0914 16:45:03.954146       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0914 16:45:04.032670       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0914 16:45:04.032716       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0914 16:45:04.080506       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0914 16:45:04.080598       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0914 16:45:04.114758       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0914 16:45:04.115807       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0914 16:45:04.126730       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0914 16:45:04.126899       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0914 16:45:04.178995       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0914 16:45:04.179383       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0914 16:45:06.562975       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 14 16:58:08 addons-996992 kubelet[1212]: I0914 16:58:08.451996    1212 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/68e73e62-5b8c-43a1-b47c-fe3aac3fc269-gcp-creds\") pod \"hello-world-app-55bf9c44b4-lf7nc\" (UID: \"68e73e62-5b8c-43a1-b47c-fe3aac3fc269\") " pod="default/hello-world-app-55bf9c44b4-lf7nc"
	Sep 14 16:58:08 addons-996992 kubelet[1212]: I0914 16:58:08.452062    1212 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4qx4s\" (UniqueName: \"kubernetes.io/projected/68e73e62-5b8c-43a1-b47c-fe3aac3fc269-kube-api-access-4qx4s\") pod \"hello-world-app-55bf9c44b4-lf7nc\" (UID: \"68e73e62-5b8c-43a1-b47c-fe3aac3fc269\") " pod="default/hello-world-app-55bf9c44b4-lf7nc"
	Sep 14 16:58:09 addons-996992 kubelet[1212]: I0914 16:58:09.662895    1212 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vd6j9\" (UniqueName: \"kubernetes.io/projected/9bd2d610-b1b0-4b09-a8f9-29d24b8f3e18-kube-api-access-vd6j9\") pod \"9bd2d610-b1b0-4b09-a8f9-29d24b8f3e18\" (UID: \"9bd2d610-b1b0-4b09-a8f9-29d24b8f3e18\") "
	Sep 14 16:58:09 addons-996992 kubelet[1212]: I0914 16:58:09.664940    1212 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9bd2d610-b1b0-4b09-a8f9-29d24b8f3e18-kube-api-access-vd6j9" (OuterVolumeSpecName: "kube-api-access-vd6j9") pod "9bd2d610-b1b0-4b09-a8f9-29d24b8f3e18" (UID: "9bd2d610-b1b0-4b09-a8f9-29d24b8f3e18"). InnerVolumeSpecName "kube-api-access-vd6j9". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 14 16:58:09 addons-996992 kubelet[1212]: I0914 16:58:09.763601    1212 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-vd6j9\" (UniqueName: \"kubernetes.io/projected/9bd2d610-b1b0-4b09-a8f9-29d24b8f3e18-kube-api-access-vd6j9\") on node \"addons-996992\" DevicePath \"\""
	Sep 14 16:58:10 addons-996992 kubelet[1212]: I0914 16:58:10.224539    1212 scope.go:117] "RemoveContainer" containerID="4617d458fc0fa49e9385acca016dcbf772360248792914098be67dcd9bdd1ee4"
	Sep 14 16:58:10 addons-996992 kubelet[1212]: I0914 16:58:10.251318    1212 scope.go:117] "RemoveContainer" containerID="4617d458fc0fa49e9385acca016dcbf772360248792914098be67dcd9bdd1ee4"
	Sep 14 16:58:10 addons-996992 kubelet[1212]: E0914 16:58:10.252020    1212 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4617d458fc0fa49e9385acca016dcbf772360248792914098be67dcd9bdd1ee4\": container with ID starting with 4617d458fc0fa49e9385acca016dcbf772360248792914098be67dcd9bdd1ee4 not found: ID does not exist" containerID="4617d458fc0fa49e9385acca016dcbf772360248792914098be67dcd9bdd1ee4"
	Sep 14 16:58:10 addons-996992 kubelet[1212]: I0914 16:58:10.252061    1212 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4617d458fc0fa49e9385acca016dcbf772360248792914098be67dcd9bdd1ee4"} err="failed to get container status \"4617d458fc0fa49e9385acca016dcbf772360248792914098be67dcd9bdd1ee4\": rpc error: code = NotFound desc = could not find container \"4617d458fc0fa49e9385acca016dcbf772360248792914098be67dcd9bdd1ee4\": container with ID starting with 4617d458fc0fa49e9385acca016dcbf772360248792914098be67dcd9bdd1ee4 not found: ID does not exist"
	Sep 14 16:58:11 addons-996992 kubelet[1212]: I0914 16:58:11.609550    1212 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="60f1a90d-ea04-408a-a27a-bd202e3b8875" path="/var/lib/kubelet/pods/60f1a90d-ea04-408a-a27a-bd202e3b8875/volumes"
	Sep 14 16:58:11 addons-996992 kubelet[1212]: I0914 16:58:11.609983    1212 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6f9dfddf-322e-4827-aabc-f4ce5421023d" path="/var/lib/kubelet/pods/6f9dfddf-322e-4827-aabc-f4ce5421023d/volumes"
	Sep 14 16:58:11 addons-996992 kubelet[1212]: I0914 16:58:11.610392    1212 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9bd2d610-b1b0-4b09-a8f9-29d24b8f3e18" path="/var/lib/kubelet/pods/9bd2d610-b1b0-4b09-a8f9-29d24b8f3e18/volumes"
	Sep 14 16:58:14 addons-996992 kubelet[1212]: I0914 16:58:14.095402    1212 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/d7be8055-0e55-4f2c-8b12-4eb662eb1f12-webhook-cert\") pod \"d7be8055-0e55-4f2c-8b12-4eb662eb1f12\" (UID: \"d7be8055-0e55-4f2c-8b12-4eb662eb1f12\") "
	Sep 14 16:58:14 addons-996992 kubelet[1212]: I0914 16:58:14.095454    1212 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-82bls\" (UniqueName: \"kubernetes.io/projected/d7be8055-0e55-4f2c-8b12-4eb662eb1f12-kube-api-access-82bls\") pod \"d7be8055-0e55-4f2c-8b12-4eb662eb1f12\" (UID: \"d7be8055-0e55-4f2c-8b12-4eb662eb1f12\") "
	Sep 14 16:58:14 addons-996992 kubelet[1212]: I0914 16:58:14.099059    1212 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d7be8055-0e55-4f2c-8b12-4eb662eb1f12-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "d7be8055-0e55-4f2c-8b12-4eb662eb1f12" (UID: "d7be8055-0e55-4f2c-8b12-4eb662eb1f12"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Sep 14 16:58:14 addons-996992 kubelet[1212]: I0914 16:58:14.100730    1212 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d7be8055-0e55-4f2c-8b12-4eb662eb1f12-kube-api-access-82bls" (OuterVolumeSpecName: "kube-api-access-82bls") pod "d7be8055-0e55-4f2c-8b12-4eb662eb1f12" (UID: "d7be8055-0e55-4f2c-8b12-4eb662eb1f12"). InnerVolumeSpecName "kube-api-access-82bls". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 14 16:58:14 addons-996992 kubelet[1212]: I0914 16:58:14.196594    1212 reconciler_common.go:288] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/d7be8055-0e55-4f2c-8b12-4eb662eb1f12-webhook-cert\") on node \"addons-996992\" DevicePath \"\""
	Sep 14 16:58:14 addons-996992 kubelet[1212]: I0914 16:58:14.196633    1212 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-82bls\" (UniqueName: \"kubernetes.io/projected/d7be8055-0e55-4f2c-8b12-4eb662eb1f12-kube-api-access-82bls\") on node \"addons-996992\" DevicePath \"\""
	Sep 14 16:58:14 addons-996992 kubelet[1212]: I0914 16:58:14.252866    1212 scope.go:117] "RemoveContainer" containerID="dc19f66cd0016b566ce0b53728eeefd352d3b2bfd9d0ed9494d6f451c2a389b9"
	Sep 14 16:58:14 addons-996992 kubelet[1212]: I0914 16:58:14.272459    1212 scope.go:117] "RemoveContainer" containerID="dc19f66cd0016b566ce0b53728eeefd352d3b2bfd9d0ed9494d6f451c2a389b9"
	Sep 14 16:58:14 addons-996992 kubelet[1212]: E0914 16:58:14.272898    1212 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dc19f66cd0016b566ce0b53728eeefd352d3b2bfd9d0ed9494d6f451c2a389b9\": container with ID starting with dc19f66cd0016b566ce0b53728eeefd352d3b2bfd9d0ed9494d6f451c2a389b9 not found: ID does not exist" containerID="dc19f66cd0016b566ce0b53728eeefd352d3b2bfd9d0ed9494d6f451c2a389b9"
	Sep 14 16:58:14 addons-996992 kubelet[1212]: I0914 16:58:14.272939    1212 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dc19f66cd0016b566ce0b53728eeefd352d3b2bfd9d0ed9494d6f451c2a389b9"} err="failed to get container status \"dc19f66cd0016b566ce0b53728eeefd352d3b2bfd9d0ed9494d6f451c2a389b9\": rpc error: code = NotFound desc = could not find container \"dc19f66cd0016b566ce0b53728eeefd352d3b2bfd9d0ed9494d6f451c2a389b9\": container with ID starting with dc19f66cd0016b566ce0b53728eeefd352d3b2bfd9d0ed9494d6f451c2a389b9 not found: ID does not exist"
	Sep 14 16:58:15 addons-996992 kubelet[1212]: I0914 16:58:15.610387    1212 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d7be8055-0e55-4f2c-8b12-4eb662eb1f12" path="/var/lib/kubelet/pods/d7be8055-0e55-4f2c-8b12-4eb662eb1f12/volumes"
	Sep 14 16:58:15 addons-996992 kubelet[1212]: E0914 16:58:15.977156    1212 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726333095976887686,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:579802,},InodesUsed:&UInt64Value{Value:200,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 16:58:15 addons-996992 kubelet[1212]: E0914 16:58:15.977224    1212 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726333095976887686,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:579802,},InodesUsed:&UInt64Value{Value:200,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [7f90cf12b43136552be3a9facb2fe5b7e9005a06d15f676d8f77c8fc53ddcb5a] <==
	I0914 16:45:18.537690       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0914 16:45:18.556796       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0914 16:45:18.556868       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0914 16:45:18.586989       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0914 16:45:18.587718       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"89c4a434-eabc-4a8a-9f14-9375f68755f8", APIVersion:"v1", ResourceVersion:"660", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-996992_e9eca151-6b6c-4161-b461-f6f0cd55060d became leader
	I0914 16:45:18.587761       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-996992_e9eca151-6b6c-4161-b461-f6f0cd55060d!
	I0914 16:45:18.789501       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-996992_e9eca151-6b6c-4161-b461-f6f0cd55060d!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-996992 -n addons-996992
helpers_test.go:261: (dbg) Run:  kubectl --context addons-996992 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-996992 describe pod busybox
helpers_test.go:282: (dbg) kubectl --context addons-996992 describe pod busybox:

                                                
                                                
-- stdout --
	Name:             busybox
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-996992/192.168.39.189
	Start Time:       Sat, 14 Sep 2024 16:46:45 +0000
	Labels:           integration-test=busybox
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.23
	IPs:
	  IP:  10.244.0.23
	Containers:
	  busybox:
	    Container ID:  
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      sleep
	      3600
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-6dtsq (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-6dtsq:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  11m                  default-scheduler  Successfully assigned default/busybox to addons-996992
	  Normal   Pulling    10m (x4 over 11m)    kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Warning  Failed     10m (x4 over 11m)    kubelet            Failed to pull image "gcr.io/k8s-minikube/busybox:1.28.4-glibc": unable to retrieve auth token: invalid username/password: unauthorized: authentication failed
	  Warning  Failed     10m (x4 over 11m)    kubelet            Error: ErrImagePull
	  Warning  Failed     9m48s (x6 over 11m)  kubelet            Error: ImagePullBackOff
	  Normal   BackOff    88s (x42 over 11m)   kubelet            Back-off pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"

                                                
                                                
-- /stdout --
helpers_test.go:285: <<< TestAddons/parallel/Ingress FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Ingress (154.39s)

                                                
                                    
TestAddons/parallel/MetricsServer (323.49s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:409: metrics-server stabilized in 2.839847ms
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-84c5f94fbc-zpthv" [5adc8bfb-2fb3-4e13-8b04-98e98afe35a9] Running
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.007298763s
addons_test.go:417: (dbg) Run:  kubectl --context addons-996992 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-996992 top pods -n kube-system: exit status 1 (70.574016ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-9p6z9, age: 10m3.192001934s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-996992 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-996992 top pods -n kube-system: exit status 1 (70.32345ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-9p6z9, age: 10m6.379448893s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-996992 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-996992 top pods -n kube-system: exit status 1 (62.19218ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-9p6z9, age: 10m11.281459058s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-996992 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-996992 top pods -n kube-system: exit status 1 (70.615672ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-9p6z9, age: 10m17.834054622s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-996992 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-996992 top pods -n kube-system: exit status 1 (64.220319ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-9p6z9, age: 10m24.491962972s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-996992 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-996992 top pods -n kube-system: exit status 1 (66.771844ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-9p6z9, age: 10m39.677027665s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-996992 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-996992 top pods -n kube-system: exit status 1 (168.272547ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-9p6z9, age: 10m59.079977687s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-996992 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-996992 top pods -n kube-system: exit status 1 (63.634764ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-9p6z9, age: 11m26.122192927s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-996992 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-996992 top pods -n kube-system: exit status 1 (66.456449ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-9p6z9, age: 11m55.525415707s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-996992 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-996992 top pods -n kube-system: exit status 1 (61.009935ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-9p6z9, age: 13m23.327166032s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-996992 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-996992 top pods -n kube-system: exit status 1 (75.291358ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-9p6z9, age: 13m59.617261202s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-996992 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-996992 top pods -n kube-system: exit status 1 (64.29074ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-9p6z9, age: 15m17.850468002s

                                                
                                                
** /stderr **
addons_test.go:431: failed checking metric server: exit status 1
addons_test.go:434: (dbg) Run:  out/minikube-linux-amd64 -p addons-996992 addons disable metrics-server --alsologtostderr -v=1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-996992 -n addons-996992
helpers_test.go:244: <<< TestAddons/parallel/MetricsServer FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/MetricsServer]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-996992 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-996992 logs -n 25: (1.311860186s)
helpers_test.go:252: TestAddons/parallel/MetricsServer logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-only-119677                                                                     | download-only-119677 | jenkins | v1.34.0 | 14 Sep 24 16:44 UTC | 14 Sep 24 16:44 UTC |
	| delete  | -p download-only-357716                                                                     | download-only-357716 | jenkins | v1.34.0 | 14 Sep 24 16:44 UTC | 14 Sep 24 16:44 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-539617 | jenkins | v1.34.0 | 14 Sep 24 16:44 UTC |                     |
	|         | binary-mirror-539617                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:35769                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-539617                                                                     | binary-mirror-539617 | jenkins | v1.34.0 | 14 Sep 24 16:44 UTC | 14 Sep 24 16:44 UTC |
	| addons  | disable dashboard -p                                                                        | addons-996992        | jenkins | v1.34.0 | 14 Sep 24 16:44 UTC |                     |
	|         | addons-996992                                                                               |                      |         |         |                     |                     |
	| addons  | enable dashboard -p                                                                         | addons-996992        | jenkins | v1.34.0 | 14 Sep 24 16:44 UTC |                     |
	|         | addons-996992                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-996992 --wait=true                                                                | addons-996992        | jenkins | v1.34.0 | 14 Sep 24 16:44 UTC | 14 Sep 24 16:46 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	|         | --addons=helm-tiller                                                                        |                      |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-996992        | jenkins | v1.34.0 | 14 Sep 24 16:54 UTC | 14 Sep 24 16:54 UTC |
	|         | addons-996992                                                                               |                      |         |         |                     |                     |
	| ssh     | addons-996992 ssh cat                                                                       | addons-996992        | jenkins | v1.34.0 | 14 Sep 24 16:55 UTC | 14 Sep 24 16:55 UTC |
	|         | /opt/local-path-provisioner/pvc-065cb3df-7fd3-4993-9a34-5c093c32d00a_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-996992 addons disable                                                                | addons-996992        | jenkins | v1.34.0 | 14 Sep 24 16:55 UTC | 14 Sep 24 16:55 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-996992 addons disable                                                                | addons-996992        | jenkins | v1.34.0 | 14 Sep 24 16:55 UTC | 14 Sep 24 16:55 UTC |
	|         | helm-tiller --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-996992 addons                                                                        | addons-996992        | jenkins | v1.34.0 | 14 Sep 24 16:55 UTC | 14 Sep 24 16:55 UTC |
	|         | disable csi-hostpath-driver                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-996992 addons                                                                        | addons-996992        | jenkins | v1.34.0 | 14 Sep 24 16:55 UTC | 14 Sep 24 16:55 UTC |
	|         | disable volumesnapshots                                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-996992        | jenkins | v1.34.0 | 14 Sep 24 16:55 UTC | 14 Sep 24 16:55 UTC |
	|         | addons-996992                                                                               |                      |         |         |                     |                     |
	| ssh     | addons-996992 ssh curl -s                                                                   | addons-996992        | jenkins | v1.34.0 | 14 Sep 24 16:55 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |         |                     |                     |
	| ip      | addons-996992 ip                                                                            | addons-996992        | jenkins | v1.34.0 | 14 Sep 24 16:55 UTC | 14 Sep 24 16:55 UTC |
	| addons  | addons-996992 addons disable                                                                | addons-996992        | jenkins | v1.34.0 | 14 Sep 24 16:55 UTC | 14 Sep 24 16:56 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-996992        | jenkins | v1.34.0 | 14 Sep 24 16:56 UTC | 14 Sep 24 16:56 UTC |
	|         | -p addons-996992                                                                            |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-996992        | jenkins | v1.34.0 | 14 Sep 24 16:56 UTC | 14 Sep 24 16:56 UTC |
	|         | -p addons-996992                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-996992 addons disable                                                                | addons-996992        | jenkins | v1.34.0 | 14 Sep 24 16:56 UTC | 14 Sep 24 16:56 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                      |         |         |                     |                     |
	| addons  | addons-996992 addons disable                                                                | addons-996992        | jenkins | v1.34.0 | 14 Sep 24 16:56 UTC | 14 Sep 24 16:56 UTC |
	|         | headlamp --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| ip      | addons-996992 ip                                                                            | addons-996992        | jenkins | v1.34.0 | 14 Sep 24 16:58 UTC | 14 Sep 24 16:58 UTC |
	| addons  | addons-996992 addons disable                                                                | addons-996992        | jenkins | v1.34.0 | 14 Sep 24 16:58 UTC | 14 Sep 24 16:58 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-996992 addons disable                                                                | addons-996992        | jenkins | v1.34.0 | 14 Sep 24 16:58 UTC | 14 Sep 24 16:58 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	| addons  | addons-996992 addons                                                                        | addons-996992        | jenkins | v1.34.0 | 14 Sep 24 17:00 UTC | 14 Sep 24 17:00 UTC |
	|         | disable metrics-server                                                                      |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/14 16:44:27
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0914 16:44:27.658554   16725 out.go:345] Setting OutFile to fd 1 ...
	I0914 16:44:27.659049   16725 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 16:44:27.659100   16725 out.go:358] Setting ErrFile to fd 2...
	I0914 16:44:27.659118   16725 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 16:44:27.659608   16725 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19643-8806/.minikube/bin
	I0914 16:44:27.660666   16725 out.go:352] Setting JSON to false
	I0914 16:44:27.661546   16725 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":1612,"bootTime":1726330656,"procs":173,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0914 16:44:27.661646   16725 start.go:139] virtualization: kvm guest
	I0914 16:44:27.663699   16725 out.go:177] * [addons-996992] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0914 16:44:27.665028   16725 out.go:177]   - MINIKUBE_LOCATION=19643
	I0914 16:44:27.665051   16725 notify.go:220] Checking for updates...
	I0914 16:44:27.667815   16725 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0914 16:44:27.669277   16725 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19643-8806/kubeconfig
	I0914 16:44:27.670590   16725 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19643-8806/.minikube
	I0914 16:44:27.671878   16725 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0914 16:44:27.673058   16725 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0914 16:44:27.674650   16725 driver.go:394] Setting default libvirt URI to qemu:///system
	I0914 16:44:27.706805   16725 out.go:177] * Using the kvm2 driver based on user configuration
	I0914 16:44:27.708321   16725 start.go:297] selected driver: kvm2
	I0914 16:44:27.708336   16725 start.go:901] validating driver "kvm2" against <nil>
	I0914 16:44:27.708348   16725 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0914 16:44:27.709072   16725 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 16:44:27.709158   16725 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19643-8806/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0914 16:44:27.723953   16725 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0914 16:44:27.724008   16725 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0914 16:44:27.724241   16725 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0914 16:44:27.724270   16725 cni.go:84] Creating CNI manager for ""
	I0914 16:44:27.724306   16725 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0914 16:44:27.724316   16725 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0914 16:44:27.724367   16725 start.go:340] cluster config:
	{Name:addons-996992 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726281268-19643@sha256:cce8be0c1ac4e3d852132008ef1cc1dcf5b79f708d025db83f146ae65db32e8e Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-996992 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0914 16:44:27.724463   16725 iso.go:125] acquiring lock: {Name:mk538f7a4abc7956f82511183088cbfac1e66ebb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 16:44:27.726351   16725 out.go:177] * Starting "addons-996992" primary control-plane node in "addons-996992" cluster
	I0914 16:44:27.727435   16725 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0914 16:44:27.727477   16725 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19643-8806/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0914 16:44:27.727486   16725 cache.go:56] Caching tarball of preloaded images
	I0914 16:44:27.727583   16725 preload.go:172] Found /home/jenkins/minikube-integration/19643-8806/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0914 16:44:27.727595   16725 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0914 16:44:27.727895   16725 profile.go:143] Saving config to /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/addons-996992/config.json ...
	I0914 16:44:27.727914   16725 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/addons-996992/config.json: {Name:mk5b5d945e87f410628fe80d3ffbea824c8cc516 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 16:44:27.728052   16725 start.go:360] acquireMachinesLock for addons-996992: {Name:mk26748ac63472c3f8b6f3848c12e76160c2970c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0914 16:44:27.728097   16725 start.go:364] duration metric: took 32.087µs to acquireMachinesLock for "addons-996992"
	I0914 16:44:27.728117   16725 start.go:93] Provisioning new machine with config: &{Name:addons-996992 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19643/minikube-v1.34.0-1726281733-19643-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726281268-19643@sha256:cce8be0c1ac4e3d852132008ef1cc1dcf5b79f708d025db83f146ae65db32e8e Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-996992 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0914 16:44:27.728170   16725 start.go:125] createHost starting for "" (driver="kvm2")
	I0914 16:44:27.730533   16725 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0914 16:44:27.730741   16725 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 16:44:27.730798   16725 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 16:44:27.745035   16725 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45463
	I0914 16:44:27.745492   16725 main.go:141] libmachine: () Calling .GetVersion
	I0914 16:44:27.746094   16725 main.go:141] libmachine: Using API Version  1
	I0914 16:44:27.746115   16725 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 16:44:27.746439   16725 main.go:141] libmachine: () Calling .GetMachineName
	I0914 16:44:27.746641   16725 main.go:141] libmachine: (addons-996992) Calling .GetMachineName
	I0914 16:44:27.746794   16725 main.go:141] libmachine: (addons-996992) Calling .DriverName
	I0914 16:44:27.746933   16725 start.go:159] libmachine.API.Create for "addons-996992" (driver="kvm2")
	I0914 16:44:27.746958   16725 client.go:168] LocalClient.Create starting
	I0914 16:44:27.746993   16725 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca.pem
	I0914 16:44:27.859328   16725 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/cert.pem
	I0914 16:44:27.966294   16725 main.go:141] libmachine: Running pre-create checks...
	I0914 16:44:27.966316   16725 main.go:141] libmachine: (addons-996992) Calling .PreCreateCheck
	I0914 16:44:27.966771   16725 main.go:141] libmachine: (addons-996992) Calling .GetConfigRaw
	I0914 16:44:27.967192   16725 main.go:141] libmachine: Creating machine...
	I0914 16:44:27.967205   16725 main.go:141] libmachine: (addons-996992) Calling .Create
	I0914 16:44:27.967357   16725 main.go:141] libmachine: (addons-996992) Creating KVM machine...
	I0914 16:44:27.968635   16725 main.go:141] libmachine: (addons-996992) DBG | found existing default KVM network
	I0914 16:44:27.969364   16725 main.go:141] libmachine: (addons-996992) DBG | I0914 16:44:27.969186   16746 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002111f0}
	I0914 16:44:27.969389   16725 main.go:141] libmachine: (addons-996992) DBG | created network xml: 
	I0914 16:44:27.969403   16725 main.go:141] libmachine: (addons-996992) DBG | <network>
	I0914 16:44:27.969414   16725 main.go:141] libmachine: (addons-996992) DBG |   <name>mk-addons-996992</name>
	I0914 16:44:27.969476   16725 main.go:141] libmachine: (addons-996992) DBG |   <dns enable='no'/>
	I0914 16:44:27.969509   16725 main.go:141] libmachine: (addons-996992) DBG |   
	I0914 16:44:27.969524   16725 main.go:141] libmachine: (addons-996992) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0914 16:44:27.969537   16725 main.go:141] libmachine: (addons-996992) DBG |     <dhcp>
	I0914 16:44:27.969546   16725 main.go:141] libmachine: (addons-996992) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0914 16:44:27.969553   16725 main.go:141] libmachine: (addons-996992) DBG |     </dhcp>
	I0914 16:44:27.969560   16725 main.go:141] libmachine: (addons-996992) DBG |   </ip>
	I0914 16:44:27.969567   16725 main.go:141] libmachine: (addons-996992) DBG |   
	I0914 16:44:27.969572   16725 main.go:141] libmachine: (addons-996992) DBG | </network>
	I0914 16:44:27.969578   16725 main.go:141] libmachine: (addons-996992) DBG | 
	I0914 16:44:27.975466   16725 main.go:141] libmachine: (addons-996992) DBG | trying to create private KVM network mk-addons-996992 192.168.39.0/24...
	I0914 16:44:28.040012   16725 main.go:141] libmachine: (addons-996992) DBG | private KVM network mk-addons-996992 192.168.39.0/24 created
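At this point the kvm2 driver has generated the <network> XML shown above and created the private KVM network mk-addons-996992 on 192.168.39.0/24. For reference, a minimal sketch of the same operation using the libvirt Go bindings; the libvirt.org/go/libvirt import path and the error handling are assumptions for illustration, not code taken from the driver:

	package main

	import (
		"fmt"
		"log"

		libvirt "libvirt.org/go/libvirt" // assumed binding; the driver may use a different package
	)

	func main() {
		// Connect to the system libvirt daemon (KVMQemuURI: qemu:///system in the config above).
		conn, err := libvirt.NewConnect("qemu:///system")
		if err != nil {
			log.Fatalf("connect: %v", err)
		}
		defer conn.Close()

		// Network definition equivalent to the XML printed in the log.
		netXML := `<network>
	  <name>mk-addons-996992</name>
	  <dns enable='no'/>
	  <ip address='192.168.39.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.39.2' end='192.168.39.253'/>
	    </dhcp>
	  </ip>
	</network>`

		// Define the persistent network, then bring it up.
		net, err := conn.NetworkDefineXML(netXML)
		if err != nil {
			log.Fatalf("define network: %v", err)
		}
		defer net.Free()
		if err := net.Create(); err != nil {
			log.Fatalf("start network: %v", err)
		}
		fmt.Println("private network mk-addons-996992 is up")
	}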
	I0914 16:44:28.040038   16725 main.go:141] libmachine: (addons-996992) Setting up store path in /home/jenkins/minikube-integration/19643-8806/.minikube/machines/addons-996992 ...
	I0914 16:44:28.040051   16725 main.go:141] libmachine: (addons-996992) DBG | I0914 16:44:28.039977   16746 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19643-8806/.minikube
	I0914 16:44:28.040070   16725 main.go:141] libmachine: (addons-996992) Building disk image from file:///home/jenkins/minikube-integration/19643-8806/.minikube/cache/iso/amd64/minikube-v1.34.0-1726281733-19643-amd64.iso
	I0914 16:44:28.040122   16725 main.go:141] libmachine: (addons-996992) Downloading /home/jenkins/minikube-integration/19643-8806/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19643-8806/.minikube/cache/iso/amd64/minikube-v1.34.0-1726281733-19643-amd64.iso...
	I0914 16:44:28.289089   16725 main.go:141] libmachine: (addons-996992) DBG | I0914 16:44:28.288934   16746 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19643-8806/.minikube/machines/addons-996992/id_rsa...
	I0914 16:44:28.557850   16725 main.go:141] libmachine: (addons-996992) DBG | I0914 16:44:28.557726   16746 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19643-8806/.minikube/machines/addons-996992/addons-996992.rawdisk...
	I0914 16:44:28.557884   16725 main.go:141] libmachine: (addons-996992) DBG | Writing magic tar header
	I0914 16:44:28.557899   16725 main.go:141] libmachine: (addons-996992) DBG | Writing SSH key tar header
	I0914 16:44:28.557913   16725 main.go:141] libmachine: (addons-996992) DBG | I0914 16:44:28.557851   16746 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19643-8806/.minikube/machines/addons-996992 ...
	I0914 16:44:28.557943   16725 main.go:141] libmachine: (addons-996992) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19643-8806/.minikube/machines/addons-996992
	I0914 16:44:28.557987   16725 main.go:141] libmachine: (addons-996992) Setting executable bit set on /home/jenkins/minikube-integration/19643-8806/.minikube/machines/addons-996992 (perms=drwx------)
	I0914 16:44:28.558007   16725 main.go:141] libmachine: (addons-996992) Setting executable bit set on /home/jenkins/minikube-integration/19643-8806/.minikube/machines (perms=drwxr-xr-x)
	I0914 16:44:28.558018   16725 main.go:141] libmachine: (addons-996992) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19643-8806/.minikube/machines
	I0914 16:44:28.558031   16725 main.go:141] libmachine: (addons-996992) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19643-8806/.minikube
	I0914 16:44:28.558047   16725 main.go:141] libmachine: (addons-996992) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19643-8806
	I0914 16:44:28.558057   16725 main.go:141] libmachine: (addons-996992) Setting executable bit set on /home/jenkins/minikube-integration/19643-8806/.minikube (perms=drwxr-xr-x)
	I0914 16:44:28.558068   16725 main.go:141] libmachine: (addons-996992) Setting executable bit set on /home/jenkins/minikube-integration/19643-8806 (perms=drwxrwxr-x)
	I0914 16:44:28.558078   16725 main.go:141] libmachine: (addons-996992) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0914 16:44:28.558086   16725 main.go:141] libmachine: (addons-996992) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0914 16:44:28.558098   16725 main.go:141] libmachine: (addons-996992) DBG | Checking permissions on dir: /home/jenkins
	I0914 16:44:28.558109   16725 main.go:141] libmachine: (addons-996992) DBG | Checking permissions on dir: /home
	I0914 16:44:28.558118   16725 main.go:141] libmachine: (addons-996992) DBG | Skipping /home - not owner
	I0914 16:44:28.558148   16725 main.go:141] libmachine: (addons-996992) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0914 16:44:28.558185   16725 main.go:141] libmachine: (addons-996992) Creating domain...
	I0914 16:44:28.559360   16725 main.go:141] libmachine: (addons-996992) define libvirt domain using xml: 
	I0914 16:44:28.559383   16725 main.go:141] libmachine: (addons-996992) <domain type='kvm'>
	I0914 16:44:28.559393   16725 main.go:141] libmachine: (addons-996992)   <name>addons-996992</name>
	I0914 16:44:28.559399   16725 main.go:141] libmachine: (addons-996992)   <memory unit='MiB'>4000</memory>
	I0914 16:44:28.559405   16725 main.go:141] libmachine: (addons-996992)   <vcpu>2</vcpu>
	I0914 16:44:28.559409   16725 main.go:141] libmachine: (addons-996992)   <features>
	I0914 16:44:28.559414   16725 main.go:141] libmachine: (addons-996992)     <acpi/>
	I0914 16:44:28.559420   16725 main.go:141] libmachine: (addons-996992)     <apic/>
	I0914 16:44:28.559425   16725 main.go:141] libmachine: (addons-996992)     <pae/>
	I0914 16:44:28.559431   16725 main.go:141] libmachine: (addons-996992)     
	I0914 16:44:28.559437   16725 main.go:141] libmachine: (addons-996992)   </features>
	I0914 16:44:28.559443   16725 main.go:141] libmachine: (addons-996992)   <cpu mode='host-passthrough'>
	I0914 16:44:28.559448   16725 main.go:141] libmachine: (addons-996992)   
	I0914 16:44:28.559462   16725 main.go:141] libmachine: (addons-996992)   </cpu>
	I0914 16:44:28.559469   16725 main.go:141] libmachine: (addons-996992)   <os>
	I0914 16:44:28.559475   16725 main.go:141] libmachine: (addons-996992)     <type>hvm</type>
	I0914 16:44:28.559489   16725 main.go:141] libmachine: (addons-996992)     <boot dev='cdrom'/>
	I0914 16:44:28.559500   16725 main.go:141] libmachine: (addons-996992)     <boot dev='hd'/>
	I0914 16:44:28.559505   16725 main.go:141] libmachine: (addons-996992)     <bootmenu enable='no'/>
	I0914 16:44:28.559525   16725 main.go:141] libmachine: (addons-996992)   </os>
	I0914 16:44:28.559531   16725 main.go:141] libmachine: (addons-996992)   <devices>
	I0914 16:44:28.559537   16725 main.go:141] libmachine: (addons-996992)     <disk type='file' device='cdrom'>
	I0914 16:44:28.559545   16725 main.go:141] libmachine: (addons-996992)       <source file='/home/jenkins/minikube-integration/19643-8806/.minikube/machines/addons-996992/boot2docker.iso'/>
	I0914 16:44:28.559550   16725 main.go:141] libmachine: (addons-996992)       <target dev='hdc' bus='scsi'/>
	I0914 16:44:28.559555   16725 main.go:141] libmachine: (addons-996992)       <readonly/>
	I0914 16:44:28.559560   16725 main.go:141] libmachine: (addons-996992)     </disk>
	I0914 16:44:28.559567   16725 main.go:141] libmachine: (addons-996992)     <disk type='file' device='disk'>
	I0914 16:44:28.559574   16725 main.go:141] libmachine: (addons-996992)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0914 16:44:28.559584   16725 main.go:141] libmachine: (addons-996992)       <source file='/home/jenkins/minikube-integration/19643-8806/.minikube/machines/addons-996992/addons-996992.rawdisk'/>
	I0914 16:44:28.559589   16725 main.go:141] libmachine: (addons-996992)       <target dev='hda' bus='virtio'/>
	I0914 16:44:28.559595   16725 main.go:141] libmachine: (addons-996992)     </disk>
	I0914 16:44:28.559604   16725 main.go:141] libmachine: (addons-996992)     <interface type='network'>
	I0914 16:44:28.559614   16725 main.go:141] libmachine: (addons-996992)       <source network='mk-addons-996992'/>
	I0914 16:44:28.559622   16725 main.go:141] libmachine: (addons-996992)       <model type='virtio'/>
	I0914 16:44:28.559630   16725 main.go:141] libmachine: (addons-996992)     </interface>
	I0914 16:44:28.559636   16725 main.go:141] libmachine: (addons-996992)     <interface type='network'>
	I0914 16:44:28.559648   16725 main.go:141] libmachine: (addons-996992)       <source network='default'/>
	I0914 16:44:28.559656   16725 main.go:141] libmachine: (addons-996992)       <model type='virtio'/>
	I0914 16:44:28.559660   16725 main.go:141] libmachine: (addons-996992)     </interface>
	I0914 16:44:28.559667   16725 main.go:141] libmachine: (addons-996992)     <serial type='pty'>
	I0914 16:44:28.559674   16725 main.go:141] libmachine: (addons-996992)       <target port='0'/>
	I0914 16:44:28.559684   16725 main.go:141] libmachine: (addons-996992)     </serial>
	I0914 16:44:28.559695   16725 main.go:141] libmachine: (addons-996992)     <console type='pty'>
	I0914 16:44:28.559713   16725 main.go:141] libmachine: (addons-996992)       <target type='serial' port='0'/>
	I0914 16:44:28.559728   16725 main.go:141] libmachine: (addons-996992)     </console>
	I0914 16:44:28.559768   16725 main.go:141] libmachine: (addons-996992)     <rng model='virtio'>
	I0914 16:44:28.559789   16725 main.go:141] libmachine: (addons-996992)       <backend model='random'>/dev/random</backend>
	I0914 16:44:28.559798   16725 main.go:141] libmachine: (addons-996992)     </rng>
	I0914 16:44:28.559805   16725 main.go:141] libmachine: (addons-996992)     
	I0914 16:44:28.559810   16725 main.go:141] libmachine: (addons-996992)     
	I0914 16:44:28.559815   16725 main.go:141] libmachine: (addons-996992)   </devices>
	I0914 16:44:28.559820   16725 main.go:141] libmachine: (addons-996992) </domain>
	I0914 16:44:28.559826   16725 main.go:141] libmachine: (addons-996992) 
	I0914 16:44:28.565929   16725 main.go:141] libmachine: (addons-996992) DBG | domain addons-996992 has defined MAC address 52:54:00:0d:74:be in network default
	I0914 16:44:28.566532   16725 main.go:141] libmachine: (addons-996992) Ensuring networks are active...
	I0914 16:44:28.566561   16725 main.go:141] libmachine: (addons-996992) DBG | domain addons-996992 has defined MAC address 52:54:00:dd:8c:90 in network mk-addons-996992
	I0914 16:44:28.567152   16725 main.go:141] libmachine: (addons-996992) Ensuring network default is active
	I0914 16:44:28.567386   16725 main.go:141] libmachine: (addons-996992) Ensuring network mk-addons-996992 is active
	I0914 16:44:28.567808   16725 main.go:141] libmachine: (addons-996992) Getting domain xml...
	I0914 16:44:28.568374   16725 main.go:141] libmachine: (addons-996992) Creating domain...
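Here the driver has defined the libvirt domain from the <domain type='kvm'> XML above and is booting it. A comparable, self-contained sketch with the same assumed Go bindings; reading the XML from a local file is an illustration, not how the driver does it:

	package main

	import (
		"log"
		"os"

		libvirt "libvirt.org/go/libvirt" // assumed binding
	)

	func main() {
		conn, err := libvirt.NewConnect("qemu:///system")
		if err != nil {
			log.Fatal(err)
		}
		defer conn.Close()

		// domain.xml is assumed to hold the <domain type='kvm'> document printed in the log.
		xml, err := os.ReadFile("domain.xml")
		if err != nil {
			log.Fatal(err)
		}

		dom, err := conn.DomainDefineXML(string(xml)) // persist the definition
		if err != nil {
			log.Fatal(err)
		}
		defer dom.Free()

		if err := dom.Create(); err != nil { // Create() boots a defined domain
			log.Fatal(err)
		}
	}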
	I0914 16:44:30.007186   16725 main.go:141] libmachine: (addons-996992) Waiting to get IP...
	I0914 16:44:30.007842   16725 main.go:141] libmachine: (addons-996992) DBG | domain addons-996992 has defined MAC address 52:54:00:dd:8c:90 in network mk-addons-996992
	I0914 16:44:30.008313   16725 main.go:141] libmachine: (addons-996992) DBG | unable to find current IP address of domain addons-996992 in network mk-addons-996992
	I0914 16:44:30.008349   16725 main.go:141] libmachine: (addons-996992) DBG | I0914 16:44:30.008249   16746 retry.go:31] will retry after 193.278123ms: waiting for machine to come up
	I0914 16:44:30.203743   16725 main.go:141] libmachine: (addons-996992) DBG | domain addons-996992 has defined MAC address 52:54:00:dd:8c:90 in network mk-addons-996992
	I0914 16:44:30.204360   16725 main.go:141] libmachine: (addons-996992) DBG | unable to find current IP address of domain addons-996992 in network mk-addons-996992
	I0914 16:44:30.204412   16725 main.go:141] libmachine: (addons-996992) DBG | I0914 16:44:30.204193   16746 retry.go:31] will retry after 245.945466ms: waiting for machine to come up
	I0914 16:44:30.451736   16725 main.go:141] libmachine: (addons-996992) DBG | domain addons-996992 has defined MAC address 52:54:00:dd:8c:90 in network mk-addons-996992
	I0914 16:44:30.452098   16725 main.go:141] libmachine: (addons-996992) DBG | unable to find current IP address of domain addons-996992 in network mk-addons-996992
	I0914 16:44:30.452129   16725 main.go:141] libmachine: (addons-996992) DBG | I0914 16:44:30.452044   16746 retry.go:31] will retry after 422.043703ms: waiting for machine to come up
	I0914 16:44:30.875457   16725 main.go:141] libmachine: (addons-996992) DBG | domain addons-996992 has defined MAC address 52:54:00:dd:8c:90 in network mk-addons-996992
	I0914 16:44:30.875934   16725 main.go:141] libmachine: (addons-996992) DBG | unable to find current IP address of domain addons-996992 in network mk-addons-996992
	I0914 16:44:30.875960   16725 main.go:141] libmachine: (addons-996992) DBG | I0914 16:44:30.875878   16746 retry.go:31] will retry after 473.34114ms: waiting for machine to come up
	I0914 16:44:31.350215   16725 main.go:141] libmachine: (addons-996992) DBG | domain addons-996992 has defined MAC address 52:54:00:dd:8c:90 in network mk-addons-996992
	I0914 16:44:31.350612   16725 main.go:141] libmachine: (addons-996992) DBG | unable to find current IP address of domain addons-996992 in network mk-addons-996992
	I0914 16:44:31.350631   16725 main.go:141] libmachine: (addons-996992) DBG | I0914 16:44:31.350576   16746 retry.go:31] will retry after 628.442164ms: waiting for machine to come up
	I0914 16:44:31.980705   16725 main.go:141] libmachine: (addons-996992) DBG | domain addons-996992 has defined MAC address 52:54:00:dd:8c:90 in network mk-addons-996992
	I0914 16:44:31.981327   16725 main.go:141] libmachine: (addons-996992) DBG | unable to find current IP address of domain addons-996992 in network mk-addons-996992
	I0914 16:44:31.981357   16725 main.go:141] libmachine: (addons-996992) DBG | I0914 16:44:31.981288   16746 retry.go:31] will retry after 929.748342ms: waiting for machine to come up
	I0914 16:44:32.912801   16725 main.go:141] libmachine: (addons-996992) DBG | domain addons-996992 has defined MAC address 52:54:00:dd:8c:90 in network mk-addons-996992
	I0914 16:44:32.913219   16725 main.go:141] libmachine: (addons-996992) DBG | unable to find current IP address of domain addons-996992 in network mk-addons-996992
	I0914 16:44:32.913246   16725 main.go:141] libmachine: (addons-996992) DBG | I0914 16:44:32.913169   16746 retry.go:31] will retry after 956.954722ms: waiting for machine to come up
	I0914 16:44:33.871239   16725 main.go:141] libmachine: (addons-996992) DBG | domain addons-996992 has defined MAC address 52:54:00:dd:8c:90 in network mk-addons-996992
	I0914 16:44:33.871624   16725 main.go:141] libmachine: (addons-996992) DBG | unable to find current IP address of domain addons-996992 in network mk-addons-996992
	I0914 16:44:33.871655   16725 main.go:141] libmachine: (addons-996992) DBG | I0914 16:44:33.871611   16746 retry.go:31] will retry after 1.433739833s: waiting for machine to come up
	I0914 16:44:35.307302   16725 main.go:141] libmachine: (addons-996992) DBG | domain addons-996992 has defined MAC address 52:54:00:dd:8c:90 in network mk-addons-996992
	I0914 16:44:35.307687   16725 main.go:141] libmachine: (addons-996992) DBG | unable to find current IP address of domain addons-996992 in network mk-addons-996992
	I0914 16:44:35.307721   16725 main.go:141] libmachine: (addons-996992) DBG | I0914 16:44:35.307633   16746 retry.go:31] will retry after 1.515973944s: waiting for machine to come up
	I0914 16:44:36.826018   16725 main.go:141] libmachine: (addons-996992) DBG | domain addons-996992 has defined MAC address 52:54:00:dd:8c:90 in network mk-addons-996992
	I0914 16:44:36.826451   16725 main.go:141] libmachine: (addons-996992) DBG | unable to find current IP address of domain addons-996992 in network mk-addons-996992
	I0914 16:44:36.826473   16725 main.go:141] libmachine: (addons-996992) DBG | I0914 16:44:36.826405   16746 retry.go:31] will retry after 1.946747568s: waiting for machine to come up
	I0914 16:44:38.775169   16725 main.go:141] libmachine: (addons-996992) DBG | domain addons-996992 has defined MAC address 52:54:00:dd:8c:90 in network mk-addons-996992
	I0914 16:44:38.775648   16725 main.go:141] libmachine: (addons-996992) DBG | unable to find current IP address of domain addons-996992 in network mk-addons-996992
	I0914 16:44:38.775676   16725 main.go:141] libmachine: (addons-996992) DBG | I0914 16:44:38.775602   16746 retry.go:31] will retry after 2.771653383s: waiting for machine to come up
	I0914 16:44:41.550519   16725 main.go:141] libmachine: (addons-996992) DBG | domain addons-996992 has defined MAC address 52:54:00:dd:8c:90 in network mk-addons-996992
	I0914 16:44:41.550927   16725 main.go:141] libmachine: (addons-996992) DBG | unable to find current IP address of domain addons-996992 in network mk-addons-996992
	I0914 16:44:41.550947   16725 main.go:141] libmachine: (addons-996992) DBG | I0914 16:44:41.550892   16746 retry.go:31] will retry after 2.637789254s: waiting for machine to come up
	I0914 16:44:44.190450   16725 main.go:141] libmachine: (addons-996992) DBG | domain addons-996992 has defined MAC address 52:54:00:dd:8c:90 in network mk-addons-996992
	I0914 16:44:44.190859   16725 main.go:141] libmachine: (addons-996992) DBG | unable to find current IP address of domain addons-996992 in network mk-addons-996992
	I0914 16:44:44.190881   16725 main.go:141] libmachine: (addons-996992) DBG | I0914 16:44:44.190814   16746 retry.go:31] will retry after 3.734364168s: waiting for machine to come up
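The repeated "will retry after ...: waiting for machine to come up" lines above come from a polling loop: the driver looks up the machine's address in the network's DHCP leases and sleeps a growing, jittered interval between attempts. A minimal sketch of that pattern follows; lookupIP is a placeholder and the backoff constants are assumptions, not the driver's actual values:

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// lookupIP stands in for querying the DHCP leases of network mk-addons-996992.
	func lookupIP() (string, error) {
		return "", errors.New("unable to find current IP address")
	}

	func main() {
		backoff := 200 * time.Millisecond
		deadline := time.Now().Add(4 * time.Minute)

		for time.Now().Before(deadline) {
			if ip, err := lookupIP(); err == nil {
				fmt.Println("Found IP for machine:", ip)
				return
			}
			// Jittered, growing wait, mirroring the increasing durations in the log.
			wait := backoff + time.Duration(rand.Int63n(int64(backoff)))
			fmt.Printf("will retry after %v: waiting for machine to come up\n", wait)
			time.Sleep(wait)
			backoff = backoff * 3 / 2
		}
		fmt.Println("timed out waiting for an IP address")
	}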
	I0914 16:44:47.926668   16725 main.go:141] libmachine: (addons-996992) DBG | domain addons-996992 has defined MAC address 52:54:00:dd:8c:90 in network mk-addons-996992
	I0914 16:44:47.927158   16725 main.go:141] libmachine: (addons-996992) Found IP for machine: 192.168.39.189
	I0914 16:44:47.927179   16725 main.go:141] libmachine: (addons-996992) Reserving static IP address...
	I0914 16:44:47.927192   16725 main.go:141] libmachine: (addons-996992) DBG | domain addons-996992 has current primary IP address 192.168.39.189 and MAC address 52:54:00:dd:8c:90 in network mk-addons-996992
	I0914 16:44:47.927576   16725 main.go:141] libmachine: (addons-996992) DBG | unable to find host DHCP lease matching {name: "addons-996992", mac: "52:54:00:dd:8c:90", ip: "192.168.39.189"} in network mk-addons-996992
	I0914 16:44:48.085073   16725 main.go:141] libmachine: (addons-996992) DBG | Getting to WaitForSSH function...
	I0914 16:44:48.085105   16725 main.go:141] libmachine: (addons-996992) Reserved static IP address: 192.168.39.189
	I0914 16:44:48.085119   16725 main.go:141] libmachine: (addons-996992) Waiting for SSH to be available...
	I0914 16:44:48.087828   16725 main.go:141] libmachine: (addons-996992) DBG | domain addons-996992 has defined MAC address 52:54:00:dd:8c:90 in network mk-addons-996992
	I0914 16:44:48.088171   16725 main.go:141] libmachine: (addons-996992) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:8c:90", ip: ""} in network mk-addons-996992: {Iface:virbr1 ExpiryTime:2024-09-14 17:44:42 +0000 UTC Type:0 Mac:52:54:00:dd:8c:90 Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:minikube Clientid:01:52:54:00:dd:8c:90}
	I0914 16:44:48.088203   16725 main.go:141] libmachine: (addons-996992) DBG | domain addons-996992 has defined IP address 192.168.39.189 and MAC address 52:54:00:dd:8c:90 in network mk-addons-996992
	I0914 16:44:48.088326   16725 main.go:141] libmachine: (addons-996992) DBG | Using SSH client type: external
	I0914 16:44:48.088342   16725 main.go:141] libmachine: (addons-996992) DBG | Using SSH private key: /home/jenkins/minikube-integration/19643-8806/.minikube/machines/addons-996992/id_rsa (-rw-------)
	I0914 16:44:48.088390   16725 main.go:141] libmachine: (addons-996992) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.189 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19643-8806/.minikube/machines/addons-996992/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0914 16:44:48.088422   16725 main.go:141] libmachine: (addons-996992) DBG | About to run SSH command:
	I0914 16:44:48.088437   16725 main.go:141] libmachine: (addons-996992) DBG | exit 0
	I0914 16:44:48.222175   16725 main.go:141] libmachine: (addons-996992) DBG | SSH cmd err, output: <nil>: 
	I0914 16:44:48.222479   16725 main.go:141] libmachine: (addons-996992) KVM machine creation complete!
	I0914 16:44:48.222803   16725 main.go:141] libmachine: (addons-996992) Calling .GetConfigRaw
	I0914 16:44:48.250845   16725 main.go:141] libmachine: (addons-996992) Calling .DriverName
	I0914 16:44:48.251150   16725 main.go:141] libmachine: (addons-996992) Calling .DriverName
	I0914 16:44:48.251340   16725 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0914 16:44:48.251369   16725 main.go:141] libmachine: (addons-996992) Calling .GetState
	I0914 16:44:48.253045   16725 main.go:141] libmachine: Detecting operating system of created instance...
	I0914 16:44:48.253064   16725 main.go:141] libmachine: Waiting for SSH to be available...
	I0914 16:44:48.253072   16725 main.go:141] libmachine: Getting to WaitForSSH function...
	I0914 16:44:48.253081   16725 main.go:141] libmachine: (addons-996992) Calling .GetSSHHostname
	I0914 16:44:48.255661   16725 main.go:141] libmachine: (addons-996992) DBG | domain addons-996992 has defined MAC address 52:54:00:dd:8c:90 in network mk-addons-996992
	I0914 16:44:48.256049   16725 main.go:141] libmachine: (addons-996992) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:8c:90", ip: ""} in network mk-addons-996992: {Iface:virbr1 ExpiryTime:2024-09-14 17:44:42 +0000 UTC Type:0 Mac:52:54:00:dd:8c:90 Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:addons-996992 Clientid:01:52:54:00:dd:8c:90}
	I0914 16:44:48.256068   16725 main.go:141] libmachine: (addons-996992) DBG | domain addons-996992 has defined IP address 192.168.39.189 and MAC address 52:54:00:dd:8c:90 in network mk-addons-996992
	I0914 16:44:48.256226   16725 main.go:141] libmachine: (addons-996992) Calling .GetSSHPort
	I0914 16:44:48.256426   16725 main.go:141] libmachine: (addons-996992) Calling .GetSSHKeyPath
	I0914 16:44:48.256654   16725 main.go:141] libmachine: (addons-996992) Calling .GetSSHKeyPath
	I0914 16:44:48.256795   16725 main.go:141] libmachine: (addons-996992) Calling .GetSSHUsername
	I0914 16:44:48.256982   16725 main.go:141] libmachine: Using SSH client type: native
	I0914 16:44:48.257155   16725 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.189 22 <nil> <nil>}
	I0914 16:44:48.257164   16725 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0914 16:44:48.365411   16725 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0914 16:44:48.365433   16725 main.go:141] libmachine: Detecting the provisioner...
	I0914 16:44:48.365440   16725 main.go:141] libmachine: (addons-996992) Calling .GetSSHHostname
	I0914 16:44:48.368483   16725 main.go:141] libmachine: (addons-996992) DBG | domain addons-996992 has defined MAC address 52:54:00:dd:8c:90 in network mk-addons-996992
	I0914 16:44:48.368906   16725 main.go:141] libmachine: (addons-996992) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:8c:90", ip: ""} in network mk-addons-996992: {Iface:virbr1 ExpiryTime:2024-09-14 17:44:42 +0000 UTC Type:0 Mac:52:54:00:dd:8c:90 Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:addons-996992 Clientid:01:52:54:00:dd:8c:90}
	I0914 16:44:48.368927   16725 main.go:141] libmachine: (addons-996992) DBG | domain addons-996992 has defined IP address 192.168.39.189 and MAC address 52:54:00:dd:8c:90 in network mk-addons-996992
	I0914 16:44:48.369091   16725 main.go:141] libmachine: (addons-996992) Calling .GetSSHPort
	I0914 16:44:48.369277   16725 main.go:141] libmachine: (addons-996992) Calling .GetSSHKeyPath
	I0914 16:44:48.369448   16725 main.go:141] libmachine: (addons-996992) Calling .GetSSHKeyPath
	I0914 16:44:48.369560   16725 main.go:141] libmachine: (addons-996992) Calling .GetSSHUsername
	I0914 16:44:48.369706   16725 main.go:141] libmachine: Using SSH client type: native
	I0914 16:44:48.369917   16725 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.189 22 <nil> <nil>}
	I0914 16:44:48.369928   16725 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0914 16:44:48.478560   16725 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0914 16:44:48.478635   16725 main.go:141] libmachine: found compatible host: buildroot
	I0914 16:44:48.478650   16725 main.go:141] libmachine: Provisioning with buildroot...
	I0914 16:44:48.478673   16725 main.go:141] libmachine: (addons-996992) Calling .GetMachineName
	I0914 16:44:48.478938   16725 buildroot.go:166] provisioning hostname "addons-996992"
	I0914 16:44:48.478968   16725 main.go:141] libmachine: (addons-996992) Calling .GetMachineName
	I0914 16:44:48.479154   16725 main.go:141] libmachine: (addons-996992) Calling .GetSSHHostname
	I0914 16:44:48.481754   16725 main.go:141] libmachine: (addons-996992) DBG | domain addons-996992 has defined MAC address 52:54:00:dd:8c:90 in network mk-addons-996992
	I0914 16:44:48.482027   16725 main.go:141] libmachine: (addons-996992) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:8c:90", ip: ""} in network mk-addons-996992: {Iface:virbr1 ExpiryTime:2024-09-14 17:44:42 +0000 UTC Type:0 Mac:52:54:00:dd:8c:90 Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:addons-996992 Clientid:01:52:54:00:dd:8c:90}
	I0914 16:44:48.482055   16725 main.go:141] libmachine: (addons-996992) DBG | domain addons-996992 has defined IP address 192.168.39.189 and MAC address 52:54:00:dd:8c:90 in network mk-addons-996992
	I0914 16:44:48.482238   16725 main.go:141] libmachine: (addons-996992) Calling .GetSSHPort
	I0914 16:44:48.482421   16725 main.go:141] libmachine: (addons-996992) Calling .GetSSHKeyPath
	I0914 16:44:48.482594   16725 main.go:141] libmachine: (addons-996992) Calling .GetSSHKeyPath
	I0914 16:44:48.482715   16725 main.go:141] libmachine: (addons-996992) Calling .GetSSHUsername
	I0914 16:44:48.482893   16725 main.go:141] libmachine: Using SSH client type: native
	I0914 16:44:48.483075   16725 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.189 22 <nil> <nil>}
	I0914 16:44:48.483090   16725 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-996992 && echo "addons-996992" | sudo tee /etc/hostname
	I0914 16:44:48.603822   16725 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-996992
	
	I0914 16:44:48.603851   16725 main.go:141] libmachine: (addons-996992) Calling .GetSSHHostname
	I0914 16:44:48.606556   16725 main.go:141] libmachine: (addons-996992) DBG | domain addons-996992 has defined MAC address 52:54:00:dd:8c:90 in network mk-addons-996992
	I0914 16:44:48.606910   16725 main.go:141] libmachine: (addons-996992) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:8c:90", ip: ""} in network mk-addons-996992: {Iface:virbr1 ExpiryTime:2024-09-14 17:44:42 +0000 UTC Type:0 Mac:52:54:00:dd:8c:90 Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:addons-996992 Clientid:01:52:54:00:dd:8c:90}
	I0914 16:44:48.606934   16725 main.go:141] libmachine: (addons-996992) DBG | domain addons-996992 has defined IP address 192.168.39.189 and MAC address 52:54:00:dd:8c:90 in network mk-addons-996992
	I0914 16:44:48.607103   16725 main.go:141] libmachine: (addons-996992) Calling .GetSSHPort
	I0914 16:44:48.607290   16725 main.go:141] libmachine: (addons-996992) Calling .GetSSHKeyPath
	I0914 16:44:48.607488   16725 main.go:141] libmachine: (addons-996992) Calling .GetSSHKeyPath
	I0914 16:44:48.607658   16725 main.go:141] libmachine: (addons-996992) Calling .GetSSHUsername
	I0914 16:44:48.607848   16725 main.go:141] libmachine: Using SSH client type: native
	I0914 16:44:48.608066   16725 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.189 22 <nil> <nil>}
	I0914 16:44:48.608093   16725 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-996992' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-996992/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-996992' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0914 16:44:48.722348   16725 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0914 16:44:48.722378   16725 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19643-8806/.minikube CaCertPath:/home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19643-8806/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19643-8806/.minikube}
	I0914 16:44:48.722396   16725 buildroot.go:174] setting up certificates
	I0914 16:44:48.722422   16725 provision.go:84] configureAuth start
	I0914 16:44:48.722433   16725 main.go:141] libmachine: (addons-996992) Calling .GetMachineName
	I0914 16:44:48.722689   16725 main.go:141] libmachine: (addons-996992) Calling .GetIP
	I0914 16:44:48.725429   16725 main.go:141] libmachine: (addons-996992) DBG | domain addons-996992 has defined MAC address 52:54:00:dd:8c:90 in network mk-addons-996992
	I0914 16:44:48.725795   16725 main.go:141] libmachine: (addons-996992) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:8c:90", ip: ""} in network mk-addons-996992: {Iface:virbr1 ExpiryTime:2024-09-14 17:44:42 +0000 UTC Type:0 Mac:52:54:00:dd:8c:90 Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:addons-996992 Clientid:01:52:54:00:dd:8c:90}
	I0914 16:44:48.725827   16725 main.go:141] libmachine: (addons-996992) DBG | domain addons-996992 has defined IP address 192.168.39.189 and MAC address 52:54:00:dd:8c:90 in network mk-addons-996992
	I0914 16:44:48.725999   16725 main.go:141] libmachine: (addons-996992) Calling .GetSSHHostname
	I0914 16:44:48.728098   16725 main.go:141] libmachine: (addons-996992) DBG | domain addons-996992 has defined MAC address 52:54:00:dd:8c:90 in network mk-addons-996992
	I0914 16:44:48.728440   16725 main.go:141] libmachine: (addons-996992) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:8c:90", ip: ""} in network mk-addons-996992: {Iface:virbr1 ExpiryTime:2024-09-14 17:44:42 +0000 UTC Type:0 Mac:52:54:00:dd:8c:90 Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:addons-996992 Clientid:01:52:54:00:dd:8c:90}
	I0914 16:44:48.728459   16725 main.go:141] libmachine: (addons-996992) DBG | domain addons-996992 has defined IP address 192.168.39.189 and MAC address 52:54:00:dd:8c:90 in network mk-addons-996992
	I0914 16:44:48.728608   16725 provision.go:143] copyHostCerts
	I0914 16:44:48.728683   16725 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19643-8806/.minikube/ca.pem (1082 bytes)
	I0914 16:44:48.728797   16725 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19643-8806/.minikube/cert.pem (1123 bytes)
	I0914 16:44:48.728852   16725 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19643-8806/.minikube/key.pem (1675 bytes)
	I0914 16:44:48.728919   16725 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19643-8806/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca-key.pem org=jenkins.addons-996992 san=[127.0.0.1 192.168.39.189 addons-996992 localhost minikube]
	I0914 16:44:48.792378   16725 provision.go:177] copyRemoteCerts
	I0914 16:44:48.792464   16725 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0914 16:44:48.792493   16725 main.go:141] libmachine: (addons-996992) Calling .GetSSHHostname
	I0914 16:44:48.795239   16725 main.go:141] libmachine: (addons-996992) DBG | domain addons-996992 has defined MAC address 52:54:00:dd:8c:90 in network mk-addons-996992
	I0914 16:44:48.795658   16725 main.go:141] libmachine: (addons-996992) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:8c:90", ip: ""} in network mk-addons-996992: {Iface:virbr1 ExpiryTime:2024-09-14 17:44:42 +0000 UTC Type:0 Mac:52:54:00:dd:8c:90 Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:addons-996992 Clientid:01:52:54:00:dd:8c:90}
	I0914 16:44:48.795697   16725 main.go:141] libmachine: (addons-996992) DBG | domain addons-996992 has defined IP address 192.168.39.189 and MAC address 52:54:00:dd:8c:90 in network mk-addons-996992
	I0914 16:44:48.795972   16725 main.go:141] libmachine: (addons-996992) Calling .GetSSHPort
	I0914 16:44:48.796149   16725 main.go:141] libmachine: (addons-996992) Calling .GetSSHKeyPath
	I0914 16:44:48.796365   16725 main.go:141] libmachine: (addons-996992) Calling .GetSSHUsername
	I0914 16:44:48.796523   16725 sshutil.go:53] new ssh client: &{IP:192.168.39.189 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/addons-996992/id_rsa Username:docker}
	I0914 16:44:48.880497   16725 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0914 16:44:48.905386   16725 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0914 16:44:48.927284   16725 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0914 16:44:48.949470   16725 provision.go:87] duration metric: took 227.034076ms to configureAuth
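configureAuth above generates a server certificate whose subject alternative names cover the machine's IP and host names (127.0.0.1, 192.168.39.189, addons-996992, localhost, minikube) and copies it to /etc/docker on the guest. A hedged sketch of issuing a SAN certificate with Go's standard crypto/x509; it self-signs for brevity, whereas minikube signs with its own CA (ca-key.pem), and the key type is an assumption:

	package main

	import (
		"crypto/ecdsa"
		"crypto/elliptic"
		"crypto/rand"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		// Key for the server certificate (illustrative; the real cert is signed by the CA key).
		key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
		if err != nil {
			panic(err)
		}

		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{"jenkins.addons-996992"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(26280 * time.Hour), // matches CertExpiration in the config above
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			// SANs listed in the provision.go log line.
			IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.189")},
			DNSNames:    []string{"addons-996992", "localhost", "minikube"},
		}

		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		if err != nil {
			panic(err)
		}
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}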
	I0914 16:44:48.949496   16725 buildroot.go:189] setting minikube options for container-runtime
	I0914 16:44:48.949667   16725 config.go:182] Loaded profile config "addons-996992": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0914 16:44:48.949749   16725 main.go:141] libmachine: (addons-996992) Calling .GetSSHHostname
	I0914 16:44:48.952388   16725 main.go:141] libmachine: (addons-996992) DBG | domain addons-996992 has defined MAC address 52:54:00:dd:8c:90 in network mk-addons-996992
	I0914 16:44:48.952770   16725 main.go:141] libmachine: (addons-996992) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:8c:90", ip: ""} in network mk-addons-996992: {Iface:virbr1 ExpiryTime:2024-09-14 17:44:42 +0000 UTC Type:0 Mac:52:54:00:dd:8c:90 Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:addons-996992 Clientid:01:52:54:00:dd:8c:90}
	I0914 16:44:48.952792   16725 main.go:141] libmachine: (addons-996992) DBG | domain addons-996992 has defined IP address 192.168.39.189 and MAC address 52:54:00:dd:8c:90 in network mk-addons-996992
	I0914 16:44:48.953000   16725 main.go:141] libmachine: (addons-996992) Calling .GetSSHPort
	I0914 16:44:48.953189   16725 main.go:141] libmachine: (addons-996992) Calling .GetSSHKeyPath
	I0914 16:44:48.953319   16725 main.go:141] libmachine: (addons-996992) Calling .GetSSHKeyPath
	I0914 16:44:48.953445   16725 main.go:141] libmachine: (addons-996992) Calling .GetSSHUsername
	I0914 16:44:48.953626   16725 main.go:141] libmachine: Using SSH client type: native
	I0914 16:44:48.953785   16725 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.189 22 <nil> <nil>}
	I0914 16:44:48.953798   16725 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0914 16:44:49.180693   16725 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0914 16:44:49.180719   16725 main.go:141] libmachine: Checking connection to Docker...
	I0914 16:44:49.180727   16725 main.go:141] libmachine: (addons-996992) Calling .GetURL
	I0914 16:44:49.182000   16725 main.go:141] libmachine: (addons-996992) DBG | Using libvirt version 6000000
	I0914 16:44:49.184271   16725 main.go:141] libmachine: (addons-996992) DBG | domain addons-996992 has defined MAC address 52:54:00:dd:8c:90 in network mk-addons-996992
	I0914 16:44:49.184718   16725 main.go:141] libmachine: (addons-996992) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:8c:90", ip: ""} in network mk-addons-996992: {Iface:virbr1 ExpiryTime:2024-09-14 17:44:42 +0000 UTC Type:0 Mac:52:54:00:dd:8c:90 Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:addons-996992 Clientid:01:52:54:00:dd:8c:90}
	I0914 16:44:49.184747   16725 main.go:141] libmachine: (addons-996992) DBG | domain addons-996992 has defined IP address 192.168.39.189 and MAC address 52:54:00:dd:8c:90 in network mk-addons-996992
	I0914 16:44:49.184859   16725 main.go:141] libmachine: Docker is up and running!
	I0914 16:44:49.184872   16725 main.go:141] libmachine: Reticulating splines...
	I0914 16:44:49.184879   16725 client.go:171] duration metric: took 21.437913259s to LocalClient.Create
	I0914 16:44:49.184951   16725 start.go:167] duration metric: took 21.438013433s to libmachine.API.Create "addons-996992"
	I0914 16:44:49.184967   16725 start.go:293] postStartSetup for "addons-996992" (driver="kvm2")
	I0914 16:44:49.184983   16725 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0914 16:44:49.185012   16725 main.go:141] libmachine: (addons-996992) Calling .DriverName
	I0914 16:44:49.185343   16725 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0914 16:44:49.185366   16725 main.go:141] libmachine: (addons-996992) Calling .GetSSHHostname
	I0914 16:44:49.187583   16725 main.go:141] libmachine: (addons-996992) DBG | domain addons-996992 has defined MAC address 52:54:00:dd:8c:90 in network mk-addons-996992
	I0914 16:44:49.187883   16725 main.go:141] libmachine: (addons-996992) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:8c:90", ip: ""} in network mk-addons-996992: {Iface:virbr1 ExpiryTime:2024-09-14 17:44:42 +0000 UTC Type:0 Mac:52:54:00:dd:8c:90 Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:addons-996992 Clientid:01:52:54:00:dd:8c:90}
	I0914 16:44:49.187924   16725 main.go:141] libmachine: (addons-996992) DBG | domain addons-996992 has defined IP address 192.168.39.189 and MAC address 52:54:00:dd:8c:90 in network mk-addons-996992
	I0914 16:44:49.188038   16725 main.go:141] libmachine: (addons-996992) Calling .GetSSHPort
	I0914 16:44:49.188258   16725 main.go:141] libmachine: (addons-996992) Calling .GetSSHKeyPath
	I0914 16:44:49.188488   16725 main.go:141] libmachine: (addons-996992) Calling .GetSSHUsername
	I0914 16:44:49.188629   16725 sshutil.go:53] new ssh client: &{IP:192.168.39.189 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/addons-996992/id_rsa Username:docker}
	I0914 16:44:49.274153   16725 ssh_runner.go:195] Run: cat /etc/os-release
	I0914 16:44:49.278523   16725 info.go:137] Remote host: Buildroot 2023.02.9
	I0914 16:44:49.278558   16725 filesync.go:126] Scanning /home/jenkins/minikube-integration/19643-8806/.minikube/addons for local assets ...
	I0914 16:44:49.278639   16725 filesync.go:126] Scanning /home/jenkins/minikube-integration/19643-8806/.minikube/files for local assets ...
	I0914 16:44:49.278670   16725 start.go:296] duration metric: took 93.694384ms for postStartSetup
	I0914 16:44:49.278701   16725 main.go:141] libmachine: (addons-996992) Calling .GetConfigRaw
	I0914 16:44:49.279309   16725 main.go:141] libmachine: (addons-996992) Calling .GetIP
	I0914 16:44:49.281961   16725 main.go:141] libmachine: (addons-996992) DBG | domain addons-996992 has defined MAC address 52:54:00:dd:8c:90 in network mk-addons-996992
	I0914 16:44:49.282293   16725 main.go:141] libmachine: (addons-996992) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:8c:90", ip: ""} in network mk-addons-996992: {Iface:virbr1 ExpiryTime:2024-09-14 17:44:42 +0000 UTC Type:0 Mac:52:54:00:dd:8c:90 Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:addons-996992 Clientid:01:52:54:00:dd:8c:90}
	I0914 16:44:49.282334   16725 main.go:141] libmachine: (addons-996992) DBG | domain addons-996992 has defined IP address 192.168.39.189 and MAC address 52:54:00:dd:8c:90 in network mk-addons-996992
	I0914 16:44:49.282507   16725 profile.go:143] Saving config to /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/addons-996992/config.json ...
	I0914 16:44:49.282702   16725 start.go:128] duration metric: took 21.554522556s to createHost
	I0914 16:44:49.282723   16725 main.go:141] libmachine: (addons-996992) Calling .GetSSHHostname
	I0914 16:44:49.284816   16725 main.go:141] libmachine: (addons-996992) DBG | domain addons-996992 has defined MAC address 52:54:00:dd:8c:90 in network mk-addons-996992
	I0914 16:44:49.285125   16725 main.go:141] libmachine: (addons-996992) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:8c:90", ip: ""} in network mk-addons-996992: {Iface:virbr1 ExpiryTime:2024-09-14 17:44:42 +0000 UTC Type:0 Mac:52:54:00:dd:8c:90 Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:addons-996992 Clientid:01:52:54:00:dd:8c:90}
	I0914 16:44:49.285161   16725 main.go:141] libmachine: (addons-996992) DBG | domain addons-996992 has defined IP address 192.168.39.189 and MAC address 52:54:00:dd:8c:90 in network mk-addons-996992
	I0914 16:44:49.285299   16725 main.go:141] libmachine: (addons-996992) Calling .GetSSHPort
	I0914 16:44:49.285489   16725 main.go:141] libmachine: (addons-996992) Calling .GetSSHKeyPath
	I0914 16:44:49.285616   16725 main.go:141] libmachine: (addons-996992) Calling .GetSSHKeyPath
	I0914 16:44:49.285768   16725 main.go:141] libmachine: (addons-996992) Calling .GetSSHUsername
	I0914 16:44:49.285889   16725 main.go:141] libmachine: Using SSH client type: native
	I0914 16:44:49.286051   16725 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.189 22 <nil> <nil>}
	I0914 16:44:49.286060   16725 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0914 16:44:49.394658   16725 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726332289.368573436
	
	I0914 16:44:49.394680   16725 fix.go:216] guest clock: 1726332289.368573436
	I0914 16:44:49.394687   16725 fix.go:229] Guest: 2024-09-14 16:44:49.368573436 +0000 UTC Remote: 2024-09-14 16:44:49.28271319 +0000 UTC m=+21.657617847 (delta=85.860246ms)
	I0914 16:44:49.394705   16725 fix.go:200] guest clock delta is within tolerance: 85.860246ms
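The fix.go lines above read the guest clock over SSH (date +%s.%N), compare it with the host clock, and accept the machine because the difference is small: 1726332289.368573436 - 1726332289.282713190 = 0.085860246 s, the 85.860246ms delta reported. A tiny sketch of the same check; the tolerance value is an assumption for illustration:

	package main

	import (
		"fmt"
		"time"
	)

	func main() {
		guest := time.Unix(1726332289, 368573436)  // parsed from `date +%s.%N` on the guest
		remote := time.Unix(1726332289, 282713190) // host-side timestamp from the log

		delta := guest.Sub(remote)
		if delta < 0 {
			delta = -delta
		}

		const tolerance = 2 * time.Second // assumed tolerance, not taken from fix.go
		fmt.Printf("guest clock delta %v within tolerance: %v\n", delta, delta <= tolerance)
	}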
	I0914 16:44:49.394710   16725 start.go:83] releasing machines lock for "addons-996992", held for 21.66660282s
	I0914 16:44:49.394730   16725 main.go:141] libmachine: (addons-996992) Calling .DriverName
	I0914 16:44:49.394985   16725 main.go:141] libmachine: (addons-996992) Calling .GetIP
	I0914 16:44:49.397445   16725 main.go:141] libmachine: (addons-996992) DBG | domain addons-996992 has defined MAC address 52:54:00:dd:8c:90 in network mk-addons-996992
	I0914 16:44:49.397817   16725 main.go:141] libmachine: (addons-996992) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:8c:90", ip: ""} in network mk-addons-996992: {Iface:virbr1 ExpiryTime:2024-09-14 17:44:42 +0000 UTC Type:0 Mac:52:54:00:dd:8c:90 Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:addons-996992 Clientid:01:52:54:00:dd:8c:90}
	I0914 16:44:49.397843   16725 main.go:141] libmachine: (addons-996992) DBG | domain addons-996992 has defined IP address 192.168.39.189 and MAC address 52:54:00:dd:8c:90 in network mk-addons-996992
	I0914 16:44:49.398094   16725 main.go:141] libmachine: (addons-996992) Calling .DriverName
	I0914 16:44:49.398597   16725 main.go:141] libmachine: (addons-996992) Calling .DriverName
	I0914 16:44:49.398755   16725 main.go:141] libmachine: (addons-996992) Calling .DriverName
	I0914 16:44:49.398864   16725 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0914 16:44:49.398917   16725 main.go:141] libmachine: (addons-996992) Calling .GetSSHHostname
	I0914 16:44:49.398947   16725 ssh_runner.go:195] Run: cat /version.json
	I0914 16:44:49.398966   16725 main.go:141] libmachine: (addons-996992) Calling .GetSSHHostname
	I0914 16:44:49.401354   16725 main.go:141] libmachine: (addons-996992) DBG | domain addons-996992 has defined MAC address 52:54:00:dd:8c:90 in network mk-addons-996992
	I0914 16:44:49.401636   16725 main.go:141] libmachine: (addons-996992) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:8c:90", ip: ""} in network mk-addons-996992: {Iface:virbr1 ExpiryTime:2024-09-14 17:44:42 +0000 UTC Type:0 Mac:52:54:00:dd:8c:90 Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:addons-996992 Clientid:01:52:54:00:dd:8c:90}
	I0914 16:44:49.401658   16725 main.go:141] libmachine: (addons-996992) DBG | domain addons-996992 has defined IP address 192.168.39.189 and MAC address 52:54:00:dd:8c:90 in network mk-addons-996992
	I0914 16:44:49.401728   16725 main.go:141] libmachine: (addons-996992) DBG | domain addons-996992 has defined MAC address 52:54:00:dd:8c:90 in network mk-addons-996992
	I0914 16:44:49.401838   16725 main.go:141] libmachine: (addons-996992) Calling .GetSSHPort
	I0914 16:44:49.402091   16725 main.go:141] libmachine: (addons-996992) Calling .GetSSHKeyPath
	I0914 16:44:49.402285   16725 main.go:141] libmachine: (addons-996992) Calling .GetSSHUsername
	I0914 16:44:49.402338   16725 main.go:141] libmachine: (addons-996992) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:8c:90", ip: ""} in network mk-addons-996992: {Iface:virbr1 ExpiryTime:2024-09-14 17:44:42 +0000 UTC Type:0 Mac:52:54:00:dd:8c:90 Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:addons-996992 Clientid:01:52:54:00:dd:8c:90}
	I0914 16:44:49.402362   16725 main.go:141] libmachine: (addons-996992) DBG | domain addons-996992 has defined IP address 192.168.39.189 and MAC address 52:54:00:dd:8c:90 in network mk-addons-996992
	I0914 16:44:49.402400   16725 sshutil.go:53] new ssh client: &{IP:192.168.39.189 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/addons-996992/id_rsa Username:docker}
	I0914 16:44:49.402603   16725 main.go:141] libmachine: (addons-996992) Calling .GetSSHPort
	I0914 16:44:49.402786   16725 main.go:141] libmachine: (addons-996992) Calling .GetSSHKeyPath
	I0914 16:44:49.402964   16725 main.go:141] libmachine: (addons-996992) Calling .GetSSHUsername
	I0914 16:44:49.403097   16725 sshutil.go:53] new ssh client: &{IP:192.168.39.189 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/addons-996992/id_rsa Username:docker}
	I0914 16:44:49.519392   16725 ssh_runner.go:195] Run: systemctl --version
	I0914 16:44:49.525764   16725 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0914 16:44:49.694011   16725 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0914 16:44:49.699486   16725 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0914 16:44:49.699547   16725 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0914 16:44:49.714748   16725 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0914 16:44:49.714768   16725 start.go:495] detecting cgroup driver to use...
	I0914 16:44:49.714822   16725 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0914 16:44:49.729936   16725 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0914 16:44:49.743531   16725 docker.go:217] disabling cri-docker service (if available) ...
	I0914 16:44:49.743604   16725 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0914 16:44:49.756964   16725 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0914 16:44:49.770590   16725 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0914 16:44:49.893965   16725 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0914 16:44:50.044352   16725 docker.go:233] disabling docker service ...
	I0914 16:44:50.044415   16725 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0914 16:44:50.059044   16725 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0914 16:44:50.073286   16725 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0914 16:44:50.194594   16725 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0914 16:44:50.308467   16725 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0914 16:44:50.322485   16725 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0914 16:44:50.339320   16725 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0914 16:44:50.339388   16725 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 16:44:50.348795   16725 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0914 16:44:50.348884   16725 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 16:44:50.358384   16725 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 16:44:50.367798   16725 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 16:44:50.377342   16725 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0914 16:44:50.387564   16725 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 16:44:50.397380   16725 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 16:44:50.414038   16725 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 16:44:50.424719   16725 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0914 16:44:50.433951   16725 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0914 16:44:50.434029   16725 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0914 16:44:50.446639   16725 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0914 16:44:50.456388   16725 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 16:44:50.574976   16725 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0914 16:44:50.661035   16725 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0914 16:44:50.661113   16725 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0914 16:44:50.665670   16725 start.go:563] Will wait 60s for crictl version
	I0914 16:44:50.665731   16725 ssh_runner.go:195] Run: which crictl
	I0914 16:44:50.669237   16725 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0914 16:44:50.707163   16725 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0914 16:44:50.707267   16725 ssh_runner.go:195] Run: crio --version
	I0914 16:44:50.732866   16725 ssh_runner.go:195] Run: crio --version
	I0914 16:44:50.760540   16725 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0914 16:44:50.761520   16725 main.go:141] libmachine: (addons-996992) Calling .GetIP
	I0914 16:44:50.764201   16725 main.go:141] libmachine: (addons-996992) DBG | domain addons-996992 has defined MAC address 52:54:00:dd:8c:90 in network mk-addons-996992
	I0914 16:44:50.764600   16725 main.go:141] libmachine: (addons-996992) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:8c:90", ip: ""} in network mk-addons-996992: {Iface:virbr1 ExpiryTime:2024-09-14 17:44:42 +0000 UTC Type:0 Mac:52:54:00:dd:8c:90 Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:addons-996992 Clientid:01:52:54:00:dd:8c:90}
	I0914 16:44:50.764627   16725 main.go:141] libmachine: (addons-996992) DBG | domain addons-996992 has defined IP address 192.168.39.189 and MAC address 52:54:00:dd:8c:90 in network mk-addons-996992
	I0914 16:44:50.764836   16725 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0914 16:44:50.768563   16725 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0914 16:44:50.780282   16725 kubeadm.go:883] updating cluster {Name:addons-996992 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19643/minikube-v1.34.0-1726281733-19643-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726281268-19643@sha256:cce8be0c1ac4e3d852132008ef1cc1dcf5b79f708d025db83f146ae65db32e8e Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-996992 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.189 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0914 16:44:50.780403   16725 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0914 16:44:50.780449   16725 ssh_runner.go:195] Run: sudo crictl images --output json
	I0914 16:44:50.811100   16725 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0914 16:44:50.811171   16725 ssh_runner.go:195] Run: which lz4
	I0914 16:44:50.815020   16725 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0914 16:44:50.818901   16725 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0914 16:44:50.818932   16725 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I0914 16:44:51.986671   16725 crio.go:462] duration metric: took 1.171676547s to copy over tarball
	I0914 16:44:51.986742   16725 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0914 16:44:54.089407   16725 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.102639006s)
	I0914 16:44:54.089436   16725 crio.go:469] duration metric: took 2.102736316s to extract the tarball
	I0914 16:44:54.089444   16725 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0914 16:44:54.127982   16725 ssh_runner.go:195] Run: sudo crictl images --output json
	I0914 16:44:54.168690   16725 crio.go:514] all images are preloaded for cri-o runtime.
	I0914 16:44:54.168718   16725 cache_images.go:84] Images are preloaded, skipping loading
	I0914 16:44:54.168726   16725 kubeadm.go:934] updating node { 192.168.39.189 8443 v1.31.1 crio true true} ...
	I0914 16:44:54.168840   16725 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-996992 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.189
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:addons-996992 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0914 16:44:54.168921   16725 ssh_runner.go:195] Run: crio config
	I0914 16:44:54.213151   16725 cni.go:84] Creating CNI manager for ""
	I0914 16:44:54.213177   16725 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0914 16:44:54.213187   16725 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0914 16:44:54.213208   16725 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.189 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-996992 NodeName:addons-996992 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.189"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.189 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0914 16:44:54.213406   16725 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.189
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-996992"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.189
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.189"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0914 16:44:54.213473   16725 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0914 16:44:54.223204   16725 binaries.go:44] Found k8s binaries, skipping transfer
	I0914 16:44:54.223288   16725 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0914 16:44:54.233103   16725 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0914 16:44:54.248690   16725 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0914 16:44:54.264306   16725 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2157 bytes)
	I0914 16:44:54.280174   16725 ssh_runner.go:195] Run: grep 192.168.39.189	control-plane.minikube.internal$ /etc/hosts
	I0914 16:44:54.283808   16725 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.189	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0914 16:44:54.295236   16725 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 16:44:54.407554   16725 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0914 16:44:54.423857   16725 certs.go:68] Setting up /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/addons-996992 for IP: 192.168.39.189
	I0914 16:44:54.423885   16725 certs.go:194] generating shared ca certs ...
	I0914 16:44:54.423899   16725 certs.go:226] acquiring lock for ca certs: {Name:mkb663a3180967f5f94f0c355b2cd55067394331 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 16:44:54.424055   16725 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19643-8806/.minikube/ca.key
	I0914 16:44:54.653328   16725 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19643-8806/.minikube/ca.crt ...
	I0914 16:44:54.653357   16725 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19643-8806/.minikube/ca.crt: {Name:mk83d7136889857d4ed25b0dba1b2df29c745e1d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 16:44:54.653511   16725 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19643-8806/.minikube/ca.key ...
	I0914 16:44:54.653521   16725 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19643-8806/.minikube/ca.key: {Name:mkf6a9abc7e34a97c99f2a5ec51dc983ba6352f9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 16:44:54.653592   16725 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19643-8806/.minikube/proxy-client-ca.key
	I0914 16:44:54.763073   16725 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19643-8806/.minikube/proxy-client-ca.crt ...
	I0914 16:44:54.763103   16725 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19643-8806/.minikube/proxy-client-ca.crt: {Name:mk4ef09caad655cf68088badaf279bd208978abd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 16:44:54.763267   16725 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19643-8806/.minikube/proxy-client-ca.key ...
	I0914 16:44:54.763279   16725 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19643-8806/.minikube/proxy-client-ca.key: {Name:mk3a507b5dffcb94432777f7f3e5733be1c0f3d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 16:44:54.763357   16725 certs.go:256] generating profile certs ...
	I0914 16:44:54.763409   16725 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/addons-996992/client.key
	I0914 16:44:54.763424   16725 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/addons-996992/client.crt with IP's: []
	I0914 16:44:54.910505   16725 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/addons-996992/client.crt ...
	I0914 16:44:54.910543   16725 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/addons-996992/client.crt: {Name:mk09179ed269a97b87aa12bc79284cfddef8c5e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 16:44:54.910700   16725 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/addons-996992/client.key ...
	I0914 16:44:54.910712   16725 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/addons-996992/client.key: {Name:mk74eedc746dd9fd7a750c2f3d02305cb8619c8f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 16:44:54.910777   16725 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/addons-996992/apiserver.key.75aeddca
	I0914 16:44:54.910796   16725 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/addons-996992/apiserver.crt.75aeddca with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.189]
	I0914 16:44:55.208240   16725 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/addons-996992/apiserver.crt.75aeddca ...
	I0914 16:44:55.208270   16725 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/addons-996992/apiserver.crt.75aeddca: {Name:mka09606e42dd1ecc4ea29944564740a07d14b63 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 16:44:55.208415   16725 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/addons-996992/apiserver.key.75aeddca ...
	I0914 16:44:55.208427   16725 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/addons-996992/apiserver.key.75aeddca: {Name:mkbcdd45d86dc41d397758dcbac5534936ad83b3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 16:44:55.208527   16725 certs.go:381] copying /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/addons-996992/apiserver.crt.75aeddca -> /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/addons-996992/apiserver.crt
	I0914 16:44:55.208613   16725 certs.go:385] copying /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/addons-996992/apiserver.key.75aeddca -> /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/addons-996992/apiserver.key
	I0914 16:44:55.208661   16725 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/addons-996992/proxy-client.key
	I0914 16:44:55.208677   16725 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/addons-996992/proxy-client.crt with IP's: []
	I0914 16:44:55.276375   16725 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/addons-996992/proxy-client.crt ...
	I0914 16:44:55.276402   16725 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/addons-996992/proxy-client.crt: {Name:mkf139a671d75a23c54568782300fb890e1af9cc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 16:44:55.276575   16725 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/addons-996992/proxy-client.key ...
	I0914 16:44:55.276588   16725 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/addons-996992/proxy-client.key: {Name:mkf3356386ba33ec54d5db11fd3dfe25bd2233d6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 16:44:55.276748   16725 certs.go:484] found cert: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca-key.pem (1679 bytes)
	I0914 16:44:55.276779   16725 certs.go:484] found cert: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca.pem (1082 bytes)
	I0914 16:44:55.276803   16725 certs.go:484] found cert: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/cert.pem (1123 bytes)
	I0914 16:44:55.276825   16725 certs.go:484] found cert: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/key.pem (1675 bytes)
	I0914 16:44:55.277400   16725 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0914 16:44:55.303836   16725 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0914 16:44:55.325577   16725 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0914 16:44:55.348012   16725 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0914 16:44:55.371496   16725 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/addons-996992/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0914 16:44:55.393703   16725 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/addons-996992/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0914 16:44:55.416084   16725 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/addons-996992/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0914 16:44:55.438231   16725 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/addons-996992/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0914 16:44:55.461207   16725 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0914 16:44:55.484035   16725 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0914 16:44:55.499790   16725 ssh_runner.go:195] Run: openssl version
	I0914 16:44:55.505113   16725 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0914 16:44:55.515170   16725 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0914 16:44:55.519587   16725 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 14 16:44 /usr/share/ca-certificates/minikubeCA.pem
	I0914 16:44:55.519665   16725 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0914 16:44:55.525286   16725 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0914 16:44:55.535581   16725 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0914 16:44:55.539357   16725 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0914 16:44:55.539419   16725 kubeadm.go:392] StartCluster: {Name:addons-996992 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19643/minikube-v1.34.0-1726281733-19643-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726281268-19643@sha256:cce8be0c1ac4e3d852132008ef1cc1dcf5b79f708d025db83f146ae65db32e8e Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-996992 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.189 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0914 16:44:55.539594   16725 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0914 16:44:55.539672   16725 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0914 16:44:55.575978   16725 cri.go:89] found id: ""
	I0914 16:44:55.576057   16725 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0914 16:44:55.585788   16725 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0914 16:44:55.595409   16725 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0914 16:44:55.604391   16725 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0914 16:44:55.604417   16725 kubeadm.go:157] found existing configuration files:
	
	I0914 16:44:55.604464   16725 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0914 16:44:55.612932   16725 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0914 16:44:55.613006   16725 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0914 16:44:55.621580   16725 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0914 16:44:55.629773   16725 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0914 16:44:55.629834   16725 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0914 16:44:55.638432   16725 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0914 16:44:55.646743   16725 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0914 16:44:55.646820   16725 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0914 16:44:55.655625   16725 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0914 16:44:55.663901   16725 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0914 16:44:55.663966   16725 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0914 16:44:55.672657   16725 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0914 16:44:55.725872   16725 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0914 16:44:55.725960   16725 kubeadm.go:310] [preflight] Running pre-flight checks
	I0914 16:44:55.830107   16725 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0914 16:44:55.830268   16725 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0914 16:44:55.830418   16725 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0914 16:44:55.839067   16725 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0914 16:44:55.872082   16725 out.go:235]   - Generating certificates and keys ...
	I0914 16:44:55.872184   16725 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0914 16:44:55.872270   16725 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0914 16:44:56.094669   16725 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0914 16:44:56.228851   16725 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0914 16:44:56.361198   16725 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0914 16:44:56.439341   16725 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0914 16:44:56.528538   16725 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0914 16:44:56.528694   16725 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-996992 localhost] and IPs [192.168.39.189 127.0.0.1 ::1]
	I0914 16:44:56.706339   16725 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0914 16:44:56.706543   16725 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-996992 localhost] and IPs [192.168.39.189 127.0.0.1 ::1]
	I0914 16:44:56.783275   16725 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0914 16:44:56.956298   16725 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0914 16:44:57.088304   16725 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0914 16:44:57.088427   16725 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0914 16:44:57.464241   16725 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0914 16:44:57.635302   16725 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0914 16:44:57.910383   16725 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0914 16:44:58.013201   16725 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0914 16:44:58.248188   16725 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0914 16:44:58.250774   16725 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0914 16:44:58.253067   16725 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0914 16:44:58.254997   16725 out.go:235]   - Booting up control plane ...
	I0914 16:44:58.255104   16725 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0914 16:44:58.255191   16725 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0914 16:44:58.255668   16725 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0914 16:44:58.271031   16725 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0914 16:44:58.280477   16725 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0914 16:44:58.280530   16725 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0914 16:44:58.407134   16725 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0914 16:44:58.407301   16725 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0914 16:44:58.908397   16725 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.392958ms
	I0914 16:44:58.908509   16725 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0914 16:45:04.906474   16725 kubeadm.go:310] [api-check] The API server is healthy after 6.002177937s
	I0914 16:45:04.924613   16725 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0914 16:45:04.939822   16725 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0914 16:45:04.973453   16725 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0914 16:45:04.973676   16725 kubeadm.go:310] [mark-control-plane] Marking the node addons-996992 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0914 16:45:04.986235   16725 kubeadm.go:310] [bootstrap-token] Using token: shp2dh.uruxonhtmw8h7ze1
	I0914 16:45:04.987488   16725 out.go:235]   - Configuring RBAC rules ...
	I0914 16:45:04.987689   16725 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0914 16:45:04.996042   16725 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0914 16:45:05.007370   16725 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0914 16:45:05.010610   16725 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0914 16:45:05.017711   16725 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0914 16:45:05.022294   16725 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0914 16:45:05.314010   16725 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0914 16:45:05.751385   16725 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0914 16:45:06.313096   16725 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0914 16:45:06.313132   16725 kubeadm.go:310] 
	I0914 16:45:06.313225   16725 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0914 16:45:06.313238   16725 kubeadm.go:310] 
	I0914 16:45:06.313395   16725 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0914 16:45:06.313413   16725 kubeadm.go:310] 
	I0914 16:45:06.313440   16725 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0914 16:45:06.313497   16725 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0914 16:45:06.313558   16725 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0914 16:45:06.313572   16725 kubeadm.go:310] 
	I0914 16:45:06.313771   16725 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0914 16:45:06.313800   16725 kubeadm.go:310] 
	I0914 16:45:06.313867   16725 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0914 16:45:06.313881   16725 kubeadm.go:310] 
	I0914 16:45:06.313921   16725 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0914 16:45:06.314006   16725 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0914 16:45:06.314098   16725 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0914 16:45:06.314108   16725 kubeadm.go:310] 
	I0914 16:45:06.314233   16725 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0914 16:45:06.314351   16725 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0914 16:45:06.314360   16725 kubeadm.go:310] 
	I0914 16:45:06.314447   16725 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token shp2dh.uruxonhtmw8h7ze1 \
	I0914 16:45:06.314568   16725 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:30a8b6e98108730ec92a246ad3f4e746211e79f910581e46cd69c4cd78384600 \
	I0914 16:45:06.314616   16725 kubeadm.go:310] 	--control-plane 
	I0914 16:45:06.314625   16725 kubeadm.go:310] 
	I0914 16:45:06.314722   16725 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0914 16:45:06.314730   16725 kubeadm.go:310] 
	I0914 16:45:06.314828   16725 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token shp2dh.uruxonhtmw8h7ze1 \
	I0914 16:45:06.314969   16725 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:30a8b6e98108730ec92a246ad3f4e746211e79f910581e46cd69c4cd78384600 
	I0914 16:45:06.315496   16725 kubeadm.go:310] W0914 16:44:55.704880     823 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0914 16:45:06.315862   16725 kubeadm.go:310] W0914 16:44:55.705784     823 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0914 16:45:06.315978   16725 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0914 16:45:06.315991   16725 cni.go:84] Creating CNI manager for ""
	I0914 16:45:06.315997   16725 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0914 16:45:06.317740   16725 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0914 16:45:06.319057   16725 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0914 16:45:06.331920   16725 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0914 16:45:06.353277   16725 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0914 16:45:06.353350   16725 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 16:45:06.353388   16725 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-996992 minikube.k8s.io/updated_at=2024_09_14T16_45_06_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=fbeeb744274463b05401c917e5ab21bbaf5ef95a minikube.k8s.io/name=addons-996992 minikube.k8s.io/primary=true
	I0914 16:45:06.375471   16725 ops.go:34] apiserver oom_adj: -16
	I0914 16:45:06.504882   16725 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 16:45:07.005141   16725 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 16:45:07.505774   16725 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 16:45:08.005050   16725 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 16:45:08.505830   16725 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 16:45:09.005575   16725 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 16:45:09.505807   16725 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 16:45:10.005492   16725 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 16:45:10.504986   16725 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 16:45:10.621672   16725 kubeadm.go:1113] duration metric: took 4.268383123s to wait for elevateKubeSystemPrivileges
	I0914 16:45:10.621717   16725 kubeadm.go:394] duration metric: took 15.082301818s to StartCluster
	I0914 16:45:10.621740   16725 settings.go:142] acquiring lock: {Name:mkee31cd57c3a91eff581c48ba7961460fb3e0b1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 16:45:10.621915   16725 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19643-8806/kubeconfig
	I0914 16:45:10.622431   16725 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19643-8806/kubeconfig: {Name:mk850f3e2f3c2ef1e81c795ae72e22e459e27140 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 16:45:10.622689   16725 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0914 16:45:10.622711   16725 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.189 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0914 16:45:10.622769   16725 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0914 16:45:10.622896   16725 config.go:182] Loaded profile config "addons-996992": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0914 16:45:10.622926   16725 addons.go:69] Setting helm-tiller=true in profile "addons-996992"
	I0914 16:45:10.622941   16725 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-996992"
	I0914 16:45:10.622950   16725 addons.go:69] Setting cloud-spanner=true in profile "addons-996992"
	I0914 16:45:10.622957   16725 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-996992"
	I0914 16:45:10.622897   16725 addons.go:69] Setting yakd=true in profile "addons-996992"
	I0914 16:45:10.622964   16725 addons.go:234] Setting addon cloud-spanner=true in "addons-996992"
	I0914 16:45:10.622970   16725 addons.go:69] Setting ingress-dns=true in profile "addons-996992"
	I0914 16:45:10.622976   16725 addons.go:234] Setting addon yakd=true in "addons-996992"
	I0914 16:45:10.622983   16725 addons.go:234] Setting addon ingress-dns=true in "addons-996992"
	I0914 16:45:10.622996   16725 host.go:66] Checking if "addons-996992" exists ...
	I0914 16:45:10.623004   16725 host.go:66] Checking if "addons-996992" exists ...
	I0914 16:45:10.623021   16725 host.go:66] Checking if "addons-996992" exists ...
	I0914 16:45:10.622933   16725 addons.go:69] Setting storage-provisioner=true in profile "addons-996992"
	I0914 16:45:10.623123   16725 addons.go:234] Setting addon storage-provisioner=true in "addons-996992"
	I0914 16:45:10.623142   16725 host.go:66] Checking if "addons-996992" exists ...
	I0914 16:45:10.623344   16725 addons.go:69] Setting volumesnapshots=true in profile "addons-996992"
	I0914 16:45:10.623366   16725 addons.go:234] Setting addon volumesnapshots=true in "addons-996992"
	I0914 16:45:10.623392   16725 host.go:66] Checking if "addons-996992" exists ...
	I0914 16:45:10.623393   16725 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 16:45:10.623426   16725 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 16:45:10.623459   16725 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 16:45:10.623483   16725 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 16:45:10.623506   16725 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 16:45:10.623518   16725 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 16:45:10.622951   16725 addons.go:234] Setting addon helm-tiller=true in "addons-996992"
	I0914 16:45:10.622917   16725 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-996992"
	I0914 16:45:10.623622   16725 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-996992"
	I0914 16:45:10.622926   16725 addons.go:69] Setting registry=true in profile "addons-996992"
	I0914 16:45:10.623646   16725 addons.go:234] Setting addon registry=true in "addons-996992"
	I0914 16:45:10.622961   16725 addons.go:69] Setting ingress=true in profile "addons-996992"
	I0914 16:45:10.623658   16725 addons.go:234] Setting addon ingress=true in "addons-996992"
	I0914 16:45:10.623672   16725 addons.go:69] Setting volcano=true in profile "addons-996992"
	I0914 16:45:10.623683   16725 addons.go:234] Setting addon volcano=true in "addons-996992"
	I0914 16:45:10.622914   16725 addons.go:69] Setting inspektor-gadget=true in profile "addons-996992"
	I0914 16:45:10.623704   16725 addons.go:69] Setting default-storageclass=true in profile "addons-996992"
	I0914 16:45:10.623713   16725 addons.go:234] Setting addon inspektor-gadget=true in "addons-996992"
	I0914 16:45:10.623717   16725 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-996992"
	I0914 16:45:10.622909   16725 addons.go:69] Setting metrics-server=true in profile "addons-996992"
	I0914 16:45:10.623726   16725 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-996992"
	I0914 16:45:10.622926   16725 addons.go:69] Setting gcp-auth=true in profile "addons-996992"
	I0914 16:45:10.623734   16725 addons.go:234] Setting addon metrics-server=true in "addons-996992"
	I0914 16:45:10.623757   16725 mustload.go:65] Loading cluster: addons-996992
	I0914 16:45:10.623769   16725 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-996992"
	I0914 16:45:10.623852   16725 host.go:66] Checking if "addons-996992" exists ...
	I0914 16:45:10.623914   16725 host.go:66] Checking if "addons-996992" exists ...
	I0914 16:45:10.623984   16725 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 16:45:10.624008   16725 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 16:45:10.624067   16725 host.go:66] Checking if "addons-996992" exists ...
	I0914 16:45:10.624232   16725 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 16:45:10.624260   16725 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 16:45:10.624329   16725 host.go:66] Checking if "addons-996992" exists ...
	I0914 16:45:10.624403   16725 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 16:45:10.624403   16725 host.go:66] Checking if "addons-996992" exists ...
	I0914 16:45:10.624463   16725 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 16:45:10.624746   16725 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 16:45:10.624786   16725 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 16:45:10.624834   16725 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 16:45:10.624904   16725 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 16:45:10.625011   16725 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 16:45:10.625036   16725 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 16:45:10.625228   16725 config.go:182] Loaded profile config "addons-996992": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0914 16:45:10.625249   16725 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 16:45:10.625262   16725 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 16:45:10.625277   16725 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 16:45:10.625297   16725 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 16:45:10.625391   16725 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 16:45:10.625433   16725 host.go:66] Checking if "addons-996992" exists ...
	I0914 16:45:10.625392   16725 host.go:66] Checking if "addons-996992" exists ...
	I0914 16:45:10.625866   16725 host.go:66] Checking if "addons-996992" exists ...
	I0914 16:45:10.625912   16725 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 16:45:10.625973   16725 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 16:45:10.626017   16725 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 16:45:10.626051   16725 out.go:177] * Verifying Kubernetes components...
	I0914 16:45:10.626257   16725 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 16:45:10.626289   16725 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 16:45:10.626630   16725 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 16:45:10.626698   16725 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 16:45:10.631422   16725 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 16:45:10.643737   16725 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34453
	I0914 16:45:10.644067   16725 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39747
	I0914 16:45:10.644260   16725 main.go:141] libmachine: () Calling .GetVersion
	I0914 16:45:10.643976   16725 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44929
	I0914 16:45:10.644937   16725 main.go:141] libmachine: Using API Version  1
	I0914 16:45:10.644959   16725 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 16:45:10.645032   16725 main.go:141] libmachine: () Calling .GetVersion
	I0914 16:45:10.645109   16725 main.go:141] libmachine: () Calling .GetVersion
	I0914 16:45:10.645308   16725 main.go:141] libmachine: () Calling .GetMachineName
	I0914 16:45:10.645466   16725 main.go:141] libmachine: Using API Version  1
	I0914 16:45:10.645486   16725 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 16:45:10.645661   16725 main.go:141] libmachine: Using API Version  1
	I0914 16:45:10.645674   16725 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 16:45:10.645856   16725 main.go:141] libmachine: () Calling .GetMachineName
	I0914 16:45:10.645968   16725 main.go:141] libmachine: () Calling .GetMachineName
	I0914 16:45:10.646318   16725 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 16:45:10.646363   16725 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 16:45:10.646410   16725 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 16:45:10.646443   16725 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 16:45:10.658785   16725 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 16:45:10.658848   16725 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 16:45:10.659642   16725 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 16:45:10.659689   16725 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 16:45:10.668950   16725 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37795
	I0914 16:45:10.669202   16725 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42407
	I0914 16:45:10.673147   16725 main.go:141] libmachine: () Calling .GetVersion
	I0914 16:45:10.673249   16725 main.go:141] libmachine: () Calling .GetVersion
	I0914 16:45:10.674307   16725 main.go:141] libmachine: Using API Version  1
	I0914 16:45:10.674330   16725 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 16:45:10.674658   16725 main.go:141] libmachine: Using API Version  1
	I0914 16:45:10.674677   16725 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 16:45:10.674857   16725 main.go:141] libmachine: () Calling .GetMachineName
	I0914 16:45:10.675190   16725 main.go:141] libmachine: () Calling .GetMachineName
	I0914 16:45:10.675403   16725 main.go:141] libmachine: (addons-996992) Calling .GetState
	I0914 16:45:10.675458   16725 main.go:141] libmachine: (addons-996992) Calling .GetState
	I0914 16:45:10.680254   16725 addons.go:234] Setting addon default-storageclass=true in "addons-996992"
	I0914 16:45:10.680332   16725 host.go:66] Checking if "addons-996992" exists ...
	I0914 16:45:10.680709   16725 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 16:45:10.680747   16725 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 16:45:10.681169   16725 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-996992"
	I0914 16:45:10.681215   16725 host.go:66] Checking if "addons-996992" exists ...
	I0914 16:45:10.681572   16725 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 16:45:10.681620   16725 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 16:45:10.688239   16725 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44591
	I0914 16:45:10.688935   16725 main.go:141] libmachine: () Calling .GetVersion
	I0914 16:45:10.689788   16725 main.go:141] libmachine: Using API Version  1
	I0914 16:45:10.689818   16725 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 16:45:10.690304   16725 main.go:141] libmachine: () Calling .GetMachineName
	I0914 16:45:10.691113   16725 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 16:45:10.691159   16725 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 16:45:10.695403   16725 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41083
	I0914 16:45:10.695859   16725 main.go:141] libmachine: () Calling .GetVersion
	I0914 16:45:10.696143   16725 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41857
	I0914 16:45:10.697034   16725 main.go:141] libmachine: Using API Version  1
	I0914 16:45:10.697057   16725 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 16:45:10.697432   16725 main.go:141] libmachine: () Calling .GetMachineName
	I0914 16:45:10.698006   16725 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 16:45:10.698052   16725 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 16:45:10.698627   16725 main.go:141] libmachine: () Calling .GetVersion
	I0914 16:45:10.699204   16725 main.go:141] libmachine: Using API Version  1
	I0914 16:45:10.699227   16725 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 16:45:10.699701   16725 main.go:141] libmachine: () Calling .GetMachineName
	I0914 16:45:10.699944   16725 main.go:141] libmachine: (addons-996992) Calling .GetState
	I0914 16:45:10.700002   16725 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40407
	I0914 16:45:10.700177   16725 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35041
	I0914 16:45:10.701070   16725 main.go:141] libmachine: () Calling .GetVersion
	I0914 16:45:10.701617   16725 main.go:141] libmachine: Using API Version  1
	I0914 16:45:10.701642   16725 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 16:45:10.701707   16725 main.go:141] libmachine: () Calling .GetVersion
	I0914 16:45:10.702279   16725 main.go:141] libmachine: () Calling .GetMachineName
	I0914 16:45:10.702857   16725 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 16:45:10.702896   16725 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 16:45:10.703130   16725 main.go:141] libmachine: (addons-996992) Calling .DriverName
	I0914 16:45:10.703659   16725 main.go:141] libmachine: Using API Version  1
	I0914 16:45:10.703682   16725 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 16:45:10.704625   16725 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0914 16:45:10.705330   16725 main.go:141] libmachine: () Calling .GetMachineName
	I0914 16:45:10.706061   16725 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0914 16:45:10.706078   16725 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0914 16:45:10.706100   16725 main.go:141] libmachine: (addons-996992) Calling .GetSSHHostname
	I0914 16:45:10.706896   16725 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 16:45:10.706941   16725 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 16:45:10.709826   16725 main.go:141] libmachine: (addons-996992) DBG | domain addons-996992 has defined MAC address 52:54:00:dd:8c:90 in network mk-addons-996992
	I0914 16:45:10.710025   16725 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45181
	I0914 16:45:10.710585   16725 main.go:141] libmachine: (addons-996992) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:8c:90", ip: ""} in network mk-addons-996992: {Iface:virbr1 ExpiryTime:2024-09-14 17:44:42 +0000 UTC Type:0 Mac:52:54:00:dd:8c:90 Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:addons-996992 Clientid:01:52:54:00:dd:8c:90}
	I0914 16:45:10.710610   16725 main.go:141] libmachine: (addons-996992) DBG | domain addons-996992 has defined IP address 192.168.39.189 and MAC address 52:54:00:dd:8c:90 in network mk-addons-996992
	I0914 16:45:10.710663   16725 main.go:141] libmachine: () Calling .GetVersion
	I0914 16:45:10.710948   16725 main.go:141] libmachine: (addons-996992) Calling .GetSSHPort
	I0914 16:45:10.711126   16725 main.go:141] libmachine: (addons-996992) Calling .GetSSHKeyPath
	I0914 16:45:10.711257   16725 main.go:141] libmachine: (addons-996992) Calling .GetSSHUsername
	I0914 16:45:10.711463   16725 sshutil.go:53] new ssh client: &{IP:192.168.39.189 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/addons-996992/id_rsa Username:docker}
	I0914 16:45:10.712334   16725 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45145
	I0914 16:45:10.712445   16725 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44519
	I0914 16:45:10.712635   16725 main.go:141] libmachine: () Calling .GetVersion
	I0914 16:45:10.713188   16725 main.go:141] libmachine: Using API Version  1
	I0914 16:45:10.713212   16725 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 16:45:10.713557   16725 main.go:141] libmachine: () Calling .GetMachineName
	I0914 16:45:10.714114   16725 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 16:45:10.714187   16725 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 16:45:10.714670   16725 main.go:141] libmachine: () Calling .GetVersion
	I0914 16:45:10.715212   16725 main.go:141] libmachine: Using API Version  1
	I0914 16:45:10.715229   16725 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 16:45:10.715594   16725 main.go:141] libmachine: () Calling .GetMachineName
	I0914 16:45:10.716145   16725 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 16:45:10.716181   16725 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 16:45:10.718969   16725 main.go:141] libmachine: Using API Version  1
	I0914 16:45:10.718990   16725 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 16:45:10.719432   16725 main.go:141] libmachine: () Calling .GetMachineName
	I0914 16:45:10.721094   16725 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35323
	I0914 16:45:10.721588   16725 main.go:141] libmachine: () Calling .GetVersion
	I0914 16:45:10.722010   16725 main.go:141] libmachine: Using API Version  1
	I0914 16:45:10.722031   16725 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 16:45:10.723638   16725 main.go:141] libmachine: () Calling .GetMachineName
	I0914 16:45:10.724834   16725 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36847
	I0914 16:45:10.724994   16725 main.go:141] libmachine: (addons-996992) Calling .GetState
	I0914 16:45:10.725170   16725 main.go:141] libmachine: () Calling .GetVersion
	I0914 16:45:10.725465   16725 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44461
	I0914 16:45:10.727414   16725 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45239
	I0914 16:45:10.727417   16725 main.go:141] libmachine: (addons-996992) Calling .DriverName
	I0914 16:45:10.727415   16725 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44971
	I0914 16:45:10.727546   16725 main.go:141] libmachine: Using API Version  1
	I0914 16:45:10.727570   16725 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 16:45:10.727636   16725 main.go:141] libmachine: Making call to close driver server
	I0914 16:45:10.727648   16725 main.go:141] libmachine: (addons-996992) Calling .Close
	I0914 16:45:10.727899   16725 main.go:141] libmachine: (addons-996992) DBG | Closing plugin on server side
	I0914 16:45:10.727912   16725 main.go:141] libmachine: () Calling .GetVersion
	I0914 16:45:10.727934   16725 main.go:141] libmachine: Successfully made call to close driver server
	I0914 16:45:10.727946   16725 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 16:45:10.727954   16725 main.go:141] libmachine: Making call to close driver server
	I0914 16:45:10.727962   16725 main.go:141] libmachine: (addons-996992) Calling .Close
	I0914 16:45:10.728003   16725 main.go:141] libmachine: () Calling .GetVersion
	I0914 16:45:10.728073   16725 main.go:141] libmachine: () Calling .GetMachineName
	I0914 16:45:10.728123   16725 main.go:141] libmachine: () Calling .GetVersion
	I0914 16:45:10.728189   16725 main.go:141] libmachine: (addons-996992) DBG | Closing plugin on server side
	I0914 16:45:10.728222   16725 main.go:141] libmachine: Successfully made call to close driver server
	I0914 16:45:10.728238   16725 main.go:141] libmachine: Making call to close connection to plugin binary
	W0914 16:45:10.728338   16725 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0914 16:45:10.728897   16725 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 16:45:10.728950   16725 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 16:45:10.729209   16725 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40193
	I0914 16:45:10.729478   16725 main.go:141] libmachine: Using API Version  1
	I0914 16:45:10.729509   16725 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 16:45:10.729637   16725 main.go:141] libmachine: () Calling .GetVersion
	I0914 16:45:10.729966   16725 main.go:141] libmachine: Using API Version  1
	I0914 16:45:10.729987   16725 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 16:45:10.730120   16725 main.go:141] libmachine: Using API Version  1
	I0914 16:45:10.730139   16725 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 16:45:10.730398   16725 main.go:141] libmachine: () Calling .GetMachineName
	I0914 16:45:10.730596   16725 main.go:141] libmachine: (addons-996992) Calling .GetState
	I0914 16:45:10.730665   16725 main.go:141] libmachine: () Calling .GetMachineName
	I0914 16:45:10.731392   16725 main.go:141] libmachine: (addons-996992) Calling .GetState
	I0914 16:45:10.731538   16725 main.go:141] libmachine: Using API Version  1
	I0914 16:45:10.731557   16725 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 16:45:10.731611   16725 main.go:141] libmachine: () Calling .GetMachineName
	I0914 16:45:10.732178   16725 main.go:141] libmachine: (addons-996992) Calling .GetState
	I0914 16:45:10.732245   16725 main.go:141] libmachine: () Calling .GetMachineName
	I0914 16:45:10.732295   16725 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45633
	I0914 16:45:10.734574   16725 main.go:141] libmachine: (addons-996992) Calling .DriverName
	I0914 16:45:10.734579   16725 main.go:141] libmachine: () Calling .GetVersion
	I0914 16:45:10.734688   16725 main.go:141] libmachine: (addons-996992) Calling .DriverName
	I0914 16:45:10.734744   16725 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34311
	I0914 16:45:10.735001   16725 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 16:45:10.735046   16725 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 16:45:10.735825   16725 host.go:66] Checking if "addons-996992" exists ...
	I0914 16:45:10.736192   16725 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 16:45:10.736223   16725 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 16:45:10.736395   16725 main.go:141] libmachine: () Calling .GetVersion
	I0914 16:45:10.736576   16725 main.go:141] libmachine: Using API Version  1
	I0914 16:45:10.736592   16725 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 16:45:10.736948   16725 main.go:141] libmachine: () Calling .GetMachineName
	I0914 16:45:10.737179   16725 main.go:141] libmachine: Using API Version  1
	I0914 16:45:10.737197   16725 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 16:45:10.737562   16725 main.go:141] libmachine: () Calling .GetMachineName
	I0914 16:45:10.737591   16725 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 16:45:10.737664   16725 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0914 16:45:10.738728   16725 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0914 16:45:10.738746   16725 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0914 16:45:10.738765   16725 main.go:141] libmachine: (addons-996992) Calling .GetSSHHostname
	I0914 16:45:10.739421   16725 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0914 16:45:10.739440   16725 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0914 16:45:10.739456   16725 main.go:141] libmachine: (addons-996992) Calling .GetSSHHostname
	I0914 16:45:10.742843   16725 main.go:141] libmachine: (addons-996992) DBG | domain addons-996992 has defined MAC address 52:54:00:dd:8c:90 in network mk-addons-996992
	I0914 16:45:10.743195   16725 main.go:141] libmachine: (addons-996992) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:8c:90", ip: ""} in network mk-addons-996992: {Iface:virbr1 ExpiryTime:2024-09-14 17:44:42 +0000 UTC Type:0 Mac:52:54:00:dd:8c:90 Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:addons-996992 Clientid:01:52:54:00:dd:8c:90}
	I0914 16:45:10.743228   16725 main.go:141] libmachine: (addons-996992) DBG | domain addons-996992 has defined IP address 192.168.39.189 and MAC address 52:54:00:dd:8c:90 in network mk-addons-996992
	I0914 16:45:10.743515   16725 main.go:141] libmachine: (addons-996992) Calling .GetSSHPort
	I0914 16:45:10.743739   16725 main.go:141] libmachine: (addons-996992) Calling .GetSSHKeyPath
	I0914 16:45:10.743928   16725 main.go:141] libmachine: (addons-996992) Calling .GetSSHUsername
	I0914 16:45:10.744098   16725 sshutil.go:53] new ssh client: &{IP:192.168.39.189 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/addons-996992/id_rsa Username:docker}
	I0914 16:45:10.744454   16725 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42957
	I0914 16:45:10.744602   16725 main.go:141] libmachine: (addons-996992) DBG | domain addons-996992 has defined MAC address 52:54:00:dd:8c:90 in network mk-addons-996992
	I0914 16:45:10.744871   16725 main.go:141] libmachine: (addons-996992) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:8c:90", ip: ""} in network mk-addons-996992: {Iface:virbr1 ExpiryTime:2024-09-14 17:44:42 +0000 UTC Type:0 Mac:52:54:00:dd:8c:90 Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:addons-996992 Clientid:01:52:54:00:dd:8c:90}
	I0914 16:45:10.744902   16725 main.go:141] libmachine: (addons-996992) DBG | domain addons-996992 has defined IP address 192.168.39.189 and MAC address 52:54:00:dd:8c:90 in network mk-addons-996992
	I0914 16:45:10.745182   16725 main.go:141] libmachine: (addons-996992) Calling .GetSSHPort
	I0914 16:45:10.745420   16725 main.go:141] libmachine: (addons-996992) Calling .GetSSHKeyPath
	I0914 16:45:10.745569   16725 main.go:141] libmachine: (addons-996992) Calling .GetSSHUsername
	I0914 16:45:10.745740   16725 sshutil.go:53] new ssh client: &{IP:192.168.39.189 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/addons-996992/id_rsa Username:docker}
	I0914 16:45:10.746637   16725 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 16:45:10.746670   16725 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 16:45:10.746699   16725 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 16:45:10.746715   16725 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 16:45:10.747176   16725 main.go:141] libmachine: () Calling .GetVersion
	I0914 16:45:10.748001   16725 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44723
	I0914 16:45:10.748265   16725 main.go:141] libmachine: Using API Version  1
	I0914 16:45:10.748278   16725 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 16:45:10.748857   16725 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 16:45:10.748894   16725 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 16:45:10.749102   16725 main.go:141] libmachine: () Calling .GetMachineName
	I0914 16:45:10.749338   16725 main.go:141] libmachine: (addons-996992) Calling .GetState
	I0914 16:45:10.749619   16725 main.go:141] libmachine: () Calling .GetVersion
	I0914 16:45:10.750242   16725 main.go:141] libmachine: Using API Version  1
	I0914 16:45:10.750258   16725 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 16:45:10.750658   16725 main.go:141] libmachine: () Calling .GetMachineName
	I0914 16:45:10.751280   16725 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 16:45:10.751315   16725 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 16:45:10.751558   16725 main.go:141] libmachine: (addons-996992) Calling .DriverName
	I0914 16:45:10.753110   16725 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42829
	I0914 16:45:10.753540   16725 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0914 16:45:10.753566   16725 main.go:141] libmachine: () Calling .GetVersion
	I0914 16:45:10.754094   16725 main.go:141] libmachine: Using API Version  1
	I0914 16:45:10.754112   16725 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 16:45:10.754480   16725 main.go:141] libmachine: () Calling .GetMachineName
	I0914 16:45:10.754671   16725 main.go:141] libmachine: (addons-996992) Calling .GetState
	I0914 16:45:10.755075   16725 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0914 16:45:10.755092   16725 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0914 16:45:10.755111   16725 main.go:141] libmachine: (addons-996992) Calling .GetSSHHostname
	I0914 16:45:10.757604   16725 main.go:141] libmachine: (addons-996992) Calling .DriverName
	I0914 16:45:10.758799   16725 main.go:141] libmachine: (addons-996992) DBG | domain addons-996992 has defined MAC address 52:54:00:dd:8c:90 in network mk-addons-996992
	I0914 16:45:10.759063   16725 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0914 16:45:10.759379   16725 main.go:141] libmachine: (addons-996992) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:8c:90", ip: ""} in network mk-addons-996992: {Iface:virbr1 ExpiryTime:2024-09-14 17:44:42 +0000 UTC Type:0 Mac:52:54:00:dd:8c:90 Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:addons-996992 Clientid:01:52:54:00:dd:8c:90}
	I0914 16:45:10.759413   16725 main.go:141] libmachine: (addons-996992) DBG | domain addons-996992 has defined IP address 192.168.39.189 and MAC address 52:54:00:dd:8c:90 in network mk-addons-996992
	I0914 16:45:10.759591   16725 main.go:141] libmachine: (addons-996992) Calling .GetSSHPort
	I0914 16:45:10.759777   16725 main.go:141] libmachine: (addons-996992) Calling .GetSSHKeyPath
	I0914 16:45:10.759925   16725 main.go:141] libmachine: (addons-996992) Calling .GetSSHUsername
	I0914 16:45:10.760043   16725 sshutil.go:53] new ssh client: &{IP:192.168.39.189 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/addons-996992/id_rsa Username:docker}
	I0914 16:45:10.761413   16725 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0914 16:45:10.764401   16725 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33395
	I0914 16:45:10.764486   16725 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43943
	I0914 16:45:10.764653   16725 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0914 16:45:10.764874   16725 main.go:141] libmachine: () Calling .GetVersion
	I0914 16:45:10.765386   16725 main.go:141] libmachine: Using API Version  1
	I0914 16:45:10.765410   16725 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 16:45:10.765758   16725 main.go:141] libmachine: () Calling .GetMachineName
	I0914 16:45:10.765983   16725 main.go:141] libmachine: (addons-996992) Calling .GetState
	I0914 16:45:10.767246   16725 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0914 16:45:10.767268   16725 main.go:141] libmachine: () Calling .GetVersion
	I0914 16:45:10.768228   16725 main.go:141] libmachine: Using API Version  1
	I0914 16:45:10.768265   16725 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 16:45:10.768284   16725 main.go:141] libmachine: (addons-996992) Calling .DriverName
	I0914 16:45:10.768810   16725 main.go:141] libmachine: () Calling .GetMachineName
	I0914 16:45:10.769040   16725 main.go:141] libmachine: (addons-996992) Calling .GetState
	I0914 16:45:10.769522   16725 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I0914 16:45:10.769526   16725 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0914 16:45:10.770047   16725 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37075
	I0914 16:45:10.770470   16725 main.go:141] libmachine: () Calling .GetVersion
	I0914 16:45:10.770948   16725 main.go:141] libmachine: Using API Version  1
	I0914 16:45:10.770965   16725 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 16:45:10.771278   16725 main.go:141] libmachine: () Calling .GetMachineName
	I0914 16:45:10.771438   16725 main.go:141] libmachine: (addons-996992) Calling .DriverName
	I0914 16:45:10.772503   16725 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0914 16:45:10.772561   16725 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0914 16:45:10.773645   16725 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0914 16:45:10.773685   16725 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0914 16:45:10.773697   16725 main.go:141] libmachine: (addons-996992) Calling .DriverName
	I0914 16:45:10.774893   16725 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0914 16:45:10.775085   16725 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0914 16:45:10.775109   16725 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0914 16:45:10.775128   16725 main.go:141] libmachine: (addons-996992) Calling .GetSSHHostname
	I0914 16:45:10.775683   16725 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0914 16:45:10.775853   16725 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36781
	I0914 16:45:10.775979   16725 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0914 16:45:10.776073   16725 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0914 16:45:10.776095   16725 main.go:141] libmachine: (addons-996992) Calling .GetSSHHostname
	I0914 16:45:10.776267   16725 main.go:141] libmachine: () Calling .GetVersion
	I0914 16:45:10.776399   16725 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45253
	I0914 16:45:10.776756   16725 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0914 16:45:10.776773   16725 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0914 16:45:10.776776   16725 main.go:141] libmachine: () Calling .GetVersion
	I0914 16:45:10.776797   16725 main.go:141] libmachine: (addons-996992) Calling .GetSSHHostname
	I0914 16:45:10.777646   16725 main.go:141] libmachine: Using API Version  1
	I0914 16:45:10.777664   16725 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 16:45:10.778321   16725 main.go:141] libmachine: Using API Version  1
	I0914 16:45:10.778341   16725 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 16:45:10.778636   16725 main.go:141] libmachine: () Calling .GetMachineName
	I0914 16:45:10.779063   16725 main.go:141] libmachine: (addons-996992) Calling .GetState
	I0914 16:45:10.780072   16725 main.go:141] libmachine: (addons-996992) DBG | domain addons-996992 has defined MAC address 52:54:00:dd:8c:90 in network mk-addons-996992
	I0914 16:45:10.780437   16725 main.go:141] libmachine: (addons-996992) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:8c:90", ip: ""} in network mk-addons-996992: {Iface:virbr1 ExpiryTime:2024-09-14 17:44:42 +0000 UTC Type:0 Mac:52:54:00:dd:8c:90 Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:addons-996992 Clientid:01:52:54:00:dd:8c:90}
	I0914 16:45:10.780455   16725 main.go:141] libmachine: (addons-996992) DBG | domain addons-996992 has defined IP address 192.168.39.189 and MAC address 52:54:00:dd:8c:90 in network mk-addons-996992
	I0914 16:45:10.780479   16725 main.go:141] libmachine: () Calling .GetMachineName
	I0914 16:45:10.780653   16725 main.go:141] libmachine: (addons-996992) Calling .GetSSHPort
	I0914 16:45:10.780703   16725 main.go:141] libmachine: (addons-996992) Calling .GetState
	I0914 16:45:10.780834   16725 main.go:141] libmachine: (addons-996992) Calling .GetSSHKeyPath
	I0914 16:45:10.780938   16725 main.go:141] libmachine: (addons-996992) Calling .GetSSHUsername
	I0914 16:45:10.781043   16725 sshutil.go:53] new ssh client: &{IP:192.168.39.189 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/addons-996992/id_rsa Username:docker}
	I0914 16:45:10.781324   16725 main.go:141] libmachine: (addons-996992) Calling .DriverName
	I0914 16:45:10.782798   16725 out.go:177]   - Using image docker.io/registry:2.8.3
	I0914 16:45:10.784596   16725 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0914 16:45:10.784747   16725 main.go:141] libmachine: (addons-996992) DBG | domain addons-996992 has defined MAC address 52:54:00:dd:8c:90 in network mk-addons-996992
	I0914 16:45:10.784942   16725 main.go:141] libmachine: (addons-996992) Calling .DriverName
	I0914 16:45:10.785509   16725 main.go:141] libmachine: (addons-996992) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:8c:90", ip: ""} in network mk-addons-996992: {Iface:virbr1 ExpiryTime:2024-09-14 17:44:42 +0000 UTC Type:0 Mac:52:54:00:dd:8c:90 Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:addons-996992 Clientid:01:52:54:00:dd:8c:90}
	I0914 16:45:10.785544   16725 main.go:141] libmachine: (addons-996992) DBG | domain addons-996992 has defined IP address 192.168.39.189 and MAC address 52:54:00:dd:8c:90 in network mk-addons-996992
	I0914 16:45:10.785572   16725 main.go:141] libmachine: (addons-996992) DBG | domain addons-996992 has defined MAC address 52:54:00:dd:8c:90 in network mk-addons-996992
	I0914 16:45:10.785798   16725 main.go:141] libmachine: (addons-996992) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:8c:90", ip: ""} in network mk-addons-996992: {Iface:virbr1 ExpiryTime:2024-09-14 17:44:42 +0000 UTC Type:0 Mac:52:54:00:dd:8c:90 Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:addons-996992 Clientid:01:52:54:00:dd:8c:90}
	I0914 16:45:10.785836   16725 main.go:141] libmachine: (addons-996992) DBG | domain addons-996992 has defined IP address 192.168.39.189 and MAC address 52:54:00:dd:8c:90 in network mk-addons-996992
	I0914 16:45:10.786069   16725 main.go:141] libmachine: (addons-996992) Calling .GetSSHPort
	I0914 16:45:10.786108   16725 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0914 16:45:10.786123   16725 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0914 16:45:10.786130   16725 main.go:141] libmachine: (addons-996992) Calling .GetSSHPort
	I0914 16:45:10.786141   16725 main.go:141] libmachine: (addons-996992) Calling .GetSSHHostname
	I0914 16:45:10.786311   16725 main.go:141] libmachine: (addons-996992) Calling .GetSSHKeyPath
	I0914 16:45:10.786443   16725 main.go:141] libmachine: (addons-996992) Calling .GetSSHUsername
	I0914 16:45:10.786567   16725 sshutil.go:53] new ssh client: &{IP:192.168.39.189 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/addons-996992/id_rsa Username:docker}
	I0914 16:45:10.786865   16725 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38023
	I0914 16:45:10.786927   16725 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33139
	I0914 16:45:10.787442   16725 main.go:141] libmachine: () Calling .GetVersion
	I0914 16:45:10.787449   16725 main.go:141] libmachine: () Calling .GetVersion
	I0914 16:45:10.787928   16725 main.go:141] libmachine: Using API Version  1
	I0914 16:45:10.787944   16725 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 16:45:10.788067   16725 main.go:141] libmachine: Using API Version  1
	I0914 16:45:10.788078   16725 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 16:45:10.788460   16725 main.go:141] libmachine: () Calling .GetMachineName
	I0914 16:45:10.788499   16725 main.go:141] libmachine: () Calling .GetMachineName
	I0914 16:45:10.788727   16725 main.go:141] libmachine: (addons-996992) Calling .GetState
	I0914 16:45:10.788782   16725 main.go:141] libmachine: (addons-996992) Calling .GetState
	I0914 16:45:10.789352   16725 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0
	I0914 16:45:10.789703   16725 main.go:141] libmachine: (addons-996992) Calling .GetSSHKeyPath
	I0914 16:45:10.790004   16725 main.go:141] libmachine: (addons-996992) DBG | domain addons-996992 has defined MAC address 52:54:00:dd:8c:90 in network mk-addons-996992
	I0914 16:45:10.790285   16725 main.go:141] libmachine: (addons-996992) Calling .GetSSHUsername
	I0914 16:45:10.790558   16725 sshutil.go:53] new ssh client: &{IP:192.168.39.189 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/addons-996992/id_rsa Username:docker}
	I0914 16:45:10.790863   16725 main.go:141] libmachine: (addons-996992) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:8c:90", ip: ""} in network mk-addons-996992: {Iface:virbr1 ExpiryTime:2024-09-14 17:44:42 +0000 UTC Type:0 Mac:52:54:00:dd:8c:90 Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:addons-996992 Clientid:01:52:54:00:dd:8c:90}
	I0914 16:45:10.790882   16725 main.go:141] libmachine: (addons-996992) DBG | domain addons-996992 has defined IP address 192.168.39.189 and MAC address 52:54:00:dd:8c:90 in network mk-addons-996992
	I0914 16:45:10.791031   16725 main.go:141] libmachine: (addons-996992) Calling .GetSSHPort
	I0914 16:45:10.791217   16725 main.go:141] libmachine: (addons-996992) Calling .GetSSHKeyPath
	I0914 16:45:10.791288   16725 main.go:141] libmachine: (addons-996992) Calling .DriverName
	I0914 16:45:10.791539   16725 main.go:141] libmachine: (addons-996992) Calling .GetSSHUsername
	I0914 16:45:10.791700   16725 sshutil.go:53] new ssh client: &{IP:192.168.39.189 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/addons-996992/id_rsa Username:docker}
	I0914 16:45:10.791780   16725 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0914 16:45:10.791796   16725 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0914 16:45:10.791815   16725 main.go:141] libmachine: (addons-996992) Calling .GetSSHHostname
	I0914 16:45:10.793587   16725 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0914 16:45:10.793606   16725 main.go:141] libmachine: (addons-996992) Calling .DriverName
	I0914 16:45:10.794824   16725 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0914 16:45:10.794856   16725 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0914 16:45:10.794874   16725 main.go:141] libmachine: (addons-996992) Calling .GetSSHHostname
	I0914 16:45:10.795591   16725 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.23
	I0914 16:45:10.796399   16725 main.go:141] libmachine: (addons-996992) DBG | domain addons-996992 has defined MAC address 52:54:00:dd:8c:90 in network mk-addons-996992
	I0914 16:45:10.796850   16725 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0914 16:45:10.796866   16725 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0914 16:45:10.796869   16725 main.go:141] libmachine: (addons-996992) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:8c:90", ip: ""} in network mk-addons-996992: {Iface:virbr1 ExpiryTime:2024-09-14 17:44:42 +0000 UTC Type:0 Mac:52:54:00:dd:8c:90 Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:addons-996992 Clientid:01:52:54:00:dd:8c:90}
	I0914 16:45:10.796884   16725 main.go:141] libmachine: (addons-996992) Calling .GetSSHHostname
	I0914 16:45:10.796884   16725 main.go:141] libmachine: (addons-996992) DBG | domain addons-996992 has defined IP address 192.168.39.189 and MAC address 52:54:00:dd:8c:90 in network mk-addons-996992
	I0914 16:45:10.797475   16725 main.go:141] libmachine: (addons-996992) Calling .GetSSHPort
	I0914 16:45:10.797658   16725 main.go:141] libmachine: (addons-996992) Calling .GetSSHKeyPath
	I0914 16:45:10.797852   16725 main.go:141] libmachine: (addons-996992) Calling .GetSSHUsername
	I0914 16:45:10.798050   16725 sshutil.go:53] new ssh client: &{IP:192.168.39.189 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/addons-996992/id_rsa Username:docker}
	I0914 16:45:10.798253   16725 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36037
	I0914 16:45:10.798969   16725 main.go:141] libmachine: (addons-996992) DBG | domain addons-996992 has defined MAC address 52:54:00:dd:8c:90 in network mk-addons-996992
	I0914 16:45:10.799185   16725 main.go:141] libmachine: () Calling .GetVersion
	I0914 16:45:10.799677   16725 main.go:141] libmachine: Using API Version  1
	I0914 16:45:10.799700   16725 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 16:45:10.799747   16725 main.go:141] libmachine: (addons-996992) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:8c:90", ip: ""} in network mk-addons-996992: {Iface:virbr1 ExpiryTime:2024-09-14 17:44:42 +0000 UTC Type:0 Mac:52:54:00:dd:8c:90 Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:addons-996992 Clientid:01:52:54:00:dd:8c:90}
	I0914 16:45:10.799773   16725 main.go:141] libmachine: (addons-996992) DBG | domain addons-996992 has defined IP address 192.168.39.189 and MAC address 52:54:00:dd:8c:90 in network mk-addons-996992
	I0914 16:45:10.800030   16725 main.go:141] libmachine: () Calling .GetMachineName
	I0914 16:45:10.800161   16725 main.go:141] libmachine: (addons-996992) Calling .GetSSHPort
	I0914 16:45:10.800242   16725 main.go:141] libmachine: (addons-996992) Calling .GetState
	I0914 16:45:10.800507   16725 main.go:141] libmachine: (addons-996992) DBG | domain addons-996992 has defined MAC address 52:54:00:dd:8c:90 in network mk-addons-996992
	I0914 16:45:10.800594   16725 main.go:141] libmachine: (addons-996992) Calling .GetSSHKeyPath
	I0914 16:45:10.800777   16725 main.go:141] libmachine: (addons-996992) Calling .GetSSHUsername
	I0914 16:45:10.800785   16725 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40381
	I0914 16:45:10.800907   16725 sshutil.go:53] new ssh client: &{IP:192.168.39.189 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/addons-996992/id_rsa Username:docker}
	I0914 16:45:10.801232   16725 main.go:141] libmachine: (addons-996992) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:8c:90", ip: ""} in network mk-addons-996992: {Iface:virbr1 ExpiryTime:2024-09-14 17:44:42 +0000 UTC Type:0 Mac:52:54:00:dd:8c:90 Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:addons-996992 Clientid:01:52:54:00:dd:8c:90}
	I0914 16:45:10.801253   16725 main.go:141] libmachine: (addons-996992) DBG | domain addons-996992 has defined IP address 192.168.39.189 and MAC address 52:54:00:dd:8c:90 in network mk-addons-996992
	I0914 16:45:10.801443   16725 main.go:141] libmachine: (addons-996992) Calling .GetSSHPort
	I0914 16:45:10.801712   16725 main.go:141] libmachine: (addons-996992) Calling .GetSSHKeyPath
	I0914 16:45:10.801851   16725 main.go:141] libmachine: (addons-996992) Calling .GetSSHUsername
	I0914 16:45:10.801916   16725 main.go:141] libmachine: (addons-996992) Calling .DriverName
	I0914 16:45:10.802030   16725 sshutil.go:53] new ssh client: &{IP:192.168.39.189 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/addons-996992/id_rsa Username:docker}
	I0914 16:45:10.802669   16725 main.go:141] libmachine: () Calling .GetVersion
	I0914 16:45:10.803212   16725 main.go:141] libmachine: Using API Version  1
	I0914 16:45:10.803239   16725 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 16:45:10.803521   16725 main.go:141] libmachine: () Calling .GetMachineName
	I0914 16:45:10.803699   16725 main.go:141] libmachine: (addons-996992) Calling .GetState
	I0914 16:45:10.803742   16725 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0914 16:45:10.804878   16725 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0914 16:45:10.804895   16725 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0914 16:45:10.804911   16725 main.go:141] libmachine: (addons-996992) Calling .GetSSHHostname
	I0914 16:45:10.805093   16725 main.go:141] libmachine: (addons-996992) Calling .DriverName
	I0914 16:45:10.806643   16725 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0914 16:45:10.807521   16725 main.go:141] libmachine: (addons-996992) DBG | domain addons-996992 has defined MAC address 52:54:00:dd:8c:90 in network mk-addons-996992
	I0914 16:45:10.807876   16725 main.go:141] libmachine: (addons-996992) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:8c:90", ip: ""} in network mk-addons-996992: {Iface:virbr1 ExpiryTime:2024-09-14 17:44:42 +0000 UTC Type:0 Mac:52:54:00:dd:8c:90 Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:addons-996992 Clientid:01:52:54:00:dd:8c:90}
	I0914 16:45:10.807899   16725 main.go:141] libmachine: (addons-996992) DBG | domain addons-996992 has defined IP address 192.168.39.189 and MAC address 52:54:00:dd:8c:90 in network mk-addons-996992
	I0914 16:45:10.808075   16725 main.go:141] libmachine: (addons-996992) Calling .GetSSHPort
	I0914 16:45:10.808222   16725 main.go:141] libmachine: (addons-996992) Calling .GetSSHKeyPath
	I0914 16:45:10.808318   16725 main.go:141] libmachine: (addons-996992) Calling .GetSSHUsername
	I0914 16:45:10.808404   16725 sshutil.go:53] new ssh client: &{IP:192.168.39.189 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/addons-996992/id_rsa Username:docker}
	I0914 16:45:10.808855   16725 out.go:177]   - Using image docker.io/busybox:stable
	I0914 16:45:10.809861   16725 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0914 16:45:10.809873   16725 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0914 16:45:10.809885   16725 main.go:141] libmachine: (addons-996992) Calling .GetSSHHostname
	I0914 16:45:10.812131   16725 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35713
	I0914 16:45:10.812590   16725 main.go:141] libmachine: () Calling .GetVersion
	I0914 16:45:10.812888   16725 main.go:141] libmachine: (addons-996992) DBG | domain addons-996992 has defined MAC address 52:54:00:dd:8c:90 in network mk-addons-996992
	I0914 16:45:10.813075   16725 main.go:141] libmachine: Using API Version  1
	I0914 16:45:10.813094   16725 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 16:45:10.813367   16725 main.go:141] libmachine: (addons-996992) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:8c:90", ip: ""} in network mk-addons-996992: {Iface:virbr1 ExpiryTime:2024-09-14 17:44:42 +0000 UTC Type:0 Mac:52:54:00:dd:8c:90 Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:addons-996992 Clientid:01:52:54:00:dd:8c:90}
	I0914 16:45:10.813384   16725 main.go:141] libmachine: (addons-996992) DBG | domain addons-996992 has defined IP address 192.168.39.189 and MAC address 52:54:00:dd:8c:90 in network mk-addons-996992
	I0914 16:45:10.813580   16725 main.go:141] libmachine: (addons-996992) Calling .GetSSHPort
	I0914 16:45:10.813714   16725 main.go:141] libmachine: () Calling .GetMachineName
	I0914 16:45:10.813818   16725 main.go:141] libmachine: (addons-996992) Calling .GetSSHKeyPath
	I0914 16:45:10.813904   16725 main.go:141] libmachine: (addons-996992) Calling .GetState
	I0914 16:45:10.813982   16725 main.go:141] libmachine: (addons-996992) Calling .GetSSHUsername
	I0914 16:45:10.814121   16725 sshutil.go:53] new ssh client: &{IP:192.168.39.189 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/addons-996992/id_rsa Username:docker}
	I0914 16:45:10.815554   16725 main.go:141] libmachine: (addons-996992) Calling .DriverName
	I0914 16:45:10.815750   16725 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0914 16:45:10.815759   16725 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0914 16:45:10.815769   16725 main.go:141] libmachine: (addons-996992) Calling .GetSSHHostname
	I0914 16:45:10.819041   16725 main.go:141] libmachine: (addons-996992) DBG | domain addons-996992 has defined MAC address 52:54:00:dd:8c:90 in network mk-addons-996992
	I0914 16:45:10.819420   16725 main.go:141] libmachine: (addons-996992) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:8c:90", ip: ""} in network mk-addons-996992: {Iface:virbr1 ExpiryTime:2024-09-14 17:44:42 +0000 UTC Type:0 Mac:52:54:00:dd:8c:90 Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:addons-996992 Clientid:01:52:54:00:dd:8c:90}
	I0914 16:45:10.819448   16725 main.go:141] libmachine: (addons-996992) DBG | domain addons-996992 has defined IP address 192.168.39.189 and MAC address 52:54:00:dd:8c:90 in network mk-addons-996992
	I0914 16:45:10.819588   16725 main.go:141] libmachine: (addons-996992) Calling .GetSSHPort
	I0914 16:45:10.819749   16725 main.go:141] libmachine: (addons-996992) Calling .GetSSHKeyPath
	I0914 16:45:10.819895   16725 main.go:141] libmachine: (addons-996992) Calling .GetSSHUsername
	I0914 16:45:10.820000   16725 sshutil.go:53] new ssh client: &{IP:192.168.39.189 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/addons-996992/id_rsa Username:docker}
	I0914 16:45:11.053496   16725 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0914 16:45:11.053527   16725 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0914 16:45:11.097975   16725 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0914 16:45:11.098000   16725 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0914 16:45:11.124289   16725 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0914 16:45:11.124318   16725 ssh_runner.go:362] scp helm-tiller/helm-tiller-rbac.yaml --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0914 16:45:11.154793   16725 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0914 16:45:11.154823   16725 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0914 16:45:11.167635   16725 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0914 16:45:11.167664   16725 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0914 16:45:11.184834   16725 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0914 16:45:11.184857   16725 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0914 16:45:11.195055   16725 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0914 16:45:11.210697   16725 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0914 16:45:11.248543   16725 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0914 16:45:11.248570   16725 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0914 16:45:11.259633   16725 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0914 16:45:11.260194   16725 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0914 16:45:11.260211   16725 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0914 16:45:11.270256   16725 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0914 16:45:11.270287   16725 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0914 16:45:11.323366   16725 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0914 16:45:11.328598   16725 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0914 16:45:11.337140   16725 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0914 16:45:11.337159   16725 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0914 16:45:11.338365   16725 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0914 16:45:11.338383   16725 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0914 16:45:11.341295   16725 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0914 16:45:11.348260   16725 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0914 16:45:11.367015   16725 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0914 16:45:11.367039   16725 ssh_runner.go:362] scp helm-tiller/helm-tiller-svc.yaml --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0914 16:45:11.367119   16725 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0914 16:45:11.367130   16725 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0914 16:45:11.373728   16725 node_ready.go:35] waiting up to 6m0s for node "addons-996992" to be "Ready" ...
	I0914 16:45:11.378694   16725 node_ready.go:49] node "addons-996992" has status "Ready":"True"
	I0914 16:45:11.378721   16725 node_ready.go:38] duration metric: took 4.969428ms for node "addons-996992" to be "Ready" ...
	I0914 16:45:11.378733   16725 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 16:45:11.384893   16725 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-2m4xb" in "kube-system" namespace to be "Ready" ...
	I0914 16:45:11.413618   16725 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0914 16:45:11.413646   16725 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0914 16:45:11.437356   16725 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0914 16:45:11.437390   16725 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0914 16:45:11.454900   16725 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0914 16:45:11.454926   16725 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0914 16:45:11.476373   16725 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0914 16:45:11.486849   16725 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0914 16:45:11.516082   16725 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0914 16:45:11.516112   16725 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0914 16:45:11.529228   16725 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0914 16:45:11.529258   16725 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0914 16:45:11.532615   16725 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0914 16:45:11.532647   16725 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0914 16:45:11.572481   16725 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0914 16:45:11.572521   16725 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0914 16:45:11.615905   16725 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0914 16:45:11.615938   16725 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0914 16:45:11.665213   16725 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0914 16:45:11.685127   16725 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0914 16:45:11.685162   16725 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0914 16:45:11.707538   16725 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0914 16:45:11.707569   16725 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0914 16:45:11.735433   16725 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0914 16:45:11.795975   16725 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0914 16:45:11.796003   16725 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0914 16:45:11.860384   16725 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0914 16:45:11.860415   16725 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0914 16:45:11.885579   16725 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0914 16:45:11.885602   16725 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0914 16:45:11.939398   16725 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0914 16:45:11.939428   16725 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0914 16:45:12.071279   16725 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0914 16:45:12.076177   16725 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0914 16:45:12.076212   16725 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0914 16:45:12.193047   16725 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0914 16:45:12.193067   16725 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0914 16:45:12.350531   16725 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0914 16:45:12.350553   16725 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0914 16:45:12.571518   16725 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0914 16:45:12.589231   16725 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0914 16:45:12.589261   16725 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0914 16:45:12.822425   16725 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0914 16:45:12.822449   16725 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0914 16:45:12.981922   16725 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0914 16:45:12.981946   16725 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0914 16:45:13.289971   16725 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0914 16:45:13.289994   16725 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0914 16:45:13.432574   16725 pod_ready.go:103] pod "coredns-7c65d6cfc9-2m4xb" in "kube-system" namespace has status "Ready":"False"
	I0914 16:45:13.662491   16725 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0914 16:45:13.691925   16725 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.507036024s)
	I0914 16:45:13.691964   16725 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0914 16:45:13.983899   16725 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (2.788807127s)
	I0914 16:45:13.983965   16725 main.go:141] libmachine: Making call to close driver server
	I0914 16:45:13.983978   16725 main.go:141] libmachine: (addons-996992) Calling .Close
	I0914 16:45:13.984306   16725 main.go:141] libmachine: Successfully made call to close driver server
	I0914 16:45:13.984324   16725 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 16:45:13.984333   16725 main.go:141] libmachine: Making call to close driver server
	I0914 16:45:13.984341   16725 main.go:141] libmachine: (addons-996992) Calling .Close
	I0914 16:45:13.984593   16725 main.go:141] libmachine: Successfully made call to close driver server
	I0914 16:45:13.984610   16725 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 16:45:14.263792   16725 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-996992" context rescaled to 1 replicas
	I0914 16:45:15.107060   16725 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (3.896323905s)
	I0914 16:45:15.107126   16725 main.go:141] libmachine: Making call to close driver server
	I0914 16:45:15.107142   16725 main.go:141] libmachine: (addons-996992) Calling .Close
	I0914 16:45:15.107451   16725 main.go:141] libmachine: Successfully made call to close driver server
	I0914 16:45:15.107471   16725 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 16:45:15.107471   16725 main.go:141] libmachine: (addons-996992) DBG | Closing plugin on server side
	I0914 16:45:15.107483   16725 main.go:141] libmachine: Making call to close driver server
	I0914 16:45:15.107491   16725 main.go:141] libmachine: (addons-996992) Calling .Close
	I0914 16:45:15.107708   16725 main.go:141] libmachine: Successfully made call to close driver server
	I0914 16:45:15.107721   16725 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 16:45:15.448055   16725 pod_ready.go:103] pod "coredns-7c65d6cfc9-2m4xb" in "kube-system" namespace has status "Ready":"False"
	I0914 16:45:15.802644   16725 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.542973946s)
	I0914 16:45:15.802658   16725 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (4.479250603s)
	I0914 16:45:15.802693   16725 main.go:141] libmachine: Making call to close driver server
	I0914 16:45:15.802710   16725 main.go:141] libmachine: (addons-996992) Calling .Close
	I0914 16:45:15.802698   16725 main.go:141] libmachine: Making call to close driver server
	I0914 16:45:15.802765   16725 main.go:141] libmachine: (addons-996992) Calling .Close
	I0914 16:45:15.803023   16725 main.go:141] libmachine: (addons-996992) DBG | Closing plugin on server side
	I0914 16:45:15.803044   16725 main.go:141] libmachine: Successfully made call to close driver server
	I0914 16:45:15.803090   16725 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 16:45:15.803101   16725 main.go:141] libmachine: Making call to close driver server
	I0914 16:45:15.803112   16725 main.go:141] libmachine: (addons-996992) Calling .Close
	I0914 16:45:15.803052   16725 main.go:141] libmachine: (addons-996992) DBG | Closing plugin on server side
	I0914 16:45:15.803049   16725 main.go:141] libmachine: Successfully made call to close driver server
	I0914 16:45:15.803183   16725 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 16:45:15.803193   16725 main.go:141] libmachine: Making call to close driver server
	I0914 16:45:15.803200   16725 main.go:141] libmachine: (addons-996992) Calling .Close
	I0914 16:45:15.803427   16725 main.go:141] libmachine: (addons-996992) DBG | Closing plugin on server side
	I0914 16:45:15.803495   16725 main.go:141] libmachine: (addons-996992) DBG | Closing plugin on server side
	I0914 16:45:15.803536   16725 main.go:141] libmachine: Successfully made call to close driver server
	I0914 16:45:15.803549   16725 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 16:45:15.804919   16725 main.go:141] libmachine: Successfully made call to close driver server
	I0914 16:45:15.804939   16725 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 16:45:17.807492   16725 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0914 16:45:17.807535   16725 main.go:141] libmachine: (addons-996992) Calling .GetSSHHostname
	I0914 16:45:17.810372   16725 main.go:141] libmachine: (addons-996992) DBG | domain addons-996992 has defined MAC address 52:54:00:dd:8c:90 in network mk-addons-996992
	I0914 16:45:17.810780   16725 main.go:141] libmachine: (addons-996992) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:8c:90", ip: ""} in network mk-addons-996992: {Iface:virbr1 ExpiryTime:2024-09-14 17:44:42 +0000 UTC Type:0 Mac:52:54:00:dd:8c:90 Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:addons-996992 Clientid:01:52:54:00:dd:8c:90}
	I0914 16:45:17.810816   16725 main.go:141] libmachine: (addons-996992) DBG | domain addons-996992 has defined IP address 192.168.39.189 and MAC address 52:54:00:dd:8c:90 in network mk-addons-996992
	I0914 16:45:17.810957   16725 main.go:141] libmachine: (addons-996992) Calling .GetSSHPort
	I0914 16:45:17.811136   16725 main.go:141] libmachine: (addons-996992) Calling .GetSSHKeyPath
	I0914 16:45:17.811330   16725 main.go:141] libmachine: (addons-996992) Calling .GetSSHUsername
	I0914 16:45:17.811482   16725 sshutil.go:53] new ssh client: &{IP:192.168.39.189 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/addons-996992/id_rsa Username:docker}
	I0914 16:45:17.922407   16725 pod_ready.go:103] pod "coredns-7c65d6cfc9-2m4xb" in "kube-system" namespace has status "Ready":"False"
	I0914 16:45:18.212498   16725 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0914 16:45:18.361996   16725 addons.go:234] Setting addon gcp-auth=true in "addons-996992"
	I0914 16:45:18.362064   16725 host.go:66] Checking if "addons-996992" exists ...
	I0914 16:45:18.362615   16725 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 16:45:18.362669   16725 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 16:45:18.378887   16725 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37503
	I0914 16:45:18.379466   16725 main.go:141] libmachine: () Calling .GetVersion
	I0914 16:45:18.380023   16725 main.go:141] libmachine: Using API Version  1
	I0914 16:45:18.380052   16725 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 16:45:18.380398   16725 main.go:141] libmachine: () Calling .GetMachineName
	I0914 16:45:18.380840   16725 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 16:45:18.380878   16725 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 16:45:18.397216   16725 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42849
	I0914 16:45:18.397733   16725 main.go:141] libmachine: () Calling .GetVersion
	I0914 16:45:18.398249   16725 main.go:141] libmachine: Using API Version  1
	I0914 16:45:18.398279   16725 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 16:45:18.398627   16725 main.go:141] libmachine: () Calling .GetMachineName
	I0914 16:45:18.398815   16725 main.go:141] libmachine: (addons-996992) Calling .GetState
	I0914 16:45:18.400541   16725 main.go:141] libmachine: (addons-996992) Calling .DriverName
	I0914 16:45:18.400765   16725 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0914 16:45:18.400791   16725 main.go:141] libmachine: (addons-996992) Calling .GetSSHHostname
	I0914 16:45:18.403800   16725 main.go:141] libmachine: (addons-996992) DBG | domain addons-996992 has defined MAC address 52:54:00:dd:8c:90 in network mk-addons-996992
	I0914 16:45:18.404197   16725 main.go:141] libmachine: (addons-996992) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:8c:90", ip: ""} in network mk-addons-996992: {Iface:virbr1 ExpiryTime:2024-09-14 17:44:42 +0000 UTC Type:0 Mac:52:54:00:dd:8c:90 Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:addons-996992 Clientid:01:52:54:00:dd:8c:90}
	I0914 16:45:18.404228   16725 main.go:141] libmachine: (addons-996992) DBG | domain addons-996992 has defined IP address 192.168.39.189 and MAC address 52:54:00:dd:8c:90 in network mk-addons-996992
	I0914 16:45:18.404369   16725 main.go:141] libmachine: (addons-996992) Calling .GetSSHPort
	I0914 16:45:18.404558   16725 main.go:141] libmachine: (addons-996992) Calling .GetSSHKeyPath
	I0914 16:45:18.404701   16725 main.go:141] libmachine: (addons-996992) Calling .GetSSHUsername
	I0914 16:45:18.404877   16725 sshutil.go:53] new ssh client: &{IP:192.168.39.189 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/addons-996992/id_rsa Username:docker}
	I0914 16:45:19.293405   16725 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (7.964772062s)
	I0914 16:45:19.293466   16725 main.go:141] libmachine: Making call to close driver server
	I0914 16:45:19.293469   16725 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (7.952144922s)
	I0914 16:45:19.293515   16725 main.go:141] libmachine: Making call to close driver server
	I0914 16:45:19.293537   16725 main.go:141] libmachine: (addons-996992) Calling .Close
	I0914 16:45:19.293479   16725 main.go:141] libmachine: (addons-996992) Calling .Close
	I0914 16:45:19.293535   16725 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (7.945253778s)
	I0914 16:45:19.293646   16725 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (7.817244353s)
	I0914 16:45:19.293653   16725 main.go:141] libmachine: Making call to close driver server
	I0914 16:45:19.293667   16725 main.go:141] libmachine: (addons-996992) Calling .Close
	I0914 16:45:19.293671   16725 main.go:141] libmachine: Making call to close driver server
	I0914 16:45:19.293682   16725 main.go:141] libmachine: (addons-996992) Calling .Close
	I0914 16:45:19.293679   16725 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (7.806797648s)
	I0914 16:45:19.293729   16725 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (7.628484398s)
	I0914 16:45:19.293741   16725 main.go:141] libmachine: Making call to close driver server
	I0914 16:45:19.293749   16725 main.go:141] libmachine: Making call to close driver server
	I0914 16:45:19.293760   16725 main.go:141] libmachine: (addons-996992) Calling .Close
	I0914 16:45:19.293762   16725 main.go:141] libmachine: (addons-996992) Calling .Close
	I0914 16:45:19.293784   16725 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (7.558321173s)
	I0914 16:45:19.293801   16725 main.go:141] libmachine: Making call to close driver server
	I0914 16:45:19.293811   16725 main.go:141] libmachine: (addons-996992) Calling .Close
	I0914 16:45:19.293887   16725 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (7.222576723s)
	W0914 16:45:19.293930   16725 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0914 16:45:19.293976   16725 retry.go:31] will retry after 361.189184ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0914 16:45:19.294023   16725 main.go:141] libmachine: (addons-996992) DBG | Closing plugin on server side
	I0914 16:45:19.294024   16725 main.go:141] libmachine: Successfully made call to close driver server
	I0914 16:45:19.294035   16725 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 16:45:19.294042   16725 main.go:141] libmachine: Making call to close driver server
	I0914 16:45:19.294038   16725 main.go:141] libmachine: (addons-996992) DBG | Closing plugin on server side
	I0914 16:45:19.294048   16725 main.go:141] libmachine: (addons-996992) Calling .Close
	I0914 16:45:19.294054   16725 main.go:141] libmachine: Successfully made call to close driver server
	I0914 16:45:19.294066   16725 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 16:45:19.294075   16725 main.go:141] libmachine: Making call to close driver server
	I0914 16:45:19.294081   16725 main.go:141] libmachine: (addons-996992) Calling .Close
	I0914 16:45:19.294098   16725 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (6.722532317s)
	I0914 16:45:19.294126   16725 main.go:141] libmachine: Making call to close driver server
	I0914 16:45:19.294139   16725 main.go:141] libmachine: (addons-996992) Calling .Close
	I0914 16:45:19.294145   16725 main.go:141] libmachine: (addons-996992) DBG | Closing plugin on server side
	I0914 16:45:19.294181   16725 main.go:141] libmachine: Successfully made call to close driver server
	I0914 16:45:19.294190   16725 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 16:45:19.294128   16725 main.go:141] libmachine: Successfully made call to close driver server
	I0914 16:45:19.294211   16725 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 16:45:19.294219   16725 main.go:141] libmachine: Making call to close driver server
	I0914 16:45:19.294225   16725 main.go:141] libmachine: (addons-996992) Calling .Close
	I0914 16:45:19.294243   16725 main.go:141] libmachine: (addons-996992) DBG | Closing plugin on server side
	I0914 16:45:19.294268   16725 main.go:141] libmachine: (addons-996992) DBG | Closing plugin on server side
	I0914 16:45:19.294198   16725 main.go:141] libmachine: Making call to close driver server
	I0914 16:45:19.294284   16725 main.go:141] libmachine: (addons-996992) Calling .Close
	I0914 16:45:19.294288   16725 main.go:141] libmachine: Successfully made call to close driver server
	I0914 16:45:19.294296   16725 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 16:45:19.294304   16725 main.go:141] libmachine: Making call to close driver server
	I0914 16:45:19.294311   16725 main.go:141] libmachine: (addons-996992) Calling .Close
	I0914 16:45:19.294338   16725 main.go:141] libmachine: Successfully made call to close driver server
	I0914 16:45:19.294352   16725 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 16:45:19.294363   16725 addons.go:475] Verifying addon metrics-server=true in "addons-996992"
	I0914 16:45:19.294368   16725 main.go:141] libmachine: (addons-996992) DBG | Closing plugin on server side
	I0914 16:45:19.294386   16725 main.go:141] libmachine: Successfully made call to close driver server
	I0914 16:45:19.294392   16725 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 16:45:19.294399   16725 main.go:141] libmachine: Making call to close driver server
	I0914 16:45:19.294405   16725 main.go:141] libmachine: (addons-996992) Calling .Close
	I0914 16:45:19.294869   16725 main.go:141] libmachine: (addons-996992) DBG | Closing plugin on server side
	I0914 16:45:19.294897   16725 main.go:141] libmachine: Successfully made call to close driver server
	I0914 16:45:19.294903   16725 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 16:45:19.294910   16725 main.go:141] libmachine: Making call to close driver server
	I0914 16:45:19.294916   16725 main.go:141] libmachine: (addons-996992) Calling .Close
	I0914 16:45:19.294965   16725 main.go:141] libmachine: (addons-996992) DBG | Closing plugin on server side
	I0914 16:45:19.294985   16725 main.go:141] libmachine: Successfully made call to close driver server
	I0914 16:45:19.294993   16725 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 16:45:19.295199   16725 main.go:141] libmachine: (addons-996992) DBG | Closing plugin on server side
	I0914 16:45:19.295218   16725 main.go:141] libmachine: (addons-996992) DBG | Closing plugin on server side
	I0914 16:45:19.295240   16725 main.go:141] libmachine: Successfully made call to close driver server
	I0914 16:45:19.295246   16725 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 16:45:19.297056   16725 main.go:141] libmachine: (addons-996992) DBG | Closing plugin on server side
	I0914 16:45:19.297087   16725 main.go:141] libmachine: Successfully made call to close driver server
	I0914 16:45:19.297093   16725 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 16:45:19.297100   16725 main.go:141] libmachine: Making call to close driver server
	I0914 16:45:19.297106   16725 main.go:141] libmachine: (addons-996992) Calling .Close
	I0914 16:45:19.297194   16725 main.go:141] libmachine: (addons-996992) DBG | Closing plugin on server side
	I0914 16:45:19.297214   16725 main.go:141] libmachine: Successfully made call to close driver server
	I0914 16:45:19.297221   16725 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 16:45:19.297458   16725 main.go:141] libmachine: Successfully made call to close driver server
	I0914 16:45:19.297469   16725 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 16:45:19.297479   16725 addons.go:475] Verifying addon ingress=true in "addons-996992"
	I0914 16:45:19.297608   16725 main.go:141] libmachine: (addons-996992) DBG | Closing plugin on server side
	I0914 16:45:19.297828   16725 main.go:141] libmachine: (addons-996992) DBG | Closing plugin on server side
	I0914 16:45:19.297852   16725 main.go:141] libmachine: Successfully made call to close driver server
	I0914 16:45:19.297858   16725 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 16:45:19.297867   16725 addons.go:475] Verifying addon registry=true in "addons-996992"
	I0914 16:45:19.297564   16725 main.go:141] libmachine: Successfully made call to close driver server
	I0914 16:45:19.297990   16725 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 16:45:19.297592   16725 main.go:141] libmachine: Successfully made call to close driver server
	I0914 16:45:19.298014   16725 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 16:45:19.299712   16725 out.go:177] * Verifying ingress addon...
	I0914 16:45:19.300586   16725 out.go:177] * Verifying registry addon...
	I0914 16:45:19.300595   16725 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-996992 service yakd-dashboard -n yakd-dashboard
	
	I0914 16:45:19.302049   16725 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0914 16:45:19.302931   16725 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0914 16:45:19.344991   16725 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0914 16:45:19.345020   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:45:19.345383   16725 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0914 16:45:19.345406   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:19.372208   16725 main.go:141] libmachine: Making call to close driver server
	I0914 16:45:19.372232   16725 main.go:141] libmachine: (addons-996992) Calling .Close
	I0914 16:45:19.372506   16725 main.go:141] libmachine: Successfully made call to close driver server
	I0914 16:45:19.372522   16725 main.go:141] libmachine: Making call to close connection to plugin binary
	W0914 16:45:19.372615   16725 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
	I0914 16:45:19.383702   16725 main.go:141] libmachine: Making call to close driver server
	I0914 16:45:19.383730   16725 main.go:141] libmachine: (addons-996992) Calling .Close
	I0914 16:45:19.384014   16725 main.go:141] libmachine: Successfully made call to close driver server
	I0914 16:45:19.384038   16725 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 16:45:19.655329   16725 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0914 16:45:20.045338   16725 pod_ready.go:103] pod "coredns-7c65d6cfc9-2m4xb" in "kube-system" namespace has status "Ready":"False"
	I0914 16:45:20.050206   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:20.055704   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:45:20.310964   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:45:20.311082   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:20.682921   16725 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (7.020377333s)
	I0914 16:45:20.682968   16725 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.282185443s)
	I0914 16:45:20.682969   16725 main.go:141] libmachine: Making call to close driver server
	I0914 16:45:20.682986   16725 main.go:141] libmachine: (addons-996992) Calling .Close
	I0914 16:45:20.683282   16725 main.go:141] libmachine: Successfully made call to close driver server
	I0914 16:45:20.683301   16725 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 16:45:20.683311   16725 main.go:141] libmachine: Making call to close driver server
	I0914 16:45:20.683320   16725 main.go:141] libmachine: (addons-996992) Calling .Close
	I0914 16:45:20.683581   16725 main.go:141] libmachine: (addons-996992) DBG | Closing plugin on server side
	I0914 16:45:20.683592   16725 main.go:141] libmachine: Successfully made call to close driver server
	I0914 16:45:20.683609   16725 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 16:45:20.683625   16725 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-996992"
	I0914 16:45:20.684836   16725 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0914 16:45:20.685652   16725 out.go:177] * Verifying csi-hostpath-driver addon...
	I0914 16:45:20.687381   16725 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0914 16:45:20.688045   16725 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0914 16:45:20.688683   16725 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0914 16:45:20.688704   16725 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0914 16:45:20.699808   16725 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0914 16:45:20.699830   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:20.760828   16725 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0914 16:45:20.760854   16725 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0914 16:45:20.806360   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:20.808190   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:45:20.876308   16725 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0914 16:45:20.876331   16725 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0914 16:45:20.962823   16725 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0914 16:45:21.194364   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:21.308241   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:45:21.308330   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:21.459476   16725 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.804100826s)
	I0914 16:45:21.459541   16725 main.go:141] libmachine: Making call to close driver server
	I0914 16:45:21.459563   16725 main.go:141] libmachine: (addons-996992) Calling .Close
	I0914 16:45:21.459818   16725 main.go:141] libmachine: Successfully made call to close driver server
	I0914 16:45:21.459856   16725 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 16:45:21.459870   16725 main.go:141] libmachine: Making call to close driver server
	I0914 16:45:21.459878   16725 main.go:141] libmachine: (addons-996992) Calling .Close
	I0914 16:45:21.460217   16725 main.go:141] libmachine: (addons-996992) DBG | Closing plugin on server side
	I0914 16:45:21.460243   16725 main.go:141] libmachine: Successfully made call to close driver server
	I0914 16:45:21.460259   16725 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 16:45:21.692747   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:21.824936   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:45:21.825463   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:22.037036   16725 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.074172157s)
	I0914 16:45:22.037089   16725 main.go:141] libmachine: Making call to close driver server
	I0914 16:45:22.037108   16725 main.go:141] libmachine: (addons-996992) Calling .Close
	I0914 16:45:22.037385   16725 main.go:141] libmachine: (addons-996992) DBG | Closing plugin on server side
	I0914 16:45:22.037437   16725 main.go:141] libmachine: Successfully made call to close driver server
	I0914 16:45:22.037456   16725 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 16:45:22.037470   16725 main.go:141] libmachine: Making call to close driver server
	I0914 16:45:22.037478   16725 main.go:141] libmachine: (addons-996992) Calling .Close
	I0914 16:45:22.037812   16725 main.go:141] libmachine: Successfully made call to close driver server
	I0914 16:45:22.037826   16725 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 16:45:22.039855   16725 addons.go:475] Verifying addon gcp-auth=true in "addons-996992"
	I0914 16:45:22.041190   16725 out.go:177] * Verifying gcp-auth addon...
	I0914 16:45:22.043315   16725 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0914 16:45:22.062131   16725 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0914 16:45:22.062174   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:45:22.206114   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:22.305919   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:22.307902   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:45:22.397413   16725 pod_ready.go:103] pod "coredns-7c65d6cfc9-2m4xb" in "kube-system" namespace has status "Ready":"False"
	I0914 16:45:22.548345   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:45:22.692725   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:22.829322   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:22.829369   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:45:23.047052   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:45:23.193924   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:23.306209   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:45:23.307371   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:23.547918   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:45:23.693915   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:23.806505   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:23.808215   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:45:24.047225   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:45:24.195089   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:24.311883   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:45:24.312000   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:24.547845   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:45:24.693213   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:24.807438   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:45:24.807893   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:24.892150   16725 pod_ready.go:103] pod "coredns-7c65d6cfc9-2m4xb" in "kube-system" namespace has status "Ready":"False"
	I0914 16:45:25.047378   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:45:25.193183   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:25.308297   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:45:25.308656   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:25.547425   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:45:25.695489   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:25.807000   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:25.807151   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:45:26.047297   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:45:26.192551   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:26.306770   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:45:26.307157   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:26.548995   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:45:26.692772   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:26.807385   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:45:26.808205   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:27.052696   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:45:27.195215   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:27.307090   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:27.307252   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:45:27.392113   16725 pod_ready.go:98] pod "coredns-7c65d6cfc9-2m4xb" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-14 16:45:27 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-14 16:45:10 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-14 16:45:10 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-14 16:45:10 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-14 16:45:10 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.39.189 HostIPs:[{IP:192.168.39.189}] PodIP:10.244.0.2 PodIPs:[{IP:10.244.0.2}] StartTime:2024-09-14 16:45:10 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-09-14 16:45:14 +0000 UTC,FinishedAt:2024-09-14 16:45:24 +0000 UTC,ContainerID:cri-o://d8ccb9ffdaa78f4f6b1f3a7ed75959532f0d411be1e2cde7aafffc2ed35e4c0c,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.3 ImageID:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6 ContainerID:cri-o://d8ccb9ffdaa78f4f6b1f3a7ed75959532f0d411be1e2cde7aafffc2ed35e4c0c Started:0xc0029481a0 AllocatedResources:map[] Resources:nil VolumeMounts:[{Name:config-volume MountPath:/etc/coredns ReadOnly:true RecursiveReadOnly:0xc001d01430} {Name:kube-api-access-gv6ld MountPath:/var/run/secrets/kubernetes.io/serviceaccount ReadOnly:true RecursiveReadOnly:0xc001d01440}] User:nil AllocatedResourcesStatus:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0914 16:45:27.392141   16725 pod_ready.go:82] duration metric: took 16.007223581s for pod "coredns-7c65d6cfc9-2m4xb" in "kube-system" namespace to be "Ready" ...
	E0914 16:45:27.392157   16725 pod_ready.go:67] WaitExtra: waitPodCondition: pod "coredns-7c65d6cfc9-2m4xb" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-14 16:45:27 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-14 16:45:10 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-14 16:45:10 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-14 16:45:10 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-14 16:45:10 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.39.189 HostIPs:[{IP:192.168.39.189}] PodIP:10.244.0.2 PodIPs:[{IP:10.244.0.2}] StartTime:2024-09-14 16:45:10 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-09-14 16:45:14 +0000 UTC,FinishedAt:2024-09-14 16:45:24 +0000 UTC,ContainerID:cri-o://d8ccb9ffdaa78f4f6b1f3a7ed75959532f0d411be1e2cde7aafffc2ed35e4c0c,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.3 ImageID:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6 ContainerID:cri-o://d8ccb9ffdaa78f4f6b1f3a7ed75959532f0d411be1e2cde7aafffc2ed35e4c0c Started:0xc0029481a0 AllocatedResources:map[] Resources:nil VolumeMounts:[{Name:config-volume MountPath:/etc/coredns ReadOnly:true RecursiveReadOnly:0xc001d01430} {Name:kube-api-access-gv6ld MountPath:/var/run/secrets/kubernetes.io/serviceaccount ReadOnly:true RecursiveReadOnly:0xc001d01440}] User:nil AllocatedResourcesStatus:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0914 16:45:27.392172   16725 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-9p6z9" in "kube-system" namespace to be "Ready" ...
	I0914 16:45:27.547236   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:45:27.692797   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:27.805927   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:27.808529   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:45:28.046967   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:45:28.193365   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:28.306453   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:28.306996   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:45:28.547515   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:45:28.692136   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:28.805564   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:28.808148   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:45:29.047966   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:45:29.192746   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:29.306293   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:45:29.307762   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:29.397654   16725 pod_ready.go:103] pod "coredns-7c65d6cfc9-9p6z9" in "kube-system" namespace has status "Ready":"False"
	I0914 16:45:29.546652   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:45:29.692992   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:29.806654   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:29.807372   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:45:30.048650   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:45:30.200286   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:30.307076   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:30.307351   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:45:30.547222   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:45:30.692129   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:30.806326   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:30.806696   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:45:31.047541   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:45:31.193463   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:31.306316   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:45:31.306957   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:31.400132   16725 pod_ready.go:103] pod "coredns-7c65d6cfc9-9p6z9" in "kube-system" namespace has status "Ready":"False"
	I0914 16:45:31.547554   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:45:31.691976   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:31.806039   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:31.807935   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:45:32.046311   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:45:32.193223   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:32.305895   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:32.306116   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:45:32.547547   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:45:32.693274   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:32.806864   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:45:32.807025   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:33.046675   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:45:33.192788   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:33.307118   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:33.307576   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:45:33.547264   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:45:33.691956   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:33.805950   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:33.807272   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:45:33.898447   16725 pod_ready.go:103] pod "coredns-7c65d6cfc9-9p6z9" in "kube-system" namespace has status "Ready":"False"
	I0914 16:45:34.046538   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:45:34.193111   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:34.306594   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:45:34.306780   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:34.547534   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:45:34.693573   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:34.806532   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:34.807796   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:45:35.049173   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:45:35.193341   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:35.306957   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:45:35.307826   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:35.547124   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:45:35.693884   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:35.813240   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:45:35.813472   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:35.898771   16725 pod_ready.go:103] pod "coredns-7c65d6cfc9-9p6z9" in "kube-system" namespace has status "Ready":"False"
	I0914 16:45:36.046736   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:45:36.192647   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:36.307028   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:45:36.307153   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:36.550055   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:45:36.692268   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:36.808196   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:45:36.808552   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:37.047345   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:45:37.192191   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:37.306427   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:45:37.306615   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:37.546905   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:45:37.693413   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:37.806415   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:45:37.806625   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:37.906344   16725 pod_ready.go:103] pod "coredns-7c65d6cfc9-9p6z9" in "kube-system" namespace has status "Ready":"False"
	I0914 16:45:38.047348   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:45:38.192226   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:38.307259   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:38.308416   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:45:38.549806   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:45:38.693516   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:38.806779   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:38.807117   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:45:39.047166   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:45:39.193398   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:39.305796   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:39.306965   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:45:39.546569   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:45:39.692192   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:39.807726   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:39.809337   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:45:40.047029   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:45:40.198177   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:40.306487   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:45:40.306759   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:40.398546   16725 pod_ready.go:103] pod "coredns-7c65d6cfc9-9p6z9" in "kube-system" namespace has status "Ready":"False"
	I0914 16:45:40.546426   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:45:40.692436   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:40.807118   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:40.808125   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:45:41.048639   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:45:41.193023   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:41.306385   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:45:41.307022   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:41.546832   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:45:41.692299   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:41.806619   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:45:41.807745   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:42.051127   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:45:42.193235   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:42.306207   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:42.307023   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:45:42.547148   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:45:42.692114   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:42.807237   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:42.807551   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:45:42.898978   16725 pod_ready.go:103] pod "coredns-7c65d6cfc9-9p6z9" in "kube-system" namespace has status "Ready":"False"
	I0914 16:45:43.047443   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:45:43.192717   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:43.306429   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:43.307536   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:45:43.547361   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:45:43.692472   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:43.806328   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:43.806544   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:45:44.047256   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:45:44.193079   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:44.307376   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:45:44.307539   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:44.546600   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:45:44.947832   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:45:44.948674   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:44.949499   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:44.954329   16725 pod_ready.go:103] pod "coredns-7c65d6cfc9-9p6z9" in "kube-system" namespace has status "Ready":"False"
	I0914 16:45:45.047207   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:45:45.192019   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:45.307059   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:45.307388   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:45:45.546442   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:45:45.693013   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:45.807362   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:45:45.808026   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:46.049098   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:45:46.193102   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:46.307108   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:45:46.307421   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:46.548460   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:45:46.692457   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:46.807661   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:45:46.807813   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:47.048241   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:45:47.192214   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:47.306248   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:47.306671   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:45:47.398101   16725 pod_ready.go:103] pod "coredns-7c65d6cfc9-9p6z9" in "kube-system" namespace has status "Ready":"False"
	I0914 16:45:47.547639   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:45:47.693105   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:47.806345   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:45:47.806838   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:47.898498   16725 pod_ready.go:93] pod "coredns-7c65d6cfc9-9p6z9" in "kube-system" namespace has status "Ready":"True"
	I0914 16:45:47.898523   16725 pod_ready.go:82] duration metric: took 20.506341334s for pod "coredns-7c65d6cfc9-9p6z9" in "kube-system" namespace to be "Ready" ...
	I0914 16:45:47.898537   16725 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-996992" in "kube-system" namespace to be "Ready" ...
	I0914 16:45:47.903604   16725 pod_ready.go:93] pod "etcd-addons-996992" in "kube-system" namespace has status "Ready":"True"
	I0914 16:45:47.903629   16725 pod_ready.go:82] duration metric: took 5.083745ms for pod "etcd-addons-996992" in "kube-system" namespace to be "Ready" ...
	I0914 16:45:47.903640   16725 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-996992" in "kube-system" namespace to be "Ready" ...
	I0914 16:45:47.908397   16725 pod_ready.go:93] pod "kube-apiserver-addons-996992" in "kube-system" namespace has status "Ready":"True"
	I0914 16:45:47.908426   16725 pod_ready.go:82] duration metric: took 4.777526ms for pod "kube-apiserver-addons-996992" in "kube-system" namespace to be "Ready" ...
	I0914 16:45:47.908439   16725 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-996992" in "kube-system" namespace to be "Ready" ...
	I0914 16:45:47.918027   16725 pod_ready.go:93] pod "kube-controller-manager-addons-996992" in "kube-system" namespace has status "Ready":"True"
	I0914 16:45:47.918048   16725 pod_ready.go:82] duration metric: took 9.601319ms for pod "kube-controller-manager-addons-996992" in "kube-system" namespace to be "Ready" ...
	I0914 16:45:47.918056   16725 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-ll2cd" in "kube-system" namespace to be "Ready" ...
	I0914 16:45:47.923629   16725 pod_ready.go:93] pod "kube-proxy-ll2cd" in "kube-system" namespace has status "Ready":"True"
	I0914 16:45:47.923659   16725 pod_ready.go:82] duration metric: took 5.594635ms for pod "kube-proxy-ll2cd" in "kube-system" namespace to be "Ready" ...
	I0914 16:45:47.923671   16725 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-996992" in "kube-system" namespace to be "Ready" ...
	I0914 16:45:48.047579   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:45:48.193569   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:48.296378   16725 pod_ready.go:93] pod "kube-scheduler-addons-996992" in "kube-system" namespace has status "Ready":"True"
	I0914 16:45:48.296405   16725 pod_ready.go:82] duration metric: took 372.727475ms for pod "kube-scheduler-addons-996992" in "kube-system" namespace to be "Ready" ...
	I0914 16:45:48.296414   16725 pod_ready.go:39] duration metric: took 36.917662966s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 16:45:48.296429   16725 api_server.go:52] waiting for apiserver process to appear ...
	I0914 16:45:48.296474   16725 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 16:45:48.307319   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:45:48.308769   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:48.333952   16725 api_server.go:72] duration metric: took 37.711200096s to wait for apiserver process to appear ...
	I0914 16:45:48.333977   16725 api_server.go:88] waiting for apiserver healthz status ...
	I0914 16:45:48.333995   16725 api_server.go:253] Checking apiserver healthz at https://192.168.39.189:8443/healthz ...
	I0914 16:45:48.338947   16725 api_server.go:279] https://192.168.39.189:8443/healthz returned 200:
	ok
	I0914 16:45:48.340137   16725 api_server.go:141] control plane version: v1.31.1
	I0914 16:45:48.340167   16725 api_server.go:131] duration metric: took 6.183106ms to wait for apiserver health ...
	I0914 16:45:48.340177   16725 system_pods.go:43] waiting for kube-system pods to appear ...
	I0914 16:45:48.504689   16725 system_pods.go:59] 18 kube-system pods found
	I0914 16:45:48.504742   16725 system_pods.go:61] "coredns-7c65d6cfc9-9p6z9" [8b60a487-876e-49a1-9a02-ff29269e6cd9] Running
	I0914 16:45:48.504756   16725 system_pods.go:61] "csi-hostpath-attacher-0" [fc163c87-b3c1-44fb-b23a-daf71f2476fd] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0914 16:45:48.504781   16725 system_pods.go:61] "csi-hostpath-resizer-0" [cb3dc269-4b68-41cc-8dac-f4e4cac02923] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0914 16:45:48.504800   16725 system_pods.go:61] "csi-hostpathplugin-j8fzx" [4c687703-e40a-48df-9dbf-ef6c5b71f2c9] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0914 16:45:48.504806   16725 system_pods.go:61] "etcd-addons-996992" [51dddf60-7bb8-4d07-b593-4841d49d04c6] Running
	I0914 16:45:48.504812   16725 system_pods.go:61] "kube-apiserver-addons-996992" [df7a9746-e613-42b3-99ae-376c32e5c9c5] Running
	I0914 16:45:48.504818   16725 system_pods.go:61] "kube-controller-manager-addons-996992" [d0f2e301-3365-4b32-8aa6-583d2794b9d1] Running
	I0914 16:45:48.504829   16725 system_pods.go:61] "kube-ingress-dns-minikube" [9bd2d610-b1b0-4b09-a8f9-29d24b8f3e18] Running
	I0914 16:45:48.504835   16725 system_pods.go:61] "kube-proxy-ll2cd" [77c4fbce-cceb-4918-871f-5d17932941f1] Running
	I0914 16:45:48.504840   16725 system_pods.go:61] "kube-scheduler-addons-996992" [e9922ffd-3c61-47c3-a0d0-2063f8e8484d] Running
	I0914 16:45:48.504848   16725 system_pods.go:61] "metrics-server-84c5f94fbc-zpthv" [5adc8bfb-2fb3-4e13-8b04-98e98afe35a9] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 16:45:48.504854   16725 system_pods.go:61] "nvidia-device-plugin-daemonset-v9pgt" [3f1896cc-99c7-4c98-8b64-9e40965c553b] Running
	I0914 16:45:48.504866   16725 system_pods.go:61] "registry-66c9cd494c-jdr7n" [1fa84874-319a-4e4a-9126-b618e477b31e] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0914 16:45:48.504876   16725 system_pods.go:61] "registry-proxy-b9ffc" [44b082a1-dd9e-4251-a141-6f0578d54a17] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0914 16:45:48.504890   16725 system_pods.go:61] "snapshot-controller-56fcc65765-cc2vz" [4663132f-a286-4aed-8845-8c2fb27ac546] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0914 16:45:48.504900   16725 system_pods.go:61] "snapshot-controller-56fcc65765-l6fxq" [719471e2-a6ad-4742-92a5-2ca1874e373c] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0914 16:45:48.504906   16725 system_pods.go:61] "storage-provisioner" [042983c1-0076-46d0-8022-ff8afde6de61] Running
	I0914 16:45:48.504920   16725 system_pods.go:61] "tiller-deploy-b48cc5f79-z2hbn" [62ae1fe8-58f5-422e-b2b8-abcdaf2e7693] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0914 16:45:48.504928   16725 system_pods.go:74] duration metric: took 164.743813ms to wait for pod list to return data ...
	I0914 16:45:48.504942   16725 default_sa.go:34] waiting for default service account to be created ...
	I0914 16:45:48.546545   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:45:48.692466   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:48.696319   16725 default_sa.go:45] found service account: "default"
	I0914 16:45:48.696367   16725 default_sa.go:55] duration metric: took 191.418164ms for default service account to be created ...
	I0914 16:45:48.696376   16725 system_pods.go:116] waiting for k8s-apps to be running ...
	I0914 16:45:48.808682   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:45:48.808951   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:48.920544   16725 system_pods.go:86] 18 kube-system pods found
	I0914 16:45:48.920575   16725 system_pods.go:89] "coredns-7c65d6cfc9-9p6z9" [8b60a487-876e-49a1-9a02-ff29269e6cd9] Running
	I0914 16:45:48.920585   16725 system_pods.go:89] "csi-hostpath-attacher-0" [fc163c87-b3c1-44fb-b23a-daf71f2476fd] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0914 16:45:48.920592   16725 system_pods.go:89] "csi-hostpath-resizer-0" [cb3dc269-4b68-41cc-8dac-f4e4cac02923] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0914 16:45:48.920600   16725 system_pods.go:89] "csi-hostpathplugin-j8fzx" [4c687703-e40a-48df-9dbf-ef6c5b71f2c9] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0914 16:45:48.920604   16725 system_pods.go:89] "etcd-addons-996992" [51dddf60-7bb8-4d07-b593-4841d49d04c6] Running
	I0914 16:45:48.920608   16725 system_pods.go:89] "kube-apiserver-addons-996992" [df7a9746-e613-42b3-99ae-376c32e5c9c5] Running
	I0914 16:45:48.920612   16725 system_pods.go:89] "kube-controller-manager-addons-996992" [d0f2e301-3365-4b32-8aa6-583d2794b9d1] Running
	I0914 16:45:48.920616   16725 system_pods.go:89] "kube-ingress-dns-minikube" [9bd2d610-b1b0-4b09-a8f9-29d24b8f3e18] Running
	I0914 16:45:48.920619   16725 system_pods.go:89] "kube-proxy-ll2cd" [77c4fbce-cceb-4918-871f-5d17932941f1] Running
	I0914 16:45:48.920623   16725 system_pods.go:89] "kube-scheduler-addons-996992" [e9922ffd-3c61-47c3-a0d0-2063f8e8484d] Running
	I0914 16:45:48.920629   16725 system_pods.go:89] "metrics-server-84c5f94fbc-zpthv" [5adc8bfb-2fb3-4e13-8b04-98e98afe35a9] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 16:45:48.920633   16725 system_pods.go:89] "nvidia-device-plugin-daemonset-v9pgt" [3f1896cc-99c7-4c98-8b64-9e40965c553b] Running
	I0914 16:45:48.920640   16725 system_pods.go:89] "registry-66c9cd494c-jdr7n" [1fa84874-319a-4e4a-9126-b618e477b31e] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0914 16:45:48.920645   16725 system_pods.go:89] "registry-proxy-b9ffc" [44b082a1-dd9e-4251-a141-6f0578d54a17] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0914 16:45:48.920652   16725 system_pods.go:89] "snapshot-controller-56fcc65765-cc2vz" [4663132f-a286-4aed-8845-8c2fb27ac546] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0914 16:45:48.920660   16725 system_pods.go:89] "snapshot-controller-56fcc65765-l6fxq" [719471e2-a6ad-4742-92a5-2ca1874e373c] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0914 16:45:48.920664   16725 system_pods.go:89] "storage-provisioner" [042983c1-0076-46d0-8022-ff8afde6de61] Running
	I0914 16:45:48.920669   16725 system_pods.go:89] "tiller-deploy-b48cc5f79-z2hbn" [62ae1fe8-58f5-422e-b2b8-abcdaf2e7693] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0914 16:45:48.920677   16725 system_pods.go:126] duration metric: took 224.295642ms to wait for k8s-apps to be running ...
	I0914 16:45:48.920684   16725 system_svc.go:44] waiting for kubelet service to be running ....
	I0914 16:45:48.920724   16725 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 16:45:48.937847   16725 system_svc.go:56] duration metric: took 17.154195ms WaitForService to wait for kubelet
	I0914 16:45:48.937878   16725 kubeadm.go:582] duration metric: took 38.315130323s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0914 16:45:48.937899   16725 node_conditions.go:102] verifying NodePressure condition ...
	I0914 16:45:49.048228   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:45:49.098325   16725 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0914 16:45:49.098385   16725 node_conditions.go:123] node cpu capacity is 2
	I0914 16:45:49.098398   16725 node_conditions.go:105] duration metric: took 160.494508ms to run NodePressure ...
	I0914 16:45:49.098410   16725 start.go:241] waiting for startup goroutines ...
	I0914 16:45:49.192082   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:49.306218   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:49.307323   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:45:49.547409   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:45:49.692860   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:49.807027   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:49.813086   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:45:50.047555   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:45:50.192775   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:50.306264   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:45:50.306398   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:50.547544   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:45:50.692765   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:50.806990   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:45:50.807136   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:51.047419   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:45:51.192036   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:51.306859   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:45:51.307240   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:51.546636   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:45:51.692296   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:51.807294   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:51.807691   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:45:52.046611   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:45:52.193349   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:52.306306   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:52.307173   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:45:52.547079   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:45:52.691900   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:52.806428   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:52.807573   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:45:53.046699   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:45:53.192419   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:53.306755   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:45:53.307712   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:53.552730   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:45:53.693022   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:53.805998   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:53.807006   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:45:54.047063   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:45:54.195701   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:54.308158   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:54.308170   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:45:54.547515   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:45:54.693931   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:54.806765   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:45:54.807175   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:55.047742   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:45:55.194005   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:55.306209   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:55.307788   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:45:55.546984   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:45:55.693279   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:55.807163   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:45:55.807663   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:56.052639   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:45:56.193934   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:56.317185   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:45:56.322650   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:56.547946   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:45:56.692907   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:56.812014   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:45:56.812358   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:57.047127   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:45:57.193740   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:57.307143   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:45:57.307407   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:57.547562   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:45:57.693212   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:57.806535   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:45:57.806710   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:58.046520   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:45:58.197798   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:58.307070   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:45:58.307765   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:58.547433   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:45:58.692299   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:58.806831   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:45:58.807481   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:59.046934   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:45:59.193174   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:59.307443   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:45:59.307669   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:59.548010   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:45:59.693092   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:59.807151   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:45:59.808268   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:46:00.047359   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:46:00.478614   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:46:00.479137   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:46:00.479508   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:46:00.547104   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:46:00.692282   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:46:00.806824   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:46:00.807536   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:46:01.047697   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:46:01.193726   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:46:01.307966   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:46:01.308014   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:46:01.547201   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:46:01.695313   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:46:01.806792   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:46:01.807383   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:46:02.047607   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:46:02.192475   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:46:02.306347   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:46:02.306833   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:46:02.547377   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:46:02.692730   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:46:02.807047   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:46:02.807463   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:46:03.047309   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:46:03.195015   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:46:03.307647   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:46:03.307817   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:46:03.547787   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:46:03.692947   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:46:03.807157   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:46:03.807344   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:46:04.048006   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:46:04.192987   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:46:04.318549   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:46:04.318994   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:46:04.547383   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:46:04.693036   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:46:04.805898   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:46:04.807705   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:46:05.047059   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:46:05.193631   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:46:05.306513   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:46:05.306799   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:46:05.546629   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:46:05.692830   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:46:05.806493   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:46:05.806880   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:46:06.046580   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:46:06.192054   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:46:06.306131   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:46:06.307575   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:46:06.547492   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:46:06.692615   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:46:06.806368   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:46:06.806725   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:46:07.046496   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:46:07.192627   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:46:07.311557   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:46:07.311733   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:46:07.547642   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:46:07.693080   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:46:07.806770   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:46:07.807306   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:46:08.047553   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:46:08.193062   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:46:08.306216   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:46:08.306825   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:46:08.547432   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:46:08.693198   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:46:08.806659   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:46:08.807567   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:46:09.046856   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:46:09.193443   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:46:09.306323   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:46:09.308192   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:46:09.547245   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:46:09.692407   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:46:09.807106   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:46:09.809300   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:46:10.050073   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:46:10.192821   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:46:10.307140   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:46:10.307386   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:46:10.547008   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:46:10.692575   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:46:10.806819   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:46:10.808404   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:46:11.047532   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:46:11.194303   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:46:11.306378   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:46:11.306880   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:46:11.547761   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:46:11.692624   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:46:11.811199   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:46:11.811447   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:46:12.047345   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:46:12.193374   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:46:12.306143   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:46:12.308049   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:46:12.546681   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:46:12.693001   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:46:12.806422   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:46:12.806748   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:46:13.046519   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:46:13.632563   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:46:13.632569   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:46:13.633214   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:46:13.633245   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:46:13.692680   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:46:13.806502   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:46:13.808264   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:46:14.047109   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:46:14.193313   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:46:14.305768   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:46:14.307495   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:46:14.547099   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:46:14.693347   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:46:14.806645   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:46:14.807536   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:46:15.046459   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:46:15.192401   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:46:15.307521   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:46:15.307739   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:46:15.548447   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:46:15.693811   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:46:15.805918   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:46:15.806859   16725 kapi.go:107] duration metric: took 56.503923107s to wait for kubernetes.io/minikube-addons=registry ...
	I0914 16:46:16.046482   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:46:16.192234   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:46:16.306338   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:46:16.547377   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:46:17.214224   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:46:17.214920   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:46:17.218540   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:46:17.221430   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:46:17.315378   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:46:17.551452   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:46:17.694597   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:46:17.806145   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:46:18.046558   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:46:18.192092   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:46:18.305661   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:46:18.547539   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:46:18.692638   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:46:18.806657   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:46:19.053521   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:46:19.193880   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:46:19.311277   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:46:19.546622   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:46:19.693339   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:46:19.806264   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:46:20.046500   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:46:20.192998   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:46:20.306067   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:46:20.547197   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:46:20.692597   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:46:20.807811   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:46:21.047801   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:46:21.192778   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:46:21.306452   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:46:21.547311   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:46:21.693049   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:46:21.827840   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:46:22.047273   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:46:22.192310   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:46:22.311209   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:46:22.838565   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:46:22.838932   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:46:22.839032   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:46:23.047177   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:46:23.193709   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:46:23.306794   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:46:23.547596   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:46:23.692382   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:46:23.807214   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:46:24.046485   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:46:24.192341   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:46:24.307183   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:46:24.546672   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:46:24.693935   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:46:24.810550   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:46:25.050252   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:46:25.195092   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:46:25.307161   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:46:25.549697   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:46:25.697541   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:46:25.806080   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:46:26.046708   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:46:26.192705   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:46:26.306674   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:46:26.547507   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:46:26.693182   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:46:26.806532   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:46:27.049050   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:46:27.196252   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:46:27.308707   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:46:27.547747   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:46:27.692965   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:46:27.807158   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:46:28.048325   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:46:28.193153   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:46:28.306290   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:46:28.546673   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:46:28.692592   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:46:28.806423   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:46:29.047119   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:46:29.193334   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:46:29.306364   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:46:29.547235   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:46:29.697436   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:46:29.807863   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:46:30.055007   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:46:30.193621   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:46:30.306752   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:46:30.547587   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:46:30.693117   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:46:30.806296   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:46:31.046378   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:46:31.193611   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:46:31.306059   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:46:31.546599   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:46:31.692393   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:46:31.806618   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:46:32.047197   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:46:32.199989   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:46:32.658958   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:46:32.659665   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:46:32.693594   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:46:32.813854   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:46:33.046793   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:46:33.194323   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:46:33.306864   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:46:33.547559   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:46:33.693855   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:46:33.808730   16725 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:46:34.048970   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:46:34.194651   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:46:34.307090   16725 kapi.go:107] duration metric: took 1m15.005037262s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0914 16:46:34.546875   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:46:34.694388   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:46:35.083057   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:46:35.193569   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:46:35.549326   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:46:35.692860   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:46:36.047852   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:46:36.192896   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:46:36.547520   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:46:36.693004   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:46:37.047621   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:46:37.192802   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:46:37.547115   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:46:37.707625   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:46:38.047500   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:46:38.192485   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:46:38.547359   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:46:38.692532   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:46:39.048815   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:46:39.192850   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:46:39.547858   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:46:39.693239   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:46:40.048117   16725 kapi.go:107] duration metric: took 1m18.00480647s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0914 16:46:40.049808   16725 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-996992 cluster.
	I0914 16:46:40.050997   16725 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0914 16:46:40.052104   16725 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0914 16:46:40.193221   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:46:40.693480   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:46:41.192757   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:46:41.707864   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:46:42.193577   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:46:42.693176   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:46:43.192560   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:46:44.006023   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:46:44.193094   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:46:44.693734   16725 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:46:45.193109   16725 kapi.go:107] duration metric: took 1m24.505060721s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0914 16:46:45.194961   16725 out.go:177] * Enabled addons: cloud-spanner, ingress-dns, storage-provisioner, nvidia-device-plugin, metrics-server, helm-tiller, inspektor-gadget, yakd, storage-provisioner-rancher, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I0914 16:46:45.196167   16725 addons.go:510] duration metric: took 1m34.573399474s for enable addons: enabled=[cloud-spanner ingress-dns storage-provisioner nvidia-device-plugin metrics-server helm-tiller inspektor-gadget yakd storage-provisioner-rancher volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I0914 16:46:45.196214   16725 start.go:246] waiting for cluster config update ...
	I0914 16:46:45.196250   16725 start.go:255] writing updated cluster config ...
	I0914 16:46:45.196519   16725 ssh_runner.go:195] Run: rm -f paused
	I0914 16:46:45.248928   16725 start.go:600] kubectl: 1.31.0, cluster: 1.31.1 (minor skew: 0)
	I0914 16:46:45.250609   16725 out.go:177] * Done! kubectl is now configured to use "addons-996992" cluster and "default" namespace by default
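
	The repeated kapi.go:96 "waiting for pod ..., current state: Pending" lines above, and the kapi.go:107 "duration metric: took ..." lines that close each of them, come from a label-selector poll: minikube lists the pods matching an addon's label and retries until they report Running, then records how long the wait took. The sketch below is an illustrative reconstruction of that pattern using client-go, not minikube's actual kapi.go code; the function name waitForPodsByLabel and its parameters are hypothetical.

	// Illustrative sketch (assumed helper, not minikube's implementation): poll pods
	// selected by a label until all of them report Running, mirroring the
	// "waiting for pod ... current state: Pending" / "duration metric: took ..."
	// pattern in the log above.
	package kapisketch

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)

	// waitForPodsByLabel blocks until every pod matching selector in ns is Running,
	// or until timeout expires, and returns how long the wait took.
	func waitForPodsByLabel(ctx context.Context, cs kubernetes.Interface, ns, selector string, interval, timeout time.Duration) (time.Duration, error) {
		start := time.Now()
		deadline := start.Add(timeout)
		for {
			pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
			if err == nil && len(pods.Items) > 0 {
				allRunning := true
				for _, p := range pods.Items {
					if p.Status.Phase != corev1.PodRunning {
						allRunning = false
						// Comparable to the kapi.go:96 lines in the log.
						fmt.Printf("waiting for pod %q, current state: %s\n", selector, p.Status.Phase)
						break
					}
				}
				if allRunning {
					// Comparable to the kapi.go:107 "duration metric" lines.
					return time.Since(start), nil
				}
			}
			if time.Now().After(deadline) {
				return time.Since(start), fmt.Errorf("timed out waiting for pods matching %q", selector)
			}
			time.Sleep(interval)
		}
	}

	A caller would pass the addon's namespace, a selector such as "kubernetes.io/minikube-addons=gcp-auth", and the test's timeout. Separately, per the gcp-auth message above, a pod can opt out of having credentials mounted by carrying a label with the `gcp-auth-skip-secret` key in its metadata.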
	
	
	==> CRI-O <==
	Sep 14 17:00:29 addons-996992 crio[669]: time="2024-09-14 17:00:29.213037016Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726333229213010684,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:579802,},InodesUsed:&UInt64Value{Value:200,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8f5bd578-9f72-4e12-9d5f-7b5939252d5b name=/runtime.v1.ImageService/ImageFsInfo
	Sep 14 17:00:29 addons-996992 crio[669]: time="2024-09-14 17:00:29.213560018Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e7be460f-f29f-4ffb-9afb-a66af7b911d2 name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 17:00:29 addons-996992 crio[669]: time="2024-09-14 17:00:29.213625381Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e7be460f-f29f-4ffb-9afb-a66af7b911d2 name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 17:00:29 addons-996992 crio[669]: time="2024-09-14 17:00:29.213876119Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:70d1675e8bf6137dbed4c2c8ba1ede0a600a3f0ce9709b8d818063a813154f29,PodSandboxId:5c1407129f05aa2651ba95000da59da45faa9159fc388ccf76ab45c67c52a2fb,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1726333091273345411,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-lf7nc,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 68e73e62-5b8c-43a1-b47c-fe3aac3fc269,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a2c842e27b9de8f75edb92b055573943c47a606fe40f848399f1053c007349d9,PodSandboxId:8164a72938eecad1dd7f3da09097d7f0e2eb28f6662a0578bf01cfed126c0a5d,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:074604130336e3c431b7c6b5b551b5a6ae5b67db13b3d223c6db638f85c7ff56,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7b4f26a7d93f4f1f276c51adb03ef0df54a82de89f254a9aec5c18bf0e45ee9,State:CONTAINER_RUNNING,CreatedAt:1726332950251160999,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e9aa988e-e59a-44dd-84f7-753b4db11866,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b1fc29dced5ee78fdeef5ed32bd7516882b6f3d25bf63964b7313e5b1a180188,PodSandboxId:5ac3aa6b762eab56fc54827e80aa928dfdebefbe7ea497526be6db2fe05f6299,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1726332399300943119,Labels:map[string]string{io.kubernetes.container.name: gcp-auth,io.kubernetes.pod.name: gcp-auth-89d5ffd79-smf6s,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.
pod.uid: d9222c8a-bc84-4c20-b546-3034abbe136c,},Annotations:map[string]string{io.kubernetes.container.hash: 91308b2f,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e8c78f14b17e7a8079ad435e0d378a510d0f1a1d0e67507457d582229b0dc7f8,PodSandboxId:1aa3f3cb51004f04eaa4495d7b5529a80931a47297b10c4983a9dd325aac62e6,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1726332355816734887,Labels:map[string]string{io.kubernetes.container.name: metr
ics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-zpthv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5adc8bfb-2fb3-4e13-8b04-98e98afe35a9,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7f90cf12b43136552be3a9facb2fe5b7e9005a06d15f676d8f77c8fc53ddcb5a,PodSandboxId:5527e3f395706db407371bb2608d8833aefcda9f876b3d2d02c803c0c8b8952d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNN
ING,CreatedAt:1726332317456742589,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 042983c1-0076-46d0-8022-ff8afde6de61,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b39fe7c77bdab915cba2003e37d8a932f93c89d2816b66c662cd2b87f189195f,PodSandboxId:1c0a11c1d7f7c5b63da3f4f21f46979730ced4066e78410da9748fd11ab03ced,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726332314
086297257,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-9p6z9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b60a487-876e-49a1-9a02-ff29269e6cd9,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7636b49f23d35f5a9fc075a76348fc31ad18da955d4e73fbad3d24534ef9282a,PodSandboxId:816f86f6b29aba5b03d04c7c4802bafefe963ba526eeadfef60a9836d272fa1f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf0
6a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726332313248815620,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ll2cd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 77c4fbce-cceb-4918-871f-5d17932941f1,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e180103456d123610ca2b41dad9814f3b54f68e8eaaea458cbea9621834f9f2,PodSandboxId:25abc346c251645814f2dd057edf9854dd7da6fc11dfd725d187a5f4ca0cae6d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[str
ing]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726332300335194420,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-996992,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 72d75e1235958269ae116b8dd976eed9,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:62ccf13035320215b184582540baa2f498de301a8e3c9d4e3ad1b2a3595c2c8d,PodSandboxId:ce41d60ed0525314b5cc45be11a88a4496c36c13e3962371b7f675683952dd36,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},User
SpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726332300342876293,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-996992,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b56682c56107b7e0cb4de8d53e660eb,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:244c994b666b95b76ae5dd25d00b91d997db1974a4a83407b48f9d78d71cf309,PodSandboxId:15fa01d2627fb717b0a294d421654f46fec23ec17c8a8fcaaf18285b094f7812,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHand
ler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726332300313273026,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-996992,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82e0846c71461dab7faa1185c43b9171,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6da48572a3f2741574a1da7660f53f17bb817d3e49e25fd9f41ecf486aa65d5,PodSandboxId:476c6d893727453e5ceaf6f071c23932b5a6bb6e8630053bd70d7c0e05079db0,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3
d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726332300190893967,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-996992,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2361d2ae563c8cc1b1f997c60c5996b9,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e7be460f-f29f-4ffb-9afb-a66af7b911d2 name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 17:00:29 addons-996992 crio[669]: time="2024-09-14 17:00:29.250550376Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=d6209d04-7afa-4ac3-9c7d-d8c60591e1c7 name=/runtime.v1.RuntimeService/Version
	Sep 14 17:00:29 addons-996992 crio[669]: time="2024-09-14 17:00:29.250621893Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d6209d04-7afa-4ac3-9c7d-d8c60591e1c7 name=/runtime.v1.RuntimeService/Version
	Sep 14 17:00:29 addons-996992 crio[669]: time="2024-09-14 17:00:29.251823347Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f0029520-b95b-4e57-a175-7fadff15c928 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 14 17:00:29 addons-996992 crio[669]: time="2024-09-14 17:00:29.252991492Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726333229252963265,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:579802,},InodesUsed:&UInt64Value{Value:200,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f0029520-b95b-4e57-a175-7fadff15c928 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 14 17:00:29 addons-996992 crio[669]: time="2024-09-14 17:00:29.253457241Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9ca6e4f2-5819-4554-91e1-8400490b711f name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 17:00:29 addons-996992 crio[669]: time="2024-09-14 17:00:29.253514266Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9ca6e4f2-5819-4554-91e1-8400490b711f name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 17:00:29 addons-996992 crio[669]: time="2024-09-14 17:00:29.253787562Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:70d1675e8bf6137dbed4c2c8ba1ede0a600a3f0ce9709b8d818063a813154f29,PodSandboxId:5c1407129f05aa2651ba95000da59da45faa9159fc388ccf76ab45c67c52a2fb,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1726333091273345411,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-lf7nc,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 68e73e62-5b8c-43a1-b47c-fe3aac3fc269,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a2c842e27b9de8f75edb92b055573943c47a606fe40f848399f1053c007349d9,PodSandboxId:8164a72938eecad1dd7f3da09097d7f0e2eb28f6662a0578bf01cfed126c0a5d,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:074604130336e3c431b7c6b5b551b5a6ae5b67db13b3d223c6db638f85c7ff56,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7b4f26a7d93f4f1f276c51adb03ef0df54a82de89f254a9aec5c18bf0e45ee9,State:CONTAINER_RUNNING,CreatedAt:1726332950251160999,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e9aa988e-e59a-44dd-84f7-753b4db11866,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b1fc29dced5ee78fdeef5ed32bd7516882b6f3d25bf63964b7313e5b1a180188,PodSandboxId:5ac3aa6b762eab56fc54827e80aa928dfdebefbe7ea497526be6db2fe05f6299,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1726332399300943119,Labels:map[string]string{io.kubernetes.container.name: gcp-auth,io.kubernetes.pod.name: gcp-auth-89d5ffd79-smf6s,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.
pod.uid: d9222c8a-bc84-4c20-b546-3034abbe136c,},Annotations:map[string]string{io.kubernetes.container.hash: 91308b2f,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e8c78f14b17e7a8079ad435e0d378a510d0f1a1d0e67507457d582229b0dc7f8,PodSandboxId:1aa3f3cb51004f04eaa4495d7b5529a80931a47297b10c4983a9dd325aac62e6,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1726332355816734887,Labels:map[string]string{io.kubernetes.container.name: metr
ics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-zpthv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5adc8bfb-2fb3-4e13-8b04-98e98afe35a9,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7f90cf12b43136552be3a9facb2fe5b7e9005a06d15f676d8f77c8fc53ddcb5a,PodSandboxId:5527e3f395706db407371bb2608d8833aefcda9f876b3d2d02c803c0c8b8952d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNN
ING,CreatedAt:1726332317456742589,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 042983c1-0076-46d0-8022-ff8afde6de61,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b39fe7c77bdab915cba2003e37d8a932f93c89d2816b66c662cd2b87f189195f,PodSandboxId:1c0a11c1d7f7c5b63da3f4f21f46979730ced4066e78410da9748fd11ab03ced,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726332314
086297257,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-9p6z9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b60a487-876e-49a1-9a02-ff29269e6cd9,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7636b49f23d35f5a9fc075a76348fc31ad18da955d4e73fbad3d24534ef9282a,PodSandboxId:816f86f6b29aba5b03d04c7c4802bafefe963ba526eeadfef60a9836d272fa1f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf0
6a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726332313248815620,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ll2cd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 77c4fbce-cceb-4918-871f-5d17932941f1,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e180103456d123610ca2b41dad9814f3b54f68e8eaaea458cbea9621834f9f2,PodSandboxId:25abc346c251645814f2dd057edf9854dd7da6fc11dfd725d187a5f4ca0cae6d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[str
ing]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726332300335194420,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-996992,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 72d75e1235958269ae116b8dd976eed9,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:62ccf13035320215b184582540baa2f498de301a8e3c9d4e3ad1b2a3595c2c8d,PodSandboxId:ce41d60ed0525314b5cc45be11a88a4496c36c13e3962371b7f675683952dd36,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},User
SpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726332300342876293,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-996992,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b56682c56107b7e0cb4de8d53e660eb,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:244c994b666b95b76ae5dd25d00b91d997db1974a4a83407b48f9d78d71cf309,PodSandboxId:15fa01d2627fb717b0a294d421654f46fec23ec17c8a8fcaaf18285b094f7812,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHand
ler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726332300313273026,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-996992,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82e0846c71461dab7faa1185c43b9171,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6da48572a3f2741574a1da7660f53f17bb817d3e49e25fd9f41ecf486aa65d5,PodSandboxId:476c6d893727453e5ceaf6f071c23932b5a6bb6e8630053bd70d7c0e05079db0,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3
d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726332300190893967,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-996992,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2361d2ae563c8cc1b1f997c60c5996b9,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9ca6e4f2-5819-4554-91e1-8400490b711f name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 17:00:29 addons-996992 crio[669]: time="2024-09-14 17:00:29.286327530Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=6364d9cb-5c1c-4a37-b7e1-20ad17a1d932 name=/runtime.v1.RuntimeService/Version
	Sep 14 17:00:29 addons-996992 crio[669]: time="2024-09-14 17:00:29.286413274Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=6364d9cb-5c1c-4a37-b7e1-20ad17a1d932 name=/runtime.v1.RuntimeService/Version
	Sep 14 17:00:29 addons-996992 crio[669]: time="2024-09-14 17:00:29.288037452Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=595ad670-1bca-489f-99f2-d94eb7c181e0 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 14 17:00:29 addons-996992 crio[669]: time="2024-09-14 17:00:29.289388105Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726333229289361433,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:579802,},InodesUsed:&UInt64Value{Value:200,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=595ad670-1bca-489f-99f2-d94eb7c181e0 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 14 17:00:29 addons-996992 crio[669]: time="2024-09-14 17:00:29.290025129Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5762105e-a016-49a9-9774-c48339a25797 name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 17:00:29 addons-996992 crio[669]: time="2024-09-14 17:00:29.290130043Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5762105e-a016-49a9-9774-c48339a25797 name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 17:00:29 addons-996992 crio[669]: time="2024-09-14 17:00:29.290482095Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:70d1675e8bf6137dbed4c2c8ba1ede0a600a3f0ce9709b8d818063a813154f29,PodSandboxId:5c1407129f05aa2651ba95000da59da45faa9159fc388ccf76ab45c67c52a2fb,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1726333091273345411,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-lf7nc,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 68e73e62-5b8c-43a1-b47c-fe3aac3fc269,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a2c842e27b9de8f75edb92b055573943c47a606fe40f848399f1053c007349d9,PodSandboxId:8164a72938eecad1dd7f3da09097d7f0e2eb28f6662a0578bf01cfed126c0a5d,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:074604130336e3c431b7c6b5b551b5a6ae5b67db13b3d223c6db638f85c7ff56,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7b4f26a7d93f4f1f276c51adb03ef0df54a82de89f254a9aec5c18bf0e45ee9,State:CONTAINER_RUNNING,CreatedAt:1726332950251160999,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e9aa988e-e59a-44dd-84f7-753b4db11866,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b1fc29dced5ee78fdeef5ed32bd7516882b6f3d25bf63964b7313e5b1a180188,PodSandboxId:5ac3aa6b762eab56fc54827e80aa928dfdebefbe7ea497526be6db2fe05f6299,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1726332399300943119,Labels:map[string]string{io.kubernetes.container.name: gcp-auth,io.kubernetes.pod.name: gcp-auth-89d5ffd79-smf6s,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.
pod.uid: d9222c8a-bc84-4c20-b546-3034abbe136c,},Annotations:map[string]string{io.kubernetes.container.hash: 91308b2f,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e8c78f14b17e7a8079ad435e0d378a510d0f1a1d0e67507457d582229b0dc7f8,PodSandboxId:1aa3f3cb51004f04eaa4495d7b5529a80931a47297b10c4983a9dd325aac62e6,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1726332355816734887,Labels:map[string]string{io.kubernetes.container.name: metr
ics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-zpthv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5adc8bfb-2fb3-4e13-8b04-98e98afe35a9,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7f90cf12b43136552be3a9facb2fe5b7e9005a06d15f676d8f77c8fc53ddcb5a,PodSandboxId:5527e3f395706db407371bb2608d8833aefcda9f876b3d2d02c803c0c8b8952d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNN
ING,CreatedAt:1726332317456742589,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 042983c1-0076-46d0-8022-ff8afde6de61,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b39fe7c77bdab915cba2003e37d8a932f93c89d2816b66c662cd2b87f189195f,PodSandboxId:1c0a11c1d7f7c5b63da3f4f21f46979730ced4066e78410da9748fd11ab03ced,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726332314
086297257,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-9p6z9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b60a487-876e-49a1-9a02-ff29269e6cd9,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7636b49f23d35f5a9fc075a76348fc31ad18da955d4e73fbad3d24534ef9282a,PodSandboxId:816f86f6b29aba5b03d04c7c4802bafefe963ba526eeadfef60a9836d272fa1f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf0
6a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726332313248815620,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ll2cd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 77c4fbce-cceb-4918-871f-5d17932941f1,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e180103456d123610ca2b41dad9814f3b54f68e8eaaea458cbea9621834f9f2,PodSandboxId:25abc346c251645814f2dd057edf9854dd7da6fc11dfd725d187a5f4ca0cae6d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[str
ing]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726332300335194420,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-996992,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 72d75e1235958269ae116b8dd976eed9,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:62ccf13035320215b184582540baa2f498de301a8e3c9d4e3ad1b2a3595c2c8d,PodSandboxId:ce41d60ed0525314b5cc45be11a88a4496c36c13e3962371b7f675683952dd36,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},User
SpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726332300342876293,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-996992,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b56682c56107b7e0cb4de8d53e660eb,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:244c994b666b95b76ae5dd25d00b91d997db1974a4a83407b48f9d78d71cf309,PodSandboxId:15fa01d2627fb717b0a294d421654f46fec23ec17c8a8fcaaf18285b094f7812,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHand
ler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726332300313273026,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-996992,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82e0846c71461dab7faa1185c43b9171,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6da48572a3f2741574a1da7660f53f17bb817d3e49e25fd9f41ecf486aa65d5,PodSandboxId:476c6d893727453e5ceaf6f071c23932b5a6bb6e8630053bd70d7c0e05079db0,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3
d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726332300190893967,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-996992,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2361d2ae563c8cc1b1f997c60c5996b9,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5762105e-a016-49a9-9774-c48339a25797 name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 17:00:29 addons-996992 crio[669]: time="2024-09-14 17:00:29.331872045Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=9be90ce0-7b0e-4176-8090-36ce7d1d436e name=/runtime.v1.RuntimeService/Version
	Sep 14 17:00:29 addons-996992 crio[669]: time="2024-09-14 17:00:29.331954984Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=9be90ce0-7b0e-4176-8090-36ce7d1d436e name=/runtime.v1.RuntimeService/Version
	Sep 14 17:00:29 addons-996992 crio[669]: time="2024-09-14 17:00:29.333166245Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=022be91d-4c69-43af-bee8-077df55af2c4 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 14 17:00:29 addons-996992 crio[669]: time="2024-09-14 17:00:29.334326371Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726333229334300480,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:579802,},InodesUsed:&UInt64Value{Value:200,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=022be91d-4c69-43af-bee8-077df55af2c4 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 14 17:00:29 addons-996992 crio[669]: time="2024-09-14 17:00:29.334877376Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f0ccdbf2-fe95-416a-891d-f0e99685dfd8 name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 17:00:29 addons-996992 crio[669]: time="2024-09-14 17:00:29.334945444Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f0ccdbf2-fe95-416a-891d-f0e99685dfd8 name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 17:00:29 addons-996992 crio[669]: time="2024-09-14 17:00:29.335261392Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:70d1675e8bf6137dbed4c2c8ba1ede0a600a3f0ce9709b8d818063a813154f29,PodSandboxId:5c1407129f05aa2651ba95000da59da45faa9159fc388ccf76ab45c67c52a2fb,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1726333091273345411,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-lf7nc,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 68e73e62-5b8c-43a1-b47c-fe3aac3fc269,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a2c842e27b9de8f75edb92b055573943c47a606fe40f848399f1053c007349d9,PodSandboxId:8164a72938eecad1dd7f3da09097d7f0e2eb28f6662a0578bf01cfed126c0a5d,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:074604130336e3c431b7c6b5b551b5a6ae5b67db13b3d223c6db638f85c7ff56,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7b4f26a7d93f4f1f276c51adb03ef0df54a82de89f254a9aec5c18bf0e45ee9,State:CONTAINER_RUNNING,CreatedAt:1726332950251160999,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e9aa988e-e59a-44dd-84f7-753b4db11866,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b1fc29dced5ee78fdeef5ed32bd7516882b6f3d25bf63964b7313e5b1a180188,PodSandboxId:5ac3aa6b762eab56fc54827e80aa928dfdebefbe7ea497526be6db2fe05f6299,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1726332399300943119,Labels:map[string]string{io.kubernetes.container.name: gcp-auth,io.kubernetes.pod.name: gcp-auth-89d5ffd79-smf6s,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.
pod.uid: d9222c8a-bc84-4c20-b546-3034abbe136c,},Annotations:map[string]string{io.kubernetes.container.hash: 91308b2f,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e8c78f14b17e7a8079ad435e0d378a510d0f1a1d0e67507457d582229b0dc7f8,PodSandboxId:1aa3f3cb51004f04eaa4495d7b5529a80931a47297b10c4983a9dd325aac62e6,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1726332355816734887,Labels:map[string]string{io.kubernetes.container.name: metr
ics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-zpthv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5adc8bfb-2fb3-4e13-8b04-98e98afe35a9,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7f90cf12b43136552be3a9facb2fe5b7e9005a06d15f676d8f77c8fc53ddcb5a,PodSandboxId:5527e3f395706db407371bb2608d8833aefcda9f876b3d2d02c803c0c8b8952d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNN
ING,CreatedAt:1726332317456742589,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 042983c1-0076-46d0-8022-ff8afde6de61,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b39fe7c77bdab915cba2003e37d8a932f93c89d2816b66c662cd2b87f189195f,PodSandboxId:1c0a11c1d7f7c5b63da3f4f21f46979730ced4066e78410da9748fd11ab03ced,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726332314
086297257,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-9p6z9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b60a487-876e-49a1-9a02-ff29269e6cd9,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7636b49f23d35f5a9fc075a76348fc31ad18da955d4e73fbad3d24534ef9282a,PodSandboxId:816f86f6b29aba5b03d04c7c4802bafefe963ba526eeadfef60a9836d272fa1f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf0
6a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726332313248815620,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ll2cd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 77c4fbce-cceb-4918-871f-5d17932941f1,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e180103456d123610ca2b41dad9814f3b54f68e8eaaea458cbea9621834f9f2,PodSandboxId:25abc346c251645814f2dd057edf9854dd7da6fc11dfd725d187a5f4ca0cae6d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[str
ing]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726332300335194420,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-996992,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 72d75e1235958269ae116b8dd976eed9,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:62ccf13035320215b184582540baa2f498de301a8e3c9d4e3ad1b2a3595c2c8d,PodSandboxId:ce41d60ed0525314b5cc45be11a88a4496c36c13e3962371b7f675683952dd36,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},User
SpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726332300342876293,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-996992,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b56682c56107b7e0cb4de8d53e660eb,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:244c994b666b95b76ae5dd25d00b91d997db1974a4a83407b48f9d78d71cf309,PodSandboxId:15fa01d2627fb717b0a294d421654f46fec23ec17c8a8fcaaf18285b094f7812,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHand
ler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726332300313273026,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-996992,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82e0846c71461dab7faa1185c43b9171,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6da48572a3f2741574a1da7660f53f17bb817d3e49e25fd9f41ecf486aa65d5,PodSandboxId:476c6d893727453e5ceaf6f071c23932b5a6bb6e8630053bd70d7c0e05079db0,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3
d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726332300190893967,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-996992,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2361d2ae563c8cc1b1f997c60c5996b9,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f0ccdbf2-fe95-416a-891d-f0e99685dfd8 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                   CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	70d1675e8bf61       docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6                   2 minutes ago       Running             hello-world-app           0                   5c1407129f05a       hello-world-app-55bf9c44b4-lf7nc
	a2c842e27b9de       docker.io/library/nginx@sha256:074604130336e3c431b7c6b5b551b5a6ae5b67db13b3d223c6db638f85c7ff56                         4 minutes ago       Running             nginx                     0                   8164a72938eec       nginx
	b1fc29dced5ee       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b            13 minutes ago      Running             gcp-auth                  0                   5ac3aa6b762ea       gcp-auth-89d5ffd79-smf6s
	e8c78f14b17e7       registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a   14 minutes ago      Running             metrics-server            0                   1aa3f3cb51004       metrics-server-84c5f94fbc-zpthv
	7f90cf12b4313       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                        15 minutes ago      Running             storage-provisioner       0                   5527e3f395706       storage-provisioner
	b39fe7c77bdab       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                                        15 minutes ago      Running             coredns                   0                   1c0a11c1d7f7c       coredns-7c65d6cfc9-9p6z9
	7636b49f23d35       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                                        15 minutes ago      Running             kube-proxy                0                   816f86f6b29ab       kube-proxy-ll2cd
	62ccf13035320       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                                        15 minutes ago      Running             kube-scheduler            0                   ce41d60ed0525       kube-scheduler-addons-996992
	9e180103456d1       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                                        15 minutes ago      Running             kube-apiserver            0                   25abc346c2516       kube-apiserver-addons-996992
	244c994b666b9       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                                        15 minutes ago      Running             etcd                      0                   15fa01d2627fb       etcd-addons-996992
	b6da48572a3f2       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                                        15 minutes ago      Running             kube-controller-manager   0                   476c6d8937274       kube-controller-manager-addons-996992
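
The container status table above is the CRI-level view of the node at the moment the logs were collected. As a debugging aid, roughly the same listing can be reproduced by querying cri-o directly on the node; a minimal sketch, assuming crictl is available inside the minikube VM for this profile (the container ID prefix is taken from the table above):

    minikube -p addons-996992 ssh "sudo crictl ps -a"
    minikube -p addons-996992 ssh "sudo crictl inspect 70d1675e8bf61"

The second command dumps the full runtime record for the hello-world-app container, including the labels and annotations repeated throughout the crio debug entries.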
	
	
	==> coredns [b39fe7c77bdab915cba2003e37d8a932f93c89d2816b66c662cd2b87f189195f] <==
	[INFO] 127.0.0.1:41202 - 28347 "HINFO IN 1673696776001178715.7846265792048933670. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.013145705s
	[INFO] 10.244.0.6:33528 - 34854 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000861082s
	[INFO] 10.244.0.6:33528 - 56874 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000509102s
	[INFO] 10.244.0.6:49882 - 44252 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000179055s
	[INFO] 10.244.0.6:49882 - 26330 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000086967s
	[INFO] 10.244.0.6:56229 - 8877 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000096878s
	[INFO] 10.244.0.6:56229 - 29867 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000094082s
	[INFO] 10.244.0.6:60530 - 59893 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000128321s
	[INFO] 10.244.0.6:60530 - 13042 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000157038s
	[INFO] 10.244.0.6:59365 - 64212 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000145076s
	[INFO] 10.244.0.6:59365 - 23496 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000053277s
	[INFO] 10.244.0.6:38693 - 47172 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000089079s
	[INFO] 10.244.0.6:38693 - 34881 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000266922s
	[INFO] 10.244.0.6:57815 - 40259 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000061127s
	[INFO] 10.244.0.6:57815 - 21061 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000054151s
	[INFO] 10.244.0.6:54487 - 49983 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000049761s
	[INFO] 10.244.0.6:54487 - 43833 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000105815s
	[INFO] 10.244.0.22:49719 - 23493 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000476893s
	[INFO] 10.244.0.22:58157 - 28044 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000101631s
	[INFO] 10.244.0.22:49755 - 34273 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000139903s
	[INFO] 10.244.0.22:34695 - 62237 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000115272s
	[INFO] 10.244.0.22:38487 - 8705 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000122294s
	[INFO] 10.244.0.22:34286 - 15998 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.00008471s
	[INFO] 10.244.0.22:36588 - 36023 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.002660038s
	[INFO] 10.244.0.22:43999 - 38790 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 610 0.000715506s
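
The coredns entries above show the registry and storage.googleapis.com names being resolved only after search-domain expansion: the .kube-system.svc / .svc / .cluster.local suffixes are tried first and return NXDOMAIN, then the fully qualified name returns NOERROR. That is the expected ndots:5 behaviour of the kubelet-generated resolv.conf, not a DNS failure. In-cluster resolution can be re-checked from a throwaway busybox pod; a sketch using the same context name and image as this run (the pod name dns-check is arbitrary):

    kubectl --context addons-996992 run dns-check --rm -it --image=gcr.io/k8s-minikube/busybox --restart=Never -- nslookup registry.kube-system.svc.cluster.local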
	
	
	==> describe nodes <==
	Name:               addons-996992
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-996992
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fbeeb744274463b05401c917e5ab21bbaf5ef95a
	                    minikube.k8s.io/name=addons-996992
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_14T16_45_06_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-996992
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 14 Sep 2024 16:45:02 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-996992
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 14 Sep 2024 17:00:21 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 14 Sep 2024 16:58:42 +0000   Sat, 14 Sep 2024 16:45:01 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 14 Sep 2024 16:58:42 +0000   Sat, 14 Sep 2024 16:45:01 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 14 Sep 2024 16:58:42 +0000   Sat, 14 Sep 2024 16:45:01 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 14 Sep 2024 16:58:42 +0000   Sat, 14 Sep 2024 16:45:06 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.189
	  Hostname:    addons-996992
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	System Info:
	  Machine ID:                 5e2b58bc38a04bd6877d6321c8c25636
	  System UUID:                5e2b58bc-38a0-4bd6-877d-6321c8c25636
	  Boot ID:                    bc515e37-5984-41bc-90ff-4a341c7992e5
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  default                     hello-world-app-55bf9c44b4-lf7nc         0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m21s
	  default                     nginx                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m43s
	  gcp-auth                    gcp-auth-89d5ffd79-smf6s                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 coredns-7c65d6cfc9-9p6z9                 100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     15m
	  kube-system                 etcd-addons-996992                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         15m
	  kube-system                 kube-apiserver-addons-996992             250m (12%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-controller-manager-addons-996992    200m (10%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-proxy-ll2cd                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-scheduler-addons-996992             100m (5%)     0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 metrics-server-84c5f94fbc-zpthv          100m (5%)     0 (0%)      200Mi (5%)       0 (0%)         15m
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             370Mi (9%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 15m   kube-proxy       
	  Normal  Starting                 15m   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  15m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  15m   kubelet          Node addons-996992 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    15m   kubelet          Node addons-996992 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     15m   kubelet          Node addons-996992 status is now: NodeHasSufficientPID
	  Normal  NodeReady                15m   kubelet          Node addons-996992 status is now: NodeReady
	  Normal  RegisteredNode           15m   node-controller  Node addons-996992 event: Registered Node addons-996992 in Controller
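
Everything in the node description is nominal: Ready since 16:45:06, no memory, disk, or PID pressure, and requests well under capacity (850m CPU, 370Mi memory on a 2-CPU, ~3.9Gi node). For scripted checks the same readiness condition can be read with a jsonpath query; a small sketch using the context and node name from this run:

    kubectl --context addons-996992 get node addons-996992 \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'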
	
	
	==> dmesg <==
	[  +6.057467] kauditd_printk_skb: 65 callbacks suppressed
	[ +26.543298] kauditd_printk_skb: 4 callbacks suppressed
	[Sep14 16:46] kauditd_printk_skb: 34 callbacks suppressed
	[  +5.726173] kauditd_printk_skb: 6 callbacks suppressed
	[ +13.858248] kauditd_printk_skb: 17 callbacks suppressed
	[  +5.366113] kauditd_printk_skb: 49 callbacks suppressed
	[  +7.648867] kauditd_printk_skb: 56 callbacks suppressed
	[  +5.829438] kauditd_printk_skb: 6 callbacks suppressed
	[  +5.753456] kauditd_printk_skb: 16 callbacks suppressed
	[Sep14 16:47] kauditd_printk_skb: 40 callbacks suppressed
	[Sep14 16:48] kauditd_printk_skb: 28 callbacks suppressed
	[Sep14 16:49] kauditd_printk_skb: 28 callbacks suppressed
	[Sep14 16:52] kauditd_printk_skb: 28 callbacks suppressed
	[Sep14 16:54] kauditd_printk_skb: 28 callbacks suppressed
	[  +5.088825] kauditd_printk_skb: 19 callbacks suppressed
	[  +5.292544] kauditd_printk_skb: 15 callbacks suppressed
	[Sep14 16:55] kauditd_printk_skb: 62 callbacks suppressed
	[  +6.127400] kauditd_printk_skb: 12 callbacks suppressed
	[ +26.498820] kauditd_printk_skb: 15 callbacks suppressed
	[  +8.490747] kauditd_printk_skb: 33 callbacks suppressed
	[  +6.865254] kauditd_printk_skb: 29 callbacks suppressed
	[Sep14 16:56] kauditd_printk_skb: 30 callbacks suppressed
	[  +5.046455] kauditd_printk_skb: 17 callbacks suppressed
	[Sep14 16:58] kauditd_printk_skb: 2 callbacks suppressed
	[  +5.308277] kauditd_printk_skb: 21 callbacks suppressed
	
	
	==> etcd [244c994b666b95b76ae5dd25d00b91d997db1974a4a83407b48f9d78d71cf309] <==
	{"level":"info","ts":"2024-09-14T16:46:43.987957Z","caller":"traceutil/trace.go:171","msg":"trace[82175678] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1170; }","duration":"310.494191ms","start":"2024-09-14T16:46:43.677445Z","end":"2024-09-14T16:46:43.987939Z","steps":["trace[82175678] 'range keys from in-memory index tree'  (duration: 310.27123ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-14T16:46:43.988032Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-14T16:46:43.677409Z","time spent":"310.610725ms","remote":"127.0.0.1:39968","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":28,"request content":"key:\"/registry/pods\" limit:1 "}
	{"level":"warn","ts":"2024-09-14T16:46:43.988534Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"100.452534ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1113"}
	{"level":"info","ts":"2024-09-14T16:46:43.988944Z","caller":"traceutil/trace.go:171","msg":"trace[925455638] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:1170; }","duration":"100.863861ms","start":"2024-09-14T16:46:43.888062Z","end":"2024-09-14T16:46:43.988926Z","steps":["trace[925455638] 'range keys from in-memory index tree'  (duration: 100.282057ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-14T16:55:01.401239Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1531}
	{"level":"info","ts":"2024-09-14T16:55:01.453343Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1531,"took":"51.605059ms","hash":2194584676,"current-db-size-bytes":6504448,"current-db-size":"6.5 MB","current-db-size-in-use-bytes":3567616,"current-db-size-in-use":"3.6 MB"}
	{"level":"info","ts":"2024-09-14T16:55:01.453889Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2194584676,"revision":1531,"compact-revision":-1}
	{"level":"info","ts":"2024-09-14T16:55:06.791539Z","caller":"traceutil/trace.go:171","msg":"trace[1480773213] linearizableReadLoop","detail":"{readStateIndex:2215; appliedIndex:2214; }","duration":"121.517894ms","start":"2024-09-14T16:55:06.669994Z","end":"2024-09-14T16:55:06.791512Z","steps":["trace[1480773213] 'read index received'  (duration: 121.338219ms)","trace[1480773213] 'applied index is now lower than readState.Index'  (duration: 179.198µs)"],"step_count":2}
	{"level":"warn","ts":"2024-09-14T16:55:06.791772Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"121.728846ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/deployments/kube-system/tiller-deploy\" ","response":"range_response_count:1 size:5015"}
	{"level":"info","ts":"2024-09-14T16:55:06.791805Z","caller":"traceutil/trace.go:171","msg":"trace[783623875] range","detail":"{range_begin:/registry/deployments/kube-system/tiller-deploy; range_end:; response_count:1; response_revision:2062; }","duration":"121.808688ms","start":"2024-09-14T16:55:06.669990Z","end":"2024-09-14T16:55:06.791799Z","steps":["trace[783623875] 'agreement among raft nodes before linearized reading'  (duration: 121.605717ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-14T16:55:06.792045Z","caller":"traceutil/trace.go:171","msg":"trace[1054613937] transaction","detail":"{read_only:false; response_revision:2062; number_of_response:1; }","duration":"147.610829ms","start":"2024-09-14T16:55:06.644423Z","end":"2024-09-14T16:55:06.792034Z","steps":["trace[1054613937] 'process raft request'  (duration: 146.958073ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-14T16:55:10.248913Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"171.394646ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-14T16:55:10.249062Z","caller":"traceutil/trace.go:171","msg":"trace[2140105305] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:2098; }","duration":"171.569728ms","start":"2024-09-14T16:55:10.077481Z","end":"2024-09-14T16:55:10.249051Z","steps":["trace[2140105305] 'agreement among raft nodes before linearized reading'  (duration: 171.37003ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-14T16:55:10.248791Z","caller":"traceutil/trace.go:171","msg":"trace[1932753122] linearizableReadLoop","detail":"{readStateIndex:2253; appliedIndex:2252; }","duration":"171.2342ms","start":"2024-09-14T16:55:10.077485Z","end":"2024-09-14T16:55:10.248719Z","steps":["trace[1932753122] 'read index received'  (duration: 72.664081ms)","trace[1932753122] 'applied index is now lower than readState.Index'  (duration: 98.569685ms)"],"step_count":2}
	{"level":"info","ts":"2024-09-14T16:56:09.031263Z","caller":"traceutil/trace.go:171","msg":"trace[968631491] transaction","detail":"{read_only:false; response_revision:2462; number_of_response:1; }","duration":"194.954715ms","start":"2024-09-14T16:56:08.836251Z","end":"2024-09-14T16:56:09.031206Z","steps":["trace[968631491] 'process raft request'  (duration: 194.591625ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-14T16:56:14.161056Z","caller":"traceutil/trace.go:171","msg":"trace[1645145771] transaction","detail":"{read_only:false; number_of_response:1; response_revision:2492; }","duration":"117.987268ms","start":"2024-09-14T16:56:14.043018Z","end":"2024-09-14T16:56:14.161005Z","steps":["trace[1645145771] 'process raft request'  (duration: 117.843678ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-14T16:56:39.509607Z","caller":"traceutil/trace.go:171","msg":"trace[1438413927] linearizableReadLoop","detail":"{readStateIndex:2741; appliedIndex:2740; }","duration":"218.528286ms","start":"2024-09-14T16:56:39.291061Z","end":"2024-09-14T16:56:39.509590Z","steps":["trace[1438413927] 'read index received'  (duration: 218.376973ms)","trace[1438413927] 'applied index is now lower than readState.Index'  (duration: 150.861µs)"],"step_count":2}
	{"level":"info","ts":"2024-09-14T16:56:39.509718Z","caller":"traceutil/trace.go:171","msg":"trace[559074960] transaction","detail":"{read_only:false; response_revision:2557; number_of_response:1; }","duration":"218.760615ms","start":"2024-09-14T16:56:39.290951Z","end":"2024-09-14T16:56:39.509711Z","steps":["trace[559074960] 'process raft request'  (duration: 218.52238ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-14T16:56:39.509952Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"200.330431ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-14T16:56:39.511494Z","caller":"traceutil/trace.go:171","msg":"trace[1897118407] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:2557; }","duration":"201.920066ms","start":"2024-09-14T16:56:39.309561Z","end":"2024-09-14T16:56:39.511481Z","steps":["trace[1897118407] 'agreement among raft nodes before linearized reading'  (duration: 200.2988ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-14T16:56:39.510033Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"218.954739ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/csinodes/\" range_end:\"/registry/csinodes0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2024-09-14T16:56:39.511680Z","caller":"traceutil/trace.go:171","msg":"trace[1514496080] range","detail":"{range_begin:/registry/csinodes/; range_end:/registry/csinodes0; response_count:0; response_revision:2557; }","duration":"220.581328ms","start":"2024-09-14T16:56:39.291058Z","end":"2024-09-14T16:56:39.511639Z","steps":["trace[1514496080] 'agreement among raft nodes before linearized reading'  (duration: 218.930568ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-14T17:00:01.408192Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":2015}
	{"level":"info","ts":"2024-09-14T17:00:01.430520Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":2015,"took":"21.569336ms","hash":1920824877,"current-db-size-bytes":6504448,"current-db-size":"6.5 MB","current-db-size-in-use-bytes":5132288,"current-db-size-in-use":"5.1 MB"}
	{"level":"info","ts":"2024-09-14T17:00:01.430624Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1920824877,"revision":2015,"compact-revision":1531}
	
	
	==> gcp-auth [b1fc29dced5ee78fdeef5ed32bd7516882b6f3d25bf63964b7313e5b1a180188] <==
	2024/09/14 16:46:45 Ready to write response ...
	2024/09/14 16:54:48 Ready to marshal response ...
	2024/09/14 16:54:48 Ready to write response ...
	2024/09/14 16:54:48 Ready to marshal response ...
	2024/09/14 16:54:48 Ready to write response ...
	2024/09/14 16:54:58 Ready to marshal response ...
	2024/09/14 16:54:58 Ready to write response ...
	2024/09/14 16:54:59 Ready to marshal response ...
	2024/09/14 16:54:59 Ready to write response ...
	2024/09/14 16:55:00 Ready to marshal response ...
	2024/09/14 16:55:00 Ready to write response ...
	2024/09/14 16:55:02 Ready to marshal response ...
	2024/09/14 16:55:02 Ready to write response ...
	2024/09/14 16:55:27 Ready to marshal response ...
	2024/09/14 16:55:27 Ready to write response ...
	2024/09/14 16:55:45 Ready to marshal response ...
	2024/09/14 16:55:45 Ready to write response ...
	2024/09/14 16:56:03 Ready to marshal response ...
	2024/09/14 16:56:03 Ready to write response ...
	2024/09/14 16:56:03 Ready to marshal response ...
	2024/09/14 16:56:03 Ready to write response ...
	2024/09/14 16:56:03 Ready to marshal response ...
	2024/09/14 16:56:03 Ready to write response ...
	2024/09/14 16:58:08 Ready to marshal response ...
	2024/09/14 16:58:08 Ready to write response ...
	
	
	==> kernel <==
	 17:00:29 up 15 min,  0 users,  load average: 0.13, 0.49, 0.51
	Linux addons-996992 5.10.207 #1 SMP Sat Sep 14 07:35:46 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [9e180103456d123610ca2b41dad9814f3b54f68e8eaaea458cbea9621834f9f2] <==
	E0914 16:47:05.677339       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.103.47.80:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.103.47.80:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.103.47.80:443: connect: connection refused" logger="UnhandledError"
	E0914 16:47:05.687323       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.103.47.80:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.103.47.80:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.103.47.80:443: connect: connection refused" logger="UnhandledError"
	I0914 16:47:05.827073       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E0914 16:55:17.021558       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I0914 16:55:17.639974       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0914 16:55:45.238956       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0914 16:55:45.239007       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0914 16:55:45.263430       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0914 16:55:45.263481       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0914 16:55:45.291930       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0914 16:55:45.291979       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0914 16:55:45.299265       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0914 16:55:45.299310       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0914 16:55:45.371396       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0914 16:55:45.371502       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0914 16:55:45.847374       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0914 16:55:46.052267       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.97.33.252"}
	W0914 16:55:46.300335       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0914 16:55:46.379369       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0914 16:55:46.416041       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I0914 16:55:51.311005       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0914 16:55:52.345185       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0914 16:56:03.180773       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.111.186.11"}
	I0914 16:58:08.607446       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.111.24.185"}
	E0914 16:58:10.257344       1 watch.go:250] "Unhandled Error" err="http2: stream closed" logger="UnhandledError"
	
	
	==> kube-controller-manager [b6da48572a3f2741574a1da7660f53f17bb817d3e49e25fd9f41ecf486aa65d5] <==
	I0914 16:58:20.916889       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="ingress-nginx"
	W0914 16:58:23.062356       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0914 16:58:23.062456       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0914 16:58:33.343629       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0914 16:58:33.343826       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0914 16:58:34.667037       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0914 16:58:34.667122       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0914 16:58:39.911022       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0914 16:58:39.911117       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0914 16:58:42.443051       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="addons-996992"
	W0914 16:59:16.385170       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0914 16:59:16.385329       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0914 16:59:24.794814       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0914 16:59:24.795001       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0914 16:59:32.056733       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0914 16:59:32.056788       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0914 16:59:39.263501       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0914 16:59:39.263547       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0914 16:59:59.296218       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0914 16:59:59.296358       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0914 17:00:16.257828       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0914 17:00:16.257947       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0914 17:00:16.840515       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0914 17:00:16.840681       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0914 17:00:28.294823       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-84c5f94fbc" duration="11.839µs"
	
	
	==> kube-proxy [7636b49f23d35f5a9fc075a76348fc31ad18da955d4e73fbad3d24534ef9282a] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0914 16:45:15.590221       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0914 16:45:15.599785       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.189"]
	E0914 16:45:15.599893       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0914 16:45:15.658278       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0914 16:45:15.658320       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0914 16:45:15.658346       1 server_linux.go:169] "Using iptables Proxier"
	I0914 16:45:15.663334       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0914 16:45:15.663614       1 server.go:483] "Version info" version="v1.31.1"
	I0914 16:45:15.663626       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0914 16:45:15.666732       1 config.go:199] "Starting service config controller"
	I0914 16:45:15.666758       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0914 16:45:15.666776       1 config.go:105] "Starting endpoint slice config controller"
	I0914 16:45:15.666780       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0914 16:45:15.667288       1 config.go:328] "Starting node config controller"
	I0914 16:45:15.667296       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0914 16:45:15.768165       1 shared_informer.go:320] Caches are synced for node config
	I0914 16:45:15.768221       1 shared_informer.go:320] Caches are synced for service config
	I0914 16:45:15.768261       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [62ccf13035320215b184582540baa2f498de301a8e3c9d4e3ad1b2a3595c2c8d] <==
	W0914 16:45:03.820736       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0914 16:45:03.820857       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0914 16:45:03.832104       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0914 16:45:03.832138       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0914 16:45:03.843716       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0914 16:45:03.843762       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0914 16:45:03.866418       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0914 16:45:03.866491       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0914 16:45:03.875513       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0914 16:45:03.875608       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0914 16:45:03.916659       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0914 16:45:03.917144       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0914 16:45:03.954059       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0914 16:45:03.954146       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0914 16:45:04.032670       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0914 16:45:04.032716       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0914 16:45:04.080506       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0914 16:45:04.080598       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0914 16:45:04.114758       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0914 16:45:04.115807       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0914 16:45:04.126730       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0914 16:45:04.126899       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0914 16:45:04.178995       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0914 16:45:04.179383       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0914 16:45:06.562975       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 14 16:59:56 addons-996992 kubelet[1212]: E0914 16:59:56.009236    1212 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726333196008761635,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:579802,},InodesUsed:&UInt64Value{Value:200,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 17:00:02 addons-996992 kubelet[1212]: E0914 17:00:02.608488    1212 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="9262e4af-385c-4c58-a62e-b55a378ea465"
	Sep 14 17:00:05 addons-996992 kubelet[1212]: E0914 17:00:05.622477    1212 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 14 17:00:05 addons-996992 kubelet[1212]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 14 17:00:05 addons-996992 kubelet[1212]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 14 17:00:05 addons-996992 kubelet[1212]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 14 17:00:05 addons-996992 kubelet[1212]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 14 17:00:06 addons-996992 kubelet[1212]: E0914 17:00:06.011604    1212 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726333206011195286,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:579802,},InodesUsed:&UInt64Value{Value:200,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 17:00:06 addons-996992 kubelet[1212]: E0914 17:00:06.011645    1212 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726333206011195286,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:579802,},InodesUsed:&UInt64Value{Value:200,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 17:00:16 addons-996992 kubelet[1212]: E0914 17:00:16.014278    1212 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726333216013847250,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:579802,},InodesUsed:&UInt64Value{Value:200,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 17:00:16 addons-996992 kubelet[1212]: E0914 17:00:16.014574    1212 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726333216013847250,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:579802,},InodesUsed:&UInt64Value{Value:200,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 17:00:17 addons-996992 kubelet[1212]: E0914 17:00:17.607861    1212 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="9262e4af-385c-4c58-a62e-b55a378ea465"
	Sep 14 17:00:26 addons-996992 kubelet[1212]: E0914 17:00:26.017141    1212 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726333226016723026,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:579802,},InodesUsed:&UInt64Value{Value:200,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 17:00:26 addons-996992 kubelet[1212]: E0914 17:00:26.017186    1212 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726333226016723026,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:579802,},InodesUsed:&UInt64Value{Value:200,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 17:00:28 addons-996992 kubelet[1212]: I0914 17:00:28.327370    1212 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/hello-world-app-55bf9c44b4-lf7nc" podStartSLOduration=138.056218786 podStartE2EDuration="2m20.327335795s" podCreationTimestamp="2024-09-14 16:58:08 +0000 UTC" firstStartedPulling="2024-09-14 16:58:08.991214413 +0000 UTC m=+783.511672248" lastFinishedPulling="2024-09-14 16:58:11.262331424 +0000 UTC m=+785.782789257" observedRunningTime="2024-09-14 16:58:12.251970857 +0000 UTC m=+786.772428710" watchObservedRunningTime="2024-09-14 17:00:28.327335795 +0000 UTC m=+922.847793648"
	Sep 14 17:00:29 addons-996992 kubelet[1212]: I0914 17:00:29.775069    1212 scope.go:117] "RemoveContainer" containerID="e8c78f14b17e7a8079ad435e0d378a510d0f1a1d0e67507457d582229b0dc7f8"
	Sep 14 17:00:29 addons-996992 kubelet[1212]: I0914 17:00:29.792867    1212 scope.go:117] "RemoveContainer" containerID="e8c78f14b17e7a8079ad435e0d378a510d0f1a1d0e67507457d582229b0dc7f8"
	Sep 14 17:00:29 addons-996992 kubelet[1212]: E0914 17:00:29.793619    1212 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e8c78f14b17e7a8079ad435e0d378a510d0f1a1d0e67507457d582229b0dc7f8\": container with ID starting with e8c78f14b17e7a8079ad435e0d378a510d0f1a1d0e67507457d582229b0dc7f8 not found: ID does not exist" containerID="e8c78f14b17e7a8079ad435e0d378a510d0f1a1d0e67507457d582229b0dc7f8"
	Sep 14 17:00:29 addons-996992 kubelet[1212]: I0914 17:00:29.793669    1212 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e8c78f14b17e7a8079ad435e0d378a510d0f1a1d0e67507457d582229b0dc7f8"} err="failed to get container status \"e8c78f14b17e7a8079ad435e0d378a510d0f1a1d0e67507457d582229b0dc7f8\": rpc error: code = NotFound desc = could not find container \"e8c78f14b17e7a8079ad435e0d378a510d0f1a1d0e67507457d582229b0dc7f8\": container with ID starting with e8c78f14b17e7a8079ad435e0d378a510d0f1a1d0e67507457d582229b0dc7f8 not found: ID does not exist"
	Sep 14 17:00:29 addons-996992 kubelet[1212]: I0914 17:00:29.812192    1212 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-clm7s\" (UniqueName: \"kubernetes.io/projected/5adc8bfb-2fb3-4e13-8b04-98e98afe35a9-kube-api-access-clm7s\") pod \"5adc8bfb-2fb3-4e13-8b04-98e98afe35a9\" (UID: \"5adc8bfb-2fb3-4e13-8b04-98e98afe35a9\") "
	Sep 14 17:00:29 addons-996992 kubelet[1212]: I0914 17:00:29.812250    1212 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/5adc8bfb-2fb3-4e13-8b04-98e98afe35a9-tmp-dir\") pod \"5adc8bfb-2fb3-4e13-8b04-98e98afe35a9\" (UID: \"5adc8bfb-2fb3-4e13-8b04-98e98afe35a9\") "
	Sep 14 17:00:29 addons-996992 kubelet[1212]: I0914 17:00:29.812688    1212 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5adc8bfb-2fb3-4e13-8b04-98e98afe35a9-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "5adc8bfb-2fb3-4e13-8b04-98e98afe35a9" (UID: "5adc8bfb-2fb3-4e13-8b04-98e98afe35a9"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
	Sep 14 17:00:29 addons-996992 kubelet[1212]: I0914 17:00:29.823355    1212 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5adc8bfb-2fb3-4e13-8b04-98e98afe35a9-kube-api-access-clm7s" (OuterVolumeSpecName: "kube-api-access-clm7s") pod "5adc8bfb-2fb3-4e13-8b04-98e98afe35a9" (UID: "5adc8bfb-2fb3-4e13-8b04-98e98afe35a9"). InnerVolumeSpecName "kube-api-access-clm7s". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 14 17:00:29 addons-996992 kubelet[1212]: I0914 17:00:29.912759    1212 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-clm7s\" (UniqueName: \"kubernetes.io/projected/5adc8bfb-2fb3-4e13-8b04-98e98afe35a9-kube-api-access-clm7s\") on node \"addons-996992\" DevicePath \"\""
	Sep 14 17:00:29 addons-996992 kubelet[1212]: I0914 17:00:29.912810    1212 reconciler_common.go:288] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/5adc8bfb-2fb3-4e13-8b04-98e98afe35a9-tmp-dir\") on node \"addons-996992\" DevicePath \"\""
	
	
	==> storage-provisioner [7f90cf12b43136552be3a9facb2fe5b7e9005a06d15f676d8f77c8fc53ddcb5a] <==
	I0914 16:45:18.537690       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0914 16:45:18.556796       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0914 16:45:18.556868       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0914 16:45:18.586989       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0914 16:45:18.587718       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"89c4a434-eabc-4a8a-9f14-9375f68755f8", APIVersion:"v1", ResourceVersion:"660", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-996992_e9eca151-6b6c-4161-b461-f6f0cd55060d became leader
	I0914 16:45:18.587761       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-996992_e9eca151-6b6c-4161-b461-f6f0cd55060d!
	I0914 16:45:18.789501       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-996992_e9eca151-6b6c-4161-b461-f6f0cd55060d!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-996992 -n addons-996992
helpers_test.go:261: (dbg) Run:  kubectl --context addons-996992 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/MetricsServer]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-996992 describe pod busybox
helpers_test.go:282: (dbg) kubectl --context addons-996992 describe pod busybox:

                                                
                                                
-- stdout --
	Name:             busybox
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-996992/192.168.39.189
	Start Time:       Sat, 14 Sep 2024 16:46:45 +0000
	Labels:           integration-test=busybox
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.23
	IPs:
	  IP:  10.244.0.23
	Containers:
	  busybox:
	    Container ID:  
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      sleep
	      3600
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-6dtsq (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-6dtsq:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  13m                   default-scheduler  Successfully assigned default/busybox to addons-996992
	  Normal   Pulling    12m (x4 over 13m)     kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Warning  Failed     12m (x4 over 13m)     kubelet            Failed to pull image "gcr.io/k8s-minikube/busybox:1.28.4-glibc": unable to retrieve auth token: invalid username/password: unauthorized: authentication failed
	  Warning  Failed     12m (x4 over 13m)     kubelet            Error: ErrImagePull
	  Warning  Failed     11m (x6 over 13m)     kubelet            Error: ImagePullBackOff
	  Normal   BackOff    3m39s (x42 over 13m)  kubelet            Back-off pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"

                                                
                                                
-- /stdout --
helpers_test.go:285: <<< TestAddons/parallel/MetricsServer FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/MetricsServer (323.49s)

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (141.89s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-linux-amd64 -p ha-929592 node stop m02 -v=7 --alsologtostderr
E0914 17:10:26.885684   16016 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/functional-764671/client.crt: no such file or directory" logger="UnhandledError"
E0914 17:11:45.625802   16016 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/addons-996992/client.crt: no such file or directory" logger="UnhandledError"
E0914 17:11:48.807832   16016 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/functional-764671/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:363: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-929592 node stop m02 -v=7 --alsologtostderr: exit status 30 (2m0.466788576s)

                                                
                                                
-- stdout --
	* Stopping node "ha-929592-m02"  ...

                                                
                                                
-- /stdout --
** stderr ** 
	I0914 17:09:49.739524   31530 out.go:345] Setting OutFile to fd 1 ...
	I0914 17:09:49.739683   31530 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 17:09:49.739694   31530 out.go:358] Setting ErrFile to fd 2...
	I0914 17:09:49.739698   31530 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 17:09:49.739867   31530 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19643-8806/.minikube/bin
	I0914 17:09:49.740130   31530 mustload.go:65] Loading cluster: ha-929592
	I0914 17:09:49.740693   31530 config.go:182] Loaded profile config "ha-929592": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0914 17:09:49.740721   31530 stop.go:39] StopHost: ha-929592-m02
	I0914 17:09:49.741169   31530 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 17:09:49.741211   31530 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 17:09:49.757895   31530 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41195
	I0914 17:09:49.758438   31530 main.go:141] libmachine: () Calling .GetVersion
	I0914 17:09:49.758997   31530 main.go:141] libmachine: Using API Version  1
	I0914 17:09:49.759024   31530 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 17:09:49.759416   31530 main.go:141] libmachine: () Calling .GetMachineName
	I0914 17:09:49.761505   31530 out.go:177] * Stopping node "ha-929592-m02"  ...
	I0914 17:09:49.762878   31530 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0914 17:09:49.762940   31530 main.go:141] libmachine: (ha-929592-m02) Calling .DriverName
	I0914 17:09:49.763307   31530 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0914 17:09:49.763361   31530 main.go:141] libmachine: (ha-929592-m02) Calling .GetSSHHostname
	I0914 17:09:49.766651   31530 main.go:141] libmachine: (ha-929592-m02) DBG | domain ha-929592-m02 has defined MAC address 52:54:00:23:9e:43 in network mk-ha-929592
	I0914 17:09:49.766985   31530 main.go:141] libmachine: (ha-929592-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:9e:43", ip: ""} in network mk-ha-929592: {Iface:virbr1 ExpiryTime:2024-09-14 18:05:49 +0000 UTC Type:0 Mac:52:54:00:23:9e:43 Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:ha-929592-m02 Clientid:01:52:54:00:23:9e:43}
	I0914 17:09:49.767016   31530 main.go:141] libmachine: (ha-929592-m02) DBG | domain ha-929592-m02 has defined IP address 192.168.39.148 and MAC address 52:54:00:23:9e:43 in network mk-ha-929592
	I0914 17:09:49.767218   31530 main.go:141] libmachine: (ha-929592-m02) Calling .GetSSHPort
	I0914 17:09:49.767440   31530 main.go:141] libmachine: (ha-929592-m02) Calling .GetSSHKeyPath
	I0914 17:09:49.767598   31530 main.go:141] libmachine: (ha-929592-m02) Calling .GetSSHUsername
	I0914 17:09:49.767744   31530 sshutil.go:53] new ssh client: &{IP:192.168.39.148 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/ha-929592-m02/id_rsa Username:docker}
	I0914 17:09:49.858609   31530 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0914 17:09:49.912552   31530 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0914 17:09:49.966557   31530 main.go:141] libmachine: Stopping "ha-929592-m02"...
	I0914 17:09:49.966600   31530 main.go:141] libmachine: (ha-929592-m02) Calling .GetState
	I0914 17:09:49.967867   31530 main.go:141] libmachine: (ha-929592-m02) Calling .Stop
	I0914 17:09:49.971685   31530 main.go:141] libmachine: (ha-929592-m02) Waiting for machine to stop 0/120
	I0914 17:09:50.972968   31530 main.go:141] libmachine: (ha-929592-m02) Waiting for machine to stop 1/120
	I0914 17:09:51.974379   31530 main.go:141] libmachine: (ha-929592-m02) Waiting for machine to stop 2/120
	I0914 17:09:52.976486   31530 main.go:141] libmachine: (ha-929592-m02) Waiting for machine to stop 3/120
	I0914 17:09:53.977688   31530 main.go:141] libmachine: (ha-929592-m02) Waiting for machine to stop 4/120
	I0914 17:09:54.979642   31530 main.go:141] libmachine: (ha-929592-m02) Waiting for machine to stop 5/120
	I0914 17:09:55.981264   31530 main.go:141] libmachine: (ha-929592-m02) Waiting for machine to stop 6/120
	I0914 17:09:56.982635   31530 main.go:141] libmachine: (ha-929592-m02) Waiting for machine to stop 7/120
	I0914 17:09:57.984577   31530 main.go:141] libmachine: (ha-929592-m02) Waiting for machine to stop 8/120
	I0914 17:09:58.985692   31530 main.go:141] libmachine: (ha-929592-m02) Waiting for machine to stop 9/120
	I0914 17:09:59.988010   31530 main.go:141] libmachine: (ha-929592-m02) Waiting for machine to stop 10/120
	I0914 17:10:00.989470   31530 main.go:141] libmachine: (ha-929592-m02) Waiting for machine to stop 11/120
	I0914 17:10:01.990831   31530 main.go:141] libmachine: (ha-929592-m02) Waiting for machine to stop 12/120
	I0914 17:10:02.992309   31530 main.go:141] libmachine: (ha-929592-m02) Waiting for machine to stop 13/120
	I0914 17:10:03.993990   31530 main.go:141] libmachine: (ha-929592-m02) Waiting for machine to stop 14/120
	I0914 17:10:04.995791   31530 main.go:141] libmachine: (ha-929592-m02) Waiting for machine to stop 15/120
	I0914 17:10:05.997128   31530 main.go:141] libmachine: (ha-929592-m02) Waiting for machine to stop 16/120
	I0914 17:10:06.998485   31530 main.go:141] libmachine: (ha-929592-m02) Waiting for machine to stop 17/120
	I0914 17:10:07.999919   31530 main.go:141] libmachine: (ha-929592-m02) Waiting for machine to stop 18/120
	I0914 17:10:09.001318   31530 main.go:141] libmachine: (ha-929592-m02) Waiting for machine to stop 19/120
	I0914 17:10:10.003406   31530 main.go:141] libmachine: (ha-929592-m02) Waiting for machine to stop 20/120
	I0914 17:10:11.004880   31530 main.go:141] libmachine: (ha-929592-m02) Waiting for machine to stop 21/120
	I0914 17:10:12.007015   31530 main.go:141] libmachine: (ha-929592-m02) Waiting for machine to stop 22/120
	I0914 17:10:13.008708   31530 main.go:141] libmachine: (ha-929592-m02) Waiting for machine to stop 23/120
	I0914 17:10:14.010116   31530 main.go:141] libmachine: (ha-929592-m02) Waiting for machine to stop 24/120
	I0914 17:10:15.011780   31530 main.go:141] libmachine: (ha-929592-m02) Waiting for machine to stop 25/120
	I0914 17:10:16.013024   31530 main.go:141] libmachine: (ha-929592-m02) Waiting for machine to stop 26/120
	I0914 17:10:17.014359   31530 main.go:141] libmachine: (ha-929592-m02) Waiting for machine to stop 27/120
	I0914 17:10:18.015573   31530 main.go:141] libmachine: (ha-929592-m02) Waiting for machine to stop 28/120
	I0914 17:10:19.016925   31530 main.go:141] libmachine: (ha-929592-m02) Waiting for machine to stop 29/120
	I0914 17:10:20.018819   31530 main.go:141] libmachine: (ha-929592-m02) Waiting for machine to stop 30/120
	I0914 17:10:21.020110   31530 main.go:141] libmachine: (ha-929592-m02) Waiting for machine to stop 31/120
	I0914 17:10:22.021449   31530 main.go:141] libmachine: (ha-929592-m02) Waiting for machine to stop 32/120
	I0914 17:10:23.022846   31530 main.go:141] libmachine: (ha-929592-m02) Waiting for machine to stop 33/120
	I0914 17:10:24.024870   31530 main.go:141] libmachine: (ha-929592-m02) Waiting for machine to stop 34/120
	I0914 17:10:25.026528   31530 main.go:141] libmachine: (ha-929592-m02) Waiting for machine to stop 35/120
	I0914 17:10:26.028791   31530 main.go:141] libmachine: (ha-929592-m02) Waiting for machine to stop 36/120
	I0914 17:10:27.030146   31530 main.go:141] libmachine: (ha-929592-m02) Waiting for machine to stop 37/120
	I0914 17:10:28.031514   31530 main.go:141] libmachine: (ha-929592-m02) Waiting for machine to stop 38/120
	I0914 17:10:29.032886   31530 main.go:141] libmachine: (ha-929592-m02) Waiting for machine to stop 39/120
	I0914 17:10:30.035094   31530 main.go:141] libmachine: (ha-929592-m02) Waiting for machine to stop 40/120
	I0914 17:10:31.036652   31530 main.go:141] libmachine: (ha-929592-m02) Waiting for machine to stop 41/120
	I0914 17:10:32.038206   31530 main.go:141] libmachine: (ha-929592-m02) Waiting for machine to stop 42/120
	I0914 17:10:33.039965   31530 main.go:141] libmachine: (ha-929592-m02) Waiting for machine to stop 43/120
	I0914 17:10:34.041194   31530 main.go:141] libmachine: (ha-929592-m02) Waiting for machine to stop 44/120
	I0914 17:10:35.043246   31530 main.go:141] libmachine: (ha-929592-m02) Waiting for machine to stop 45/120
	I0914 17:10:36.044553   31530 main.go:141] libmachine: (ha-929592-m02) Waiting for machine to stop 46/120
	I0914 17:10:37.046095   31530 main.go:141] libmachine: (ha-929592-m02) Waiting for machine to stop 47/120
	I0914 17:10:38.047437   31530 main.go:141] libmachine: (ha-929592-m02) Waiting for machine to stop 48/120
	I0914 17:10:39.048784   31530 main.go:141] libmachine: (ha-929592-m02) Waiting for machine to stop 49/120
	I0914 17:10:40.050870   31530 main.go:141] libmachine: (ha-929592-m02) Waiting for machine to stop 50/120
	I0914 17:10:41.052120   31530 main.go:141] libmachine: (ha-929592-m02) Waiting for machine to stop 51/120
	I0914 17:10:42.053713   31530 main.go:141] libmachine: (ha-929592-m02) Waiting for machine to stop 52/120
	I0914 17:10:43.055224   31530 main.go:141] libmachine: (ha-929592-m02) Waiting for machine to stop 53/120
	I0914 17:10:44.056599   31530 main.go:141] libmachine: (ha-929592-m02) Waiting for machine to stop 54/120
	I0914 17:10:45.058831   31530 main.go:141] libmachine: (ha-929592-m02) Waiting for machine to stop 55/120
	I0914 17:10:46.060853   31530 main.go:141] libmachine: (ha-929592-m02) Waiting for machine to stop 56/120
	I0914 17:10:47.062366   31530 main.go:141] libmachine: (ha-929592-m02) Waiting for machine to stop 57/120
	I0914 17:10:48.064815   31530 main.go:141] libmachine: (ha-929592-m02) Waiting for machine to stop 58/120
	I0914 17:10:49.066356   31530 main.go:141] libmachine: (ha-929592-m02) Waiting for machine to stop 59/120
	I0914 17:10:50.068213   31530 main.go:141] libmachine: (ha-929592-m02) Waiting for machine to stop 60/120
	I0914 17:10:51.070083   31530 main.go:141] libmachine: (ha-929592-m02) Waiting for machine to stop 61/120
	I0914 17:10:52.071282   31530 main.go:141] libmachine: (ha-929592-m02) Waiting for machine to stop 62/120
	I0914 17:10:53.072703   31530 main.go:141] libmachine: (ha-929592-m02) Waiting for machine to stop 63/120
	I0914 17:10:54.073957   31530 main.go:141] libmachine: (ha-929592-m02) Waiting for machine to stop 64/120
	I0914 17:10:55.075805   31530 main.go:141] libmachine: (ha-929592-m02) Waiting for machine to stop 65/120
	I0914 17:10:56.077839   31530 main.go:141] libmachine: (ha-929592-m02) Waiting for machine to stop 66/120
	I0914 17:10:57.079727   31530 main.go:141] libmachine: (ha-929592-m02) Waiting for machine to stop 67/120
	I0914 17:10:58.081170   31530 main.go:141] libmachine: (ha-929592-m02) Waiting for machine to stop 68/120
	I0914 17:10:59.082420   31530 main.go:141] libmachine: (ha-929592-m02) Waiting for machine to stop 69/120
	I0914 17:11:00.084341   31530 main.go:141] libmachine: (ha-929592-m02) Waiting for machine to stop 70/120
	I0914 17:11:01.085709   31530 main.go:141] libmachine: (ha-929592-m02) Waiting for machine to stop 71/120
	I0914 17:11:02.087156   31530 main.go:141] libmachine: (ha-929592-m02) Waiting for machine to stop 72/120
	I0914 17:11:03.088410   31530 main.go:141] libmachine: (ha-929592-m02) Waiting for machine to stop 73/120
	I0914 17:11:04.089879   31530 main.go:141] libmachine: (ha-929592-m02) Waiting for machine to stop 74/120
	I0914 17:11:05.091871   31530 main.go:141] libmachine: (ha-929592-m02) Waiting for machine to stop 75/120
	I0914 17:11:06.094108   31530 main.go:141] libmachine: (ha-929592-m02) Waiting for machine to stop 76/120
	I0914 17:11:07.095552   31530 main.go:141] libmachine: (ha-929592-m02) Waiting for machine to stop 77/120
	I0914 17:11:08.097351   31530 main.go:141] libmachine: (ha-929592-m02) Waiting for machine to stop 78/120
	I0914 17:11:09.098802   31530 main.go:141] libmachine: (ha-929592-m02) Waiting for machine to stop 79/120
	I0914 17:11:10.100641   31530 main.go:141] libmachine: (ha-929592-m02) Waiting for machine to stop 80/120
	I0914 17:11:11.102004   31530 main.go:141] libmachine: (ha-929592-m02) Waiting for machine to stop 81/120
	I0914 17:11:12.103520   31530 main.go:141] libmachine: (ha-929592-m02) Waiting for machine to stop 82/120
	I0914 17:11:13.105027   31530 main.go:141] libmachine: (ha-929592-m02) Waiting for machine to stop 83/120
	I0914 17:11:14.106580   31530 main.go:141] libmachine: (ha-929592-m02) Waiting for machine to stop 84/120
	I0914 17:11:15.108772   31530 main.go:141] libmachine: (ha-929592-m02) Waiting for machine to stop 85/120
	I0914 17:11:16.110363   31530 main.go:141] libmachine: (ha-929592-m02) Waiting for machine to stop 86/120
	I0914 17:11:17.111736   31530 main.go:141] libmachine: (ha-929592-m02) Waiting for machine to stop 87/120
	I0914 17:11:18.113386   31530 main.go:141] libmachine: (ha-929592-m02) Waiting for machine to stop 88/120
	I0914 17:11:19.114771   31530 main.go:141] libmachine: (ha-929592-m02) Waiting for machine to stop 89/120
	I0914 17:11:20.116706   31530 main.go:141] libmachine: (ha-929592-m02) Waiting for machine to stop 90/120
	I0914 17:11:21.117971   31530 main.go:141] libmachine: (ha-929592-m02) Waiting for machine to stop 91/120
	I0914 17:11:22.119434   31530 main.go:141] libmachine: (ha-929592-m02) Waiting for machine to stop 92/120
	I0914 17:11:23.120726   31530 main.go:141] libmachine: (ha-929592-m02) Waiting for machine to stop 93/120
	I0914 17:11:24.122000   31530 main.go:141] libmachine: (ha-929592-m02) Waiting for machine to stop 94/120
	I0914 17:11:25.123606   31530 main.go:141] libmachine: (ha-929592-m02) Waiting for machine to stop 95/120
	I0914 17:11:26.125118   31530 main.go:141] libmachine: (ha-929592-m02) Waiting for machine to stop 96/120
	I0914 17:11:27.127033   31530 main.go:141] libmachine: (ha-929592-m02) Waiting for machine to stop 97/120
	I0914 17:11:28.128896   31530 main.go:141] libmachine: (ha-929592-m02) Waiting for machine to stop 98/120
	I0914 17:11:29.131300   31530 main.go:141] libmachine: (ha-929592-m02) Waiting for machine to stop 99/120
	I0914 17:11:30.133246   31530 main.go:141] libmachine: (ha-929592-m02) Waiting for machine to stop 100/120
	I0914 17:11:31.134693   31530 main.go:141] libmachine: (ha-929592-m02) Waiting for machine to stop 101/120
	I0914 17:11:32.136416   31530 main.go:141] libmachine: (ha-929592-m02) Waiting for machine to stop 102/120
	I0914 17:11:33.137930   31530 main.go:141] libmachine: (ha-929592-m02) Waiting for machine to stop 103/120
	I0914 17:11:34.139487   31530 main.go:141] libmachine: (ha-929592-m02) Waiting for machine to stop 104/120
	I0914 17:11:35.140815   31530 main.go:141] libmachine: (ha-929592-m02) Waiting for machine to stop 105/120
	I0914 17:11:36.142020   31530 main.go:141] libmachine: (ha-929592-m02) Waiting for machine to stop 106/120
	I0914 17:11:37.143275   31530 main.go:141] libmachine: (ha-929592-m02) Waiting for machine to stop 107/120
	I0914 17:11:38.144721   31530 main.go:141] libmachine: (ha-929592-m02) Waiting for machine to stop 108/120
	I0914 17:11:39.146103   31530 main.go:141] libmachine: (ha-929592-m02) Waiting for machine to stop 109/120
	I0914 17:11:40.148269   31530 main.go:141] libmachine: (ha-929592-m02) Waiting for machine to stop 110/120
	I0914 17:11:41.149618   31530 main.go:141] libmachine: (ha-929592-m02) Waiting for machine to stop 111/120
	I0914 17:11:42.151123   31530 main.go:141] libmachine: (ha-929592-m02) Waiting for machine to stop 112/120
	I0914 17:11:43.153580   31530 main.go:141] libmachine: (ha-929592-m02) Waiting for machine to stop 113/120
	I0914 17:11:44.154890   31530 main.go:141] libmachine: (ha-929592-m02) Waiting for machine to stop 114/120
	I0914 17:11:45.156754   31530 main.go:141] libmachine: (ha-929592-m02) Waiting for machine to stop 115/120
	I0914 17:11:46.158027   31530 main.go:141] libmachine: (ha-929592-m02) Waiting for machine to stop 116/120
	I0914 17:11:47.159415   31530 main.go:141] libmachine: (ha-929592-m02) Waiting for machine to stop 117/120
	I0914 17:11:48.160721   31530 main.go:141] libmachine: (ha-929592-m02) Waiting for machine to stop 118/120
	I0914 17:11:49.162375   31530 main.go:141] libmachine: (ha-929592-m02) Waiting for machine to stop 119/120
	I0914 17:11:50.163498   31530 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0914 17:11:50.163639   31530 out.go:270] X Failed to stop node m02: Temporary Error: stop: unable to stop vm, current state "Running"
	X Failed to stop node m02: Temporary Error: stop: unable to stop vm, current state "Running"

                                                
                                                
** /stderr **
ha_test.go:365: secondary control-plane node stop returned an error. args "out/minikube-linux-amd64 -p ha-929592 node stop m02 -v=7 --alsologtostderr": exit status 30
ha_test.go:369: (dbg) Run:  out/minikube-linux-amd64 -p ha-929592 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-929592 status -v=7 --alsologtostderr: exit status 3 (19.190469365s)

                                                
                                                
-- stdout --
	ha-929592
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-929592-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-929592-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-929592-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0914 17:11:50.207353   31960 out.go:345] Setting OutFile to fd 1 ...
	I0914 17:11:50.207623   31960 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 17:11:50.207632   31960 out.go:358] Setting ErrFile to fd 2...
	I0914 17:11:50.207636   31960 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 17:11:50.207863   31960 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19643-8806/.minikube/bin
	I0914 17:11:50.208084   31960 out.go:352] Setting JSON to false
	I0914 17:11:50.208118   31960 mustload.go:65] Loading cluster: ha-929592
	I0914 17:11:50.208262   31960 notify.go:220] Checking for updates...
	I0914 17:11:50.208668   31960 config.go:182] Loaded profile config "ha-929592": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0914 17:11:50.208688   31960 status.go:255] checking status of ha-929592 ...
	I0914 17:11:50.209239   31960 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 17:11:50.209305   31960 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 17:11:50.225934   31960 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46339
	I0914 17:11:50.226601   31960 main.go:141] libmachine: () Calling .GetVersion
	I0914 17:11:50.227246   31960 main.go:141] libmachine: Using API Version  1
	I0914 17:11:50.227306   31960 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 17:11:50.227687   31960 main.go:141] libmachine: () Calling .GetMachineName
	I0914 17:11:50.227874   31960 main.go:141] libmachine: (ha-929592) Calling .GetState
	I0914 17:11:50.229628   31960 status.go:330] ha-929592 host status = "Running" (err=<nil>)
	I0914 17:11:50.229645   31960 host.go:66] Checking if "ha-929592" exists ...
	I0914 17:11:50.229926   31960 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 17:11:50.229967   31960 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 17:11:50.245175   31960 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42925
	I0914 17:11:50.245759   31960 main.go:141] libmachine: () Calling .GetVersion
	I0914 17:11:50.246299   31960 main.go:141] libmachine: Using API Version  1
	I0914 17:11:50.246322   31960 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 17:11:50.246618   31960 main.go:141] libmachine: () Calling .GetMachineName
	I0914 17:11:50.246814   31960 main.go:141] libmachine: (ha-929592) Calling .GetIP
	I0914 17:11:50.249651   31960 main.go:141] libmachine: (ha-929592) DBG | domain ha-929592 has defined MAC address 52:54:00:5c:cb:09 in network mk-ha-929592
	I0914 17:11:50.250196   31960 main.go:141] libmachine: (ha-929592) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:cb:09", ip: ""} in network mk-ha-929592: {Iface:virbr1 ExpiryTime:2024-09-14 18:05:06 +0000 UTC Type:0 Mac:52:54:00:5c:cb:09 Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:ha-929592 Clientid:01:52:54:00:5c:cb:09}
	I0914 17:11:50.250227   31960 main.go:141] libmachine: (ha-929592) DBG | domain ha-929592 has defined IP address 192.168.39.54 and MAC address 52:54:00:5c:cb:09 in network mk-ha-929592
	I0914 17:11:50.250386   31960 host.go:66] Checking if "ha-929592" exists ...
	I0914 17:11:50.250663   31960 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 17:11:50.250701   31960 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 17:11:50.265715   31960 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37631
	I0914 17:11:50.266136   31960 main.go:141] libmachine: () Calling .GetVersion
	I0914 17:11:50.266646   31960 main.go:141] libmachine: Using API Version  1
	I0914 17:11:50.266682   31960 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 17:11:50.266984   31960 main.go:141] libmachine: () Calling .GetMachineName
	I0914 17:11:50.267148   31960 main.go:141] libmachine: (ha-929592) Calling .DriverName
	I0914 17:11:50.267307   31960 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0914 17:11:50.267331   31960 main.go:141] libmachine: (ha-929592) Calling .GetSSHHostname
	I0914 17:11:50.270088   31960 main.go:141] libmachine: (ha-929592) DBG | domain ha-929592 has defined MAC address 52:54:00:5c:cb:09 in network mk-ha-929592
	I0914 17:11:50.270631   31960 main.go:141] libmachine: (ha-929592) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:cb:09", ip: ""} in network mk-ha-929592: {Iface:virbr1 ExpiryTime:2024-09-14 18:05:06 +0000 UTC Type:0 Mac:52:54:00:5c:cb:09 Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:ha-929592 Clientid:01:52:54:00:5c:cb:09}
	I0914 17:11:50.270673   31960 main.go:141] libmachine: (ha-929592) DBG | domain ha-929592 has defined IP address 192.168.39.54 and MAC address 52:54:00:5c:cb:09 in network mk-ha-929592
	I0914 17:11:50.270832   31960 main.go:141] libmachine: (ha-929592) Calling .GetSSHPort
	I0914 17:11:50.270980   31960 main.go:141] libmachine: (ha-929592) Calling .GetSSHKeyPath
	I0914 17:11:50.271119   31960 main.go:141] libmachine: (ha-929592) Calling .GetSSHUsername
	I0914 17:11:50.271258   31960 sshutil.go:53] new ssh client: &{IP:192.168.39.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/ha-929592/id_rsa Username:docker}
	I0914 17:11:50.358767   31960 ssh_runner.go:195] Run: systemctl --version
	I0914 17:11:50.365484   31960 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 17:11:50.383170   31960 kubeconfig.go:125] found "ha-929592" server: "https://192.168.39.254:8443"
	I0914 17:11:50.383208   31960 api_server.go:166] Checking apiserver status ...
	I0914 17:11:50.383240   31960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 17:11:50.401117   31960 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1079/cgroup
	W0914 17:11:50.412732   31960 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1079/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0914 17:11:50.412779   31960 ssh_runner.go:195] Run: ls
	I0914 17:11:50.416908   31960 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0914 17:11:50.421919   31960 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0914 17:11:50.421945   31960 status.go:422] ha-929592 apiserver status = Running (err=<nil>)
	I0914 17:11:50.421955   31960 status.go:257] ha-929592 status: &{Name:ha-929592 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0914 17:11:50.421982   31960 status.go:255] checking status of ha-929592-m02 ...
	I0914 17:11:50.422295   31960 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 17:11:50.422329   31960 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 17:11:50.437231   31960 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44337
	I0914 17:11:50.437732   31960 main.go:141] libmachine: () Calling .GetVersion
	I0914 17:11:50.438277   31960 main.go:141] libmachine: Using API Version  1
	I0914 17:11:50.438297   31960 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 17:11:50.438657   31960 main.go:141] libmachine: () Calling .GetMachineName
	I0914 17:11:50.438838   31960 main.go:141] libmachine: (ha-929592-m02) Calling .GetState
	I0914 17:11:50.440857   31960 status.go:330] ha-929592-m02 host status = "Running" (err=<nil>)
	I0914 17:11:50.440875   31960 host.go:66] Checking if "ha-929592-m02" exists ...
	I0914 17:11:50.441195   31960 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 17:11:50.441242   31960 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 17:11:50.457251   31960 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35755
	I0914 17:11:50.457771   31960 main.go:141] libmachine: () Calling .GetVersion
	I0914 17:11:50.458401   31960 main.go:141] libmachine: Using API Version  1
	I0914 17:11:50.458427   31960 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 17:11:50.458785   31960 main.go:141] libmachine: () Calling .GetMachineName
	I0914 17:11:50.459029   31960 main.go:141] libmachine: (ha-929592-m02) Calling .GetIP
	I0914 17:11:50.462303   31960 main.go:141] libmachine: (ha-929592-m02) DBG | domain ha-929592-m02 has defined MAC address 52:54:00:23:9e:43 in network mk-ha-929592
	I0914 17:11:50.462864   31960 main.go:141] libmachine: (ha-929592-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:9e:43", ip: ""} in network mk-ha-929592: {Iface:virbr1 ExpiryTime:2024-09-14 18:05:49 +0000 UTC Type:0 Mac:52:54:00:23:9e:43 Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:ha-929592-m02 Clientid:01:52:54:00:23:9e:43}
	I0914 17:11:50.462902   31960 main.go:141] libmachine: (ha-929592-m02) DBG | domain ha-929592-m02 has defined IP address 192.168.39.148 and MAC address 52:54:00:23:9e:43 in network mk-ha-929592
	I0914 17:11:50.463083   31960 host.go:66] Checking if "ha-929592-m02" exists ...
	I0914 17:11:50.463417   31960 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 17:11:50.463463   31960 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 17:11:50.478644   31960 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36707
	I0914 17:11:50.479149   31960 main.go:141] libmachine: () Calling .GetVersion
	I0914 17:11:50.479589   31960 main.go:141] libmachine: Using API Version  1
	I0914 17:11:50.479603   31960 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 17:11:50.479906   31960 main.go:141] libmachine: () Calling .GetMachineName
	I0914 17:11:50.480106   31960 main.go:141] libmachine: (ha-929592-m02) Calling .DriverName
	I0914 17:11:50.480288   31960 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0914 17:11:50.480311   31960 main.go:141] libmachine: (ha-929592-m02) Calling .GetSSHHostname
	I0914 17:11:50.483490   31960 main.go:141] libmachine: (ha-929592-m02) DBG | domain ha-929592-m02 has defined MAC address 52:54:00:23:9e:43 in network mk-ha-929592
	I0914 17:11:50.483956   31960 main.go:141] libmachine: (ha-929592-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:9e:43", ip: ""} in network mk-ha-929592: {Iface:virbr1 ExpiryTime:2024-09-14 18:05:49 +0000 UTC Type:0 Mac:52:54:00:23:9e:43 Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:ha-929592-m02 Clientid:01:52:54:00:23:9e:43}
	I0914 17:11:50.483998   31960 main.go:141] libmachine: (ha-929592-m02) DBG | domain ha-929592-m02 has defined IP address 192.168.39.148 and MAC address 52:54:00:23:9e:43 in network mk-ha-929592
	I0914 17:11:50.484138   31960 main.go:141] libmachine: (ha-929592-m02) Calling .GetSSHPort
	I0914 17:11:50.484408   31960 main.go:141] libmachine: (ha-929592-m02) Calling .GetSSHKeyPath
	I0914 17:11:50.484556   31960 main.go:141] libmachine: (ha-929592-m02) Calling .GetSSHUsername
	I0914 17:11:50.484694   31960 sshutil.go:53] new ssh client: &{IP:192.168.39.148 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/ha-929592-m02/id_rsa Username:docker}
	W0914 17:12:08.994430   31960 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.148:22: connect: no route to host
	W0914 17:12:08.994538   31960 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.148:22: connect: no route to host
	E0914 17:12:08.994561   31960 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.148:22: connect: no route to host
	I0914 17:12:08.994574   31960 status.go:257] ha-929592-m02 status: &{Name:ha-929592-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0914 17:12:08.994594   31960 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.148:22: connect: no route to host
	I0914 17:12:08.994626   31960 status.go:255] checking status of ha-929592-m03 ...
	I0914 17:12:08.995041   31960 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 17:12:08.995100   31960 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 17:12:09.010031   31960 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42485
	I0914 17:12:09.010510   31960 main.go:141] libmachine: () Calling .GetVersion
	I0914 17:12:09.011050   31960 main.go:141] libmachine: Using API Version  1
	I0914 17:12:09.011078   31960 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 17:12:09.011409   31960 main.go:141] libmachine: () Calling .GetMachineName
	I0914 17:12:09.011677   31960 main.go:141] libmachine: (ha-929592-m03) Calling .GetState
	I0914 17:12:09.013420   31960 status.go:330] ha-929592-m03 host status = "Running" (err=<nil>)
	I0914 17:12:09.013434   31960 host.go:66] Checking if "ha-929592-m03" exists ...
	I0914 17:12:09.013719   31960 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 17:12:09.013768   31960 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 17:12:09.028539   31960 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36517
	I0914 17:12:09.028976   31960 main.go:141] libmachine: () Calling .GetVersion
	I0914 17:12:09.029563   31960 main.go:141] libmachine: Using API Version  1
	I0914 17:12:09.029583   31960 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 17:12:09.029937   31960 main.go:141] libmachine: () Calling .GetMachineName
	I0914 17:12:09.030214   31960 main.go:141] libmachine: (ha-929592-m03) Calling .GetIP
	I0914 17:12:09.033017   31960 main.go:141] libmachine: (ha-929592-m03) DBG | domain ha-929592-m03 has defined MAC address 52:54:00:49:df:f1 in network mk-ha-929592
	I0914 17:12:09.033520   31960 main.go:141] libmachine: (ha-929592-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:df:f1", ip: ""} in network mk-ha-929592: {Iface:virbr1 ExpiryTime:2024-09-14 18:07:04 +0000 UTC Type:0 Mac:52:54:00:49:df:f1 Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:ha-929592-m03 Clientid:01:52:54:00:49:df:f1}
	I0914 17:12:09.033547   31960 main.go:141] libmachine: (ha-929592-m03) DBG | domain ha-929592-m03 has defined IP address 192.168.39.39 and MAC address 52:54:00:49:df:f1 in network mk-ha-929592
	I0914 17:12:09.033634   31960 host.go:66] Checking if "ha-929592-m03" exists ...
	I0914 17:12:09.033945   31960 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 17:12:09.033980   31960 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 17:12:09.049303   31960 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36519
	I0914 17:12:09.049769   31960 main.go:141] libmachine: () Calling .GetVersion
	I0914 17:12:09.050287   31960 main.go:141] libmachine: Using API Version  1
	I0914 17:12:09.050304   31960 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 17:12:09.050578   31960 main.go:141] libmachine: () Calling .GetMachineName
	I0914 17:12:09.050773   31960 main.go:141] libmachine: (ha-929592-m03) Calling .DriverName
	I0914 17:12:09.051003   31960 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0914 17:12:09.051028   31960 main.go:141] libmachine: (ha-929592-m03) Calling .GetSSHHostname
	I0914 17:12:09.053696   31960 main.go:141] libmachine: (ha-929592-m03) DBG | domain ha-929592-m03 has defined MAC address 52:54:00:49:df:f1 in network mk-ha-929592
	I0914 17:12:09.054105   31960 main.go:141] libmachine: (ha-929592-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:df:f1", ip: ""} in network mk-ha-929592: {Iface:virbr1 ExpiryTime:2024-09-14 18:07:04 +0000 UTC Type:0 Mac:52:54:00:49:df:f1 Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:ha-929592-m03 Clientid:01:52:54:00:49:df:f1}
	I0914 17:12:09.054136   31960 main.go:141] libmachine: (ha-929592-m03) DBG | domain ha-929592-m03 has defined IP address 192.168.39.39 and MAC address 52:54:00:49:df:f1 in network mk-ha-929592
	I0914 17:12:09.054356   31960 main.go:141] libmachine: (ha-929592-m03) Calling .GetSSHPort
	I0914 17:12:09.054527   31960 main.go:141] libmachine: (ha-929592-m03) Calling .GetSSHKeyPath
	I0914 17:12:09.054663   31960 main.go:141] libmachine: (ha-929592-m03) Calling .GetSSHUsername
	I0914 17:12:09.054775   31960 sshutil.go:53] new ssh client: &{IP:192.168.39.39 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/ha-929592-m03/id_rsa Username:docker}
	I0914 17:12:09.139660   31960 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 17:12:09.157563   31960 kubeconfig.go:125] found "ha-929592" server: "https://192.168.39.254:8443"
	I0914 17:12:09.157591   31960 api_server.go:166] Checking apiserver status ...
	I0914 17:12:09.157621   31960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 17:12:09.177506   31960 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1387/cgroup
	W0914 17:12:09.187334   31960 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1387/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0914 17:12:09.187391   31960 ssh_runner.go:195] Run: ls
	I0914 17:12:09.192522   31960 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0914 17:12:09.197283   31960 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0914 17:12:09.197309   31960 status.go:422] ha-929592-m03 apiserver status = Running (err=<nil>)
	I0914 17:12:09.197321   31960 status.go:257] ha-929592-m03 status: &{Name:ha-929592-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0914 17:12:09.197339   31960 status.go:255] checking status of ha-929592-m04 ...
	I0914 17:12:09.197641   31960 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 17:12:09.197686   31960 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 17:12:09.213799   31960 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34997
	I0914 17:12:09.214326   31960 main.go:141] libmachine: () Calling .GetVersion
	I0914 17:12:09.214826   31960 main.go:141] libmachine: Using API Version  1
	I0914 17:12:09.214846   31960 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 17:12:09.215164   31960 main.go:141] libmachine: () Calling .GetMachineName
	I0914 17:12:09.215364   31960 main.go:141] libmachine: (ha-929592-m04) Calling .GetState
	I0914 17:12:09.216867   31960 status.go:330] ha-929592-m04 host status = "Running" (err=<nil>)
	I0914 17:12:09.216892   31960 host.go:66] Checking if "ha-929592-m04" exists ...
	I0914 17:12:09.217209   31960 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 17:12:09.217252   31960 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 17:12:09.232102   31960 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42829
	I0914 17:12:09.232506   31960 main.go:141] libmachine: () Calling .GetVersion
	I0914 17:12:09.233004   31960 main.go:141] libmachine: Using API Version  1
	I0914 17:12:09.233023   31960 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 17:12:09.233318   31960 main.go:141] libmachine: () Calling .GetMachineName
	I0914 17:12:09.233519   31960 main.go:141] libmachine: (ha-929592-m04) Calling .GetIP
	I0914 17:12:09.236425   31960 main.go:141] libmachine: (ha-929592-m04) DBG | domain ha-929592-m04 has defined MAC address 52:54:00:7a:18:a1 in network mk-ha-929592
	I0914 17:12:09.236849   31960 main.go:141] libmachine: (ha-929592-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:18:a1", ip: ""} in network mk-ha-929592: {Iface:virbr1 ExpiryTime:2024-09-14 18:08:29 +0000 UTC Type:0 Mac:52:54:00:7a:18:a1 Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:ha-929592-m04 Clientid:01:52:54:00:7a:18:a1}
	I0914 17:12:09.236875   31960 main.go:141] libmachine: (ha-929592-m04) DBG | domain ha-929592-m04 has defined IP address 192.168.39.51 and MAC address 52:54:00:7a:18:a1 in network mk-ha-929592
	I0914 17:12:09.237000   31960 host.go:66] Checking if "ha-929592-m04" exists ...
	I0914 17:12:09.237288   31960 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 17:12:09.237329   31960 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 17:12:09.253236   31960 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45889
	I0914 17:12:09.253623   31960 main.go:141] libmachine: () Calling .GetVersion
	I0914 17:12:09.254098   31960 main.go:141] libmachine: Using API Version  1
	I0914 17:12:09.254123   31960 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 17:12:09.254517   31960 main.go:141] libmachine: () Calling .GetMachineName
	I0914 17:12:09.254700   31960 main.go:141] libmachine: (ha-929592-m04) Calling .DriverName
	I0914 17:12:09.254865   31960 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0914 17:12:09.254890   31960 main.go:141] libmachine: (ha-929592-m04) Calling .GetSSHHostname
	I0914 17:12:09.257789   31960 main.go:141] libmachine: (ha-929592-m04) DBG | domain ha-929592-m04 has defined MAC address 52:54:00:7a:18:a1 in network mk-ha-929592
	I0914 17:12:09.258216   31960 main.go:141] libmachine: (ha-929592-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:18:a1", ip: ""} in network mk-ha-929592: {Iface:virbr1 ExpiryTime:2024-09-14 18:08:29 +0000 UTC Type:0 Mac:52:54:00:7a:18:a1 Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:ha-929592-m04 Clientid:01:52:54:00:7a:18:a1}
	I0914 17:12:09.258241   31960 main.go:141] libmachine: (ha-929592-m04) DBG | domain ha-929592-m04 has defined IP address 192.168.39.51 and MAC address 52:54:00:7a:18:a1 in network mk-ha-929592
	I0914 17:12:09.258370   31960 main.go:141] libmachine: (ha-929592-m04) Calling .GetSSHPort
	I0914 17:12:09.258522   31960 main.go:141] libmachine: (ha-929592-m04) Calling .GetSSHKeyPath
	I0914 17:12:09.258672   31960 main.go:141] libmachine: (ha-929592-m04) Calling .GetSSHUsername
	I0914 17:12:09.258789   31960 sshutil.go:53] new ssh client: &{IP:192.168.39.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/ha-929592-m04/id_rsa Username:docker}
	I0914 17:12:09.338686   31960 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 17:12:09.354081   31960 status.go:257] ha-929592-m04 status: &{Name:ha-929592-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:372: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-929592 status -v=7 --alsologtostderr" : exit status 3
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-929592 -n ha-929592
helpers_test.go:244: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/StopSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-929592 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-929592 logs -n 25: (1.37271853s)
helpers_test.go:252: TestMultiControlPlane/serial/StopSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                      Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-929592 cp ha-929592-m03:/home/docker/cp-test.txt                             | ha-929592 | jenkins | v1.34.0 | 14 Sep 24 17:09 UTC | 14 Sep 24 17:09 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile183020175/001/cp-test_ha-929592-m03.txt |           |         |         |                     |                     |
	| ssh     | ha-929592 ssh -n                                                                | ha-929592 | jenkins | v1.34.0 | 14 Sep 24 17:09 UTC | 14 Sep 24 17:09 UTC |
	|         | ha-929592-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| cp      | ha-929592 cp ha-929592-m03:/home/docker/cp-test.txt                             | ha-929592 | jenkins | v1.34.0 | 14 Sep 24 17:09 UTC | 14 Sep 24 17:09 UTC |
	|         | ha-929592:/home/docker/cp-test_ha-929592-m03_ha-929592.txt                      |           |         |         |                     |                     |
	| ssh     | ha-929592 ssh -n                                                                | ha-929592 | jenkins | v1.34.0 | 14 Sep 24 17:09 UTC | 14 Sep 24 17:09 UTC |
	|         | ha-929592-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-929592 ssh -n ha-929592 sudo cat                                             | ha-929592 | jenkins | v1.34.0 | 14 Sep 24 17:09 UTC | 14 Sep 24 17:09 UTC |
	|         | /home/docker/cp-test_ha-929592-m03_ha-929592.txt                                |           |         |         |                     |                     |
	| cp      | ha-929592 cp ha-929592-m03:/home/docker/cp-test.txt                             | ha-929592 | jenkins | v1.34.0 | 14 Sep 24 17:09 UTC | 14 Sep 24 17:09 UTC |
	|         | ha-929592-m02:/home/docker/cp-test_ha-929592-m03_ha-929592-m02.txt              |           |         |         |                     |                     |
	| ssh     | ha-929592 ssh -n                                                                | ha-929592 | jenkins | v1.34.0 | 14 Sep 24 17:09 UTC | 14 Sep 24 17:09 UTC |
	|         | ha-929592-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-929592 ssh -n ha-929592-m02 sudo cat                                         | ha-929592 | jenkins | v1.34.0 | 14 Sep 24 17:09 UTC | 14 Sep 24 17:09 UTC |
	|         | /home/docker/cp-test_ha-929592-m03_ha-929592-m02.txt                            |           |         |         |                     |                     |
	| cp      | ha-929592 cp ha-929592-m03:/home/docker/cp-test.txt                             | ha-929592 | jenkins | v1.34.0 | 14 Sep 24 17:09 UTC | 14 Sep 24 17:09 UTC |
	|         | ha-929592-m04:/home/docker/cp-test_ha-929592-m03_ha-929592-m04.txt              |           |         |         |                     |                     |
	| ssh     | ha-929592 ssh -n                                                                | ha-929592 | jenkins | v1.34.0 | 14 Sep 24 17:09 UTC | 14 Sep 24 17:09 UTC |
	|         | ha-929592-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-929592 ssh -n ha-929592-m04 sudo cat                                         | ha-929592 | jenkins | v1.34.0 | 14 Sep 24 17:09 UTC | 14 Sep 24 17:09 UTC |
	|         | /home/docker/cp-test_ha-929592-m03_ha-929592-m04.txt                            |           |         |         |                     |                     |
	| cp      | ha-929592 cp testdata/cp-test.txt                                               | ha-929592 | jenkins | v1.34.0 | 14 Sep 24 17:09 UTC | 14 Sep 24 17:09 UTC |
	|         | ha-929592-m04:/home/docker/cp-test.txt                                          |           |         |         |                     |                     |
	| ssh     | ha-929592 ssh -n                                                                | ha-929592 | jenkins | v1.34.0 | 14 Sep 24 17:09 UTC | 14 Sep 24 17:09 UTC |
	|         | ha-929592-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| cp      | ha-929592 cp ha-929592-m04:/home/docker/cp-test.txt                             | ha-929592 | jenkins | v1.34.0 | 14 Sep 24 17:09 UTC | 14 Sep 24 17:09 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile183020175/001/cp-test_ha-929592-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-929592 ssh -n                                                                | ha-929592 | jenkins | v1.34.0 | 14 Sep 24 17:09 UTC | 14 Sep 24 17:09 UTC |
	|         | ha-929592-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| cp      | ha-929592 cp ha-929592-m04:/home/docker/cp-test.txt                             | ha-929592 | jenkins | v1.34.0 | 14 Sep 24 17:09 UTC | 14 Sep 24 17:09 UTC |
	|         | ha-929592:/home/docker/cp-test_ha-929592-m04_ha-929592.txt                      |           |         |         |                     |                     |
	| ssh     | ha-929592 ssh -n                                                                | ha-929592 | jenkins | v1.34.0 | 14 Sep 24 17:09 UTC | 14 Sep 24 17:09 UTC |
	|         | ha-929592-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-929592 ssh -n ha-929592 sudo cat                                             | ha-929592 | jenkins | v1.34.0 | 14 Sep 24 17:09 UTC | 14 Sep 24 17:09 UTC |
	|         | /home/docker/cp-test_ha-929592-m04_ha-929592.txt                                |           |         |         |                     |                     |
	| cp      | ha-929592 cp ha-929592-m04:/home/docker/cp-test.txt                             | ha-929592 | jenkins | v1.34.0 | 14 Sep 24 17:09 UTC | 14 Sep 24 17:09 UTC |
	|         | ha-929592-m02:/home/docker/cp-test_ha-929592-m04_ha-929592-m02.txt              |           |         |         |                     |                     |
	| ssh     | ha-929592 ssh -n                                                                | ha-929592 | jenkins | v1.34.0 | 14 Sep 24 17:09 UTC | 14 Sep 24 17:09 UTC |
	|         | ha-929592-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-929592 ssh -n ha-929592-m02 sudo cat                                         | ha-929592 | jenkins | v1.34.0 | 14 Sep 24 17:09 UTC | 14 Sep 24 17:09 UTC |
	|         | /home/docker/cp-test_ha-929592-m04_ha-929592-m02.txt                            |           |         |         |                     |                     |
	| cp      | ha-929592 cp ha-929592-m04:/home/docker/cp-test.txt                             | ha-929592 | jenkins | v1.34.0 | 14 Sep 24 17:09 UTC | 14 Sep 24 17:09 UTC |
	|         | ha-929592-m03:/home/docker/cp-test_ha-929592-m04_ha-929592-m03.txt              |           |         |         |                     |                     |
	| ssh     | ha-929592 ssh -n                                                                | ha-929592 | jenkins | v1.34.0 | 14 Sep 24 17:09 UTC | 14 Sep 24 17:09 UTC |
	|         | ha-929592-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-929592 ssh -n ha-929592-m03 sudo cat                                         | ha-929592 | jenkins | v1.34.0 | 14 Sep 24 17:09 UTC | 14 Sep 24 17:09 UTC |
	|         | /home/docker/cp-test_ha-929592-m04_ha-929592-m03.txt                            |           |         |         |                     |                     |
	| node    | ha-929592 node stop m02 -v=7                                                    | ha-929592 | jenkins | v1.34.0 | 14 Sep 24 17:09 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/14 17:04:52
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0914 17:04:52.362054   27433 out.go:345] Setting OutFile to fd 1 ...
	I0914 17:04:52.362146   27433 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 17:04:52.362153   27433 out.go:358] Setting ErrFile to fd 2...
	I0914 17:04:52.362178   27433 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 17:04:52.362345   27433 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19643-8806/.minikube/bin
	I0914 17:04:52.362903   27433 out.go:352] Setting JSON to false
	I0914 17:04:52.363751   27433 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":2836,"bootTime":1726330656,"procs":179,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0914 17:04:52.363836   27433 start.go:139] virtualization: kvm guest
	I0914 17:04:52.365931   27433 out.go:177] * [ha-929592] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0914 17:04:52.367340   27433 out.go:177]   - MINIKUBE_LOCATION=19643
	I0914 17:04:52.367368   27433 notify.go:220] Checking for updates...
	I0914 17:04:52.369803   27433 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0914 17:04:52.371197   27433 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19643-8806/kubeconfig
	I0914 17:04:52.372343   27433 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19643-8806/.minikube
	I0914 17:04:52.373702   27433 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0914 17:04:52.375185   27433 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0914 17:04:52.376686   27433 driver.go:394] Setting default libvirt URI to qemu:///system
	I0914 17:04:52.411200   27433 out.go:177] * Using the kvm2 driver based on user configuration
	I0914 17:04:52.412455   27433 start.go:297] selected driver: kvm2
	I0914 17:04:52.412471   27433 start.go:901] validating driver "kvm2" against <nil>
	I0914 17:04:52.412482   27433 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0914 17:04:52.413158   27433 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 17:04:52.413241   27433 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19643-8806/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0914 17:04:52.428264   27433 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0914 17:04:52.428311   27433 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0914 17:04:52.428555   27433 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0914 17:04:52.428590   27433 cni.go:84] Creating CNI manager for ""
	I0914 17:04:52.428628   27433 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0914 17:04:52.428637   27433 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0914 17:04:52.428695   27433 start.go:340] cluster config:
	{Name:ha-929592 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726281268-19643@sha256:cce8be0c1ac4e3d852132008ef1cc1dcf5b79f708d025db83f146ae65db32e8e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-929592 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0914 17:04:52.428780   27433 iso.go:125] acquiring lock: {Name:mk538f7a4abc7956f82511183088cbfac1e66ebb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 17:04:52.430437   27433 out.go:177] * Starting "ha-929592" primary control-plane node in "ha-929592" cluster
	I0914 17:04:52.431767   27433 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0914 17:04:52.431815   27433 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19643-8806/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0914 17:04:52.431830   27433 cache.go:56] Caching tarball of preloaded images
	I0914 17:04:52.431915   27433 preload.go:172] Found /home/jenkins/minikube-integration/19643-8806/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0914 17:04:52.431928   27433 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0914 17:04:52.432228   27433 profile.go:143] Saving config to /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/ha-929592/config.json ...
	I0914 17:04:52.432252   27433 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/ha-929592/config.json: {Name:mk927977c49e49be76a6abcc15d8cb1926577c9b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 17:04:52.432402   27433 start.go:360] acquireMachinesLock for ha-929592: {Name:mk26748ac63472c3f8b6f3848c12e76160c2970c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0914 17:04:52.432445   27433 start.go:364] duration metric: took 26.853µs to acquireMachinesLock for "ha-929592"
	I0914 17:04:52.432468   27433 start.go:93] Provisioning new machine with config: &{Name:ha-929592 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19643/minikube-v1.34.0-1726281733-19643-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726281268-19643@sha256:cce8be0c1ac4e3d852132008ef1cc1dcf5b79f708d025db83f146ae65db32e8e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-929592 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0914 17:04:52.432530   27433 start.go:125] createHost starting for "" (driver="kvm2")
	I0914 17:04:52.434080   27433 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0914 17:04:52.434231   27433 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 17:04:52.434275   27433 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 17:04:52.448453   27433 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45659
	I0914 17:04:52.448925   27433 main.go:141] libmachine: () Calling .GetVersion
	I0914 17:04:52.449473   27433 main.go:141] libmachine: Using API Version  1
	I0914 17:04:52.449492   27433 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 17:04:52.449795   27433 main.go:141] libmachine: () Calling .GetMachineName
	I0914 17:04:52.449949   27433 main.go:141] libmachine: (ha-929592) Calling .GetMachineName
	I0914 17:04:52.450074   27433 main.go:141] libmachine: (ha-929592) Calling .DriverName
	I0914 17:04:52.450204   27433 start.go:159] libmachine.API.Create for "ha-929592" (driver="kvm2")
	I0914 17:04:52.450257   27433 client.go:168] LocalClient.Create starting
	I0914 17:04:52.450297   27433 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca.pem
	I0914 17:04:52.450339   27433 main.go:141] libmachine: Decoding PEM data...
	I0914 17:04:52.450352   27433 main.go:141] libmachine: Parsing certificate...
	I0914 17:04:52.450410   27433 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19643-8806/.minikube/certs/cert.pem
	I0914 17:04:52.450428   27433 main.go:141] libmachine: Decoding PEM data...
	I0914 17:04:52.450446   27433 main.go:141] libmachine: Parsing certificate...
	I0914 17:04:52.450462   27433 main.go:141] libmachine: Running pre-create checks...
	I0914 17:04:52.450469   27433 main.go:141] libmachine: (ha-929592) Calling .PreCreateCheck
	I0914 17:04:52.450755   27433 main.go:141] libmachine: (ha-929592) Calling .GetConfigRaw
	I0914 17:04:52.451089   27433 main.go:141] libmachine: Creating machine...
	I0914 17:04:52.451101   27433 main.go:141] libmachine: (ha-929592) Calling .Create
	I0914 17:04:52.451265   27433 main.go:141] libmachine: (ha-929592) Creating KVM machine...
	I0914 17:04:52.452544   27433 main.go:141] libmachine: (ha-929592) DBG | found existing default KVM network
	I0914 17:04:52.453240   27433 main.go:141] libmachine: (ha-929592) DBG | I0914 17:04:52.453090   27456 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0001231e0}
	I0914 17:04:52.453254   27433 main.go:141] libmachine: (ha-929592) DBG | created network xml: 
	I0914 17:04:52.453263   27433 main.go:141] libmachine: (ha-929592) DBG | <network>
	I0914 17:04:52.453268   27433 main.go:141] libmachine: (ha-929592) DBG |   <name>mk-ha-929592</name>
	I0914 17:04:52.453273   27433 main.go:141] libmachine: (ha-929592) DBG |   <dns enable='no'/>
	I0914 17:04:52.453277   27433 main.go:141] libmachine: (ha-929592) DBG |   
	I0914 17:04:52.453282   27433 main.go:141] libmachine: (ha-929592) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0914 17:04:52.453287   27433 main.go:141] libmachine: (ha-929592) DBG |     <dhcp>
	I0914 17:04:52.453296   27433 main.go:141] libmachine: (ha-929592) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0914 17:04:52.453305   27433 main.go:141] libmachine: (ha-929592) DBG |     </dhcp>
	I0914 17:04:52.453332   27433 main.go:141] libmachine: (ha-929592) DBG |   </ip>
	I0914 17:04:52.453342   27433 main.go:141] libmachine: (ha-929592) DBG |   
	I0914 17:04:52.453348   27433 main.go:141] libmachine: (ha-929592) DBG | </network>
	I0914 17:04:52.453354   27433 main.go:141] libmachine: (ha-929592) DBG | 
	I0914 17:04:52.458689   27433 main.go:141] libmachine: (ha-929592) DBG | trying to create private KVM network mk-ha-929592 192.168.39.0/24...
	I0914 17:04:52.525127   27433 main.go:141] libmachine: (ha-929592) DBG | private KVM network mk-ha-929592 192.168.39.0/24 created
	I0914 17:04:52.525229   27433 main.go:141] libmachine: (ha-929592) DBG | I0914 17:04:52.525091   27456 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19643-8806/.minikube
	I0914 17:04:52.525274   27433 main.go:141] libmachine: (ha-929592) Setting up store path in /home/jenkins/minikube-integration/19643-8806/.minikube/machines/ha-929592 ...
	I0914 17:04:52.525325   27433 main.go:141] libmachine: (ha-929592) Building disk image from file:///home/jenkins/minikube-integration/19643-8806/.minikube/cache/iso/amd64/minikube-v1.34.0-1726281733-19643-amd64.iso
	I0914 17:04:52.525357   27433 main.go:141] libmachine: (ha-929592) Downloading /home/jenkins/minikube-integration/19643-8806/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19643-8806/.minikube/cache/iso/amd64/minikube-v1.34.0-1726281733-19643-amd64.iso...
	I0914 17:04:52.774096   27433 main.go:141] libmachine: (ha-929592) DBG | I0914 17:04:52.773983   27456 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19643-8806/.minikube/machines/ha-929592/id_rsa...
	I0914 17:04:52.881126   27433 main.go:141] libmachine: (ha-929592) DBG | I0914 17:04:52.880973   27456 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19643-8806/.minikube/machines/ha-929592/ha-929592.rawdisk...
	I0914 17:04:52.881154   27433 main.go:141] libmachine: (ha-929592) DBG | Writing magic tar header
	I0914 17:04:52.881164   27433 main.go:141] libmachine: (ha-929592) DBG | Writing SSH key tar header
	I0914 17:04:52.881177   27433 main.go:141] libmachine: (ha-929592) DBG | I0914 17:04:52.881094   27456 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19643-8806/.minikube/machines/ha-929592 ...
	I0914 17:04:52.881188   27433 main.go:141] libmachine: (ha-929592) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19643-8806/.minikube/machines/ha-929592
	I0914 17:04:52.881234   27433 main.go:141] libmachine: (ha-929592) Setting executable bit set on /home/jenkins/minikube-integration/19643-8806/.minikube/machines/ha-929592 (perms=drwx------)
	I0914 17:04:52.881256   27433 main.go:141] libmachine: (ha-929592) Setting executable bit set on /home/jenkins/minikube-integration/19643-8806/.minikube/machines (perms=drwxr-xr-x)
	I0914 17:04:52.881264   27433 main.go:141] libmachine: (ha-929592) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19643-8806/.minikube/machines
	I0914 17:04:52.881273   27433 main.go:141] libmachine: (ha-929592) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19643-8806/.minikube
	I0914 17:04:52.881279   27433 main.go:141] libmachine: (ha-929592) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19643-8806
	I0914 17:04:52.881285   27433 main.go:141] libmachine: (ha-929592) Setting executable bit set on /home/jenkins/minikube-integration/19643-8806/.minikube (perms=drwxr-xr-x)
	I0914 17:04:52.881291   27433 main.go:141] libmachine: (ha-929592) Setting executable bit set on /home/jenkins/minikube-integration/19643-8806 (perms=drwxrwxr-x)
	I0914 17:04:52.881298   27433 main.go:141] libmachine: (ha-929592) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0914 17:04:52.881309   27433 main.go:141] libmachine: (ha-929592) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0914 17:04:52.881316   27433 main.go:141] libmachine: (ha-929592) Creating domain...
	I0914 17:04:52.881324   27433 main.go:141] libmachine: (ha-929592) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0914 17:04:52.881329   27433 main.go:141] libmachine: (ha-929592) DBG | Checking permissions on dir: /home/jenkins
	I0914 17:04:52.881354   27433 main.go:141] libmachine: (ha-929592) DBG | Checking permissions on dir: /home
	I0914 17:04:52.881378   27433 main.go:141] libmachine: (ha-929592) DBG | Skipping /home - not owner
	I0914 17:04:52.882446   27433 main.go:141] libmachine: (ha-929592) define libvirt domain using xml: 
	I0914 17:04:52.882460   27433 main.go:141] libmachine: (ha-929592) <domain type='kvm'>
	I0914 17:04:52.882465   27433 main.go:141] libmachine: (ha-929592)   <name>ha-929592</name>
	I0914 17:04:52.882470   27433 main.go:141] libmachine: (ha-929592)   <memory unit='MiB'>2200</memory>
	I0914 17:04:52.882475   27433 main.go:141] libmachine: (ha-929592)   <vcpu>2</vcpu>
	I0914 17:04:52.882479   27433 main.go:141] libmachine: (ha-929592)   <features>
	I0914 17:04:52.882483   27433 main.go:141] libmachine: (ha-929592)     <acpi/>
	I0914 17:04:52.882486   27433 main.go:141] libmachine: (ha-929592)     <apic/>
	I0914 17:04:52.882491   27433 main.go:141] libmachine: (ha-929592)     <pae/>
	I0914 17:04:52.882499   27433 main.go:141] libmachine: (ha-929592)     
	I0914 17:04:52.882504   27433 main.go:141] libmachine: (ha-929592)   </features>
	I0914 17:04:52.882510   27433 main.go:141] libmachine: (ha-929592)   <cpu mode='host-passthrough'>
	I0914 17:04:52.882515   27433 main.go:141] libmachine: (ha-929592)   
	I0914 17:04:52.882521   27433 main.go:141] libmachine: (ha-929592)   </cpu>
	I0914 17:04:52.882528   27433 main.go:141] libmachine: (ha-929592)   <os>
	I0914 17:04:52.882537   27433 main.go:141] libmachine: (ha-929592)     <type>hvm</type>
	I0914 17:04:52.882571   27433 main.go:141] libmachine: (ha-929592)     <boot dev='cdrom'/>
	I0914 17:04:52.882588   27433 main.go:141] libmachine: (ha-929592)     <boot dev='hd'/>
	I0914 17:04:52.882595   27433 main.go:141] libmachine: (ha-929592)     <bootmenu enable='no'/>
	I0914 17:04:52.882600   27433 main.go:141] libmachine: (ha-929592)   </os>
	I0914 17:04:52.882605   27433 main.go:141] libmachine: (ha-929592)   <devices>
	I0914 17:04:52.882628   27433 main.go:141] libmachine: (ha-929592)     <disk type='file' device='cdrom'>
	I0914 17:04:52.882647   27433 main.go:141] libmachine: (ha-929592)       <source file='/home/jenkins/minikube-integration/19643-8806/.minikube/machines/ha-929592/boot2docker.iso'/>
	I0914 17:04:52.882656   27433 main.go:141] libmachine: (ha-929592)       <target dev='hdc' bus='scsi'/>
	I0914 17:04:52.882665   27433 main.go:141] libmachine: (ha-929592)       <readonly/>
	I0914 17:04:52.882672   27433 main.go:141] libmachine: (ha-929592)     </disk>
	I0914 17:04:52.882686   27433 main.go:141] libmachine: (ha-929592)     <disk type='file' device='disk'>
	I0914 17:04:52.882693   27433 main.go:141] libmachine: (ha-929592)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0914 17:04:52.882714   27433 main.go:141] libmachine: (ha-929592)       <source file='/home/jenkins/minikube-integration/19643-8806/.minikube/machines/ha-929592/ha-929592.rawdisk'/>
	I0914 17:04:52.882722   27433 main.go:141] libmachine: (ha-929592)       <target dev='hda' bus='virtio'/>
	I0914 17:04:52.882743   27433 main.go:141] libmachine: (ha-929592)     </disk>
	I0914 17:04:52.882758   27433 main.go:141] libmachine: (ha-929592)     <interface type='network'>
	I0914 17:04:52.882772   27433 main.go:141] libmachine: (ha-929592)       <source network='mk-ha-929592'/>
	I0914 17:04:52.882783   27433 main.go:141] libmachine: (ha-929592)       <model type='virtio'/>
	I0914 17:04:52.882792   27433 main.go:141] libmachine: (ha-929592)     </interface>
	I0914 17:04:52.882799   27433 main.go:141] libmachine: (ha-929592)     <interface type='network'>
	I0914 17:04:52.882806   27433 main.go:141] libmachine: (ha-929592)       <source network='default'/>
	I0914 17:04:52.882813   27433 main.go:141] libmachine: (ha-929592)       <model type='virtio'/>
	I0914 17:04:52.882825   27433 main.go:141] libmachine: (ha-929592)     </interface>
	I0914 17:04:52.882838   27433 main.go:141] libmachine: (ha-929592)     <serial type='pty'>
	I0914 17:04:52.882854   27433 main.go:141] libmachine: (ha-929592)       <target port='0'/>
	I0914 17:04:52.882873   27433 main.go:141] libmachine: (ha-929592)     </serial>
	I0914 17:04:52.882886   27433 main.go:141] libmachine: (ha-929592)     <console type='pty'>
	I0914 17:04:52.882898   27433 main.go:141] libmachine: (ha-929592)       <target type='serial' port='0'/>
	I0914 17:04:52.882913   27433 main.go:141] libmachine: (ha-929592)     </console>
	I0914 17:04:52.882926   27433 main.go:141] libmachine: (ha-929592)     <rng model='virtio'>
	I0914 17:04:52.882934   27433 main.go:141] libmachine: (ha-929592)       <backend model='random'>/dev/random</backend>
	I0914 17:04:52.882945   27433 main.go:141] libmachine: (ha-929592)     </rng>
	I0914 17:04:52.882959   27433 main.go:141] libmachine: (ha-929592)     
	I0914 17:04:52.882968   27433 main.go:141] libmachine: (ha-929592)     
	I0914 17:04:52.882983   27433 main.go:141] libmachine: (ha-929592)   </devices>
	I0914 17:04:52.883000   27433 main.go:141] libmachine: (ha-929592) </domain>
	I0914 17:04:52.883015   27433 main.go:141] libmachine: (ha-929592) 
	I0914 17:04:52.887250   27433 main.go:141] libmachine: (ha-929592) DBG | domain ha-929592 has defined MAC address 52:54:00:22:db:e9 in network default
	I0914 17:04:52.887768   27433 main.go:141] libmachine: (ha-929592) Ensuring networks are active...
	I0914 17:04:52.887783   27433 main.go:141] libmachine: (ha-929592) DBG | domain ha-929592 has defined MAC address 52:54:00:5c:cb:09 in network mk-ha-929592
	I0914 17:04:52.888465   27433 main.go:141] libmachine: (ha-929592) Ensuring network default is active
	I0914 17:04:52.888708   27433 main.go:141] libmachine: (ha-929592) Ensuring network mk-ha-929592 is active
	I0914 17:04:52.889130   27433 main.go:141] libmachine: (ha-929592) Getting domain xml...
	I0914 17:04:52.889771   27433 main.go:141] libmachine: (ha-929592) Creating domain...
	I0914 17:04:54.076007   27433 main.go:141] libmachine: (ha-929592) Waiting to get IP...
	I0914 17:04:54.076817   27433 main.go:141] libmachine: (ha-929592) DBG | domain ha-929592 has defined MAC address 52:54:00:5c:cb:09 in network mk-ha-929592
	I0914 17:04:54.077204   27433 main.go:141] libmachine: (ha-929592) DBG | unable to find current IP address of domain ha-929592 in network mk-ha-929592
	I0914 17:04:54.077232   27433 main.go:141] libmachine: (ha-929592) DBG | I0914 17:04:54.077176   27456 retry.go:31] will retry after 289.776154ms: waiting for machine to come up
	I0914 17:04:54.368800   27433 main.go:141] libmachine: (ha-929592) DBG | domain ha-929592 has defined MAC address 52:54:00:5c:cb:09 in network mk-ha-929592
	I0914 17:04:54.369197   27433 main.go:141] libmachine: (ha-929592) DBG | unable to find current IP address of domain ha-929592 in network mk-ha-929592
	I0914 17:04:54.369231   27433 main.go:141] libmachine: (ha-929592) DBG | I0914 17:04:54.369159   27456 retry.go:31] will retry after 265.691042ms: waiting for machine to come up
	I0914 17:04:54.636587   27433 main.go:141] libmachine: (ha-929592) DBG | domain ha-929592 has defined MAC address 52:54:00:5c:cb:09 in network mk-ha-929592
	I0914 17:04:54.637014   27433 main.go:141] libmachine: (ha-929592) DBG | unable to find current IP address of domain ha-929592 in network mk-ha-929592
	I0914 17:04:54.637035   27433 main.go:141] libmachine: (ha-929592) DBG | I0914 17:04:54.636957   27456 retry.go:31] will retry after 390.775829ms: waiting for machine to come up
	I0914 17:04:55.029563   27433 main.go:141] libmachine: (ha-929592) DBG | domain ha-929592 has defined MAC address 52:54:00:5c:cb:09 in network mk-ha-929592
	I0914 17:04:55.030053   27433 main.go:141] libmachine: (ha-929592) DBG | unable to find current IP address of domain ha-929592 in network mk-ha-929592
	I0914 17:04:55.030087   27433 main.go:141] libmachine: (ha-929592) DBG | I0914 17:04:55.030001   27456 retry.go:31] will retry after 506.591115ms: waiting for machine to come up
	I0914 17:04:55.538684   27433 main.go:141] libmachine: (ha-929592) DBG | domain ha-929592 has defined MAC address 52:54:00:5c:cb:09 in network mk-ha-929592
	I0914 17:04:55.539180   27433 main.go:141] libmachine: (ha-929592) DBG | unable to find current IP address of domain ha-929592 in network mk-ha-929592
	I0914 17:04:55.539200   27433 main.go:141] libmachine: (ha-929592) DBG | I0914 17:04:55.539139   27456 retry.go:31] will retry after 621.472095ms: waiting for machine to come up
	I0914 17:04:56.162029   27433 main.go:141] libmachine: (ha-929592) DBG | domain ha-929592 has defined MAC address 52:54:00:5c:cb:09 in network mk-ha-929592
	I0914 17:04:56.162541   27433 main.go:141] libmachine: (ha-929592) DBG | unable to find current IP address of domain ha-929592 in network mk-ha-929592
	I0914 17:04:56.162566   27433 main.go:141] libmachine: (ha-929592) DBG | I0914 17:04:56.162479   27456 retry.go:31] will retry after 848.82904ms: waiting for machine to come up
	I0914 17:04:57.013633   27433 main.go:141] libmachine: (ha-929592) DBG | domain ha-929592 has defined MAC address 52:54:00:5c:cb:09 in network mk-ha-929592
	I0914 17:04:57.014033   27433 main.go:141] libmachine: (ha-929592) DBG | unable to find current IP address of domain ha-929592 in network mk-ha-929592
	I0914 17:04:57.014061   27433 main.go:141] libmachine: (ha-929592) DBG | I0914 17:04:57.013991   27456 retry.go:31] will retry after 880.018076ms: waiting for machine to come up
	I0914 17:04:57.895459   27433 main.go:141] libmachine: (ha-929592) DBG | domain ha-929592 has defined MAC address 52:54:00:5c:cb:09 in network mk-ha-929592
	I0914 17:04:57.895811   27433 main.go:141] libmachine: (ha-929592) DBG | unable to find current IP address of domain ha-929592 in network mk-ha-929592
	I0914 17:04:57.895841   27433 main.go:141] libmachine: (ha-929592) DBG | I0914 17:04:57.895774   27456 retry.go:31] will retry after 1.44160062s: waiting for machine to come up
	I0914 17:04:59.339444   27433 main.go:141] libmachine: (ha-929592) DBG | domain ha-929592 has defined MAC address 52:54:00:5c:cb:09 in network mk-ha-929592
	I0914 17:04:59.339868   27433 main.go:141] libmachine: (ha-929592) DBG | unable to find current IP address of domain ha-929592 in network mk-ha-929592
	I0914 17:04:59.339895   27433 main.go:141] libmachine: (ha-929592) DBG | I0914 17:04:59.339826   27456 retry.go:31] will retry after 1.541818405s: waiting for machine to come up
	I0914 17:05:00.883498   27433 main.go:141] libmachine: (ha-929592) DBG | domain ha-929592 has defined MAC address 52:54:00:5c:cb:09 in network mk-ha-929592
	I0914 17:05:00.883924   27433 main.go:141] libmachine: (ha-929592) DBG | unable to find current IP address of domain ha-929592 in network mk-ha-929592
	I0914 17:05:00.883952   27433 main.go:141] libmachine: (ha-929592) DBG | I0914 17:05:00.883880   27456 retry.go:31] will retry after 1.975015362s: waiting for machine to come up
	I0914 17:05:02.860808   27433 main.go:141] libmachine: (ha-929592) DBG | domain ha-929592 has defined MAC address 52:54:00:5c:cb:09 in network mk-ha-929592
	I0914 17:05:02.861230   27433 main.go:141] libmachine: (ha-929592) DBG | unable to find current IP address of domain ha-929592 in network mk-ha-929592
	I0914 17:05:02.861255   27433 main.go:141] libmachine: (ha-929592) DBG | I0914 17:05:02.861183   27456 retry.go:31] will retry after 2.375239154s: waiting for machine to come up
	I0914 17:05:05.239145   27433 main.go:141] libmachine: (ha-929592) DBG | domain ha-929592 has defined MAC address 52:54:00:5c:cb:09 in network mk-ha-929592
	I0914 17:05:05.239513   27433 main.go:141] libmachine: (ha-929592) DBG | unable to find current IP address of domain ha-929592 in network mk-ha-929592
	I0914 17:05:05.239541   27433 main.go:141] libmachine: (ha-929592) DBG | I0914 17:05:05.239466   27456 retry.go:31] will retry after 3.274936242s: waiting for machine to come up
	I0914 17:05:08.516310   27433 main.go:141] libmachine: (ha-929592) DBG | domain ha-929592 has defined MAC address 52:54:00:5c:cb:09 in network mk-ha-929592
	I0914 17:05:08.516591   27433 main.go:141] libmachine: (ha-929592) DBG | unable to find current IP address of domain ha-929592 in network mk-ha-929592
	I0914 17:05:08.516616   27433 main.go:141] libmachine: (ha-929592) DBG | I0914 17:05:08.516555   27456 retry.go:31] will retry after 3.972681773s: waiting for machine to come up
	I0914 17:05:12.490473   27433 main.go:141] libmachine: (ha-929592) DBG | domain ha-929592 has defined MAC address 52:54:00:5c:cb:09 in network mk-ha-929592
	I0914 17:05:12.490970   27433 main.go:141] libmachine: (ha-929592) Found IP for machine: 192.168.39.54
	I0914 17:05:12.490998   27433 main.go:141] libmachine: (ha-929592) DBG | domain ha-929592 has current primary IP address 192.168.39.54 and MAC address 52:54:00:5c:cb:09 in network mk-ha-929592
	I0914 17:05:12.491007   27433 main.go:141] libmachine: (ha-929592) Reserving static IP address...
	I0914 17:05:12.491334   27433 main.go:141] libmachine: (ha-929592) DBG | unable to find host DHCP lease matching {name: "ha-929592", mac: "52:54:00:5c:cb:09", ip: "192.168.39.54"} in network mk-ha-929592
	I0914 17:05:12.563614   27433 main.go:141] libmachine: (ha-929592) DBG | Getting to WaitForSSH function...
	I0914 17:05:12.563645   27433 main.go:141] libmachine: (ha-929592) Reserved static IP address: 192.168.39.54
	I0914 17:05:12.563685   27433 main.go:141] libmachine: (ha-929592) Waiting for SSH to be available...
	I0914 17:05:12.566031   27433 main.go:141] libmachine: (ha-929592) DBG | domain ha-929592 has defined MAC address 52:54:00:5c:cb:09 in network mk-ha-929592
	I0914 17:05:12.566381   27433 main.go:141] libmachine: (ha-929592) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:cb:09", ip: ""} in network mk-ha-929592: {Iface:virbr1 ExpiryTime:2024-09-14 18:05:06 +0000 UTC Type:0 Mac:52:54:00:5c:cb:09 Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:minikube Clientid:01:52:54:00:5c:cb:09}
	I0914 17:05:12.566408   27433 main.go:141] libmachine: (ha-929592) DBG | domain ha-929592 has defined IP address 192.168.39.54 and MAC address 52:54:00:5c:cb:09 in network mk-ha-929592
	I0914 17:05:12.566585   27433 main.go:141] libmachine: (ha-929592) DBG | Using SSH client type: external
	I0914 17:05:12.566611   27433 main.go:141] libmachine: (ha-929592) DBG | Using SSH private key: /home/jenkins/minikube-integration/19643-8806/.minikube/machines/ha-929592/id_rsa (-rw-------)
	I0914 17:05:12.566652   27433 main.go:141] libmachine: (ha-929592) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.54 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19643-8806/.minikube/machines/ha-929592/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0914 17:05:12.566667   27433 main.go:141] libmachine: (ha-929592) DBG | About to run SSH command:
	I0914 17:05:12.566679   27433 main.go:141] libmachine: (ha-929592) DBG | exit 0
	I0914 17:05:12.693896   27433 main.go:141] libmachine: (ha-929592) DBG | SSH cmd err, output: <nil>: 
	I0914 17:05:12.694183   27433 main.go:141] libmachine: (ha-929592) KVM machine creation complete!
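The retry.go lines above trace a simple poll-and-backoff wait: the driver repeatedly asks libvirt for the domain's DHCP lease and sleeps a little longer after each miss until the IP appears. A minimal Go sketch of that pattern follows; the helper names (lookupIP, waitForIP) and the jittered, roughly geometric backoff schedule are illustrative assumptions, not minikube's actual retry implementation.

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// lookupIP stands in for querying libvirt's DHCP leases for the domain's IP
// address; it is a placeholder for this sketch and always fails here.
func lookupIP(domain string) (string, error) {
	return "", errors.New("no lease yet")
}

// waitForIP polls lookupIP until it succeeds or the deadline passes, sleeping
// a little longer (with jitter) after every failed attempt, much like the
// "will retry after ..." messages in the log above.
func waitForIP(domain string, timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	backoff := 200 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, err := lookupIP(domain); err == nil {
			return ip, nil
		}
		sleep := backoff + time.Duration(rand.Int63n(int64(backoff/2)))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
		time.Sleep(sleep)
		if backoff < 4*time.Second {
			backoff += backoff / 2 // grow roughly geometrically, as in the log
		}
	}
	return "", fmt.Errorf("timed out waiting for IP of %s", domain)
}

func main() {
	if _, err := waitForIP("ha-929592", 2*time.Second); err != nil {
		fmt.Println(err)
	}
}
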
	I0914 17:05:12.694564   27433 main.go:141] libmachine: (ha-929592) Calling .GetConfigRaw
	I0914 17:05:12.695129   27433 main.go:141] libmachine: (ha-929592) Calling .DriverName
	I0914 17:05:12.695377   27433 main.go:141] libmachine: (ha-929592) Calling .DriverName
	I0914 17:05:12.695534   27433 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0914 17:05:12.695545   27433 main.go:141] libmachine: (ha-929592) Calling .GetState
	I0914 17:05:12.696807   27433 main.go:141] libmachine: Detecting operating system of created instance...
	I0914 17:05:12.696834   27433 main.go:141] libmachine: Waiting for SSH to be available...
	I0914 17:05:12.696840   27433 main.go:141] libmachine: Getting to WaitForSSH function...
	I0914 17:05:12.696848   27433 main.go:141] libmachine: (ha-929592) Calling .GetSSHHostname
	I0914 17:05:12.699238   27433 main.go:141] libmachine: (ha-929592) DBG | domain ha-929592 has defined MAC address 52:54:00:5c:cb:09 in network mk-ha-929592
	I0914 17:05:12.699685   27433 main.go:141] libmachine: (ha-929592) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:cb:09", ip: ""} in network mk-ha-929592: {Iface:virbr1 ExpiryTime:2024-09-14 18:05:06 +0000 UTC Type:0 Mac:52:54:00:5c:cb:09 Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:ha-929592 Clientid:01:52:54:00:5c:cb:09}
	I0914 17:05:12.699706   27433 main.go:141] libmachine: (ha-929592) DBG | domain ha-929592 has defined IP address 192.168.39.54 and MAC address 52:54:00:5c:cb:09 in network mk-ha-929592
	I0914 17:05:12.699954   27433 main.go:141] libmachine: (ha-929592) Calling .GetSSHPort
	I0914 17:05:12.700173   27433 main.go:141] libmachine: (ha-929592) Calling .GetSSHKeyPath
	I0914 17:05:12.700340   27433 main.go:141] libmachine: (ha-929592) Calling .GetSSHKeyPath
	I0914 17:05:12.700444   27433 main.go:141] libmachine: (ha-929592) Calling .GetSSHUsername
	I0914 17:05:12.700611   27433 main.go:141] libmachine: Using SSH client type: native
	I0914 17:05:12.700834   27433 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.54 22 <nil> <nil>}
	I0914 17:05:12.700846   27433 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0914 17:05:12.813402   27433 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0914 17:05:12.813419   27433 main.go:141] libmachine: Detecting the provisioner...
	I0914 17:05:12.813429   27433 main.go:141] libmachine: (ha-929592) Calling .GetSSHHostname
	I0914 17:05:12.816165   27433 main.go:141] libmachine: (ha-929592) DBG | domain ha-929592 has defined MAC address 52:54:00:5c:cb:09 in network mk-ha-929592
	I0914 17:05:12.816480   27433 main.go:141] libmachine: (ha-929592) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:cb:09", ip: ""} in network mk-ha-929592: {Iface:virbr1 ExpiryTime:2024-09-14 18:05:06 +0000 UTC Type:0 Mac:52:54:00:5c:cb:09 Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:ha-929592 Clientid:01:52:54:00:5c:cb:09}
	I0914 17:05:12.816510   27433 main.go:141] libmachine: (ha-929592) DBG | domain ha-929592 has defined IP address 192.168.39.54 and MAC address 52:54:00:5c:cb:09 in network mk-ha-929592
	I0914 17:05:12.816646   27433 main.go:141] libmachine: (ha-929592) Calling .GetSSHPort
	I0914 17:05:12.816829   27433 main.go:141] libmachine: (ha-929592) Calling .GetSSHKeyPath
	I0914 17:05:12.816985   27433 main.go:141] libmachine: (ha-929592) Calling .GetSSHKeyPath
	I0914 17:05:12.817152   27433 main.go:141] libmachine: (ha-929592) Calling .GetSSHUsername
	I0914 17:05:12.817395   27433 main.go:141] libmachine: Using SSH client type: native
	I0914 17:05:12.817600   27433 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.54 22 <nil> <nil>}
	I0914 17:05:12.817612   27433 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0914 17:05:12.930731   27433 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0914 17:05:12.930824   27433 main.go:141] libmachine: found compatible host: buildroot
	I0914 17:05:12.930835   27433 main.go:141] libmachine: Provisioning with buildroot...
	I0914 17:05:12.930843   27433 main.go:141] libmachine: (ha-929592) Calling .GetMachineName
	I0914 17:05:12.931142   27433 buildroot.go:166] provisioning hostname "ha-929592"
	I0914 17:05:12.931171   27433 main.go:141] libmachine: (ha-929592) Calling .GetMachineName
	I0914 17:05:12.931415   27433 main.go:141] libmachine: (ha-929592) Calling .GetSSHHostname
	I0914 17:05:12.933748   27433 main.go:141] libmachine: (ha-929592) DBG | domain ha-929592 has defined MAC address 52:54:00:5c:cb:09 in network mk-ha-929592
	I0914 17:05:12.934109   27433 main.go:141] libmachine: (ha-929592) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:cb:09", ip: ""} in network mk-ha-929592: {Iface:virbr1 ExpiryTime:2024-09-14 18:05:06 +0000 UTC Type:0 Mac:52:54:00:5c:cb:09 Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:ha-929592 Clientid:01:52:54:00:5c:cb:09}
	I0914 17:05:12.934135   27433 main.go:141] libmachine: (ha-929592) DBG | domain ha-929592 has defined IP address 192.168.39.54 and MAC address 52:54:00:5c:cb:09 in network mk-ha-929592
	I0914 17:05:12.934298   27433 main.go:141] libmachine: (ha-929592) Calling .GetSSHPort
	I0914 17:05:12.934477   27433 main.go:141] libmachine: (ha-929592) Calling .GetSSHKeyPath
	I0914 17:05:12.934649   27433 main.go:141] libmachine: (ha-929592) Calling .GetSSHKeyPath
	I0914 17:05:12.934767   27433 main.go:141] libmachine: (ha-929592) Calling .GetSSHUsername
	I0914 17:05:12.934902   27433 main.go:141] libmachine: Using SSH client type: native
	I0914 17:05:12.935083   27433 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.54 22 <nil> <nil>}
	I0914 17:05:12.935094   27433 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-929592 && echo "ha-929592" | sudo tee /etc/hostname
	I0914 17:05:13.059342   27433 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-929592
	
	I0914 17:05:13.059386   27433 main.go:141] libmachine: (ha-929592) Calling .GetSSHHostname
	I0914 17:05:13.061780   27433 main.go:141] libmachine: (ha-929592) DBG | domain ha-929592 has defined MAC address 52:54:00:5c:cb:09 in network mk-ha-929592
	I0914 17:05:13.062095   27433 main.go:141] libmachine: (ha-929592) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:cb:09", ip: ""} in network mk-ha-929592: {Iface:virbr1 ExpiryTime:2024-09-14 18:05:06 +0000 UTC Type:0 Mac:52:54:00:5c:cb:09 Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:ha-929592 Clientid:01:52:54:00:5c:cb:09}
	I0914 17:05:13.062117   27433 main.go:141] libmachine: (ha-929592) DBG | domain ha-929592 has defined IP address 192.168.39.54 and MAC address 52:54:00:5c:cb:09 in network mk-ha-929592
	I0914 17:05:13.062309   27433 main.go:141] libmachine: (ha-929592) Calling .GetSSHPort
	I0914 17:05:13.062487   27433 main.go:141] libmachine: (ha-929592) Calling .GetSSHKeyPath
	I0914 17:05:13.062631   27433 main.go:141] libmachine: (ha-929592) Calling .GetSSHKeyPath
	I0914 17:05:13.062767   27433 main.go:141] libmachine: (ha-929592) Calling .GetSSHUsername
	I0914 17:05:13.062932   27433 main.go:141] libmachine: Using SSH client type: native
	I0914 17:05:13.063135   27433 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.54 22 <nil> <nil>}
	I0914 17:05:13.063150   27433 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-929592' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-929592/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-929592' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0914 17:05:13.182217   27433 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0914 17:05:13.182265   27433 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19643-8806/.minikube CaCertPath:/home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19643-8806/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19643-8806/.minikube}
	I0914 17:05:13.182300   27433 buildroot.go:174] setting up certificates
	I0914 17:05:13.182319   27433 provision.go:84] configureAuth start
	I0914 17:05:13.182336   27433 main.go:141] libmachine: (ha-929592) Calling .GetMachineName
	I0914 17:05:13.182615   27433 main.go:141] libmachine: (ha-929592) Calling .GetIP
	I0914 17:05:13.184832   27433 main.go:141] libmachine: (ha-929592) DBG | domain ha-929592 has defined MAC address 52:54:00:5c:cb:09 in network mk-ha-929592
	I0914 17:05:13.185124   27433 main.go:141] libmachine: (ha-929592) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:cb:09", ip: ""} in network mk-ha-929592: {Iface:virbr1 ExpiryTime:2024-09-14 18:05:06 +0000 UTC Type:0 Mac:52:54:00:5c:cb:09 Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:ha-929592 Clientid:01:52:54:00:5c:cb:09}
	I0914 17:05:13.185140   27433 main.go:141] libmachine: (ha-929592) DBG | domain ha-929592 has defined IP address 192.168.39.54 and MAC address 52:54:00:5c:cb:09 in network mk-ha-929592
	I0914 17:05:13.185249   27433 main.go:141] libmachine: (ha-929592) Calling .GetSSHHostname
	I0914 17:05:13.187224   27433 main.go:141] libmachine: (ha-929592) DBG | domain ha-929592 has defined MAC address 52:54:00:5c:cb:09 in network mk-ha-929592
	I0914 17:05:13.187592   27433 main.go:141] libmachine: (ha-929592) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:cb:09", ip: ""} in network mk-ha-929592: {Iface:virbr1 ExpiryTime:2024-09-14 18:05:06 +0000 UTC Type:0 Mac:52:54:00:5c:cb:09 Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:ha-929592 Clientid:01:52:54:00:5c:cb:09}
	I0914 17:05:13.187634   27433 main.go:141] libmachine: (ha-929592) DBG | domain ha-929592 has defined IP address 192.168.39.54 and MAC address 52:54:00:5c:cb:09 in network mk-ha-929592
	I0914 17:05:13.187774   27433 provision.go:143] copyHostCerts
	I0914 17:05:13.187801   27433 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19643-8806/.minikube/ca.pem
	I0914 17:05:13.187836   27433 exec_runner.go:144] found /home/jenkins/minikube-integration/19643-8806/.minikube/ca.pem, removing ...
	I0914 17:05:13.187882   27433 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19643-8806/.minikube/ca.pem
	I0914 17:05:13.187999   27433 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19643-8806/.minikube/ca.pem (1082 bytes)
	I0914 17:05:13.188102   27433 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19643-8806/.minikube/cert.pem
	I0914 17:05:13.188128   27433 exec_runner.go:144] found /home/jenkins/minikube-integration/19643-8806/.minikube/cert.pem, removing ...
	I0914 17:05:13.188137   27433 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19643-8806/.minikube/cert.pem
	I0914 17:05:13.188175   27433 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19643-8806/.minikube/cert.pem (1123 bytes)
	I0914 17:05:13.188246   27433 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19643-8806/.minikube/key.pem
	I0914 17:05:13.188294   27433 exec_runner.go:144] found /home/jenkins/minikube-integration/19643-8806/.minikube/key.pem, removing ...
	I0914 17:05:13.188303   27433 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19643-8806/.minikube/key.pem
	I0914 17:05:13.188351   27433 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19643-8806/.minikube/key.pem (1675 bytes)
	I0914 17:05:13.188419   27433 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19643-8806/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca-key.pem org=jenkins.ha-929592 san=[127.0.0.1 192.168.39.54 ha-929592 localhost minikube]
	I0914 17:05:13.281204   27433 provision.go:177] copyRemoteCerts
	I0914 17:05:13.281259   27433 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0914 17:05:13.281281   27433 main.go:141] libmachine: (ha-929592) Calling .GetSSHHostname
	I0914 17:05:13.283676   27433 main.go:141] libmachine: (ha-929592) DBG | domain ha-929592 has defined MAC address 52:54:00:5c:cb:09 in network mk-ha-929592
	I0914 17:05:13.283872   27433 main.go:141] libmachine: (ha-929592) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:cb:09", ip: ""} in network mk-ha-929592: {Iface:virbr1 ExpiryTime:2024-09-14 18:05:06 +0000 UTC Type:0 Mac:52:54:00:5c:cb:09 Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:ha-929592 Clientid:01:52:54:00:5c:cb:09}
	I0914 17:05:13.283891   27433 main.go:141] libmachine: (ha-929592) DBG | domain ha-929592 has defined IP address 192.168.39.54 and MAC address 52:54:00:5c:cb:09 in network mk-ha-929592
	I0914 17:05:13.284055   27433 main.go:141] libmachine: (ha-929592) Calling .GetSSHPort
	I0914 17:05:13.284221   27433 main.go:141] libmachine: (ha-929592) Calling .GetSSHKeyPath
	I0914 17:05:13.284422   27433 main.go:141] libmachine: (ha-929592) Calling .GetSSHUsername
	I0914 17:05:13.284519   27433 sshutil.go:53] new ssh client: &{IP:192.168.39.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/ha-929592/id_rsa Username:docker}
	I0914 17:05:13.372119   27433 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19643-8806/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0914 17:05:13.372192   27433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0914 17:05:13.395483   27433 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19643-8806/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0914 17:05:13.395565   27433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0914 17:05:13.418066   27433 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0914 17:05:13.418142   27433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0914 17:05:13.440380   27433 provision.go:87] duration metric: took 258.044352ms to configureAuth
	I0914 17:05:13.440405   27433 buildroot.go:189] setting minikube options for container-runtime
	I0914 17:05:13.440613   27433 config.go:182] Loaded profile config "ha-929592": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0914 17:05:13.440692   27433 main.go:141] libmachine: (ha-929592) Calling .GetSSHHostname
	I0914 17:05:13.442993   27433 main.go:141] libmachine: (ha-929592) DBG | domain ha-929592 has defined MAC address 52:54:00:5c:cb:09 in network mk-ha-929592
	I0914 17:05:13.443286   27433 main.go:141] libmachine: (ha-929592) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:cb:09", ip: ""} in network mk-ha-929592: {Iface:virbr1 ExpiryTime:2024-09-14 18:05:06 +0000 UTC Type:0 Mac:52:54:00:5c:cb:09 Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:ha-929592 Clientid:01:52:54:00:5c:cb:09}
	I0914 17:05:13.443318   27433 main.go:141] libmachine: (ha-929592) DBG | domain ha-929592 has defined IP address 192.168.39.54 and MAC address 52:54:00:5c:cb:09 in network mk-ha-929592
	I0914 17:05:13.443526   27433 main.go:141] libmachine: (ha-929592) Calling .GetSSHPort
	I0914 17:05:13.443705   27433 main.go:141] libmachine: (ha-929592) Calling .GetSSHKeyPath
	I0914 17:05:13.443810   27433 main.go:141] libmachine: (ha-929592) Calling .GetSSHKeyPath
	I0914 17:05:13.443949   27433 main.go:141] libmachine: (ha-929592) Calling .GetSSHUsername
	I0914 17:05:13.444095   27433 main.go:141] libmachine: Using SSH client type: native
	I0914 17:05:13.444283   27433 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.54 22 <nil> <nil>}
	I0914 17:05:13.444306   27433 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0914 17:05:13.668767   27433 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0914 17:05:13.668796   27433 main.go:141] libmachine: Checking connection to Docker...
	I0914 17:05:13.668809   27433 main.go:141] libmachine: (ha-929592) Calling .GetURL
	I0914 17:05:13.670071   27433 main.go:141] libmachine: (ha-929592) DBG | Using libvirt version 6000000
	I0914 17:05:13.672133   27433 main.go:141] libmachine: (ha-929592) DBG | domain ha-929592 has defined MAC address 52:54:00:5c:cb:09 in network mk-ha-929592
	I0914 17:05:13.672425   27433 main.go:141] libmachine: (ha-929592) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:cb:09", ip: ""} in network mk-ha-929592: {Iface:virbr1 ExpiryTime:2024-09-14 18:05:06 +0000 UTC Type:0 Mac:52:54:00:5c:cb:09 Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:ha-929592 Clientid:01:52:54:00:5c:cb:09}
	I0914 17:05:13.672453   27433 main.go:141] libmachine: (ha-929592) DBG | domain ha-929592 has defined IP address 192.168.39.54 and MAC address 52:54:00:5c:cb:09 in network mk-ha-929592
	I0914 17:05:13.672635   27433 main.go:141] libmachine: Docker is up and running!
	I0914 17:05:13.672649   27433 main.go:141] libmachine: Reticulating splines...
	I0914 17:05:13.672655   27433 client.go:171] duration metric: took 21.222387818s to LocalClient.Create
	I0914 17:05:13.672674   27433 start.go:167] duration metric: took 21.222472014s to libmachine.API.Create "ha-929592"
	I0914 17:05:13.672682   27433 start.go:293] postStartSetup for "ha-929592" (driver="kvm2")
	I0914 17:05:13.672691   27433 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0914 17:05:13.672705   27433 main.go:141] libmachine: (ha-929592) Calling .DriverName
	I0914 17:05:13.672956   27433 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0914 17:05:13.672979   27433 main.go:141] libmachine: (ha-929592) Calling .GetSSHHostname
	I0914 17:05:13.674989   27433 main.go:141] libmachine: (ha-929592) DBG | domain ha-929592 has defined MAC address 52:54:00:5c:cb:09 in network mk-ha-929592
	I0914 17:05:13.675256   27433 main.go:141] libmachine: (ha-929592) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:cb:09", ip: ""} in network mk-ha-929592: {Iface:virbr1 ExpiryTime:2024-09-14 18:05:06 +0000 UTC Type:0 Mac:52:54:00:5c:cb:09 Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:ha-929592 Clientid:01:52:54:00:5c:cb:09}
	I0914 17:05:13.675278   27433 main.go:141] libmachine: (ha-929592) DBG | domain ha-929592 has defined IP address 192.168.39.54 and MAC address 52:54:00:5c:cb:09 in network mk-ha-929592
	I0914 17:05:13.675426   27433 main.go:141] libmachine: (ha-929592) Calling .GetSSHPort
	I0914 17:05:13.675576   27433 main.go:141] libmachine: (ha-929592) Calling .GetSSHKeyPath
	I0914 17:05:13.675699   27433 main.go:141] libmachine: (ha-929592) Calling .GetSSHUsername
	I0914 17:05:13.675809   27433 sshutil.go:53] new ssh client: &{IP:192.168.39.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/ha-929592/id_rsa Username:docker}
	I0914 17:05:13.760460   27433 ssh_runner.go:195] Run: cat /etc/os-release
	I0914 17:05:13.764480   27433 info.go:137] Remote host: Buildroot 2023.02.9
	I0914 17:05:13.764512   27433 filesync.go:126] Scanning /home/jenkins/minikube-integration/19643-8806/.minikube/addons for local assets ...
	I0914 17:05:13.764574   27433 filesync.go:126] Scanning /home/jenkins/minikube-integration/19643-8806/.minikube/files for local assets ...
	I0914 17:05:13.764675   27433 filesync.go:149] local asset: /home/jenkins/minikube-integration/19643-8806/.minikube/files/etc/ssl/certs/160162.pem -> 160162.pem in /etc/ssl/certs
	I0914 17:05:13.764689   27433 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19643-8806/.minikube/files/etc/ssl/certs/160162.pem -> /etc/ssl/certs/160162.pem
	I0914 17:05:13.764796   27433 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0914 17:05:13.773804   27433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/files/etc/ssl/certs/160162.pem --> /etc/ssl/certs/160162.pem (1708 bytes)
	I0914 17:05:13.802132   27433 start.go:296] duration metric: took 129.43692ms for postStartSetup
	I0914 17:05:13.802201   27433 main.go:141] libmachine: (ha-929592) Calling .GetConfigRaw
	I0914 17:05:13.802929   27433 main.go:141] libmachine: (ha-929592) Calling .GetIP
	I0914 17:05:13.805341   27433 main.go:141] libmachine: (ha-929592) DBG | domain ha-929592 has defined MAC address 52:54:00:5c:cb:09 in network mk-ha-929592
	I0914 17:05:13.805638   27433 main.go:141] libmachine: (ha-929592) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:cb:09", ip: ""} in network mk-ha-929592: {Iface:virbr1 ExpiryTime:2024-09-14 18:05:06 +0000 UTC Type:0 Mac:52:54:00:5c:cb:09 Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:ha-929592 Clientid:01:52:54:00:5c:cb:09}
	I0914 17:05:13.805665   27433 main.go:141] libmachine: (ha-929592) DBG | domain ha-929592 has defined IP address 192.168.39.54 and MAC address 52:54:00:5c:cb:09 in network mk-ha-929592
	I0914 17:05:13.805869   27433 profile.go:143] Saving config to /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/ha-929592/config.json ...
	I0914 17:05:13.806035   27433 start.go:128] duration metric: took 21.373494072s to createHost
	I0914 17:05:13.806054   27433 main.go:141] libmachine: (ha-929592) Calling .GetSSHHostname
	I0914 17:05:13.808526   27433 main.go:141] libmachine: (ha-929592) DBG | domain ha-929592 has defined MAC address 52:54:00:5c:cb:09 in network mk-ha-929592
	I0914 17:05:13.808873   27433 main.go:141] libmachine: (ha-929592) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:cb:09", ip: ""} in network mk-ha-929592: {Iface:virbr1 ExpiryTime:2024-09-14 18:05:06 +0000 UTC Type:0 Mac:52:54:00:5c:cb:09 Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:ha-929592 Clientid:01:52:54:00:5c:cb:09}
	I0914 17:05:13.808900   27433 main.go:141] libmachine: (ha-929592) DBG | domain ha-929592 has defined IP address 192.168.39.54 and MAC address 52:54:00:5c:cb:09 in network mk-ha-929592
	I0914 17:05:13.809020   27433 main.go:141] libmachine: (ha-929592) Calling .GetSSHPort
	I0914 17:05:13.809200   27433 main.go:141] libmachine: (ha-929592) Calling .GetSSHKeyPath
	I0914 17:05:13.809343   27433 main.go:141] libmachine: (ha-929592) Calling .GetSSHKeyPath
	I0914 17:05:13.809458   27433 main.go:141] libmachine: (ha-929592) Calling .GetSSHUsername
	I0914 17:05:13.809615   27433 main.go:141] libmachine: Using SSH client type: native
	I0914 17:05:13.809793   27433 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.54 22 <nil> <nil>}
	I0914 17:05:13.809806   27433 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0914 17:05:13.922612   27433 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726333513.897189242
	
	I0914 17:05:13.922637   27433 fix.go:216] guest clock: 1726333513.897189242
	I0914 17:05:13.922645   27433 fix.go:229] Guest: 2024-09-14 17:05:13.897189242 +0000 UTC Remote: 2024-09-14 17:05:13.806045002 +0000 UTC m=+21.477242677 (delta=91.14424ms)
	I0914 17:05:13.922688   27433 fix.go:200] guest clock delta is within tolerance: 91.14424ms
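For context on the fix.go lines just above: the provisioner runs `date +%s.%N` on the guest, parses the result, and checks that the offset from the host clock stays within a tolerance before continuing. The sketch below reproduces that comparison in plain Go; the parsing helper and the 2-second tolerance are assumptions for illustration, not minikube's exact values.

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// parseGuestClock converts the "seconds.nanoseconds" string printed by
// `date +%s.%N` on the guest into a time.Time value.
func parseGuestClock(out string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	// Sample value taken from the guest clock reading logged above.
	guest, err := parseGuestClock("1726333513.897189242")
	if err != nil {
		panic(err)
	}
	delta := guest.Sub(time.Now())
	if delta < 0 {
		delta = -delta
	}
	// Illustrative tolerance; the real check's threshold may differ.
	const tolerance = 2 * time.Second
	fmt.Printf("guest clock delta: %v (within tolerance: %v)\n", delta, delta <= tolerance)
}
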
	I0914 17:05:13.922696   27433 start.go:83] releasing machines lock for "ha-929592", held for 21.490239455s
	I0914 17:05:13.922722   27433 main.go:141] libmachine: (ha-929592) Calling .DriverName
	I0914 17:05:13.922955   27433 main.go:141] libmachine: (ha-929592) Calling .GetIP
	I0914 17:05:13.925674   27433 main.go:141] libmachine: (ha-929592) DBG | domain ha-929592 has defined MAC address 52:54:00:5c:cb:09 in network mk-ha-929592
	I0914 17:05:13.926017   27433 main.go:141] libmachine: (ha-929592) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:cb:09", ip: ""} in network mk-ha-929592: {Iface:virbr1 ExpiryTime:2024-09-14 18:05:06 +0000 UTC Type:0 Mac:52:54:00:5c:cb:09 Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:ha-929592 Clientid:01:52:54:00:5c:cb:09}
	I0914 17:05:13.926040   27433 main.go:141] libmachine: (ha-929592) DBG | domain ha-929592 has defined IP address 192.168.39.54 and MAC address 52:54:00:5c:cb:09 in network mk-ha-929592
	I0914 17:05:13.926209   27433 main.go:141] libmachine: (ha-929592) Calling .DriverName
	I0914 17:05:13.926806   27433 main.go:141] libmachine: (ha-929592) Calling .DriverName
	I0914 17:05:13.926983   27433 main.go:141] libmachine: (ha-929592) Calling .DriverName
	I0914 17:05:13.927099   27433 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0914 17:05:13.927145   27433 main.go:141] libmachine: (ha-929592) Calling .GetSSHHostname
	I0914 17:05:13.927189   27433 ssh_runner.go:195] Run: cat /version.json
	I0914 17:05:13.927212   27433 main.go:141] libmachine: (ha-929592) Calling .GetSSHHostname
	I0914 17:05:13.929964   27433 main.go:141] libmachine: (ha-929592) DBG | domain ha-929592 has defined MAC address 52:54:00:5c:cb:09 in network mk-ha-929592
	I0914 17:05:13.930096   27433 main.go:141] libmachine: (ha-929592) DBG | domain ha-929592 has defined MAC address 52:54:00:5c:cb:09 in network mk-ha-929592
	I0914 17:05:13.930382   27433 main.go:141] libmachine: (ha-929592) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:cb:09", ip: ""} in network mk-ha-929592: {Iface:virbr1 ExpiryTime:2024-09-14 18:05:06 +0000 UTC Type:0 Mac:52:54:00:5c:cb:09 Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:ha-929592 Clientid:01:52:54:00:5c:cb:09}
	I0914 17:05:13.930410   27433 main.go:141] libmachine: (ha-929592) DBG | domain ha-929592 has defined IP address 192.168.39.54 and MAC address 52:54:00:5c:cb:09 in network mk-ha-929592
	I0914 17:05:13.930523   27433 main.go:141] libmachine: (ha-929592) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:cb:09", ip: ""} in network mk-ha-929592: {Iface:virbr1 ExpiryTime:2024-09-14 18:05:06 +0000 UTC Type:0 Mac:52:54:00:5c:cb:09 Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:ha-929592 Clientid:01:52:54:00:5c:cb:09}
	I0914 17:05:13.930546   27433 main.go:141] libmachine: (ha-929592) DBG | domain ha-929592 has defined IP address 192.168.39.54 and MAC address 52:54:00:5c:cb:09 in network mk-ha-929592
	I0914 17:05:13.930575   27433 main.go:141] libmachine: (ha-929592) Calling .GetSSHPort
	I0914 17:05:13.930693   27433 main.go:141] libmachine: (ha-929592) Calling .GetSSHPort
	I0914 17:05:13.930769   27433 main.go:141] libmachine: (ha-929592) Calling .GetSSHKeyPath
	I0914 17:05:13.930789   27433 main.go:141] libmachine: (ha-929592) Calling .GetSSHKeyPath
	I0914 17:05:13.930927   27433 main.go:141] libmachine: (ha-929592) Calling .GetSSHUsername
	I0914 17:05:13.930932   27433 main.go:141] libmachine: (ha-929592) Calling .GetSSHUsername
	I0914 17:05:13.931033   27433 sshutil.go:53] new ssh client: &{IP:192.168.39.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/ha-929592/id_rsa Username:docker}
	I0914 17:05:13.931078   27433 sshutil.go:53] new ssh client: &{IP:192.168.39.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/ha-929592/id_rsa Username:docker}
	I0914 17:05:14.039985   27433 ssh_runner.go:195] Run: systemctl --version
	I0914 17:05:14.045861   27433 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0914 17:05:14.202332   27433 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0914 17:05:14.208032   27433 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0914 17:05:14.208097   27433 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0914 17:05:14.224174   27433 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0914 17:05:14.224197   27433 start.go:495] detecting cgroup driver to use...
	I0914 17:05:14.224263   27433 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0914 17:05:14.240804   27433 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0914 17:05:14.254062   27433 docker.go:217] disabling cri-docker service (if available) ...
	I0914 17:05:14.254113   27433 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0914 17:05:14.267269   27433 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0914 17:05:14.280412   27433 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0914 17:05:14.389375   27433 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0914 17:05:14.542112   27433 docker.go:233] disabling docker service ...
	I0914 17:05:14.542194   27433 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0914 17:05:14.555724   27433 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0914 17:05:14.567773   27433 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0914 17:05:14.695885   27433 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0914 17:05:14.828486   27433 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0914 17:05:14.841740   27433 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0914 17:05:14.859848   27433 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0914 17:05:14.859924   27433 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 17:05:14.870387   27433 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0914 17:05:14.870468   27433 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 17:05:14.880584   27433 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 17:05:14.890449   27433 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 17:05:14.900203   27433 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0914 17:05:14.910750   27433 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 17:05:14.920469   27433 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 17:05:14.936981   27433 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 17:05:14.947452   27433 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0914 17:05:14.956918   27433 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0914 17:05:14.956978   27433 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0914 17:05:14.968884   27433 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0914 17:05:14.978656   27433 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 17:05:15.098602   27433 ssh_runner.go:195] Run: sudo systemctl restart crio
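The block above configures cri-o entirely through in-place `sed` edits of /etc/crio/crio.conf.d/02-crio.conf (pause image, cgroup manager, default sysctls) and then restarts the service. The same idea expressed directly in Go is sketched below; the helper name and the regexp-based rewrite are assumptions for illustration, not how minikube's crio.go is written.

package main

import (
	"fmt"
	"os"
	"regexp"
)

// rewriteCrioConf mirrors the sed edits logged above: point cri-o at a given
// pause image and cgroup manager by rewriting its drop-in config file.
func rewriteCrioConf(path, pauseImage, cgroupManager string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	out := regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAll(data, []byte(fmt.Sprintf("pause_image = %q", pauseImage)))
	out = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAll(out, []byte(fmt.Sprintf("cgroup_manager = %q", cgroupManager)))
	return os.WriteFile(path, out, 0o644)
}

func main() {
	err := rewriteCrioConf("/etc/crio/crio.conf.d/02-crio.conf",
		"registry.k8s.io/pause:3.10", "cgroupfs")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
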
	I0914 17:05:15.183490   27433 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0914 17:05:15.183560   27433 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0914 17:05:15.187992   27433 start.go:563] Will wait 60s for crictl version
	I0914 17:05:15.188052   27433 ssh_runner.go:195] Run: which crictl
	I0914 17:05:15.191667   27433 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0914 17:05:15.229963   27433 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0914 17:05:15.230059   27433 ssh_runner.go:195] Run: crio --version
	I0914 17:05:15.259743   27433 ssh_runner.go:195] Run: crio --version
	I0914 17:05:15.289467   27433 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0914 17:05:15.291045   27433 main.go:141] libmachine: (ha-929592) Calling .GetIP
	I0914 17:05:15.293584   27433 main.go:141] libmachine: (ha-929592) DBG | domain ha-929592 has defined MAC address 52:54:00:5c:cb:09 in network mk-ha-929592
	I0914 17:05:15.293883   27433 main.go:141] libmachine: (ha-929592) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:cb:09", ip: ""} in network mk-ha-929592: {Iface:virbr1 ExpiryTime:2024-09-14 18:05:06 +0000 UTC Type:0 Mac:52:54:00:5c:cb:09 Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:ha-929592 Clientid:01:52:54:00:5c:cb:09}
	I0914 17:05:15.293901   27433 main.go:141] libmachine: (ha-929592) DBG | domain ha-929592 has defined IP address 192.168.39.54 and MAC address 52:54:00:5c:cb:09 in network mk-ha-929592
	I0914 17:05:15.294141   27433 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0914 17:05:15.298491   27433 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0914 17:05:15.311225   27433 kubeadm.go:883] updating cluster {Name:ha-929592 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19643/minikube-v1.34.0-1726281733-19643-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726281268-19643@sha256:cce8be0c1ac4e3d852132008ef1cc1dcf5b79f708d025db83f146ae65db32e8e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-929592 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.54 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0914 17:05:15.311331   27433 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0914 17:05:15.311373   27433 ssh_runner.go:195] Run: sudo crictl images --output json
	I0914 17:05:15.343052   27433 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0914 17:05:15.343113   27433 ssh_runner.go:195] Run: which lz4
	I0914 17:05:15.346935   27433 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19643-8806/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0914 17:05:15.347018   27433 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0914 17:05:15.351018   27433 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0914 17:05:15.351055   27433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I0914 17:05:16.543497   27433 crio.go:462] duration metric: took 1.196498878s to copy over tarball
	I0914 17:05:16.543571   27433 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0914 17:05:18.520730   27433 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.977128894s)
	I0914 17:05:18.520768   27433 crio.go:469] duration metric: took 1.977245938s to extract the tarball
	I0914 17:05:18.520779   27433 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0914 17:05:18.556314   27433 ssh_runner.go:195] Run: sudo crictl images --output json
	I0914 17:05:18.598630   27433 crio.go:514] all images are preloaded for cri-o runtime.
	I0914 17:05:18.598656   27433 cache_images.go:84] Images are preloaded, skipping loading
	I0914 17:05:18.598666   27433 kubeadm.go:934] updating node { 192.168.39.54 8443 v1.31.1 crio true true} ...
	I0914 17:05:18.598778   27433 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-929592 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.54
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-929592 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0914 17:05:18.598841   27433 ssh_runner.go:195] Run: crio config
	I0914 17:05:18.643561   27433 cni.go:84] Creating CNI manager for ""
	I0914 17:05:18.643580   27433 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0914 17:05:18.643589   27433 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0914 17:05:18.643609   27433 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.54 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-929592 NodeName:ha-929592 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.54"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.54 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0914 17:05:18.643735   27433 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.54
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-929592"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.54
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.54"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
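The kubeadm configuration dump above is the rendered output; minikube assembles it from the option struct shown at kubeadm.go:181. A small, hypothetical Go sketch of rendering such a manifest from a struct with text/template follows; the struct fields and the abbreviated template are illustrative assumptions, not the project's actual bootstrapper templates.

package main

import (
	"os"
	"text/template"
)

// params holds the values substituted into the manifest; the field set is an
// assumption for this sketch, not minikube's real options struct.
type params struct {
	AdvertiseAddress  string
	BindPort          int
	NodeName          string
	PodSubnet         string
	ServiceSubnet     string
	KubernetesVersion string
}

const manifest = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.BindPort}}
nodeRegistration:
  name: "{{.NodeName}}"
  kubeletExtraArgs:
    node-ip: {{.AdvertiseAddress}}
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: {{.KubernetesVersion}}
networking:
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceSubnet}}
`

func main() {
	t := template.Must(template.New("kubeadm").Parse(manifest))
	p := params{
		AdvertiseAddress:  "192.168.39.54",
		BindPort:          8443,
		NodeName:          "ha-929592",
		PodSubnet:         "10.244.0.0/16",
		ServiceSubnet:     "10.96.0.0/12",
		KubernetesVersion: "v1.31.1",
	}
	if err := t.Execute(os.Stdout, &p); err != nil {
		panic(err)
	}
}
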
	
	I0914 17:05:18.643764   27433 kube-vip.go:115] generating kube-vip config ...
	I0914 17:05:18.643803   27433 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0914 17:05:18.659498   27433 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0914 17:05:18.659626   27433 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
	I0914 17:05:18.659687   27433 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0914 17:05:18.669124   27433 binaries.go:44] Found k8s binaries, skipping transfer
	I0914 17:05:18.669186   27433 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0914 17:05:18.678492   27433 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (308 bytes)
	I0914 17:05:18.694270   27433 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0914 17:05:18.709635   27433 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2150 bytes)
	I0914 17:05:18.725145   27433 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I0914 17:05:18.740755   27433 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0914 17:05:18.744332   27433 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0914 17:05:18.755630   27433 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 17:05:18.868873   27433 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0914 17:05:18.885268   27433 certs.go:68] Setting up /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/ha-929592 for IP: 192.168.39.54
	I0914 17:05:18.885293   27433 certs.go:194] generating shared ca certs ...
	I0914 17:05:18.885315   27433 certs.go:226] acquiring lock for ca certs: {Name:mkb663a3180967f5f94f0c355b2cd55067394331 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 17:05:18.885509   27433 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19643-8806/.minikube/ca.key
	I0914 17:05:18.885567   27433 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19643-8806/.minikube/proxy-client-ca.key
	I0914 17:05:18.885580   27433 certs.go:256] generating profile certs ...
	I0914 17:05:18.885640   27433 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/ha-929592/client.key
	I0914 17:05:18.885667   27433 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/ha-929592/client.crt with IP's: []
	I0914 17:05:19.132478   27433 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/ha-929592/client.crt ...
	I0914 17:05:19.132513   27433 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/ha-929592/client.crt: {Name:mk54c9566b78ae48c2ae4c2a1b029e7d573c0c86 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 17:05:19.132674   27433 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/ha-929592/client.key ...
	I0914 17:05:19.132683   27433 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/ha-929592/client.key: {Name:mk4627546c29d8132adefa948bb74cf246c39702 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 17:05:19.132757   27433 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/ha-929592/apiserver.key.90aea383
	I0914 17:05:19.132771   27433 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/ha-929592/apiserver.crt.90aea383 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.54 192.168.39.254]
	I0914 17:05:19.378339   27433 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/ha-929592/apiserver.crt.90aea383 ...
	I0914 17:05:19.378369   27433 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/ha-929592/apiserver.crt.90aea383: {Name:mk917bd493eb4252b59420c304591247a8797944 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 17:05:19.378528   27433 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/ha-929592/apiserver.key.90aea383 ...
	I0914 17:05:19.378542   27433 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/ha-929592/apiserver.key.90aea383: {Name:mk063999a82be1870a27e4e9637b0675bcfe2750 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 17:05:19.378613   27433 certs.go:381] copying /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/ha-929592/apiserver.crt.90aea383 -> /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/ha-929592/apiserver.crt
	I0914 17:05:19.378702   27433 certs.go:385] copying /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/ha-929592/apiserver.key.90aea383 -> /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/ha-929592/apiserver.key
	I0914 17:05:19.378755   27433 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/ha-929592/proxy-client.key
	I0914 17:05:19.378770   27433 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/ha-929592/proxy-client.crt with IP's: []
	I0914 17:05:19.519778   27433 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/ha-929592/proxy-client.crt ...
	I0914 17:05:19.519809   27433 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/ha-929592/proxy-client.crt: {Name:mk26ab7b30268ecdbdb0a5c3970d6da8a5fc24f3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 17:05:19.519957   27433 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/ha-929592/proxy-client.key ...
	I0914 17:05:19.519967   27433 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/ha-929592/proxy-client.key: {Name:mkd9e9e56ad626cbe3ea15682b1f7c52cdbd81c5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
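At this point the profile certificates are complete: a client cert for "minikube-user", an apiserver serving cert signed for the IPs listed above (10.96.0.1, 127.0.0.1, 10.0.0.1, the node IP 192.168.39.54 and the HA virtual IP 192.168.39.254), and an aggregator "proxy-client" cert. A quick way to double-check the SAN list on the generated apiserver cert, using the path from this log (a manual verification sketch, not something the test run executes):

  openssl x509 -noout -text \
    -in /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/ha-929592/apiserver.crt \
    | grep -A1 'Subject Alternative Name'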
	I0914 17:05:19.520072   27433 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19643-8806/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0914 17:05:19.520088   27433 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19643-8806/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0914 17:05:19.520099   27433 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19643-8806/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0914 17:05:19.520113   27433 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19643-8806/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0914 17:05:19.520145   27433 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/ha-929592/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0914 17:05:19.520159   27433 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/ha-929592/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0914 17:05:19.520171   27433 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/ha-929592/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0914 17:05:19.520184   27433 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/ha-929592/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0914 17:05:19.520229   27433 certs.go:484] found cert: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/16016.pem (1338 bytes)
	W0914 17:05:19.520260   27433 certs.go:480] ignoring /home/jenkins/minikube-integration/19643-8806/.minikube/certs/16016_empty.pem, impossibly tiny 0 bytes
	I0914 17:05:19.520269   27433 certs.go:484] found cert: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca-key.pem (1679 bytes)
	I0914 17:05:19.520295   27433 certs.go:484] found cert: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca.pem (1082 bytes)
	I0914 17:05:19.520322   27433 certs.go:484] found cert: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/cert.pem (1123 bytes)
	I0914 17:05:19.520343   27433 certs.go:484] found cert: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/key.pem (1675 bytes)
	I0914 17:05:19.520380   27433 certs.go:484] found cert: /home/jenkins/minikube-integration/19643-8806/.minikube/files/etc/ssl/certs/160162.pem (1708 bytes)
	I0914 17:05:19.520404   27433 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19643-8806/.minikube/files/etc/ssl/certs/160162.pem -> /usr/share/ca-certificates/160162.pem
	I0914 17:05:19.520422   27433 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19643-8806/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0914 17:05:19.520437   27433 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/16016.pem -> /usr/share/ca-certificates/16016.pem
	I0914 17:05:19.520976   27433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0914 17:05:19.545186   27433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0914 17:05:19.568589   27433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0914 17:05:19.593254   27433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0914 17:05:19.616378   27433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/ha-929592/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0914 17:05:19.641070   27433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/ha-929592/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0914 17:05:19.667208   27433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/ha-929592/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0914 17:05:19.700432   27433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/ha-929592/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0914 17:05:19.725080   27433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/files/etc/ssl/certs/160162.pem --> /usr/share/ca-certificates/160162.pem (1708 bytes)
	I0914 17:05:19.747406   27433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0914 17:05:19.770099   27433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/certs/16016.pem --> /usr/share/ca-certificates/16016.pem (1338 bytes)
	I0914 17:05:19.793257   27433 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0914 17:05:19.809344   27433 ssh_runner.go:195] Run: openssl version
	I0914 17:05:19.815041   27433 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16016.pem && ln -fs /usr/share/ca-certificates/16016.pem /etc/ssl/certs/16016.pem"
	I0914 17:05:19.825746   27433 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16016.pem
	I0914 17:05:19.829941   27433 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 14 17:01 /usr/share/ca-certificates/16016.pem
	I0914 17:05:19.829998   27433 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16016.pem
	I0914 17:05:19.835403   27433 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16016.pem /etc/ssl/certs/51391683.0"
	I0914 17:05:19.846249   27433 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/160162.pem && ln -fs /usr/share/ca-certificates/160162.pem /etc/ssl/certs/160162.pem"
	I0914 17:05:19.857197   27433 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/160162.pem
	I0914 17:05:19.861444   27433 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 14 17:01 /usr/share/ca-certificates/160162.pem
	I0914 17:05:19.861493   27433 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/160162.pem
	I0914 17:05:19.866827   27433 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/160162.pem /etc/ssl/certs/3ec20f2e.0"
	I0914 17:05:19.877466   27433 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0914 17:05:19.888231   27433 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0914 17:05:19.892457   27433 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 14 16:44 /usr/share/ca-certificates/minikubeCA.pem
	I0914 17:05:19.892517   27433 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0914 17:05:19.898027   27433 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
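The three blocks above repeat one idiom per CA file: drop the PEM into /usr/share/ca-certificates, link it into /etc/ssl/certs, and create a <subject-hash>.0 symlink so OpenSSL's trust lookup finds it. Reproduced by hand for minikubeCA.pem, with the hash value taken from this run (it differs per certificate) - a sketch only:

  openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941 here
  sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
  sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0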
	I0914 17:05:19.909064   27433 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0914 17:05:19.913022   27433 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0914 17:05:19.913080   27433 kubeadm.go:392] StartCluster: {Name:ha-929592 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19643/minikube-v1.34.0-1726281733-19643-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726281268-19643@sha256:cce8be0c1ac4e3d852132008ef1cc1dcf5b79f708d025db83f146ae65db32e8e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 Clust
erName:ha-929592 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.54 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mount
Type:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0914 17:05:19.913140   27433 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0914 17:05:19.913197   27433 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0914 17:05:19.953075   27433 cri.go:89] found id: ""
	I0914 17:05:19.953159   27433 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0914 17:05:19.962939   27433 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0914 17:05:19.972418   27433 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0914 17:05:19.981720   27433 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0914 17:05:19.981739   27433 kubeadm.go:157] found existing configuration files:
	
	I0914 17:05:19.981779   27433 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0914 17:05:19.990455   27433 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0914 17:05:19.990520   27433 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0914 17:05:19.999755   27433 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0914 17:05:20.008502   27433 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0914 17:05:20.008558   27433 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0914 17:05:20.017608   27433 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0914 17:05:20.026183   27433 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0914 17:05:20.026237   27433 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0914 17:05:20.035009   27433 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0914 17:05:20.043331   27433 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0914 17:05:20.043381   27433 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0914 17:05:20.052637   27433 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0914 17:05:20.151886   27433 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0914 17:05:20.152003   27433 kubeadm.go:310] [preflight] Running pre-flight checks
	I0914 17:05:20.270747   27433 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0914 17:05:20.270932   27433 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0914 17:05:20.271051   27433 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0914 17:05:20.279190   27433 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0914 17:05:20.331860   27433 out.go:235]   - Generating certificates and keys ...
	I0914 17:05:20.331982   27433 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0914 17:05:20.332065   27433 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0914 17:05:20.378810   27433 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0914 17:05:20.487711   27433 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0914 17:05:20.688491   27433 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0914 17:05:20.981539   27433 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0914 17:05:21.067314   27433 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0914 17:05:21.067685   27433 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-929592 localhost] and IPs [192.168.39.54 127.0.0.1 ::1]
	I0914 17:05:21.216228   27433 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0914 17:05:21.216639   27433 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-929592 localhost] and IPs [192.168.39.54 127.0.0.1 ::1]
	I0914 17:05:21.378027   27433 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0914 17:05:21.815304   27433 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0914 17:05:21.898368   27433 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0914 17:05:21.898707   27433 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0914 17:05:22.029236   27433 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0914 17:05:22.119811   27433 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0914 17:05:22.386426   27433 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0914 17:05:22.439748   27433 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0914 17:05:22.702524   27433 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0914 17:05:22.703297   27433 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0914 17:05:22.706959   27433 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0914 17:05:22.708786   27433 out.go:235]   - Booting up control plane ...
	I0914 17:05:22.708887   27433 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0914 17:05:22.710820   27433 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0914 17:05:22.711656   27433 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0914 17:05:22.726607   27433 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0914 17:05:22.732633   27433 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0914 17:05:22.732708   27433 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0914 17:05:22.872776   27433 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0914 17:05:22.872910   27433 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0914 17:05:23.374803   27433 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.315802ms
	I0914 17:05:23.374911   27433 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0914 17:05:29.331478   27433 kubeadm.go:310] [api-check] The API server is healthy after 5.958547603s
	I0914 17:05:29.341859   27433 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0914 17:05:29.355652   27433 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0914 17:05:29.384741   27433 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0914 17:05:29.384956   27433 kubeadm.go:310] [mark-control-plane] Marking the node ha-929592 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0914 17:05:29.403402   27433 kubeadm.go:310] [bootstrap-token] Using token: kz9zjv.9vz6qx71da3375jr
	I0914 17:05:29.404608   27433 out.go:235]   - Configuring RBAC rules ...
	I0914 17:05:29.404755   27433 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0914 17:05:29.412435   27433 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0914 17:05:29.425683   27433 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0914 17:05:29.432156   27433 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0914 17:05:29.435728   27433 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0914 17:05:29.441992   27433 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0914 17:05:29.741459   27433 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0914 17:05:30.169011   27433 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0914 17:05:30.739086   27433 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0914 17:05:30.739909   27433 kubeadm.go:310] 
	I0914 17:05:30.739982   27433 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0914 17:05:30.739991   27433 kubeadm.go:310] 
	I0914 17:05:30.740112   27433 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0914 17:05:30.740140   27433 kubeadm.go:310] 
	I0914 17:05:30.740172   27433 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0914 17:05:30.740248   27433 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0914 17:05:30.740313   27433 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0914 17:05:30.740322   27433 kubeadm.go:310] 
	I0914 17:05:30.740400   27433 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0914 17:05:30.740414   27433 kubeadm.go:310] 
	I0914 17:05:30.740485   27433 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0914 17:05:30.740499   27433 kubeadm.go:310] 
	I0914 17:05:30.740586   27433 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0914 17:05:30.740708   27433 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0914 17:05:30.740812   27433 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0914 17:05:30.740820   27433 kubeadm.go:310] 
	I0914 17:05:30.740920   27433 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0914 17:05:30.741030   27433 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0914 17:05:30.741040   27433 kubeadm.go:310] 
	I0914 17:05:30.741163   27433 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token kz9zjv.9vz6qx71da3375jr \
	I0914 17:05:30.741331   27433 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:30a8b6e98108730ec92a246ad3f4e746211e79f910581e46cd69c4cd78384600 \
	I0914 17:05:30.741381   27433 kubeadm.go:310] 	--control-plane 
	I0914 17:05:30.741391   27433 kubeadm.go:310] 
	I0914 17:05:30.741480   27433 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0914 17:05:30.741490   27433 kubeadm.go:310] 
	I0914 17:05:30.741610   27433 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token kz9zjv.9vz6qx71da3375jr \
	I0914 17:05:30.741767   27433 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:30a8b6e98108730ec92a246ad3f4e746211e79f910581e46cd69c4cd78384600 
	I0914 17:05:30.742153   27433 kubeadm.go:310] W0914 17:05:20.130501     831 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0914 17:05:30.742508   27433 kubeadm.go:310] W0914 17:05:20.131686     831 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0914 17:05:30.742657   27433 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
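kubeadm init itself succeeded; the three lines above are only warnings. The first two flag that the generated /var/tmp/minikube/kubeadm.yaml still uses the deprecated kubeadm.k8s.io/v1beta3 API; applying the fix the message suggests would look roughly like this on the node (a sketch, assuming the kubeadm binary minikube staged for v1.31.1 and a hypothetical output filename):

  sudo /var/lib/minikube/binaries/v1.31.1/kubeadm config migrate \
    --old-config /var/tmp/minikube/kubeadm.yaml \
    --new-config /var/tmp/minikube/kubeadm-migrated.yaml

The third warning ("kubelet service is not enabled") is consistent with the "sudo systemctl start kubelet" call earlier in this log, which starts the unit without enabling it.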
	I0914 17:05:30.742707   27433 cni.go:84] Creating CNI manager for ""
	I0914 17:05:30.742723   27433 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0914 17:05:30.744429   27433 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0914 17:05:30.745679   27433 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0914 17:05:30.751964   27433 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.1/kubectl ...
	I0914 17:05:30.751988   27433 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0914 17:05:30.770060   27433 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
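The CNI step above boils down to two remote commands: confirm the portmap plugin binary is present, then apply the 2601-byte kindnet manifest that was copied to /var/tmp/minikube/cni.yaml using the node's embedded kubeconfig. Run manually it would be roughly (a sketch, paths as in the log):

  stat /opt/cni/bin/portmap
  sudo /var/lib/minikube/binaries/v1.31.1/kubectl apply \
    --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml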
	I0914 17:05:31.164465   27433 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0914 17:05:31.164521   27433 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 17:05:31.164615   27433 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-929592 minikube.k8s.io/updated_at=2024_09_14T17_05_31_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=fbeeb744274463b05401c917e5ab21bbaf5ef95a minikube.k8s.io/name=ha-929592 minikube.k8s.io/primary=true
	I0914 17:05:31.316903   27433 ops.go:34] apiserver oom_adj: -16
	I0914 17:05:31.320198   27433 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 17:05:31.820892   27433 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 17:05:32.321061   27433 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 17:05:32.821075   27433 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 17:05:33.321063   27433 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 17:05:33.821156   27433 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 17:05:34.320520   27433 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 17:05:34.439647   27433 kubeadm.go:1113] duration metric: took 3.27517461s to wait for elevateKubeSystemPrivileges
	I0914 17:05:34.439682   27433 kubeadm.go:394] duration metric: took 14.526605759s to StartCluster
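The 3.3s "elevateKubeSystemPrivileges" wait above is the minikube-rbac ClusterRoleBinding plus the repeated "get sa default" polls until the default ServiceAccount exists. Done by hand it would look roughly like this (a sketch using the log's paths; the polling loop is an assumption about what the repeated calls amount to):

  sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
    create clusterrolebinding minikube-rbac --clusterrole=cluster-admin \
    --serviceaccount=kube-system:default
  until sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
      get sa default >/dev/null 2>&1; do sleep 0.5; done   # appears once kube-controller-manager creates it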
	I0914 17:05:34.439701   27433 settings.go:142] acquiring lock: {Name:mkee31cd57c3a91eff581c48ba7961460fb3e0b1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 17:05:34.439783   27433 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19643-8806/kubeconfig
	I0914 17:05:34.440673   27433 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19643-8806/kubeconfig: {Name:mk850f3e2f3c2ef1e81c795ae72e22e459e27140 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 17:05:34.440870   27433 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.54 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0914 17:05:34.440890   27433 start.go:241] waiting for startup goroutines ...
	I0914 17:05:34.440898   27433 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0914 17:05:34.440903   27433 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0914 17:05:34.440974   27433 addons.go:69] Setting storage-provisioner=true in profile "ha-929592"
	I0914 17:05:34.440989   27433 addons.go:234] Setting addon storage-provisioner=true in "ha-929592"
	I0914 17:05:34.440994   27433 addons.go:69] Setting default-storageclass=true in profile "ha-929592"
	I0914 17:05:34.441011   27433 host.go:66] Checking if "ha-929592" exists ...
	I0914 17:05:34.441013   27433 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-929592"
	I0914 17:05:34.441090   27433 config.go:182] Loaded profile config "ha-929592": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0914 17:05:34.441463   27433 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 17:05:34.441470   27433 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 17:05:34.441510   27433 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 17:05:34.441513   27433 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 17:05:34.457224   27433 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43221
	I0914 17:05:34.457313   27433 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38559
	I0914 17:05:34.457897   27433 main.go:141] libmachine: () Calling .GetVersion
	I0914 17:05:34.457910   27433 main.go:141] libmachine: () Calling .GetVersion
	I0914 17:05:34.458408   27433 main.go:141] libmachine: Using API Version  1
	I0914 17:05:34.458429   27433 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 17:05:34.458552   27433 main.go:141] libmachine: Using API Version  1
	I0914 17:05:34.458575   27433 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 17:05:34.458783   27433 main.go:141] libmachine: () Calling .GetMachineName
	I0914 17:05:34.458907   27433 main.go:141] libmachine: () Calling .GetMachineName
	I0914 17:05:34.459076   27433 main.go:141] libmachine: (ha-929592) Calling .GetState
	I0914 17:05:34.459303   27433 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 17:05:34.459339   27433 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 17:05:34.461237   27433 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19643-8806/kubeconfig
	I0914 17:05:34.461492   27433 kapi.go:59] client config for ha-929592: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19643-8806/.minikube/profiles/ha-929592/client.crt", KeyFile:"/home/jenkins/minikube-integration/19643-8806/.minikube/profiles/ha-929592/client.key", CAFile:"/home/jenkins/minikube-integration/19643-8806/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)},
UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f6fca0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0914 17:05:34.461940   27433 cert_rotation.go:140] Starting client certificate rotation controller
	I0914 17:05:34.462181   27433 addons.go:234] Setting addon default-storageclass=true in "ha-929592"
	I0914 17:05:34.462219   27433 host.go:66] Checking if "ha-929592" exists ...
	I0914 17:05:34.462500   27433 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 17:05:34.462531   27433 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 17:05:34.475058   27433 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35815
	I0914 17:05:34.475673   27433 main.go:141] libmachine: () Calling .GetVersion
	I0914 17:05:34.476165   27433 main.go:141] libmachine: Using API Version  1
	I0914 17:05:34.476191   27433 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 17:05:34.476576   27433 main.go:141] libmachine: () Calling .GetMachineName
	I0914 17:05:34.476759   27433 main.go:141] libmachine: (ha-929592) Calling .GetState
	I0914 17:05:34.477824   27433 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37035
	I0914 17:05:34.478369   27433 main.go:141] libmachine: () Calling .GetVersion
	I0914 17:05:34.478505   27433 main.go:141] libmachine: (ha-929592) Calling .DriverName
	I0914 17:05:34.479023   27433 main.go:141] libmachine: Using API Version  1
	I0914 17:05:34.479047   27433 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 17:05:34.479367   27433 main.go:141] libmachine: () Calling .GetMachineName
	I0914 17:05:34.479964   27433 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 17:05:34.480011   27433 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 17:05:34.480339   27433 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 17:05:34.481456   27433 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0914 17:05:34.481468   27433 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0914 17:05:34.481482   27433 main.go:141] libmachine: (ha-929592) Calling .GetSSHHostname
	I0914 17:05:34.484719   27433 main.go:141] libmachine: (ha-929592) DBG | domain ha-929592 has defined MAC address 52:54:00:5c:cb:09 in network mk-ha-929592
	I0914 17:05:34.485183   27433 main.go:141] libmachine: (ha-929592) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:cb:09", ip: ""} in network mk-ha-929592: {Iface:virbr1 ExpiryTime:2024-09-14 18:05:06 +0000 UTC Type:0 Mac:52:54:00:5c:cb:09 Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:ha-929592 Clientid:01:52:54:00:5c:cb:09}
	I0914 17:05:34.485211   27433 main.go:141] libmachine: (ha-929592) DBG | domain ha-929592 has defined IP address 192.168.39.54 and MAC address 52:54:00:5c:cb:09 in network mk-ha-929592
	I0914 17:05:34.485528   27433 main.go:141] libmachine: (ha-929592) Calling .GetSSHPort
	I0914 17:05:34.485758   27433 main.go:141] libmachine: (ha-929592) Calling .GetSSHKeyPath
	I0914 17:05:34.485917   27433 main.go:141] libmachine: (ha-929592) Calling .GetSSHUsername
	I0914 17:05:34.486074   27433 sshutil.go:53] new ssh client: &{IP:192.168.39.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/ha-929592/id_rsa Username:docker}
	I0914 17:05:34.496123   27433 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34511
	I0914 17:05:34.496519   27433 main.go:141] libmachine: () Calling .GetVersion
	I0914 17:05:34.497055   27433 main.go:141] libmachine: Using API Version  1
	I0914 17:05:34.497093   27433 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 17:05:34.497461   27433 main.go:141] libmachine: () Calling .GetMachineName
	I0914 17:05:34.497647   27433 main.go:141] libmachine: (ha-929592) Calling .GetState
	I0914 17:05:34.499133   27433 main.go:141] libmachine: (ha-929592) Calling .DriverName
	I0914 17:05:34.499313   27433 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0914 17:05:34.499330   27433 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0914 17:05:34.499348   27433 main.go:141] libmachine: (ha-929592) Calling .GetSSHHostname
	I0914 17:05:34.502134   27433 main.go:141] libmachine: (ha-929592) DBG | domain ha-929592 has defined MAC address 52:54:00:5c:cb:09 in network mk-ha-929592
	I0914 17:05:34.502557   27433 main.go:141] libmachine: (ha-929592) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:cb:09", ip: ""} in network mk-ha-929592: {Iface:virbr1 ExpiryTime:2024-09-14 18:05:06 +0000 UTC Type:0 Mac:52:54:00:5c:cb:09 Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:ha-929592 Clientid:01:52:54:00:5c:cb:09}
	I0914 17:05:34.502574   27433 main.go:141] libmachine: (ha-929592) DBG | domain ha-929592 has defined IP address 192.168.39.54 and MAC address 52:54:00:5c:cb:09 in network mk-ha-929592
	I0914 17:05:34.502826   27433 main.go:141] libmachine: (ha-929592) Calling .GetSSHPort
	I0914 17:05:34.502965   27433 main.go:141] libmachine: (ha-929592) Calling .GetSSHKeyPath
	I0914 17:05:34.503093   27433 main.go:141] libmachine: (ha-929592) Calling .GetSSHUsername
	I0914 17:05:34.503200   27433 sshutil.go:53] new ssh client: &{IP:192.168.39.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/ha-929592/id_rsa Username:docker}
	I0914 17:05:34.631749   27433 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0914 17:05:34.652199   27433 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0914 17:05:34.665217   27433 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0914 17:05:35.192931   27433 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
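The long sed pipeline at 17:05:34.631749 is what produced the line above: it pulls the coredns ConfigMap, inserts a hosts stanza mapping host.minikube.internal to 192.168.39.1 (plus a log directive) ahead of the forward plugin, and replaces the ConfigMap. A quick manual check that the record landed (a sketch):

  sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
    -n kube-system get configmap coredns -o yaml | grep -B1 -A2 'host.minikube.internal'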
	I0914 17:05:35.482652   27433 main.go:141] libmachine: Making call to close driver server
	I0914 17:05:35.482678   27433 main.go:141] libmachine: (ha-929592) Calling .Close
	I0914 17:05:35.482753   27433 main.go:141] libmachine: Making call to close driver server
	I0914 17:05:35.482773   27433 main.go:141] libmachine: (ha-929592) Calling .Close
	I0914 17:05:35.482982   27433 main.go:141] libmachine: (ha-929592) DBG | Closing plugin on server side
	I0914 17:05:35.483014   27433 main.go:141] libmachine: (ha-929592) DBG | Closing plugin on server side
	I0914 17:05:35.483021   27433 main.go:141] libmachine: Successfully made call to close driver server
	I0914 17:05:35.483035   27433 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 17:05:35.483040   27433 main.go:141] libmachine: Successfully made call to close driver server
	I0914 17:05:35.483044   27433 main.go:141] libmachine: Making call to close driver server
	I0914 17:05:35.483048   27433 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 17:05:35.483051   27433 main.go:141] libmachine: (ha-929592) Calling .Close
	I0914 17:05:35.483056   27433 main.go:141] libmachine: Making call to close driver server
	I0914 17:05:35.483062   27433 main.go:141] libmachine: (ha-929592) Calling .Close
	I0914 17:05:35.483277   27433 main.go:141] libmachine: Successfully made call to close driver server
	I0914 17:05:35.483283   27433 main.go:141] libmachine: Successfully made call to close driver server
	I0914 17:05:35.483291   27433 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 17:05:35.483296   27433 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 17:05:35.483354   27433 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0914 17:05:35.483369   27433 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0914 17:05:35.483453   27433 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0914 17:05:35.483459   27433 round_trippers.go:469] Request Headers:
	I0914 17:05:35.483469   27433 round_trippers.go:473]     Accept: application/json, */*
	I0914 17:05:35.483475   27433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 17:05:35.500271   27433 round_trippers.go:574] Response Status: 200 OK in 16 milliseconds
	I0914 17:05:35.501063   27433 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0914 17:05:35.501088   27433 round_trippers.go:469] Request Headers:
	I0914 17:05:35.501100   27433 round_trippers.go:473]     Content-Type: application/json
	I0914 17:05:35.501106   27433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 17:05:35.501110   27433 round_trippers.go:473]     Accept: application/json, */*
	I0914 17:05:35.503856   27433 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 17:05:35.504029   27433 main.go:141] libmachine: Making call to close driver server
	I0914 17:05:35.504042   27433 main.go:141] libmachine: (ha-929592) Calling .Close
	I0914 17:05:35.504335   27433 main.go:141] libmachine: Successfully made call to close driver server
	I0914 17:05:35.504354   27433 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 17:05:35.506137   27433 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0914 17:05:35.507243   27433 addons.go:510] duration metric: took 1.066342353s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0914 17:05:35.507275   27433 start.go:246] waiting for cluster config update ...
	I0914 17:05:35.507290   27433 start.go:255] writing updated cluster config ...
	I0914 17:05:35.508881   27433 out.go:201] 
	I0914 17:05:35.510437   27433 config.go:182] Loaded profile config "ha-929592": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0914 17:05:35.510514   27433 profile.go:143] Saving config to /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/ha-929592/config.json ...
	I0914 17:05:35.511986   27433 out.go:177] * Starting "ha-929592-m02" control-plane node in "ha-929592" cluster
	I0914 17:05:35.513065   27433 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0914 17:05:35.513082   27433 cache.go:56] Caching tarball of preloaded images
	I0914 17:05:35.513171   27433 preload.go:172] Found /home/jenkins/minikube-integration/19643-8806/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0914 17:05:35.513187   27433 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0914 17:05:35.513256   27433 profile.go:143] Saving config to /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/ha-929592/config.json ...
	I0914 17:05:35.513422   27433 start.go:360] acquireMachinesLock for ha-929592-m02: {Name:mk26748ac63472c3f8b6f3848c12e76160c2970c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0914 17:05:35.513465   27433 start.go:364] duration metric: took 25.163µs to acquireMachinesLock for "ha-929592-m02"
	I0914 17:05:35.513486   27433 start.go:93] Provisioning new machine with config: &{Name:ha-929592 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19643/minikube-v1.34.0-1726281733-19643-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726281268-19643@sha256:cce8be0c1ac4e3d852132008ef1cc1dcf5b79f708d025db83f146ae65db32e8e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
sVersion:v1.31.1 ClusterName:ha-929592 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.54 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 Cert
Expiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0914 17:05:35.513547   27433 start.go:125] createHost starting for "m02" (driver="kvm2")
	I0914 17:05:35.515605   27433 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0914 17:05:35.515683   27433 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 17:05:35.515725   27433 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 17:05:35.530477   27433 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46259
	I0914 17:05:35.530959   27433 main.go:141] libmachine: () Calling .GetVersion
	I0914 17:05:35.531458   27433 main.go:141] libmachine: Using API Version  1
	I0914 17:05:35.531487   27433 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 17:05:35.531834   27433 main.go:141] libmachine: () Calling .GetMachineName
	I0914 17:05:35.532065   27433 main.go:141] libmachine: (ha-929592-m02) Calling .GetMachineName
	I0914 17:05:35.532193   27433 main.go:141] libmachine: (ha-929592-m02) Calling .DriverName
	I0914 17:05:35.532395   27433 start.go:159] libmachine.API.Create for "ha-929592" (driver="kvm2")
	I0914 17:05:35.532430   27433 client.go:168] LocalClient.Create starting
	I0914 17:05:35.532464   27433 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca.pem
	I0914 17:05:35.532508   27433 main.go:141] libmachine: Decoding PEM data...
	I0914 17:05:35.532527   27433 main.go:141] libmachine: Parsing certificate...
	I0914 17:05:35.532592   27433 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19643-8806/.minikube/certs/cert.pem
	I0914 17:05:35.532623   27433 main.go:141] libmachine: Decoding PEM data...
	I0914 17:05:35.532638   27433 main.go:141] libmachine: Parsing certificate...
	I0914 17:05:35.532664   27433 main.go:141] libmachine: Running pre-create checks...
	I0914 17:05:35.532676   27433 main.go:141] libmachine: (ha-929592-m02) Calling .PreCreateCheck
	I0914 17:05:35.532839   27433 main.go:141] libmachine: (ha-929592-m02) Calling .GetConfigRaw
	I0914 17:05:35.533284   27433 main.go:141] libmachine: Creating machine...
	I0914 17:05:35.533303   27433 main.go:141] libmachine: (ha-929592-m02) Calling .Create
	I0914 17:05:35.533445   27433 main.go:141] libmachine: (ha-929592-m02) Creating KVM machine...
	I0914 17:05:35.534813   27433 main.go:141] libmachine: (ha-929592-m02) DBG | found existing default KVM network
	I0914 17:05:35.534987   27433 main.go:141] libmachine: (ha-929592-m02) DBG | found existing private KVM network mk-ha-929592
	I0914 17:05:35.535101   27433 main.go:141] libmachine: (ha-929592-m02) Setting up store path in /home/jenkins/minikube-integration/19643-8806/.minikube/machines/ha-929592-m02 ...
	I0914 17:05:35.535124   27433 main.go:141] libmachine: (ha-929592-m02) Building disk image from file:///home/jenkins/minikube-integration/19643-8806/.minikube/cache/iso/amd64/minikube-v1.34.0-1726281733-19643-amd64.iso
	I0914 17:05:35.535202   27433 main.go:141] libmachine: (ha-929592-m02) DBG | I0914 17:05:35.535089   27764 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19643-8806/.minikube
	I0914 17:05:35.535308   27433 main.go:141] libmachine: (ha-929592-m02) Downloading /home/jenkins/minikube-integration/19643-8806/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19643-8806/.minikube/cache/iso/amd64/minikube-v1.34.0-1726281733-19643-amd64.iso...
	I0914 17:05:35.773131   27433 main.go:141] libmachine: (ha-929592-m02) DBG | I0914 17:05:35.772998   27764 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19643-8806/.minikube/machines/ha-929592-m02/id_rsa...
	I0914 17:05:35.915180   27433 main.go:141] libmachine: (ha-929592-m02) DBG | I0914 17:05:35.915050   27764 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19643-8806/.minikube/machines/ha-929592-m02/ha-929592-m02.rawdisk...
	I0914 17:05:35.915215   27433 main.go:141] libmachine: (ha-929592-m02) DBG | Writing magic tar header
	I0914 17:05:35.915230   27433 main.go:141] libmachine: (ha-929592-m02) DBG | Writing SSH key tar header
	I0914 17:05:35.915247   27433 main.go:141] libmachine: (ha-929592-m02) DBG | I0914 17:05:35.915202   27764 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19643-8806/.minikube/machines/ha-929592-m02 ...
	I0914 17:05:35.915330   27433 main.go:141] libmachine: (ha-929592-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19643-8806/.minikube/machines/ha-929592-m02
	I0914 17:05:35.915359   27433 main.go:141] libmachine: (ha-929592-m02) Setting executable bit set on /home/jenkins/minikube-integration/19643-8806/.minikube/machines/ha-929592-m02 (perms=drwx------)
	I0914 17:05:35.915376   27433 main.go:141] libmachine: (ha-929592-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19643-8806/.minikube/machines
	I0914 17:05:35.915391   27433 main.go:141] libmachine: (ha-929592-m02) Setting executable bit set on /home/jenkins/minikube-integration/19643-8806/.minikube/machines (perms=drwxr-xr-x)
	I0914 17:05:35.915408   27433 main.go:141] libmachine: (ha-929592-m02) Setting executable bit set on /home/jenkins/minikube-integration/19643-8806/.minikube (perms=drwxr-xr-x)
	I0914 17:05:35.915418   27433 main.go:141] libmachine: (ha-929592-m02) Setting executable bit set on /home/jenkins/minikube-integration/19643-8806 (perms=drwxrwxr-x)
	I0914 17:05:35.915426   27433 main.go:141] libmachine: (ha-929592-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0914 17:05:35.915435   27433 main.go:141] libmachine: (ha-929592-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0914 17:05:35.915451   27433 main.go:141] libmachine: (ha-929592-m02) Creating domain...
	I0914 17:05:35.915462   27433 main.go:141] libmachine: (ha-929592-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19643-8806/.minikube
	I0914 17:05:35.915474   27433 main.go:141] libmachine: (ha-929592-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19643-8806
	I0914 17:05:35.915485   27433 main.go:141] libmachine: (ha-929592-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0914 17:05:35.915494   27433 main.go:141] libmachine: (ha-929592-m02) DBG | Checking permissions on dir: /home/jenkins
	I0914 17:05:35.915502   27433 main.go:141] libmachine: (ha-929592-m02) DBG | Checking permissions on dir: /home
	I0914 17:05:35.915509   27433 main.go:141] libmachine: (ha-929592-m02) DBG | Skipping /home - not owner
	I0914 17:05:35.916419   27433 main.go:141] libmachine: (ha-929592-m02) define libvirt domain using xml: 
	I0914 17:05:35.916437   27433 main.go:141] libmachine: (ha-929592-m02) <domain type='kvm'>
	I0914 17:05:35.916445   27433 main.go:141] libmachine: (ha-929592-m02)   <name>ha-929592-m02</name>
	I0914 17:05:35.916452   27433 main.go:141] libmachine: (ha-929592-m02)   <memory unit='MiB'>2200</memory>
	I0914 17:05:35.916460   27433 main.go:141] libmachine: (ha-929592-m02)   <vcpu>2</vcpu>
	I0914 17:05:35.916473   27433 main.go:141] libmachine: (ha-929592-m02)   <features>
	I0914 17:05:35.916483   27433 main.go:141] libmachine: (ha-929592-m02)     <acpi/>
	I0914 17:05:35.916494   27433 main.go:141] libmachine: (ha-929592-m02)     <apic/>
	I0914 17:05:35.916502   27433 main.go:141] libmachine: (ha-929592-m02)     <pae/>
	I0914 17:05:35.916510   27433 main.go:141] libmachine: (ha-929592-m02)     
	I0914 17:05:35.916518   27433 main.go:141] libmachine: (ha-929592-m02)   </features>
	I0914 17:05:35.916524   27433 main.go:141] libmachine: (ha-929592-m02)   <cpu mode='host-passthrough'>
	I0914 17:05:35.916529   27433 main.go:141] libmachine: (ha-929592-m02)   
	I0914 17:05:35.916536   27433 main.go:141] libmachine: (ha-929592-m02)   </cpu>
	I0914 17:05:35.916543   27433 main.go:141] libmachine: (ha-929592-m02)   <os>
	I0914 17:05:35.916550   27433 main.go:141] libmachine: (ha-929592-m02)     <type>hvm</type>
	I0914 17:05:35.916558   27433 main.go:141] libmachine: (ha-929592-m02)     <boot dev='cdrom'/>
	I0914 17:05:35.916567   27433 main.go:141] libmachine: (ha-929592-m02)     <boot dev='hd'/>
	I0914 17:05:35.916584   27433 main.go:141] libmachine: (ha-929592-m02)     <bootmenu enable='no'/>
	I0914 17:05:35.916596   27433 main.go:141] libmachine: (ha-929592-m02)   </os>
	I0914 17:05:35.916604   27433 main.go:141] libmachine: (ha-929592-m02)   <devices>
	I0914 17:05:35.916609   27433 main.go:141] libmachine: (ha-929592-m02)     <disk type='file' device='cdrom'>
	I0914 17:05:35.916617   27433 main.go:141] libmachine: (ha-929592-m02)       <source file='/home/jenkins/minikube-integration/19643-8806/.minikube/machines/ha-929592-m02/boot2docker.iso'/>
	I0914 17:05:35.916626   27433 main.go:141] libmachine: (ha-929592-m02)       <target dev='hdc' bus='scsi'/>
	I0914 17:05:35.916631   27433 main.go:141] libmachine: (ha-929592-m02)       <readonly/>
	I0914 17:05:35.916635   27433 main.go:141] libmachine: (ha-929592-m02)     </disk>
	I0914 17:05:35.916640   27433 main.go:141] libmachine: (ha-929592-m02)     <disk type='file' device='disk'>
	I0914 17:05:35.916645   27433 main.go:141] libmachine: (ha-929592-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0914 17:05:35.916652   27433 main.go:141] libmachine: (ha-929592-m02)       <source file='/home/jenkins/minikube-integration/19643-8806/.minikube/machines/ha-929592-m02/ha-929592-m02.rawdisk'/>
	I0914 17:05:35.916657   27433 main.go:141] libmachine: (ha-929592-m02)       <target dev='hda' bus='virtio'/>
	I0914 17:05:35.916661   27433 main.go:141] libmachine: (ha-929592-m02)     </disk>
	I0914 17:05:35.916666   27433 main.go:141] libmachine: (ha-929592-m02)     <interface type='network'>
	I0914 17:05:35.916671   27433 main.go:141] libmachine: (ha-929592-m02)       <source network='mk-ha-929592'/>
	I0914 17:05:35.916676   27433 main.go:141] libmachine: (ha-929592-m02)       <model type='virtio'/>
	I0914 17:05:35.916680   27433 main.go:141] libmachine: (ha-929592-m02)     </interface>
	I0914 17:05:35.916688   27433 main.go:141] libmachine: (ha-929592-m02)     <interface type='network'>
	I0914 17:05:35.916728   27433 main.go:141] libmachine: (ha-929592-m02)       <source network='default'/>
	I0914 17:05:35.916754   27433 main.go:141] libmachine: (ha-929592-m02)       <model type='virtio'/>
	I0914 17:05:35.916768   27433 main.go:141] libmachine: (ha-929592-m02)     </interface>
	I0914 17:05:35.916779   27433 main.go:141] libmachine: (ha-929592-m02)     <serial type='pty'>
	I0914 17:05:35.916793   27433 main.go:141] libmachine: (ha-929592-m02)       <target port='0'/>
	I0914 17:05:35.916803   27433 main.go:141] libmachine: (ha-929592-m02)     </serial>
	I0914 17:05:35.916814   27433 main.go:141] libmachine: (ha-929592-m02)     <console type='pty'>
	I0914 17:05:35.916825   27433 main.go:141] libmachine: (ha-929592-m02)       <target type='serial' port='0'/>
	I0914 17:05:35.916833   27433 main.go:141] libmachine: (ha-929592-m02)     </console>
	I0914 17:05:35.916846   27433 main.go:141] libmachine: (ha-929592-m02)     <rng model='virtio'>
	I0914 17:05:35.916858   27433 main.go:141] libmachine: (ha-929592-m02)       <backend model='random'>/dev/random</backend>
	I0914 17:05:35.916869   27433 main.go:141] libmachine: (ha-929592-m02)     </rng>
	I0914 17:05:35.916878   27433 main.go:141] libmachine: (ha-929592-m02)     
	I0914 17:05:35.916887   27433 main.go:141] libmachine: (ha-929592-m02)     
	I0914 17:05:35.916897   27433 main.go:141] libmachine: (ha-929592-m02)   </devices>
	I0914 17:05:35.916908   27433 main.go:141] libmachine: (ha-929592-m02) </domain>
	I0914 17:05:35.916921   27433 main.go:141] libmachine: (ha-929592-m02) 
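	The XML block above is the libvirt domain definition the kvm2 driver generates for the new machine. A minimal sketch of the same define-and-boot step is below, assuming virsh is on PATH and shelling out to it instead of using the libvirt API the driver actually calls; the XML path and domain name are placeholders, not values taken from this run.

	package main

	import (
		"fmt"
		"os/exec"
	)

	// defineAndStart registers a domain from its XML description and boots it.
	// Illustrative sketch only; minikube's kvm2 driver talks to libvirt directly.
	func defineAndStart(xmlPath, name string) error {
		if out, err := exec.Command("virsh", "define", xmlPath).CombinedOutput(); err != nil {
			return fmt.Errorf("virsh define: %v: %s", err, out)
		}
		if out, err := exec.Command("virsh", "start", name).CombinedOutput(); err != nil {
			return fmt.Errorf("virsh start: %v: %s", err, out)
		}
		return nil
	}

	func main() {
		if err := defineAndStart("/tmp/ha-929592-m02.xml", "ha-929592-m02"); err != nil {
			fmt.Println(err)
		}
	}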
	I0914 17:05:35.923775   27433 main.go:141] libmachine: (ha-929592-m02) DBG | domain ha-929592-m02 has defined MAC address 52:54:00:f0:50:13 in network default
	I0914 17:05:35.924413   27433 main.go:141] libmachine: (ha-929592-m02) DBG | domain ha-929592-m02 has defined MAC address 52:54:00:23:9e:43 in network mk-ha-929592
	I0914 17:05:35.924431   27433 main.go:141] libmachine: (ha-929592-m02) Ensuring networks are active...
	I0914 17:05:35.925240   27433 main.go:141] libmachine: (ha-929592-m02) Ensuring network default is active
	I0914 17:05:35.925508   27433 main.go:141] libmachine: (ha-929592-m02) Ensuring network mk-ha-929592 is active
	I0914 17:05:35.925994   27433 main.go:141] libmachine: (ha-929592-m02) Getting domain xml...
	I0914 17:05:35.926731   27433 main.go:141] libmachine: (ha-929592-m02) Creating domain...
	I0914 17:05:37.161131   27433 main.go:141] libmachine: (ha-929592-m02) Waiting to get IP...
	I0914 17:05:37.161868   27433 main.go:141] libmachine: (ha-929592-m02) DBG | domain ha-929592-m02 has defined MAC address 52:54:00:23:9e:43 in network mk-ha-929592
	I0914 17:05:37.162235   27433 main.go:141] libmachine: (ha-929592-m02) DBG | unable to find current IP address of domain ha-929592-m02 in network mk-ha-929592
	I0914 17:05:37.162266   27433 main.go:141] libmachine: (ha-929592-m02) DBG | I0914 17:05:37.162221   27764 retry.go:31] will retry after 210.008934ms: waiting for machine to come up
	I0914 17:05:37.373575   27433 main.go:141] libmachine: (ha-929592-m02) DBG | domain ha-929592-m02 has defined MAC address 52:54:00:23:9e:43 in network mk-ha-929592
	I0914 17:05:37.374028   27433 main.go:141] libmachine: (ha-929592-m02) DBG | unable to find current IP address of domain ha-929592-m02 in network mk-ha-929592
	I0914 17:05:37.374056   27433 main.go:141] libmachine: (ha-929592-m02) DBG | I0914 17:05:37.373981   27764 retry.go:31] will retry after 387.717032ms: waiting for machine to come up
	I0914 17:05:37.763659   27433 main.go:141] libmachine: (ha-929592-m02) DBG | domain ha-929592-m02 has defined MAC address 52:54:00:23:9e:43 in network mk-ha-929592
	I0914 17:05:37.764117   27433 main.go:141] libmachine: (ha-929592-m02) DBG | unable to find current IP address of domain ha-929592-m02 in network mk-ha-929592
	I0914 17:05:37.764155   27433 main.go:141] libmachine: (ha-929592-m02) DBG | I0914 17:05:37.764041   27764 retry.go:31] will retry after 296.557307ms: waiting for machine to come up
	I0914 17:05:38.063231   27433 main.go:141] libmachine: (ha-929592-m02) DBG | domain ha-929592-m02 has defined MAC address 52:54:00:23:9e:43 in network mk-ha-929592
	I0914 17:05:38.063653   27433 main.go:141] libmachine: (ha-929592-m02) DBG | unable to find current IP address of domain ha-929592-m02 in network mk-ha-929592
	I0914 17:05:38.063682   27433 main.go:141] libmachine: (ha-929592-m02) DBG | I0914 17:05:38.063596   27764 retry.go:31] will retry after 575.323007ms: waiting for machine to come up
	I0914 17:05:38.640355   27433 main.go:141] libmachine: (ha-929592-m02) DBG | domain ha-929592-m02 has defined MAC address 52:54:00:23:9e:43 in network mk-ha-929592
	I0914 17:05:38.640798   27433 main.go:141] libmachine: (ha-929592-m02) DBG | unable to find current IP address of domain ha-929592-m02 in network mk-ha-929592
	I0914 17:05:38.640836   27433 main.go:141] libmachine: (ha-929592-m02) DBG | I0914 17:05:38.640752   27764 retry.go:31] will retry after 534.390905ms: waiting for machine to come up
	I0914 17:05:39.176461   27433 main.go:141] libmachine: (ha-929592-m02) DBG | domain ha-929592-m02 has defined MAC address 52:54:00:23:9e:43 in network mk-ha-929592
	I0914 17:05:39.176910   27433 main.go:141] libmachine: (ha-929592-m02) DBG | unable to find current IP address of domain ha-929592-m02 in network mk-ha-929592
	I0914 17:05:39.176993   27433 main.go:141] libmachine: (ha-929592-m02) DBG | I0914 17:05:39.176864   27764 retry.go:31] will retry after 701.303758ms: waiting for machine to come up
	I0914 17:05:39.879456   27433 main.go:141] libmachine: (ha-929592-m02) DBG | domain ha-929592-m02 has defined MAC address 52:54:00:23:9e:43 in network mk-ha-929592
	I0914 17:05:39.879939   27433 main.go:141] libmachine: (ha-929592-m02) DBG | unable to find current IP address of domain ha-929592-m02 in network mk-ha-929592
	I0914 17:05:39.879964   27433 main.go:141] libmachine: (ha-929592-m02) DBG | I0914 17:05:39.879880   27764 retry.go:31] will retry after 1.123994818s: waiting for machine to come up
	I0914 17:05:41.005662   27433 main.go:141] libmachine: (ha-929592-m02) DBG | domain ha-929592-m02 has defined MAC address 52:54:00:23:9e:43 in network mk-ha-929592
	I0914 17:05:41.005979   27433 main.go:141] libmachine: (ha-929592-m02) DBG | unable to find current IP address of domain ha-929592-m02 in network mk-ha-929592
	I0914 17:05:41.006009   27433 main.go:141] libmachine: (ha-929592-m02) DBG | I0914 17:05:41.005931   27764 retry.go:31] will retry after 1.069436048s: waiting for machine to come up
	I0914 17:05:42.077062   27433 main.go:141] libmachine: (ha-929592-m02) DBG | domain ha-929592-m02 has defined MAC address 52:54:00:23:9e:43 in network mk-ha-929592
	I0914 17:05:42.077364   27433 main.go:141] libmachine: (ha-929592-m02) DBG | unable to find current IP address of domain ha-929592-m02 in network mk-ha-929592
	I0914 17:05:42.077410   27433 main.go:141] libmachine: (ha-929592-m02) DBG | I0914 17:05:42.077345   27764 retry.go:31] will retry after 1.46285432s: waiting for machine to come up
	I0914 17:05:43.541612   27433 main.go:141] libmachine: (ha-929592-m02) DBG | domain ha-929592-m02 has defined MAC address 52:54:00:23:9e:43 in network mk-ha-929592
	I0914 17:05:43.542119   27433 main.go:141] libmachine: (ha-929592-m02) DBG | unable to find current IP address of domain ha-929592-m02 in network mk-ha-929592
	I0914 17:05:43.542142   27433 main.go:141] libmachine: (ha-929592-m02) DBG | I0914 17:05:43.542096   27764 retry.go:31] will retry after 2.129066139s: waiting for machine to come up
	I0914 17:05:45.672329   27433 main.go:141] libmachine: (ha-929592-m02) DBG | domain ha-929592-m02 has defined MAC address 52:54:00:23:9e:43 in network mk-ha-929592
	I0914 17:05:45.672756   27433 main.go:141] libmachine: (ha-929592-m02) DBG | unable to find current IP address of domain ha-929592-m02 in network mk-ha-929592
	I0914 17:05:45.672787   27433 main.go:141] libmachine: (ha-929592-m02) DBG | I0914 17:05:45.672709   27764 retry.go:31] will retry after 2.11667218s: waiting for machine to come up
	I0914 17:05:47.791959   27433 main.go:141] libmachine: (ha-929592-m02) DBG | domain ha-929592-m02 has defined MAC address 52:54:00:23:9e:43 in network mk-ha-929592
	I0914 17:05:47.792398   27433 main.go:141] libmachine: (ha-929592-m02) DBG | unable to find current IP address of domain ha-929592-m02 in network mk-ha-929592
	I0914 17:05:47.792421   27433 main.go:141] libmachine: (ha-929592-m02) DBG | I0914 17:05:47.792360   27764 retry.go:31] will retry after 3.267136095s: waiting for machine to come up
	I0914 17:05:51.061117   27433 main.go:141] libmachine: (ha-929592-m02) DBG | domain ha-929592-m02 has defined MAC address 52:54:00:23:9e:43 in network mk-ha-929592
	I0914 17:05:51.061619   27433 main.go:141] libmachine: (ha-929592-m02) DBG | unable to find current IP address of domain ha-929592-m02 in network mk-ha-929592
	I0914 17:05:51.061653   27433 main.go:141] libmachine: (ha-929592-m02) DBG | I0914 17:05:51.061567   27764 retry.go:31] will retry after 3.623977804s: waiting for machine to come up
	I0914 17:05:54.688326   27433 main.go:141] libmachine: (ha-929592-m02) DBG | domain ha-929592-m02 has defined MAC address 52:54:00:23:9e:43 in network mk-ha-929592
	I0914 17:05:54.688750   27433 main.go:141] libmachine: (ha-929592-m02) DBG | unable to find current IP address of domain ha-929592-m02 in network mk-ha-929592
	I0914 17:05:54.688779   27433 main.go:141] libmachine: (ha-929592-m02) DBG | I0914 17:05:54.688708   27764 retry.go:31] will retry after 4.926570221s: waiting for machine to come up
	I0914 17:05:59.619920   27433 main.go:141] libmachine: (ha-929592-m02) DBG | domain ha-929592-m02 has defined MAC address 52:54:00:23:9e:43 in network mk-ha-929592
	I0914 17:05:59.620387   27433 main.go:141] libmachine: (ha-929592-m02) Found IP for machine: 192.168.39.148
	I0914 17:05:59.620415   27433 main.go:141] libmachine: (ha-929592-m02) DBG | domain ha-929592-m02 has current primary IP address 192.168.39.148 and MAC address 52:54:00:23:9e:43 in network mk-ha-929592
	I0914 17:05:59.620428   27433 main.go:141] libmachine: (ha-929592-m02) Reserving static IP address...
	I0914 17:05:59.620759   27433 main.go:141] libmachine: (ha-929592-m02) DBG | unable to find host DHCP lease matching {name: "ha-929592-m02", mac: "52:54:00:23:9e:43", ip: "192.168.39.148"} in network mk-ha-929592
	I0914 17:05:59.692746   27433 main.go:141] libmachine: (ha-929592-m02) Reserved static IP address: 192.168.39.148
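	The repeated "will retry after ...: waiting for machine to come up" lines above are the driver polling for the machine's DHCP lease until an IP appears, with a growing (and in the real code randomized) delay between attempts. A rough sketch of that wait loop, with lookupIP as a hypothetical stand-in for the lease query:

	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// waitForIP polls lookupIP until it returns an address, sleeping a little
	// longer after each failed attempt, roughly like the retry lines in the log.
	func waitForIP(lookupIP func() (string, error), attempts int) (string, error) {
		delay := 200 * time.Millisecond
		for i := 0; i < attempts; i++ {
			if ip, err := lookupIP(); err == nil {
				return ip, nil
			}
			fmt.Printf("will retry after %v: waiting for machine to come up\n", delay)
			time.Sleep(delay)
			delay += delay / 2 // grow the delay between attempts
		}
		return "", errors.New("machine never reported an IP address")
	}

	func main() {
		calls := 0
		ip, err := waitForIP(func() (string, error) {
			calls++
			if calls < 3 {
				return "", errors.New("no lease yet")
			}
			return "192.168.39.148", nil
		}, 10)
		fmt.Println(ip, err)
	}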
	I0914 17:05:59.692768   27433 main.go:141] libmachine: (ha-929592-m02) Waiting for SSH to be available...
	I0914 17:05:59.692778   27433 main.go:141] libmachine: (ha-929592-m02) DBG | Getting to WaitForSSH function...
	I0914 17:05:59.695628   27433 main.go:141] libmachine: (ha-929592-m02) DBG | domain ha-929592-m02 has defined MAC address 52:54:00:23:9e:43 in network mk-ha-929592
	I0914 17:05:59.696183   27433 main.go:141] libmachine: (ha-929592-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:9e:43", ip: ""} in network mk-ha-929592: {Iface:virbr1 ExpiryTime:2024-09-14 18:05:49 +0000 UTC Type:0 Mac:52:54:00:23:9e:43 Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:minikube Clientid:01:52:54:00:23:9e:43}
	I0914 17:05:59.696213   27433 main.go:141] libmachine: (ha-929592-m02) DBG | domain ha-929592-m02 has defined IP address 192.168.39.148 and MAC address 52:54:00:23:9e:43 in network mk-ha-929592
	I0914 17:05:59.696414   27433 main.go:141] libmachine: (ha-929592-m02) DBG | Using SSH client type: external
	I0914 17:05:59.696512   27433 main.go:141] libmachine: (ha-929592-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19643-8806/.minikube/machines/ha-929592-m02/id_rsa (-rw-------)
	I0914 17:05:59.696582   27433 main.go:141] libmachine: (ha-929592-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.148 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19643-8806/.minikube/machines/ha-929592-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0914 17:05:59.696602   27433 main.go:141] libmachine: (ha-929592-m02) DBG | About to run SSH command:
	I0914 17:05:59.696614   27433 main.go:141] libmachine: (ha-929592-m02) DBG | exit 0
	I0914 17:05:59.822260   27433 main.go:141] libmachine: (ha-929592-m02) DBG | SSH cmd err, output: <nil>: 
	I0914 17:05:59.822527   27433 main.go:141] libmachine: (ha-929592-m02) KVM machine creation complete!
	I0914 17:05:59.822904   27433 main.go:141] libmachine: (ha-929592-m02) Calling .GetConfigRaw
	I0914 17:05:59.823568   27433 main.go:141] libmachine: (ha-929592-m02) Calling .DriverName
	I0914 17:05:59.823762   27433 main.go:141] libmachine: (ha-929592-m02) Calling .DriverName
	I0914 17:05:59.823958   27433 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0914 17:05:59.823973   27433 main.go:141] libmachine: (ha-929592-m02) Calling .GetState
	I0914 17:05:59.825060   27433 main.go:141] libmachine: Detecting operating system of created instance...
	I0914 17:05:59.825083   27433 main.go:141] libmachine: Waiting for SSH to be available...
	I0914 17:05:59.825094   27433 main.go:141] libmachine: Getting to WaitForSSH function...
	I0914 17:05:59.825104   27433 main.go:141] libmachine: (ha-929592-m02) Calling .GetSSHHostname
	I0914 17:05:59.827539   27433 main.go:141] libmachine: (ha-929592-m02) DBG | domain ha-929592-m02 has defined MAC address 52:54:00:23:9e:43 in network mk-ha-929592
	I0914 17:05:59.827896   27433 main.go:141] libmachine: (ha-929592-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:9e:43", ip: ""} in network mk-ha-929592: {Iface:virbr1 ExpiryTime:2024-09-14 18:05:49 +0000 UTC Type:0 Mac:52:54:00:23:9e:43 Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:ha-929592-m02 Clientid:01:52:54:00:23:9e:43}
	I0914 17:05:59.827924   27433 main.go:141] libmachine: (ha-929592-m02) DBG | domain ha-929592-m02 has defined IP address 192.168.39.148 and MAC address 52:54:00:23:9e:43 in network mk-ha-929592
	I0914 17:05:59.828060   27433 main.go:141] libmachine: (ha-929592-m02) Calling .GetSSHPort
	I0914 17:05:59.828191   27433 main.go:141] libmachine: (ha-929592-m02) Calling .GetSSHKeyPath
	I0914 17:05:59.828313   27433 main.go:141] libmachine: (ha-929592-m02) Calling .GetSSHKeyPath
	I0914 17:05:59.828438   27433 main.go:141] libmachine: (ha-929592-m02) Calling .GetSSHUsername
	I0914 17:05:59.828607   27433 main.go:141] libmachine: Using SSH client type: native
	I0914 17:05:59.828944   27433 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.148 22 <nil> <nil>}
	I0914 17:05:59.828962   27433 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0914 17:05:59.937315   27433 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0914 17:05:59.937336   27433 main.go:141] libmachine: Detecting the provisioner...
	I0914 17:05:59.937345   27433 main.go:141] libmachine: (ha-929592-m02) Calling .GetSSHHostname
	I0914 17:05:59.940018   27433 main.go:141] libmachine: (ha-929592-m02) DBG | domain ha-929592-m02 has defined MAC address 52:54:00:23:9e:43 in network mk-ha-929592
	I0914 17:05:59.940354   27433 main.go:141] libmachine: (ha-929592-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:9e:43", ip: ""} in network mk-ha-929592: {Iface:virbr1 ExpiryTime:2024-09-14 18:05:49 +0000 UTC Type:0 Mac:52:54:00:23:9e:43 Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:ha-929592-m02 Clientid:01:52:54:00:23:9e:43}
	I0914 17:05:59.940376   27433 main.go:141] libmachine: (ha-929592-m02) DBG | domain ha-929592-m02 has defined IP address 192.168.39.148 and MAC address 52:54:00:23:9e:43 in network mk-ha-929592
	I0914 17:05:59.940584   27433 main.go:141] libmachine: (ha-929592-m02) Calling .GetSSHPort
	I0914 17:05:59.940793   27433 main.go:141] libmachine: (ha-929592-m02) Calling .GetSSHKeyPath
	I0914 17:05:59.940946   27433 main.go:141] libmachine: (ha-929592-m02) Calling .GetSSHKeyPath
	I0914 17:05:59.941095   27433 main.go:141] libmachine: (ha-929592-m02) Calling .GetSSHUsername
	I0914 17:05:59.941291   27433 main.go:141] libmachine: Using SSH client type: native
	I0914 17:05:59.941455   27433 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.148 22 <nil> <nil>}
	I0914 17:05:59.941466   27433 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0914 17:06:00.051065   27433 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0914 17:06:00.051190   27433 main.go:141] libmachine: found compatible host: buildroot
	I0914 17:06:00.051205   27433 main.go:141] libmachine: Provisioning with buildroot...
	I0914 17:06:00.051218   27433 main.go:141] libmachine: (ha-929592-m02) Calling .GetMachineName
	I0914 17:06:00.051471   27433 buildroot.go:166] provisioning hostname "ha-929592-m02"
	I0914 17:06:00.051503   27433 main.go:141] libmachine: (ha-929592-m02) Calling .GetMachineName
	I0914 17:06:00.051704   27433 main.go:141] libmachine: (ha-929592-m02) Calling .GetSSHHostname
	I0914 17:06:00.054191   27433 main.go:141] libmachine: (ha-929592-m02) DBG | domain ha-929592-m02 has defined MAC address 52:54:00:23:9e:43 in network mk-ha-929592
	I0914 17:06:00.054504   27433 main.go:141] libmachine: (ha-929592-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:9e:43", ip: ""} in network mk-ha-929592: {Iface:virbr1 ExpiryTime:2024-09-14 18:05:49 +0000 UTC Type:0 Mac:52:54:00:23:9e:43 Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:ha-929592-m02 Clientid:01:52:54:00:23:9e:43}
	I0914 17:06:00.054531   27433 main.go:141] libmachine: (ha-929592-m02) DBG | domain ha-929592-m02 has defined IP address 192.168.39.148 and MAC address 52:54:00:23:9e:43 in network mk-ha-929592
	I0914 17:06:00.054677   27433 main.go:141] libmachine: (ha-929592-m02) Calling .GetSSHPort
	I0914 17:06:00.054869   27433 main.go:141] libmachine: (ha-929592-m02) Calling .GetSSHKeyPath
	I0914 17:06:00.055049   27433 main.go:141] libmachine: (ha-929592-m02) Calling .GetSSHKeyPath
	I0914 17:06:00.055206   27433 main.go:141] libmachine: (ha-929592-m02) Calling .GetSSHUsername
	I0914 17:06:00.055386   27433 main.go:141] libmachine: Using SSH client type: native
	I0914 17:06:00.055566   27433 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.148 22 <nil> <nil>}
	I0914 17:06:00.055579   27433 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-929592-m02 && echo "ha-929592-m02" | sudo tee /etc/hostname
	I0914 17:06:00.175884   27433 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-929592-m02
	
	I0914 17:06:00.175913   27433 main.go:141] libmachine: (ha-929592-m02) Calling .GetSSHHostname
	I0914 17:06:00.178888   27433 main.go:141] libmachine: (ha-929592-m02) DBG | domain ha-929592-m02 has defined MAC address 52:54:00:23:9e:43 in network mk-ha-929592
	I0914 17:06:00.179268   27433 main.go:141] libmachine: (ha-929592-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:9e:43", ip: ""} in network mk-ha-929592: {Iface:virbr1 ExpiryTime:2024-09-14 18:05:49 +0000 UTC Type:0 Mac:52:54:00:23:9e:43 Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:ha-929592-m02 Clientid:01:52:54:00:23:9e:43}
	I0914 17:06:00.179305   27433 main.go:141] libmachine: (ha-929592-m02) DBG | domain ha-929592-m02 has defined IP address 192.168.39.148 and MAC address 52:54:00:23:9e:43 in network mk-ha-929592
	I0914 17:06:00.179468   27433 main.go:141] libmachine: (ha-929592-m02) Calling .GetSSHPort
	I0914 17:06:00.179633   27433 main.go:141] libmachine: (ha-929592-m02) Calling .GetSSHKeyPath
	I0914 17:06:00.179780   27433 main.go:141] libmachine: (ha-929592-m02) Calling .GetSSHKeyPath
	I0914 17:06:00.179900   27433 main.go:141] libmachine: (ha-929592-m02) Calling .GetSSHUsername
	I0914 17:06:00.180070   27433 main.go:141] libmachine: Using SSH client type: native
	I0914 17:06:00.180271   27433 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.148 22 <nil> <nil>}
	I0914 17:06:00.180288   27433 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-929592-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-929592-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-929592-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0914 17:06:00.295528   27433 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0914 17:06:00.295570   27433 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19643-8806/.minikube CaCertPath:/home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19643-8806/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19643-8806/.minikube}
	I0914 17:06:00.295592   27433 buildroot.go:174] setting up certificates
	I0914 17:06:00.295605   27433 provision.go:84] configureAuth start
	I0914 17:06:00.295614   27433 main.go:141] libmachine: (ha-929592-m02) Calling .GetMachineName
	I0914 17:06:00.295987   27433 main.go:141] libmachine: (ha-929592-m02) Calling .GetIP
	I0914 17:06:00.299234   27433 main.go:141] libmachine: (ha-929592-m02) DBG | domain ha-929592-m02 has defined MAC address 52:54:00:23:9e:43 in network mk-ha-929592
	I0914 17:06:00.299663   27433 main.go:141] libmachine: (ha-929592-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:9e:43", ip: ""} in network mk-ha-929592: {Iface:virbr1 ExpiryTime:2024-09-14 18:05:49 +0000 UTC Type:0 Mac:52:54:00:23:9e:43 Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:ha-929592-m02 Clientid:01:52:54:00:23:9e:43}
	I0914 17:06:00.299696   27433 main.go:141] libmachine: (ha-929592-m02) DBG | domain ha-929592-m02 has defined IP address 192.168.39.148 and MAC address 52:54:00:23:9e:43 in network mk-ha-929592
	I0914 17:06:00.299841   27433 main.go:141] libmachine: (ha-929592-m02) Calling .GetSSHHostname
	I0914 17:06:00.302288   27433 main.go:141] libmachine: (ha-929592-m02) DBG | domain ha-929592-m02 has defined MAC address 52:54:00:23:9e:43 in network mk-ha-929592
	I0914 17:06:00.302662   27433 main.go:141] libmachine: (ha-929592-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:9e:43", ip: ""} in network mk-ha-929592: {Iface:virbr1 ExpiryTime:2024-09-14 18:05:49 +0000 UTC Type:0 Mac:52:54:00:23:9e:43 Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:ha-929592-m02 Clientid:01:52:54:00:23:9e:43}
	I0914 17:06:00.302693   27433 main.go:141] libmachine: (ha-929592-m02) DBG | domain ha-929592-m02 has defined IP address 192.168.39.148 and MAC address 52:54:00:23:9e:43 in network mk-ha-929592
	I0914 17:06:00.302864   27433 provision.go:143] copyHostCerts
	I0914 17:06:00.302911   27433 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19643-8806/.minikube/cert.pem
	I0914 17:06:00.302950   27433 exec_runner.go:144] found /home/jenkins/minikube-integration/19643-8806/.minikube/cert.pem, removing ...
	I0914 17:06:00.302961   27433 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19643-8806/.minikube/cert.pem
	I0914 17:06:00.303093   27433 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19643-8806/.minikube/cert.pem (1123 bytes)
	I0914 17:06:00.303183   27433 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19643-8806/.minikube/key.pem
	I0914 17:06:00.303209   27433 exec_runner.go:144] found /home/jenkins/minikube-integration/19643-8806/.minikube/key.pem, removing ...
	I0914 17:06:00.303217   27433 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19643-8806/.minikube/key.pem
	I0914 17:06:00.303242   27433 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19643-8806/.minikube/key.pem (1675 bytes)
	I0914 17:06:00.303288   27433 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19643-8806/.minikube/ca.pem
	I0914 17:06:00.303306   27433 exec_runner.go:144] found /home/jenkins/minikube-integration/19643-8806/.minikube/ca.pem, removing ...
	I0914 17:06:00.303311   27433 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19643-8806/.minikube/ca.pem
	I0914 17:06:00.303332   27433 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19643-8806/.minikube/ca.pem (1082 bytes)
	I0914 17:06:00.303383   27433 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19643-8806/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca-key.pem org=jenkins.ha-929592-m02 san=[127.0.0.1 192.168.39.148 ha-929592-m02 localhost minikube]
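	The line above shows provisioning minting a server certificate whose SANs cover the loopback, node, and cluster addresses, signed by the CA under .minikube/certs. A compressed sketch of building such a mixed IP/DNS SAN list with the standard library; it is self-signed purely to keep the example short, whereas the real certificate is signed by the minikube CA key:

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"fmt"
		"math/big"
		"net"
		"time"
	)

	func main() {
		sans := []string{"127.0.0.1", "192.168.39.148", "ha-929592-m02", "localhost", "minikube"}
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			panic(err)
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(time.Now().UnixNano()),
			Subject:      pkix.Name{Organization: []string{"jenkins.ha-929592-m02"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().AddDate(10, 0, 0),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		}
		// IP SANs and DNS SANs live in different certificate fields, so split the mixed list.
		for _, s := range sans {
			if ip := net.ParseIP(s); ip != nil {
				tmpl.IPAddresses = append(tmpl.IPAddresses, ip)
			} else {
				tmpl.DNSNames = append(tmpl.DNSNames, s)
			}
		}
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		if err != nil {
			panic(err)
		}
		fmt.Printf("issued server cert, %d DER bytes\n", len(der))
	}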
	I0914 17:06:00.538356   27433 provision.go:177] copyRemoteCerts
	I0914 17:06:00.538412   27433 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0914 17:06:00.538434   27433 main.go:141] libmachine: (ha-929592-m02) Calling .GetSSHHostname
	I0914 17:06:00.540910   27433 main.go:141] libmachine: (ha-929592-m02) DBG | domain ha-929592-m02 has defined MAC address 52:54:00:23:9e:43 in network mk-ha-929592
	I0914 17:06:00.541329   27433 main.go:141] libmachine: (ha-929592-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:9e:43", ip: ""} in network mk-ha-929592: {Iface:virbr1 ExpiryTime:2024-09-14 18:05:49 +0000 UTC Type:0 Mac:52:54:00:23:9e:43 Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:ha-929592-m02 Clientid:01:52:54:00:23:9e:43}
	I0914 17:06:00.541350   27433 main.go:141] libmachine: (ha-929592-m02) DBG | domain ha-929592-m02 has defined IP address 192.168.39.148 and MAC address 52:54:00:23:9e:43 in network mk-ha-929592
	I0914 17:06:00.541555   27433 main.go:141] libmachine: (ha-929592-m02) Calling .GetSSHPort
	I0914 17:06:00.541741   27433 main.go:141] libmachine: (ha-929592-m02) Calling .GetSSHKeyPath
	I0914 17:06:00.541914   27433 main.go:141] libmachine: (ha-929592-m02) Calling .GetSSHUsername
	I0914 17:06:00.542066   27433 sshutil.go:53] new ssh client: &{IP:192.168.39.148 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/ha-929592-m02/id_rsa Username:docker}
	I0914 17:06:00.623831   27433 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0914 17:06:00.623907   27433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0914 17:06:00.647803   27433 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19643-8806/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0914 17:06:00.647883   27433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0914 17:06:00.671875   27433 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19643-8806/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0914 17:06:00.671937   27433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0914 17:06:00.696316   27433 provision.go:87] duration metric: took 400.698997ms to configureAuth
	I0914 17:06:00.696347   27433 buildroot.go:189] setting minikube options for container-runtime
	I0914 17:06:00.696612   27433 config.go:182] Loaded profile config "ha-929592": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0914 17:06:00.696747   27433 main.go:141] libmachine: (ha-929592-m02) Calling .GetSSHHostname
	I0914 17:06:00.699617   27433 main.go:141] libmachine: (ha-929592-m02) DBG | domain ha-929592-m02 has defined MAC address 52:54:00:23:9e:43 in network mk-ha-929592
	I0914 17:06:00.699975   27433 main.go:141] libmachine: (ha-929592-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:9e:43", ip: ""} in network mk-ha-929592: {Iface:virbr1 ExpiryTime:2024-09-14 18:05:49 +0000 UTC Type:0 Mac:52:54:00:23:9e:43 Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:ha-929592-m02 Clientid:01:52:54:00:23:9e:43}
	I0914 17:06:00.700001   27433 main.go:141] libmachine: (ha-929592-m02) DBG | domain ha-929592-m02 has defined IP address 192.168.39.148 and MAC address 52:54:00:23:9e:43 in network mk-ha-929592
	I0914 17:06:00.700178   27433 main.go:141] libmachine: (ha-929592-m02) Calling .GetSSHPort
	I0914 17:06:00.700332   27433 main.go:141] libmachine: (ha-929592-m02) Calling .GetSSHKeyPath
	I0914 17:06:00.700583   27433 main.go:141] libmachine: (ha-929592-m02) Calling .GetSSHKeyPath
	I0914 17:06:00.700744   27433 main.go:141] libmachine: (ha-929592-m02) Calling .GetSSHUsername
	I0914 17:06:00.700901   27433 main.go:141] libmachine: Using SSH client type: native
	I0914 17:06:00.701096   27433 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.148 22 <nil> <nil>}
	I0914 17:06:00.701110   27433 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0914 17:06:00.927452   27433 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0914 17:06:00.927475   27433 main.go:141] libmachine: Checking connection to Docker...
	I0914 17:06:00.927492   27433 main.go:141] libmachine: (ha-929592-m02) Calling .GetURL
	I0914 17:06:00.928693   27433 main.go:141] libmachine: (ha-929592-m02) DBG | Using libvirt version 6000000
	I0914 17:06:00.931091   27433 main.go:141] libmachine: (ha-929592-m02) DBG | domain ha-929592-m02 has defined MAC address 52:54:00:23:9e:43 in network mk-ha-929592
	I0914 17:06:00.931467   27433 main.go:141] libmachine: (ha-929592-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:9e:43", ip: ""} in network mk-ha-929592: {Iface:virbr1 ExpiryTime:2024-09-14 18:05:49 +0000 UTC Type:0 Mac:52:54:00:23:9e:43 Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:ha-929592-m02 Clientid:01:52:54:00:23:9e:43}
	I0914 17:06:00.931495   27433 main.go:141] libmachine: (ha-929592-m02) DBG | domain ha-929592-m02 has defined IP address 192.168.39.148 and MAC address 52:54:00:23:9e:43 in network mk-ha-929592
	I0914 17:06:00.931675   27433 main.go:141] libmachine: Docker is up and running!
	I0914 17:06:00.931693   27433 main.go:141] libmachine: Reticulating splines...
	I0914 17:06:00.931704   27433 client.go:171] duration metric: took 25.39926256s to LocalClient.Create
	I0914 17:06:00.931728   27433 start.go:167] duration metric: took 25.399335014s to libmachine.API.Create "ha-929592"
	I0914 17:06:00.931739   27433 start.go:293] postStartSetup for "ha-929592-m02" (driver="kvm2")
	I0914 17:06:00.931753   27433 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0914 17:06:00.931771   27433 main.go:141] libmachine: (ha-929592-m02) Calling .DriverName
	I0914 17:06:00.932001   27433 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0914 17:06:00.932038   27433 main.go:141] libmachine: (ha-929592-m02) Calling .GetSSHHostname
	I0914 17:06:00.934290   27433 main.go:141] libmachine: (ha-929592-m02) DBG | domain ha-929592-m02 has defined MAC address 52:54:00:23:9e:43 in network mk-ha-929592
	I0914 17:06:00.934650   27433 main.go:141] libmachine: (ha-929592-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:9e:43", ip: ""} in network mk-ha-929592: {Iface:virbr1 ExpiryTime:2024-09-14 18:05:49 +0000 UTC Type:0 Mac:52:54:00:23:9e:43 Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:ha-929592-m02 Clientid:01:52:54:00:23:9e:43}
	I0914 17:06:00.934671   27433 main.go:141] libmachine: (ha-929592-m02) DBG | domain ha-929592-m02 has defined IP address 192.168.39.148 and MAC address 52:54:00:23:9e:43 in network mk-ha-929592
	I0914 17:06:00.934788   27433 main.go:141] libmachine: (ha-929592-m02) Calling .GetSSHPort
	I0914 17:06:00.934945   27433 main.go:141] libmachine: (ha-929592-m02) Calling .GetSSHKeyPath
	I0914 17:06:00.935073   27433 main.go:141] libmachine: (ha-929592-m02) Calling .GetSSHUsername
	I0914 17:06:00.935173   27433 sshutil.go:53] new ssh client: &{IP:192.168.39.148 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/ha-929592-m02/id_rsa Username:docker}
	I0914 17:06:01.020366   27433 ssh_runner.go:195] Run: cat /etc/os-release
	I0914 17:06:01.024445   27433 info.go:137] Remote host: Buildroot 2023.02.9
	I0914 17:06:01.024474   27433 filesync.go:126] Scanning /home/jenkins/minikube-integration/19643-8806/.minikube/addons for local assets ...
	I0914 17:06:01.024535   27433 filesync.go:126] Scanning /home/jenkins/minikube-integration/19643-8806/.minikube/files for local assets ...
	I0914 17:06:01.024612   27433 filesync.go:149] local asset: /home/jenkins/minikube-integration/19643-8806/.minikube/files/etc/ssl/certs/160162.pem -> 160162.pem in /etc/ssl/certs
	I0914 17:06:01.024621   27433 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19643-8806/.minikube/files/etc/ssl/certs/160162.pem -> /etc/ssl/certs/160162.pem
	I0914 17:06:01.024697   27433 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0914 17:06:01.033524   27433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/files/etc/ssl/certs/160162.pem --> /etc/ssl/certs/160162.pem (1708 bytes)
	I0914 17:06:01.055501   27433 start.go:296] duration metric: took 123.750654ms for postStartSetup
	I0914 17:06:01.055544   27433 main.go:141] libmachine: (ha-929592-m02) Calling .GetConfigRaw
	I0914 17:06:01.056168   27433 main.go:141] libmachine: (ha-929592-m02) Calling .GetIP
	I0914 17:06:01.058924   27433 main.go:141] libmachine: (ha-929592-m02) DBG | domain ha-929592-m02 has defined MAC address 52:54:00:23:9e:43 in network mk-ha-929592
	I0914 17:06:01.059289   27433 main.go:141] libmachine: (ha-929592-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:9e:43", ip: ""} in network mk-ha-929592: {Iface:virbr1 ExpiryTime:2024-09-14 18:05:49 +0000 UTC Type:0 Mac:52:54:00:23:9e:43 Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:ha-929592-m02 Clientid:01:52:54:00:23:9e:43}
	I0914 17:06:01.059318   27433 main.go:141] libmachine: (ha-929592-m02) DBG | domain ha-929592-m02 has defined IP address 192.168.39.148 and MAC address 52:54:00:23:9e:43 in network mk-ha-929592
	I0914 17:06:01.059556   27433 profile.go:143] Saving config to /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/ha-929592/config.json ...
	I0914 17:06:01.059787   27433 start.go:128] duration metric: took 25.546229359s to createHost
	I0914 17:06:01.059820   27433 main.go:141] libmachine: (ha-929592-m02) Calling .GetSSHHostname
	I0914 17:06:01.062065   27433 main.go:141] libmachine: (ha-929592-m02) DBG | domain ha-929592-m02 has defined MAC address 52:54:00:23:9e:43 in network mk-ha-929592
	I0914 17:06:01.062470   27433 main.go:141] libmachine: (ha-929592-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:9e:43", ip: ""} in network mk-ha-929592: {Iface:virbr1 ExpiryTime:2024-09-14 18:05:49 +0000 UTC Type:0 Mac:52:54:00:23:9e:43 Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:ha-929592-m02 Clientid:01:52:54:00:23:9e:43}
	I0914 17:06:01.062490   27433 main.go:141] libmachine: (ha-929592-m02) DBG | domain ha-929592-m02 has defined IP address 192.168.39.148 and MAC address 52:54:00:23:9e:43 in network mk-ha-929592
	I0914 17:06:01.062604   27433 main.go:141] libmachine: (ha-929592-m02) Calling .GetSSHPort
	I0914 17:06:01.062769   27433 main.go:141] libmachine: (ha-929592-m02) Calling .GetSSHKeyPath
	I0914 17:06:01.062908   27433 main.go:141] libmachine: (ha-929592-m02) Calling .GetSSHKeyPath
	I0914 17:06:01.063007   27433 main.go:141] libmachine: (ha-929592-m02) Calling .GetSSHUsername
	I0914 17:06:01.063136   27433 main.go:141] libmachine: Using SSH client type: native
	I0914 17:06:01.063334   27433 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.148 22 <nil> <nil>}
	I0914 17:06:01.063346   27433 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0914 17:06:01.170835   27433 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726333561.132255588
	
	I0914 17:06:01.170865   27433 fix.go:216] guest clock: 1726333561.132255588
	I0914 17:06:01.170883   27433 fix.go:229] Guest: 2024-09-14 17:06:01.132255588 +0000 UTC Remote: 2024-09-14 17:06:01.059806988 +0000 UTC m=+68.731004663 (delta=72.4486ms)
	I0914 17:06:01.170908   27433 fix.go:200] guest clock delta is within tolerance: 72.4486ms
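	The clock check above runs `date +%s.%N` over SSH and compares the guest timestamp against the host clock; a small delta (about 72ms here) means no clock adjustment is needed. A small sketch of that comparison, assuming a one-second tolerance for illustration (the real threshold is whatever fix.go uses):

	package main

	import (
		"fmt"
		"strconv"
		"time"
	)

	// clockDelta parses a `date +%s.%N` style timestamp from the guest and
	// returns how far the host clock is ahead of it (negative if behind).
	func clockDelta(guestUnix string) (time.Duration, error) {
		secs, err := strconv.ParseFloat(guestUnix, 64)
		if err != nil {
			return 0, err
		}
		guest := time.Unix(0, int64(secs*float64(time.Second)))
		return time.Since(guest), nil
	}

	func main() {
		d, err := clockDelta("1726333561.132255588")
		if err != nil {
			panic(err)
		}
		tolerance := time.Second // assumed for illustration
		fmt.Printf("guest clock delta: %v (within tolerance: %v)\n", d, d > -tolerance && d < tolerance)
	}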
	I0914 17:06:01.170915   27433 start.go:83] releasing machines lock for "ha-929592-m02", held for 25.65743831s
	I0914 17:06:01.170947   27433 main.go:141] libmachine: (ha-929592-m02) Calling .DriverName
	I0914 17:06:01.171190   27433 main.go:141] libmachine: (ha-929592-m02) Calling .GetIP
	I0914 17:06:01.173690   27433 main.go:141] libmachine: (ha-929592-m02) DBG | domain ha-929592-m02 has defined MAC address 52:54:00:23:9e:43 in network mk-ha-929592
	I0914 17:06:01.174044   27433 main.go:141] libmachine: (ha-929592-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:9e:43", ip: ""} in network mk-ha-929592: {Iface:virbr1 ExpiryTime:2024-09-14 18:05:49 +0000 UTC Type:0 Mac:52:54:00:23:9e:43 Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:ha-929592-m02 Clientid:01:52:54:00:23:9e:43}
	I0914 17:06:01.174086   27433 main.go:141] libmachine: (ha-929592-m02) DBG | domain ha-929592-m02 has defined IP address 192.168.39.148 and MAC address 52:54:00:23:9e:43 in network mk-ha-929592
	I0914 17:06:01.176439   27433 out.go:177] * Found network options:
	I0914 17:06:01.177882   27433 out.go:177]   - NO_PROXY=192.168.39.54
	W0914 17:06:01.178995   27433 proxy.go:119] fail to check proxy env: Error ip not in block
	I0914 17:06:01.179041   27433 main.go:141] libmachine: (ha-929592-m02) Calling .DriverName
	I0914 17:06:01.179577   27433 main.go:141] libmachine: (ha-929592-m02) Calling .DriverName
	I0914 17:06:01.179750   27433 main.go:141] libmachine: (ha-929592-m02) Calling .DriverName
	I0914 17:06:01.179818   27433 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0914 17:06:01.179854   27433 main.go:141] libmachine: (ha-929592-m02) Calling .GetSSHHostname
	W0914 17:06:01.179902   27433 proxy.go:119] fail to check proxy env: Error ip not in block
	I0914 17:06:01.179998   27433 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0914 17:06:01.180020   27433 main.go:141] libmachine: (ha-929592-m02) Calling .GetSSHHostname
	I0914 17:06:01.182388   27433 main.go:141] libmachine: (ha-929592-m02) DBG | domain ha-929592-m02 has defined MAC address 52:54:00:23:9e:43 in network mk-ha-929592
	I0914 17:06:01.182620   27433 main.go:141] libmachine: (ha-929592-m02) DBG | domain ha-929592-m02 has defined MAC address 52:54:00:23:9e:43 in network mk-ha-929592
	I0914 17:06:01.182761   27433 main.go:141] libmachine: (ha-929592-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:9e:43", ip: ""} in network mk-ha-929592: {Iface:virbr1 ExpiryTime:2024-09-14 18:05:49 +0000 UTC Type:0 Mac:52:54:00:23:9e:43 Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:ha-929592-m02 Clientid:01:52:54:00:23:9e:43}
	I0914 17:06:01.182784   27433 main.go:141] libmachine: (ha-929592-m02) DBG | domain ha-929592-m02 has defined IP address 192.168.39.148 and MAC address 52:54:00:23:9e:43 in network mk-ha-929592
	I0914 17:06:01.182922   27433 main.go:141] libmachine: (ha-929592-m02) Calling .GetSSHPort
	I0914 17:06:01.183042   27433 main.go:141] libmachine: (ha-929592-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:9e:43", ip: ""} in network mk-ha-929592: {Iface:virbr1 ExpiryTime:2024-09-14 18:05:49 +0000 UTC Type:0 Mac:52:54:00:23:9e:43 Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:ha-929592-m02 Clientid:01:52:54:00:23:9e:43}
	I0914 17:06:01.183065   27433 main.go:141] libmachine: (ha-929592-m02) DBG | domain ha-929592-m02 has defined IP address 192.168.39.148 and MAC address 52:54:00:23:9e:43 in network mk-ha-929592
	I0914 17:06:01.183100   27433 main.go:141] libmachine: (ha-929592-m02) Calling .GetSSHKeyPath
	I0914 17:06:01.183219   27433 main.go:141] libmachine: (ha-929592-m02) Calling .GetSSHPort
	I0914 17:06:01.183286   27433 main.go:141] libmachine: (ha-929592-m02) Calling .GetSSHUsername
	I0914 17:06:01.183439   27433 main.go:141] libmachine: (ha-929592-m02) Calling .GetSSHKeyPath
	I0914 17:06:01.183446   27433 sshutil.go:53] new ssh client: &{IP:192.168.39.148 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/ha-929592-m02/id_rsa Username:docker}
	I0914 17:06:01.183586   27433 main.go:141] libmachine: (ha-929592-m02) Calling .GetSSHUsername
	I0914 17:06:01.183708   27433 sshutil.go:53] new ssh client: &{IP:192.168.39.148 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/ha-929592-m02/id_rsa Username:docker}
	I0914 17:06:01.424976   27433 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0914 17:06:01.430825   27433 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0914 17:06:01.430885   27433 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0914 17:06:01.445943   27433 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0914 17:06:01.445965   27433 start.go:495] detecting cgroup driver to use...
	I0914 17:06:01.446044   27433 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0914 17:06:01.465516   27433 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0914 17:06:01.481232   27433 docker.go:217] disabling cri-docker service (if available) ...
	I0914 17:06:01.481292   27433 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0914 17:06:01.496727   27433 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0914 17:06:01.510206   27433 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0914 17:06:01.626699   27433 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0914 17:06:01.778807   27433 docker.go:233] disabling docker service ...
	I0914 17:06:01.778872   27433 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0914 17:06:01.792872   27433 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0914 17:06:01.805145   27433 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0914 17:06:01.954030   27433 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0914 17:06:02.076503   27433 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0914 17:06:02.090192   27433 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0914 17:06:02.108104   27433 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0914 17:06:02.108165   27433 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 17:06:02.118586   27433 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0914 17:06:02.118659   27433 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 17:06:02.129037   27433 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 17:06:02.139271   27433 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 17:06:02.149307   27433 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0914 17:06:02.160226   27433 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 17:06:02.170053   27433 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 17:06:02.186445   27433 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
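	The sed commands above edit the CRI-O drop-in /etc/crio/crio.conf.d/02-crio.conf over SSH: pin the pause image, switch the cgroup manager to cgroupfs, reset conmon_cgroup, and open unprivileged port 0 via default_sysctls. A rough local sketch of the two central rewrites (the regexes only approximate the sed expressions in the log):

	package main

	import (
		"fmt"
		"regexp"
	)

	// rewriteCrioConf mirrors the pause_image and cgroup_manager edits from the
	// log, applied to a crio.conf-style string instead of the remote file.
	func rewriteCrioConf(conf string) string {
		pause := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
		cgroup := regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`)
		conf = pause.ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10"`)
		conf = cgroup.ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
		return conf
	}

	func main() {
		in := "pause_image = \"registry.k8s.io/pause:3.9\"\ncgroup_manager = \"systemd\"\n"
		fmt.Print(rewriteCrioConf(in))
	}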
	I0914 17:06:02.196545   27433 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0914 17:06:02.205667   27433 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0914 17:06:02.205727   27433 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0914 17:06:02.218845   27433 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0914 17:06:02.228051   27433 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 17:06:02.335821   27433 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0914 17:06:02.426353   27433 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0914 17:06:02.426415   27433 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0914 17:06:02.430922   27433 start.go:563] Will wait 60s for crictl version
	I0914 17:06:02.430986   27433 ssh_runner.go:195] Run: which crictl
	I0914 17:06:02.434438   27433 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0914 17:06:02.473078   27433 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0914 17:06:02.473163   27433 ssh_runner.go:195] Run: crio --version
	I0914 17:06:02.505224   27433 ssh_runner.go:195] Run: crio --version
	I0914 17:06:02.534429   27433 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0914 17:06:02.535775   27433 out.go:177]   - env NO_PROXY=192.168.39.54
	I0914 17:06:02.536938   27433 main.go:141] libmachine: (ha-929592-m02) Calling .GetIP
	I0914 17:06:02.539641   27433 main.go:141] libmachine: (ha-929592-m02) DBG | domain ha-929592-m02 has defined MAC address 52:54:00:23:9e:43 in network mk-ha-929592
	I0914 17:06:02.539999   27433 main.go:141] libmachine: (ha-929592-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:9e:43", ip: ""} in network mk-ha-929592: {Iface:virbr1 ExpiryTime:2024-09-14 18:05:49 +0000 UTC Type:0 Mac:52:54:00:23:9e:43 Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:ha-929592-m02 Clientid:01:52:54:00:23:9e:43}
	I0914 17:06:02.540031   27433 main.go:141] libmachine: (ha-929592-m02) DBG | domain ha-929592-m02 has defined IP address 192.168.39.148 and MAC address 52:54:00:23:9e:43 in network mk-ha-929592
	I0914 17:06:02.540212   27433 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0914 17:06:02.544021   27433 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0914 17:06:02.556167   27433 mustload.go:65] Loading cluster: ha-929592
	I0914 17:06:02.556379   27433 config.go:182] Loaded profile config "ha-929592": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0914 17:06:02.556641   27433 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 17:06:02.556680   27433 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 17:06:02.573001   27433 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36985
	I0914 17:06:02.573569   27433 main.go:141] libmachine: () Calling .GetVersion
	I0914 17:06:02.574085   27433 main.go:141] libmachine: Using API Version  1
	I0914 17:06:02.574117   27433 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 17:06:02.574551   27433 main.go:141] libmachine: () Calling .GetMachineName
	I0914 17:06:02.574748   27433 main.go:141] libmachine: (ha-929592) Calling .GetState
	I0914 17:06:02.576363   27433 host.go:66] Checking if "ha-929592" exists ...
	I0914 17:06:02.576647   27433 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 17:06:02.576690   27433 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 17:06:02.591896   27433 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34363
	I0914 17:06:02.592362   27433 main.go:141] libmachine: () Calling .GetVersion
	I0914 17:06:02.592910   27433 main.go:141] libmachine: Using API Version  1
	I0914 17:06:02.592930   27433 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 17:06:02.593281   27433 main.go:141] libmachine: () Calling .GetMachineName
	I0914 17:06:02.593447   27433 main.go:141] libmachine: (ha-929592) Calling .DriverName
	I0914 17:06:02.593604   27433 certs.go:68] Setting up /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/ha-929592 for IP: 192.168.39.148
	I0914 17:06:02.593619   27433 certs.go:194] generating shared ca certs ...
	I0914 17:06:02.593645   27433 certs.go:226] acquiring lock for ca certs: {Name:mkb663a3180967f5f94f0c355b2cd55067394331 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 17:06:02.593773   27433 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19643-8806/.minikube/ca.key
	I0914 17:06:02.593810   27433 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19643-8806/.minikube/proxy-client-ca.key
	I0914 17:06:02.593821   27433 certs.go:256] generating profile certs ...
	I0914 17:06:02.593889   27433 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/ha-929592/client.key
	I0914 17:06:02.593911   27433 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/ha-929592/apiserver.key.a7b427e9
	I0914 17:06:02.593924   27433 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/ha-929592/apiserver.crt.a7b427e9 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.54 192.168.39.148 192.168.39.254]
	I0914 17:06:02.674183   27433 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/ha-929592/apiserver.crt.a7b427e9 ...
	I0914 17:06:02.674215   27433 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/ha-929592/apiserver.crt.a7b427e9: {Name:mk7b0abf9bde6718910e40cf89b039fc62438027 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 17:06:02.674380   27433 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/ha-929592/apiserver.key.a7b427e9 ...
	I0914 17:06:02.674392   27433 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/ha-929592/apiserver.key.a7b427e9: {Name:mkf46cb15e9565b29650076ca2280885cae50778 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 17:06:02.674460   27433 certs.go:381] copying /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/ha-929592/apiserver.crt.a7b427e9 -> /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/ha-929592/apiserver.crt
	I0914 17:06:02.674597   27433 certs.go:385] copying /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/ha-929592/apiserver.key.a7b427e9 -> /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/ha-929592/apiserver.key
	I0914 17:06:02.674719   27433 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/ha-929592/proxy-client.key
	I0914 17:06:02.674735   27433 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19643-8806/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0914 17:06:02.674748   27433 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19643-8806/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0914 17:06:02.674762   27433 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19643-8806/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0914 17:06:02.674774   27433 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19643-8806/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0914 17:06:02.674787   27433 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/ha-929592/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0914 17:06:02.674800   27433 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/ha-929592/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0914 17:06:02.674811   27433 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/ha-929592/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0914 17:06:02.674823   27433 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/ha-929592/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0914 17:06:02.674877   27433 certs.go:484] found cert: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/16016.pem (1338 bytes)
	W0914 17:06:02.674904   27433 certs.go:480] ignoring /home/jenkins/minikube-integration/19643-8806/.minikube/certs/16016_empty.pem, impossibly tiny 0 bytes
	I0914 17:06:02.674915   27433 certs.go:484] found cert: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca-key.pem (1679 bytes)
	I0914 17:06:02.674942   27433 certs.go:484] found cert: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca.pem (1082 bytes)
	I0914 17:06:02.674964   27433 certs.go:484] found cert: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/cert.pem (1123 bytes)
	I0914 17:06:02.674984   27433 certs.go:484] found cert: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/key.pem (1675 bytes)
	I0914 17:06:02.675019   27433 certs.go:484] found cert: /home/jenkins/minikube-integration/19643-8806/.minikube/files/etc/ssl/certs/160162.pem (1708 bytes)
	I0914 17:06:02.675052   27433 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19643-8806/.minikube/files/etc/ssl/certs/160162.pem -> /usr/share/ca-certificates/160162.pem
	I0914 17:06:02.675066   27433 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19643-8806/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0914 17:06:02.675078   27433 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/16016.pem -> /usr/share/ca-certificates/16016.pem
	I0914 17:06:02.675106   27433 main.go:141] libmachine: (ha-929592) Calling .GetSSHHostname
	I0914 17:06:02.678197   27433 main.go:141] libmachine: (ha-929592) DBG | domain ha-929592 has defined MAC address 52:54:00:5c:cb:09 in network mk-ha-929592
	I0914 17:06:02.678611   27433 main.go:141] libmachine: (ha-929592) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:cb:09", ip: ""} in network mk-ha-929592: {Iface:virbr1 ExpiryTime:2024-09-14 18:05:06 +0000 UTC Type:0 Mac:52:54:00:5c:cb:09 Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:ha-929592 Clientid:01:52:54:00:5c:cb:09}
	I0914 17:06:02.678637   27433 main.go:141] libmachine: (ha-929592) DBG | domain ha-929592 has defined IP address 192.168.39.54 and MAC address 52:54:00:5c:cb:09 in network mk-ha-929592
	I0914 17:06:02.678799   27433 main.go:141] libmachine: (ha-929592) Calling .GetSSHPort
	I0914 17:06:02.678987   27433 main.go:141] libmachine: (ha-929592) Calling .GetSSHKeyPath
	I0914 17:06:02.679150   27433 main.go:141] libmachine: (ha-929592) Calling .GetSSHUsername
	I0914 17:06:02.679293   27433 sshutil.go:53] new ssh client: &{IP:192.168.39.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/ha-929592/id_rsa Username:docker}
	I0914 17:06:02.754596   27433 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0914 17:06:02.759290   27433 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0914 17:06:02.769849   27433 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0914 17:06:02.774219   27433 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0914 17:06:02.784759   27433 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0914 17:06:02.788750   27433 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0914 17:06:02.799025   27433 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0914 17:06:02.802760   27433 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0914 17:06:02.812026   27433 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0914 17:06:02.815883   27433 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0914 17:06:02.825239   27433 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0914 17:06:02.828987   27433 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0914 17:06:02.839073   27433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0914 17:06:02.862561   27433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0914 17:06:02.885092   27433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0914 17:06:02.907879   27433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0914 17:06:02.931262   27433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/ha-929592/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0914 17:06:02.953838   27433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/ha-929592/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0914 17:06:02.977311   27433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/ha-929592/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0914 17:06:03.000261   27433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/ha-929592/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0914 17:06:03.022914   27433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/files/etc/ssl/certs/160162.pem --> /usr/share/ca-certificates/160162.pem (1708 bytes)
	I0914 17:06:03.045556   27433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0914 17:06:03.072140   27433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/certs/16016.pem --> /usr/share/ca-certificates/16016.pem (1338 bytes)
	I0914 17:06:03.097354   27433 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0914 17:06:03.113627   27433 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0914 17:06:03.129914   27433 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0914 17:06:03.145634   27433 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0914 17:06:03.161520   27433 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0914 17:06:03.177503   27433 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0914 17:06:03.193586   27433 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0914 17:06:03.210279   27433 ssh_runner.go:195] Run: openssl version
	I0914 17:06:03.215862   27433 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16016.pem && ln -fs /usr/share/ca-certificates/16016.pem /etc/ssl/certs/16016.pem"
	I0914 17:06:03.226494   27433 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16016.pem
	I0914 17:06:03.230749   27433 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 14 17:01 /usr/share/ca-certificates/16016.pem
	I0914 17:06:03.230811   27433 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16016.pem
	I0914 17:06:03.236532   27433 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16016.pem /etc/ssl/certs/51391683.0"
	I0914 17:06:03.247348   27433 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/160162.pem && ln -fs /usr/share/ca-certificates/160162.pem /etc/ssl/certs/160162.pem"
	I0914 17:06:03.258810   27433 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/160162.pem
	I0914 17:06:03.263294   27433 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 14 17:01 /usr/share/ca-certificates/160162.pem
	I0914 17:06:03.263368   27433 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/160162.pem
	I0914 17:06:03.268900   27433 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/160162.pem /etc/ssl/certs/3ec20f2e.0"
	I0914 17:06:03.279654   27433 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0914 17:06:03.289942   27433 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0914 17:06:03.294193   27433 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 14 16:44 /usr/share/ca-certificates/minikubeCA.pem
	I0914 17:06:03.294243   27433 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0914 17:06:03.299592   27433 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
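
The three blocks above install each CA into the node's trust store twice: once under a readable name in /usr/share/ca-certificates and once, via a symlink in /etc/ssl/certs, under the OpenSSL subject-hash name (51391683.0, 3ec20f2e.0, b5213941.0) that OpenSSL's certificate lookup uses. Below is a minimal Go sketch of that hash-and-symlink step, shelling out to openssl the same way the log does; the helper name and paths are illustrative only, not minikube's code.

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkBySubjectHash symlinks certPath into certsDir under the
// "<openssl subject hash>.0" name that OpenSSL's lookup expects.
func linkBySubjectHash(certsDir, certPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", certPath, err)
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(certsDir, hash+".0")
	_ = os.Remove(link) // replace a stale link if one is already present
	return os.Symlink(certPath, link)
}

func main() {
	// Illustrative paths only.
	if err := linkBySubjectHash("/etc/ssl/certs", "/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
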
	I0914 17:06:03.309907   27433 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0914 17:06:03.314010   27433 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0914 17:06:03.314056   27433 kubeadm.go:934] updating node {m02 192.168.39.148 8443 v1.31.1 crio true true} ...
	I0914 17:06:03.314182   27433 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-929592-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.148
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-929592 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
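
The drop-in above is the kubelet unit minikube renders for the joining node, overriding --node-ip and --hostname-override so the kubelet registers as ha-929592-m02 on 192.168.39.148. A small sketch of rendering a comparable [Service] drop-in with text/template follows; the template string and field names are invented for illustration and are not minikube's actual template.

package main

import (
	"os"
	"text/template"
)

// Illustrative only: minikube's real kubelet template lives in its sources.
const kubeletDropIn = `[Service]
ExecStart=
ExecStart={{.BinDir}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}
`

func main() {
	t := template.Must(template.New("kubelet").Parse(kubeletDropIn))
	// Values taken from the log above.
	_ = t.Execute(os.Stdout, map[string]string{
		"BinDir":   "/var/lib/minikube/binaries/v1.31.1",
		"NodeName": "ha-929592-m02",
		"NodeIP":   "192.168.39.148",
	})
}
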
	I0914 17:06:03.314209   27433 kube-vip.go:115] generating kube-vip config ...
	I0914 17:06:03.314241   27433 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0914 17:06:03.332773   27433 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0914 17:06:03.332844   27433 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
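
This manifest is the kube-vip static pod written to /etc/kubernetes/manifests on each control-plane node; with cp_enable, vip_leaderelection and lb_enable set, the elected pod holds the virtual IP 192.168.39.254 and load-balances API traffic on port 8443. The sketch below (not part of minikube) probes the apiserver through that VIP; it assumes the default anonymous access to /readyz and skips certificate verification purely for brevity.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// Probe the control-plane VIP from the log above.
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// Quick probe only; real clients should trust the cluster CA instead.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get("https://192.168.39.254:8443/readyz")
	if err != nil {
		fmt.Println("VIP not reachable:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("VIP answered %d: %s\n", resp.StatusCode, body)
}
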
	I0914 17:06:03.332892   27433 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0914 17:06:03.346197   27433 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.1': No such file or directory
	
	Initiating transfer...
	I0914 17:06:03.346254   27433 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.1
	I0914 17:06:03.361915   27433 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256
	I0914 17:06:03.361949   27433 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19643-8806/.minikube/cache/linux/amd64/v1.31.1/kubectl -> /var/lib/minikube/binaries/v1.31.1/kubectl
	I0914 17:06:03.362005   27433 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl
	I0914 17:06:03.362034   27433 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/19643-8806/.minikube/cache/linux/amd64/v1.31.1/kubelet
	I0914 17:06:03.362057   27433 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/19643-8806/.minikube/cache/linux/amd64/v1.31.1/kubeadm
	I0914 17:06:03.366263   27433 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubectl': No such file or directory
	I0914 17:06:03.366294   27433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/cache/linux/amd64/v1.31.1/kubectl --> /var/lib/minikube/binaries/v1.31.1/kubectl (56381592 bytes)
	I0914 17:06:04.306352   27433 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19643-8806/.minikube/cache/linux/amd64/v1.31.1/kubeadm -> /var/lib/minikube/binaries/v1.31.1/kubeadm
	I0914 17:06:04.306428   27433 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm
	I0914 17:06:04.310986   27433 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubeadm': No such file or directory
	I0914 17:06:04.311021   27433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/cache/linux/amd64/v1.31.1/kubeadm --> /var/lib/minikube/binaries/v1.31.1/kubeadm (58290328 bytes)
	I0914 17:06:04.437086   27433 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 17:06:04.472561   27433 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19643-8806/.minikube/cache/linux/amd64/v1.31.1/kubelet -> /var/lib/minikube/binaries/v1.31.1/kubelet
	I0914 17:06:04.472652   27433 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet
	I0914 17:06:04.481645   27433 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubelet': No such file or directory
	I0914 17:06:04.481689   27433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/cache/linux/amd64/v1.31.1/kubelet --> /var/lib/minikube/binaries/v1.31.1/kubelet (76869944 bytes)
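
Each binary above is fetched with a checksum=file:...sha256 source, meaning the .sha256 file published next to the binary is downloaded and compared before the binary is cached and copied to the node. A hedged sketch of that verify-while-downloading pattern follows, assuming the .sha256 file holds a bare hex digest as the dl.k8s.io ones do; the helper name is invented for illustration.

package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"os"
	"strings"
)

// downloadVerified fetches url into dst and checks it against the
// hex digest published at url+".sha256".
func downloadVerified(url, dst string) error {
	resp, err := http.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()

	out, err := os.Create(dst)
	if err != nil {
		return err
	}
	defer out.Close()

	// Hash the bytes while writing them to disk.
	h := sha256.New()
	if _, err := io.Copy(io.MultiWriter(out, h), resp.Body); err != nil {
		return err
	}

	sumResp, err := http.Get(url + ".sha256")
	if err != nil {
		return err
	}
	defer sumResp.Body.Close()
	want, err := io.ReadAll(sumResp.Body)
	if err != nil {
		return err
	}
	if got := hex.EncodeToString(h.Sum(nil)); got != strings.TrimSpace(string(want)) {
		return fmt.Errorf("checksum mismatch for %s", url)
	}
	return nil
}

func main() {
	fmt.Println(downloadVerified("https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl", "kubectl"))
}
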
	I0914 17:06:04.894100   27433 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0914 17:06:04.906172   27433 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0914 17:06:04.923934   27433 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0914 17:06:04.943429   27433 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0914 17:06:04.960902   27433 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0914 17:06:04.965096   27433 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0914 17:06:04.977142   27433 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 17:06:05.100919   27433 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0914 17:06:05.118791   27433 host.go:66] Checking if "ha-929592" exists ...
	I0914 17:06:05.119235   27433 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 17:06:05.119291   27433 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 17:06:05.134754   27433 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42231
	I0914 17:06:05.135388   27433 main.go:141] libmachine: () Calling .GetVersion
	I0914 17:06:05.135932   27433 main.go:141] libmachine: Using API Version  1
	I0914 17:06:05.135953   27433 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 17:06:05.136295   27433 main.go:141] libmachine: () Calling .GetMachineName
	I0914 17:06:05.136514   27433 main.go:141] libmachine: (ha-929592) Calling .DriverName
	I0914 17:06:05.136651   27433 start.go:317] joinCluster: &{Name:ha-929592 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19643/minikube-v1.34.0-1726281733-19643-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726281268-19643@sha256:cce8be0c1ac4e3d852132008ef1cc1dcf5b79f708d025db83f146ae65db32e8e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 Cluster
Name:ha-929592 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.54 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.148 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration
:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0914 17:06:05.136779   27433 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0914 17:06:05.136798   27433 main.go:141] libmachine: (ha-929592) Calling .GetSSHHostname
	I0914 17:06:05.140027   27433 main.go:141] libmachine: (ha-929592) DBG | domain ha-929592 has defined MAC address 52:54:00:5c:cb:09 in network mk-ha-929592
	I0914 17:06:05.140431   27433 main.go:141] libmachine: (ha-929592) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:cb:09", ip: ""} in network mk-ha-929592: {Iface:virbr1 ExpiryTime:2024-09-14 18:05:06 +0000 UTC Type:0 Mac:52:54:00:5c:cb:09 Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:ha-929592 Clientid:01:52:54:00:5c:cb:09}
	I0914 17:06:05.140456   27433 main.go:141] libmachine: (ha-929592) DBG | domain ha-929592 has defined IP address 192.168.39.54 and MAC address 52:54:00:5c:cb:09 in network mk-ha-929592
	I0914 17:06:05.140610   27433 main.go:141] libmachine: (ha-929592) Calling .GetSSHPort
	I0914 17:06:05.140777   27433 main.go:141] libmachine: (ha-929592) Calling .GetSSHKeyPath
	I0914 17:06:05.140973   27433 main.go:141] libmachine: (ha-929592) Calling .GetSSHUsername
	I0914 17:06:05.141108   27433 sshutil.go:53] new ssh client: &{IP:192.168.39.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/ha-929592/id_rsa Username:docker}
	I0914 17:06:05.305267   27433 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.148 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0914 17:06:05.305343   27433 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 69bgkx.t8gcp42bom698swe --discovery-token-ca-cert-hash sha256:30a8b6e98108730ec92a246ad3f4e746211e79f910581e46cd69c4cd78384600 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-929592-m02 --control-plane --apiserver-advertise-address=192.168.39.148 --apiserver-bind-port=8443"
	I0914 17:06:27.237304   27433 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 69bgkx.t8gcp42bom698swe --discovery-token-ca-cert-hash sha256:30a8b6e98108730ec92a246ad3f4e746211e79f910581e46cd69c4cd78384600 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-929592-m02 --control-plane --apiserver-advertise-address=192.168.39.148 --apiserver-bind-port=8443": (21.931933299s)
	I0914 17:06:27.237345   27433 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0914 17:06:27.810007   27433 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-929592-m02 minikube.k8s.io/updated_at=2024_09_14T17_06_27_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=fbeeb744274463b05401c917e5ab21bbaf5ef95a minikube.k8s.io/name=ha-929592 minikube.k8s.io/primary=false
	I0914 17:06:27.964976   27433 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-929592-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0914 17:06:28.142890   27433 start.go:319] duration metric: took 23.006235295s to joinCluster
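
The join command executed above is the output of "kubeadm token create --print-join-command" run on the first control plane, with the flags for the new member appended (node name, advertise address, bind port, CRI socket, --control-plane). A tiny sketch of assembling such a command string; the function is illustrative, not minikube's.

package main

import (
	"fmt"
	"strings"
)

// buildJoinCmd appends the control-plane flags to the plain
// "kubeadm join ..." line returned by --print-join-command.
func buildJoinCmd(printJoinOutput, nodeName, advertiseIP string, port int) string {
	base := strings.TrimSpace(printJoinOutput)
	return fmt.Sprintf("%s --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=%s --control-plane --apiserver-advertise-address=%s --apiserver-bind-port=%d",
		base, nodeName, advertiseIP, port)
}

func main() {
	cmd := buildJoinCmd(
		"kubeadm join control-plane.minikube.internal:8443 --token <redacted> --discovery-token-ca-cert-hash sha256:<redacted>",
		"ha-929592-m02", "192.168.39.148", 8443)
	fmt.Println(cmd)
}
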
	I0914 17:06:28.142975   27433 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.39.148 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0914 17:06:28.143287   27433 config.go:182] Loaded profile config "ha-929592": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0914 17:06:28.144710   27433 out.go:177] * Verifying Kubernetes components...
	I0914 17:06:28.145892   27433 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 17:06:28.400701   27433 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0914 17:06:28.443879   27433 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19643-8806/kubeconfig
	I0914 17:06:28.444188   27433 kapi.go:59] client config for ha-929592: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19643-8806/.minikube/profiles/ha-929592/client.crt", KeyFile:"/home/jenkins/minikube-integration/19643-8806/.minikube/profiles/ha-929592/client.key", CAFile:"/home/jenkins/minikube-integration/19643-8806/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)},
UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f6fca0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0914 17:06:28.444306   27433 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.54:8443
	I0914 17:06:28.444625   27433 node_ready.go:35] waiting up to 6m0s for node "ha-929592-m02" to be "Ready" ...
	I0914 17:06:28.444789   27433 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-929592-m02
	I0914 17:06:28.444800   27433 round_trippers.go:469] Request Headers:
	I0914 17:06:28.444813   27433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 17:06:28.444822   27433 round_trippers.go:473]     Accept: application/json, */*
	I0914 17:06:28.454874   27433 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0914 17:06:28.945857   27433 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-929592-m02
	I0914 17:06:28.945881   27433 round_trippers.go:469] Request Headers:
	I0914 17:06:28.945889   27433 round_trippers.go:473]     Accept: application/json, */*
	I0914 17:06:28.945894   27433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 17:06:28.950053   27433 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0914 17:06:29.444967   27433 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-929592-m02
	I0914 17:06:29.444987   27433 round_trippers.go:469] Request Headers:
	I0914 17:06:29.444995   27433 round_trippers.go:473]     Accept: application/json, */*
	I0914 17:06:29.445000   27433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 17:06:29.448785   27433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 17:06:29.945744   27433 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-929592-m02
	I0914 17:06:29.945767   27433 round_trippers.go:469] Request Headers:
	I0914 17:06:29.945774   27433 round_trippers.go:473]     Accept: application/json, */*
	I0914 17:06:29.945778   27433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 17:06:29.949007   27433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 17:06:30.445350   27433 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-929592-m02
	I0914 17:06:30.445391   27433 round_trippers.go:469] Request Headers:
	I0914 17:06:30.445400   27433 round_trippers.go:473]     Accept: application/json, */*
	I0914 17:06:30.445405   27433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 17:06:30.448516   27433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 17:06:30.449150   27433 node_ready.go:53] node "ha-929592-m02" has status "Ready":"False"
	I0914 17:06:30.944823   27433 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-929592-m02
	I0914 17:06:30.944842   27433 round_trippers.go:469] Request Headers:
	I0914 17:06:30.944852   27433 round_trippers.go:473]     Accept: application/json, */*
	I0914 17:06:30.944856   27433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 17:06:30.948489   27433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 17:06:31.445403   27433 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-929592-m02
	I0914 17:06:31.445423   27433 round_trippers.go:469] Request Headers:
	I0914 17:06:31.445430   27433 round_trippers.go:473]     Accept: application/json, */*
	I0914 17:06:31.445434   27433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 17:06:31.450120   27433 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0914 17:06:31.945219   27433 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-929592-m02
	I0914 17:06:31.945252   27433 round_trippers.go:469] Request Headers:
	I0914 17:06:31.945263   27433 round_trippers.go:473]     Accept: application/json, */*
	I0914 17:06:31.945269   27433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 17:06:31.948193   27433 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 17:06:32.445454   27433 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-929592-m02
	I0914 17:06:32.445474   27433 round_trippers.go:469] Request Headers:
	I0914 17:06:32.445485   27433 round_trippers.go:473]     Accept: application/json, */*
	I0914 17:06:32.445489   27433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 17:06:32.448956   27433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 17:06:32.449653   27433 node_ready.go:53] node "ha-929592-m02" has status "Ready":"False"
	I0914 17:06:32.945507   27433 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-929592-m02
	I0914 17:06:32.945528   27433 round_trippers.go:469] Request Headers:
	I0914 17:06:32.945536   27433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 17:06:32.945539   27433 round_trippers.go:473]     Accept: application/json, */*
	I0914 17:06:32.948974   27433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 17:06:33.445218   27433 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-929592-m02
	I0914 17:06:33.445259   27433 round_trippers.go:469] Request Headers:
	I0914 17:06:33.445266   27433 round_trippers.go:473]     Accept: application/json, */*
	I0914 17:06:33.445270   27433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 17:06:33.448638   27433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 17:06:33.945669   27433 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-929592-m02
	I0914 17:06:33.945690   27433 round_trippers.go:469] Request Headers:
	I0914 17:06:33.945699   27433 round_trippers.go:473]     Accept: application/json, */*
	I0914 17:06:33.945702   27433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 17:06:33.949250   27433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 17:06:34.445298   27433 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-929592-m02
	I0914 17:06:34.445336   27433 round_trippers.go:469] Request Headers:
	I0914 17:06:34.445344   27433 round_trippers.go:473]     Accept: application/json, */*
	I0914 17:06:34.445349   27433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 17:06:34.448841   27433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 17:06:34.945131   27433 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-929592-m02
	I0914 17:06:34.945155   27433 round_trippers.go:469] Request Headers:
	I0914 17:06:34.945163   27433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 17:06:34.945169   27433 round_trippers.go:473]     Accept: application/json, */*
	I0914 17:06:34.948811   27433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 17:06:34.949307   27433 node_ready.go:53] node "ha-929592-m02" has status "Ready":"False"
	I0914 17:06:35.445126   27433 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-929592-m02
	I0914 17:06:35.445155   27433 round_trippers.go:469] Request Headers:
	I0914 17:06:35.445167   27433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 17:06:35.445173   27433 round_trippers.go:473]     Accept: application/json, */*
	I0914 17:06:35.448787   27433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 17:06:35.945782   27433 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-929592-m02
	I0914 17:06:35.945808   27433 round_trippers.go:469] Request Headers:
	I0914 17:06:35.945816   27433 round_trippers.go:473]     Accept: application/json, */*
	I0914 17:06:35.945820   27433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 17:06:35.949787   27433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 17:06:36.445729   27433 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-929592-m02
	I0914 17:06:36.445754   27433 round_trippers.go:469] Request Headers:
	I0914 17:06:36.445762   27433 round_trippers.go:473]     Accept: application/json, */*
	I0914 17:06:36.445770   27433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 17:06:36.449051   27433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 17:06:36.945857   27433 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-929592-m02
	I0914 17:06:36.945889   27433 round_trippers.go:469] Request Headers:
	I0914 17:06:36.945898   27433 round_trippers.go:473]     Accept: application/json, */*
	I0914 17:06:36.945902   27433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 17:06:36.949623   27433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 17:06:36.950179   27433 node_ready.go:53] node "ha-929592-m02" has status "Ready":"False"
	I0914 17:06:37.445701   27433 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-929592-m02
	I0914 17:06:37.445724   27433 round_trippers.go:469] Request Headers:
	I0914 17:06:37.445733   27433 round_trippers.go:473]     Accept: application/json, */*
	I0914 17:06:37.445737   27433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 17:06:37.449415   27433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 17:06:37.945822   27433 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-929592-m02
	I0914 17:06:37.945843   27433 round_trippers.go:469] Request Headers:
	I0914 17:06:37.945851   27433 round_trippers.go:473]     Accept: application/json, */*
	I0914 17:06:37.945855   27433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 17:06:37.949294   27433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 17:06:38.445253   27433 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-929592-m02
	I0914 17:06:38.445277   27433 round_trippers.go:469] Request Headers:
	I0914 17:06:38.445286   27433 round_trippers.go:473]     Accept: application/json, */*
	I0914 17:06:38.445292   27433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 17:06:38.448999   27433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 17:06:38.945059   27433 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-929592-m02
	I0914 17:06:38.945082   27433 round_trippers.go:469] Request Headers:
	I0914 17:06:38.945090   27433 round_trippers.go:473]     Accept: application/json, */*
	I0914 17:06:38.945095   27433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 17:06:38.948829   27433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 17:06:39.444999   27433 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-929592-m02
	I0914 17:06:39.445021   27433 round_trippers.go:469] Request Headers:
	I0914 17:06:39.445029   27433 round_trippers.go:473]     Accept: application/json, */*
	I0914 17:06:39.445033   27433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 17:06:39.448760   27433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 17:06:39.449370   27433 node_ready.go:53] node "ha-929592-m02" has status "Ready":"False"
	I0914 17:06:39.945847   27433 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-929592-m02
	I0914 17:06:39.945871   27433 round_trippers.go:469] Request Headers:
	I0914 17:06:39.945879   27433 round_trippers.go:473]     Accept: application/json, */*
	I0914 17:06:39.945883   27433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 17:06:39.949527   27433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 17:06:40.444905   27433 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-929592-m02
	I0914 17:06:40.444928   27433 round_trippers.go:469] Request Headers:
	I0914 17:06:40.444935   27433 round_trippers.go:473]     Accept: application/json, */*
	I0914 17:06:40.444938   27433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 17:06:40.448294   27433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 17:06:40.945759   27433 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-929592-m02
	I0914 17:06:40.945782   27433 round_trippers.go:469] Request Headers:
	I0914 17:06:40.945789   27433 round_trippers.go:473]     Accept: application/json, */*
	I0914 17:06:40.945794   27433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 17:06:40.949593   27433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 17:06:41.445825   27433 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-929592-m02
	I0914 17:06:41.445854   27433 round_trippers.go:469] Request Headers:
	I0914 17:06:41.445865   27433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 17:06:41.445871   27433 round_trippers.go:473]     Accept: application/json, */*
	I0914 17:06:41.449510   27433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 17:06:41.449939   27433 node_ready.go:53] node "ha-929592-m02" has status "Ready":"False"
	I0914 17:06:41.945333   27433 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-929592-m02
	I0914 17:06:41.945357   27433 round_trippers.go:469] Request Headers:
	I0914 17:06:41.945369   27433 round_trippers.go:473]     Accept: application/json, */*
	I0914 17:06:41.945376   27433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 17:06:41.948965   27433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 17:06:42.445259   27433 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-929592-m02
	I0914 17:06:42.445281   27433 round_trippers.go:469] Request Headers:
	I0914 17:06:42.445296   27433 round_trippers.go:473]     Accept: application/json, */*
	I0914 17:06:42.445300   27433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 17:06:42.448678   27433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 17:06:42.945096   27433 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-929592-m02
	I0914 17:06:42.945118   27433 round_trippers.go:469] Request Headers:
	I0914 17:06:42.945126   27433 round_trippers.go:473]     Accept: application/json, */*
	I0914 17:06:42.945130   27433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 17:06:42.948381   27433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 17:06:43.445351   27433 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-929592-m02
	I0914 17:06:43.445373   27433 round_trippers.go:469] Request Headers:
	I0914 17:06:43.445382   27433 round_trippers.go:473]     Accept: application/json, */*
	I0914 17:06:43.445385   27433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 17:06:43.449853   27433 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0914 17:06:43.450410   27433 node_ready.go:53] node "ha-929592-m02" has status "Ready":"False"
	I0914 17:06:43.944892   27433 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-929592-m02
	I0914 17:06:43.944915   27433 round_trippers.go:469] Request Headers:
	I0914 17:06:43.944923   27433 round_trippers.go:473]     Accept: application/json, */*
	I0914 17:06:43.944927   27433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 17:06:43.948315   27433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 17:06:44.445368   27433 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-929592-m02
	I0914 17:06:44.445392   27433 round_trippers.go:469] Request Headers:
	I0914 17:06:44.445400   27433 round_trippers.go:473]     Accept: application/json, */*
	I0914 17:06:44.445404   27433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 17:06:44.448455   27433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 17:06:44.945534   27433 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-929592-m02
	I0914 17:06:44.945557   27433 round_trippers.go:469] Request Headers:
	I0914 17:06:44.945565   27433 round_trippers.go:473]     Accept: application/json, */*
	I0914 17:06:44.945569   27433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 17:06:44.949438   27433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 17:06:45.445324   27433 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-929592-m02
	I0914 17:06:45.445348   27433 round_trippers.go:469] Request Headers:
	I0914 17:06:45.445356   27433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 17:06:45.445360   27433 round_trippers.go:473]     Accept: application/json, */*
	I0914 17:06:45.448989   27433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 17:06:45.945405   27433 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-929592-m02
	I0914 17:06:45.945431   27433 round_trippers.go:469] Request Headers:
	I0914 17:06:45.945443   27433 round_trippers.go:473]     Accept: application/json, */*
	I0914 17:06:45.945453   27433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 17:06:45.952479   27433 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0914 17:06:45.953028   27433 node_ready.go:49] node "ha-929592-m02" has status "Ready":"True"
	I0914 17:06:45.953060   27433 node_ready.go:38] duration metric: took 17.508397098s for node "ha-929592-m02" to be "Ready" ...
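
The repeated GETs above are minikube polling the Node object roughly every 500ms until its Ready condition reports True; the pod wait that follows applies the same pattern to each system pod's Ready condition. A minimal client-go sketch of such a wait, assuming a hypothetical kubeconfig path; this is illustrative, not minikube's node_ready.go.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isNodeReady reports whether the NodeReady condition is True.
func isNodeReady(node *corev1.Node) bool {
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Hypothetical kubeconfig path; the node name is taken from the log.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	deadline := time.Now().Add(6 * time.Minute)
	for time.Now().Before(deadline) {
		node, err := client.CoreV1().Nodes().Get(context.TODO(), "ha-929592-m02", metav1.GetOptions{})
		if err == nil && isNodeReady(node) {
			fmt.Println("node is Ready")
			return
		}
		time.Sleep(500 * time.Millisecond) // roughly the poll interval seen in the log
	}
	fmt.Println("timed out waiting for node to become Ready")
}
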
	I0914 17:06:45.953073   27433 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 17:06:45.953195   27433 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/namespaces/kube-system/pods
	I0914 17:06:45.953210   27433 round_trippers.go:469] Request Headers:
	I0914 17:06:45.953222   27433 round_trippers.go:473]     Accept: application/json, */*
	I0914 17:06:45.953229   27433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 17:06:45.959166   27433 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0914 17:06:45.966388   27433 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-66txm" in "kube-system" namespace to be "Ready" ...
	I0914 17:06:45.966505   27433 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-66txm
	I0914 17:06:45.966516   27433 round_trippers.go:469] Request Headers:
	I0914 17:06:45.966527   27433 round_trippers.go:473]     Accept: application/json, */*
	I0914 17:06:45.966534   27433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 17:06:45.970133   27433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 17:06:45.970846   27433 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-929592
	I0914 17:06:45.970863   27433 round_trippers.go:469] Request Headers:
	I0914 17:06:45.970871   27433 round_trippers.go:473]     Accept: application/json, */*
	I0914 17:06:45.970875   27433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 17:06:45.974296   27433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 17:06:45.974856   27433 pod_ready.go:93] pod "coredns-7c65d6cfc9-66txm" in "kube-system" namespace has status "Ready":"True"
	I0914 17:06:45.974879   27433 pod_ready.go:82] duration metric: took 8.463909ms for pod "coredns-7c65d6cfc9-66txm" in "kube-system" namespace to be "Ready" ...
	I0914 17:06:45.974890   27433 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-dpdz4" in "kube-system" namespace to be "Ready" ...
	I0914 17:06:45.974954   27433 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-dpdz4
	I0914 17:06:45.974961   27433 round_trippers.go:469] Request Headers:
	I0914 17:06:45.974969   27433 round_trippers.go:473]     Accept: application/json, */*
	I0914 17:06:45.974974   27433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 17:06:45.978204   27433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 17:06:45.978916   27433 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-929592
	I0914 17:06:45.978937   27433 round_trippers.go:469] Request Headers:
	I0914 17:06:45.978945   27433 round_trippers.go:473]     Accept: application/json, */*
	I0914 17:06:45.978949   27433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 17:06:45.982392   27433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 17:06:45.982929   27433 pod_ready.go:93] pod "coredns-7c65d6cfc9-dpdz4" in "kube-system" namespace has status "Ready":"True"
	I0914 17:06:45.982957   27433 pod_ready.go:82] duration metric: took 8.060115ms for pod "coredns-7c65d6cfc9-dpdz4" in "kube-system" namespace to be "Ready" ...
	I0914 17:06:45.982975   27433 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-929592" in "kube-system" namespace to be "Ready" ...
	I0914 17:06:45.983054   27433 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/namespaces/kube-system/pods/etcd-ha-929592
	I0914 17:06:45.983066   27433 round_trippers.go:469] Request Headers:
	I0914 17:06:45.983076   27433 round_trippers.go:473]     Accept: application/json, */*
	I0914 17:06:45.983085   27433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 17:06:45.985873   27433 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 17:06:45.986599   27433 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-929592
	I0914 17:06:45.986616   27433 round_trippers.go:469] Request Headers:
	I0914 17:06:45.986624   27433 round_trippers.go:473]     Accept: application/json, */*
	I0914 17:06:45.986627   27433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 17:06:45.989772   27433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 17:06:45.990261   27433 pod_ready.go:93] pod "etcd-ha-929592" in "kube-system" namespace has status "Ready":"True"
	I0914 17:06:45.990277   27433 pod_ready.go:82] duration metric: took 7.295414ms for pod "etcd-ha-929592" in "kube-system" namespace to be "Ready" ...
	I0914 17:06:45.990290   27433 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-929592-m02" in "kube-system" namespace to be "Ready" ...
	I0914 17:06:45.990343   27433 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/namespaces/kube-system/pods/etcd-ha-929592-m02
	I0914 17:06:45.990350   27433 round_trippers.go:469] Request Headers:
	I0914 17:06:45.990365   27433 round_trippers.go:473]     Accept: application/json, */*
	I0914 17:06:45.990372   27433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 17:06:45.993331   27433 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 17:06:45.993937   27433 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-929592-m02
	I0914 17:06:45.993954   27433 round_trippers.go:469] Request Headers:
	I0914 17:06:45.993962   27433 round_trippers.go:473]     Accept: application/json, */*
	I0914 17:06:45.993966   27433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 17:06:45.996680   27433 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 17:06:45.997261   27433 pod_ready.go:93] pod "etcd-ha-929592-m02" in "kube-system" namespace has status "Ready":"True"
	I0914 17:06:45.997278   27433 pod_ready.go:82] duration metric: took 6.982458ms for pod "etcd-ha-929592-m02" in "kube-system" namespace to be "Ready" ...
	I0914 17:06:45.997291   27433 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-929592" in "kube-system" namespace to be "Ready" ...
	I0914 17:06:46.145678   27433 request.go:632] Waited for 148.305068ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.54:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-929592
	I0914 17:06:46.145735   27433 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-929592
	I0914 17:06:46.145740   27433 round_trippers.go:469] Request Headers:
	I0914 17:06:46.145747   27433 round_trippers.go:473]     Accept: application/json, */*
	I0914 17:06:46.145751   27433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 17:06:46.149090   27433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 17:06:46.346002   27433 request.go:632] Waited for 196.36158ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.54:8443/api/v1/nodes/ha-929592
	I0914 17:06:46.346068   27433 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-929592
	I0914 17:06:46.346074   27433 round_trippers.go:469] Request Headers:
	I0914 17:06:46.346081   27433 round_trippers.go:473]     Accept: application/json, */*
	I0914 17:06:46.346086   27433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 17:06:46.349259   27433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 17:06:46.349868   27433 pod_ready.go:93] pod "kube-apiserver-ha-929592" in "kube-system" namespace has status "Ready":"True"
	I0914 17:06:46.349892   27433 pod_ready.go:82] duration metric: took 352.59431ms for pod "kube-apiserver-ha-929592" in "kube-system" namespace to be "Ready" ...
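
The "Waited ... due to client-side throttling" lines come from client-go's default rate limiter: with QPS and Burst left at 0 in the rest.Config shown earlier, the client falls back to 5 QPS with a burst of 10, so a burst of status checks gets spaced out on the client side. A short sketch of raising those limits on a rest.Config; the values and kubeconfig path are illustrative, not a change minikube makes.

package main

import (
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Hypothetical kubeconfig path.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	// Left at 0, client-go uses its conservative defaults (5 QPS, burst 10),
	// which is what produces the client-side throttling waits in the log.
	cfg.QPS = 50
	cfg.Burst = 100

	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	fmt.Printf("client ready: %T\n", client)
}
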
	I0914 17:06:46.349905   27433 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-929592-m02" in "kube-system" namespace to be "Ready" ...
	I0914 17:06:46.545900   27433 request.go:632] Waited for 195.922909ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.54:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-929592-m02
	I0914 17:06:46.545976   27433 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-929592-m02
	I0914 17:06:46.545984   27433 round_trippers.go:469] Request Headers:
	I0914 17:06:46.545991   27433 round_trippers.go:473]     Accept: application/json, */*
	I0914 17:06:46.545997   27433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 17:06:46.549133   27433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 17:06:46.746357   27433 request.go:632] Waited for 196.373892ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.54:8443/api/v1/nodes/ha-929592-m02
	I0914 17:06:46.746413   27433 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-929592-m02
	I0914 17:06:46.746431   27433 round_trippers.go:469] Request Headers:
	I0914 17:06:46.746439   27433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 17:06:46.746445   27433 round_trippers.go:473]     Accept: application/json, */*
	I0914 17:06:46.749770   27433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 17:06:46.750286   27433 pod_ready.go:93] pod "kube-apiserver-ha-929592-m02" in "kube-system" namespace has status "Ready":"True"
	I0914 17:06:46.750330   27433 pod_ready.go:82] duration metric: took 400.417297ms for pod "kube-apiserver-ha-929592-m02" in "kube-system" namespace to be "Ready" ...
	I0914 17:06:46.750343   27433 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-929592" in "kube-system" namespace to be "Ready" ...
	I0914 17:06:46.946421   27433 request.go:632] Waited for 196.010926ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.54:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-929592
	I0914 17:06:46.946536   27433 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-929592
	I0914 17:06:46.946547   27433 round_trippers.go:469] Request Headers:
	I0914 17:06:46.946558   27433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 17:06:46.946564   27433 round_trippers.go:473]     Accept: application/json, */*
	I0914 17:06:46.950460   27433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 17:06:47.146420   27433 request.go:632] Waited for 195.341813ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.54:8443/api/v1/nodes/ha-929592
	I0914 17:06:47.146484   27433 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-929592
	I0914 17:06:47.146508   27433 round_trippers.go:469] Request Headers:
	I0914 17:06:47.146521   27433 round_trippers.go:473]     Accept: application/json, */*
	I0914 17:06:47.146532   27433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 17:06:47.150451   27433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 17:06:47.150991   27433 pod_ready.go:93] pod "kube-controller-manager-ha-929592" in "kube-system" namespace has status "Ready":"True"
	I0914 17:06:47.151009   27433 pod_ready.go:82] duration metric: took 400.660338ms for pod "kube-controller-manager-ha-929592" in "kube-system" namespace to be "Ready" ...
	I0914 17:06:47.151018   27433 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-929592-m02" in "kube-system" namespace to be "Ready" ...
	I0914 17:06:47.346097   27433 request.go:632] Waited for 195.00805ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.54:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-929592-m02
	I0914 17:06:47.346151   27433 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-929592-m02
	I0914 17:06:47.346177   27433 round_trippers.go:469] Request Headers:
	I0914 17:06:47.346188   27433 round_trippers.go:473]     Accept: application/json, */*
	I0914 17:06:47.346213   27433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 17:06:47.350098   27433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 17:06:47.546350   27433 request.go:632] Waited for 195.435197ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.54:8443/api/v1/nodes/ha-929592-m02
	I0914 17:06:47.546414   27433 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-929592-m02
	I0914 17:06:47.546421   27433 round_trippers.go:469] Request Headers:
	I0914 17:06:47.546430   27433 round_trippers.go:473]     Accept: application/json, */*
	I0914 17:06:47.546434   27433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 17:06:47.550244   27433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 17:06:47.550787   27433 pod_ready.go:93] pod "kube-controller-manager-ha-929592-m02" in "kube-system" namespace has status "Ready":"True"
	I0914 17:06:47.550809   27433 pod_ready.go:82] duration metric: took 399.783639ms for pod "kube-controller-manager-ha-929592-m02" in "kube-system" namespace to be "Ready" ...
	I0914 17:06:47.550822   27433 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-6zqmd" in "kube-system" namespace to be "Ready" ...
	I0914 17:06:47.745770   27433 request.go:632] Waited for 194.872367ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.54:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6zqmd
	I0914 17:06:47.745867   27433 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6zqmd
	I0914 17:06:47.745875   27433 round_trippers.go:469] Request Headers:
	I0914 17:06:47.745886   27433 round_trippers.go:473]     Accept: application/json, */*
	I0914 17:06:47.745894   27433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 17:06:47.751396   27433 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0914 17:06:47.946402   27433 request.go:632] Waited for 194.394241ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.54:8443/api/v1/nodes/ha-929592
	I0914 17:06:47.946466   27433 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-929592
	I0914 17:06:47.946474   27433 round_trippers.go:469] Request Headers:
	I0914 17:06:47.946483   27433 round_trippers.go:473]     Accept: application/json, */*
	I0914 17:06:47.946489   27433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 17:06:47.950180   27433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 17:06:47.950824   27433 pod_ready.go:93] pod "kube-proxy-6zqmd" in "kube-system" namespace has status "Ready":"True"
	I0914 17:06:47.950847   27433 pod_ready.go:82] duration metric: took 400.017562ms for pod "kube-proxy-6zqmd" in "kube-system" namespace to be "Ready" ...
	I0914 17:06:47.950862   27433 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-bcfkb" in "kube-system" namespace to be "Ready" ...
	I0914 17:06:48.145816   27433 request.go:632] Waited for 194.86879ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.54:8443/api/v1/namespaces/kube-system/pods/kube-proxy-bcfkb
	I0914 17:06:48.145884   27433 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/namespaces/kube-system/pods/kube-proxy-bcfkb
	I0914 17:06:48.145892   27433 round_trippers.go:469] Request Headers:
	I0914 17:06:48.145902   27433 round_trippers.go:473]     Accept: application/json, */*
	I0914 17:06:48.145909   27433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 17:06:48.149564   27433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 17:06:48.345823   27433 request.go:632] Waited for 195.354267ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.54:8443/api/v1/nodes/ha-929592-m02
	I0914 17:06:48.345906   27433 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-929592-m02
	I0914 17:06:48.345915   27433 round_trippers.go:469] Request Headers:
	I0914 17:06:48.345926   27433 round_trippers.go:473]     Accept: application/json, */*
	I0914 17:06:48.345934   27433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 17:06:48.349290   27433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 17:06:48.349859   27433 pod_ready.go:93] pod "kube-proxy-bcfkb" in "kube-system" namespace has status "Ready":"True"
	I0914 17:06:48.349882   27433 pod_ready.go:82] duration metric: took 399.010862ms for pod "kube-proxy-bcfkb" in "kube-system" namespace to be "Ready" ...
	I0914 17:06:48.349895   27433 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-929592" in "kube-system" namespace to be "Ready" ...
	I0914 17:06:48.545948   27433 request.go:632] Waited for 195.969543ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.54:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-929592
	I0914 17:06:48.546065   27433 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-929592
	I0914 17:06:48.546078   27433 round_trippers.go:469] Request Headers:
	I0914 17:06:48.546096   27433 round_trippers.go:473]     Accept: application/json, */*
	I0914 17:06:48.546105   27433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 17:06:48.550543   27433 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0914 17:06:48.745476   27433 request.go:632] Waited for 194.30038ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.54:8443/api/v1/nodes/ha-929592
	I0914 17:06:48.745563   27433 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-929592
	I0914 17:06:48.745572   27433 round_trippers.go:469] Request Headers:
	I0914 17:06:48.745587   27433 round_trippers.go:473]     Accept: application/json, */*
	I0914 17:06:48.745597   27433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 17:06:48.748682   27433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 17:06:48.749284   27433 pod_ready.go:93] pod "kube-scheduler-ha-929592" in "kube-system" namespace has status "Ready":"True"
	I0914 17:06:48.749319   27433 pod_ready.go:82] duration metric: took 399.412284ms for pod "kube-scheduler-ha-929592" in "kube-system" namespace to be "Ready" ...
	I0914 17:06:48.749333   27433 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-929592-m02" in "kube-system" namespace to be "Ready" ...
	I0914 17:06:48.946336   27433 request.go:632] Waited for 196.916046ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.54:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-929592-m02
	I0914 17:06:48.946388   27433 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-929592-m02
	I0914 17:06:48.946393   27433 round_trippers.go:469] Request Headers:
	I0914 17:06:48.946401   27433 round_trippers.go:473]     Accept: application/json, */*
	I0914 17:06:48.946406   27433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 17:06:48.950272   27433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 17:06:49.146231   27433 request.go:632] Waited for 195.356604ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.54:8443/api/v1/nodes/ha-929592-m02
	I0914 17:06:49.146295   27433 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-929592-m02
	I0914 17:06:49.146302   27433 round_trippers.go:469] Request Headers:
	I0914 17:06:49.146313   27433 round_trippers.go:473]     Accept: application/json, */*
	I0914 17:06:49.146318   27433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 17:06:49.149605   27433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 17:06:49.150177   27433 pod_ready.go:93] pod "kube-scheduler-ha-929592-m02" in "kube-system" namespace has status "Ready":"True"
	I0914 17:06:49.150197   27433 pod_ready.go:82] duration metric: took 400.852186ms for pod "kube-scheduler-ha-929592-m02" in "kube-system" namespace to be "Ready" ...
	I0914 17:06:49.150210   27433 pod_ready.go:39] duration metric: took 3.197122081s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
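The wait loop above polls each system-critical pod until its Ready condition reports True, pacing the GETs against the pod and its node under client-side throttling. A minimal client-go sketch of that pattern follows; it is not minikube's pod_ready.go, and the namespace, pod name, poll interval and kubeconfig path are assumptions taken from or inspired by the log above.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitPodReady polls until the pod's Ready condition is True or the timeout expires.
func waitPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(ctx, 2*time.Second, timeout, true, func(ctx context.Context) (bool, error) {
		pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return false, nil // treat transient errors as "not ready yet" and keep polling
		}
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	// Pod name copied from the log above; 6m0s mirrors the wait budget it reports.
	if err := waitPodReady(context.Background(), cs, "kube-system", "kube-apiserver-ha-929592-m02", 6*time.Minute); err != nil {
		panic(err)
	}
	fmt.Println("pod is Ready")
}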
	I0914 17:06:49.150234   27433 api_server.go:52] waiting for apiserver process to appear ...
	I0914 17:06:49.150301   27433 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 17:06:49.168129   27433 api_server.go:72] duration metric: took 21.025118313s to wait for apiserver process to appear ...
	I0914 17:06:49.168155   27433 api_server.go:88] waiting for apiserver healthz status ...
	I0914 17:06:49.168188   27433 api_server.go:253] Checking apiserver healthz at https://192.168.39.54:8443/healthz ...
	I0914 17:06:49.174137   27433 api_server.go:279] https://192.168.39.54:8443/healthz returned 200:
	ok
	I0914 17:06:49.174234   27433 round_trippers.go:463] GET https://192.168.39.54:8443/version
	I0914 17:06:49.174243   27433 round_trippers.go:469] Request Headers:
	I0914 17:06:49.174251   27433 round_trippers.go:473]     Accept: application/json, */*
	I0914 17:06:49.174256   27433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 17:06:49.175044   27433 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0914 17:06:49.175141   27433 api_server.go:141] control plane version: v1.31.1
	I0914 17:06:49.175162   27433 api_server.go:131] duration metric: took 6.99529ms to wait for apiserver health ...
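Once the kube-apiserver process is found, health is confirmed with plain HTTPS GETs against /healthz and /version, as logged above. A rough sketch of such a probe; the URL comes from the log, while skipping TLS verification is an assumption for brevity (a real check would trust the cluster CA and present client credentials).

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Illustration only: a production probe should load the cluster CA instead of skipping verification.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://192.168.39.54:8443/healthz")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("healthz: %d %s\n", resp.StatusCode, string(body)) // a healthy apiserver answers 200 "ok"
}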
	I0914 17:06:49.175174   27433 system_pods.go:43] waiting for kube-system pods to appear ...
	I0914 17:06:49.345500   27433 request.go:632] Waited for 170.24343ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.54:8443/api/v1/namespaces/kube-system/pods
	I0914 17:06:49.345594   27433 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/namespaces/kube-system/pods
	I0914 17:06:49.345606   27433 round_trippers.go:469] Request Headers:
	I0914 17:06:49.345618   27433 round_trippers.go:473]     Accept: application/json, */*
	I0914 17:06:49.345627   27433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 17:06:49.350636   27433 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0914 17:06:49.356629   27433 system_pods.go:59] 17 kube-system pods found
	I0914 17:06:49.356665   27433 system_pods.go:61] "coredns-7c65d6cfc9-66txm" [abf3ed52-ab5a-4415-a8a9-78e567d60348] Running
	I0914 17:06:49.356671   27433 system_pods.go:61] "coredns-7c65d6cfc9-dpdz4" [2a751c8d-890c-402e-846f-8f61e3fd1965] Running
	I0914 17:06:49.356675   27433 system_pods.go:61] "etcd-ha-929592" [44b8df66-0b5f-4b5b-a901-92161d29df28] Running
	I0914 17:06:49.356678   27433 system_pods.go:61] "etcd-ha-929592-m02" [fe6343ec-40b1-4808-8902-041b935081bf] Running
	I0914 17:06:49.356682   27433 system_pods.go:61] "kindnet-fw757" [51a38d95-fd50-4c05-a75d-a3dfeae127bd] Running
	I0914 17:06:49.356686   27433 system_pods.go:61] "kindnet-tnjsl" [ec9f109d-14b3-4e4d-9530-4ae493984cc5] Running
	I0914 17:06:49.356689   27433 system_pods.go:61] "kube-apiserver-ha-929592" [fe3e7895-32dc-4542-879c-9bb609604c69] Running
	I0914 17:06:49.356693   27433 system_pods.go:61] "kube-apiserver-ha-929592-m02" [4544a586-c111-4461-8f25-a3843da19bfb] Running
	I0914 17:06:49.356696   27433 system_pods.go:61] "kube-controller-manager-ha-929592" [12a2c768-5d90-4036-aff7-d80da243c602] Running
	I0914 17:06:49.356699   27433 system_pods.go:61] "kube-controller-manager-ha-929592-m02" [bb5d3040-c09e-4eb6-94a3-4bdb34e4e658] Running
	I0914 17:06:49.356702   27433 system_pods.go:61] "kube-proxy-6zqmd" [b7beddc8-ce6a-44ed-b3e8-423baf620bbb] Running
	I0914 17:06:49.356705   27433 system_pods.go:61] "kube-proxy-bcfkb" [f2ed6784-8935-4b20-9321-650ffb8dacda] Running
	I0914 17:06:49.356709   27433 system_pods.go:61] "kube-scheduler-ha-929592" [02b347db-39cc-49d5-a736-05957f446708] Running
	I0914 17:06:49.356711   27433 system_pods.go:61] "kube-scheduler-ha-929592-m02" [a5dde5dc-208f-47c3-903f-ce811cb58f56] Running
	I0914 17:06:49.356714   27433 system_pods.go:61] "kube-vip-ha-929592" [8bec83fe-1516-467a-9575-3c55dbcbda23] Running
	I0914 17:06:49.356717   27433 system_pods.go:61] "kube-vip-ha-929592-m02" [852625cb-9e2b-4a4f-9471-80d275a6697b] Running
	I0914 17:06:49.356720   27433 system_pods.go:61] "storage-provisioner" [4f486484-9641-4e23-8bc9-4dcae57b621a] Running
	I0914 17:06:49.356725   27433 system_pods.go:74] duration metric: took 181.542581ms to wait for pod list to return data ...
	I0914 17:06:49.356734   27433 default_sa.go:34] waiting for default service account to be created ...
	I0914 17:06:49.546151   27433 request.go:632] Waited for 189.322413ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.54:8443/api/v1/namespaces/default/serviceaccounts
	I0914 17:06:49.546248   27433 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/namespaces/default/serviceaccounts
	I0914 17:06:49.546257   27433 round_trippers.go:469] Request Headers:
	I0914 17:06:49.546271   27433 round_trippers.go:473]     Accept: application/json, */*
	I0914 17:06:49.546282   27433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 17:06:49.549850   27433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 17:06:49.550069   27433 default_sa.go:45] found service account: "default"
	I0914 17:06:49.550087   27433 default_sa.go:55] duration metric: took 193.346862ms for default service account to be created ...
	I0914 17:06:49.550098   27433 system_pods.go:116] waiting for k8s-apps to be running ...
	I0914 17:06:49.745487   27433 request.go:632] Waited for 195.316949ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.54:8443/api/v1/namespaces/kube-system/pods
	I0914 17:06:49.745564   27433 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/namespaces/kube-system/pods
	I0914 17:06:49.745570   27433 round_trippers.go:469] Request Headers:
	I0914 17:06:49.745577   27433 round_trippers.go:473]     Accept: application/json, */*
	I0914 17:06:49.745582   27433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 17:06:49.750700   27433 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0914 17:06:49.755500   27433 system_pods.go:86] 17 kube-system pods found
	I0914 17:06:49.755544   27433 system_pods.go:89] "coredns-7c65d6cfc9-66txm" [abf3ed52-ab5a-4415-a8a9-78e567d60348] Running
	I0914 17:06:49.755553   27433 system_pods.go:89] "coredns-7c65d6cfc9-dpdz4" [2a751c8d-890c-402e-846f-8f61e3fd1965] Running
	I0914 17:06:49.755560   27433 system_pods.go:89] "etcd-ha-929592" [44b8df66-0b5f-4b5b-a901-92161d29df28] Running
	I0914 17:06:49.755565   27433 system_pods.go:89] "etcd-ha-929592-m02" [fe6343ec-40b1-4808-8902-041b935081bf] Running
	I0914 17:06:49.755570   27433 system_pods.go:89] "kindnet-fw757" [51a38d95-fd50-4c05-a75d-a3dfeae127bd] Running
	I0914 17:06:49.755576   27433 system_pods.go:89] "kindnet-tnjsl" [ec9f109d-14b3-4e4d-9530-4ae493984cc5] Running
	I0914 17:06:49.755583   27433 system_pods.go:89] "kube-apiserver-ha-929592" [fe3e7895-32dc-4542-879c-9bb609604c69] Running
	I0914 17:06:49.755589   27433 system_pods.go:89] "kube-apiserver-ha-929592-m02" [4544a586-c111-4461-8f25-a3843da19bfb] Running
	I0914 17:06:49.755595   27433 system_pods.go:89] "kube-controller-manager-ha-929592" [12a2c768-5d90-4036-aff7-d80da243c602] Running
	I0914 17:06:49.755602   27433 system_pods.go:89] "kube-controller-manager-ha-929592-m02" [bb5d3040-c09e-4eb6-94a3-4bdb34e4e658] Running
	I0914 17:06:49.755608   27433 system_pods.go:89] "kube-proxy-6zqmd" [b7beddc8-ce6a-44ed-b3e8-423baf620bbb] Running
	I0914 17:06:49.755614   27433 system_pods.go:89] "kube-proxy-bcfkb" [f2ed6784-8935-4b20-9321-650ffb8dacda] Running
	I0914 17:06:49.755623   27433 system_pods.go:89] "kube-scheduler-ha-929592" [02b347db-39cc-49d5-a736-05957f446708] Running
	I0914 17:06:49.755630   27433 system_pods.go:89] "kube-scheduler-ha-929592-m02" [a5dde5dc-208f-47c3-903f-ce811cb58f56] Running
	I0914 17:06:49.755635   27433 system_pods.go:89] "kube-vip-ha-929592" [8bec83fe-1516-467a-9575-3c55dbcbda23] Running
	I0914 17:06:49.755644   27433 system_pods.go:89] "kube-vip-ha-929592-m02" [852625cb-9e2b-4a4f-9471-80d275a6697b] Running
	I0914 17:06:49.755652   27433 system_pods.go:89] "storage-provisioner" [4f486484-9641-4e23-8bc9-4dcae57b621a] Running
	I0914 17:06:49.755663   27433 system_pods.go:126] duration metric: took 205.557487ms to wait for k8s-apps to be running ...
	I0914 17:06:49.755684   27433 system_svc.go:44] waiting for kubelet service to be running ....
	I0914 17:06:49.755743   27433 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 17:06:49.776244   27433 system_svc.go:56] duration metric: took 20.525134ms WaitForService to wait for kubelet
	I0914 17:06:49.776289   27433 kubeadm.go:582] duration metric: took 21.633280125s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0914 17:06:49.776315   27433 node_conditions.go:102] verifying NodePressure condition ...
	I0914 17:06:49.945798   27433 request.go:632] Waited for 169.394423ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.54:8443/api/v1/nodes
	I0914 17:06:49.945879   27433 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes
	I0914 17:06:49.945887   27433 round_trippers.go:469] Request Headers:
	I0914 17:06:49.945897   27433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 17:06:49.945905   27433 round_trippers.go:473]     Accept: application/json, */*
	I0914 17:06:49.950712   27433 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0914 17:06:49.951592   27433 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0914 17:06:49.951629   27433 node_conditions.go:123] node cpu capacity is 2
	I0914 17:06:49.951650   27433 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0914 17:06:49.951653   27433 node_conditions.go:123] node cpu capacity is 2
	I0914 17:06:49.951658   27433 node_conditions.go:105] duration metric: took 175.335321ms to run NodePressure ...
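The NodePressure step lists the cluster nodes and reads their reported capacity (ephemeral storage and CPU above). A short client-go sketch of the same read; the kubeconfig path and the choice of fields are assumptions, not the node_conditions.go implementation.

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		fmt.Printf("%s: ephemeral storage %s, cpu %s\n", n.Name, storage.String(), cpu.String())
	}
}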
	I0914 17:06:49.951669   27433 start.go:241] waiting for startup goroutines ...
	I0914 17:06:49.951696   27433 start.go:255] writing updated cluster config ...
	I0914 17:06:49.953949   27433 out.go:201] 
	I0914 17:06:49.955877   27433 config.go:182] Loaded profile config "ha-929592": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0914 17:06:49.956002   27433 profile.go:143] Saving config to /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/ha-929592/config.json ...
	I0914 17:06:49.957813   27433 out.go:177] * Starting "ha-929592-m03" control-plane node in "ha-929592" cluster
	I0914 17:06:49.959068   27433 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0914 17:06:49.959099   27433 cache.go:56] Caching tarball of preloaded images
	I0914 17:06:49.959215   27433 preload.go:172] Found /home/jenkins/minikube-integration/19643-8806/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0914 17:06:49.959228   27433 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0914 17:06:49.959357   27433 profile.go:143] Saving config to /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/ha-929592/config.json ...
	I0914 17:06:49.959556   27433 start.go:360] acquireMachinesLock for ha-929592-m03: {Name:mk26748ac63472c3f8b6f3848c12e76160c2970c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0914 17:06:49.959616   27433 start.go:364] duration metric: took 37.328µs to acquireMachinesLock for "ha-929592-m03"
	I0914 17:06:49.959640   27433 start.go:93] Provisioning new machine with config: &{Name:ha-929592 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19643/minikube-v1.34.0-1726281733-19643-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726281268-19643@sha256:cce8be0c1ac4e3d852132008ef1cc1dcf5b79f708d025db83f146ae65db32e8e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
sVersion:v1.31.1 ClusterName:ha-929592 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.54 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.148 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-d
ns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror:
DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0914 17:06:49.959751   27433 start.go:125] createHost starting for "m03" (driver="kvm2")
	I0914 17:06:49.961439   27433 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0914 17:06:49.961570   27433 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 17:06:49.961615   27433 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 17:06:49.977719   27433 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33881
	I0914 17:06:49.978311   27433 main.go:141] libmachine: () Calling .GetVersion
	I0914 17:06:49.978858   27433 main.go:141] libmachine: Using API Version  1
	I0914 17:06:49.978877   27433 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 17:06:49.979166   27433 main.go:141] libmachine: () Calling .GetMachineName
	I0914 17:06:49.979367   27433 main.go:141] libmachine: (ha-929592-m03) Calling .GetMachineName
	I0914 17:06:49.979530   27433 main.go:141] libmachine: (ha-929592-m03) Calling .DriverName
	I0914 17:06:49.979697   27433 start.go:159] libmachine.API.Create for "ha-929592" (driver="kvm2")
	I0914 17:06:49.979724   27433 client.go:168] LocalClient.Create starting
	I0914 17:06:49.979757   27433 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca.pem
	I0914 17:06:49.979794   27433 main.go:141] libmachine: Decoding PEM data...
	I0914 17:06:49.979808   27433 main.go:141] libmachine: Parsing certificate...
	I0914 17:06:49.979856   27433 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19643-8806/.minikube/certs/cert.pem
	I0914 17:06:49.979874   27433 main.go:141] libmachine: Decoding PEM data...
	I0914 17:06:49.979897   27433 main.go:141] libmachine: Parsing certificate...
	I0914 17:06:49.979913   27433 main.go:141] libmachine: Running pre-create checks...
	I0914 17:06:49.979920   27433 main.go:141] libmachine: (ha-929592-m03) Calling .PreCreateCheck
	I0914 17:06:49.980055   27433 main.go:141] libmachine: (ha-929592-m03) Calling .GetConfigRaw
	I0914 17:06:49.980434   27433 main.go:141] libmachine: Creating machine...
	I0914 17:06:49.980448   27433 main.go:141] libmachine: (ha-929592-m03) Calling .Create
	I0914 17:06:49.980624   27433 main.go:141] libmachine: (ha-929592-m03) Creating KVM machine...
	I0914 17:06:49.982264   27433 main.go:141] libmachine: (ha-929592-m03) DBG | found existing default KVM network
	I0914 17:06:49.982455   27433 main.go:141] libmachine: (ha-929592-m03) DBG | found existing private KVM network mk-ha-929592
	I0914 17:06:49.982685   27433 main.go:141] libmachine: (ha-929592-m03) Setting up store path in /home/jenkins/minikube-integration/19643-8806/.minikube/machines/ha-929592-m03 ...
	I0914 17:06:49.982713   27433 main.go:141] libmachine: (ha-929592-m03) Building disk image from file:///home/jenkins/minikube-integration/19643-8806/.minikube/cache/iso/amd64/minikube-v1.34.0-1726281733-19643-amd64.iso
	I0914 17:06:49.982795   27433 main.go:141] libmachine: (ha-929592-m03) DBG | I0914 17:06:49.982674   28182 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19643-8806/.minikube
	I0914 17:06:49.982892   27433 main.go:141] libmachine: (ha-929592-m03) Downloading /home/jenkins/minikube-integration/19643-8806/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19643-8806/.minikube/cache/iso/amd64/minikube-v1.34.0-1726281733-19643-amd64.iso...
	I0914 17:06:50.221371   27433 main.go:141] libmachine: (ha-929592-m03) DBG | I0914 17:06:50.221237   28182 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19643-8806/.minikube/machines/ha-929592-m03/id_rsa...
	I0914 17:06:50.314576   27433 main.go:141] libmachine: (ha-929592-m03) DBG | I0914 17:06:50.314467   28182 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19643-8806/.minikube/machines/ha-929592-m03/ha-929592-m03.rawdisk...
	I0914 17:06:50.314603   27433 main.go:141] libmachine: (ha-929592-m03) DBG | Writing magic tar header
	I0914 17:06:50.314615   27433 main.go:141] libmachine: (ha-929592-m03) DBG | Writing SSH key tar header
	I0914 17:06:50.314623   27433 main.go:141] libmachine: (ha-929592-m03) DBG | I0914 17:06:50.314588   28182 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19643-8806/.minikube/machines/ha-929592-m03 ...
	I0914 17:06:50.314739   27433 main.go:141] libmachine: (ha-929592-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19643-8806/.minikube/machines/ha-929592-m03
	I0914 17:06:50.314763   27433 main.go:141] libmachine: (ha-929592-m03) Setting executable bit set on /home/jenkins/minikube-integration/19643-8806/.minikube/machines/ha-929592-m03 (perms=drwx------)
	I0914 17:06:50.314777   27433 main.go:141] libmachine: (ha-929592-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19643-8806/.minikube/machines
	I0914 17:06:50.314793   27433 main.go:141] libmachine: (ha-929592-m03) Setting executable bit set on /home/jenkins/minikube-integration/19643-8806/.minikube/machines (perms=drwxr-xr-x)
	I0914 17:06:50.314811   27433 main.go:141] libmachine: (ha-929592-m03) Setting executable bit set on /home/jenkins/minikube-integration/19643-8806/.minikube (perms=drwxr-xr-x)
	I0914 17:06:50.314826   27433 main.go:141] libmachine: (ha-929592-m03) Setting executable bit set on /home/jenkins/minikube-integration/19643-8806 (perms=drwxrwxr-x)
	I0914 17:06:50.314888   27433 main.go:141] libmachine: (ha-929592-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0914 17:06:50.314913   27433 main.go:141] libmachine: (ha-929592-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19643-8806/.minikube
	I0914 17:06:50.314923   27433 main.go:141] libmachine: (ha-929592-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0914 17:06:50.314949   27433 main.go:141] libmachine: (ha-929592-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19643-8806
	I0914 17:06:50.314970   27433 main.go:141] libmachine: (ha-929592-m03) Creating domain...
	I0914 17:06:50.314981   27433 main.go:141] libmachine: (ha-929592-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0914 17:06:50.314998   27433 main.go:141] libmachine: (ha-929592-m03) DBG | Checking permissions on dir: /home/jenkins
	I0914 17:06:50.315018   27433 main.go:141] libmachine: (ha-929592-m03) DBG | Checking permissions on dir: /home
	I0914 17:06:50.315033   27433 main.go:141] libmachine: (ha-929592-m03) DBG | Skipping /home - not owner
	I0914 17:06:50.315929   27433 main.go:141] libmachine: (ha-929592-m03) define libvirt domain using xml: 
	I0914 17:06:50.315943   27433 main.go:141] libmachine: (ha-929592-m03) <domain type='kvm'>
	I0914 17:06:50.315952   27433 main.go:141] libmachine: (ha-929592-m03)   <name>ha-929592-m03</name>
	I0914 17:06:50.315959   27433 main.go:141] libmachine: (ha-929592-m03)   <memory unit='MiB'>2200</memory>
	I0914 17:06:50.315966   27433 main.go:141] libmachine: (ha-929592-m03)   <vcpu>2</vcpu>
	I0914 17:06:50.315972   27433 main.go:141] libmachine: (ha-929592-m03)   <features>
	I0914 17:06:50.315980   27433 main.go:141] libmachine: (ha-929592-m03)     <acpi/>
	I0914 17:06:50.315988   27433 main.go:141] libmachine: (ha-929592-m03)     <apic/>
	I0914 17:06:50.315999   27433 main.go:141] libmachine: (ha-929592-m03)     <pae/>
	I0914 17:06:50.316006   27433 main.go:141] libmachine: (ha-929592-m03)     
	I0914 17:06:50.316017   27433 main.go:141] libmachine: (ha-929592-m03)   </features>
	I0914 17:06:50.316033   27433 main.go:141] libmachine: (ha-929592-m03)   <cpu mode='host-passthrough'>
	I0914 17:06:50.316058   27433 main.go:141] libmachine: (ha-929592-m03)   
	I0914 17:06:50.316093   27433 main.go:141] libmachine: (ha-929592-m03)   </cpu>
	I0914 17:06:50.316102   27433 main.go:141] libmachine: (ha-929592-m03)   <os>
	I0914 17:06:50.316108   27433 main.go:141] libmachine: (ha-929592-m03)     <type>hvm</type>
	I0914 17:06:50.316115   27433 main.go:141] libmachine: (ha-929592-m03)     <boot dev='cdrom'/>
	I0914 17:06:50.316122   27433 main.go:141] libmachine: (ha-929592-m03)     <boot dev='hd'/>
	I0914 17:06:50.316131   27433 main.go:141] libmachine: (ha-929592-m03)     <bootmenu enable='no'/>
	I0914 17:06:50.316137   27433 main.go:141] libmachine: (ha-929592-m03)   </os>
	I0914 17:06:50.316145   27433 main.go:141] libmachine: (ha-929592-m03)   <devices>
	I0914 17:06:50.316152   27433 main.go:141] libmachine: (ha-929592-m03)     <disk type='file' device='cdrom'>
	I0914 17:06:50.316164   27433 main.go:141] libmachine: (ha-929592-m03)       <source file='/home/jenkins/minikube-integration/19643-8806/.minikube/machines/ha-929592-m03/boot2docker.iso'/>
	I0914 17:06:50.316176   27433 main.go:141] libmachine: (ha-929592-m03)       <target dev='hdc' bus='scsi'/>
	I0914 17:06:50.316184   27433 main.go:141] libmachine: (ha-929592-m03)       <readonly/>
	I0914 17:06:50.316190   27433 main.go:141] libmachine: (ha-929592-m03)     </disk>
	I0914 17:06:50.316199   27433 main.go:141] libmachine: (ha-929592-m03)     <disk type='file' device='disk'>
	I0914 17:06:50.316208   27433 main.go:141] libmachine: (ha-929592-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0914 17:06:50.316219   27433 main.go:141] libmachine: (ha-929592-m03)       <source file='/home/jenkins/minikube-integration/19643-8806/.minikube/machines/ha-929592-m03/ha-929592-m03.rawdisk'/>
	I0914 17:06:50.316227   27433 main.go:141] libmachine: (ha-929592-m03)       <target dev='hda' bus='virtio'/>
	I0914 17:06:50.316234   27433 main.go:141] libmachine: (ha-929592-m03)     </disk>
	I0914 17:06:50.316241   27433 main.go:141] libmachine: (ha-929592-m03)     <interface type='network'>
	I0914 17:06:50.316257   27433 main.go:141] libmachine: (ha-929592-m03)       <source network='mk-ha-929592'/>
	I0914 17:06:50.316268   27433 main.go:141] libmachine: (ha-929592-m03)       <model type='virtio'/>
	I0914 17:06:50.316281   27433 main.go:141] libmachine: (ha-929592-m03)     </interface>
	I0914 17:06:50.316293   27433 main.go:141] libmachine: (ha-929592-m03)     <interface type='network'>
	I0914 17:06:50.316301   27433 main.go:141] libmachine: (ha-929592-m03)       <source network='default'/>
	I0914 17:06:50.316311   27433 main.go:141] libmachine: (ha-929592-m03)       <model type='virtio'/>
	I0914 17:06:50.316319   27433 main.go:141] libmachine: (ha-929592-m03)     </interface>
	I0914 17:06:50.316326   27433 main.go:141] libmachine: (ha-929592-m03)     <serial type='pty'>
	I0914 17:06:50.316334   27433 main.go:141] libmachine: (ha-929592-m03)       <target port='0'/>
	I0914 17:06:50.316340   27433 main.go:141] libmachine: (ha-929592-m03)     </serial>
	I0914 17:06:50.316349   27433 main.go:141] libmachine: (ha-929592-m03)     <console type='pty'>
	I0914 17:06:50.316356   27433 main.go:141] libmachine: (ha-929592-m03)       <target type='serial' port='0'/>
	I0914 17:06:50.316364   27433 main.go:141] libmachine: (ha-929592-m03)     </console>
	I0914 17:06:50.316373   27433 main.go:141] libmachine: (ha-929592-m03)     <rng model='virtio'>
	I0914 17:06:50.316394   27433 main.go:141] libmachine: (ha-929592-m03)       <backend model='random'>/dev/random</backend>
	I0914 17:06:50.316406   27433 main.go:141] libmachine: (ha-929592-m03)     </rng>
	I0914 17:06:50.316414   27433 main.go:141] libmachine: (ha-929592-m03)     
	I0914 17:06:50.316419   27433 main.go:141] libmachine: (ha-929592-m03)     
	I0914 17:06:50.316427   27433 main.go:141] libmachine: (ha-929592-m03)   </devices>
	I0914 17:06:50.316433   27433 main.go:141] libmachine: (ha-929592-m03) </domain>
	I0914 17:06:50.316443   27433 main.go:141] libmachine: (ha-929592-m03) 
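The XML emitted line by line above is the libvirt domain definition for the new node: 2 vCPUs, 2200 MiB of RAM, the boot2docker ISO attached as a CD-ROM, the raw disk image, and two virtio NICs on the mk-ha-929592 and default networks. A hedged sketch of defining and starting such a domain with the libvirt Go bindings; the abbreviated XML and connection URI are placeholders, not the kvm2 driver's actual code path.

package main

import (
	"fmt"

	"libvirt.org/go/libvirt"
)

func main() {
	conn, err := libvirt.NewConnect("qemu:///system")
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	// Abbreviated stand-in for the full <domain> XML shown in the log above.
	domainXML := `<domain type='kvm'>
  <name>ha-929592-m03</name>
  <memory unit='MiB'>2200</memory>
  <vcpu>2</vcpu>
  <os><type>hvm</type><boot dev='cdrom'/><boot dev='hd'/></os>
  <!-- disks, network interfaces and serial console elided -->
</domain>`

	dom, err := conn.DomainDefineXML(domainXML)
	if err != nil {
		panic(err)
	}
	defer dom.Free()

	if err := dom.Create(); err != nil { // boots the defined domain
		panic(err)
	}
	fmt.Println("domain defined and started")
}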
	I0914 17:06:50.323266   27433 main.go:141] libmachine: (ha-929592-m03) DBG | domain ha-929592-m03 has defined MAC address 52:54:00:e5:cc:6e in network default
	I0914 17:06:50.323896   27433 main.go:141] libmachine: (ha-929592-m03) Ensuring networks are active...
	I0914 17:06:50.323918   27433 main.go:141] libmachine: (ha-929592-m03) DBG | domain ha-929592-m03 has defined MAC address 52:54:00:49:df:f1 in network mk-ha-929592
	I0914 17:06:50.324700   27433 main.go:141] libmachine: (ha-929592-m03) Ensuring network default is active
	I0914 17:06:50.324980   27433 main.go:141] libmachine: (ha-929592-m03) Ensuring network mk-ha-929592 is active
	I0914 17:06:50.325386   27433 main.go:141] libmachine: (ha-929592-m03) Getting domain xml...
	I0914 17:06:50.326282   27433 main.go:141] libmachine: (ha-929592-m03) Creating domain...
	I0914 17:06:51.593541   27433 main.go:141] libmachine: (ha-929592-m03) Waiting to get IP...
	I0914 17:06:51.594409   27433 main.go:141] libmachine: (ha-929592-m03) DBG | domain ha-929592-m03 has defined MAC address 52:54:00:49:df:f1 in network mk-ha-929592
	I0914 17:06:51.594884   27433 main.go:141] libmachine: (ha-929592-m03) DBG | unable to find current IP address of domain ha-929592-m03 in network mk-ha-929592
	I0914 17:06:51.594904   27433 main.go:141] libmachine: (ha-929592-m03) DBG | I0914 17:06:51.594870   28182 retry.go:31] will retry after 200.838126ms: waiting for machine to come up
	I0914 17:06:51.797364   27433 main.go:141] libmachine: (ha-929592-m03) DBG | domain ha-929592-m03 has defined MAC address 52:54:00:49:df:f1 in network mk-ha-929592
	I0914 17:06:51.798009   27433 main.go:141] libmachine: (ha-929592-m03) DBG | unable to find current IP address of domain ha-929592-m03 in network mk-ha-929592
	I0914 17:06:51.798034   27433 main.go:141] libmachine: (ha-929592-m03) DBG | I0914 17:06:51.797969   28182 retry.go:31] will retry after 313.647709ms: waiting for machine to come up
	I0914 17:06:52.113496   27433 main.go:141] libmachine: (ha-929592-m03) DBG | domain ha-929592-m03 has defined MAC address 52:54:00:49:df:f1 in network mk-ha-929592
	I0914 17:06:52.113947   27433 main.go:141] libmachine: (ha-929592-m03) DBG | unable to find current IP address of domain ha-929592-m03 in network mk-ha-929592
	I0914 17:06:52.113966   27433 main.go:141] libmachine: (ha-929592-m03) DBG | I0914 17:06:52.113898   28182 retry.go:31] will retry after 439.40481ms: waiting for machine to come up
	I0914 17:06:52.554781   27433 main.go:141] libmachine: (ha-929592-m03) DBG | domain ha-929592-m03 has defined MAC address 52:54:00:49:df:f1 in network mk-ha-929592
	I0914 17:06:52.555216   27433 main.go:141] libmachine: (ha-929592-m03) DBG | unable to find current IP address of domain ha-929592-m03 in network mk-ha-929592
	I0914 17:06:52.555242   27433 main.go:141] libmachine: (ha-929592-m03) DBG | I0914 17:06:52.555170   28182 retry.go:31] will retry after 393.848614ms: waiting for machine to come up
	I0914 17:06:52.950598   27433 main.go:141] libmachine: (ha-929592-m03) DBG | domain ha-929592-m03 has defined MAC address 52:54:00:49:df:f1 in network mk-ha-929592
	I0914 17:06:52.951214   27433 main.go:141] libmachine: (ha-929592-m03) DBG | unable to find current IP address of domain ha-929592-m03 in network mk-ha-929592
	I0914 17:06:52.951231   27433 main.go:141] libmachine: (ha-929592-m03) DBG | I0914 17:06:52.951168   28182 retry.go:31] will retry after 639.308693ms: waiting for machine to come up
	I0914 17:06:53.592100   27433 main.go:141] libmachine: (ha-929592-m03) DBG | domain ha-929592-m03 has defined MAC address 52:54:00:49:df:f1 in network mk-ha-929592
	I0914 17:06:53.592559   27433 main.go:141] libmachine: (ha-929592-m03) DBG | unable to find current IP address of domain ha-929592-m03 in network mk-ha-929592
	I0914 17:06:53.592592   27433 main.go:141] libmachine: (ha-929592-m03) DBG | I0914 17:06:53.592518   28182 retry.go:31] will retry after 835.193764ms: waiting for machine to come up
	I0914 17:06:54.428935   27433 main.go:141] libmachine: (ha-929592-m03) DBG | domain ha-929592-m03 has defined MAC address 52:54:00:49:df:f1 in network mk-ha-929592
	I0914 17:06:54.429451   27433 main.go:141] libmachine: (ha-929592-m03) DBG | unable to find current IP address of domain ha-929592-m03 in network mk-ha-929592
	I0914 17:06:54.429475   27433 main.go:141] libmachine: (ha-929592-m03) DBG | I0914 17:06:54.429380   28182 retry.go:31] will retry after 964.193112ms: waiting for machine to come up
	I0914 17:06:55.395171   27433 main.go:141] libmachine: (ha-929592-m03) DBG | domain ha-929592-m03 has defined MAC address 52:54:00:49:df:f1 in network mk-ha-929592
	I0914 17:06:55.395685   27433 main.go:141] libmachine: (ha-929592-m03) DBG | unable to find current IP address of domain ha-929592-m03 in network mk-ha-929592
	I0914 17:06:55.395709   27433 main.go:141] libmachine: (ha-929592-m03) DBG | I0914 17:06:55.395634   28182 retry.go:31] will retry after 1.437960076s: waiting for machine to come up
	I0914 17:06:56.835169   27433 main.go:141] libmachine: (ha-929592-m03) DBG | domain ha-929592-m03 has defined MAC address 52:54:00:49:df:f1 in network mk-ha-929592
	I0914 17:06:56.835619   27433 main.go:141] libmachine: (ha-929592-m03) DBG | unable to find current IP address of domain ha-929592-m03 in network mk-ha-929592
	I0914 17:06:56.835641   27433 main.go:141] libmachine: (ha-929592-m03) DBG | I0914 17:06:56.835566   28182 retry.go:31] will retry after 1.133546596s: waiting for machine to come up
	I0914 17:06:57.970597   27433 main.go:141] libmachine: (ha-929592-m03) DBG | domain ha-929592-m03 has defined MAC address 52:54:00:49:df:f1 in network mk-ha-929592
	I0914 17:06:57.971032   27433 main.go:141] libmachine: (ha-929592-m03) DBG | unable to find current IP address of domain ha-929592-m03 in network mk-ha-929592
	I0914 17:06:57.971063   27433 main.go:141] libmachine: (ha-929592-m03) DBG | I0914 17:06:57.970987   28182 retry.go:31] will retry after 2.230904983s: waiting for machine to come up
	I0914 17:07:00.204031   27433 main.go:141] libmachine: (ha-929592-m03) DBG | domain ha-929592-m03 has defined MAC address 52:54:00:49:df:f1 in network mk-ha-929592
	I0914 17:07:00.204476   27433 main.go:141] libmachine: (ha-929592-m03) DBG | unable to find current IP address of domain ha-929592-m03 in network mk-ha-929592
	I0914 17:07:00.204520   27433 main.go:141] libmachine: (ha-929592-m03) DBG | I0914 17:07:00.204458   28182 retry.go:31] will retry after 2.124636032s: waiting for machine to come up
	I0914 17:07:02.331821   27433 main.go:141] libmachine: (ha-929592-m03) DBG | domain ha-929592-m03 has defined MAC address 52:54:00:49:df:f1 in network mk-ha-929592
	I0914 17:07:02.332427   27433 main.go:141] libmachine: (ha-929592-m03) DBG | unable to find current IP address of domain ha-929592-m03 in network mk-ha-929592
	I0914 17:07:02.332454   27433 main.go:141] libmachine: (ha-929592-m03) DBG | I0914 17:07:02.332384   28182 retry.go:31] will retry after 2.29694632s: waiting for machine to come up
	I0914 17:07:04.631296   27433 main.go:141] libmachine: (ha-929592-m03) DBG | domain ha-929592-m03 has defined MAC address 52:54:00:49:df:f1 in network mk-ha-929592
	I0914 17:07:04.631779   27433 main.go:141] libmachine: (ha-929592-m03) DBG | unable to find current IP address of domain ha-929592-m03 in network mk-ha-929592
	I0914 17:07:04.631806   27433 main.go:141] libmachine: (ha-929592-m03) DBG | I0914 17:07:04.631744   28182 retry.go:31] will retry after 3.91983763s: waiting for machine to come up
	I0914 17:07:08.555144   27433 main.go:141] libmachine: (ha-929592-m03) DBG | domain ha-929592-m03 has defined MAC address 52:54:00:49:df:f1 in network mk-ha-929592
	I0914 17:07:08.555537   27433 main.go:141] libmachine: (ha-929592-m03) DBG | unable to find current IP address of domain ha-929592-m03 in network mk-ha-929592
	I0914 17:07:08.555559   27433 main.go:141] libmachine: (ha-929592-m03) DBG | I0914 17:07:08.555505   28182 retry.go:31] will retry after 4.766828714s: waiting for machine to come up
	I0914 17:07:13.324664   27433 main.go:141] libmachine: (ha-929592-m03) DBG | domain ha-929592-m03 has defined MAC address 52:54:00:49:df:f1 in network mk-ha-929592
	I0914 17:07:13.325434   27433 main.go:141] libmachine: (ha-929592-m03) Found IP for machine: 192.168.39.39
	I0914 17:07:13.325460   27433 main.go:141] libmachine: (ha-929592-m03) DBG | domain ha-929592-m03 has current primary IP address 192.168.39.39 and MAC address 52:54:00:49:df:f1 in network mk-ha-929592
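While the domain boots, the driver repeatedly looks up the DHCP lease for the VM's MAC address, sleeping a little longer after each miss (the retry.go lines above). A self-contained sketch of that wait-with-growing-backoff pattern; lookupIP is a hypothetical stand-in for the lease lookup, and the intervals are assumptions.

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// lookupIP is a hypothetical placeholder for querying the libvirt network's
// DHCP leases for the machine's MAC address.
func lookupIP(mac string) (string, error) {
	return "", errors.New("no lease yet")
}

func waitForIP(mac string, deadline time.Duration) (string, error) {
	start := time.Now()
	backoff := 200 * time.Millisecond
	for time.Since(start) < deadline {
		if ip, err := lookupIP(mac); err == nil {
			return ip, nil
		}
		// Sleep a jittered, growing interval, much like the "will retry after ..." lines above.
		time.Sleep(backoff + time.Duration(rand.Int63n(int64(backoff))))
		if backoff < 5*time.Second {
			backoff *= 2
		}
	}
	return "", fmt.Errorf("no IP for %s after %s", mac, deadline)
}

func main() {
	ip, err := waitForIP("52:54:00:49:df:f1", 30*time.Second)
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("found IP:", ip)
}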
	I0914 17:07:13.325469   27433 main.go:141] libmachine: (ha-929592-m03) Reserving static IP address...
	I0914 17:07:13.325740   27433 main.go:141] libmachine: (ha-929592-m03) DBG | unable to find host DHCP lease matching {name: "ha-929592-m03", mac: "52:54:00:49:df:f1", ip: "192.168.39.39"} in network mk-ha-929592
	I0914 17:07:13.401574   27433 main.go:141] libmachine: (ha-929592-m03) DBG | Getting to WaitForSSH function...
	I0914 17:07:13.401603   27433 main.go:141] libmachine: (ha-929592-m03) Reserved static IP address: 192.168.39.39
	I0914 17:07:13.401615   27433 main.go:141] libmachine: (ha-929592-m03) Waiting for SSH to be available...
	I0914 17:07:13.404445   27433 main.go:141] libmachine: (ha-929592-m03) DBG | domain ha-929592-m03 has defined MAC address 52:54:00:49:df:f1 in network mk-ha-929592
	I0914 17:07:13.404909   27433 main.go:141] libmachine: (ha-929592-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:df:f1", ip: ""} in network mk-ha-929592: {Iface:virbr1 ExpiryTime:2024-09-14 18:07:04 +0000 UTC Type:0 Mac:52:54:00:49:df:f1 Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:minikube Clientid:01:52:54:00:49:df:f1}
	I0914 17:07:13.404940   27433 main.go:141] libmachine: (ha-929592-m03) DBG | domain ha-929592-m03 has defined IP address 192.168.39.39 and MAC address 52:54:00:49:df:f1 in network mk-ha-929592
	I0914 17:07:13.405056   27433 main.go:141] libmachine: (ha-929592-m03) DBG | Using SSH client type: external
	I0914 17:07:13.405094   27433 main.go:141] libmachine: (ha-929592-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19643-8806/.minikube/machines/ha-929592-m03/id_rsa (-rw-------)
	I0914 17:07:13.405147   27433 main.go:141] libmachine: (ha-929592-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.39 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19643-8806/.minikube/machines/ha-929592-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0914 17:07:13.405170   27433 main.go:141] libmachine: (ha-929592-m03) DBG | About to run SSH command:
	I0914 17:07:13.405224   27433 main.go:141] libmachine: (ha-929592-m03) DBG | exit 0
	I0914 17:07:13.530202   27433 main.go:141] libmachine: (ha-929592-m03) DBG | SSH cmd err, output: <nil>: 
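WaitForSSH shells out to the system ssh binary with the options listed above and runs `exit 0` until the command succeeds. A rough equivalent with os/exec; the user, host and key path are copied from the log, the option list is trimmed, and the 30-attempt loop is an assumption.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// sshReachable returns true when a non-interactive "exit 0" over ssh succeeds.
func sshReachable(user, host, keyPath string) bool {
	cmd := exec.Command("ssh",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "ConnectTimeout=10",
		"-i", keyPath,
		fmt.Sprintf("%s@%s", user, host),
		"exit 0",
	)
	return cmd.Run() == nil
}

func main() {
	key := "/home/jenkins/minikube-integration/19643-8806/.minikube/machines/ha-929592-m03/id_rsa"
	for i := 0; i < 30; i++ {
		if sshReachable("docker", "192.168.39.39", key) {
			fmt.Println("SSH is available")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("gave up waiting for SSH")
}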
	I0914 17:07:13.530466   27433 main.go:141] libmachine: (ha-929592-m03) KVM machine creation complete!
	I0914 17:07:13.530781   27433 main.go:141] libmachine: (ha-929592-m03) Calling .GetConfigRaw
	I0914 17:07:13.531380   27433 main.go:141] libmachine: (ha-929592-m03) Calling .DriverName
	I0914 17:07:13.531612   27433 main.go:141] libmachine: (ha-929592-m03) Calling .DriverName
	I0914 17:07:13.531756   27433 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0914 17:07:13.531768   27433 main.go:141] libmachine: (ha-929592-m03) Calling .GetState
	I0914 17:07:13.533021   27433 main.go:141] libmachine: Detecting operating system of created instance...
	I0914 17:07:13.533034   27433 main.go:141] libmachine: Waiting for SSH to be available...
	I0914 17:07:13.533040   27433 main.go:141] libmachine: Getting to WaitForSSH function...
	I0914 17:07:13.533045   27433 main.go:141] libmachine: (ha-929592-m03) Calling .GetSSHHostname
	I0914 17:07:13.535327   27433 main.go:141] libmachine: (ha-929592-m03) DBG | domain ha-929592-m03 has defined MAC address 52:54:00:49:df:f1 in network mk-ha-929592
	I0914 17:07:13.535730   27433 main.go:141] libmachine: (ha-929592-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:df:f1", ip: ""} in network mk-ha-929592: {Iface:virbr1 ExpiryTime:2024-09-14 18:07:04 +0000 UTC Type:0 Mac:52:54:00:49:df:f1 Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:ha-929592-m03 Clientid:01:52:54:00:49:df:f1}
	I0914 17:07:13.535757   27433 main.go:141] libmachine: (ha-929592-m03) DBG | domain ha-929592-m03 has defined IP address 192.168.39.39 and MAC address 52:54:00:49:df:f1 in network mk-ha-929592
	I0914 17:07:13.535889   27433 main.go:141] libmachine: (ha-929592-m03) Calling .GetSSHPort
	I0914 17:07:13.536046   27433 main.go:141] libmachine: (ha-929592-m03) Calling .GetSSHKeyPath
	I0914 17:07:13.536188   27433 main.go:141] libmachine: (ha-929592-m03) Calling .GetSSHKeyPath
	I0914 17:07:13.536356   27433 main.go:141] libmachine: (ha-929592-m03) Calling .GetSSHUsername
	I0914 17:07:13.536501   27433 main.go:141] libmachine: Using SSH client type: native
	I0914 17:07:13.536699   27433 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.39 22 <nil> <nil>}
	I0914 17:07:13.536709   27433 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0914 17:07:13.641272   27433 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0914 17:07:13.641296   27433 main.go:141] libmachine: Detecting the provisioner...
	I0914 17:07:13.641308   27433 main.go:141] libmachine: (ha-929592-m03) Calling .GetSSHHostname
	I0914 17:07:13.643788   27433 main.go:141] libmachine: (ha-929592-m03) DBG | domain ha-929592-m03 has defined MAC address 52:54:00:49:df:f1 in network mk-ha-929592
	I0914 17:07:13.644117   27433 main.go:141] libmachine: (ha-929592-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:df:f1", ip: ""} in network mk-ha-929592: {Iface:virbr1 ExpiryTime:2024-09-14 18:07:04 +0000 UTC Type:0 Mac:52:54:00:49:df:f1 Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:ha-929592-m03 Clientid:01:52:54:00:49:df:f1}
	I0914 17:07:13.644149   27433 main.go:141] libmachine: (ha-929592-m03) DBG | domain ha-929592-m03 has defined IP address 192.168.39.39 and MAC address 52:54:00:49:df:f1 in network mk-ha-929592
	I0914 17:07:13.644268   27433 main.go:141] libmachine: (ha-929592-m03) Calling .GetSSHPort
	I0914 17:07:13.644457   27433 main.go:141] libmachine: (ha-929592-m03) Calling .GetSSHKeyPath
	I0914 17:07:13.644620   27433 main.go:141] libmachine: (ha-929592-m03) Calling .GetSSHKeyPath
	I0914 17:07:13.644732   27433 main.go:141] libmachine: (ha-929592-m03) Calling .GetSSHUsername
	I0914 17:07:13.645034   27433 main.go:141] libmachine: Using SSH client type: native
	I0914 17:07:13.645191   27433 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.39 22 <nil> <nil>}
	I0914 17:07:13.645202   27433 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0914 17:07:13.750656   27433 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0914 17:07:13.750730   27433 main.go:141] libmachine: found compatible host: buildroot
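Provisioner detection boils down to running "cat /etc/os-release" over SSH and matching the ID field ("buildroot" here). A small sketch of that parse; for simplicity the sample text is fed locally rather than fetched over SSH.

package main

import (
	"bufio"
	"fmt"
	"strings"
)

// osReleaseID extracts the ID= value from an /etc/os-release style document.
func osReleaseID(contents string) string {
	sc := bufio.NewScanner(strings.NewReader(contents))
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		if strings.HasPrefix(line, "ID=") {
			return strings.Trim(strings.TrimPrefix(line, "ID="), `"`)
		}
	}
	return ""
}

func main() {
	// Sample taken from the "cat /etc/os-release" output in the log above.
	sample := "NAME=Buildroot\nVERSION=2023.02.9-dirty\nID=buildroot\nVERSION_ID=2023.02.9\nPRETTY_NAME=\"Buildroot 2023.02.9\"\n"
	fmt.Println("detected provisioner:", osReleaseID(sample)) // prints "buildroot"
}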
	I0914 17:07:13.750740   27433 main.go:141] libmachine: Provisioning with buildroot...
	I0914 17:07:13.750748   27433 main.go:141] libmachine: (ha-929592-m03) Calling .GetMachineName
	I0914 17:07:13.750984   27433 buildroot.go:166] provisioning hostname "ha-929592-m03"
	I0914 17:07:13.751012   27433 main.go:141] libmachine: (ha-929592-m03) Calling .GetMachineName
	I0914 17:07:13.751184   27433 main.go:141] libmachine: (ha-929592-m03) Calling .GetSSHHostname
	I0914 17:07:13.754244   27433 main.go:141] libmachine: (ha-929592-m03) DBG | domain ha-929592-m03 has defined MAC address 52:54:00:49:df:f1 in network mk-ha-929592
	I0914 17:07:13.754720   27433 main.go:141] libmachine: (ha-929592-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:df:f1", ip: ""} in network mk-ha-929592: {Iface:virbr1 ExpiryTime:2024-09-14 18:07:04 +0000 UTC Type:0 Mac:52:54:00:49:df:f1 Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:ha-929592-m03 Clientid:01:52:54:00:49:df:f1}
	I0914 17:07:13.754749   27433 main.go:141] libmachine: (ha-929592-m03) DBG | domain ha-929592-m03 has defined IP address 192.168.39.39 and MAC address 52:54:00:49:df:f1 in network mk-ha-929592
	I0914 17:07:13.754907   27433 main.go:141] libmachine: (ha-929592-m03) Calling .GetSSHPort
	I0914 17:07:13.755117   27433 main.go:141] libmachine: (ha-929592-m03) Calling .GetSSHKeyPath
	I0914 17:07:13.755296   27433 main.go:141] libmachine: (ha-929592-m03) Calling .GetSSHKeyPath
	I0914 17:07:13.755467   27433 main.go:141] libmachine: (ha-929592-m03) Calling .GetSSHUsername
	I0914 17:07:13.755674   27433 main.go:141] libmachine: Using SSH client type: native
	I0914 17:07:13.755831   27433 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.39 22 <nil> <nil>}
	I0914 17:07:13.755843   27433 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-929592-m03 && echo "ha-929592-m03" | sudo tee /etc/hostname
	I0914 17:07:13.876961   27433 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-929592-m03
	
	I0914 17:07:13.876988   27433 main.go:141] libmachine: (ha-929592-m03) Calling .GetSSHHostname
	I0914 17:07:13.879711   27433 main.go:141] libmachine: (ha-929592-m03) DBG | domain ha-929592-m03 has defined MAC address 52:54:00:49:df:f1 in network mk-ha-929592
	I0914 17:07:13.880064   27433 main.go:141] libmachine: (ha-929592-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:df:f1", ip: ""} in network mk-ha-929592: {Iface:virbr1 ExpiryTime:2024-09-14 18:07:04 +0000 UTC Type:0 Mac:52:54:00:49:df:f1 Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:ha-929592-m03 Clientid:01:52:54:00:49:df:f1}
	I0914 17:07:13.880084   27433 main.go:141] libmachine: (ha-929592-m03) DBG | domain ha-929592-m03 has defined IP address 192.168.39.39 and MAC address 52:54:00:49:df:f1 in network mk-ha-929592
	I0914 17:07:13.880284   27433 main.go:141] libmachine: (ha-929592-m03) Calling .GetSSHPort
	I0914 17:07:13.880457   27433 main.go:141] libmachine: (ha-929592-m03) Calling .GetSSHKeyPath
	I0914 17:07:13.880588   27433 main.go:141] libmachine: (ha-929592-m03) Calling .GetSSHKeyPath
	I0914 17:07:13.880672   27433 main.go:141] libmachine: (ha-929592-m03) Calling .GetSSHUsername
	I0914 17:07:13.880841   27433 main.go:141] libmachine: Using SSH client type: native
	I0914 17:07:13.881036   27433 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.39 22 <nil> <nil>}
	I0914 17:07:13.881058   27433 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-929592-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-929592-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-929592-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0914 17:07:13.994801   27433 main.go:141] libmachine: SSH cmd err, output: <nil>: 
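
[Annotation] The block above shows minikube provisioning the new node's hostname over SSH with its built-in "native" SSH client: it sets the hostname and patches /etc/hosts, then logs the command output. For orientation only, here is a minimal standalone Go sketch of running a remote command with golang.org/x/crypto/ssh; the key path, address, user and command are placeholders, not minikube's actual code.

    package main

    import (
    	"fmt"
    	"log"
    	"os"

    	"golang.org/x/crypto/ssh"
    )

    func main() {
    	// Hypothetical key path and address; substitute your own.
    	key, err := os.ReadFile("/path/to/id_rsa")
    	if err != nil {
    		log.Fatal(err)
    	}
    	signer, err := ssh.ParsePrivateKey(key)
    	if err != nil {
    		log.Fatal(err)
    	}
    	cfg := &ssh.ClientConfig{
    		User:            "docker",
    		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
    		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable only for throwaway test VMs
    	}
    	client, err := ssh.Dial("tcp", "192.168.39.39:22", cfg)
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer client.Close()

    	sess, err := client.NewSession()
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer sess.Close()

    	// Same shape of command as the hostname-provisioning step above.
    	out, err := sess.CombinedOutput(`sudo hostname demo-node && echo "demo-node" | sudo tee /etc/hostname`)
    	fmt.Printf("output: %s, err: %v\n", out, err)
    }
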
	I0914 17:07:13.994834   27433 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19643-8806/.minikube CaCertPath:/home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19643-8806/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19643-8806/.minikube}
	I0914 17:07:13.994853   27433 buildroot.go:174] setting up certificates
	I0914 17:07:13.994863   27433 provision.go:84] configureAuth start
	I0914 17:07:13.994872   27433 main.go:141] libmachine: (ha-929592-m03) Calling .GetMachineName
	I0914 17:07:13.995128   27433 main.go:141] libmachine: (ha-929592-m03) Calling .GetIP
	I0914 17:07:13.997466   27433 main.go:141] libmachine: (ha-929592-m03) DBG | domain ha-929592-m03 has defined MAC address 52:54:00:49:df:f1 in network mk-ha-929592
	I0914 17:07:13.997846   27433 main.go:141] libmachine: (ha-929592-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:df:f1", ip: ""} in network mk-ha-929592: {Iface:virbr1 ExpiryTime:2024-09-14 18:07:04 +0000 UTC Type:0 Mac:52:54:00:49:df:f1 Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:ha-929592-m03 Clientid:01:52:54:00:49:df:f1}
	I0914 17:07:13.997878   27433 main.go:141] libmachine: (ha-929592-m03) DBG | domain ha-929592-m03 has defined IP address 192.168.39.39 and MAC address 52:54:00:49:df:f1 in network mk-ha-929592
	I0914 17:07:13.998074   27433 main.go:141] libmachine: (ha-929592-m03) Calling .GetSSHHostname
	I0914 17:07:14.000477   27433 main.go:141] libmachine: (ha-929592-m03) DBG | domain ha-929592-m03 has defined MAC address 52:54:00:49:df:f1 in network mk-ha-929592
	I0914 17:07:14.000823   27433 main.go:141] libmachine: (ha-929592-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:df:f1", ip: ""} in network mk-ha-929592: {Iface:virbr1 ExpiryTime:2024-09-14 18:07:04 +0000 UTC Type:0 Mac:52:54:00:49:df:f1 Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:ha-929592-m03 Clientid:01:52:54:00:49:df:f1}
	I0914 17:07:14.000849   27433 main.go:141] libmachine: (ha-929592-m03) DBG | domain ha-929592-m03 has defined IP address 192.168.39.39 and MAC address 52:54:00:49:df:f1 in network mk-ha-929592
	I0914 17:07:14.001022   27433 provision.go:143] copyHostCerts
	I0914 17:07:14.001054   27433 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19643-8806/.minikube/cert.pem
	I0914 17:07:14.001086   27433 exec_runner.go:144] found /home/jenkins/minikube-integration/19643-8806/.minikube/cert.pem, removing ...
	I0914 17:07:14.001096   27433 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19643-8806/.minikube/cert.pem
	I0914 17:07:14.001164   27433 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19643-8806/.minikube/cert.pem (1123 bytes)
	I0914 17:07:14.001239   27433 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19643-8806/.minikube/key.pem
	I0914 17:07:14.001257   27433 exec_runner.go:144] found /home/jenkins/minikube-integration/19643-8806/.minikube/key.pem, removing ...
	I0914 17:07:14.001263   27433 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19643-8806/.minikube/key.pem
	I0914 17:07:14.001286   27433 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19643-8806/.minikube/key.pem (1675 bytes)
	I0914 17:07:14.001344   27433 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19643-8806/.minikube/ca.pem
	I0914 17:07:14.001361   27433 exec_runner.go:144] found /home/jenkins/minikube-integration/19643-8806/.minikube/ca.pem, removing ...
	I0914 17:07:14.001367   27433 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19643-8806/.minikube/ca.pem
	I0914 17:07:14.001388   27433 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19643-8806/.minikube/ca.pem (1082 bytes)
	I0914 17:07:14.001437   27433 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19643-8806/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca-key.pem org=jenkins.ha-929592-m03 san=[127.0.0.1 192.168.39.39 ha-929592-m03 localhost minikube]
	I0914 17:07:14.186720   27433 provision.go:177] copyRemoteCerts
	I0914 17:07:14.186780   27433 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0914 17:07:14.186804   27433 main.go:141] libmachine: (ha-929592-m03) Calling .GetSSHHostname
	I0914 17:07:14.189322   27433 main.go:141] libmachine: (ha-929592-m03) DBG | domain ha-929592-m03 has defined MAC address 52:54:00:49:df:f1 in network mk-ha-929592
	I0914 17:07:14.189635   27433 main.go:141] libmachine: (ha-929592-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:df:f1", ip: ""} in network mk-ha-929592: {Iface:virbr1 ExpiryTime:2024-09-14 18:07:04 +0000 UTC Type:0 Mac:52:54:00:49:df:f1 Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:ha-929592-m03 Clientid:01:52:54:00:49:df:f1}
	I0914 17:07:14.189665   27433 main.go:141] libmachine: (ha-929592-m03) DBG | domain ha-929592-m03 has defined IP address 192.168.39.39 and MAC address 52:54:00:49:df:f1 in network mk-ha-929592
	I0914 17:07:14.189807   27433 main.go:141] libmachine: (ha-929592-m03) Calling .GetSSHPort
	I0914 17:07:14.190094   27433 main.go:141] libmachine: (ha-929592-m03) Calling .GetSSHKeyPath
	I0914 17:07:14.190290   27433 main.go:141] libmachine: (ha-929592-m03) Calling .GetSSHUsername
	I0914 17:07:14.190499   27433 sshutil.go:53] new ssh client: &{IP:192.168.39.39 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/ha-929592-m03/id_rsa Username:docker}
	I0914 17:07:14.273407   27433 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0914 17:07:14.273472   27433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0914 17:07:14.298629   27433 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19643-8806/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0914 17:07:14.298702   27433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0914 17:07:14.323719   27433 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19643-8806/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0914 17:07:14.323790   27433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0914 17:07:14.349008   27433 provision.go:87] duration metric: took 354.131771ms to configureAuth
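
[Annotation] configureAuth above generates a server certificate signed by the local minikube CA with the node's IPs and hostnames as SANs (see the san=[...] line), then scp's ca.pem, server.pem and server-key.pem into /etc/docker on the guest. A minimal, self-contained Go sketch of issuing a SAN-bearing certificate follows; it is self-signed for brevity (minikube signs with its CA) and the organization, IPs and DNS names are illustrative.

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	key, err := rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		panic(err)
    	}
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(1),
    		Subject:      pkix.Name{Organization: []string{"jenkins.example-node"}}, // illustrative org
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		// SANs: the addresses and names clients may use to reach the node.
    		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.39")},
    		DNSNames:    []string{"localhost", "minikube", "example-node"},
    	}
    	// Self-signed here; minikube uses its cluster CA as the parent instead.
    	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
    	if err != nil {
    		panic(err)
    	}
    	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }
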
	I0914 17:07:14.349042   27433 buildroot.go:189] setting minikube options for container-runtime
	I0914 17:07:14.349265   27433 config.go:182] Loaded profile config "ha-929592": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0914 17:07:14.349341   27433 main.go:141] libmachine: (ha-929592-m03) Calling .GetSSHHostname
	I0914 17:07:14.351884   27433 main.go:141] libmachine: (ha-929592-m03) DBG | domain ha-929592-m03 has defined MAC address 52:54:00:49:df:f1 in network mk-ha-929592
	I0914 17:07:14.352193   27433 main.go:141] libmachine: (ha-929592-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:df:f1", ip: ""} in network mk-ha-929592: {Iface:virbr1 ExpiryTime:2024-09-14 18:07:04 +0000 UTC Type:0 Mac:52:54:00:49:df:f1 Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:ha-929592-m03 Clientid:01:52:54:00:49:df:f1}
	I0914 17:07:14.352228   27433 main.go:141] libmachine: (ha-929592-m03) DBG | domain ha-929592-m03 has defined IP address 192.168.39.39 and MAC address 52:54:00:49:df:f1 in network mk-ha-929592
	I0914 17:07:14.352371   27433 main.go:141] libmachine: (ha-929592-m03) Calling .GetSSHPort
	I0914 17:07:14.352615   27433 main.go:141] libmachine: (ha-929592-m03) Calling .GetSSHKeyPath
	I0914 17:07:14.352788   27433 main.go:141] libmachine: (ha-929592-m03) Calling .GetSSHKeyPath
	I0914 17:07:14.352934   27433 main.go:141] libmachine: (ha-929592-m03) Calling .GetSSHUsername
	I0914 17:07:14.353086   27433 main.go:141] libmachine: Using SSH client type: native
	I0914 17:07:14.353238   27433 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.39 22 <nil> <nil>}
	I0914 17:07:14.353252   27433 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0914 17:07:14.581057   27433 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0914 17:07:14.581084   27433 main.go:141] libmachine: Checking connection to Docker...
	I0914 17:07:14.581094   27433 main.go:141] libmachine: (ha-929592-m03) Calling .GetURL
	I0914 17:07:14.582388   27433 main.go:141] libmachine: (ha-929592-m03) DBG | Using libvirt version 6000000
	I0914 17:07:14.585025   27433 main.go:141] libmachine: (ha-929592-m03) DBG | domain ha-929592-m03 has defined MAC address 52:54:00:49:df:f1 in network mk-ha-929592
	I0914 17:07:14.585421   27433 main.go:141] libmachine: (ha-929592-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:df:f1", ip: ""} in network mk-ha-929592: {Iface:virbr1 ExpiryTime:2024-09-14 18:07:04 +0000 UTC Type:0 Mac:52:54:00:49:df:f1 Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:ha-929592-m03 Clientid:01:52:54:00:49:df:f1}
	I0914 17:07:14.585455   27433 main.go:141] libmachine: (ha-929592-m03) DBG | domain ha-929592-m03 has defined IP address 192.168.39.39 and MAC address 52:54:00:49:df:f1 in network mk-ha-929592
	I0914 17:07:14.585617   27433 main.go:141] libmachine: Docker is up and running!
	I0914 17:07:14.585632   27433 main.go:141] libmachine: Reticulating splines...
	I0914 17:07:14.585640   27433 client.go:171] duration metric: took 24.605908814s to LocalClient.Create
	I0914 17:07:14.585666   27433 start.go:167] duration metric: took 24.605970622s to libmachine.API.Create "ha-929592"
	I0914 17:07:14.585677   27433 start.go:293] postStartSetup for "ha-929592-m03" (driver="kvm2")
	I0914 17:07:14.585692   27433 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0914 17:07:14.585743   27433 main.go:141] libmachine: (ha-929592-m03) Calling .DriverName
	I0914 17:07:14.585965   27433 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0914 17:07:14.585987   27433 main.go:141] libmachine: (ha-929592-m03) Calling .GetSSHHostname
	I0914 17:07:14.588146   27433 main.go:141] libmachine: (ha-929592-m03) DBG | domain ha-929592-m03 has defined MAC address 52:54:00:49:df:f1 in network mk-ha-929592
	I0914 17:07:14.588465   27433 main.go:141] libmachine: (ha-929592-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:df:f1", ip: ""} in network mk-ha-929592: {Iface:virbr1 ExpiryTime:2024-09-14 18:07:04 +0000 UTC Type:0 Mac:52:54:00:49:df:f1 Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:ha-929592-m03 Clientid:01:52:54:00:49:df:f1}
	I0914 17:07:14.588487   27433 main.go:141] libmachine: (ha-929592-m03) DBG | domain ha-929592-m03 has defined IP address 192.168.39.39 and MAC address 52:54:00:49:df:f1 in network mk-ha-929592
	I0914 17:07:14.588623   27433 main.go:141] libmachine: (ha-929592-m03) Calling .GetSSHPort
	I0914 17:07:14.588789   27433 main.go:141] libmachine: (ha-929592-m03) Calling .GetSSHKeyPath
	I0914 17:07:14.589040   27433 main.go:141] libmachine: (ha-929592-m03) Calling .GetSSHUsername
	I0914 17:07:14.589255   27433 sshutil.go:53] new ssh client: &{IP:192.168.39.39 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/ha-929592-m03/id_rsa Username:docker}
	I0914 17:07:14.672938   27433 ssh_runner.go:195] Run: cat /etc/os-release
	I0914 17:07:14.677354   27433 info.go:137] Remote host: Buildroot 2023.02.9
	I0914 17:07:14.677381   27433 filesync.go:126] Scanning /home/jenkins/minikube-integration/19643-8806/.minikube/addons for local assets ...
	I0914 17:07:14.677450   27433 filesync.go:126] Scanning /home/jenkins/minikube-integration/19643-8806/.minikube/files for local assets ...
	I0914 17:07:14.677518   27433 filesync.go:149] local asset: /home/jenkins/minikube-integration/19643-8806/.minikube/files/etc/ssl/certs/160162.pem -> 160162.pem in /etc/ssl/certs
	I0914 17:07:14.677527   27433 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19643-8806/.minikube/files/etc/ssl/certs/160162.pem -> /etc/ssl/certs/160162.pem
	I0914 17:07:14.677625   27433 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0914 17:07:14.687459   27433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/files/etc/ssl/certs/160162.pem --> /etc/ssl/certs/160162.pem (1708 bytes)
	I0914 17:07:14.714644   27433 start.go:296] duration metric: took 128.952663ms for postStartSetup
	I0914 17:07:14.714698   27433 main.go:141] libmachine: (ha-929592-m03) Calling .GetConfigRaw
	I0914 17:07:14.715290   27433 main.go:141] libmachine: (ha-929592-m03) Calling .GetIP
	I0914 17:07:14.718212   27433 main.go:141] libmachine: (ha-929592-m03) DBG | domain ha-929592-m03 has defined MAC address 52:54:00:49:df:f1 in network mk-ha-929592
	I0914 17:07:14.718594   27433 main.go:141] libmachine: (ha-929592-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:df:f1", ip: ""} in network mk-ha-929592: {Iface:virbr1 ExpiryTime:2024-09-14 18:07:04 +0000 UTC Type:0 Mac:52:54:00:49:df:f1 Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:ha-929592-m03 Clientid:01:52:54:00:49:df:f1}
	I0914 17:07:14.718622   27433 main.go:141] libmachine: (ha-929592-m03) DBG | domain ha-929592-m03 has defined IP address 192.168.39.39 and MAC address 52:54:00:49:df:f1 in network mk-ha-929592
	I0914 17:07:14.719033   27433 profile.go:143] Saving config to /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/ha-929592/config.json ...
	I0914 17:07:14.719244   27433 start.go:128] duration metric: took 24.759482258s to createHost
	I0914 17:07:14.719273   27433 main.go:141] libmachine: (ha-929592-m03) Calling .GetSSHHostname
	I0914 17:07:14.721996   27433 main.go:141] libmachine: (ha-929592-m03) DBG | domain ha-929592-m03 has defined MAC address 52:54:00:49:df:f1 in network mk-ha-929592
	I0914 17:07:14.722410   27433 main.go:141] libmachine: (ha-929592-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:df:f1", ip: ""} in network mk-ha-929592: {Iface:virbr1 ExpiryTime:2024-09-14 18:07:04 +0000 UTC Type:0 Mac:52:54:00:49:df:f1 Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:ha-929592-m03 Clientid:01:52:54:00:49:df:f1}
	I0914 17:07:14.722437   27433 main.go:141] libmachine: (ha-929592-m03) DBG | domain ha-929592-m03 has defined IP address 192.168.39.39 and MAC address 52:54:00:49:df:f1 in network mk-ha-929592
	I0914 17:07:14.722588   27433 main.go:141] libmachine: (ha-929592-m03) Calling .GetSSHPort
	I0914 17:07:14.722810   27433 main.go:141] libmachine: (ha-929592-m03) Calling .GetSSHKeyPath
	I0914 17:07:14.722949   27433 main.go:141] libmachine: (ha-929592-m03) Calling .GetSSHKeyPath
	I0914 17:07:14.723063   27433 main.go:141] libmachine: (ha-929592-m03) Calling .GetSSHUsername
	I0914 17:07:14.723268   27433 main.go:141] libmachine: Using SSH client type: native
	I0914 17:07:14.723475   27433 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.39 22 <nil> <nil>}
	I0914 17:07:14.723490   27433 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0914 17:07:14.830713   27433 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726333634.808024922
	
	I0914 17:07:14.830732   27433 fix.go:216] guest clock: 1726333634.808024922
	I0914 17:07:14.830740   27433 fix.go:229] Guest: 2024-09-14 17:07:14.808024922 +0000 UTC Remote: 2024-09-14 17:07:14.719257775 +0000 UTC m=+142.390455536 (delta=88.767147ms)
	I0914 17:07:14.830754   27433 fix.go:200] guest clock delta is within tolerance: 88.767147ms
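
[Annotation] The clock check above reads the guest's time with `date +%s.%N`, compares it against the host's wall clock, and accepts the skew because the 88ms delta is within tolerance. A small Go sketch of the same comparison, assuming the epoch string has already been captured from the guest; the tolerance value is illustrative, not minikube's.

    package main

    import (
    	"fmt"
    	"strconv"
    	"time"
    )

    // parseEpoch turns the output of `date +%s.%N` into a time.Time.
    func parseEpoch(s string) (time.Time, error) {
    	f, err := strconv.ParseFloat(s, 64)
    	if err != nil {
    		return time.Time{}, err
    	}
    	sec := int64(f)
    	nsec := int64((f - float64(sec)) * 1e9)
    	return time.Unix(sec, nsec), nil
    }

    func main() {
    	guest, err := parseEpoch("1726333634.808024922") // value captured from the guest
    	if err != nil {
    		panic(err)
    	}
    	host := time.Now()
    	delta := guest.Sub(host)
    	if delta < 0 {
    		delta = -delta
    	}
    	const tolerance = 1 * time.Second // illustrative threshold
    	fmt.Printf("delta=%v within tolerance=%v: %v\n", delta, tolerance, delta <= tolerance)
    }
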
	I0914 17:07:14.830759   27433 start.go:83] releasing machines lock for "ha-929592-m03", held for 24.871132115s
	I0914 17:07:14.830776   27433 main.go:141] libmachine: (ha-929592-m03) Calling .DriverName
	I0914 17:07:14.831059   27433 main.go:141] libmachine: (ha-929592-m03) Calling .GetIP
	I0914 17:07:14.833686   27433 main.go:141] libmachine: (ha-929592-m03) DBG | domain ha-929592-m03 has defined MAC address 52:54:00:49:df:f1 in network mk-ha-929592
	I0914 17:07:14.834135   27433 main.go:141] libmachine: (ha-929592-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:df:f1", ip: ""} in network mk-ha-929592: {Iface:virbr1 ExpiryTime:2024-09-14 18:07:04 +0000 UTC Type:0 Mac:52:54:00:49:df:f1 Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:ha-929592-m03 Clientid:01:52:54:00:49:df:f1}
	I0914 17:07:14.834181   27433 main.go:141] libmachine: (ha-929592-m03) DBG | domain ha-929592-m03 has defined IP address 192.168.39.39 and MAC address 52:54:00:49:df:f1 in network mk-ha-929592
	I0914 17:07:14.836475   27433 out.go:177] * Found network options:
	I0914 17:07:14.837543   27433 out.go:177]   - NO_PROXY=192.168.39.54,192.168.39.148
	W0914 17:07:14.838926   27433 proxy.go:119] fail to check proxy env: Error ip not in block
	W0914 17:07:14.838951   27433 proxy.go:119] fail to check proxy env: Error ip not in block
	I0914 17:07:14.838967   27433 main.go:141] libmachine: (ha-929592-m03) Calling .DriverName
	I0914 17:07:14.839606   27433 main.go:141] libmachine: (ha-929592-m03) Calling .DriverName
	I0914 17:07:14.839788   27433 main.go:141] libmachine: (ha-929592-m03) Calling .DriverName
	I0914 17:07:14.839890   27433 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0914 17:07:14.839932   27433 main.go:141] libmachine: (ha-929592-m03) Calling .GetSSHHostname
	W0914 17:07:14.840000   27433 proxy.go:119] fail to check proxy env: Error ip not in block
	W0914 17:07:14.840039   27433 proxy.go:119] fail to check proxy env: Error ip not in block
	I0914 17:07:14.840105   27433 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0914 17:07:14.840131   27433 main.go:141] libmachine: (ha-929592-m03) Calling .GetSSHHostname
	I0914 17:07:14.842687   27433 main.go:141] libmachine: (ha-929592-m03) DBG | domain ha-929592-m03 has defined MAC address 52:54:00:49:df:f1 in network mk-ha-929592
	I0914 17:07:14.842834   27433 main.go:141] libmachine: (ha-929592-m03) DBG | domain ha-929592-m03 has defined MAC address 52:54:00:49:df:f1 in network mk-ha-929592
	I0914 17:07:14.843104   27433 main.go:141] libmachine: (ha-929592-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:df:f1", ip: ""} in network mk-ha-929592: {Iface:virbr1 ExpiryTime:2024-09-14 18:07:04 +0000 UTC Type:0 Mac:52:54:00:49:df:f1 Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:ha-929592-m03 Clientid:01:52:54:00:49:df:f1}
	I0914 17:07:14.843135   27433 main.go:141] libmachine: (ha-929592-m03) DBG | domain ha-929592-m03 has defined IP address 192.168.39.39 and MAC address 52:54:00:49:df:f1 in network mk-ha-929592
	I0914 17:07:14.843272   27433 main.go:141] libmachine: (ha-929592-m03) Calling .GetSSHPort
	I0914 17:07:14.843373   27433 main.go:141] libmachine: (ha-929592-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:df:f1", ip: ""} in network mk-ha-929592: {Iface:virbr1 ExpiryTime:2024-09-14 18:07:04 +0000 UTC Type:0 Mac:52:54:00:49:df:f1 Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:ha-929592-m03 Clientid:01:52:54:00:49:df:f1}
	I0914 17:07:14.843396   27433 main.go:141] libmachine: (ha-929592-m03) DBG | domain ha-929592-m03 has defined IP address 192.168.39.39 and MAC address 52:54:00:49:df:f1 in network mk-ha-929592
	I0914 17:07:14.843439   27433 main.go:141] libmachine: (ha-929592-m03) Calling .GetSSHKeyPath
	I0914 17:07:14.843587   27433 main.go:141] libmachine: (ha-929592-m03) Calling .GetSSHPort
	I0914 17:07:14.843632   27433 main.go:141] libmachine: (ha-929592-m03) Calling .GetSSHUsername
	I0914 17:07:14.843708   27433 main.go:141] libmachine: (ha-929592-m03) Calling .GetSSHKeyPath
	I0914 17:07:14.843750   27433 sshutil.go:53] new ssh client: &{IP:192.168.39.39 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/ha-929592-m03/id_rsa Username:docker}
	I0914 17:07:14.843874   27433 main.go:141] libmachine: (ha-929592-m03) Calling .GetSSHUsername
	I0914 17:07:14.844012   27433 sshutil.go:53] new ssh client: &{IP:192.168.39.39 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/ha-929592-m03/id_rsa Username:docker}
	I0914 17:07:15.088977   27433 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0914 17:07:15.094790   27433 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0914 17:07:15.094865   27433 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0914 17:07:15.110819   27433 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0914 17:07:15.110845   27433 start.go:495] detecting cgroup driver to use...
	I0914 17:07:15.110902   27433 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0914 17:07:15.129575   27433 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0914 17:07:15.144157   27433 docker.go:217] disabling cri-docker service (if available) ...
	I0914 17:07:15.144209   27433 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0914 17:07:15.158840   27433 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0914 17:07:15.172747   27433 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0914 17:07:15.286758   27433 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0914 17:07:15.433698   27433 docker.go:233] disabling docker service ...
	I0914 17:07:15.433766   27433 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0914 17:07:15.448613   27433 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0914 17:07:15.462147   27433 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0914 17:07:15.599607   27433 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0914 17:07:15.723635   27433 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0914 17:07:15.738666   27433 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0914 17:07:15.758494   27433 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0914 17:07:15.758555   27433 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 17:07:15.772003   27433 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0914 17:07:15.772077   27433 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 17:07:15.783795   27433 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 17:07:15.795318   27433 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 17:07:15.806340   27433 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0914 17:07:15.816626   27433 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 17:07:15.827989   27433 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 17:07:15.844682   27433 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 17:07:15.854673   27433 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0914 17:07:15.864167   27433 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0914 17:07:15.864218   27433 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0914 17:07:15.878610   27433 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0914 17:07:15.888865   27433 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 17:07:15.996873   27433 ssh_runner.go:195] Run: sudo systemctl restart crio
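
[Annotation] The sequence above rewrites /etc/crio/crio.conf.d/02-crio.conf in place with sed (pause image, cgroup manager, conmon cgroup, default sysctls), loads br_netfilter, enables IP forwarding, and restarts CRI-O. The same kind of line-oriented replacement can be sketched in Go with the regexp package; the config snippet below is inline sample text rather than the file read from the guest.

    package main

    import (
    	"fmt"
    	"regexp"
    )

    func main() {
    	conf := `[crio.image]
    pause_image = "registry.k8s.io/pause:3.9"
    [crio.runtime]
    cgroup_manager = "systemd"
    `
    	// Equivalent of: sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|'
    	pause := regexp.MustCompile(`(?m)^\s*pause_image = .*$`)
    	conf = pause.ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10"`)

    	// Equivalent of: sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|'
    	cgroup := regexp.MustCompile(`(?m)^\s*cgroup_manager = .*$`)
    	conf = cgroup.ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)

    	fmt.Print(conf)
    }
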
	I0914 17:07:16.084308   27433 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0914 17:07:16.084378   27433 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0914 17:07:16.089222   27433 start.go:563] Will wait 60s for crictl version
	I0914 17:07:16.089276   27433 ssh_runner.go:195] Run: which crictl
	I0914 17:07:16.092822   27433 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0914 17:07:16.128255   27433 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0914 17:07:16.128362   27433 ssh_runner.go:195] Run: crio --version
	I0914 17:07:16.156435   27433 ssh_runner.go:195] Run: crio --version
	I0914 17:07:16.185307   27433 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0914 17:07:16.186498   27433 out.go:177]   - env NO_PROXY=192.168.39.54
	I0914 17:07:16.187780   27433 out.go:177]   - env NO_PROXY=192.168.39.54,192.168.39.148
	I0914 17:07:16.189038   27433 main.go:141] libmachine: (ha-929592-m03) Calling .GetIP
	I0914 17:07:16.191764   27433 main.go:141] libmachine: (ha-929592-m03) DBG | domain ha-929592-m03 has defined MAC address 52:54:00:49:df:f1 in network mk-ha-929592
	I0914 17:07:16.192143   27433 main.go:141] libmachine: (ha-929592-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:df:f1", ip: ""} in network mk-ha-929592: {Iface:virbr1 ExpiryTime:2024-09-14 18:07:04 +0000 UTC Type:0 Mac:52:54:00:49:df:f1 Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:ha-929592-m03 Clientid:01:52:54:00:49:df:f1}
	I0914 17:07:16.192166   27433 main.go:141] libmachine: (ha-929592-m03) DBG | domain ha-929592-m03 has defined IP address 192.168.39.39 and MAC address 52:54:00:49:df:f1 in network mk-ha-929592
	I0914 17:07:16.192408   27433 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0914 17:07:16.196706   27433 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0914 17:07:16.209144   27433 mustload.go:65] Loading cluster: ha-929592
	I0914 17:07:16.209417   27433 config.go:182] Loaded profile config "ha-929592": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0914 17:07:16.209682   27433 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 17:07:16.209721   27433 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 17:07:16.224831   27433 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44103
	I0914 17:07:16.225273   27433 main.go:141] libmachine: () Calling .GetVersion
	I0914 17:07:16.225816   27433 main.go:141] libmachine: Using API Version  1
	I0914 17:07:16.225843   27433 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 17:07:16.226138   27433 main.go:141] libmachine: () Calling .GetMachineName
	I0914 17:07:16.226315   27433 main.go:141] libmachine: (ha-929592) Calling .GetState
	I0914 17:07:16.227704   27433 host.go:66] Checking if "ha-929592" exists ...
	I0914 17:07:16.228102   27433 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 17:07:16.228146   27433 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 17:07:16.242690   27433 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36155
	I0914 17:07:16.243081   27433 main.go:141] libmachine: () Calling .GetVersion
	I0914 17:07:16.243552   27433 main.go:141] libmachine: Using API Version  1
	I0914 17:07:16.243573   27433 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 17:07:16.243935   27433 main.go:141] libmachine: () Calling .GetMachineName
	I0914 17:07:16.244132   27433 main.go:141] libmachine: (ha-929592) Calling .DriverName
	I0914 17:07:16.244309   27433 certs.go:68] Setting up /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/ha-929592 for IP: 192.168.39.39
	I0914 17:07:16.244323   27433 certs.go:194] generating shared ca certs ...
	I0914 17:07:16.244339   27433 certs.go:226] acquiring lock for ca certs: {Name:mkb663a3180967f5f94f0c355b2cd55067394331 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 17:07:16.244469   27433 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19643-8806/.minikube/ca.key
	I0914 17:07:16.244521   27433 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19643-8806/.minikube/proxy-client-ca.key
	I0914 17:07:16.244533   27433 certs.go:256] generating profile certs ...
	I0914 17:07:16.244631   27433 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/ha-929592/client.key
	I0914 17:07:16.244662   27433 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/ha-929592/apiserver.key.3049b98d
	I0914 17:07:16.244680   27433 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/ha-929592/apiserver.crt.3049b98d with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.54 192.168.39.148 192.168.39.39 192.168.39.254]
	I0914 17:07:16.555188   27433 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/ha-929592/apiserver.crt.3049b98d ...
	I0914 17:07:16.555218   27433 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/ha-929592/apiserver.crt.3049b98d: {Name:mk293944dbe0571c4a4a3bd4d63886ec79fd8aae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 17:07:16.555415   27433 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/ha-929592/apiserver.key.3049b98d ...
	I0914 17:07:16.555435   27433 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/ha-929592/apiserver.key.3049b98d: {Name:mkab68f22df16a01bf03af3d7236b02f34cdef65 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 17:07:16.555543   27433 certs.go:381] copying /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/ha-929592/apiserver.crt.3049b98d -> /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/ha-929592/apiserver.crt
	I0914 17:07:16.555702   27433 certs.go:385] copying /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/ha-929592/apiserver.key.3049b98d -> /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/ha-929592/apiserver.key
	I0914 17:07:16.555858   27433 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/ha-929592/proxy-client.key
	I0914 17:07:16.555875   27433 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19643-8806/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0914 17:07:16.555893   27433 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19643-8806/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0914 17:07:16.555910   27433 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19643-8806/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0914 17:07:16.555930   27433 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19643-8806/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0914 17:07:16.555949   27433 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/ha-929592/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0914 17:07:16.555968   27433 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/ha-929592/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0914 17:07:16.555986   27433 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/ha-929592/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0914 17:07:16.570279   27433 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/ha-929592/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0914 17:07:16.570409   27433 certs.go:484] found cert: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/16016.pem (1338 bytes)
	W0914 17:07:16.570460   27433 certs.go:480] ignoring /home/jenkins/minikube-integration/19643-8806/.minikube/certs/16016_empty.pem, impossibly tiny 0 bytes
	I0914 17:07:16.570473   27433 certs.go:484] found cert: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca-key.pem (1679 bytes)
	I0914 17:07:16.570507   27433 certs.go:484] found cert: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca.pem (1082 bytes)
	I0914 17:07:16.570540   27433 certs.go:484] found cert: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/cert.pem (1123 bytes)
	I0914 17:07:16.570572   27433 certs.go:484] found cert: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/key.pem (1675 bytes)
	I0914 17:07:16.570629   27433 certs.go:484] found cert: /home/jenkins/minikube-integration/19643-8806/.minikube/files/etc/ssl/certs/160162.pem (1708 bytes)
	I0914 17:07:16.570680   27433 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19643-8806/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0914 17:07:16.570702   27433 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/16016.pem -> /usr/share/ca-certificates/16016.pem
	I0914 17:07:16.570724   27433 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19643-8806/.minikube/files/etc/ssl/certs/160162.pem -> /usr/share/ca-certificates/160162.pem
	I0914 17:07:16.570772   27433 main.go:141] libmachine: (ha-929592) Calling .GetSSHHostname
	I0914 17:07:16.573823   27433 main.go:141] libmachine: (ha-929592) DBG | domain ha-929592 has defined MAC address 52:54:00:5c:cb:09 in network mk-ha-929592
	I0914 17:07:16.574264   27433 main.go:141] libmachine: (ha-929592) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:cb:09", ip: ""} in network mk-ha-929592: {Iface:virbr1 ExpiryTime:2024-09-14 18:05:06 +0000 UTC Type:0 Mac:52:54:00:5c:cb:09 Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:ha-929592 Clientid:01:52:54:00:5c:cb:09}
	I0914 17:07:16.574292   27433 main.go:141] libmachine: (ha-929592) DBG | domain ha-929592 has defined IP address 192.168.39.54 and MAC address 52:54:00:5c:cb:09 in network mk-ha-929592
	I0914 17:07:16.574464   27433 main.go:141] libmachine: (ha-929592) Calling .GetSSHPort
	I0914 17:07:16.574669   27433 main.go:141] libmachine: (ha-929592) Calling .GetSSHKeyPath
	I0914 17:07:16.574848   27433 main.go:141] libmachine: (ha-929592) Calling .GetSSHUsername
	I0914 17:07:16.574961   27433 sshutil.go:53] new ssh client: &{IP:192.168.39.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/ha-929592/id_rsa Username:docker}
	I0914 17:07:16.654584   27433 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0914 17:07:16.660317   27433 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0914 17:07:16.671440   27433 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0914 17:07:16.677084   27433 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0914 17:07:16.687544   27433 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0914 17:07:16.691970   27433 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0914 17:07:16.703302   27433 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0914 17:07:16.707644   27433 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0914 17:07:16.719098   27433 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0914 17:07:16.723753   27433 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0914 17:07:16.742558   27433 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0914 17:07:16.746769   27433 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0914 17:07:16.759625   27433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0914 17:07:16.787721   27433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0914 17:07:16.812656   27433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0914 17:07:16.835889   27433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0914 17:07:16.860258   27433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/ha-929592/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0914 17:07:16.884399   27433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/ha-929592/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0914 17:07:16.909899   27433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/ha-929592/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0914 17:07:16.934622   27433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/ha-929592/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0914 17:07:16.959438   27433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0914 17:07:16.982628   27433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/certs/16016.pem --> /usr/share/ca-certificates/16016.pem (1338 bytes)
	I0914 17:07:17.005524   27433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/files/etc/ssl/certs/160162.pem --> /usr/share/ca-certificates/160162.pem (1708 bytes)
	I0914 17:07:17.031425   27433 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0914 17:07:17.047634   27433 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0914 17:07:17.064668   27433 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0914 17:07:17.080829   27433 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0914 17:07:17.097388   27433 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0914 17:07:17.113555   27433 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0914 17:07:17.131406   27433 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0914 17:07:17.148831   27433 ssh_runner.go:195] Run: openssl version
	I0914 17:07:17.155139   27433 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/160162.pem && ln -fs /usr/share/ca-certificates/160162.pem /etc/ssl/certs/160162.pem"
	I0914 17:07:17.166934   27433 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/160162.pem
	I0914 17:07:17.171390   27433 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 14 17:01 /usr/share/ca-certificates/160162.pem
	I0914 17:07:17.171450   27433 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/160162.pem
	I0914 17:07:17.177195   27433 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/160162.pem /etc/ssl/certs/3ec20f2e.0"
	I0914 17:07:17.187600   27433 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0914 17:07:17.198704   27433 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0914 17:07:17.203174   27433 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 14 16:44 /usr/share/ca-certificates/minikubeCA.pem
	I0914 17:07:17.203227   27433 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0914 17:07:17.208809   27433 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0914 17:07:17.219464   27433 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16016.pem && ln -fs /usr/share/ca-certificates/16016.pem /etc/ssl/certs/16016.pem"
	I0914 17:07:17.230052   27433 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16016.pem
	I0914 17:07:17.234895   27433 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 14 17:01 /usr/share/ca-certificates/16016.pem
	I0914 17:07:17.234970   27433 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16016.pem
	I0914 17:07:17.241057   27433 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16016.pem /etc/ssl/certs/51391683.0"
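
[Annotation] Above, each CA certificate is copied into /usr/share/ca-certificates and a symlink named after its OpenSSL subject hash (for example 51391683.0) is created in /etc/ssl/certs; that hash-named link is how OpenSSL finds trust anchors in a hashed cert directory. A sketch of the same steps in Go, shelling out to openssl for the hash; the certificate path is a placeholder and writing to /etc/ssl/certs requires root.

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"path/filepath"
    	"strings"
    )

    func main() {
    	certPath := "/usr/share/ca-certificates/example.pem" // placeholder path

    	// openssl x509 -hash prints the subject hash used as the symlink name.
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
    	if err != nil {
    		panic(err)
    	}
    	hash := strings.TrimSpace(string(out))

    	link := filepath.Join("/etc/ssl/certs", hash+".0")
    	_ = os.Remove(link) // emulate ln -fs: replace any existing link
    	if err := os.Symlink(certPath, link); err != nil {
    		panic(err)
    	}
    	fmt.Println("created", link, "->", certPath)
    }
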
	I0914 17:07:17.253229   27433 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0914 17:07:17.257647   27433 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0914 17:07:17.257708   27433 kubeadm.go:934] updating node {m03 192.168.39.39 8443 v1.31.1 crio true true} ...
	I0914 17:07:17.257784   27433 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-929592-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.39
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-929592 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0914 17:07:17.257809   27433 kube-vip.go:115] generating kube-vip config ...
	I0914 17:07:17.257843   27433 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0914 17:07:17.274638   27433 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0914 17:07:17.274697   27433 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0914 17:07:17.274742   27433 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0914 17:07:17.284442   27433 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.1': No such file or directory
	
	Initiating transfer...
	I0914 17:07:17.284516   27433 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.1
	I0914 17:07:17.293975   27433 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256
	I0914 17:07:17.294003   27433 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19643-8806/.minikube/cache/linux/amd64/v1.31.1/kubectl -> /var/lib/minikube/binaries/v1.31.1/kubectl
	I0914 17:07:17.294035   27433 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm.sha256
	I0914 17:07:17.294058   27433 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19643-8806/.minikube/cache/linux/amd64/v1.31.1/kubeadm -> /var/lib/minikube/binaries/v1.31.1/kubeadm
	I0914 17:07:17.294061   27433 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl
	I0914 17:07:17.294114   27433 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm
	I0914 17:07:17.294034   27433 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet.sha256
	I0914 17:07:17.294185   27433 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 17:07:17.307956   27433 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubeadm': No such file or directory
	I0914 17:07:17.307987   27433 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19643-8806/.minikube/cache/linux/amd64/v1.31.1/kubelet -> /var/lib/minikube/binaries/v1.31.1/kubelet
	I0914 17:07:17.307990   27433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/cache/linux/amd64/v1.31.1/kubeadm --> /var/lib/minikube/binaries/v1.31.1/kubeadm (58290328 bytes)
	I0914 17:07:17.308030   27433 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubectl': No such file or directory
	I0914 17:07:17.308057   27433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/cache/linux/amd64/v1.31.1/kubectl --> /var/lib/minikube/binaries/v1.31.1/kubectl (56381592 bytes)
	I0914 17:07:17.308068   27433 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet
	I0914 17:07:17.340653   27433 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubelet': No such file or directory
	I0914 17:07:17.340701   27433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/cache/linux/amd64/v1.31.1/kubelet --> /var/lib/minikube/binaries/v1.31.1/kubelet (76869944 bytes)
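
[Annotation] The binary-transfer lines above implement a simple existence check: `stat` each Kubernetes binary on the guest and copy the locally cached kubeadm, kubectl and kubelet over only when the stat fails. A minimal Go sketch of that check-then-copy pattern using the system ssh and scp clients; the host, key and file paths are placeholders.

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // existsRemote reports whether path exists on the remote host by running stat over ssh.
    func existsRemote(host, key, path string) bool {
    	cmd := exec.Command("ssh", "-i", key, host, "stat", path)
    	return cmd.Run() == nil // non-zero exit means the file is missing (or stat failed)
    }

    func main() {
    	host := "docker@192.168.39.39" // placeholder
    	key := "/path/to/id_rsa"       // placeholder
    	local := "/home/me/.cache/kubeadm"
    	remote := "/var/lib/minikube/binaries/v1.31.1/kubeadm"

    	if existsRemote(host, key, remote) {
    		fmt.Println("already present, skipping copy")
    		return
    	}
    	// Equivalent of the scp step in the log.
    	if out, err := exec.Command("scp", "-i", key, local, host+":"+remote).CombinedOutput(); err != nil {
    		panic(fmt.Sprintf("scp failed: %v: %s", err, out))
    	}
    	fmt.Println("copied", local, "->", remote)
    }
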
	I0914 17:07:18.120396   27433 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0914 17:07:18.130120   27433 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0914 17:07:18.147144   27433 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0914 17:07:18.163645   27433 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0914 17:07:18.179930   27433 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0914 17:07:18.183757   27433 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
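
[Annotation] The /etc/hosts update above is idempotent: it drops any existing control-plane.minikube.internal line, appends a fresh one, and copies the result back over /etc/hosts. The same pattern sketched in Go, writing the edited copy to a temporary file instead of replacing the system file directly.

    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    func main() {
    	const entry = "192.168.39.254\tcontrol-plane.minikube.internal"

    	data, err := os.ReadFile("/etc/hosts")
    	if err != nil {
    		panic(err)
    	}
    	var kept []string
    	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
    		// Drop any stale entry for the same name, keep everything else.
    		if strings.HasSuffix(line, "\tcontrol-plane.minikube.internal") {
    			continue
    		}
    		kept = append(kept, line)
    	}
    	kept = append(kept, entry)
    	out := strings.Join(kept, "\n") + "\n"
    	// Mirror the /tmp/h.$$ + cp step from the log: stage the file, then copy it into place with sudo.
    	if err := os.WriteFile("/tmp/hosts.new", []byte(out), 0644); err != nil {
    		panic(err)
    	}
    	fmt.Println("wrote /tmp/hosts.new; copy it over /etc/hosts with appropriate privileges")
    }
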
	I0914 17:07:18.195632   27433 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 17:07:18.309959   27433 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0914 17:07:18.327594   27433 host.go:66] Checking if "ha-929592" exists ...
	I0914 17:07:18.327934   27433 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 17:07:18.327995   27433 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 17:07:18.344958   27433 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39999
	I0914 17:07:18.345522   27433 main.go:141] libmachine: () Calling .GetVersion
	I0914 17:07:18.346106   27433 main.go:141] libmachine: Using API Version  1
	I0914 17:07:18.346127   27433 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 17:07:18.346507   27433 main.go:141] libmachine: () Calling .GetMachineName
	I0914 17:07:18.346686   27433 main.go:141] libmachine: (ha-929592) Calling .DriverName
	I0914 17:07:18.346847   27433 start.go:317] joinCluster: &{Name:ha-929592 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19643/minikube-v1.34.0-1726281733-19643-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726281268-19643@sha256:cce8be0c1ac4e3d852132008ef1cc1dcf5b79f708d025db83f146ae65db32e8e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-929592 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.54 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.148 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.39 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0914 17:07:18.346974   27433 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0914 17:07:18.346995   27433 main.go:141] libmachine: (ha-929592) Calling .GetSSHHostname
	I0914 17:07:18.350241   27433 main.go:141] libmachine: (ha-929592) DBG | domain ha-929592 has defined MAC address 52:54:00:5c:cb:09 in network mk-ha-929592
	I0914 17:07:18.350751   27433 main.go:141] libmachine: (ha-929592) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:cb:09", ip: ""} in network mk-ha-929592: {Iface:virbr1 ExpiryTime:2024-09-14 18:05:06 +0000 UTC Type:0 Mac:52:54:00:5c:cb:09 Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:ha-929592 Clientid:01:52:54:00:5c:cb:09}
	I0914 17:07:18.350781   27433 main.go:141] libmachine: (ha-929592) DBG | domain ha-929592 has defined IP address 192.168.39.54 and MAC address 52:54:00:5c:cb:09 in network mk-ha-929592
	I0914 17:07:18.350984   27433 main.go:141] libmachine: (ha-929592) Calling .GetSSHPort
	I0914 17:07:18.351165   27433 main.go:141] libmachine: (ha-929592) Calling .GetSSHKeyPath
	I0914 17:07:18.351322   27433 main.go:141] libmachine: (ha-929592) Calling .GetSSHUsername
	I0914 17:07:18.351493   27433 sshutil.go:53] new ssh client: &{IP:192.168.39.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/ha-929592/id_rsa Username:docker}
	I0914 17:07:18.506210   27433 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.39 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0914 17:07:18.506264   27433 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token u5ad97.nitviectgjwmq8kn --discovery-token-ca-cert-hash sha256:30a8b6e98108730ec92a246ad3f4e746211e79f910581e46cd69c4cd78384600 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-929592-m03 --control-plane --apiserver-advertise-address=192.168.39.39 --apiserver-bind-port=8443"
	I0914 17:07:41.528053   27433 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token u5ad97.nitviectgjwmq8kn --discovery-token-ca-cert-hash sha256:30a8b6e98108730ec92a246ad3f4e746211e79f910581e46cd69c4cd78384600 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-929592-m03 --control-plane --apiserver-advertise-address=192.168.39.39 --apiserver-bind-port=8443": (23.021765461s)
	I0914 17:07:41.528091   27433 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0914 17:07:42.019670   27433 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-929592-m03 minikube.k8s.io/updated_at=2024_09_14T17_07_42_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=fbeeb744274463b05401c917e5ab21bbaf5ef95a minikube.k8s.io/name=ha-929592 minikube.k8s.io/primary=false
	I0914 17:07:42.171268   27433 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-929592-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0914 17:07:42.295912   27433 start.go:319] duration metric: took 23.949060276s to joinCluster
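
The join sequence above is the standard kubeadm control-plane join that minikube drives over SSH (start.go:317/343): create a bootstrap token on the primary, then run kubeadm join with --control-plane on the new machine, label it, and remove the control-plane NoSchedule taint. As an illustrative sketch only, not minikube's own code, the same command could be issued from Go with os/exec; the token, CA hash, node name, and advertise address below are placeholders, and it assumes kubeadm is on PATH on the node where it runs.

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Placeholders only: use the real values printed by
	// `kubeadm token create --print-join-command` on the primary control plane.
	args := []string{
		"kubeadm", "join", "control-plane.minikube.internal:8443",
		"--token", "<bootstrap-token>",
		"--discovery-token-ca-cert-hash", "sha256:<ca-cert-hash>",
		"--control-plane",
		"--apiserver-advertise-address", "<node-ip>",
		"--apiserver-bind-port", "8443",
		"--cri-socket", "unix:///var/run/crio/crio.sock",
		"--node-name", "<node-name>",
		"--ignore-preflight-errors=all",
	}
	// Run the join as root, capturing kubeadm's combined output for the log.
	out, err := exec.Command("sudo", args...).CombinedOutput()
	fmt.Println(string(out))
	if err != nil {
		panic(err)
	}
}
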
	I0914 17:07:42.295986   27433 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.39.39 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0914 17:07:42.296305   27433 config.go:182] Loaded profile config "ha-929592": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0914 17:07:42.297747   27433 out.go:177] * Verifying Kubernetes components...
	I0914 17:07:42.299464   27433 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 17:07:42.487043   27433 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0914 17:07:42.509749   27433 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19643-8806/kubeconfig
	I0914 17:07:42.510103   27433 kapi.go:59] client config for ha-929592: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19643-8806/.minikube/profiles/ha-929592/client.crt", KeyFile:"/home/jenkins/minikube-integration/19643-8806/.minikube/profiles/ha-929592/client.key", CAFile:"/home/jenkins/minikube-integration/19643-8806/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f6fca0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0914 17:07:42.510224   27433 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.54:8443
	I0914 17:07:42.510505   27433 node_ready.go:35] waiting up to 6m0s for node "ha-929592-m03" to be "Ready" ...
	I0914 17:07:42.510592   27433 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-929592-m03
	I0914 17:07:42.510603   27433 round_trippers.go:469] Request Headers:
	I0914 17:07:42.510615   27433 round_trippers.go:473]     Accept: application/json, */*
	I0914 17:07:42.510623   27433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 17:07:42.514443   27433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 17:07:43.011413   27433 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-929592-m03
	I0914 17:07:43.011440   27433 round_trippers.go:469] Request Headers:
	I0914 17:07:43.011450   27433 round_trippers.go:473]     Accept: application/json, */*
	I0914 17:07:43.011455   27433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 17:07:43.014989   27433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 17:07:43.511236   27433 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-929592-m03
	I0914 17:07:43.511266   27433 round_trippers.go:469] Request Headers:
	I0914 17:07:43.511275   27433 round_trippers.go:473]     Accept: application/json, */*
	I0914 17:07:43.511279   27433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 17:07:43.514916   27433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 17:07:44.010785   27433 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-929592-m03
	I0914 17:07:44.010812   27433 round_trippers.go:469] Request Headers:
	I0914 17:07:44.010823   27433 round_trippers.go:473]     Accept: application/json, */*
	I0914 17:07:44.010833   27433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 17:07:44.014331   27433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 17:07:44.511105   27433 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-929592-m03
	I0914 17:07:44.511126   27433 round_trippers.go:469] Request Headers:
	I0914 17:07:44.511136   27433 round_trippers.go:473]     Accept: application/json, */*
	I0914 17:07:44.511141   27433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 17:07:44.515073   27433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 17:07:44.515807   27433 node_ready.go:53] node "ha-929592-m03" has status "Ready":"False"
	I0914 17:07:45.011166   27433 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-929592-m03
	I0914 17:07:45.011189   27433 round_trippers.go:469] Request Headers:
	I0914 17:07:45.011199   27433 round_trippers.go:473]     Accept: application/json, */*
	I0914 17:07:45.011205   27433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 17:07:45.014925   27433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 17:07:45.511405   27433 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-929592-m03
	I0914 17:07:45.511441   27433 round_trippers.go:469] Request Headers:
	I0914 17:07:45.511453   27433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 17:07:45.511460   27433 round_trippers.go:473]     Accept: application/json, */*
	I0914 17:07:45.515149   27433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 17:07:46.011420   27433 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-929592-m03
	I0914 17:07:46.011446   27433 round_trippers.go:469] Request Headers:
	I0914 17:07:46.011454   27433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 17:07:46.011458   27433 round_trippers.go:473]     Accept: application/json, */*
	I0914 17:07:46.016666   27433 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0914 17:07:46.511346   27433 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-929592-m03
	I0914 17:07:46.511372   27433 round_trippers.go:469] Request Headers:
	I0914 17:07:46.511384   27433 round_trippers.go:473]     Accept: application/json, */*
	I0914 17:07:46.511390   27433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 17:07:46.514823   27433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 17:07:47.010782   27433 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-929592-m03
	I0914 17:07:47.010803   27433 round_trippers.go:469] Request Headers:
	I0914 17:07:47.010811   27433 round_trippers.go:473]     Accept: application/json, */*
	I0914 17:07:47.010815   27433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 17:07:47.014205   27433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 17:07:47.015167   27433 node_ready.go:53] node "ha-929592-m03" has status "Ready":"False"
	I0914 17:07:47.511176   27433 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-929592-m03
	I0914 17:07:47.511204   27433 round_trippers.go:469] Request Headers:
	I0914 17:07:47.511215   27433 round_trippers.go:473]     Accept: application/json, */*
	I0914 17:07:47.511220   27433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 17:07:47.514771   27433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 17:07:48.011464   27433 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-929592-m03
	I0914 17:07:48.011495   27433 round_trippers.go:469] Request Headers:
	I0914 17:07:48.011508   27433 round_trippers.go:473]     Accept: application/json, */*
	I0914 17:07:48.011513   27433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 17:07:48.014851   27433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 17:07:48.510761   27433 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-929592-m03
	I0914 17:07:48.510781   27433 round_trippers.go:469] Request Headers:
	I0914 17:07:48.510790   27433 round_trippers.go:473]     Accept: application/json, */*
	I0914 17:07:48.510798   27433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 17:07:48.514178   27433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 17:07:49.010982   27433 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-929592-m03
	I0914 17:07:49.011004   27433 round_trippers.go:469] Request Headers:
	I0914 17:07:49.011012   27433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 17:07:49.011015   27433 round_trippers.go:473]     Accept: application/json, */*
	I0914 17:07:49.014046   27433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 17:07:49.510942   27433 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-929592-m03
	I0914 17:07:49.510965   27433 round_trippers.go:469] Request Headers:
	I0914 17:07:49.510973   27433 round_trippers.go:473]     Accept: application/json, */*
	I0914 17:07:49.510977   27433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 17:07:49.514316   27433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 17:07:49.515138   27433 node_ready.go:53] node "ha-929592-m03" has status "Ready":"False"
	I0914 17:07:50.011544   27433 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-929592-m03
	I0914 17:07:50.011568   27433 round_trippers.go:469] Request Headers:
	I0914 17:07:50.011581   27433 round_trippers.go:473]     Accept: application/json, */*
	I0914 17:07:50.011586   27433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 17:07:50.015427   27433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 17:07:50.510672   27433 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-929592-m03
	I0914 17:07:50.510694   27433 round_trippers.go:469] Request Headers:
	I0914 17:07:50.510702   27433 round_trippers.go:473]     Accept: application/json, */*
	I0914 17:07:50.510710   27433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 17:07:50.513629   27433 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 17:07:51.011048   27433 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-929592-m03
	I0914 17:07:51.011070   27433 round_trippers.go:469] Request Headers:
	I0914 17:07:51.011078   27433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 17:07:51.011084   27433 round_trippers.go:473]     Accept: application/json, */*
	I0914 17:07:51.014109   27433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 17:07:51.511653   27433 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-929592-m03
	I0914 17:07:51.511678   27433 round_trippers.go:469] Request Headers:
	I0914 17:07:51.511689   27433 round_trippers.go:473]     Accept: application/json, */*
	I0914 17:07:51.511695   27433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 17:07:51.515229   27433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 17:07:51.515942   27433 node_ready.go:53] node "ha-929592-m03" has status "Ready":"False"
	I0914 17:07:52.011425   27433 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-929592-m03
	I0914 17:07:52.011452   27433 round_trippers.go:469] Request Headers:
	I0914 17:07:52.011464   27433 round_trippers.go:473]     Accept: application/json, */*
	I0914 17:07:52.011469   27433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 17:07:52.019846   27433 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0914 17:07:52.510858   27433 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-929592-m03
	I0914 17:07:52.510880   27433 round_trippers.go:469] Request Headers:
	I0914 17:07:52.510891   27433 round_trippers.go:473]     Accept: application/json, */*
	I0914 17:07:52.510898   27433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 17:07:52.514404   27433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 17:07:53.011440   27433 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-929592-m03
	I0914 17:07:53.011465   27433 round_trippers.go:469] Request Headers:
	I0914 17:07:53.011477   27433 round_trippers.go:473]     Accept: application/json, */*
	I0914 17:07:53.011485   27433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 17:07:53.014917   27433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 17:07:53.511224   27433 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-929592-m03
	I0914 17:07:53.511245   27433 round_trippers.go:469] Request Headers:
	I0914 17:07:53.511253   27433 round_trippers.go:473]     Accept: application/json, */*
	I0914 17:07:53.511257   27433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 17:07:53.514437   27433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 17:07:54.011402   27433 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-929592-m03
	I0914 17:07:54.011428   27433 round_trippers.go:469] Request Headers:
	I0914 17:07:54.011440   27433 round_trippers.go:473]     Accept: application/json, */*
	I0914 17:07:54.011448   27433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 17:07:54.015375   27433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 17:07:54.015952   27433 node_ready.go:53] node "ha-929592-m03" has status "Ready":"False"
	I0914 17:07:54.511426   27433 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-929592-m03
	I0914 17:07:54.511452   27433 round_trippers.go:469] Request Headers:
	I0914 17:07:54.511463   27433 round_trippers.go:473]     Accept: application/json, */*
	I0914 17:07:54.511472   27433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 17:07:54.514757   27433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 17:07:55.011159   27433 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-929592-m03
	I0914 17:07:55.011198   27433 round_trippers.go:469] Request Headers:
	I0914 17:07:55.011209   27433 round_trippers.go:473]     Accept: application/json, */*
	I0914 17:07:55.011214   27433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 17:07:55.015773   27433 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0914 17:07:55.511126   27433 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-929592-m03
	I0914 17:07:55.511150   27433 round_trippers.go:469] Request Headers:
	I0914 17:07:55.511157   27433 round_trippers.go:473]     Accept: application/json, */*
	I0914 17:07:55.511162   27433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 17:07:55.514253   27433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 17:07:56.011556   27433 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-929592-m03
	I0914 17:07:56.011580   27433 round_trippers.go:469] Request Headers:
	I0914 17:07:56.011591   27433 round_trippers.go:473]     Accept: application/json, */*
	I0914 17:07:56.011597   27433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 17:07:56.014897   27433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 17:07:56.510753   27433 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-929592-m03
	I0914 17:07:56.510778   27433 round_trippers.go:469] Request Headers:
	I0914 17:07:56.510788   27433 round_trippers.go:473]     Accept: application/json, */*
	I0914 17:07:56.510793   27433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 17:07:56.513948   27433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 17:07:56.514410   27433 node_ready.go:53] node "ha-929592-m03" has status "Ready":"False"
	I0914 17:07:57.010683   27433 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-929592-m03
	I0914 17:07:57.010707   27433 round_trippers.go:469] Request Headers:
	I0914 17:07:57.010717   27433 round_trippers.go:473]     Accept: application/json, */*
	I0914 17:07:57.010721   27433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 17:07:57.014048   27433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 17:07:57.511695   27433 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-929592-m03
	I0914 17:07:57.511717   27433 round_trippers.go:469] Request Headers:
	I0914 17:07:57.511726   27433 round_trippers.go:473]     Accept: application/json, */*
	I0914 17:07:57.511731   27433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 17:07:57.515681   27433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 17:07:58.011422   27433 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-929592-m03
	I0914 17:07:58.011444   27433 round_trippers.go:469] Request Headers:
	I0914 17:07:58.011452   27433 round_trippers.go:473]     Accept: application/json, */*
	I0914 17:07:58.011457   27433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 17:07:58.014905   27433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 17:07:58.511392   27433 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-929592-m03
	I0914 17:07:58.511414   27433 round_trippers.go:469] Request Headers:
	I0914 17:07:58.511423   27433 round_trippers.go:473]     Accept: application/json, */*
	I0914 17:07:58.511431   27433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 17:07:58.514718   27433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 17:07:58.515272   27433 node_ready.go:53] node "ha-929592-m03" has status "Ready":"False"
	I0914 17:07:59.010735   27433 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-929592-m03
	I0914 17:07:59.010761   27433 round_trippers.go:469] Request Headers:
	I0914 17:07:59.010769   27433 round_trippers.go:473]     Accept: application/json, */*
	I0914 17:07:59.010772   27433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 17:07:59.014521   27433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 17:07:59.511489   27433 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-929592-m03
	I0914 17:07:59.511513   27433 round_trippers.go:469] Request Headers:
	I0914 17:07:59.511523   27433 round_trippers.go:473]     Accept: application/json, */*
	I0914 17:07:59.511530   27433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 17:07:59.514753   27433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 17:07:59.515339   27433 node_ready.go:49] node "ha-929592-m03" has status "Ready":"True"
	I0914 17:07:59.515357   27433 node_ready.go:38] duration metric: took 17.004834009s for node "ha-929592-m03" to be "Ready" ...
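
The node_ready wait above is a simple poll: re-GET the Node object roughly every 500ms until its Ready condition reports True (here after about 17s). Below is a minimal client-go sketch of the same check, not minikube's implementation; the kubeconfig path is a placeholder and it assumes client-go v0.18+ signatures (context-aware Get).

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// nodeReady reports whether the node carries a Ready condition with status True.
func nodeReady(n *corev1.Node) bool {
	for _, c := range n.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Placeholder kubeconfig path; point it at the profile's kubeconfig.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	for {
		n, err := cs.CoreV1().Nodes().Get(context.TODO(), "ha-929592-m03", metav1.GetOptions{})
		if err == nil && nodeReady(n) {
			fmt.Println("node is Ready")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
}

The pod_ready loop that follows does the analogous thing for system-critical pods, checking each pod's Ready condition instead of the node's.
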
	I0914 17:07:59.515365   27433 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 17:07:59.515434   27433 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/namespaces/kube-system/pods
	I0914 17:07:59.515444   27433 round_trippers.go:469] Request Headers:
	I0914 17:07:59.515450   27433 round_trippers.go:473]     Accept: application/json, */*
	I0914 17:07:59.515455   27433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 17:07:59.522045   27433 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0914 17:07:59.528668   27433 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-66txm" in "kube-system" namespace to be "Ready" ...
	I0914 17:07:59.528756   27433 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-66txm
	I0914 17:07:59.528767   27433 round_trippers.go:469] Request Headers:
	I0914 17:07:59.528774   27433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 17:07:59.528781   27433 round_trippers.go:473]     Accept: application/json, */*
	I0914 17:07:59.531693   27433 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 17:07:59.532279   27433 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-929592
	I0914 17:07:59.532294   27433 round_trippers.go:469] Request Headers:
	I0914 17:07:59.532303   27433 round_trippers.go:473]     Accept: application/json, */*
	I0914 17:07:59.532308   27433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 17:07:59.534773   27433 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 17:07:59.535256   27433 pod_ready.go:93] pod "coredns-7c65d6cfc9-66txm" in "kube-system" namespace has status "Ready":"True"
	I0914 17:07:59.535273   27433 pod_ready.go:82] duration metric: took 6.579112ms for pod "coredns-7c65d6cfc9-66txm" in "kube-system" namespace to be "Ready" ...
	I0914 17:07:59.535288   27433 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-dpdz4" in "kube-system" namespace to be "Ready" ...
	I0914 17:07:59.535372   27433 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-dpdz4
	I0914 17:07:59.535382   27433 round_trippers.go:469] Request Headers:
	I0914 17:07:59.535394   27433 round_trippers.go:473]     Accept: application/json, */*
	I0914 17:07:59.535404   27433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 17:07:59.537717   27433 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 17:07:59.538650   27433 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-929592
	I0914 17:07:59.538663   27433 round_trippers.go:469] Request Headers:
	I0914 17:07:59.538673   27433 round_trippers.go:473]     Accept: application/json, */*
	I0914 17:07:59.538682   27433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 17:07:59.541062   27433 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 17:07:59.541448   27433 pod_ready.go:93] pod "coredns-7c65d6cfc9-dpdz4" in "kube-system" namespace has status "Ready":"True"
	I0914 17:07:59.541466   27433 pod_ready.go:82] duration metric: took 6.151987ms for pod "coredns-7c65d6cfc9-dpdz4" in "kube-system" namespace to be "Ready" ...
	I0914 17:07:59.541478   27433 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-929592" in "kube-system" namespace to be "Ready" ...
	I0914 17:07:59.541535   27433 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/namespaces/kube-system/pods/etcd-ha-929592
	I0914 17:07:59.541545   27433 round_trippers.go:469] Request Headers:
	I0914 17:07:59.541555   27433 round_trippers.go:473]     Accept: application/json, */*
	I0914 17:07:59.541564   27433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 17:07:59.544527   27433 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 17:07:59.545638   27433 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-929592
	I0914 17:07:59.545655   27433 round_trippers.go:469] Request Headers:
	I0914 17:07:59.545665   27433 round_trippers.go:473]     Accept: application/json, */*
	I0914 17:07:59.545671   27433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 17:07:59.548376   27433 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 17:07:59.548981   27433 pod_ready.go:93] pod "etcd-ha-929592" in "kube-system" namespace has status "Ready":"True"
	I0914 17:07:59.548996   27433 pod_ready.go:82] duration metric: took 7.512177ms for pod "etcd-ha-929592" in "kube-system" namespace to be "Ready" ...
	I0914 17:07:59.549005   27433 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-929592-m02" in "kube-system" namespace to be "Ready" ...
	I0914 17:07:59.549051   27433 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/namespaces/kube-system/pods/etcd-ha-929592-m02
	I0914 17:07:59.549058   27433 round_trippers.go:469] Request Headers:
	I0914 17:07:59.549065   27433 round_trippers.go:473]     Accept: application/json, */*
	I0914 17:07:59.549070   27433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 17:07:59.551588   27433 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 17:07:59.552368   27433 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-929592-m02
	I0914 17:07:59.552383   27433 round_trippers.go:469] Request Headers:
	I0914 17:07:59.552390   27433 round_trippers.go:473]     Accept: application/json, */*
	I0914 17:07:59.552394   27433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 17:07:59.554872   27433 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 17:07:59.555856   27433 pod_ready.go:93] pod "etcd-ha-929592-m02" in "kube-system" namespace has status "Ready":"True"
	I0914 17:07:59.555876   27433 pod_ready.go:82] duration metric: took 6.864629ms for pod "etcd-ha-929592-m02" in "kube-system" namespace to be "Ready" ...
	I0914 17:07:59.555887   27433 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-929592-m03" in "kube-system" namespace to be "Ready" ...
	I0914 17:07:59.712256   27433 request.go:632] Waited for 156.310735ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.54:8443/api/v1/namespaces/kube-system/pods/etcd-ha-929592-m03
	I0914 17:07:59.712343   27433 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/namespaces/kube-system/pods/etcd-ha-929592-m03
	I0914 17:07:59.712353   27433 round_trippers.go:469] Request Headers:
	I0914 17:07:59.712361   27433 round_trippers.go:473]     Accept: application/json, */*
	I0914 17:07:59.712365   27433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 17:07:59.715318   27433 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 17:07:59.912436   27433 request.go:632] Waited for 196.378799ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.54:8443/api/v1/nodes/ha-929592-m03
	I0914 17:07:59.912490   27433 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-929592-m03
	I0914 17:07:59.912496   27433 round_trippers.go:469] Request Headers:
	I0914 17:07:59.912506   27433 round_trippers.go:473]     Accept: application/json, */*
	I0914 17:07:59.912516   27433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 17:07:59.915904   27433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 17:07:59.916335   27433 pod_ready.go:93] pod "etcd-ha-929592-m03" in "kube-system" namespace has status "Ready":"True"
	I0914 17:07:59.916351   27433 pod_ready.go:82] duration metric: took 360.458353ms for pod "etcd-ha-929592-m03" in "kube-system" namespace to be "Ready" ...
	I0914 17:07:59.916366   27433 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-929592" in "kube-system" namespace to be "Ready" ...
	I0914 17:08:00.111821   27433 request.go:632] Waited for 195.355844ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.54:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-929592
	I0914 17:08:00.111900   27433 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-929592
	I0914 17:08:00.111946   27433 round_trippers.go:469] Request Headers:
	I0914 17:08:00.111962   27433 round_trippers.go:473]     Accept: application/json, */*
	I0914 17:08:00.111970   27433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 17:08:00.115605   27433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 17:08:00.311543   27433 request.go:632] Waited for 195.332136ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.54:8443/api/v1/nodes/ha-929592
	I0914 17:08:00.311595   27433 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-929592
	I0914 17:08:00.311602   27433 round_trippers.go:469] Request Headers:
	I0914 17:08:00.311610   27433 round_trippers.go:473]     Accept: application/json, */*
	I0914 17:08:00.311615   27433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 17:08:00.314945   27433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 17:08:00.315616   27433 pod_ready.go:93] pod "kube-apiserver-ha-929592" in "kube-system" namespace has status "Ready":"True"
	I0914 17:08:00.315636   27433 pod_ready.go:82] duration metric: took 399.261529ms for pod "kube-apiserver-ha-929592" in "kube-system" namespace to be "Ready" ...
	I0914 17:08:00.315645   27433 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-929592-m02" in "kube-system" namespace to be "Ready" ...
	I0914 17:08:00.511723   27433 request.go:632] Waited for 196.0201ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.54:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-929592-m02
	I0914 17:08:00.511801   27433 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-929592-m02
	I0914 17:08:00.511808   27433 round_trippers.go:469] Request Headers:
	I0914 17:08:00.511816   27433 round_trippers.go:473]     Accept: application/json, */*
	I0914 17:08:00.511821   27433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 17:08:00.515903   27433 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0914 17:08:00.711977   27433 request.go:632] Waited for 195.376236ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.54:8443/api/v1/nodes/ha-929592-m02
	I0914 17:08:00.712065   27433 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-929592-m02
	I0914 17:08:00.712075   27433 round_trippers.go:469] Request Headers:
	I0914 17:08:00.712086   27433 round_trippers.go:473]     Accept: application/json, */*
	I0914 17:08:00.712110   27433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 17:08:00.715693   27433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 17:08:00.716183   27433 pod_ready.go:93] pod "kube-apiserver-ha-929592-m02" in "kube-system" namespace has status "Ready":"True"
	I0914 17:08:00.716205   27433 pod_ready.go:82] duration metric: took 400.553404ms for pod "kube-apiserver-ha-929592-m02" in "kube-system" namespace to be "Ready" ...
	I0914 17:08:00.716214   27433 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-929592-m03" in "kube-system" namespace to be "Ready" ...
	I0914 17:08:00.912270   27433 request.go:632] Waited for 195.977695ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.54:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-929592-m03
	I0914 17:08:00.912360   27433 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-929592-m03
	I0914 17:08:00.912372   27433 round_trippers.go:469] Request Headers:
	I0914 17:08:00.912384   27433 round_trippers.go:473]     Accept: application/json, */*
	I0914 17:08:00.912391   27433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 17:08:00.915823   27433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 17:08:01.111913   27433 request.go:632] Waited for 195.353778ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.54:8443/api/v1/nodes/ha-929592-m03
	I0914 17:08:01.111967   27433 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-929592-m03
	I0914 17:08:01.111972   27433 round_trippers.go:469] Request Headers:
	I0914 17:08:01.111980   27433 round_trippers.go:473]     Accept: application/json, */*
	I0914 17:08:01.111987   27433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 17:08:01.115411   27433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 17:08:01.115930   27433 pod_ready.go:93] pod "kube-apiserver-ha-929592-m03" in "kube-system" namespace has status "Ready":"True"
	I0914 17:08:01.115948   27433 pod_ready.go:82] duration metric: took 399.728067ms for pod "kube-apiserver-ha-929592-m03" in "kube-system" namespace to be "Ready" ...
	I0914 17:08:01.115959   27433 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-929592" in "kube-system" namespace to be "Ready" ...
	I0914 17:08:01.312018   27433 request.go:632] Waited for 196.000899ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.54:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-929592
	I0914 17:08:01.312096   27433 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-929592
	I0914 17:08:01.312102   27433 round_trippers.go:469] Request Headers:
	I0914 17:08:01.312109   27433 round_trippers.go:473]     Accept: application/json, */*
	I0914 17:08:01.312118   27433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 17:08:01.315329   27433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 17:08:01.512459   27433 request.go:632] Waited for 196.354283ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.54:8443/api/v1/nodes/ha-929592
	I0914 17:08:01.512516   27433 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-929592
	I0914 17:08:01.512523   27433 round_trippers.go:469] Request Headers:
	I0914 17:08:01.512540   27433 round_trippers.go:473]     Accept: application/json, */*
	I0914 17:08:01.512551   27433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 17:08:01.515821   27433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 17:08:01.516343   27433 pod_ready.go:93] pod "kube-controller-manager-ha-929592" in "kube-system" namespace has status "Ready":"True"
	I0914 17:08:01.516360   27433 pod_ready.go:82] duration metric: took 400.394788ms for pod "kube-controller-manager-ha-929592" in "kube-system" namespace to be "Ready" ...
	I0914 17:08:01.516369   27433 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-929592-m02" in "kube-system" namespace to be "Ready" ...
	I0914 17:08:01.712409   27433 request.go:632] Waited for 195.9831ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.54:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-929592-m02
	I0914 17:08:01.712468   27433 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-929592-m02
	I0914 17:08:01.712473   27433 round_trippers.go:469] Request Headers:
	I0914 17:08:01.712480   27433 round_trippers.go:473]     Accept: application/json, */*
	I0914 17:08:01.712494   27433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 17:08:01.715865   27433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 17:08:01.911801   27433 request.go:632] Waited for 195.22504ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.54:8443/api/v1/nodes/ha-929592-m02
	I0914 17:08:01.911855   27433 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-929592-m02
	I0914 17:08:01.911860   27433 round_trippers.go:469] Request Headers:
	I0914 17:08:01.911868   27433 round_trippers.go:473]     Accept: application/json, */*
	I0914 17:08:01.911872   27433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 17:08:01.914916   27433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 17:08:01.915735   27433 pod_ready.go:93] pod "kube-controller-manager-ha-929592-m02" in "kube-system" namespace has status "Ready":"True"
	I0914 17:08:01.915756   27433 pod_ready.go:82] duration metric: took 399.381165ms for pod "kube-controller-manager-ha-929592-m02" in "kube-system" namespace to be "Ready" ...
	I0914 17:08:01.915766   27433 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-929592-m03" in "kube-system" namespace to be "Ready" ...
	I0914 17:08:02.111729   27433 request.go:632] Waited for 195.905392ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.54:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-929592-m03
	I0914 17:08:02.111808   27433 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-929592-m03
	I0914 17:08:02.111813   27433 round_trippers.go:469] Request Headers:
	I0914 17:08:02.111820   27433 round_trippers.go:473]     Accept: application/json, */*
	I0914 17:08:02.111825   27433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 17:08:02.115762   27433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 17:08:02.311712   27433 request.go:632] Waited for 195.305414ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.54:8443/api/v1/nodes/ha-929592-m03
	I0914 17:08:02.311765   27433 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-929592-m03
	I0914 17:08:02.311771   27433 round_trippers.go:469] Request Headers:
	I0914 17:08:02.311778   27433 round_trippers.go:473]     Accept: application/json, */*
	I0914 17:08:02.311782   27433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 17:08:02.315533   27433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 17:08:02.316362   27433 pod_ready.go:93] pod "kube-controller-manager-ha-929592-m03" in "kube-system" namespace has status "Ready":"True"
	I0914 17:08:02.316379   27433 pod_ready.go:82] duration metric: took 400.606521ms for pod "kube-controller-manager-ha-929592-m03" in "kube-system" namespace to be "Ready" ...
	I0914 17:08:02.316388   27433 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-59tn8" in "kube-system" namespace to be "Ready" ...
	I0914 17:08:02.512364   27433 request.go:632] Waited for 195.91592ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.54:8443/api/v1/namespaces/kube-system/pods/kube-proxy-59tn8
	I0914 17:08:02.512416   27433 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/namespaces/kube-system/pods/kube-proxy-59tn8
	I0914 17:08:02.512421   27433 round_trippers.go:469] Request Headers:
	I0914 17:08:02.512432   27433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 17:08:02.512435   27433 round_trippers.go:473]     Accept: application/json, */*
	I0914 17:08:02.515841   27433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 17:08:02.712317   27433 request.go:632] Waited for 195.69444ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.54:8443/api/v1/nodes/ha-929592-m03
	I0914 17:08:02.712371   27433 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-929592-m03
	I0914 17:08:02.712376   27433 round_trippers.go:469] Request Headers:
	I0914 17:08:02.712387   27433 round_trippers.go:473]     Accept: application/json, */*
	I0914 17:08:02.712391   27433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 17:08:02.715600   27433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 17:08:02.716105   27433 pod_ready.go:93] pod "kube-proxy-59tn8" in "kube-system" namespace has status "Ready":"True"
	I0914 17:08:02.716120   27433 pod_ready.go:82] duration metric: took 399.72639ms for pod "kube-proxy-59tn8" in "kube-system" namespace to be "Ready" ...
	I0914 17:08:02.716129   27433 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-6zqmd" in "kube-system" namespace to be "Ready" ...
	I0914 17:08:02.912231   27433 request.go:632] Waited for 196.029636ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.54:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6zqmd
	I0914 17:08:02.912304   27433 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6zqmd
	I0914 17:08:02.912312   27433 round_trippers.go:469] Request Headers:
	I0914 17:08:02.912331   27433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 17:08:02.912340   27433 round_trippers.go:473]     Accept: application/json, */*
	I0914 17:08:02.915878   27433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 17:08:03.111910   27433 request.go:632] Waited for 195.368033ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.54:8443/api/v1/nodes/ha-929592
	I0914 17:08:03.111964   27433 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-929592
	I0914 17:08:03.111970   27433 round_trippers.go:469] Request Headers:
	I0914 17:08:03.111980   27433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 17:08:03.111986   27433 round_trippers.go:473]     Accept: application/json, */*
	I0914 17:08:03.115005   27433 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 17:08:03.115607   27433 pod_ready.go:93] pod "kube-proxy-6zqmd" in "kube-system" namespace has status "Ready":"True"
	I0914 17:08:03.115625   27433 pod_ready.go:82] duration metric: took 399.488925ms for pod "kube-proxy-6zqmd" in "kube-system" namespace to be "Ready" ...
	I0914 17:08:03.115638   27433 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-bcfkb" in "kube-system" namespace to be "Ready" ...
	I0914 17:08:03.311737   27433 request.go:632] Waited for 196.030438ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.54:8443/api/v1/namespaces/kube-system/pods/kube-proxy-bcfkb
	I0914 17:08:03.311790   27433 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/namespaces/kube-system/pods/kube-proxy-bcfkb
	I0914 17:08:03.311805   27433 round_trippers.go:469] Request Headers:
	I0914 17:08:03.311829   27433 round_trippers.go:473]     Accept: application/json, */*
	I0914 17:08:03.311838   27433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 17:08:03.315138   27433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 17:08:03.512204   27433 request.go:632] Waited for 196.423291ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.54:8443/api/v1/nodes/ha-929592-m02
	I0914 17:08:03.512312   27433 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-929592-m02
	I0914 17:08:03.512324   27433 round_trippers.go:469] Request Headers:
	I0914 17:08:03.512334   27433 round_trippers.go:473]     Accept: application/json, */*
	I0914 17:08:03.512342   27433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 17:08:03.515939   27433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 17:08:03.516428   27433 pod_ready.go:93] pod "kube-proxy-bcfkb" in "kube-system" namespace has status "Ready":"True"
	I0914 17:08:03.516445   27433 pod_ready.go:82] duration metric: took 400.79981ms for pod "kube-proxy-bcfkb" in "kube-system" namespace to be "Ready" ...
	I0914 17:08:03.516453   27433 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-929592" in "kube-system" namespace to be "Ready" ...
	I0914 17:08:03.712532   27433 request.go:632] Waited for 196.016889ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.54:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-929592
	I0914 17:08:03.712629   27433 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-929592
	I0914 17:08:03.712640   27433 round_trippers.go:469] Request Headers:
	I0914 17:08:03.712658   27433 round_trippers.go:473]     Accept: application/json, */*
	I0914 17:08:03.712681   27433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 17:08:03.715857   27433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 17:08:03.911726   27433 request.go:632] Waited for 195.299661ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.54:8443/api/v1/nodes/ha-929592
	I0914 17:08:03.911809   27433 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-929592
	I0914 17:08:03.911815   27433 round_trippers.go:469] Request Headers:
	I0914 17:08:03.911823   27433 round_trippers.go:473]     Accept: application/json, */*
	I0914 17:08:03.911826   27433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 17:08:03.915494   27433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 17:08:03.916421   27433 pod_ready.go:93] pod "kube-scheduler-ha-929592" in "kube-system" namespace has status "Ready":"True"
	I0914 17:08:03.916442   27433 pod_ready.go:82] duration metric: took 399.98128ms for pod "kube-scheduler-ha-929592" in "kube-system" namespace to be "Ready" ...
	I0914 17:08:03.916454   27433 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-929592-m02" in "kube-system" namespace to be "Ready" ...
	I0914 17:08:04.112507   27433 request.go:632] Waited for 195.977843ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.54:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-929592-m02
	I0914 17:08:04.112577   27433 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-929592-m02
	I0914 17:08:04.112583   27433 round_trippers.go:469] Request Headers:
	I0914 17:08:04.112591   27433 round_trippers.go:473]     Accept: application/json, */*
	I0914 17:08:04.112595   27433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 17:08:04.116079   27433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 17:08:04.311999   27433 request.go:632] Waited for 195.359722ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.54:8443/api/v1/nodes/ha-929592-m02
	I0914 17:08:04.312069   27433 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-929592-m02
	I0914 17:08:04.312075   27433 round_trippers.go:469] Request Headers:
	I0914 17:08:04.312084   27433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 17:08:04.312092   27433 round_trippers.go:473]     Accept: application/json, */*
	I0914 17:08:04.315519   27433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 17:08:04.316009   27433 pod_ready.go:93] pod "kube-scheduler-ha-929592-m02" in "kube-system" namespace has status "Ready":"True"
	I0914 17:08:04.316029   27433 pod_ready.go:82] duration metric: took 399.567246ms for pod "kube-scheduler-ha-929592-m02" in "kube-system" namespace to be "Ready" ...
	I0914 17:08:04.316039   27433 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-929592-m03" in "kube-system" namespace to be "Ready" ...
	I0914 17:08:04.512295   27433 request.go:632] Waited for 196.193669ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.54:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-929592-m03
	I0914 17:08:04.512364   27433 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-929592-m03
	I0914 17:08:04.512370   27433 round_trippers.go:469] Request Headers:
	I0914 17:08:04.512378   27433 round_trippers.go:473]     Accept: application/json, */*
	I0914 17:08:04.512382   27433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 17:08:04.515471   27433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 17:08:04.712510   27433 request.go:632] Waited for 196.379641ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.54:8443/api/v1/nodes/ha-929592-m03
	I0914 17:08:04.712573   27433 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-929592-m03
	I0914 17:08:04.712578   27433 round_trippers.go:469] Request Headers:
	I0914 17:08:04.712586   27433 round_trippers.go:473]     Accept: application/json, */*
	I0914 17:08:04.712590   27433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 17:08:04.715934   27433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 17:08:04.716488   27433 pod_ready.go:93] pod "kube-scheduler-ha-929592-m03" in "kube-system" namespace has status "Ready":"True"
	I0914 17:08:04.716509   27433 pod_ready.go:82] duration metric: took 400.462713ms for pod "kube-scheduler-ha-929592-m03" in "kube-system" namespace to be "Ready" ...
	I0914 17:08:04.716525   27433 pod_ready.go:39] duration metric: took 5.201150381s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 17:08:04.716544   27433 api_server.go:52] waiting for apiserver process to appear ...
	I0914 17:08:04.716616   27433 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 17:08:04.733284   27433 api_server.go:72] duration metric: took 22.437250379s to wait for apiserver process to appear ...
	I0914 17:08:04.733311   27433 api_server.go:88] waiting for apiserver healthz status ...
	I0914 17:08:04.733349   27433 api_server.go:253] Checking apiserver healthz at https://192.168.39.54:8443/healthz ...
	I0914 17:08:04.738026   27433 api_server.go:279] https://192.168.39.54:8443/healthz returned 200:
	ok
	I0914 17:08:04.738103   27433 round_trippers.go:463] GET https://192.168.39.54:8443/version
	I0914 17:08:04.738113   27433 round_trippers.go:469] Request Headers:
	I0914 17:08:04.738124   27433 round_trippers.go:473]     Accept: application/json, */*
	I0914 17:08:04.738134   27433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 17:08:04.739076   27433 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0914 17:08:04.739139   27433 api_server.go:141] control plane version: v1.31.1
	I0914 17:08:04.739154   27433 api_server.go:131] duration metric: took 5.836544ms to wait for apiserver health ...
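
The healthz wait above is just an HTTPS GET of /healthz on the API server, which returns the plain-text body "ok" when healthy; with default RBAC the system:public-info-viewer binding allows this without credentials. A minimal sketch follows, assuming the cluster CA file path (placeholder here) so the server certificate can be verified; the endpoint is the one from this run.

package main

import (
	"crypto/tls"
	"crypto/x509"
	"fmt"
	"io"
	"net/http"
	"os"
)

func main() {
	// Placeholder path; minikube keeps the cluster CA under its .minikube directory.
	caPEM, err := os.ReadFile("/path/to/.minikube/ca.crt")
	if err != nil {
		panic(err)
	}
	pool := x509.NewCertPool()
	pool.AppendCertsFromPEM(caPEM)

	client := &http.Client{
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{RootCAs: pool},
		},
	}
	resp, err := client.Get("https://192.168.39.54:8443/healthz")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("%d %s\n", resp.StatusCode, string(body)) // expect: 200 ok
}
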
	I0914 17:08:04.739161   27433 system_pods.go:43] waiting for kube-system pods to appear ...
	I0914 17:08:04.911477   27433 request.go:632] Waited for 172.249655ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.54:8443/api/v1/namespaces/kube-system/pods
	I0914 17:08:04.911556   27433 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/namespaces/kube-system/pods
	I0914 17:08:04.911563   27433 round_trippers.go:469] Request Headers:
	I0914 17:08:04.911571   27433 round_trippers.go:473]     Accept: application/json, */*
	I0914 17:08:04.911578   27433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 17:08:04.924316   27433 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0914 17:08:04.931607   27433 system_pods.go:59] 24 kube-system pods found
	I0914 17:08:04.931637   27433 system_pods.go:61] "coredns-7c65d6cfc9-66txm" [abf3ed52-ab5a-4415-a8a9-78e567d60348] Running
	I0914 17:08:04.931643   27433 system_pods.go:61] "coredns-7c65d6cfc9-dpdz4" [2a751c8d-890c-402e-846f-8f61e3fd1965] Running
	I0914 17:08:04.931648   27433 system_pods.go:61] "etcd-ha-929592" [44b8df66-0b5f-4b5b-a901-92161d29df28] Running
	I0914 17:08:04.931651   27433 system_pods.go:61] "etcd-ha-929592-m02" [fe6343ec-40b1-4808-8902-041b935081bf] Running
	I0914 17:08:04.931654   27433 system_pods.go:61] "etcd-ha-929592-m03" [2542afd7-8c6a-4c02-aa3e-915d68aae931] Running
	I0914 17:08:04.931657   27433 system_pods.go:61] "kindnet-fw757" [51a38d95-fd50-4c05-a75d-a3dfeae127bd] Running
	I0914 17:08:04.931660   27433 system_pods.go:61] "kindnet-j7mjh" [8d1280e5-c9aa-4625-9dfc-14da09ba4849] Running
	I0914 17:08:04.931663   27433 system_pods.go:61] "kindnet-tnjsl" [ec9f109d-14b3-4e4d-9530-4ae493984cc5] Running
	I0914 17:08:04.931666   27433 system_pods.go:61] "kube-apiserver-ha-929592" [fe3e7895-32dc-4542-879c-9bb609604c69] Running
	I0914 17:08:04.931669   27433 system_pods.go:61] "kube-apiserver-ha-929592-m02" [4544a586-c111-4461-8f25-a3843da19bfb] Running
	I0914 17:08:04.931672   27433 system_pods.go:61] "kube-apiserver-ha-929592-m03" [07b3480d-6b12-42c7-a18f-587f6b55ec3d] Running
	I0914 17:08:04.931676   27433 system_pods.go:61] "kube-controller-manager-ha-929592" [12a2c768-5d90-4036-aff7-d80da243c602] Running
	I0914 17:08:04.931679   27433 system_pods.go:61] "kube-controller-manager-ha-929592-m02" [bb5d3040-c09e-4eb6-94a3-4bdb34e4e658] Running
	I0914 17:08:04.931682   27433 system_pods.go:61] "kube-controller-manager-ha-929592-m03" [e0390d32-83b3-473c-a451-ea8d75b17d27] Running
	I0914 17:08:04.931685   27433 system_pods.go:61] "kube-proxy-59tn8" [fcc0929a-58ed-4bd8-9e93-b14e6d49eeef] Running
	I0914 17:08:04.931687   27433 system_pods.go:61] "kube-proxy-6zqmd" [b7beddc8-ce6a-44ed-b3e8-423baf620bbb] Running
	I0914 17:08:04.931691   27433 system_pods.go:61] "kube-proxy-bcfkb" [f2ed6784-8935-4b20-9321-650ffb8dacda] Running
	I0914 17:08:04.931693   27433 system_pods.go:61] "kube-scheduler-ha-929592" [02b347db-39cc-49d5-a736-05957f446708] Running
	I0914 17:08:04.931696   27433 system_pods.go:61] "kube-scheduler-ha-929592-m02" [a5dde5dc-208f-47c3-903f-ce811cb58f56] Running
	I0914 17:08:04.931699   27433 system_pods.go:61] "kube-scheduler-ha-929592-m03" [a27d6148-c5d7-487e-bf9d-4625d432957b] Running
	I0914 17:08:04.931702   27433 system_pods.go:61] "kube-vip-ha-929592" [8bec83fe-1516-467a-9575-3c55dbcbda23] Running
	I0914 17:08:04.931706   27433 system_pods.go:61] "kube-vip-ha-929592-m02" [852625cb-9e2b-4a4f-9471-80d275a6697b] Running
	I0914 17:08:04.931709   27433 system_pods.go:61] "kube-vip-ha-929592-m03" [9a6742f3-75d2-4630-bf31-fabb4040c533] Running
	I0914 17:08:04.931712   27433 system_pods.go:61] "storage-provisioner" [4f486484-9641-4e23-8bc9-4dcae57b621a] Running
	I0914 17:08:04.931718   27433 system_pods.go:74] duration metric: took 192.548327ms to wait for pod list to return data ...
	I0914 17:08:04.931729   27433 default_sa.go:34] waiting for default service account to be created ...
	I0914 17:08:05.112535   27433 request.go:632] Waited for 180.737287ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.54:8443/api/v1/namespaces/default/serviceaccounts
	I0914 17:08:05.112589   27433 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/namespaces/default/serviceaccounts
	I0914 17:08:05.112594   27433 round_trippers.go:469] Request Headers:
	I0914 17:08:05.112606   27433 round_trippers.go:473]     Accept: application/json, */*
	I0914 17:08:05.112610   27433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 17:08:05.116810   27433 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0914 17:08:05.116919   27433 default_sa.go:45] found service account: "default"
	I0914 17:08:05.116932   27433 default_sa.go:55] duration metric: took 185.197585ms for default service account to be created ...
	I0914 17:08:05.116940   27433 system_pods.go:116] waiting for k8s-apps to be running ...
	I0914 17:08:05.311806   27433 request.go:632] Waited for 194.786419ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.54:8443/api/v1/namespaces/kube-system/pods
	I0914 17:08:05.311878   27433 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/namespaces/kube-system/pods
	I0914 17:08:05.311886   27433 round_trippers.go:469] Request Headers:
	I0914 17:08:05.311899   27433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 17:08:05.311906   27433 round_trippers.go:473]     Accept: application/json, */*
	I0914 17:08:05.317165   27433 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0914 17:08:05.323927   27433 system_pods.go:86] 24 kube-system pods found
	I0914 17:08:05.323957   27433 system_pods.go:89] "coredns-7c65d6cfc9-66txm" [abf3ed52-ab5a-4415-a8a9-78e567d60348] Running
	I0914 17:08:05.323963   27433 system_pods.go:89] "coredns-7c65d6cfc9-dpdz4" [2a751c8d-890c-402e-846f-8f61e3fd1965] Running
	I0914 17:08:05.323967   27433 system_pods.go:89] "etcd-ha-929592" [44b8df66-0b5f-4b5b-a901-92161d29df28] Running
	I0914 17:08:05.323971   27433 system_pods.go:89] "etcd-ha-929592-m02" [fe6343ec-40b1-4808-8902-041b935081bf] Running
	I0914 17:08:05.323974   27433 system_pods.go:89] "etcd-ha-929592-m03" [2542afd7-8c6a-4c02-aa3e-915d68aae931] Running
	I0914 17:08:05.323979   27433 system_pods.go:89] "kindnet-fw757" [51a38d95-fd50-4c05-a75d-a3dfeae127bd] Running
	I0914 17:08:05.323983   27433 system_pods.go:89] "kindnet-j7mjh" [8d1280e5-c9aa-4625-9dfc-14da09ba4849] Running
	I0914 17:08:05.323986   27433 system_pods.go:89] "kindnet-tnjsl" [ec9f109d-14b3-4e4d-9530-4ae493984cc5] Running
	I0914 17:08:05.323990   27433 system_pods.go:89] "kube-apiserver-ha-929592" [fe3e7895-32dc-4542-879c-9bb609604c69] Running
	I0914 17:08:05.323994   27433 system_pods.go:89] "kube-apiserver-ha-929592-m02" [4544a586-c111-4461-8f25-a3843da19bfb] Running
	I0914 17:08:05.323997   27433 system_pods.go:89] "kube-apiserver-ha-929592-m03" [07b3480d-6b12-42c7-a18f-587f6b55ec3d] Running
	I0914 17:08:05.324001   27433 system_pods.go:89] "kube-controller-manager-ha-929592" [12a2c768-5d90-4036-aff7-d80da243c602] Running
	I0914 17:08:05.324008   27433 system_pods.go:89] "kube-controller-manager-ha-929592-m02" [bb5d3040-c09e-4eb6-94a3-4bdb34e4e658] Running
	I0914 17:08:05.324011   27433 system_pods.go:89] "kube-controller-manager-ha-929592-m03" [e0390d32-83b3-473c-a451-ea8d75b17d27] Running
	I0914 17:08:05.324014   27433 system_pods.go:89] "kube-proxy-59tn8" [fcc0929a-58ed-4bd8-9e93-b14e6d49eeef] Running
	I0914 17:08:05.324018   27433 system_pods.go:89] "kube-proxy-6zqmd" [b7beddc8-ce6a-44ed-b3e8-423baf620bbb] Running
	I0914 17:08:05.324021   27433 system_pods.go:89] "kube-proxy-bcfkb" [f2ed6784-8935-4b20-9321-650ffb8dacda] Running
	I0914 17:08:05.324027   27433 system_pods.go:89] "kube-scheduler-ha-929592" [02b347db-39cc-49d5-a736-05957f446708] Running
	I0914 17:08:05.324030   27433 system_pods.go:89] "kube-scheduler-ha-929592-m02" [a5dde5dc-208f-47c3-903f-ce811cb58f56] Running
	I0914 17:08:05.324036   27433 system_pods.go:89] "kube-scheduler-ha-929592-m03" [a27d6148-c5d7-487e-bf9d-4625d432957b] Running
	I0914 17:08:05.324039   27433 system_pods.go:89] "kube-vip-ha-929592" [8bec83fe-1516-467a-9575-3c55dbcbda23] Running
	I0914 17:08:05.324044   27433 system_pods.go:89] "kube-vip-ha-929592-m02" [852625cb-9e2b-4a4f-9471-80d275a6697b] Running
	I0914 17:08:05.324048   27433 system_pods.go:89] "kube-vip-ha-929592-m03" [9a6742f3-75d2-4630-bf31-fabb4040c533] Running
	I0914 17:08:05.324054   27433 system_pods.go:89] "storage-provisioner" [4f486484-9641-4e23-8bc9-4dcae57b621a] Running
	I0914 17:08:05.324061   27433 system_pods.go:126] duration metric: took 207.11334ms to wait for k8s-apps to be running ...
	I0914 17:08:05.324070   27433 system_svc.go:44] waiting for kubelet service to be running ....
	I0914 17:08:05.324112   27433 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 17:08:05.339239   27433 system_svc.go:56] duration metric: took 15.157926ms WaitForService to wait for kubelet
	I0914 17:08:05.339272   27433 kubeadm.go:582] duration metric: took 23.0432452s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0914 17:08:05.339289   27433 node_conditions.go:102] verifying NodePressure condition ...
	I0914 17:08:05.511638   27433 request.go:632] Waited for 172.263852ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.54:8443/api/v1/nodes
	I0914 17:08:05.511691   27433 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes
	I0914 17:08:05.511696   27433 round_trippers.go:469] Request Headers:
	I0914 17:08:05.511704   27433 round_trippers.go:473]     Accept: application/json, */*
	I0914 17:08:05.511707   27433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 17:08:05.515995   27433 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0914 17:08:05.517005   27433 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0914 17:08:05.517028   27433 node_conditions.go:123] node cpu capacity is 2
	I0914 17:08:05.517037   27433 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0914 17:08:05.517041   27433 node_conditions.go:123] node cpu capacity is 2
	I0914 17:08:05.517045   27433 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0914 17:08:05.517048   27433 node_conditions.go:123] node cpu capacity is 2
	I0914 17:08:05.517052   27433 node_conditions.go:105] duration metric: took 177.759414ms to run NodePressure ...
	I0914 17:08:05.517064   27433 start.go:241] waiting for startup goroutines ...
	I0914 17:08:05.517085   27433 start.go:255] writing updated cluster config ...
	I0914 17:08:05.517375   27433 ssh_runner.go:195] Run: rm -f paused
	I0914 17:08:05.568912   27433 start.go:600] kubectl: 1.31.0, cluster: 1.31.1 (minor skew: 0)
	I0914 17:08:05.572001   27433 out.go:177] * Done! kubectl is now configured to use "ha-929592" cluster and "default" namespace by default
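The recurring "Waited for ... due to client-side throttling, not priority and fairness" entries above come from client-go's default client-side rate limiter (QPS 5, Burst 10), not from API Priority and Fairness on the apiserver; short delays are expected while minikube polls many pods in quick succession. The healthz and version probes recorded above can be reproduced against the same control plane with plain kubectl (assuming the default minikube context name for this profile):

    kubectl --context ha-929592 get --raw '/healthz'
    kubectl --context ha-929592 get --raw '/version'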
	
	
	==> CRI-O <==
	Sep 14 17:12:10 ha-929592 crio[661]: time="2024-09-14 17:12:10.191333724Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:972f797d7355465fa2cd15940a0b34684d83e7ad453b631aa9e06639881c09fb,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},State:CONTAINER_RUNNING,CreatedAt:1726333524035248285,StartedAt:1726333524144355357,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:registry.k8s.io/kube-scheduler:v1.31.1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-929592,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 95065ad67a4f1610671e72fcaed57954,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubern
etes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/95065ad67a4f1610671e72fcaed57954/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/95065ad67a4f1610671e72fcaed57954/containers/kube-scheduler/9315c5b8,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/etc/kubernetes/scheduler.conf,HostPath:/etc/kubernetes/scheduler.conf,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},},LogPath:/var/log/pods/kube-system_kube-scheduler-ha-929592_95065ad67a4f1610671e72fcaed57954/kube-scheduler/0.log,Resources:&ContainerResources{Linux:&LinuxContainerResources{CpuPeriod:100000,CpuQuota:0,C
puShares:102,MemoryLimitInBytes:0,OomScoreAdj:-997,CpusetCpus:,CpusetMems:,HugepageLimits:[]*HugepageLimit{&HugepageLimit{PageSize:2MB,Limit:0,},},Unified:map[string]string{memory.oom.group: 1,memory.swap.max: 0,},MemorySwapLimitInBytes:0,},Windows:nil,},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=2b9af70a-7a7d-430c-bb7f-a3d3a5113c84 name=/runtime.v1.RuntimeService/ContainerStatus
	Sep 14 17:12:10 ha-929592 crio[661]: time="2024-09-14 17:12:10.191731960Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:ac425bd016fb1dd04caf8514eda430f13287990b5a6398599221210bb254390a,Verbose:false,}" file="otel-collector/interceptors.go:62" id=9587fd2d-8c9b-406d-8232-f5a75099f7f6 name=/runtime.v1.RuntimeService/ContainerStatus
	Sep 14 17:12:10 ha-929592 crio[661]: time="2024-09-14 17:12:10.191852012Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:ac425bd016fb1dd04caf8514eda430f13287990b5a6398599221210bb254390a,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},State:CONTAINER_RUNNING,CreatedAt:1726333523982514834,StartedAt:1726333524109931134,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:registry.k8s.io/etcd:3.5.15-0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-929592,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d7c84dd075d4f7e4fd5febc189940f4e,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy
: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/d7c84dd075d4f7e4fd5febc189940f4e/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/d7c84dd075d4f7e4fd5febc189940f4e/containers/etcd/7276b490,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/var/lib/minikube/etcd,HostPath:/var/lib/minikube/etcd,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/var/lib/minikube/certs/etcd,HostPath:/var/lib/minikube/certs/etcd,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},},LogPath:/var/log/pods/kube-system_etcd-ha-929592_d7c84d
d075d4f7e4fd5febc189940f4e/etcd/0.log,Resources:&ContainerResources{Linux:&LinuxContainerResources{CpuPeriod:100000,CpuQuota:0,CpuShares:102,MemoryLimitInBytes:0,OomScoreAdj:-997,CpusetCpus:,CpusetMems:,HugepageLimits:[]*HugepageLimit{&HugepageLimit{PageSize:2MB,Limit:0,},},Unified:map[string]string{memory.oom.group: 1,memory.swap.max: 0,},MemorySwapLimitInBytes:0,},Windows:nil,},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=9587fd2d-8c9b-406d-8232-f5a75099f7f6 name=/runtime.v1.RuntimeService/ContainerStatus
	Sep 14 17:12:10 ha-929592 crio[661]: time="2024-09-14 17:12:10.192185571Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:ab1e607cdf424b9d9eb961bb3bd75bc16cb8b8f30e2c1fb579f52deb60857d00,Verbose:false,}" file="otel-collector/interceptors.go:62" id=389c24fb-809b-4275-a261-78f3038db4cd name=/runtime.v1.RuntimeService/ContainerStatus
	Sep 14 17:12:10 ha-929592 crio[661]: time="2024-09-14 17:12:10.192281763Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:ab1e607cdf424b9d9eb961bb3bd75bc16cb8b8f30e2c1fb579f52deb60857d00,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},State:CONTAINER_RUNNING,CreatedAt:1726333523930821704,StartedAt:1726333524038328664,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:registry.k8s.io/kube-apiserver:v1.31.1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-929592,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a3520d0a4b75398d9e9e72bfdcfc4f4f,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubern
etes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/a3520d0a4b75398d9e9e72bfdcfc4f4f/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/a3520d0a4b75398d9e9e72bfdcfc4f4f/containers/kube-apiserver/400a997b,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/etc/ssl/certs,HostPath:/etc/ssl/certs,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/usr/share/ca-certificates,HostPath:/usr/share/ca-certificates,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/var/lib/
minikube/certs,HostPath:/var/lib/minikube/certs,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},},LogPath:/var/log/pods/kube-system_kube-apiserver-ha-929592_a3520d0a4b75398d9e9e72bfdcfc4f4f/kube-apiserver/0.log,Resources:&ContainerResources{Linux:&LinuxContainerResources{CpuPeriod:100000,CpuQuota:0,CpuShares:256,MemoryLimitInBytes:0,OomScoreAdj:-997,CpusetCpus:,CpusetMems:,HugepageLimits:[]*HugepageLimit{&HugepageLimit{PageSize:2MB,Limit:0,},},Unified:map[string]string{memory.oom.group: 1,memory.swap.max: 0,},MemorySwapLimitInBytes:0,},Windows:nil,},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=389c24fb-809b-4275-a261-78f3038db4cd name=/runtime.v1.RuntimeService/ContainerStatus
	Sep 14 17:12:10 ha-929592 crio[661]: time="2024-09-14 17:12:10.192635852Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:363e6bc276fd6311c477eb4e17cd12efc2e0822fb67680e1cef01b26c295126c,Verbose:false,}" file="otel-collector/interceptors.go:62" id=74e24492-6fe5-4e8c-9b1b-f0d24ee9a6f0 name=/runtime.v1.RuntimeService/ContainerStatus
	Sep 14 17:12:10 ha-929592 crio[661]: time="2024-09-14 17:12:10.192725909Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:363e6bc276fd6311c477eb4e17cd12efc2e0822fb67680e1cef01b26c295126c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},State:CONTAINER_RUNNING,CreatedAt:1726333523901428147,StartedAt:1726333524023909585,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:registry.k8s.io/kube-controller-manager:v1.31.1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-929592,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 21e24f7df5d7099b0f0b2dba49446d51,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/21e24f7df5d7099b0f0b2dba49446d51/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/21e24f7df5d7099b0f0b2dba49446d51/containers/kube-controller-manager/3d2186ab,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/etc/ssl/certs,HostPath:/etc/ssl/certs,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/etc/kubernetes/controller-manager.conf,HostPath:/etc/kubernetes/controller-manager.conf,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*
IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/usr/share/ca-certificates,HostPath:/usr/share/ca-certificates,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/var/lib/minikube/certs,HostPath:/var/lib/minikube/certs,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/usr/libexec/kubernetes/kubelet-plugins/volume/exec,HostPath:/usr/libexec/kubernetes/kubelet-plugins/volume/exec,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},},LogPath:/var/log/pods/kube-system_kube-controller-manager-ha-929592_21e24f7df5d7099b0f0b2dba49446d51/kube-controller-manager/0.log,Resources:&ContainerResources{Linux:&LinuxContainerResources{CpuPeriod:100000,CpuQuota:0,CpuShares:204,MemoryLimitInBytes:0,OomScoreAdj:-997,CpusetCpus:,CpusetMems:,HugepageLimits:[]*H
ugepageLimit{&HugepageLimit{PageSize:2MB,Limit:0,},},Unified:map[string]string{memory.oom.group: 1,memory.swap.max: 0,},MemorySwapLimitInBytes:0,},Windows:nil,},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=74e24492-6fe5-4e8c-9b1b-f0d24ee9a6f0 name=/runtime.v1.RuntimeService/ContainerStatus
	Sep 14 17:12:10 ha-929592 crio[661]: time="2024-09-14 17:12:10.200610238Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=1aaa608a-418b-413c-8612-8de529169175 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 14 17:12:10 ha-929592 crio[661]: time="2024-09-14 17:12:10.201029537Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726333930201009794,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1aaa608a-418b-413c-8612-8de529169175 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 14 17:12:10 ha-929592 crio[661]: time="2024-09-14 17:12:10.207189603Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=41958deb-dd83-4c01-984e-90cb1abcd0c2 name=/runtime.v1.RuntimeService/Version
	Sep 14 17:12:10 ha-929592 crio[661]: time="2024-09-14 17:12:10.207257396Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=41958deb-dd83-4c01-984e-90cb1abcd0c2 name=/runtime.v1.RuntimeService/Version
	Sep 14 17:12:10 ha-929592 crio[661]: time="2024-09-14 17:12:10.209851204Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f5ebf87f-e957-4e62-b434-9918f8420685 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 14 17:12:10 ha-929592 crio[661]: time="2024-09-14 17:12:10.211017590Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726333930210989483,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f5ebf87f-e957-4e62-b434-9918f8420685 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 14 17:12:10 ha-929592 crio[661]: time="2024-09-14 17:12:10.212677401Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=81b44918-09da-447c-b0f8-c6eba1e04d15 name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 17:12:10 ha-929592 crio[661]: time="2024-09-14 17:12:10.212759171Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=81b44918-09da-447c-b0f8-c6eba1e04d15 name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 17:12:10 ha-929592 crio[661]: time="2024-09-14 17:12:10.213063796Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:34c6ad67896f3d9a7c0b0b60a5733acdf8c55ff8eae5d674e973f29c8f92f81a,PodSandboxId:e605a9e0100e5d708dfcb136f97211271e8f47299b54252e343df013be857601,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726333690210089103,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-49mwg,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9f3ed79c-66ac-429d-bbd6-4956eab3be98,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b0fcec21062afa8fcdfb822dced5eca45ebd403ba221182e4abdd623f53635ca,PodSandboxId:a615bca1c01216b9cf3d06e083d8c0ceae410e28322104032143f15a7a94115c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726333546868880012,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f486484-9641-4e23-8bc9-4dcae57b621a,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9eb824a3acd106dd672eb9b88186825642d229cad673bfd46e35ff45d82c0e17,PodSandboxId:69d86428b72f0c107b00bf1918d30f35a1c7612dbe8c1cc199ffcf24b11fb6b5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726333546846633325,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-dpdz4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2a751c8d-890c-402e-846f-8f61e3fd1965,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:06ffbf30c8c13ffdba9a832583a4629b5a821662986c777e9cca57100ed3fd9f,PodSandboxId:9b615a9a43e5968869cd39a175e3298d58f96b3d25f4c6e821a1c05d0e960e2f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726333546840035162,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-66txm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: abf3ed52-ab
5a-4415-a8a9-78e567d60348,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd34a54170b251f984dd373097036615e33493d9c7c91970296beedc2d507931,PodSandboxId:fc9e9c48c04bef982ef404b9c77cb73c2aecd644824a80cd95ac86a749ae2de2,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:17263335
35088260594,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-fw757,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51a38d95-fd50-4c05-a75d-a3dfeae127bd,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c1571fb1d1d1f39fe29d5063d77b72dcce6a459a4bf16a25bbc24fd1a9c53849,PodSandboxId:de29821ef5ba3c9a10a8426ece9a602feabbc039c6332faaf6599d1ac2fd5ecd,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726333534777560737,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6zqmd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b7beddc8-ce6a-44ed-b3e8-423baf620bbb,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b409821346de2b42e8ebbff82396df9fc0d7ac3db8b76d586c5c80922f9c0b8,PodSandboxId:383d700a7d746f2e9f7ceb35686a4630128c8524969a84641cd1c16713902f43,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1726333526970281307,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-929592,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e150edac5dabfa6dae6d65966a1e0a7,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:972f797d7355465fa2cd15940a0b34684d83e7ad453b631aa9e06639881c09fb,PodSandboxId:dbb138fdd1472faa7f5001e471dfabc3b867c54c89ceeca12f2aa61eff79f487,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726333523910232398,Labels:map[string]string{io.kubernetes.container.name: kub
e-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-929592,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 95065ad67a4f1610671e72fcaed57954,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac425bd016fb1dd04caf8514eda430f13287990b5a6398599221210bb254390a,PodSandboxId:282b521b3dea81924246e09b66f30e12327d4a1bccd61d21819ea1716b126a09,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726333523925784581,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-
ha-929592,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d7c84dd075d4f7e4fd5febc189940f4e,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ab1e607cdf424b9d9eb961bb3bd75bc16cb8b8f30e2c1fb579f52deb60857d00,PodSandboxId:2cb1c0532ae95d9a90ad1f8b984fb95a8bdda3b4bb844295f285009d3d4636b6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726333523808944878,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-929592,io.kubernet
es.pod.namespace: kube-system,io.kubernetes.pod.uid: a3520d0a4b75398d9e9e72bfdcfc4f4f,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:363e6bc276fd6311c477eb4e17cd12efc2e0822fb67680e1cef01b26c295126c,PodSandboxId:a5a14538e219ebcd5abb61a37ffc184fe8f53c4b08117618bfa5e2ec8c0d75a5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726333523800169396,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-929592,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 21e24f7df5d7099b0f0b2dba49446d51,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=81b44918-09da-447c-b0f8-c6eba1e04d15 name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 17:12:10 ha-929592 crio[661]: time="2024-09-14 17:12:10.224482317Z" level=debug msg="Request: &StatusRequest{Verbose:false,}" file="otel-collector/interceptors.go:62" id=eaff37f8-ce98-4b7c-96be-97d3caf9a8fa name=/runtime.v1.RuntimeService/Status
	Sep 14 17:12:10 ha-929592 crio[661]: time="2024-09-14 17:12:10.224565578Z" level=debug msg="Response: &StatusResponse{Status:&RuntimeStatus{Conditions:[]*RuntimeCondition{&RuntimeCondition{Type:RuntimeReady,Status:true,Reason:,Message:,},&RuntimeCondition{Type:NetworkReady,Status:true,Reason:,Message:,},},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=eaff37f8-ce98-4b7c-96be-97d3caf9a8fa name=/runtime.v1.RuntimeService/Status
	Sep 14 17:12:10 ha-929592 crio[661]: time="2024-09-14 17:12:10.256468435Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=db9fc208-561f-4fe3-ad78-0c4dd2dce0ff name=/runtime.v1.RuntimeService/Version
	Sep 14 17:12:10 ha-929592 crio[661]: time="2024-09-14 17:12:10.256566701Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=db9fc208-561f-4fe3-ad78-0c4dd2dce0ff name=/runtime.v1.RuntimeService/Version
	Sep 14 17:12:10 ha-929592 crio[661]: time="2024-09-14 17:12:10.257926828Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=55bda9c9-38af-4fa8-adab-62a39fad3354 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 14 17:12:10 ha-929592 crio[661]: time="2024-09-14 17:12:10.258503850Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726333930258467303,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=55bda9c9-38af-4fa8-adab-62a39fad3354 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 14 17:12:10 ha-929592 crio[661]: time="2024-09-14 17:12:10.259147265Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2d7bc854-d809-43b3-bd79-2d69cfed64dc name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 17:12:10 ha-929592 crio[661]: time="2024-09-14 17:12:10.259218681Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2d7bc854-d809-43b3-bd79-2d69cfed64dc name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 17:12:10 ha-929592 crio[661]: time="2024-09-14 17:12:10.259470459Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:34c6ad67896f3d9a7c0b0b60a5733acdf8c55ff8eae5d674e973f29c8f92f81a,PodSandboxId:e605a9e0100e5d708dfcb136f97211271e8f47299b54252e343df013be857601,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726333690210089103,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-49mwg,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9f3ed79c-66ac-429d-bbd6-4956eab3be98,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b0fcec21062afa8fcdfb822dced5eca45ebd403ba221182e4abdd623f53635ca,PodSandboxId:a615bca1c01216b9cf3d06e083d8c0ceae410e28322104032143f15a7a94115c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726333546868880012,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f486484-9641-4e23-8bc9-4dcae57b621a,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9eb824a3acd106dd672eb9b88186825642d229cad673bfd46e35ff45d82c0e17,PodSandboxId:69d86428b72f0c107b00bf1918d30f35a1c7612dbe8c1cc199ffcf24b11fb6b5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726333546846633325,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-dpdz4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2a751c8d-890c-402e-846f-8f61e3fd1965,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:06ffbf30c8c13ffdba9a832583a4629b5a821662986c777e9cca57100ed3fd9f,PodSandboxId:9b615a9a43e5968869cd39a175e3298d58f96b3d25f4c6e821a1c05d0e960e2f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726333546840035162,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-66txm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: abf3ed52-ab
5a-4415-a8a9-78e567d60348,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd34a54170b251f984dd373097036615e33493d9c7c91970296beedc2d507931,PodSandboxId:fc9e9c48c04bef982ef404b9c77cb73c2aecd644824a80cd95ac86a749ae2de2,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:17263335
35088260594,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-fw757,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51a38d95-fd50-4c05-a75d-a3dfeae127bd,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c1571fb1d1d1f39fe29d5063d77b72dcce6a459a4bf16a25bbc24fd1a9c53849,PodSandboxId:de29821ef5ba3c9a10a8426ece9a602feabbc039c6332faaf6599d1ac2fd5ecd,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726333534777560737,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6zqmd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b7beddc8-ce6a-44ed-b3e8-423baf620bbb,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b409821346de2b42e8ebbff82396df9fc0d7ac3db8b76d586c5c80922f9c0b8,PodSandboxId:383d700a7d746f2e9f7ceb35686a4630128c8524969a84641cd1c16713902f43,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1726333526970281307,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-929592,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e150edac5dabfa6dae6d65966a1e0a7,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:972f797d7355465fa2cd15940a0b34684d83e7ad453b631aa9e06639881c09fb,PodSandboxId:dbb138fdd1472faa7f5001e471dfabc3b867c54c89ceeca12f2aa61eff79f487,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726333523910232398,Labels:map[string]string{io.kubernetes.container.name: kub
e-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-929592,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 95065ad67a4f1610671e72fcaed57954,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac425bd016fb1dd04caf8514eda430f13287990b5a6398599221210bb254390a,PodSandboxId:282b521b3dea81924246e09b66f30e12327d4a1bccd61d21819ea1716b126a09,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726333523925784581,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-
ha-929592,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d7c84dd075d4f7e4fd5febc189940f4e,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ab1e607cdf424b9d9eb961bb3bd75bc16cb8b8f30e2c1fb579f52deb60857d00,PodSandboxId:2cb1c0532ae95d9a90ad1f8b984fb95a8bdda3b4bb844295f285009d3d4636b6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726333523808944878,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-929592,io.kubernet
es.pod.namespace: kube-system,io.kubernetes.pod.uid: a3520d0a4b75398d9e9e72bfdcfc4f4f,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:363e6bc276fd6311c477eb4e17cd12efc2e0822fb67680e1cef01b26c295126c,PodSandboxId:a5a14538e219ebcd5abb61a37ffc184fe8f53c4b08117618bfa5e2ec8c0d75a5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726333523800169396,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-929592,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 21e24f7df5d7099b0f0b2dba49446d51,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=2d7bc854-d809-43b3-bd79-2d69cfed64dc name=/runtime.v1.RuntimeService/ListContainers
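The CRI-O entries above are debug traces of the CRI gRPC calls issued while this report was gathered: /runtime.v1.RuntimeService/ContainerStatus, ListContainers and Version, plus /runtime.v1.ImageService/ImageFsInfo. The same data can be queried on the node with crictl (a sketch, assuming the default minikube profile name and reusing a container ID prefix from the traces above):

    minikube -p ha-929592 ssh -- sudo crictl inspect 972f797d7355465
    minikube -p ha-929592 ssh -- sudo crictl imagefsinfo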
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	34c6ad67896f3       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   4 minutes ago       Running             busybox                   0                   e605a9e0100e5       busybox-7dff88458-49mwg
	b0fcec21062af       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      6 minutes ago       Running             storage-provisioner       0                   a615bca1c0121       storage-provisioner
	9eb824a3acd10       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      6 minutes ago       Running             coredns                   0                   69d86428b72f0       coredns-7c65d6cfc9-dpdz4
	06ffbf30c8c13       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      6 minutes ago       Running             coredns                   0                   9b615a9a43e59       coredns-7c65d6cfc9-66txm
	fd34a54170b25       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      6 minutes ago       Running             kindnet-cni               0                   fc9e9c48c04be       kindnet-fw757
	c1571fb1d1d1f       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      6 minutes ago       Running             kube-proxy                0                   de29821ef5ba3       kube-proxy-6zqmd
	7b409821346de       ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f     6 minutes ago       Running             kube-vip                  0                   383d700a7d746       kube-vip-ha-929592
	ac425bd016fb1       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      6 minutes ago       Running             etcd                      0                   282b521b3dea8       etcd-ha-929592
	972f797d73554       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      6 minutes ago       Running             kube-scheduler            0                   dbb138fdd1472       kube-scheduler-ha-929592
	ab1e607cdf424       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      6 minutes ago       Running             kube-apiserver            0                   2cb1c0532ae95       kube-apiserver-ha-929592
	363e6bc276fd6       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      6 minutes ago       Running             kube-controller-manager   0                   a5a14538e219e       kube-controller-manager-ha-929592
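The container status table is the ListContainers view of the same runtime state, the output you would get from crictl on the node (sketch, assuming the default minikube profile name):

    minikube -p ha-929592 ssh -- sudo crictl ps -a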
	
	
	==> coredns [06ffbf30c8c13ffdba9a832583a4629b5a821662986c777e9cca57100ed3fd9f] <==
	[INFO] 10.244.1.2:56119 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 60 0.000163563s
	[INFO] 10.244.1.2:55772 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.00176312s
	[INFO] 10.244.0.4:42918 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000163348s
	[INFO] 10.244.0.4:42643 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.003969379s
	[INFO] 10.244.0.4:59436 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.003594097s
	[INFO] 10.244.0.4:42742 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000196447s
	[INFO] 10.244.2.2:34834 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000264331s
	[INFO] 10.244.2.2:59462 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000156407s
	[INFO] 10.244.2.2:42619 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001326596s
	[INFO] 10.244.2.2:44804 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000179359s
	[INFO] 10.244.2.2:41911 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000132469s
	[INFO] 10.244.2.2:33102 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000102993s
	[INFO] 10.244.1.2:55754 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000139996s
	[INFO] 10.244.1.2:43056 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.00122452s
	[INFO] 10.244.1.2:48145 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000077043s
	[INFO] 10.244.0.4:52337 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000165468s
	[INFO] 10.244.0.4:42536 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000091889s
	[INFO] 10.244.0.4:44365 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000064388s
	[INFO] 10.244.2.2:55168 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000124822s
	[INFO] 10.244.0.4:38549 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000137185s
	[INFO] 10.244.0.4:50003 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000132872s
	[INFO] 10.244.2.2:52393 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000098256s
	[INFO] 10.244.2.2:57699 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000088711s
	[INFO] 10.244.1.2:46863 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.00018617s
	[INFO] 10.244.1.2:35487 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000119162s
	
	
	==> coredns [9eb824a3acd106dd672eb9b88186825642d229cad673bfd46e35ff45d82c0e17] <==
	[INFO] 10.244.0.4:51005 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000187399s
	[INFO] 10.244.0.4:48604 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.001531016s
	[INFO] 10.244.0.4:52034 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000144239s
	[INFO] 10.244.0.4:59604 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00010094s
	[INFO] 10.244.2.2:44822 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000134857s
	[INFO] 10.244.2.2:33999 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.00156764s
	[INFO] 10.244.1.2:33236 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000120988s
	[INFO] 10.244.1.2:56330 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001720435s
	[INFO] 10.244.1.2:55436 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00009185s
	[INFO] 10.244.1.2:57342 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00009326s
	[INFO] 10.244.1.2:54076 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000109267s
	[INFO] 10.244.0.4:39214 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000088174s
	[INFO] 10.244.2.2:52535 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000132429s
	[INFO] 10.244.2.2:57308 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000131665s
	[INFO] 10.244.2.2:55789 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000060892s
	[INFO] 10.244.1.2:51494 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000124082s
	[INFO] 10.244.1.2:52382 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000214777s
	[INFO] 10.244.1.2:43073 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000088643s
	[INFO] 10.244.1.2:44985 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000084521s
	[INFO] 10.244.0.4:58067 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000132438s
	[INFO] 10.244.0.4:49916 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000488329s
	[INFO] 10.244.2.2:49651 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000189629s
	[INFO] 10.244.2.2:55778 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000106781s
	[INFO] 10.244.1.2:40770 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000160687s
	[INFO] 10.244.1.2:44082 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000162642s
	
	
	==> describe nodes <==
	Name:               ha-929592
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-929592
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fbeeb744274463b05401c917e5ab21bbaf5ef95a
	                    minikube.k8s.io/name=ha-929592
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_14T17_05_31_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 14 Sep 2024 17:05:27 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-929592
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 14 Sep 2024 17:12:09 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 14 Sep 2024 17:08:33 +0000   Sat, 14 Sep 2024 17:05:26 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 14 Sep 2024 17:08:33 +0000   Sat, 14 Sep 2024 17:05:26 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 14 Sep 2024 17:08:33 +0000   Sat, 14 Sep 2024 17:05:26 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 14 Sep 2024 17:08:33 +0000   Sat, 14 Sep 2024 17:05:46 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.54
	  Hostname:    ha-929592
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 ca5487ccf56549d9a2987da2958ebdfe
	  System UUID:                ca5487cc-f565-49d9-a298-7da2958ebdfe
	  Boot ID:                    b416a941-f6c5-4da6-ab3c-4ac7463bcedd
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-49mwg              0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m4s
	  kube-system                 coredns-7c65d6cfc9-66txm             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m36s
	  kube-system                 coredns-7c65d6cfc9-dpdz4             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m36s
	  kube-system                 etcd-ha-929592                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m40s
	  kube-system                 kindnet-fw757                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m36s
	  kube-system                 kube-apiserver-ha-929592             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m42s
	  kube-system                 kube-controller-manager-ha-929592    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m40s
	  kube-system                 kube-proxy-6zqmd                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m36s
	  kube-system                 kube-scheduler-ha-929592             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m40s
	  kube-system                 kube-vip-ha-929592                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m42s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m35s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 6m35s  kube-proxy       
	  Normal  Starting                 6m40s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  6m40s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  6m40s  kubelet          Node ha-929592 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m40s  kubelet          Node ha-929592 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m40s  kubelet          Node ha-929592 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           6m37s  node-controller  Node ha-929592 event: Registered Node ha-929592 in Controller
	  Normal  NodeReady                6m24s  kubelet          Node ha-929592 status is now: NodeReady
	  Normal  RegisteredNode           5m38s  node-controller  Node ha-929592 event: Registered Node ha-929592 in Controller
	  Normal  RegisteredNode           4m24s  node-controller  Node ha-929592 event: Registered Node ha-929592 in Controller
	
	
	Name:               ha-929592-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-929592-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fbeeb744274463b05401c917e5ab21bbaf5ef95a
	                    minikube.k8s.io/name=ha-929592
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_14T17_06_27_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 14 Sep 2024 17:06:24 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-929592-m02
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 14 Sep 2024 17:09:48 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Sat, 14 Sep 2024 17:08:26 +0000   Sat, 14 Sep 2024 17:10:31 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Sat, 14 Sep 2024 17:08:26 +0000   Sat, 14 Sep 2024 17:10:31 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Sat, 14 Sep 2024 17:08:26 +0000   Sat, 14 Sep 2024 17:10:31 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Sat, 14 Sep 2024 17:08:26 +0000   Sat, 14 Sep 2024 17:10:31 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.148
	  Hostname:    ha-929592-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 bba17c21a65b42848fb2de3d914ef47e
	  System UUID:                bba17c21-a65b-4284-8fb2-de3d914ef47e
	  Boot ID:                    a9008c31-c184-44c6-a236-ef722ef0e219
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-kvmx7                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m4s
	  kube-system                 etcd-ha-929592-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m45s
	  kube-system                 kindnet-tnjsl                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m46s
	  kube-system                 kube-apiserver-ha-929592-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m45s
	  kube-system                 kube-controller-manager-ha-929592-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m45s
	  kube-system                 kube-proxy-bcfkb                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m46s
	  kube-system                 kube-scheduler-ha-929592-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m45s
	  kube-system                 kube-vip-ha-929592-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m42s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m42s                  kube-proxy       
	  Normal  CIDRAssignmentFailed     5m46s                  cidrAllocator    Node ha-929592-m02 status is now: CIDRAssignmentFailed
	  Normal  NodeHasSufficientMemory  5m46s (x8 over 5m46s)  kubelet          Node ha-929592-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m46s (x8 over 5m46s)  kubelet          Node ha-929592-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m46s (x7 over 5m46s)  kubelet          Node ha-929592-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m46s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5m42s                  node-controller  Node ha-929592-m02 event: Registered Node ha-929592-m02 in Controller
	  Normal  RegisteredNode           5m38s                  node-controller  Node ha-929592-m02 event: Registered Node ha-929592-m02 in Controller
	  Normal  RegisteredNode           4m24s                  node-controller  Node ha-929592-m02 event: Registered Node ha-929592-m02 in Controller
	  Normal  NodeNotReady             99s                    node-controller  Node ha-929592-m02 status is now: NodeNotReady
	
	
	Name:               ha-929592-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-929592-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fbeeb744274463b05401c917e5ab21bbaf5ef95a
	                    minikube.k8s.io/name=ha-929592
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_14T17_07_42_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 14 Sep 2024 17:07:38 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-929592-m03
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 14 Sep 2024 17:12:03 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 14 Sep 2024 17:08:39 +0000   Sat, 14 Sep 2024 17:07:38 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 14 Sep 2024 17:08:39 +0000   Sat, 14 Sep 2024 17:07:38 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 14 Sep 2024 17:08:39 +0000   Sat, 14 Sep 2024 17:07:38 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 14 Sep 2024 17:08:39 +0000   Sat, 14 Sep 2024 17:07:59 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.39
	  Hostname:    ha-929592-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 5bbc24177e214149a9c82a3c54652b96
	  System UUID:                5bbc2417-7e21-4149-a9c8-2a3c54652b96
	  Boot ID:                    1443bf49-c348-4dcc-9582-d986b3eb4cd0
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-4gtfl                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m4s
	  kube-system                 etcd-ha-929592-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m31s
	  kube-system                 kindnet-j7mjh                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m32s
	  kube-system                 kube-apiserver-ha-929592-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m31s
	  kube-system                 kube-controller-manager-ha-929592-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m31s
	  kube-system                 kube-proxy-59tn8                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m32s
	  kube-system                 kube-scheduler-ha-929592-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m31s
	  kube-system                 kube-vip-ha-929592-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m28s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m29s                  kube-proxy       
	  Normal  CIDRAssignmentFailed     4m32s                  cidrAllocator    Node ha-929592-m03 status is now: CIDRAssignmentFailed
	  Normal  RegisteredNode           4m32s                  node-controller  Node ha-929592-m03 event: Registered Node ha-929592-m03 in Controller
	  Normal  NodeHasSufficientMemory  4m32s (x8 over 4m32s)  kubelet          Node ha-929592-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m32s (x8 over 4m32s)  kubelet          Node ha-929592-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m32s (x7 over 4m32s)  kubelet          Node ha-929592-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m32s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m28s                  node-controller  Node ha-929592-m03 event: Registered Node ha-929592-m03 in Controller
	  Normal  RegisteredNode           4m24s                  node-controller  Node ha-929592-m03 event: Registered Node ha-929592-m03 in Controller
	
	
	Name:               ha-929592-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-929592-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fbeeb744274463b05401c917e5ab21bbaf5ef95a
	                    minikube.k8s.io/name=ha-929592
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_14T17_08_41_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 14 Sep 2024 17:08:41 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-929592-m04
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 14 Sep 2024 17:12:05 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 14 Sep 2024 17:09:29 +0000   Sat, 14 Sep 2024 17:08:40 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 14 Sep 2024 17:09:29 +0000   Sat, 14 Sep 2024 17:08:40 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 14 Sep 2024 17:09:29 +0000   Sat, 14 Sep 2024 17:08:40 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 14 Sep 2024 17:09:29 +0000   Sat, 14 Sep 2024 17:09:29 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.51
	  Hostname:    ha-929592-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 b38c12dc6ad945c88a69c031beae5593
	  System UUID:                b38c12dc-6ad9-45c8-8a69-c031beae5593
	  Boot ID:                    e7b0339d-a020-4a02-9bae-4dd87180fa45
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-x76g8       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      3m29s
	  kube-system                 kube-proxy-l7g8d    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m26s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m19s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  3m30s (x2 over 3m30s)  kubelet          Node ha-929592-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m30s (x2 over 3m30s)  kubelet          Node ha-929592-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m30s (x2 over 3m30s)  kubelet          Node ha-929592-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m30s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  CIDRAssignmentFailed     3m29s                  cidrAllocator    Node ha-929592-m04 status is now: CIDRAssignmentFailed
	  Normal  RegisteredNode           3m29s                  node-controller  Node ha-929592-m04 event: Registered Node ha-929592-m04 in Controller
	  Normal  RegisteredNode           3m28s                  node-controller  Node ha-929592-m04 event: Registered Node ha-929592-m04 in Controller
	  Normal  RegisteredNode           3m27s                  node-controller  Node ha-929592-m04 event: Registered Node ha-929592-m04 in Controller
	  Normal  NodeReady                2m41s                  kubelet          Node ha-929592-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[Sep14 17:04] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.051137] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.036788] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[Sep14 17:05] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.891093] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +4.559623] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.846823] systemd-fstab-generator[582]: Ignoring "noauto" option for root device
	[  +0.055031] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.061916] systemd-fstab-generator[595]: Ignoring "noauto" option for root device
	[  +0.180150] systemd-fstab-generator[608]: Ignoring "noauto" option for root device
	[  +0.131339] systemd-fstab-generator[620]: Ignoring "noauto" option for root device
	[  +0.280240] systemd-fstab-generator[650]: Ignoring "noauto" option for root device
	[  +3.763196] systemd-fstab-generator[746]: Ignoring "noauto" option for root device
	[  +3.977772] systemd-fstab-generator[877]: Ignoring "noauto" option for root device
	[  +0.069092] kauditd_printk_skb: 158 callbacks suppressed
	[  +6.951305] systemd-fstab-generator[1297]: Ignoring "noauto" option for root device
	[  +0.081826] kauditd_printk_skb: 79 callbacks suppressed
	[  +5.069011] kauditd_printk_skb: 28 callbacks suppressed
	[ +11.752479] kauditd_printk_skb: 31 callbacks suppressed
	[Sep14 17:06] kauditd_printk_skb: 24 callbacks suppressed
	
	
	==> etcd [ac425bd016fb1dd04caf8514eda430f13287990b5a6398599221210bb254390a] <==
	{"level":"warn","ts":"2024-09-14T17:12:10.408534Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"731f5c40d4af6217","from":"731f5c40d4af6217","remote-peer-id":"5fb5e21af24b18aa","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-14T17:12:10.522659Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"731f5c40d4af6217","from":"731f5c40d4af6217","remote-peer-id":"5fb5e21af24b18aa","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-14T17:12:10.526624Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"731f5c40d4af6217","from":"731f5c40d4af6217","remote-peer-id":"5fb5e21af24b18aa","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-14T17:12:10.535470Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"731f5c40d4af6217","from":"731f5c40d4af6217","remote-peer-id":"5fb5e21af24b18aa","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-14T17:12:10.543862Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"731f5c40d4af6217","from":"731f5c40d4af6217","remote-peer-id":"5fb5e21af24b18aa","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-14T17:12:10.551012Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"731f5c40d4af6217","from":"731f5c40d4af6217","remote-peer-id":"5fb5e21af24b18aa","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-14T17:12:10.554472Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"731f5c40d4af6217","from":"731f5c40d4af6217","remote-peer-id":"5fb5e21af24b18aa","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-14T17:12:10.562200Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"731f5c40d4af6217","from":"731f5c40d4af6217","remote-peer-id":"5fb5e21af24b18aa","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-14T17:12:10.568560Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"731f5c40d4af6217","from":"731f5c40d4af6217","remote-peer-id":"5fb5e21af24b18aa","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-14T17:12:10.576874Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"731f5c40d4af6217","from":"731f5c40d4af6217","remote-peer-id":"5fb5e21af24b18aa","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-14T17:12:10.586349Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"731f5c40d4af6217","from":"731f5c40d4af6217","remote-peer-id":"5fb5e21af24b18aa","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-14T17:12:10.590180Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"731f5c40d4af6217","from":"731f5c40d4af6217","remote-peer-id":"5fb5e21af24b18aa","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-14T17:12:10.593094Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"731f5c40d4af6217","from":"731f5c40d4af6217","remote-peer-id":"5fb5e21af24b18aa","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-14T17:12:10.593897Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"5fb5e21af24b18aa","rtt":"902.889µs","error":"dial tcp 192.168.39.148:2380: i/o timeout"}
	{"level":"warn","ts":"2024-09-14T17:12:10.594014Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"5fb5e21af24b18aa","rtt":"8.357395ms","error":"dial tcp 192.168.39.148:2380: i/o timeout"}
	{"level":"warn","ts":"2024-09-14T17:12:10.599254Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"731f5c40d4af6217","from":"731f5c40d4af6217","remote-peer-id":"5fb5e21af24b18aa","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-14T17:12:10.603976Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"731f5c40d4af6217","from":"731f5c40d4af6217","remote-peer-id":"5fb5e21af24b18aa","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-14T17:12:10.608634Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"731f5c40d4af6217","from":"731f5c40d4af6217","remote-peer-id":"5fb5e21af24b18aa","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-14T17:12:10.608828Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"731f5c40d4af6217","from":"731f5c40d4af6217","remote-peer-id":"5fb5e21af24b18aa","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-14T17:12:10.616089Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"731f5c40d4af6217","from":"731f5c40d4af6217","remote-peer-id":"5fb5e21af24b18aa","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-14T17:12:10.620787Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"731f5c40d4af6217","from":"731f5c40d4af6217","remote-peer-id":"5fb5e21af24b18aa","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-14T17:12:10.624037Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"731f5c40d4af6217","from":"731f5c40d4af6217","remote-peer-id":"5fb5e21af24b18aa","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-14T17:12:10.628762Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"731f5c40d4af6217","from":"731f5c40d4af6217","remote-peer-id":"5fb5e21af24b18aa","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-14T17:12:10.635968Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"731f5c40d4af6217","from":"731f5c40d4af6217","remote-peer-id":"5fb5e21af24b18aa","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-14T17:12:10.646629Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"731f5c40d4af6217","from":"731f5c40d4af6217","remote-peer-id":"5fb5e21af24b18aa","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 17:12:10 up 7 min,  0 users,  load average: 0.63, 0.41, 0.21
	Linux ha-929592 5.10.207 #1 SMP Sat Sep 14 07:35:46 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [fd34a54170b251f984dd373097036615e33493d9c7c91970296beedc2d507931] <==
	I0914 17:11:36.124145       1 main.go:322] Node ha-929592-m03 has CIDR [10.244.2.0/24] 
	I0914 17:11:46.131737       1 main.go:295] Handling node with IPs: map[192.168.39.54:{}]
	I0914 17:11:46.131813       1 main.go:299] handling current node
	I0914 17:11:46.131827       1 main.go:295] Handling node with IPs: map[192.168.39.148:{}]
	I0914 17:11:46.131833       1 main.go:322] Node ha-929592-m02 has CIDR [10.244.1.0/24] 
	I0914 17:11:46.131953       1 main.go:295] Handling node with IPs: map[192.168.39.39:{}]
	I0914 17:11:46.131972       1 main.go:322] Node ha-929592-m03 has CIDR [10.244.2.0/24] 
	I0914 17:11:46.132026       1 main.go:295] Handling node with IPs: map[192.168.39.51:{}]
	I0914 17:11:46.132031       1 main.go:322] Node ha-929592-m04 has CIDR [10.244.3.0/24] 
	I0914 17:11:56.132981       1 main.go:295] Handling node with IPs: map[192.168.39.148:{}]
	I0914 17:11:56.133121       1 main.go:322] Node ha-929592-m02 has CIDR [10.244.1.0/24] 
	I0914 17:11:56.133279       1 main.go:295] Handling node with IPs: map[192.168.39.39:{}]
	I0914 17:11:56.133310       1 main.go:322] Node ha-929592-m03 has CIDR [10.244.2.0/24] 
	I0914 17:11:56.133378       1 main.go:295] Handling node with IPs: map[192.168.39.51:{}]
	I0914 17:11:56.133401       1 main.go:322] Node ha-929592-m04 has CIDR [10.244.3.0/24] 
	I0914 17:11:56.133490       1 main.go:295] Handling node with IPs: map[192.168.39.54:{}]
	I0914 17:11:56.133515       1 main.go:299] handling current node
	I0914 17:12:06.124694       1 main.go:295] Handling node with IPs: map[192.168.39.51:{}]
	I0914 17:12:06.124830       1 main.go:322] Node ha-929592-m04 has CIDR [10.244.3.0/24] 
	I0914 17:12:06.124978       1 main.go:295] Handling node with IPs: map[192.168.39.54:{}]
	I0914 17:12:06.125051       1 main.go:299] handling current node
	I0914 17:12:06.125100       1 main.go:295] Handling node with IPs: map[192.168.39.148:{}]
	I0914 17:12:06.125177       1 main.go:322] Node ha-929592-m02 has CIDR [10.244.1.0/24] 
	I0914 17:12:06.125264       1 main.go:295] Handling node with IPs: map[192.168.39.39:{}]
	I0914 17:12:06.125284       1 main.go:322] Node ha-929592-m03 has CIDR [10.244.2.0/24] 
	
	
	==> kube-apiserver [ab1e607cdf424b9d9eb961bb3bd75bc16cb8b8f30e2c1fb579f52deb60857d00] <==
	I0914 17:05:28.283962       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0914 17:05:28.360959       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0914 17:05:28.367666       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.54]
	I0914 17:05:28.368556       1 controller.go:615] quota admission added evaluator for: endpoints
	I0914 17:05:28.373343       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0914 17:05:28.680051       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0914 17:05:30.135829       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0914 17:05:30.156471       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0914 17:05:30.173207       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0914 17:05:34.177806       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I0914 17:05:34.384683       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	E0914 17:08:11.520363       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:52798: use of closed network connection
	E0914 17:08:11.694195       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:52806: use of closed network connection
	E0914 17:08:12.070905       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:52830: use of closed network connection
	E0914 17:08:12.256251       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:52860: use of closed network connection
	E0914 17:08:12.443924       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:52876: use of closed network connection
	E0914 17:08:12.639918       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:52890: use of closed network connection
	E0914 17:08:12.824843       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:52904: use of closed network connection
	E0914 17:08:13.006503       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:52908: use of closed network connection
	E0914 17:08:13.285252       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:52948: use of closed network connection
	E0914 17:08:13.460709       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:52966: use of closed network connection
	E0914 17:08:13.647974       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:52986: use of closed network connection
	E0914 17:08:13.836122       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:53010: use of closed network connection
	E0914 17:08:14.208702       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:53052: use of closed network connection
	W0914 17:09:58.389176       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.39 192.168.39.54]
	
	
	==> kube-controller-manager [363e6bc276fd6311c477eb4e17cd12efc2e0822fb67680e1cef01b26c295126c] <==
	E0914 17:08:41.186992       1 range_allocator.go:433] "CIDR assignment for node failed. Releasing allocated CIDR" err="failed to patch node CIDR: Node \"ha-929592-m04\" is invalid: [spec.podCIDRs: Invalid value: []string{\"10.244.4.0/24\", \"10.244.3.0/24\"}: may specify no more than one CIDR for each IP family, spec.podCIDRs: Forbidden: node updates may not change podCIDR except from \"\" to valid]" logger="node-ipam-controller" node="ha-929592-m04"
	E0914 17:08:41.187053       1 range_allocator.go:246] "Unhandled Error" err="error syncing 'ha-929592-m04': failed to patch node CIDR: Node \"ha-929592-m04\" is invalid: [spec.podCIDRs: Invalid value: []string{\"10.244.4.0/24\", \"10.244.3.0/24\"}: may specify no more than one CIDR for each IP family, spec.podCIDRs: Forbidden: node updates may not change podCIDR except from \"\" to valid], requeuing" logger="UnhandledError"
	I0914 17:08:41.187117       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-929592-m04"
	I0914 17:08:41.190381       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-929592-m04"
	I0914 17:08:41.192704       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-929592-m04"
	I0914 17:08:41.614378       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-929592-m04"
	I0914 17:08:41.901831       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-929592-m04"
	I0914 17:08:42.647216       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-929592-m04"
	I0914 17:08:42.739995       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-929592-m04"
	I0914 17:08:43.473186       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-929592-m04"
	I0914 17:08:43.474308       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-929592-m04"
	I0914 17:08:43.516355       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-929592-m04"
	I0914 17:08:51.290158       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-929592-m04"
	I0914 17:09:11.731747       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-929592-m04"
	I0914 17:09:29.404151       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-929592-m04"
	I0914 17:09:29.404271       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-929592-m04"
	I0914 17:09:29.419134       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-929592-m04"
	I0914 17:09:31.811684       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-929592-m04"
	I0914 17:10:31.840886       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-929592-m04"
	I0914 17:10:31.841208       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-929592-m02"
	I0914 17:10:31.860016       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-929592-m02"
	I0914 17:10:31.882928       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="11.524786ms"
	I0914 17:10:31.883642       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="142.211µs"
	I0914 17:10:33.526882       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-929592-m02"
	I0914 17:10:37.164911       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-929592-m02"
	
	
	==> kube-proxy [c1571fb1d1d1f39fe29d5063d77b72dcce6a459a4bf16a25bbc24fd1a9c53849] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0914 17:05:35.310113       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0914 17:05:35.359217       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.54"]
	E0914 17:05:35.359342       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0914 17:05:35.435955       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0914 17:05:35.436015       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0914 17:05:35.436044       1 server_linux.go:169] "Using iptables Proxier"
	I0914 17:05:35.449663       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0914 17:05:35.452038       1 server.go:483] "Version info" version="v1.31.1"
	I0914 17:05:35.452091       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0914 17:05:35.454722       1 config.go:199] "Starting service config controller"
	I0914 17:05:35.455148       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0914 17:05:35.455408       1 config.go:105] "Starting endpoint slice config controller"
	I0914 17:05:35.455433       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0914 17:05:35.456374       1 config.go:328] "Starting node config controller"
	I0914 17:05:35.456414       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0914 17:05:35.556032       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0914 17:05:35.556124       1 shared_informer.go:320] Caches are synced for service config
	I0914 17:05:35.556760       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [972f797d7355465fa2cd15940a0b34684d83e7ad453b631aa9e06639881c09fb] <==
	I0914 17:08:41.224233       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-lhrb9" node="ha-929592-m04"
	E0914 17:08:41.260975       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-bkp56\": pod kindnet-bkp56 is already assigned to node \"ha-929592-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-bkp56" node="ha-929592-m04"
	E0914 17:08:41.261124       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 25f166f1-e3c8-47e5-808f-f7057f6dd633(kube-system/kindnet-bkp56) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-bkp56"
	E0914 17:08:41.261165       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-bkp56\": pod kindnet-bkp56 is already assigned to node \"ha-929592-m04\"" pod="kube-system/kindnet-bkp56"
	I0914 17:08:41.261207       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-bkp56" node="ha-929592-m04"
	E0914 17:08:41.270235       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-skw76\": pod kube-proxy-skw76 is already assigned to node \"ha-929592-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-skw76" node="ha-929592-m04"
	E0914 17:08:41.270537       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod c4480281-6939-4653-9697-9041a678e870(kube-system/kube-proxy-skw76) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-skw76"
	E0914 17:08:41.270636       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-skw76\": pod kube-proxy-skw76 is already assigned to node \"ha-929592-m04\"" pod="kube-system/kube-proxy-skw76"
	I0914 17:08:41.270678       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-skw76" node="ha-929592-m04"
	E0914 17:08:42.972713       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-phnll\": pod kube-proxy-phnll is already assigned to node \"ha-929592-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-phnll" node="ha-929592-m04"
	E0914 17:08:42.972802       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-phnll\": pod kube-proxy-phnll is already assigned to node \"ha-929592-m04\"" pod="kube-system/kube-proxy-phnll"
	E0914 17:08:42.973360       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-ll6r9\": pod kube-proxy-ll6r9 is already assigned to node \"ha-929592-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-ll6r9" node="ha-929592-m04"
	E0914 17:08:42.977406       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod ae77fbbd-0eba-4e1d-add0-d894e73795c1(kube-system/kube-proxy-ll6r9) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-ll6r9"
	E0914 17:08:42.977758       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-ll6r9\": pod kube-proxy-ll6r9 is already assigned to node \"ha-929592-m04\"" pod="kube-system/kube-proxy-ll6r9"
	I0914 17:08:42.977890       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-ll6r9" node="ha-929592-m04"
	E0914 17:08:44.830679       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-lrzhr\": pod kube-proxy-lrzhr is already assigned to node \"ha-929592-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-lrzhr" node="ha-929592-m04"
	E0914 17:08:44.830996       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-lrzhr\": pod kube-proxy-lrzhr is already assigned to node \"ha-929592-m04\"" pod="kube-system/kube-proxy-lrzhr"
	E0914 17:08:44.831750       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-thwhv\": pod kube-proxy-thwhv is already assigned to node \"ha-929592-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-thwhv" node="ha-929592-m04"
	E0914 17:08:44.837068       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 858b1075-344d-4b2d-baed-8eea46a2f708(kube-system/kube-proxy-thwhv) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-thwhv"
	E0914 17:08:44.837157       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-thwhv\": pod kube-proxy-thwhv is already assigned to node \"ha-929592-m04\"" pod="kube-system/kube-proxy-thwhv"
	I0914 17:08:44.837232       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-thwhv" node="ha-929592-m04"
	E0914 17:08:44.837022       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-l7g8d\": pod kube-proxy-l7g8d is already assigned to node \"ha-929592-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-l7g8d" node="ha-929592-m04"
	E0914 17:08:44.839305       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod bdb91643-a0e4-4162-aeb3-0d94749f04df(kube-system/kube-proxy-l7g8d) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-l7g8d"
	E0914 17:08:44.839486       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-l7g8d\": pod kube-proxy-l7g8d is already assigned to node \"ha-929592-m04\"" pod="kube-system/kube-proxy-l7g8d"
	I0914 17:08:44.839536       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-l7g8d" node="ha-929592-m04"
	
	
	==> kubelet <==
	Sep 14 17:10:40 ha-929592 kubelet[1305]: E0914 17:10:40.179747    1305 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726333840179232003,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 17:10:40 ha-929592 kubelet[1305]: E0914 17:10:40.180041    1305 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726333840179232003,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 17:10:50 ha-929592 kubelet[1305]: E0914 17:10:50.182062    1305 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726333850181784991,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 17:10:50 ha-929592 kubelet[1305]: E0914 17:10:50.182128    1305 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726333850181784991,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 17:11:00 ha-929592 kubelet[1305]: E0914 17:11:00.183743    1305 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726333860183309032,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 17:11:00 ha-929592 kubelet[1305]: E0914 17:11:00.184091    1305 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726333860183309032,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 17:11:10 ha-929592 kubelet[1305]: E0914 17:11:10.186503    1305 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726333870185903827,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 17:11:10 ha-929592 kubelet[1305]: E0914 17:11:10.186878    1305 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726333870185903827,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 17:11:20 ha-929592 kubelet[1305]: E0914 17:11:20.188800    1305 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726333880188197769,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 17:11:20 ha-929592 kubelet[1305]: E0914 17:11:20.189254    1305 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726333880188197769,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 17:11:30 ha-929592 kubelet[1305]: E0914 17:11:30.083026    1305 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 14 17:11:30 ha-929592 kubelet[1305]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 14 17:11:30 ha-929592 kubelet[1305]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 14 17:11:30 ha-929592 kubelet[1305]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 14 17:11:30 ha-929592 kubelet[1305]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 14 17:11:30 ha-929592 kubelet[1305]: E0914 17:11:30.190859    1305 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726333890190380736,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 17:11:30 ha-929592 kubelet[1305]: E0914 17:11:30.191027    1305 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726333890190380736,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 17:11:40 ha-929592 kubelet[1305]: E0914 17:11:40.193360    1305 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726333900192824251,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 17:11:40 ha-929592 kubelet[1305]: E0914 17:11:40.193807    1305 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726333900192824251,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 17:11:50 ha-929592 kubelet[1305]: E0914 17:11:50.196749    1305 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726333910195877198,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 17:11:50 ha-929592 kubelet[1305]: E0914 17:11:50.197231    1305 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726333910195877198,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 17:12:00 ha-929592 kubelet[1305]: E0914 17:12:00.199819    1305 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726333920199330262,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 17:12:00 ha-929592 kubelet[1305]: E0914 17:12:00.199882    1305 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726333920199330262,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 17:12:10 ha-929592 kubelet[1305]: E0914 17:12:10.201533    1305 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726333930201009794,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 17:12:10 ha-929592 kubelet[1305]: E0914 17:12:10.201610    1305 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726333930201009794,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-929592 -n ha-929592
helpers_test.go:261: (dbg) Run:  kubectl --context ha-929592 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/StopSecondaryNode (141.89s)
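Aside from the stop timeout itself, the kubelet log above repeatedly fails its iptables canary with "can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)", which normally just means the ip6table_nat kernel module is not available on the node. A minimal, illustrative check (not part of the test suite; the module name is taken from the error message, and a kernel with the table built in would pass even though /proc/modules does not list it) could look like this in Go:

	// check_ip6table_nat.go - illustrative sketch only; assumes a Linux node with /proc/modules
	package main

	import (
		"bufio"
		"fmt"
		"os"
		"strings"
	)

	func main() {
		f, err := os.Open("/proc/modules")
		if err != nil {
			fmt.Println("cannot read /proc/modules:", err)
			return
		}
		defer f.Close()

		scanner := bufio.NewScanner(f)
		for scanner.Scan() {
			// each /proc/modules line starts with the module name followed by its size
			if strings.HasPrefix(scanner.Text(), "ip6table_nat ") {
				fmt.Println("ip6table_nat is loaded")
				return
			}
		}
		fmt.Println("ip6table_nat not listed; the kubelet canary error above would be expected")
	}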

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartSecondaryNode (53.26s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-linux-amd64 -p ha-929592 node start m02 -v=7 --alsologtostderr
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-929592 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-929592 status -v=7 --alsologtostderr: exit status 3 (3.191736095s)

                                                
                                                
-- stdout --
	ha-929592
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-929592-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-929592-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-929592-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0914 17:12:15.180053   32298 out.go:345] Setting OutFile to fd 1 ...
	I0914 17:12:15.180304   32298 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 17:12:15.180313   32298 out.go:358] Setting ErrFile to fd 2...
	I0914 17:12:15.180318   32298 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 17:12:15.180481   32298 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19643-8806/.minikube/bin
	I0914 17:12:15.180635   32298 out.go:352] Setting JSON to false
	I0914 17:12:15.180664   32298 mustload.go:65] Loading cluster: ha-929592
	I0914 17:12:15.180766   32298 notify.go:220] Checking for updates...
	I0914 17:12:15.181092   32298 config.go:182] Loaded profile config "ha-929592": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0914 17:12:15.181110   32298 status.go:255] checking status of ha-929592 ...
	I0914 17:12:15.181578   32298 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 17:12:15.181630   32298 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 17:12:15.201521   32298 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39887
	I0914 17:12:15.202006   32298 main.go:141] libmachine: () Calling .GetVersion
	I0914 17:12:15.202641   32298 main.go:141] libmachine: Using API Version  1
	I0914 17:12:15.202669   32298 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 17:12:15.203085   32298 main.go:141] libmachine: () Calling .GetMachineName
	I0914 17:12:15.203318   32298 main.go:141] libmachine: (ha-929592) Calling .GetState
	I0914 17:12:15.205034   32298 status.go:330] ha-929592 host status = "Running" (err=<nil>)
	I0914 17:12:15.205052   32298 host.go:66] Checking if "ha-929592" exists ...
	I0914 17:12:15.205479   32298 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 17:12:15.205528   32298 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 17:12:15.222086   32298 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35491
	I0914 17:12:15.222546   32298 main.go:141] libmachine: () Calling .GetVersion
	I0914 17:12:15.222999   32298 main.go:141] libmachine: Using API Version  1
	I0914 17:12:15.223018   32298 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 17:12:15.223386   32298 main.go:141] libmachine: () Calling .GetMachineName
	I0914 17:12:15.223597   32298 main.go:141] libmachine: (ha-929592) Calling .GetIP
	I0914 17:12:15.226561   32298 main.go:141] libmachine: (ha-929592) DBG | domain ha-929592 has defined MAC address 52:54:00:5c:cb:09 in network mk-ha-929592
	I0914 17:12:15.226976   32298 main.go:141] libmachine: (ha-929592) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:cb:09", ip: ""} in network mk-ha-929592: {Iface:virbr1 ExpiryTime:2024-09-14 18:05:06 +0000 UTC Type:0 Mac:52:54:00:5c:cb:09 Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:ha-929592 Clientid:01:52:54:00:5c:cb:09}
	I0914 17:12:15.227009   32298 main.go:141] libmachine: (ha-929592) DBG | domain ha-929592 has defined IP address 192.168.39.54 and MAC address 52:54:00:5c:cb:09 in network mk-ha-929592
	I0914 17:12:15.227122   32298 host.go:66] Checking if "ha-929592" exists ...
	I0914 17:12:15.227416   32298 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 17:12:15.227456   32298 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 17:12:15.243060   32298 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43279
	I0914 17:12:15.243505   32298 main.go:141] libmachine: () Calling .GetVersion
	I0914 17:12:15.243972   32298 main.go:141] libmachine: Using API Version  1
	I0914 17:12:15.243992   32298 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 17:12:15.244373   32298 main.go:141] libmachine: () Calling .GetMachineName
	I0914 17:12:15.244550   32298 main.go:141] libmachine: (ha-929592) Calling .DriverName
	I0914 17:12:15.244719   32298 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0914 17:12:15.244755   32298 main.go:141] libmachine: (ha-929592) Calling .GetSSHHostname
	I0914 17:12:15.247588   32298 main.go:141] libmachine: (ha-929592) DBG | domain ha-929592 has defined MAC address 52:54:00:5c:cb:09 in network mk-ha-929592
	I0914 17:12:15.248081   32298 main.go:141] libmachine: (ha-929592) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:cb:09", ip: ""} in network mk-ha-929592: {Iface:virbr1 ExpiryTime:2024-09-14 18:05:06 +0000 UTC Type:0 Mac:52:54:00:5c:cb:09 Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:ha-929592 Clientid:01:52:54:00:5c:cb:09}
	I0914 17:12:15.248104   32298 main.go:141] libmachine: (ha-929592) DBG | domain ha-929592 has defined IP address 192.168.39.54 and MAC address 52:54:00:5c:cb:09 in network mk-ha-929592
	I0914 17:12:15.248299   32298 main.go:141] libmachine: (ha-929592) Calling .GetSSHPort
	I0914 17:12:15.248527   32298 main.go:141] libmachine: (ha-929592) Calling .GetSSHKeyPath
	I0914 17:12:15.248698   32298 main.go:141] libmachine: (ha-929592) Calling .GetSSHUsername
	I0914 17:12:15.248870   32298 sshutil.go:53] new ssh client: &{IP:192.168.39.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/ha-929592/id_rsa Username:docker}
	I0914 17:12:15.334406   32298 ssh_runner.go:195] Run: systemctl --version
	I0914 17:12:15.340427   32298 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 17:12:15.355845   32298 kubeconfig.go:125] found "ha-929592" server: "https://192.168.39.254:8443"
	I0914 17:12:15.355878   32298 api_server.go:166] Checking apiserver status ...
	I0914 17:12:15.355909   32298 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 17:12:15.369716   32298 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1079/cgroup
	W0914 17:12:15.379669   32298 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1079/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0914 17:12:15.379719   32298 ssh_runner.go:195] Run: ls
	I0914 17:12:15.383833   32298 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0914 17:12:15.387905   32298 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0914 17:12:15.387926   32298 status.go:422] ha-929592 apiserver status = Running (err=<nil>)
	I0914 17:12:15.387935   32298 status.go:257] ha-929592 status: &{Name:ha-929592 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0914 17:12:15.387953   32298 status.go:255] checking status of ha-929592-m02 ...
	I0914 17:12:15.388273   32298 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 17:12:15.388309   32298 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 17:12:15.403384   32298 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34493
	I0914 17:12:15.403800   32298 main.go:141] libmachine: () Calling .GetVersion
	I0914 17:12:15.404320   32298 main.go:141] libmachine: Using API Version  1
	I0914 17:12:15.404344   32298 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 17:12:15.404719   32298 main.go:141] libmachine: () Calling .GetMachineName
	I0914 17:12:15.404937   32298 main.go:141] libmachine: (ha-929592-m02) Calling .GetState
	I0914 17:12:15.406539   32298 status.go:330] ha-929592-m02 host status = "Running" (err=<nil>)
	I0914 17:12:15.406556   32298 host.go:66] Checking if "ha-929592-m02" exists ...
	I0914 17:12:15.406956   32298 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 17:12:15.407020   32298 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 17:12:15.422631   32298 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35037
	I0914 17:12:15.423074   32298 main.go:141] libmachine: () Calling .GetVersion
	I0914 17:12:15.423626   32298 main.go:141] libmachine: Using API Version  1
	I0914 17:12:15.423644   32298 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 17:12:15.423905   32298 main.go:141] libmachine: () Calling .GetMachineName
	I0914 17:12:15.424069   32298 main.go:141] libmachine: (ha-929592-m02) Calling .GetIP
	I0914 17:12:15.427407   32298 main.go:141] libmachine: (ha-929592-m02) DBG | domain ha-929592-m02 has defined MAC address 52:54:00:23:9e:43 in network mk-ha-929592
	I0914 17:12:15.427940   32298 main.go:141] libmachine: (ha-929592-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:9e:43", ip: ""} in network mk-ha-929592: {Iface:virbr1 ExpiryTime:2024-09-14 18:05:49 +0000 UTC Type:0 Mac:52:54:00:23:9e:43 Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:ha-929592-m02 Clientid:01:52:54:00:23:9e:43}
	I0914 17:12:15.427967   32298 main.go:141] libmachine: (ha-929592-m02) DBG | domain ha-929592-m02 has defined IP address 192.168.39.148 and MAC address 52:54:00:23:9e:43 in network mk-ha-929592
	I0914 17:12:15.428154   32298 host.go:66] Checking if "ha-929592-m02" exists ...
	I0914 17:12:15.428480   32298 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 17:12:15.428523   32298 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 17:12:15.444996   32298 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33803
	I0914 17:12:15.445367   32298 main.go:141] libmachine: () Calling .GetVersion
	I0914 17:12:15.445790   32298 main.go:141] libmachine: Using API Version  1
	I0914 17:12:15.445810   32298 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 17:12:15.446105   32298 main.go:141] libmachine: () Calling .GetMachineName
	I0914 17:12:15.446300   32298 main.go:141] libmachine: (ha-929592-m02) Calling .DriverName
	I0914 17:12:15.446511   32298 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0914 17:12:15.446530   32298 main.go:141] libmachine: (ha-929592-m02) Calling .GetSSHHostname
	I0914 17:12:15.449286   32298 main.go:141] libmachine: (ha-929592-m02) DBG | domain ha-929592-m02 has defined MAC address 52:54:00:23:9e:43 in network mk-ha-929592
	I0914 17:12:15.449585   32298 main.go:141] libmachine: (ha-929592-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:9e:43", ip: ""} in network mk-ha-929592: {Iface:virbr1 ExpiryTime:2024-09-14 18:05:49 +0000 UTC Type:0 Mac:52:54:00:23:9e:43 Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:ha-929592-m02 Clientid:01:52:54:00:23:9e:43}
	I0914 17:12:15.449612   32298 main.go:141] libmachine: (ha-929592-m02) DBG | domain ha-929592-m02 has defined IP address 192.168.39.148 and MAC address 52:54:00:23:9e:43 in network mk-ha-929592
	I0914 17:12:15.449767   32298 main.go:141] libmachine: (ha-929592-m02) Calling .GetSSHPort
	I0914 17:12:15.449957   32298 main.go:141] libmachine: (ha-929592-m02) Calling .GetSSHKeyPath
	I0914 17:12:15.450108   32298 main.go:141] libmachine: (ha-929592-m02) Calling .GetSSHUsername
	I0914 17:12:15.450454   32298 sshutil.go:53] new ssh client: &{IP:192.168.39.148 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/ha-929592-m02/id_rsa Username:docker}
	W0914 17:12:17.986510   32298 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.148:22: connect: no route to host
	W0914 17:12:17.986622   32298 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.148:22: connect: no route to host
	E0914 17:12:17.986658   32298 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.148:22: connect: no route to host
	I0914 17:12:17.986671   32298 status.go:257] ha-929592-m02 status: &{Name:ha-929592-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0914 17:12:17.986702   32298 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.148:22: connect: no route to host
	I0914 17:12:17.986714   32298 status.go:255] checking status of ha-929592-m03 ...
	I0914 17:12:17.987028   32298 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 17:12:17.987080   32298 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 17:12:18.002085   32298 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41295
	I0914 17:12:18.002771   32298 main.go:141] libmachine: () Calling .GetVersion
	I0914 17:12:18.003236   32298 main.go:141] libmachine: Using API Version  1
	I0914 17:12:18.003264   32298 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 17:12:18.003585   32298 main.go:141] libmachine: () Calling .GetMachineName
	I0914 17:12:18.003748   32298 main.go:141] libmachine: (ha-929592-m03) Calling .GetState
	I0914 17:12:18.005320   32298 status.go:330] ha-929592-m03 host status = "Running" (err=<nil>)
	I0914 17:12:18.005338   32298 host.go:66] Checking if "ha-929592-m03" exists ...
	I0914 17:12:18.005746   32298 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 17:12:18.005791   32298 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 17:12:18.020421   32298 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44093
	I0914 17:12:18.020892   32298 main.go:141] libmachine: () Calling .GetVersion
	I0914 17:12:18.021388   32298 main.go:141] libmachine: Using API Version  1
	I0914 17:12:18.021408   32298 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 17:12:18.021693   32298 main.go:141] libmachine: () Calling .GetMachineName
	I0914 17:12:18.021887   32298 main.go:141] libmachine: (ha-929592-m03) Calling .GetIP
	I0914 17:12:18.024372   32298 main.go:141] libmachine: (ha-929592-m03) DBG | domain ha-929592-m03 has defined MAC address 52:54:00:49:df:f1 in network mk-ha-929592
	I0914 17:12:18.024691   32298 main.go:141] libmachine: (ha-929592-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:df:f1", ip: ""} in network mk-ha-929592: {Iface:virbr1 ExpiryTime:2024-09-14 18:07:04 +0000 UTC Type:0 Mac:52:54:00:49:df:f1 Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:ha-929592-m03 Clientid:01:52:54:00:49:df:f1}
	I0914 17:12:18.024710   32298 main.go:141] libmachine: (ha-929592-m03) DBG | domain ha-929592-m03 has defined IP address 192.168.39.39 and MAC address 52:54:00:49:df:f1 in network mk-ha-929592
	I0914 17:12:18.024807   32298 host.go:66] Checking if "ha-929592-m03" exists ...
	I0914 17:12:18.025130   32298 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 17:12:18.025176   32298 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 17:12:18.039830   32298 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46215
	I0914 17:12:18.040158   32298 main.go:141] libmachine: () Calling .GetVersion
	I0914 17:12:18.040579   32298 main.go:141] libmachine: Using API Version  1
	I0914 17:12:18.040598   32298 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 17:12:18.040907   32298 main.go:141] libmachine: () Calling .GetMachineName
	I0914 17:12:18.041096   32298 main.go:141] libmachine: (ha-929592-m03) Calling .DriverName
	I0914 17:12:18.041248   32298 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0914 17:12:18.041267   32298 main.go:141] libmachine: (ha-929592-m03) Calling .GetSSHHostname
	I0914 17:12:18.043781   32298 main.go:141] libmachine: (ha-929592-m03) DBG | domain ha-929592-m03 has defined MAC address 52:54:00:49:df:f1 in network mk-ha-929592
	I0914 17:12:18.044174   32298 main.go:141] libmachine: (ha-929592-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:df:f1", ip: ""} in network mk-ha-929592: {Iface:virbr1 ExpiryTime:2024-09-14 18:07:04 +0000 UTC Type:0 Mac:52:54:00:49:df:f1 Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:ha-929592-m03 Clientid:01:52:54:00:49:df:f1}
	I0914 17:12:18.044187   32298 main.go:141] libmachine: (ha-929592-m03) DBG | domain ha-929592-m03 has defined IP address 192.168.39.39 and MAC address 52:54:00:49:df:f1 in network mk-ha-929592
	I0914 17:12:18.044327   32298 main.go:141] libmachine: (ha-929592-m03) Calling .GetSSHPort
	I0914 17:12:18.044475   32298 main.go:141] libmachine: (ha-929592-m03) Calling .GetSSHKeyPath
	I0914 17:12:18.044618   32298 main.go:141] libmachine: (ha-929592-m03) Calling .GetSSHUsername
	I0914 17:12:18.044753   32298 sshutil.go:53] new ssh client: &{IP:192.168.39.39 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/ha-929592-m03/id_rsa Username:docker}
	I0914 17:12:18.125376   32298 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 17:12:18.143826   32298 kubeconfig.go:125] found "ha-929592" server: "https://192.168.39.254:8443"
	I0914 17:12:18.143856   32298 api_server.go:166] Checking apiserver status ...
	I0914 17:12:18.143897   32298 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 17:12:18.157740   32298 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1387/cgroup
	W0914 17:12:18.167014   32298 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1387/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0914 17:12:18.167094   32298 ssh_runner.go:195] Run: ls
	I0914 17:12:18.171224   32298 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0914 17:12:18.175396   32298 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0914 17:12:18.175417   32298 status.go:422] ha-929592-m03 apiserver status = Running (err=<nil>)
	I0914 17:12:18.175424   32298 status.go:257] ha-929592-m03 status: &{Name:ha-929592-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0914 17:12:18.175440   32298 status.go:255] checking status of ha-929592-m04 ...
	I0914 17:12:18.175764   32298 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 17:12:18.175800   32298 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 17:12:18.191091   32298 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36127
	I0914 17:12:18.191730   32298 main.go:141] libmachine: () Calling .GetVersion
	I0914 17:12:18.192251   32298 main.go:141] libmachine: Using API Version  1
	I0914 17:12:18.192277   32298 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 17:12:18.192634   32298 main.go:141] libmachine: () Calling .GetMachineName
	I0914 17:12:18.192819   32298 main.go:141] libmachine: (ha-929592-m04) Calling .GetState
	I0914 17:12:18.194469   32298 status.go:330] ha-929592-m04 host status = "Running" (err=<nil>)
	I0914 17:12:18.194486   32298 host.go:66] Checking if "ha-929592-m04" exists ...
	I0914 17:12:18.194902   32298 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 17:12:18.194947   32298 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 17:12:18.210235   32298 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45449
	I0914 17:12:18.210711   32298 main.go:141] libmachine: () Calling .GetVersion
	I0914 17:12:18.211247   32298 main.go:141] libmachine: Using API Version  1
	I0914 17:12:18.211269   32298 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 17:12:18.211559   32298 main.go:141] libmachine: () Calling .GetMachineName
	I0914 17:12:18.211752   32298 main.go:141] libmachine: (ha-929592-m04) Calling .GetIP
	I0914 17:12:18.214257   32298 main.go:141] libmachine: (ha-929592-m04) DBG | domain ha-929592-m04 has defined MAC address 52:54:00:7a:18:a1 in network mk-ha-929592
	I0914 17:12:18.214617   32298 main.go:141] libmachine: (ha-929592-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:18:a1", ip: ""} in network mk-ha-929592: {Iface:virbr1 ExpiryTime:2024-09-14 18:08:29 +0000 UTC Type:0 Mac:52:54:00:7a:18:a1 Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:ha-929592-m04 Clientid:01:52:54:00:7a:18:a1}
	I0914 17:12:18.214643   32298 main.go:141] libmachine: (ha-929592-m04) DBG | domain ha-929592-m04 has defined IP address 192.168.39.51 and MAC address 52:54:00:7a:18:a1 in network mk-ha-929592
	I0914 17:12:18.214731   32298 host.go:66] Checking if "ha-929592-m04" exists ...
	I0914 17:12:18.215027   32298 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 17:12:18.215066   32298 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 17:12:18.230048   32298 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39139
	I0914 17:12:18.230479   32298 main.go:141] libmachine: () Calling .GetVersion
	I0914 17:12:18.231025   32298 main.go:141] libmachine: Using API Version  1
	I0914 17:12:18.231043   32298 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 17:12:18.231349   32298 main.go:141] libmachine: () Calling .GetMachineName
	I0914 17:12:18.231570   32298 main.go:141] libmachine: (ha-929592-m04) Calling .DriverName
	I0914 17:12:18.231766   32298 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0914 17:12:18.231789   32298 main.go:141] libmachine: (ha-929592-m04) Calling .GetSSHHostname
	I0914 17:12:18.234720   32298 main.go:141] libmachine: (ha-929592-m04) DBG | domain ha-929592-m04 has defined MAC address 52:54:00:7a:18:a1 in network mk-ha-929592
	I0914 17:12:18.235105   32298 main.go:141] libmachine: (ha-929592-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:18:a1", ip: ""} in network mk-ha-929592: {Iface:virbr1 ExpiryTime:2024-09-14 18:08:29 +0000 UTC Type:0 Mac:52:54:00:7a:18:a1 Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:ha-929592-m04 Clientid:01:52:54:00:7a:18:a1}
	I0914 17:12:18.235123   32298 main.go:141] libmachine: (ha-929592-m04) DBG | domain ha-929592-m04 has defined IP address 192.168.39.51 and MAC address 52:54:00:7a:18:a1 in network mk-ha-929592
	I0914 17:12:18.235300   32298 main.go:141] libmachine: (ha-929592-m04) Calling .GetSSHPort
	I0914 17:12:18.235478   32298 main.go:141] libmachine: (ha-929592-m04) Calling .GetSSHKeyPath
	I0914 17:12:18.235624   32298 main.go:141] libmachine: (ha-929592-m04) Calling .GetSSHUsername
	I0914 17:12:18.235754   32298 sshutil.go:53] new ssh client: &{IP:192.168.39.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/ha-929592-m04/id_rsa Username:docker}
	I0914 17:12:18.313178   32298 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 17:12:18.327036   32298 status.go:257] ha-929592-m04 status: &{Name:ha-929592-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
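The "no route to host" errors in the stderr above are raised while the status check tries to open an SSH session to ha-929592-m02 (192.168.39.148:22, per the DHCP lease logged earlier). An illustrative reachability probe along the same lines (plain Go standard library, address copied from the log; this is a sketch, not the code minikube itself runs) would be:

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// ha-929592-m02's SSH endpoint, as reported in the log above
		addr := "192.168.39.148:22"
		conn, err := net.DialTimeout("tcp", addr, 5*time.Second)
		if err != nil {
			// a node that has not come back on the network yet fails here,
			// e.g. with "connect: no route to host"
			fmt.Println("unreachable:", err)
			return
		}
		defer conn.Close()
		fmt.Println("reachable:", addr)
	}

Until the m02 guest answers on port 22 again, the status command keeps reporting host: Error and kubelet/apiserver: Nonexistent for that node, which matches the stdout above.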
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-929592 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-929592 status -v=7 --alsologtostderr: exit status 3 (2.579216783s)

                                                
                                                
-- stdout --
	ha-929592
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-929592-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-929592-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-929592-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0914 17:12:18.875168   32398 out.go:345] Setting OutFile to fd 1 ...
	I0914 17:12:18.875308   32398 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 17:12:18.875320   32398 out.go:358] Setting ErrFile to fd 2...
	I0914 17:12:18.875326   32398 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 17:12:18.875537   32398 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19643-8806/.minikube/bin
	I0914 17:12:18.875764   32398 out.go:352] Setting JSON to false
	I0914 17:12:18.875804   32398 mustload.go:65] Loading cluster: ha-929592
	I0914 17:12:18.875906   32398 notify.go:220] Checking for updates...
	I0914 17:12:18.876316   32398 config.go:182] Loaded profile config "ha-929592": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0914 17:12:18.876335   32398 status.go:255] checking status of ha-929592 ...
	I0914 17:12:18.876763   32398 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 17:12:18.876828   32398 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 17:12:18.896260   32398 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38871
	I0914 17:12:18.896693   32398 main.go:141] libmachine: () Calling .GetVersion
	I0914 17:12:18.897293   32398 main.go:141] libmachine: Using API Version  1
	I0914 17:12:18.897319   32398 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 17:12:18.897695   32398 main.go:141] libmachine: () Calling .GetMachineName
	I0914 17:12:18.897891   32398 main.go:141] libmachine: (ha-929592) Calling .GetState
	I0914 17:12:18.899563   32398 status.go:330] ha-929592 host status = "Running" (err=<nil>)
	I0914 17:12:18.899586   32398 host.go:66] Checking if "ha-929592" exists ...
	I0914 17:12:18.899930   32398 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 17:12:18.899968   32398 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 17:12:18.915260   32398 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32989
	I0914 17:12:18.915680   32398 main.go:141] libmachine: () Calling .GetVersion
	I0914 17:12:18.916129   32398 main.go:141] libmachine: Using API Version  1
	I0914 17:12:18.916161   32398 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 17:12:18.916483   32398 main.go:141] libmachine: () Calling .GetMachineName
	I0914 17:12:18.916689   32398 main.go:141] libmachine: (ha-929592) Calling .GetIP
	I0914 17:12:18.919388   32398 main.go:141] libmachine: (ha-929592) DBG | domain ha-929592 has defined MAC address 52:54:00:5c:cb:09 in network mk-ha-929592
	I0914 17:12:18.919737   32398 main.go:141] libmachine: (ha-929592) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:cb:09", ip: ""} in network mk-ha-929592: {Iface:virbr1 ExpiryTime:2024-09-14 18:05:06 +0000 UTC Type:0 Mac:52:54:00:5c:cb:09 Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:ha-929592 Clientid:01:52:54:00:5c:cb:09}
	I0914 17:12:18.919761   32398 main.go:141] libmachine: (ha-929592) DBG | domain ha-929592 has defined IP address 192.168.39.54 and MAC address 52:54:00:5c:cb:09 in network mk-ha-929592
	I0914 17:12:18.919929   32398 host.go:66] Checking if "ha-929592" exists ...
	I0914 17:12:18.920223   32398 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 17:12:18.920287   32398 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 17:12:18.934958   32398 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41111
	I0914 17:12:18.935416   32398 main.go:141] libmachine: () Calling .GetVersion
	I0914 17:12:18.935868   32398 main.go:141] libmachine: Using API Version  1
	I0914 17:12:18.935890   32398 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 17:12:18.936219   32398 main.go:141] libmachine: () Calling .GetMachineName
	I0914 17:12:18.936438   32398 main.go:141] libmachine: (ha-929592) Calling .DriverName
	I0914 17:12:18.936652   32398 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0914 17:12:18.936680   32398 main.go:141] libmachine: (ha-929592) Calling .GetSSHHostname
	I0914 17:12:18.939212   32398 main.go:141] libmachine: (ha-929592) DBG | domain ha-929592 has defined MAC address 52:54:00:5c:cb:09 in network mk-ha-929592
	I0914 17:12:18.939580   32398 main.go:141] libmachine: (ha-929592) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:cb:09", ip: ""} in network mk-ha-929592: {Iface:virbr1 ExpiryTime:2024-09-14 18:05:06 +0000 UTC Type:0 Mac:52:54:00:5c:cb:09 Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:ha-929592 Clientid:01:52:54:00:5c:cb:09}
	I0914 17:12:18.939606   32398 main.go:141] libmachine: (ha-929592) DBG | domain ha-929592 has defined IP address 192.168.39.54 and MAC address 52:54:00:5c:cb:09 in network mk-ha-929592
	I0914 17:12:18.939756   32398 main.go:141] libmachine: (ha-929592) Calling .GetSSHPort
	I0914 17:12:18.939926   32398 main.go:141] libmachine: (ha-929592) Calling .GetSSHKeyPath
	I0914 17:12:18.940052   32398 main.go:141] libmachine: (ha-929592) Calling .GetSSHUsername
	I0914 17:12:18.940160   32398 sshutil.go:53] new ssh client: &{IP:192.168.39.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/ha-929592/id_rsa Username:docker}
	I0914 17:12:19.026019   32398 ssh_runner.go:195] Run: systemctl --version
	I0914 17:12:19.033358   32398 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 17:12:19.051405   32398 kubeconfig.go:125] found "ha-929592" server: "https://192.168.39.254:8443"
	I0914 17:12:19.051446   32398 api_server.go:166] Checking apiserver status ...
	I0914 17:12:19.051480   32398 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 17:12:19.066166   32398 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1079/cgroup
	W0914 17:12:19.076616   32398 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1079/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0914 17:12:19.076675   32398 ssh_runner.go:195] Run: ls
	I0914 17:12:19.081336   32398 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0914 17:12:19.086086   32398 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0914 17:12:19.086116   32398 status.go:422] ha-929592 apiserver status = Running (err=<nil>)
	I0914 17:12:19.086128   32398 status.go:257] ha-929592 status: &{Name:ha-929592 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0914 17:12:19.086153   32398 status.go:255] checking status of ha-929592-m02 ...
	I0914 17:12:19.086495   32398 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 17:12:19.086547   32398 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 17:12:19.101883   32398 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45411
	I0914 17:12:19.102371   32398 main.go:141] libmachine: () Calling .GetVersion
	I0914 17:12:19.102841   32398 main.go:141] libmachine: Using API Version  1
	I0914 17:12:19.102861   32398 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 17:12:19.103144   32398 main.go:141] libmachine: () Calling .GetMachineName
	I0914 17:12:19.103341   32398 main.go:141] libmachine: (ha-929592-m02) Calling .GetState
	I0914 17:12:19.104741   32398 status.go:330] ha-929592-m02 host status = "Running" (err=<nil>)
	I0914 17:12:19.104759   32398 host.go:66] Checking if "ha-929592-m02" exists ...
	I0914 17:12:19.105089   32398 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 17:12:19.105132   32398 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 17:12:19.120268   32398 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36787
	I0914 17:12:19.120730   32398 main.go:141] libmachine: () Calling .GetVersion
	I0914 17:12:19.121169   32398 main.go:141] libmachine: Using API Version  1
	I0914 17:12:19.121191   32398 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 17:12:19.121493   32398 main.go:141] libmachine: () Calling .GetMachineName
	I0914 17:12:19.121650   32398 main.go:141] libmachine: (ha-929592-m02) Calling .GetIP
	I0914 17:12:19.124103   32398 main.go:141] libmachine: (ha-929592-m02) DBG | domain ha-929592-m02 has defined MAC address 52:54:00:23:9e:43 in network mk-ha-929592
	I0914 17:12:19.124570   32398 main.go:141] libmachine: (ha-929592-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:9e:43", ip: ""} in network mk-ha-929592: {Iface:virbr1 ExpiryTime:2024-09-14 18:05:49 +0000 UTC Type:0 Mac:52:54:00:23:9e:43 Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:ha-929592-m02 Clientid:01:52:54:00:23:9e:43}
	I0914 17:12:19.124602   32398 main.go:141] libmachine: (ha-929592-m02) DBG | domain ha-929592-m02 has defined IP address 192.168.39.148 and MAC address 52:54:00:23:9e:43 in network mk-ha-929592
	I0914 17:12:19.124768   32398 host.go:66] Checking if "ha-929592-m02" exists ...
	I0914 17:12:19.125163   32398 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 17:12:19.125211   32398 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 17:12:19.141893   32398 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44777
	I0914 17:12:19.142364   32398 main.go:141] libmachine: () Calling .GetVersion
	I0914 17:12:19.142863   32398 main.go:141] libmachine: Using API Version  1
	I0914 17:12:19.142884   32398 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 17:12:19.143191   32398 main.go:141] libmachine: () Calling .GetMachineName
	I0914 17:12:19.143385   32398 main.go:141] libmachine: (ha-929592-m02) Calling .DriverName
	I0914 17:12:19.143574   32398 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0914 17:12:19.143594   32398 main.go:141] libmachine: (ha-929592-m02) Calling .GetSSHHostname
	I0914 17:12:19.146595   32398 main.go:141] libmachine: (ha-929592-m02) DBG | domain ha-929592-m02 has defined MAC address 52:54:00:23:9e:43 in network mk-ha-929592
	I0914 17:12:19.146999   32398 main.go:141] libmachine: (ha-929592-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:9e:43", ip: ""} in network mk-ha-929592: {Iface:virbr1 ExpiryTime:2024-09-14 18:05:49 +0000 UTC Type:0 Mac:52:54:00:23:9e:43 Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:ha-929592-m02 Clientid:01:52:54:00:23:9e:43}
	I0914 17:12:19.147026   32398 main.go:141] libmachine: (ha-929592-m02) DBG | domain ha-929592-m02 has defined IP address 192.168.39.148 and MAC address 52:54:00:23:9e:43 in network mk-ha-929592
	I0914 17:12:19.147165   32398 main.go:141] libmachine: (ha-929592-m02) Calling .GetSSHPort
	I0914 17:12:19.147336   32398 main.go:141] libmachine: (ha-929592-m02) Calling .GetSSHKeyPath
	I0914 17:12:19.147462   32398 main.go:141] libmachine: (ha-929592-m02) Calling .GetSSHUsername
	I0914 17:12:19.147594   32398 sshutil.go:53] new ssh client: &{IP:192.168.39.148 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/ha-929592-m02/id_rsa Username:docker}
	W0914 17:12:21.058518   32398 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.148:22: connect: no route to host
	W0914 17:12:21.058625   32398 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.148:22: connect: no route to host
	E0914 17:12:21.058644   32398 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.148:22: connect: no route to host
	I0914 17:12:21.058653   32398 status.go:257] ha-929592-m02 status: &{Name:ha-929592-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0914 17:12:21.058677   32398 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.148:22: connect: no route to host
	I0914 17:12:21.058699   32398 status.go:255] checking status of ha-929592-m03 ...
	I0914 17:12:21.059104   32398 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 17:12:21.059186   32398 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 17:12:21.074838   32398 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33757
	I0914 17:12:21.075396   32398 main.go:141] libmachine: () Calling .GetVersion
	I0914 17:12:21.075873   32398 main.go:141] libmachine: Using API Version  1
	I0914 17:12:21.075894   32398 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 17:12:21.076202   32398 main.go:141] libmachine: () Calling .GetMachineName
	I0914 17:12:21.076372   32398 main.go:141] libmachine: (ha-929592-m03) Calling .GetState
	I0914 17:12:21.078256   32398 status.go:330] ha-929592-m03 host status = "Running" (err=<nil>)
	I0914 17:12:21.078274   32398 host.go:66] Checking if "ha-929592-m03" exists ...
	I0914 17:12:21.078622   32398 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 17:12:21.078660   32398 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 17:12:21.094386   32398 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41309
	I0914 17:12:21.094966   32398 main.go:141] libmachine: () Calling .GetVersion
	I0914 17:12:21.095488   32398 main.go:141] libmachine: Using API Version  1
	I0914 17:12:21.095508   32398 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 17:12:21.095800   32398 main.go:141] libmachine: () Calling .GetMachineName
	I0914 17:12:21.096061   32398 main.go:141] libmachine: (ha-929592-m03) Calling .GetIP
	I0914 17:12:21.098547   32398 main.go:141] libmachine: (ha-929592-m03) DBG | domain ha-929592-m03 has defined MAC address 52:54:00:49:df:f1 in network mk-ha-929592
	I0914 17:12:21.098952   32398 main.go:141] libmachine: (ha-929592-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:df:f1", ip: ""} in network mk-ha-929592: {Iface:virbr1 ExpiryTime:2024-09-14 18:07:04 +0000 UTC Type:0 Mac:52:54:00:49:df:f1 Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:ha-929592-m03 Clientid:01:52:54:00:49:df:f1}
	I0914 17:12:21.098986   32398 main.go:141] libmachine: (ha-929592-m03) DBG | domain ha-929592-m03 has defined IP address 192.168.39.39 and MAC address 52:54:00:49:df:f1 in network mk-ha-929592
	I0914 17:12:21.099113   32398 host.go:66] Checking if "ha-929592-m03" exists ...
	I0914 17:12:21.099592   32398 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 17:12:21.099641   32398 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 17:12:21.114276   32398 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42283
	I0914 17:12:21.114664   32398 main.go:141] libmachine: () Calling .GetVersion
	I0914 17:12:21.115119   32398 main.go:141] libmachine: Using API Version  1
	I0914 17:12:21.115142   32398 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 17:12:21.115519   32398 main.go:141] libmachine: () Calling .GetMachineName
	I0914 17:12:21.115727   32398 main.go:141] libmachine: (ha-929592-m03) Calling .DriverName
	I0914 17:12:21.115905   32398 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0914 17:12:21.115923   32398 main.go:141] libmachine: (ha-929592-m03) Calling .GetSSHHostname
	I0914 17:12:21.119046   32398 main.go:141] libmachine: (ha-929592-m03) DBG | domain ha-929592-m03 has defined MAC address 52:54:00:49:df:f1 in network mk-ha-929592
	I0914 17:12:21.119528   32398 main.go:141] libmachine: (ha-929592-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:df:f1", ip: ""} in network mk-ha-929592: {Iface:virbr1 ExpiryTime:2024-09-14 18:07:04 +0000 UTC Type:0 Mac:52:54:00:49:df:f1 Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:ha-929592-m03 Clientid:01:52:54:00:49:df:f1}
	I0914 17:12:21.119555   32398 main.go:141] libmachine: (ha-929592-m03) DBG | domain ha-929592-m03 has defined IP address 192.168.39.39 and MAC address 52:54:00:49:df:f1 in network mk-ha-929592
	I0914 17:12:21.119701   32398 main.go:141] libmachine: (ha-929592-m03) Calling .GetSSHPort
	I0914 17:12:21.119828   32398 main.go:141] libmachine: (ha-929592-m03) Calling .GetSSHKeyPath
	I0914 17:12:21.119954   32398 main.go:141] libmachine: (ha-929592-m03) Calling .GetSSHUsername
	I0914 17:12:21.120093   32398 sshutil.go:53] new ssh client: &{IP:192.168.39.39 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/ha-929592-m03/id_rsa Username:docker}
	I0914 17:12:21.201128   32398 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 17:12:21.216929   32398 kubeconfig.go:125] found "ha-929592" server: "https://192.168.39.254:8443"
	I0914 17:12:21.216960   32398 api_server.go:166] Checking apiserver status ...
	I0914 17:12:21.216992   32398 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 17:12:21.232277   32398 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1387/cgroup
	W0914 17:12:21.243188   32398 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1387/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0914 17:12:21.243256   32398 ssh_runner.go:195] Run: ls
	I0914 17:12:21.247709   32398 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0914 17:12:21.253969   32398 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0914 17:12:21.253993   32398 status.go:422] ha-929592-m03 apiserver status = Running (err=<nil>)
	I0914 17:12:21.254002   32398 status.go:257] ha-929592-m03 status: &{Name:ha-929592-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0914 17:12:21.254020   32398 status.go:255] checking status of ha-929592-m04 ...
	I0914 17:12:21.254363   32398 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 17:12:21.254405   32398 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 17:12:21.269932   32398 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39333
	I0914 17:12:21.270408   32398 main.go:141] libmachine: () Calling .GetVersion
	I0914 17:12:21.270866   32398 main.go:141] libmachine: Using API Version  1
	I0914 17:12:21.270887   32398 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 17:12:21.271190   32398 main.go:141] libmachine: () Calling .GetMachineName
	I0914 17:12:21.271363   32398 main.go:141] libmachine: (ha-929592-m04) Calling .GetState
	I0914 17:12:21.272798   32398 status.go:330] ha-929592-m04 host status = "Running" (err=<nil>)
	I0914 17:12:21.272811   32398 host.go:66] Checking if "ha-929592-m04" exists ...
	I0914 17:12:21.273101   32398 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 17:12:21.273145   32398 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 17:12:21.288388   32398 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35801
	I0914 17:12:21.288842   32398 main.go:141] libmachine: () Calling .GetVersion
	I0914 17:12:21.289377   32398 main.go:141] libmachine: Using API Version  1
	I0914 17:12:21.289393   32398 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 17:12:21.289662   32398 main.go:141] libmachine: () Calling .GetMachineName
	I0914 17:12:21.289838   32398 main.go:141] libmachine: (ha-929592-m04) Calling .GetIP
	I0914 17:12:21.293002   32398 main.go:141] libmachine: (ha-929592-m04) DBG | domain ha-929592-m04 has defined MAC address 52:54:00:7a:18:a1 in network mk-ha-929592
	I0914 17:12:21.293658   32398 main.go:141] libmachine: (ha-929592-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:18:a1", ip: ""} in network mk-ha-929592: {Iface:virbr1 ExpiryTime:2024-09-14 18:08:29 +0000 UTC Type:0 Mac:52:54:00:7a:18:a1 Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:ha-929592-m04 Clientid:01:52:54:00:7a:18:a1}
	I0914 17:12:21.293692   32398 main.go:141] libmachine: (ha-929592-m04) DBG | domain ha-929592-m04 has defined IP address 192.168.39.51 and MAC address 52:54:00:7a:18:a1 in network mk-ha-929592
	I0914 17:12:21.293834   32398 host.go:66] Checking if "ha-929592-m04" exists ...
	I0914 17:12:21.294180   32398 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 17:12:21.294232   32398 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 17:12:21.309748   32398 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37821
	I0914 17:12:21.310255   32398 main.go:141] libmachine: () Calling .GetVersion
	I0914 17:12:21.310741   32398 main.go:141] libmachine: Using API Version  1
	I0914 17:12:21.310764   32398 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 17:12:21.311094   32398 main.go:141] libmachine: () Calling .GetMachineName
	I0914 17:12:21.311330   32398 main.go:141] libmachine: (ha-929592-m04) Calling .DriverName
	I0914 17:12:21.311526   32398 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0914 17:12:21.311547   32398 main.go:141] libmachine: (ha-929592-m04) Calling .GetSSHHostname
	I0914 17:12:21.314092   32398 main.go:141] libmachine: (ha-929592-m04) DBG | domain ha-929592-m04 has defined MAC address 52:54:00:7a:18:a1 in network mk-ha-929592
	I0914 17:12:21.314486   32398 main.go:141] libmachine: (ha-929592-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:18:a1", ip: ""} in network mk-ha-929592: {Iface:virbr1 ExpiryTime:2024-09-14 18:08:29 +0000 UTC Type:0 Mac:52:54:00:7a:18:a1 Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:ha-929592-m04 Clientid:01:52:54:00:7a:18:a1}
	I0914 17:12:21.314508   32398 main.go:141] libmachine: (ha-929592-m04) DBG | domain ha-929592-m04 has defined IP address 192.168.39.51 and MAC address 52:54:00:7a:18:a1 in network mk-ha-929592
	I0914 17:12:21.314638   32398 main.go:141] libmachine: (ha-929592-m04) Calling .GetSSHPort
	I0914 17:12:21.314784   32398 main.go:141] libmachine: (ha-929592-m04) Calling .GetSSHKeyPath
	I0914 17:12:21.314928   32398 main.go:141] libmachine: (ha-929592-m04) Calling .GetSSHUsername
	I0914 17:12:21.315028   32398 sshutil.go:53] new ssh client: &{IP:192.168.39.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/ha-929592-m04/id_rsa Username:docker}
	I0914 17:12:21.396983   32398 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 17:12:21.413680   32398 status.go:257] ha-929592-m04 status: &{Name:ha-929592-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
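Note on the trace above: each "minikube status" pass probes every node over SSH with the same sequence of checks, all visible in the log: disk usage on /var (df -h /var | awk 'NR==2{print $5}'), the kubelet unit (sudo systemctl is-active --quiet service kubelet), the kube-apiserver process (sudo pgrep -xnf kube-apiserver.*minikube.*), and finally the shared healthz endpoint at https://192.168.39.254:8443/healthz. The Go sketch below reproduces that sequence against a single node with golang.org/x/crypto/ssh; it is an illustration of the checks, not minikube's status implementation, and the key path and addresses are simply values copied from the log.

	// probe_sketch.go -- illustrative only: reproduces the per-node checks seen
	// in the status log above; names, key path and addresses are taken from the
	// log or invented for the example, this is not minikube's status code.
	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"os"
		"time"

		"golang.org/x/crypto/ssh"
	)

	// runSSH executes one command on the node and returns its combined output;
	// a non-nil error also covers a non-zero exit status.
	func runSSH(client *ssh.Client, cmd string) (string, error) {
		sess, err := client.NewSession()
		if err != nil {
			return "", err
		}
		defer sess.Close()
		out, err := sess.CombinedOutput(cmd)
		return string(out), err
	}

	func main() {
		key, err := os.ReadFile("/home/jenkins/minikube-integration/19643-8806/.minikube/machines/ha-929592-m03/id_rsa")
		if err != nil {
			panic(err)
		}
		signer, err := ssh.ParsePrivateKey(key)
		if err != nil {
			panic(err)
		}
		cfg := &ssh.ClientConfig{
			User:            "docker",
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // test VM, no known_hosts
			Timeout:         10 * time.Second,
		}

		// A failure at this dial is exactly what the log reports for ha-929592-m02.
		client, err := ssh.Dial("tcp", "192.168.39.39:22", cfg)
		if err != nil {
			fmt.Println("host: Error:", err)
			return
		}
		defer client.Close()

		// 1. Storage capacity of /var.
		if usage, err := runSSH(client, `sh -c "df -h /var | awk 'NR==2{print $5}'"`); err == nil {
			fmt.Print("/var usage: ", usage)
		}

		// 2. Kubelet: exit status 0 from is-active means the unit is running.
		if _, err := runSSH(client, "sudo systemctl is-active --quiet service kubelet"); err == nil {
			fmt.Println("kubelet: Running")
		} else {
			fmt.Println("kubelet: Stopped")
		}

		// 3. apiserver process on the node, then the shared healthz endpoint.
		if _, err := runSSH(client, "sudo pgrep -xnf kube-apiserver.*minikube.*"); err != nil {
			fmt.Println("apiserver: Stopped")
			return
		}
		httpClient := &http.Client{
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
			Timeout:   5 * time.Second,
		}
		resp, err := httpClient.Get("https://192.168.39.254:8443/healthz")
		if err != nil {
			fmt.Println("apiserver: healthz unreachable:", err)
			return
		}
		defer resp.Body.Close()
		fmt.Println("apiserver healthz:", resp.Status)
	}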
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-929592 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-929592 status -v=7 --alsologtostderr: exit status 3 (4.095821123s)

                                                
                                                
-- stdout --
	ha-929592
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-929592-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-929592-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-929592-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0914 17:12:23.654321   32498 out.go:345] Setting OutFile to fd 1 ...
	I0914 17:12:23.654434   32498 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 17:12:23.654442   32498 out.go:358] Setting ErrFile to fd 2...
	I0914 17:12:23.654447   32498 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 17:12:23.654601   32498 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19643-8806/.minikube/bin
	I0914 17:12:23.654751   32498 out.go:352] Setting JSON to false
	I0914 17:12:23.654780   32498 mustload.go:65] Loading cluster: ha-929592
	I0914 17:12:23.654828   32498 notify.go:220] Checking for updates...
	I0914 17:12:23.655285   32498 config.go:182] Loaded profile config "ha-929592": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0914 17:12:23.655309   32498 status.go:255] checking status of ha-929592 ...
	I0914 17:12:23.655740   32498 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 17:12:23.655793   32498 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 17:12:23.670855   32498 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36389
	I0914 17:12:23.671326   32498 main.go:141] libmachine: () Calling .GetVersion
	I0914 17:12:23.671792   32498 main.go:141] libmachine: Using API Version  1
	I0914 17:12:23.671811   32498 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 17:12:23.672158   32498 main.go:141] libmachine: () Calling .GetMachineName
	I0914 17:12:23.672361   32498 main.go:141] libmachine: (ha-929592) Calling .GetState
	I0914 17:12:23.673884   32498 status.go:330] ha-929592 host status = "Running" (err=<nil>)
	I0914 17:12:23.673900   32498 host.go:66] Checking if "ha-929592" exists ...
	I0914 17:12:23.674196   32498 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 17:12:23.674228   32498 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 17:12:23.689443   32498 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46127
	I0914 17:12:23.689921   32498 main.go:141] libmachine: () Calling .GetVersion
	I0914 17:12:23.690411   32498 main.go:141] libmachine: Using API Version  1
	I0914 17:12:23.690431   32498 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 17:12:23.690763   32498 main.go:141] libmachine: () Calling .GetMachineName
	I0914 17:12:23.690926   32498 main.go:141] libmachine: (ha-929592) Calling .GetIP
	I0914 17:12:23.693731   32498 main.go:141] libmachine: (ha-929592) DBG | domain ha-929592 has defined MAC address 52:54:00:5c:cb:09 in network mk-ha-929592
	I0914 17:12:23.694178   32498 main.go:141] libmachine: (ha-929592) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:cb:09", ip: ""} in network mk-ha-929592: {Iface:virbr1 ExpiryTime:2024-09-14 18:05:06 +0000 UTC Type:0 Mac:52:54:00:5c:cb:09 Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:ha-929592 Clientid:01:52:54:00:5c:cb:09}
	I0914 17:12:23.694203   32498 main.go:141] libmachine: (ha-929592) DBG | domain ha-929592 has defined IP address 192.168.39.54 and MAC address 52:54:00:5c:cb:09 in network mk-ha-929592
	I0914 17:12:23.694361   32498 host.go:66] Checking if "ha-929592" exists ...
	I0914 17:12:23.694650   32498 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 17:12:23.694683   32498 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 17:12:23.709710   32498 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35279
	I0914 17:12:23.710081   32498 main.go:141] libmachine: () Calling .GetVersion
	I0914 17:12:23.710542   32498 main.go:141] libmachine: Using API Version  1
	I0914 17:12:23.710565   32498 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 17:12:23.710890   32498 main.go:141] libmachine: () Calling .GetMachineName
	I0914 17:12:23.711061   32498 main.go:141] libmachine: (ha-929592) Calling .DriverName
	I0914 17:12:23.711253   32498 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0914 17:12:23.711299   32498 main.go:141] libmachine: (ha-929592) Calling .GetSSHHostname
	I0914 17:12:23.714013   32498 main.go:141] libmachine: (ha-929592) DBG | domain ha-929592 has defined MAC address 52:54:00:5c:cb:09 in network mk-ha-929592
	I0914 17:12:23.714428   32498 main.go:141] libmachine: (ha-929592) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:cb:09", ip: ""} in network mk-ha-929592: {Iface:virbr1 ExpiryTime:2024-09-14 18:05:06 +0000 UTC Type:0 Mac:52:54:00:5c:cb:09 Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:ha-929592 Clientid:01:52:54:00:5c:cb:09}
	I0914 17:12:23.714454   32498 main.go:141] libmachine: (ha-929592) DBG | domain ha-929592 has defined IP address 192.168.39.54 and MAC address 52:54:00:5c:cb:09 in network mk-ha-929592
	I0914 17:12:23.714565   32498 main.go:141] libmachine: (ha-929592) Calling .GetSSHPort
	I0914 17:12:23.714708   32498 main.go:141] libmachine: (ha-929592) Calling .GetSSHKeyPath
	I0914 17:12:23.714850   32498 main.go:141] libmachine: (ha-929592) Calling .GetSSHUsername
	I0914 17:12:23.714981   32498 sshutil.go:53] new ssh client: &{IP:192.168.39.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/ha-929592/id_rsa Username:docker}
	I0914 17:12:23.797825   32498 ssh_runner.go:195] Run: systemctl --version
	I0914 17:12:23.804339   32498 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 17:12:23.819717   32498 kubeconfig.go:125] found "ha-929592" server: "https://192.168.39.254:8443"
	I0914 17:12:23.819762   32498 api_server.go:166] Checking apiserver status ...
	I0914 17:12:23.819814   32498 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 17:12:23.834680   32498 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1079/cgroup
	W0914 17:12:23.844847   32498 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1079/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0914 17:12:23.844919   32498 ssh_runner.go:195] Run: ls
	I0914 17:12:23.849646   32498 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0914 17:12:23.853737   32498 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0914 17:12:23.853760   32498 status.go:422] ha-929592 apiserver status = Running (err=<nil>)
	I0914 17:12:23.853768   32498 status.go:257] ha-929592 status: &{Name:ha-929592 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0914 17:12:23.853784   32498 status.go:255] checking status of ha-929592-m02 ...
	I0914 17:12:23.854075   32498 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 17:12:23.854108   32498 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 17:12:23.868912   32498 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33065
	I0914 17:12:23.869326   32498 main.go:141] libmachine: () Calling .GetVersion
	I0914 17:12:23.869787   32498 main.go:141] libmachine: Using API Version  1
	I0914 17:12:23.869807   32498 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 17:12:23.870170   32498 main.go:141] libmachine: () Calling .GetMachineName
	I0914 17:12:23.870359   32498 main.go:141] libmachine: (ha-929592-m02) Calling .GetState
	I0914 17:12:23.872128   32498 status.go:330] ha-929592-m02 host status = "Running" (err=<nil>)
	I0914 17:12:23.872145   32498 host.go:66] Checking if "ha-929592-m02" exists ...
	I0914 17:12:23.872451   32498 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 17:12:23.872491   32498 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 17:12:23.887494   32498 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33793
	I0914 17:12:23.887868   32498 main.go:141] libmachine: () Calling .GetVersion
	I0914 17:12:23.888356   32498 main.go:141] libmachine: Using API Version  1
	I0914 17:12:23.888383   32498 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 17:12:23.888690   32498 main.go:141] libmachine: () Calling .GetMachineName
	I0914 17:12:23.888865   32498 main.go:141] libmachine: (ha-929592-m02) Calling .GetIP
	I0914 17:12:23.891644   32498 main.go:141] libmachine: (ha-929592-m02) DBG | domain ha-929592-m02 has defined MAC address 52:54:00:23:9e:43 in network mk-ha-929592
	I0914 17:12:23.892065   32498 main.go:141] libmachine: (ha-929592-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:9e:43", ip: ""} in network mk-ha-929592: {Iface:virbr1 ExpiryTime:2024-09-14 18:05:49 +0000 UTC Type:0 Mac:52:54:00:23:9e:43 Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:ha-929592-m02 Clientid:01:52:54:00:23:9e:43}
	I0914 17:12:23.892092   32498 main.go:141] libmachine: (ha-929592-m02) DBG | domain ha-929592-m02 has defined IP address 192.168.39.148 and MAC address 52:54:00:23:9e:43 in network mk-ha-929592
	I0914 17:12:23.892252   32498 host.go:66] Checking if "ha-929592-m02" exists ...
	I0914 17:12:23.892587   32498 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 17:12:23.892635   32498 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 17:12:23.907251   32498 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44963
	I0914 17:12:23.907815   32498 main.go:141] libmachine: () Calling .GetVersion
	I0914 17:12:23.908245   32498 main.go:141] libmachine: Using API Version  1
	I0914 17:12:23.908267   32498 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 17:12:23.908562   32498 main.go:141] libmachine: () Calling .GetMachineName
	I0914 17:12:23.908718   32498 main.go:141] libmachine: (ha-929592-m02) Calling .DriverName
	I0914 17:12:23.908863   32498 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0914 17:12:23.908893   32498 main.go:141] libmachine: (ha-929592-m02) Calling .GetSSHHostname
	I0914 17:12:23.911939   32498 main.go:141] libmachine: (ha-929592-m02) DBG | domain ha-929592-m02 has defined MAC address 52:54:00:23:9e:43 in network mk-ha-929592
	I0914 17:12:23.912404   32498 main.go:141] libmachine: (ha-929592-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:9e:43", ip: ""} in network mk-ha-929592: {Iface:virbr1 ExpiryTime:2024-09-14 18:05:49 +0000 UTC Type:0 Mac:52:54:00:23:9e:43 Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:ha-929592-m02 Clientid:01:52:54:00:23:9e:43}
	I0914 17:12:23.912431   32498 main.go:141] libmachine: (ha-929592-m02) DBG | domain ha-929592-m02 has defined IP address 192.168.39.148 and MAC address 52:54:00:23:9e:43 in network mk-ha-929592
	I0914 17:12:23.912583   32498 main.go:141] libmachine: (ha-929592-m02) Calling .GetSSHPort
	I0914 17:12:23.912753   32498 main.go:141] libmachine: (ha-929592-m02) Calling .GetSSHKeyPath
	I0914 17:12:23.912870   32498 main.go:141] libmachine: (ha-929592-m02) Calling .GetSSHUsername
	I0914 17:12:23.912996   32498 sshutil.go:53] new ssh client: &{IP:192.168.39.148 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/ha-929592-m02/id_rsa Username:docker}
	W0914 17:12:24.134410   32498 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.148:22: connect: no route to host
	I0914 17:12:24.134452   32498 retry.go:31] will retry after 159.16389ms: dial tcp 192.168.39.148:22: connect: no route to host
	W0914 17:12:27.362470   32498 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.148:22: connect: no route to host
	W0914 17:12:27.362586   32498 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.148:22: connect: no route to host
	E0914 17:12:27.362608   32498 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.148:22: connect: no route to host
	I0914 17:12:27.362616   32498 status.go:257] ha-929592-m02 status: &{Name:ha-929592-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0914 17:12:27.362635   32498 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.148:22: connect: no route to host
	I0914 17:12:27.362645   32498 status.go:255] checking status of ha-929592-m03 ...
	I0914 17:12:27.362944   32498 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 17:12:27.362991   32498 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 17:12:27.378590   32498 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45965
	I0914 17:12:27.379105   32498 main.go:141] libmachine: () Calling .GetVersion
	I0914 17:12:27.379590   32498 main.go:141] libmachine: Using API Version  1
	I0914 17:12:27.379614   32498 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 17:12:27.379953   32498 main.go:141] libmachine: () Calling .GetMachineName
	I0914 17:12:27.380160   32498 main.go:141] libmachine: (ha-929592-m03) Calling .GetState
	I0914 17:12:27.381777   32498 status.go:330] ha-929592-m03 host status = "Running" (err=<nil>)
	I0914 17:12:27.381801   32498 host.go:66] Checking if "ha-929592-m03" exists ...
	I0914 17:12:27.382108   32498 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 17:12:27.382151   32498 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 17:12:27.396631   32498 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44305
	I0914 17:12:27.397135   32498 main.go:141] libmachine: () Calling .GetVersion
	I0914 17:12:27.397673   32498 main.go:141] libmachine: Using API Version  1
	I0914 17:12:27.397693   32498 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 17:12:27.398013   32498 main.go:141] libmachine: () Calling .GetMachineName
	I0914 17:12:27.398236   32498 main.go:141] libmachine: (ha-929592-m03) Calling .GetIP
	I0914 17:12:27.401313   32498 main.go:141] libmachine: (ha-929592-m03) DBG | domain ha-929592-m03 has defined MAC address 52:54:00:49:df:f1 in network mk-ha-929592
	I0914 17:12:27.401748   32498 main.go:141] libmachine: (ha-929592-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:df:f1", ip: ""} in network mk-ha-929592: {Iface:virbr1 ExpiryTime:2024-09-14 18:07:04 +0000 UTC Type:0 Mac:52:54:00:49:df:f1 Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:ha-929592-m03 Clientid:01:52:54:00:49:df:f1}
	I0914 17:12:27.401772   32498 main.go:141] libmachine: (ha-929592-m03) DBG | domain ha-929592-m03 has defined IP address 192.168.39.39 and MAC address 52:54:00:49:df:f1 in network mk-ha-929592
	I0914 17:12:27.401919   32498 host.go:66] Checking if "ha-929592-m03" exists ...
	I0914 17:12:27.402246   32498 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 17:12:27.402284   32498 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 17:12:27.417010   32498 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37623
	I0914 17:12:27.417439   32498 main.go:141] libmachine: () Calling .GetVersion
	I0914 17:12:27.417914   32498 main.go:141] libmachine: Using API Version  1
	I0914 17:12:27.417943   32498 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 17:12:27.418253   32498 main.go:141] libmachine: () Calling .GetMachineName
	I0914 17:12:27.418444   32498 main.go:141] libmachine: (ha-929592-m03) Calling .DriverName
	I0914 17:12:27.418644   32498 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0914 17:12:27.418671   32498 main.go:141] libmachine: (ha-929592-m03) Calling .GetSSHHostname
	I0914 17:12:27.421254   32498 main.go:141] libmachine: (ha-929592-m03) DBG | domain ha-929592-m03 has defined MAC address 52:54:00:49:df:f1 in network mk-ha-929592
	I0914 17:12:27.421721   32498 main.go:141] libmachine: (ha-929592-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:df:f1", ip: ""} in network mk-ha-929592: {Iface:virbr1 ExpiryTime:2024-09-14 18:07:04 +0000 UTC Type:0 Mac:52:54:00:49:df:f1 Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:ha-929592-m03 Clientid:01:52:54:00:49:df:f1}
	I0914 17:12:27.421749   32498 main.go:141] libmachine: (ha-929592-m03) DBG | domain ha-929592-m03 has defined IP address 192.168.39.39 and MAC address 52:54:00:49:df:f1 in network mk-ha-929592
	I0914 17:12:27.421914   32498 main.go:141] libmachine: (ha-929592-m03) Calling .GetSSHPort
	I0914 17:12:27.422090   32498 main.go:141] libmachine: (ha-929592-m03) Calling .GetSSHKeyPath
	I0914 17:12:27.422258   32498 main.go:141] libmachine: (ha-929592-m03) Calling .GetSSHUsername
	I0914 17:12:27.422418   32498 sshutil.go:53] new ssh client: &{IP:192.168.39.39 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/ha-929592-m03/id_rsa Username:docker}
	I0914 17:12:27.505974   32498 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 17:12:27.521865   32498 kubeconfig.go:125] found "ha-929592" server: "https://192.168.39.254:8443"
	I0914 17:12:27.521894   32498 api_server.go:166] Checking apiserver status ...
	I0914 17:12:27.521926   32498 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 17:12:27.536128   32498 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1387/cgroup
	W0914 17:12:27.545396   32498 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1387/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0914 17:12:27.545446   32498 ssh_runner.go:195] Run: ls
	I0914 17:12:27.549503   32498 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0914 17:12:27.554451   32498 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0914 17:12:27.554474   32498 status.go:422] ha-929592-m03 apiserver status = Running (err=<nil>)
	I0914 17:12:27.554483   32498 status.go:257] ha-929592-m03 status: &{Name:ha-929592-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0914 17:12:27.554501   32498 status.go:255] checking status of ha-929592-m04 ...
	I0914 17:12:27.554844   32498 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 17:12:27.554880   32498 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 17:12:27.570103   32498 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36039
	I0914 17:12:27.570634   32498 main.go:141] libmachine: () Calling .GetVersion
	I0914 17:12:27.571135   32498 main.go:141] libmachine: Using API Version  1
	I0914 17:12:27.571153   32498 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 17:12:27.571496   32498 main.go:141] libmachine: () Calling .GetMachineName
	I0914 17:12:27.571688   32498 main.go:141] libmachine: (ha-929592-m04) Calling .GetState
	I0914 17:12:27.573350   32498 status.go:330] ha-929592-m04 host status = "Running" (err=<nil>)
	I0914 17:12:27.573368   32498 host.go:66] Checking if "ha-929592-m04" exists ...
	I0914 17:12:27.573669   32498 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 17:12:27.573706   32498 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 17:12:27.588548   32498 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42521
	I0914 17:12:27.588956   32498 main.go:141] libmachine: () Calling .GetVersion
	I0914 17:12:27.589379   32498 main.go:141] libmachine: Using API Version  1
	I0914 17:12:27.589401   32498 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 17:12:27.589721   32498 main.go:141] libmachine: () Calling .GetMachineName
	I0914 17:12:27.589901   32498 main.go:141] libmachine: (ha-929592-m04) Calling .GetIP
	I0914 17:12:27.593098   32498 main.go:141] libmachine: (ha-929592-m04) DBG | domain ha-929592-m04 has defined MAC address 52:54:00:7a:18:a1 in network mk-ha-929592
	I0914 17:12:27.593571   32498 main.go:141] libmachine: (ha-929592-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:18:a1", ip: ""} in network mk-ha-929592: {Iface:virbr1 ExpiryTime:2024-09-14 18:08:29 +0000 UTC Type:0 Mac:52:54:00:7a:18:a1 Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:ha-929592-m04 Clientid:01:52:54:00:7a:18:a1}
	I0914 17:12:27.593604   32498 main.go:141] libmachine: (ha-929592-m04) DBG | domain ha-929592-m04 has defined IP address 192.168.39.51 and MAC address 52:54:00:7a:18:a1 in network mk-ha-929592
	I0914 17:12:27.593769   32498 host.go:66] Checking if "ha-929592-m04" exists ...
	I0914 17:12:27.594195   32498 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 17:12:27.594244   32498 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 17:12:27.609737   32498 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32805
	I0914 17:12:27.610206   32498 main.go:141] libmachine: () Calling .GetVersion
	I0914 17:12:27.610662   32498 main.go:141] libmachine: Using API Version  1
	I0914 17:12:27.610685   32498 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 17:12:27.610957   32498 main.go:141] libmachine: () Calling .GetMachineName
	I0914 17:12:27.611158   32498 main.go:141] libmachine: (ha-929592-m04) Calling .DriverName
	I0914 17:12:27.611338   32498 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0914 17:12:27.611356   32498 main.go:141] libmachine: (ha-929592-m04) Calling .GetSSHHostname
	I0914 17:12:27.614384   32498 main.go:141] libmachine: (ha-929592-m04) DBG | domain ha-929592-m04 has defined MAC address 52:54:00:7a:18:a1 in network mk-ha-929592
	I0914 17:12:27.614836   32498 main.go:141] libmachine: (ha-929592-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:18:a1", ip: ""} in network mk-ha-929592: {Iface:virbr1 ExpiryTime:2024-09-14 18:08:29 +0000 UTC Type:0 Mac:52:54:00:7a:18:a1 Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:ha-929592-m04 Clientid:01:52:54:00:7a:18:a1}
	I0914 17:12:27.614861   32498 main.go:141] libmachine: (ha-929592-m04) DBG | domain ha-929592-m04 has defined IP address 192.168.39.51 and MAC address 52:54:00:7a:18:a1 in network mk-ha-929592
	I0914 17:12:27.615021   32498 main.go:141] libmachine: (ha-929592-m04) Calling .GetSSHPort
	I0914 17:12:27.615186   32498 main.go:141] libmachine: (ha-929592-m04) Calling .GetSSHKeyPath
	I0914 17:12:27.615322   32498 main.go:141] libmachine: (ha-929592-m04) Calling .GetSSHUsername
	I0914 17:12:27.615461   32498 sshutil.go:53] new ssh client: &{IP:192.168.39.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/ha-929592-m04/id_rsa Username:docker}
	I0914 17:12:27.692844   32498 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 17:12:27.706941   32498 status.go:257] ha-929592-m04 status: &{Name:ha-929592-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
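For ha-929592-m02 the probe never gets that far: the SSH dial to 192.168.39.148:22 fails with "no route to host", sshutil retries once after roughly 150ms, and the node is then reported as Host:Error with kubelet and apiserver Nonexistent, which is what makes each status invocation exit with status 3. A minimal, hypothetical sketch of that dial-and-retry shape (not minikube's retry package):

	// dialretry_sketch.go -- hypothetical illustration of the "dial failure
	// (will retry)" pattern in the log; not minikube's retry implementation.
	package main

	import (
		"fmt"
		"net"
		"time"
	)

	// dialWithRetry tries a TCP connection, sleeping briefly between attempts,
	// and returns the last error once maxAttempts is exhausted.
	func dialWithRetry(addr string, maxAttempts int, delay time.Duration) (net.Conn, error) {
		var lastErr error
		for attempt := 1; attempt <= maxAttempts; attempt++ {
			conn, err := net.DialTimeout("tcp", addr, 3*time.Second)
			if err == nil {
				return conn, nil
			}
			lastErr = err
			fmt.Printf("dial failure (will retry after %v): %v\n", delay, err)
			time.Sleep(delay)
		}
		return nil, fmt.Errorf("giving up after %d attempts: %w", maxAttempts, lastErr)
	}

	func main() {
		// 192.168.39.148:22 is the stopped secondary control plane, ha-929592-m02.
		conn, err := dialWithRetry("192.168.39.148:22", 2, 150*time.Millisecond)
		if err != nil {
			// This is the point at which status reports Host:Error and the
			// command as a whole exits with status 3.
			fmt.Println("host: Error:", err)
			return
		}
		conn.Close()
		fmt.Println("host: Running")
	}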
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-929592 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-929592 status -v=7 --alsologtostderr: exit status 3 (4.819857543s)

                                                
                                                
-- stdout --
	ha-929592
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-929592-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-929592-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-929592-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0914 17:12:29.247047   32597 out.go:345] Setting OutFile to fd 1 ...
	I0914 17:12:29.247302   32597 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 17:12:29.247312   32597 out.go:358] Setting ErrFile to fd 2...
	I0914 17:12:29.247317   32597 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 17:12:29.247511   32597 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19643-8806/.minikube/bin
	I0914 17:12:29.247682   32597 out.go:352] Setting JSON to false
	I0914 17:12:29.247711   32597 mustload.go:65] Loading cluster: ha-929592
	I0914 17:12:29.247870   32597 notify.go:220] Checking for updates...
	I0914 17:12:29.248105   32597 config.go:182] Loaded profile config "ha-929592": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0914 17:12:29.248120   32597 status.go:255] checking status of ha-929592 ...
	I0914 17:12:29.248502   32597 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 17:12:29.248557   32597 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 17:12:29.266129   32597 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39177
	I0914 17:12:29.266591   32597 main.go:141] libmachine: () Calling .GetVersion
	I0914 17:12:29.267102   32597 main.go:141] libmachine: Using API Version  1
	I0914 17:12:29.267127   32597 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 17:12:29.267478   32597 main.go:141] libmachine: () Calling .GetMachineName
	I0914 17:12:29.267660   32597 main.go:141] libmachine: (ha-929592) Calling .GetState
	I0914 17:12:29.269397   32597 status.go:330] ha-929592 host status = "Running" (err=<nil>)
	I0914 17:12:29.269415   32597 host.go:66] Checking if "ha-929592" exists ...
	I0914 17:12:29.269685   32597 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 17:12:29.269719   32597 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 17:12:29.285232   32597 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42841
	I0914 17:12:29.285665   32597 main.go:141] libmachine: () Calling .GetVersion
	I0914 17:12:29.286256   32597 main.go:141] libmachine: Using API Version  1
	I0914 17:12:29.286284   32597 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 17:12:29.286628   32597 main.go:141] libmachine: () Calling .GetMachineName
	I0914 17:12:29.286820   32597 main.go:141] libmachine: (ha-929592) Calling .GetIP
	I0914 17:12:29.289494   32597 main.go:141] libmachine: (ha-929592) DBG | domain ha-929592 has defined MAC address 52:54:00:5c:cb:09 in network mk-ha-929592
	I0914 17:12:29.289938   32597 main.go:141] libmachine: (ha-929592) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:cb:09", ip: ""} in network mk-ha-929592: {Iface:virbr1 ExpiryTime:2024-09-14 18:05:06 +0000 UTC Type:0 Mac:52:54:00:5c:cb:09 Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:ha-929592 Clientid:01:52:54:00:5c:cb:09}
	I0914 17:12:29.289977   32597 main.go:141] libmachine: (ha-929592) DBG | domain ha-929592 has defined IP address 192.168.39.54 and MAC address 52:54:00:5c:cb:09 in network mk-ha-929592
	I0914 17:12:29.290179   32597 host.go:66] Checking if "ha-929592" exists ...
	I0914 17:12:29.290516   32597 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 17:12:29.290553   32597 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 17:12:29.305399   32597 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44127
	I0914 17:12:29.305857   32597 main.go:141] libmachine: () Calling .GetVersion
	I0914 17:12:29.306332   32597 main.go:141] libmachine: Using API Version  1
	I0914 17:12:29.306355   32597 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 17:12:29.306716   32597 main.go:141] libmachine: () Calling .GetMachineName
	I0914 17:12:29.306885   32597 main.go:141] libmachine: (ha-929592) Calling .DriverName
	I0914 17:12:29.307063   32597 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0914 17:12:29.307082   32597 main.go:141] libmachine: (ha-929592) Calling .GetSSHHostname
	I0914 17:12:29.309842   32597 main.go:141] libmachine: (ha-929592) DBG | domain ha-929592 has defined MAC address 52:54:00:5c:cb:09 in network mk-ha-929592
	I0914 17:12:29.310223   32597 main.go:141] libmachine: (ha-929592) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:cb:09", ip: ""} in network mk-ha-929592: {Iface:virbr1 ExpiryTime:2024-09-14 18:05:06 +0000 UTC Type:0 Mac:52:54:00:5c:cb:09 Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:ha-929592 Clientid:01:52:54:00:5c:cb:09}
	I0914 17:12:29.310255   32597 main.go:141] libmachine: (ha-929592) DBG | domain ha-929592 has defined IP address 192.168.39.54 and MAC address 52:54:00:5c:cb:09 in network mk-ha-929592
	I0914 17:12:29.310361   32597 main.go:141] libmachine: (ha-929592) Calling .GetSSHPort
	I0914 17:12:29.310504   32597 main.go:141] libmachine: (ha-929592) Calling .GetSSHKeyPath
	I0914 17:12:29.310642   32597 main.go:141] libmachine: (ha-929592) Calling .GetSSHUsername
	I0914 17:12:29.310779   32597 sshutil.go:53] new ssh client: &{IP:192.168.39.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/ha-929592/id_rsa Username:docker}
	I0914 17:12:29.397402   32597 ssh_runner.go:195] Run: systemctl --version
	I0914 17:12:29.408253   32597 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 17:12:29.425385   32597 kubeconfig.go:125] found "ha-929592" server: "https://192.168.39.254:8443"
	I0914 17:12:29.425426   32597 api_server.go:166] Checking apiserver status ...
	I0914 17:12:29.425468   32597 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 17:12:29.441652   32597 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1079/cgroup
	W0914 17:12:29.453837   32597 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1079/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0914 17:12:29.453891   32597 ssh_runner.go:195] Run: ls
	I0914 17:12:29.458418   32597 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0914 17:12:29.463177   32597 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0914 17:12:29.463207   32597 status.go:422] ha-929592 apiserver status = Running (err=<nil>)
	I0914 17:12:29.463219   32597 status.go:257] ha-929592 status: &{Name:ha-929592 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0914 17:12:29.463239   32597 status.go:255] checking status of ha-929592-m02 ...
	I0914 17:12:29.463546   32597 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 17:12:29.463587   32597 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 17:12:29.478373   32597 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41899
	I0914 17:12:29.478824   32597 main.go:141] libmachine: () Calling .GetVersion
	I0914 17:12:29.479557   32597 main.go:141] libmachine: Using API Version  1
	I0914 17:12:29.479577   32597 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 17:12:29.479883   32597 main.go:141] libmachine: () Calling .GetMachineName
	I0914 17:12:29.480060   32597 main.go:141] libmachine: (ha-929592-m02) Calling .GetState
	I0914 17:12:29.481704   32597 status.go:330] ha-929592-m02 host status = "Running" (err=<nil>)
	I0914 17:12:29.481720   32597 host.go:66] Checking if "ha-929592-m02" exists ...
	I0914 17:12:29.482027   32597 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 17:12:29.482068   32597 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 17:12:29.498535   32597 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40593
	I0914 17:12:29.498966   32597 main.go:141] libmachine: () Calling .GetVersion
	I0914 17:12:29.499439   32597 main.go:141] libmachine: Using API Version  1
	I0914 17:12:29.499468   32597 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 17:12:29.499793   32597 main.go:141] libmachine: () Calling .GetMachineName
	I0914 17:12:29.499973   32597 main.go:141] libmachine: (ha-929592-m02) Calling .GetIP
	I0914 17:12:29.503187   32597 main.go:141] libmachine: (ha-929592-m02) DBG | domain ha-929592-m02 has defined MAC address 52:54:00:23:9e:43 in network mk-ha-929592
	I0914 17:12:29.503633   32597 main.go:141] libmachine: (ha-929592-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:9e:43", ip: ""} in network mk-ha-929592: {Iface:virbr1 ExpiryTime:2024-09-14 18:05:49 +0000 UTC Type:0 Mac:52:54:00:23:9e:43 Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:ha-929592-m02 Clientid:01:52:54:00:23:9e:43}
	I0914 17:12:29.503659   32597 main.go:141] libmachine: (ha-929592-m02) DBG | domain ha-929592-m02 has defined IP address 192.168.39.148 and MAC address 52:54:00:23:9e:43 in network mk-ha-929592
	I0914 17:12:29.503844   32597 host.go:66] Checking if "ha-929592-m02" exists ...
	I0914 17:12:29.504152   32597 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 17:12:29.504188   32597 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 17:12:29.519606   32597 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35399
	I0914 17:12:29.519993   32597 main.go:141] libmachine: () Calling .GetVersion
	I0914 17:12:29.520375   32597 main.go:141] libmachine: Using API Version  1
	I0914 17:12:29.520408   32597 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 17:12:29.520758   32597 main.go:141] libmachine: () Calling .GetMachineName
	I0914 17:12:29.520952   32597 main.go:141] libmachine: (ha-929592-m02) Calling .DriverName
	I0914 17:12:29.521168   32597 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0914 17:12:29.521189   32597 main.go:141] libmachine: (ha-929592-m02) Calling .GetSSHHostname
	I0914 17:12:29.524053   32597 main.go:141] libmachine: (ha-929592-m02) DBG | domain ha-929592-m02 has defined MAC address 52:54:00:23:9e:43 in network mk-ha-929592
	I0914 17:12:29.524450   32597 main.go:141] libmachine: (ha-929592-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:9e:43", ip: ""} in network mk-ha-929592: {Iface:virbr1 ExpiryTime:2024-09-14 18:05:49 +0000 UTC Type:0 Mac:52:54:00:23:9e:43 Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:ha-929592-m02 Clientid:01:52:54:00:23:9e:43}
	I0914 17:12:29.524478   32597 main.go:141] libmachine: (ha-929592-m02) DBG | domain ha-929592-m02 has defined IP address 192.168.39.148 and MAC address 52:54:00:23:9e:43 in network mk-ha-929592
	I0914 17:12:29.524570   32597 main.go:141] libmachine: (ha-929592-m02) Calling .GetSSHPort
	I0914 17:12:29.524726   32597 main.go:141] libmachine: (ha-929592-m02) Calling .GetSSHKeyPath
	I0914 17:12:29.524837   32597 main.go:141] libmachine: (ha-929592-m02) Calling .GetSSHUsername
	I0914 17:12:29.525022   32597 sshutil.go:53] new ssh client: &{IP:192.168.39.148 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/ha-929592-m02/id_rsa Username:docker}
	W0914 17:12:30.434347   32597 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.148:22: connect: no route to host
	I0914 17:12:30.434388   32597 retry.go:31] will retry after 151.593772ms: dial tcp 192.168.39.148:22: connect: no route to host
	W0914 17:12:33.670430   32597 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.148:22: connect: no route to host
	W0914 17:12:33.670544   32597 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.148:22: connect: no route to host
	E0914 17:12:33.670570   32597 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.148:22: connect: no route to host
	I0914 17:12:33.670579   32597 status.go:257] ha-929592-m02 status: &{Name:ha-929592-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0914 17:12:33.670608   32597 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.148:22: connect: no route to host
	I0914 17:12:33.670622   32597 status.go:255] checking status of ha-929592-m03 ...
	I0914 17:12:33.670967   32597 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 17:12:33.671012   32597 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 17:12:33.686000   32597 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35651
	I0914 17:12:33.686510   32597 main.go:141] libmachine: () Calling .GetVersion
	I0914 17:12:33.687080   32597 main.go:141] libmachine: Using API Version  1
	I0914 17:12:33.687101   32597 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 17:12:33.687396   32597 main.go:141] libmachine: () Calling .GetMachineName
	I0914 17:12:33.687575   32597 main.go:141] libmachine: (ha-929592-m03) Calling .GetState
	I0914 17:12:33.689247   32597 status.go:330] ha-929592-m03 host status = "Running" (err=<nil>)
	I0914 17:12:33.689267   32597 host.go:66] Checking if "ha-929592-m03" exists ...
	I0914 17:12:33.689682   32597 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 17:12:33.689729   32597 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 17:12:33.704436   32597 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42851
	I0914 17:12:33.704854   32597 main.go:141] libmachine: () Calling .GetVersion
	I0914 17:12:33.705293   32597 main.go:141] libmachine: Using API Version  1
	I0914 17:12:33.705318   32597 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 17:12:33.705645   32597 main.go:141] libmachine: () Calling .GetMachineName
	I0914 17:12:33.705832   32597 main.go:141] libmachine: (ha-929592-m03) Calling .GetIP
	I0914 17:12:33.708758   32597 main.go:141] libmachine: (ha-929592-m03) DBG | domain ha-929592-m03 has defined MAC address 52:54:00:49:df:f1 in network mk-ha-929592
	I0914 17:12:33.709176   32597 main.go:141] libmachine: (ha-929592-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:df:f1", ip: ""} in network mk-ha-929592: {Iface:virbr1 ExpiryTime:2024-09-14 18:07:04 +0000 UTC Type:0 Mac:52:54:00:49:df:f1 Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:ha-929592-m03 Clientid:01:52:54:00:49:df:f1}
	I0914 17:12:33.709193   32597 main.go:141] libmachine: (ha-929592-m03) DBG | domain ha-929592-m03 has defined IP address 192.168.39.39 and MAC address 52:54:00:49:df:f1 in network mk-ha-929592
	I0914 17:12:33.709342   32597 host.go:66] Checking if "ha-929592-m03" exists ...
	I0914 17:12:33.709757   32597 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 17:12:33.709804   32597 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 17:12:33.724964   32597 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44873
	I0914 17:12:33.725440   32597 main.go:141] libmachine: () Calling .GetVersion
	I0914 17:12:33.725923   32597 main.go:141] libmachine: Using API Version  1
	I0914 17:12:33.725939   32597 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 17:12:33.726264   32597 main.go:141] libmachine: () Calling .GetMachineName
	I0914 17:12:33.726457   32597 main.go:141] libmachine: (ha-929592-m03) Calling .DriverName
	I0914 17:12:33.726647   32597 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0914 17:12:33.726668   32597 main.go:141] libmachine: (ha-929592-m03) Calling .GetSSHHostname
	I0914 17:12:33.729979   32597 main.go:141] libmachine: (ha-929592-m03) DBG | domain ha-929592-m03 has defined MAC address 52:54:00:49:df:f1 in network mk-ha-929592
	I0914 17:12:33.730392   32597 main.go:141] libmachine: (ha-929592-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:df:f1", ip: ""} in network mk-ha-929592: {Iface:virbr1 ExpiryTime:2024-09-14 18:07:04 +0000 UTC Type:0 Mac:52:54:00:49:df:f1 Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:ha-929592-m03 Clientid:01:52:54:00:49:df:f1}
	I0914 17:12:33.730413   32597 main.go:141] libmachine: (ha-929592-m03) DBG | domain ha-929592-m03 has defined IP address 192.168.39.39 and MAC address 52:54:00:49:df:f1 in network mk-ha-929592
	I0914 17:12:33.730736   32597 main.go:141] libmachine: (ha-929592-m03) Calling .GetSSHPort
	I0914 17:12:33.730908   32597 main.go:141] libmachine: (ha-929592-m03) Calling .GetSSHKeyPath
	I0914 17:12:33.731050   32597 main.go:141] libmachine: (ha-929592-m03) Calling .GetSSHUsername
	I0914 17:12:33.731199   32597 sshutil.go:53] new ssh client: &{IP:192.168.39.39 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/ha-929592-m03/id_rsa Username:docker}
	I0914 17:12:33.822592   32597 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 17:12:33.839540   32597 kubeconfig.go:125] found "ha-929592" server: "https://192.168.39.254:8443"
	I0914 17:12:33.839569   32597 api_server.go:166] Checking apiserver status ...
	I0914 17:12:33.839609   32597 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 17:12:33.853926   32597 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1387/cgroup
	W0914 17:12:33.863352   32597 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1387/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0914 17:12:33.863403   32597 ssh_runner.go:195] Run: ls
	I0914 17:12:33.867497   32597 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0914 17:12:33.873347   32597 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0914 17:12:33.873377   32597 status.go:422] ha-929592-m03 apiserver status = Running (err=<nil>)
	I0914 17:12:33.873387   32597 status.go:257] ha-929592-m03 status: &{Name:ha-929592-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0914 17:12:33.873409   32597 status.go:255] checking status of ha-929592-m04 ...
	I0914 17:12:33.873804   32597 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 17:12:33.873848   32597 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 17:12:33.889312   32597 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38387
	I0914 17:12:33.889751   32597 main.go:141] libmachine: () Calling .GetVersion
	I0914 17:12:33.890334   32597 main.go:141] libmachine: Using API Version  1
	I0914 17:12:33.890362   32597 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 17:12:33.890659   32597 main.go:141] libmachine: () Calling .GetMachineName
	I0914 17:12:33.890826   32597 main.go:141] libmachine: (ha-929592-m04) Calling .GetState
	I0914 17:12:33.892436   32597 status.go:330] ha-929592-m04 host status = "Running" (err=<nil>)
	I0914 17:12:33.892453   32597 host.go:66] Checking if "ha-929592-m04" exists ...
	I0914 17:12:33.892814   32597 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 17:12:33.892856   32597 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 17:12:33.907401   32597 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44563
	I0914 17:12:33.907822   32597 main.go:141] libmachine: () Calling .GetVersion
	I0914 17:12:33.908285   32597 main.go:141] libmachine: Using API Version  1
	I0914 17:12:33.908301   32597 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 17:12:33.908618   32597 main.go:141] libmachine: () Calling .GetMachineName
	I0914 17:12:33.908832   32597 main.go:141] libmachine: (ha-929592-m04) Calling .GetIP
	I0914 17:12:33.911686   32597 main.go:141] libmachine: (ha-929592-m04) DBG | domain ha-929592-m04 has defined MAC address 52:54:00:7a:18:a1 in network mk-ha-929592
	I0914 17:12:33.912053   32597 main.go:141] libmachine: (ha-929592-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:18:a1", ip: ""} in network mk-ha-929592: {Iface:virbr1 ExpiryTime:2024-09-14 18:08:29 +0000 UTC Type:0 Mac:52:54:00:7a:18:a1 Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:ha-929592-m04 Clientid:01:52:54:00:7a:18:a1}
	I0914 17:12:33.912081   32597 main.go:141] libmachine: (ha-929592-m04) DBG | domain ha-929592-m04 has defined IP address 192.168.39.51 and MAC address 52:54:00:7a:18:a1 in network mk-ha-929592
	I0914 17:12:33.912224   32597 host.go:66] Checking if "ha-929592-m04" exists ...
	I0914 17:12:33.912531   32597 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 17:12:33.912582   32597 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 17:12:33.927081   32597 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46733
	I0914 17:12:33.927586   32597 main.go:141] libmachine: () Calling .GetVersion
	I0914 17:12:33.928077   32597 main.go:141] libmachine: Using API Version  1
	I0914 17:12:33.928096   32597 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 17:12:33.928410   32597 main.go:141] libmachine: () Calling .GetMachineName
	I0914 17:12:33.928577   32597 main.go:141] libmachine: (ha-929592-m04) Calling .DriverName
	I0914 17:12:33.928745   32597 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0914 17:12:33.928761   32597 main.go:141] libmachine: (ha-929592-m04) Calling .GetSSHHostname
	I0914 17:12:33.931560   32597 main.go:141] libmachine: (ha-929592-m04) DBG | domain ha-929592-m04 has defined MAC address 52:54:00:7a:18:a1 in network mk-ha-929592
	I0914 17:12:33.931957   32597 main.go:141] libmachine: (ha-929592-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:18:a1", ip: ""} in network mk-ha-929592: {Iface:virbr1 ExpiryTime:2024-09-14 18:08:29 +0000 UTC Type:0 Mac:52:54:00:7a:18:a1 Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:ha-929592-m04 Clientid:01:52:54:00:7a:18:a1}
	I0914 17:12:33.931982   32597 main.go:141] libmachine: (ha-929592-m04) DBG | domain ha-929592-m04 has defined IP address 192.168.39.51 and MAC address 52:54:00:7a:18:a1 in network mk-ha-929592
	I0914 17:12:33.932159   32597 main.go:141] libmachine: (ha-929592-m04) Calling .GetSSHPort
	I0914 17:12:33.932341   32597 main.go:141] libmachine: (ha-929592-m04) Calling .GetSSHKeyPath
	I0914 17:12:33.932521   32597 main.go:141] libmachine: (ha-929592-m04) Calling .GetSSHUsername
	I0914 17:12:33.932670   32597 sshutil.go:53] new ssh client: &{IP:192.168.39.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/ha-929592-m04/id_rsa Username:docker}
	I0914 17:12:34.009256   32597 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 17:12:34.024011   32597 status.go:257] ha-929592-m04 status: &{Name:ha-929592-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
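The recurring "unable to find freezer cgroup" warning is consistent with a cgroup v2 guest: there /proc/<pid>/cgroup contains only a single unified "0::<path>" entry and no freezer controller line, so the egrep ^[0-9]+:freezer: probe exits with status 1 and the check falls back to the /healthz request, as the log shows. A small illustrative helper (hypothetical name, assuming the standard /proc cgroup layout) for telling the two cases apart:

	// freezer_sketch.go -- illustrative helper (hypothetical name): report the
	// v1 freezer cgroup path of a pid, or detect the unified cgroup v2 layout
	// in which the egrep in the log above exits with status 1.
	package main

	import (
		"bufio"
		"fmt"
		"os"
		"strings"
	)

	// freezerPath parses /proc/<pid>/cgroup, whose lines have the form
	// "<hierarchy-id>:<controllers>:<path>", and returns the freezer path
	// if a v1 freezer controller is mounted.
	func freezerPath(pid int) (string, error) {
		f, err := os.Open(fmt.Sprintf("/proc/%d/cgroup", pid))
		if err != nil {
			return "", err
		}
		defer f.Close()

		scanner := bufio.NewScanner(f)
		for scanner.Scan() {
			parts := strings.SplitN(scanner.Text(), ":", 3)
			if len(parts) == 3 && strings.Contains(parts[1], "freezer") {
				return parts[2], nil
			}
		}
		return "", scanner.Err()
	}

	func main() {
		path, err := freezerPath(os.Getpid())
		if err != nil {
			fmt.Println("error:", err)
			return
		}
		if path == "" {
			// cgroup v2: no freezer line, so fall back to an apiserver healthz check.
			fmt.Println("no freezer controller found")
			return
		}
		fmt.Println("freezer cgroup:", path)
	}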
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-929592 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-929592 status -v=7 --alsologtostderr: exit status 3 (3.732164001s)

                                                
                                                
-- stdout --
	ha-929592
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-929592-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-929592-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-929592-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0914 17:12:38.256322   32713 out.go:345] Setting OutFile to fd 1 ...
	I0914 17:12:38.256594   32713 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 17:12:38.256603   32713 out.go:358] Setting ErrFile to fd 2...
	I0914 17:12:38.256607   32713 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 17:12:38.256789   32713 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19643-8806/.minikube/bin
	I0914 17:12:38.256960   32713 out.go:352] Setting JSON to false
	I0914 17:12:38.256990   32713 mustload.go:65] Loading cluster: ha-929592
	I0914 17:12:38.257042   32713 notify.go:220] Checking for updates...
	I0914 17:12:38.257381   32713 config.go:182] Loaded profile config "ha-929592": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0914 17:12:38.257401   32713 status.go:255] checking status of ha-929592 ...
	I0914 17:12:38.257937   32713 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 17:12:38.258003   32713 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 17:12:38.273372   32713 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37889
	I0914 17:12:38.273918   32713 main.go:141] libmachine: () Calling .GetVersion
	I0914 17:12:38.274724   32713 main.go:141] libmachine: Using API Version  1
	I0914 17:12:38.274757   32713 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 17:12:38.275145   32713 main.go:141] libmachine: () Calling .GetMachineName
	I0914 17:12:38.275350   32713 main.go:141] libmachine: (ha-929592) Calling .GetState
	I0914 17:12:38.277261   32713 status.go:330] ha-929592 host status = "Running" (err=<nil>)
	I0914 17:12:38.277280   32713 host.go:66] Checking if "ha-929592" exists ...
	I0914 17:12:38.277551   32713 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 17:12:38.277596   32713 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 17:12:38.293037   32713 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42545
	I0914 17:12:38.293494   32713 main.go:141] libmachine: () Calling .GetVersion
	I0914 17:12:38.294052   32713 main.go:141] libmachine: Using API Version  1
	I0914 17:12:38.294073   32713 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 17:12:38.294461   32713 main.go:141] libmachine: () Calling .GetMachineName
	I0914 17:12:38.294652   32713 main.go:141] libmachine: (ha-929592) Calling .GetIP
	I0914 17:12:38.297307   32713 main.go:141] libmachine: (ha-929592) DBG | domain ha-929592 has defined MAC address 52:54:00:5c:cb:09 in network mk-ha-929592
	I0914 17:12:38.297750   32713 main.go:141] libmachine: (ha-929592) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:cb:09", ip: ""} in network mk-ha-929592: {Iface:virbr1 ExpiryTime:2024-09-14 18:05:06 +0000 UTC Type:0 Mac:52:54:00:5c:cb:09 Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:ha-929592 Clientid:01:52:54:00:5c:cb:09}
	I0914 17:12:38.297775   32713 main.go:141] libmachine: (ha-929592) DBG | domain ha-929592 has defined IP address 192.168.39.54 and MAC address 52:54:00:5c:cb:09 in network mk-ha-929592
	I0914 17:12:38.297910   32713 host.go:66] Checking if "ha-929592" exists ...
	I0914 17:12:38.298347   32713 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 17:12:38.298390   32713 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 17:12:38.313549   32713 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42685
	I0914 17:12:38.314022   32713 main.go:141] libmachine: () Calling .GetVersion
	I0914 17:12:38.314556   32713 main.go:141] libmachine: Using API Version  1
	I0914 17:12:38.314580   32713 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 17:12:38.314857   32713 main.go:141] libmachine: () Calling .GetMachineName
	I0914 17:12:38.315008   32713 main.go:141] libmachine: (ha-929592) Calling .DriverName
	I0914 17:12:38.315154   32713 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0914 17:12:38.315171   32713 main.go:141] libmachine: (ha-929592) Calling .GetSSHHostname
	I0914 17:12:38.317950   32713 main.go:141] libmachine: (ha-929592) DBG | domain ha-929592 has defined MAC address 52:54:00:5c:cb:09 in network mk-ha-929592
	I0914 17:12:38.318378   32713 main.go:141] libmachine: (ha-929592) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:cb:09", ip: ""} in network mk-ha-929592: {Iface:virbr1 ExpiryTime:2024-09-14 18:05:06 +0000 UTC Type:0 Mac:52:54:00:5c:cb:09 Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:ha-929592 Clientid:01:52:54:00:5c:cb:09}
	I0914 17:12:38.318401   32713 main.go:141] libmachine: (ha-929592) DBG | domain ha-929592 has defined IP address 192.168.39.54 and MAC address 52:54:00:5c:cb:09 in network mk-ha-929592
	I0914 17:12:38.318545   32713 main.go:141] libmachine: (ha-929592) Calling .GetSSHPort
	I0914 17:12:38.318697   32713 main.go:141] libmachine: (ha-929592) Calling .GetSSHKeyPath
	I0914 17:12:38.318821   32713 main.go:141] libmachine: (ha-929592) Calling .GetSSHUsername
	I0914 17:12:38.318929   32713 sshutil.go:53] new ssh client: &{IP:192.168.39.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/ha-929592/id_rsa Username:docker}
	I0914 17:12:38.408506   32713 ssh_runner.go:195] Run: systemctl --version
	I0914 17:12:38.421081   32713 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 17:12:38.439301   32713 kubeconfig.go:125] found "ha-929592" server: "https://192.168.39.254:8443"
	I0914 17:12:38.439340   32713 api_server.go:166] Checking apiserver status ...
	I0914 17:12:38.439391   32713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 17:12:38.453763   32713 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1079/cgroup
	W0914 17:12:38.463130   32713 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1079/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0914 17:12:38.463195   32713 ssh_runner.go:195] Run: ls
	I0914 17:12:38.467769   32713 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0914 17:12:38.472566   32713 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0914 17:12:38.472590   32713 status.go:422] ha-929592 apiserver status = Running (err=<nil>)
	I0914 17:12:38.472601   32713 status.go:257] ha-929592 status: &{Name:ha-929592 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0914 17:12:38.472621   32713 status.go:255] checking status of ha-929592-m02 ...
	I0914 17:12:38.472922   32713 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 17:12:38.472966   32713 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 17:12:38.488355   32713 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40199
	I0914 17:12:38.488795   32713 main.go:141] libmachine: () Calling .GetVersion
	I0914 17:12:38.489276   32713 main.go:141] libmachine: Using API Version  1
	I0914 17:12:38.489308   32713 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 17:12:38.489661   32713 main.go:141] libmachine: () Calling .GetMachineName
	I0914 17:12:38.489823   32713 main.go:141] libmachine: (ha-929592-m02) Calling .GetState
	I0914 17:12:38.491375   32713 status.go:330] ha-929592-m02 host status = "Running" (err=<nil>)
	I0914 17:12:38.491390   32713 host.go:66] Checking if "ha-929592-m02" exists ...
	I0914 17:12:38.491691   32713 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 17:12:38.491733   32713 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 17:12:38.506348   32713 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38259
	I0914 17:12:38.506780   32713 main.go:141] libmachine: () Calling .GetVersion
	I0914 17:12:38.507331   32713 main.go:141] libmachine: Using API Version  1
	I0914 17:12:38.507357   32713 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 17:12:38.507648   32713 main.go:141] libmachine: () Calling .GetMachineName
	I0914 17:12:38.507802   32713 main.go:141] libmachine: (ha-929592-m02) Calling .GetIP
	I0914 17:12:38.510481   32713 main.go:141] libmachine: (ha-929592-m02) DBG | domain ha-929592-m02 has defined MAC address 52:54:00:23:9e:43 in network mk-ha-929592
	I0914 17:12:38.510895   32713 main.go:141] libmachine: (ha-929592-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:9e:43", ip: ""} in network mk-ha-929592: {Iface:virbr1 ExpiryTime:2024-09-14 18:05:49 +0000 UTC Type:0 Mac:52:54:00:23:9e:43 Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:ha-929592-m02 Clientid:01:52:54:00:23:9e:43}
	I0914 17:12:38.510925   32713 main.go:141] libmachine: (ha-929592-m02) DBG | domain ha-929592-m02 has defined IP address 192.168.39.148 and MAC address 52:54:00:23:9e:43 in network mk-ha-929592
	I0914 17:12:38.511052   32713 host.go:66] Checking if "ha-929592-m02" exists ...
	I0914 17:12:38.511344   32713 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 17:12:38.511390   32713 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 17:12:38.526604   32713 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35829
	I0914 17:12:38.527029   32713 main.go:141] libmachine: () Calling .GetVersion
	I0914 17:12:38.527466   32713 main.go:141] libmachine: Using API Version  1
	I0914 17:12:38.527490   32713 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 17:12:38.527795   32713 main.go:141] libmachine: () Calling .GetMachineName
	I0914 17:12:38.527965   32713 main.go:141] libmachine: (ha-929592-m02) Calling .DriverName
	I0914 17:12:38.528155   32713 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0914 17:12:38.528177   32713 main.go:141] libmachine: (ha-929592-m02) Calling .GetSSHHostname
	I0914 17:12:38.531538   32713 main.go:141] libmachine: (ha-929592-m02) DBG | domain ha-929592-m02 has defined MAC address 52:54:00:23:9e:43 in network mk-ha-929592
	I0914 17:12:38.532006   32713 main.go:141] libmachine: (ha-929592-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:9e:43", ip: ""} in network mk-ha-929592: {Iface:virbr1 ExpiryTime:2024-09-14 18:05:49 +0000 UTC Type:0 Mac:52:54:00:23:9e:43 Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:ha-929592-m02 Clientid:01:52:54:00:23:9e:43}
	I0914 17:12:38.532043   32713 main.go:141] libmachine: (ha-929592-m02) DBG | domain ha-929592-m02 has defined IP address 192.168.39.148 and MAC address 52:54:00:23:9e:43 in network mk-ha-929592
	I0914 17:12:38.532141   32713 main.go:141] libmachine: (ha-929592-m02) Calling .GetSSHPort
	I0914 17:12:38.532334   32713 main.go:141] libmachine: (ha-929592-m02) Calling .GetSSHKeyPath
	I0914 17:12:38.532476   32713 main.go:141] libmachine: (ha-929592-m02) Calling .GetSSHUsername
	I0914 17:12:38.532578   32713 sshutil.go:53] new ssh client: &{IP:192.168.39.148 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/ha-929592-m02/id_rsa Username:docker}
	W0914 17:12:41.602404   32713 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.148:22: connect: no route to host
	W0914 17:12:41.602506   32713 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.148:22: connect: no route to host
	E0914 17:12:41.602525   32713 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.148:22: connect: no route to host
	I0914 17:12:41.602534   32713 status.go:257] ha-929592-m02 status: &{Name:ha-929592-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0914 17:12:41.602554   32713 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.148:22: connect: no route to host
	I0914 17:12:41.602572   32713 status.go:255] checking status of ha-929592-m03 ...
	I0914 17:12:41.603154   32713 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 17:12:41.603211   32713 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 17:12:41.617805   32713 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42903
	I0914 17:12:41.618225   32713 main.go:141] libmachine: () Calling .GetVersion
	I0914 17:12:41.618629   32713 main.go:141] libmachine: Using API Version  1
	I0914 17:12:41.618647   32713 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 17:12:41.618899   32713 main.go:141] libmachine: () Calling .GetMachineName
	I0914 17:12:41.619096   32713 main.go:141] libmachine: (ha-929592-m03) Calling .GetState
	I0914 17:12:41.620573   32713 status.go:330] ha-929592-m03 host status = "Running" (err=<nil>)
	I0914 17:12:41.620589   32713 host.go:66] Checking if "ha-929592-m03" exists ...
	I0914 17:12:41.620932   32713 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 17:12:41.620968   32713 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 17:12:41.636034   32713 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44163
	I0914 17:12:41.636433   32713 main.go:141] libmachine: () Calling .GetVersion
	I0914 17:12:41.636987   32713 main.go:141] libmachine: Using API Version  1
	I0914 17:12:41.637005   32713 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 17:12:41.637321   32713 main.go:141] libmachine: () Calling .GetMachineName
	I0914 17:12:41.637494   32713 main.go:141] libmachine: (ha-929592-m03) Calling .GetIP
	I0914 17:12:41.640135   32713 main.go:141] libmachine: (ha-929592-m03) DBG | domain ha-929592-m03 has defined MAC address 52:54:00:49:df:f1 in network mk-ha-929592
	I0914 17:12:41.640527   32713 main.go:141] libmachine: (ha-929592-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:df:f1", ip: ""} in network mk-ha-929592: {Iface:virbr1 ExpiryTime:2024-09-14 18:07:04 +0000 UTC Type:0 Mac:52:54:00:49:df:f1 Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:ha-929592-m03 Clientid:01:52:54:00:49:df:f1}
	I0914 17:12:41.640566   32713 main.go:141] libmachine: (ha-929592-m03) DBG | domain ha-929592-m03 has defined IP address 192.168.39.39 and MAC address 52:54:00:49:df:f1 in network mk-ha-929592
	I0914 17:12:41.640705   32713 host.go:66] Checking if "ha-929592-m03" exists ...
	I0914 17:12:41.641010   32713 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 17:12:41.641060   32713 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 17:12:41.656185   32713 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41437
	I0914 17:12:41.656640   32713 main.go:141] libmachine: () Calling .GetVersion
	I0914 17:12:41.657067   32713 main.go:141] libmachine: Using API Version  1
	I0914 17:12:41.657096   32713 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 17:12:41.657415   32713 main.go:141] libmachine: () Calling .GetMachineName
	I0914 17:12:41.657589   32713 main.go:141] libmachine: (ha-929592-m03) Calling .DriverName
	I0914 17:12:41.657774   32713 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0914 17:12:41.657799   32713 main.go:141] libmachine: (ha-929592-m03) Calling .GetSSHHostname
	I0914 17:12:41.660551   32713 main.go:141] libmachine: (ha-929592-m03) DBG | domain ha-929592-m03 has defined MAC address 52:54:00:49:df:f1 in network mk-ha-929592
	I0914 17:12:41.660992   32713 main.go:141] libmachine: (ha-929592-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:df:f1", ip: ""} in network mk-ha-929592: {Iface:virbr1 ExpiryTime:2024-09-14 18:07:04 +0000 UTC Type:0 Mac:52:54:00:49:df:f1 Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:ha-929592-m03 Clientid:01:52:54:00:49:df:f1}
	I0914 17:12:41.661020   32713 main.go:141] libmachine: (ha-929592-m03) DBG | domain ha-929592-m03 has defined IP address 192.168.39.39 and MAC address 52:54:00:49:df:f1 in network mk-ha-929592
	I0914 17:12:41.661166   32713 main.go:141] libmachine: (ha-929592-m03) Calling .GetSSHPort
	I0914 17:12:41.661320   32713 main.go:141] libmachine: (ha-929592-m03) Calling .GetSSHKeyPath
	I0914 17:12:41.661467   32713 main.go:141] libmachine: (ha-929592-m03) Calling .GetSSHUsername
	I0914 17:12:41.661606   32713 sshutil.go:53] new ssh client: &{IP:192.168.39.39 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/ha-929592-m03/id_rsa Username:docker}
	I0914 17:12:41.745918   32713 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 17:12:41.760843   32713 kubeconfig.go:125] found "ha-929592" server: "https://192.168.39.254:8443"
	I0914 17:12:41.760871   32713 api_server.go:166] Checking apiserver status ...
	I0914 17:12:41.760914   32713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 17:12:41.775788   32713 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1387/cgroup
	W0914 17:12:41.785077   32713 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1387/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0914 17:12:41.785134   32713 ssh_runner.go:195] Run: ls
	I0914 17:12:41.789063   32713 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0914 17:12:41.793328   32713 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0914 17:12:41.793345   32713 status.go:422] ha-929592-m03 apiserver status = Running (err=<nil>)
	I0914 17:12:41.793353   32713 status.go:257] ha-929592-m03 status: &{Name:ha-929592-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0914 17:12:41.793367   32713 status.go:255] checking status of ha-929592-m04 ...
	I0914 17:12:41.793641   32713 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 17:12:41.793697   32713 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 17:12:41.808228   32713 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41371
	I0914 17:12:41.808653   32713 main.go:141] libmachine: () Calling .GetVersion
	I0914 17:12:41.809092   32713 main.go:141] libmachine: Using API Version  1
	I0914 17:12:41.809118   32713 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 17:12:41.809433   32713 main.go:141] libmachine: () Calling .GetMachineName
	I0914 17:12:41.809603   32713 main.go:141] libmachine: (ha-929592-m04) Calling .GetState
	I0914 17:12:41.811194   32713 status.go:330] ha-929592-m04 host status = "Running" (err=<nil>)
	I0914 17:12:41.811208   32713 host.go:66] Checking if "ha-929592-m04" exists ...
	I0914 17:12:41.811492   32713 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 17:12:41.811534   32713 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 17:12:41.827609   32713 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37069
	I0914 17:12:41.827976   32713 main.go:141] libmachine: () Calling .GetVersion
	I0914 17:12:41.828480   32713 main.go:141] libmachine: Using API Version  1
	I0914 17:12:41.828500   32713 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 17:12:41.828804   32713 main.go:141] libmachine: () Calling .GetMachineName
	I0914 17:12:41.828958   32713 main.go:141] libmachine: (ha-929592-m04) Calling .GetIP
	I0914 17:12:41.831878   32713 main.go:141] libmachine: (ha-929592-m04) DBG | domain ha-929592-m04 has defined MAC address 52:54:00:7a:18:a1 in network mk-ha-929592
	I0914 17:12:41.832279   32713 main.go:141] libmachine: (ha-929592-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:18:a1", ip: ""} in network mk-ha-929592: {Iface:virbr1 ExpiryTime:2024-09-14 18:08:29 +0000 UTC Type:0 Mac:52:54:00:7a:18:a1 Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:ha-929592-m04 Clientid:01:52:54:00:7a:18:a1}
	I0914 17:12:41.832295   32713 main.go:141] libmachine: (ha-929592-m04) DBG | domain ha-929592-m04 has defined IP address 192.168.39.51 and MAC address 52:54:00:7a:18:a1 in network mk-ha-929592
	I0914 17:12:41.832458   32713 host.go:66] Checking if "ha-929592-m04" exists ...
	I0914 17:12:41.832829   32713 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 17:12:41.832879   32713 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 17:12:41.847522   32713 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37239
	I0914 17:12:41.847953   32713 main.go:141] libmachine: () Calling .GetVersion
	I0914 17:12:41.848416   32713 main.go:141] libmachine: Using API Version  1
	I0914 17:12:41.848440   32713 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 17:12:41.848739   32713 main.go:141] libmachine: () Calling .GetMachineName
	I0914 17:12:41.848913   32713 main.go:141] libmachine: (ha-929592-m04) Calling .DriverName
	I0914 17:12:41.849045   32713 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0914 17:12:41.849063   32713 main.go:141] libmachine: (ha-929592-m04) Calling .GetSSHHostname
	I0914 17:12:41.851838   32713 main.go:141] libmachine: (ha-929592-m04) DBG | domain ha-929592-m04 has defined MAC address 52:54:00:7a:18:a1 in network mk-ha-929592
	I0914 17:12:41.852222   32713 main.go:141] libmachine: (ha-929592-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:18:a1", ip: ""} in network mk-ha-929592: {Iface:virbr1 ExpiryTime:2024-09-14 18:08:29 +0000 UTC Type:0 Mac:52:54:00:7a:18:a1 Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:ha-929592-m04 Clientid:01:52:54:00:7a:18:a1}
	I0914 17:12:41.852241   32713 main.go:141] libmachine: (ha-929592-m04) DBG | domain ha-929592-m04 has defined IP address 192.168.39.51 and MAC address 52:54:00:7a:18:a1 in network mk-ha-929592
	I0914 17:12:41.852374   32713 main.go:141] libmachine: (ha-929592-m04) Calling .GetSSHPort
	I0914 17:12:41.852525   32713 main.go:141] libmachine: (ha-929592-m04) Calling .GetSSHKeyPath
	I0914 17:12:41.852721   32713 main.go:141] libmachine: (ha-929592-m04) Calling .GetSSHUsername
	I0914 17:12:41.852873   32713 sshutil.go:53] new ssh client: &{IP:192.168.39.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/ha-929592-m04/id_rsa Username:docker}
	I0914 17:12:41.928880   32713 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 17:12:41.942663   32713 status.go:257] ha-929592-m04 status: &{Name:ha-929592-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-929592 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-929592 status -v=7 --alsologtostderr: exit status 3 (3.716277064s)

-- stdout --
	ha-929592
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-929592-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-929592-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-929592-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I0914 17:12:45.538990   32814 out.go:345] Setting OutFile to fd 1 ...
	I0914 17:12:45.539105   32814 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 17:12:45.539114   32814 out.go:358] Setting ErrFile to fd 2...
	I0914 17:12:45.539118   32814 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 17:12:45.539312   32814 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19643-8806/.minikube/bin
	I0914 17:12:45.539473   32814 out.go:352] Setting JSON to false
	I0914 17:12:45.539502   32814 mustload.go:65] Loading cluster: ha-929592
	I0914 17:12:45.539557   32814 notify.go:220] Checking for updates...
	I0914 17:12:45.540062   32814 config.go:182] Loaded profile config "ha-929592": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0914 17:12:45.540083   32814 status.go:255] checking status of ha-929592 ...
	I0914 17:12:45.540518   32814 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 17:12:45.540587   32814 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 17:12:45.556204   32814 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35809
	I0914 17:12:45.556638   32814 main.go:141] libmachine: () Calling .GetVersion
	I0914 17:12:45.557280   32814 main.go:141] libmachine: Using API Version  1
	I0914 17:12:45.557314   32814 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 17:12:45.557623   32814 main.go:141] libmachine: () Calling .GetMachineName
	I0914 17:12:45.557784   32814 main.go:141] libmachine: (ha-929592) Calling .GetState
	I0914 17:12:45.559346   32814 status.go:330] ha-929592 host status = "Running" (err=<nil>)
	I0914 17:12:45.559361   32814 host.go:66] Checking if "ha-929592" exists ...
	I0914 17:12:45.559666   32814 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 17:12:45.559700   32814 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 17:12:45.575019   32814 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42419
	I0914 17:12:45.575472   32814 main.go:141] libmachine: () Calling .GetVersion
	I0914 17:12:45.575974   32814 main.go:141] libmachine: Using API Version  1
	I0914 17:12:45.576007   32814 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 17:12:45.576345   32814 main.go:141] libmachine: () Calling .GetMachineName
	I0914 17:12:45.576528   32814 main.go:141] libmachine: (ha-929592) Calling .GetIP
	I0914 17:12:45.579278   32814 main.go:141] libmachine: (ha-929592) DBG | domain ha-929592 has defined MAC address 52:54:00:5c:cb:09 in network mk-ha-929592
	I0914 17:12:45.579693   32814 main.go:141] libmachine: (ha-929592) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:cb:09", ip: ""} in network mk-ha-929592: {Iface:virbr1 ExpiryTime:2024-09-14 18:05:06 +0000 UTC Type:0 Mac:52:54:00:5c:cb:09 Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:ha-929592 Clientid:01:52:54:00:5c:cb:09}
	I0914 17:12:45.579719   32814 main.go:141] libmachine: (ha-929592) DBG | domain ha-929592 has defined IP address 192.168.39.54 and MAC address 52:54:00:5c:cb:09 in network mk-ha-929592
	I0914 17:12:45.579841   32814 host.go:66] Checking if "ha-929592" exists ...
	I0914 17:12:45.580141   32814 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 17:12:45.580188   32814 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 17:12:45.595256   32814 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34299
	I0914 17:12:45.595830   32814 main.go:141] libmachine: () Calling .GetVersion
	I0914 17:12:45.596375   32814 main.go:141] libmachine: Using API Version  1
	I0914 17:12:45.596401   32814 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 17:12:45.596705   32814 main.go:141] libmachine: () Calling .GetMachineName
	I0914 17:12:45.596903   32814 main.go:141] libmachine: (ha-929592) Calling .DriverName
	I0914 17:12:45.597075   32814 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0914 17:12:45.597104   32814 main.go:141] libmachine: (ha-929592) Calling .GetSSHHostname
	I0914 17:12:45.599883   32814 main.go:141] libmachine: (ha-929592) DBG | domain ha-929592 has defined MAC address 52:54:00:5c:cb:09 in network mk-ha-929592
	I0914 17:12:45.600277   32814 main.go:141] libmachine: (ha-929592) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:cb:09", ip: ""} in network mk-ha-929592: {Iface:virbr1 ExpiryTime:2024-09-14 18:05:06 +0000 UTC Type:0 Mac:52:54:00:5c:cb:09 Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:ha-929592 Clientid:01:52:54:00:5c:cb:09}
	I0914 17:12:45.600315   32814 main.go:141] libmachine: (ha-929592) DBG | domain ha-929592 has defined IP address 192.168.39.54 and MAC address 52:54:00:5c:cb:09 in network mk-ha-929592
	I0914 17:12:45.600512   32814 main.go:141] libmachine: (ha-929592) Calling .GetSSHPort
	I0914 17:12:45.600686   32814 main.go:141] libmachine: (ha-929592) Calling .GetSSHKeyPath
	I0914 17:12:45.600849   32814 main.go:141] libmachine: (ha-929592) Calling .GetSSHUsername
	I0914 17:12:45.601003   32814 sshutil.go:53] new ssh client: &{IP:192.168.39.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/ha-929592/id_rsa Username:docker}
	I0914 17:12:45.686358   32814 ssh_runner.go:195] Run: systemctl --version
	I0914 17:12:45.692089   32814 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 17:12:45.705307   32814 kubeconfig.go:125] found "ha-929592" server: "https://192.168.39.254:8443"
	I0914 17:12:45.705343   32814 api_server.go:166] Checking apiserver status ...
	I0914 17:12:45.705375   32814 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 17:12:45.718880   32814 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1079/cgroup
	W0914 17:12:45.728123   32814 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1079/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0914 17:12:45.728182   32814 ssh_runner.go:195] Run: ls
	I0914 17:12:45.732758   32814 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0914 17:12:45.738566   32814 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0914 17:12:45.738593   32814 status.go:422] ha-929592 apiserver status = Running (err=<nil>)
	I0914 17:12:45.738602   32814 status.go:257] ha-929592 status: &{Name:ha-929592 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0914 17:12:45.738617   32814 status.go:255] checking status of ha-929592-m02 ...
	I0914 17:12:45.738929   32814 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 17:12:45.738963   32814 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 17:12:45.753971   32814 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39065
	I0914 17:12:45.754574   32814 main.go:141] libmachine: () Calling .GetVersion
	I0914 17:12:45.755178   32814 main.go:141] libmachine: Using API Version  1
	I0914 17:12:45.755205   32814 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 17:12:45.755572   32814 main.go:141] libmachine: () Calling .GetMachineName
	I0914 17:12:45.755735   32814 main.go:141] libmachine: (ha-929592-m02) Calling .GetState
	I0914 17:12:45.757602   32814 status.go:330] ha-929592-m02 host status = "Running" (err=<nil>)
	I0914 17:12:45.757619   32814 host.go:66] Checking if "ha-929592-m02" exists ...
	I0914 17:12:45.757896   32814 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 17:12:45.757937   32814 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 17:12:45.773268   32814 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41403
	I0914 17:12:45.773721   32814 main.go:141] libmachine: () Calling .GetVersion
	I0914 17:12:45.774220   32814 main.go:141] libmachine: Using API Version  1
	I0914 17:12:45.774245   32814 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 17:12:45.774530   32814 main.go:141] libmachine: () Calling .GetMachineName
	I0914 17:12:45.774705   32814 main.go:141] libmachine: (ha-929592-m02) Calling .GetIP
	I0914 17:12:45.777451   32814 main.go:141] libmachine: (ha-929592-m02) DBG | domain ha-929592-m02 has defined MAC address 52:54:00:23:9e:43 in network mk-ha-929592
	I0914 17:12:45.777936   32814 main.go:141] libmachine: (ha-929592-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:9e:43", ip: ""} in network mk-ha-929592: {Iface:virbr1 ExpiryTime:2024-09-14 18:05:49 +0000 UTC Type:0 Mac:52:54:00:23:9e:43 Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:ha-929592-m02 Clientid:01:52:54:00:23:9e:43}
	I0914 17:12:45.777971   32814 main.go:141] libmachine: (ha-929592-m02) DBG | domain ha-929592-m02 has defined IP address 192.168.39.148 and MAC address 52:54:00:23:9e:43 in network mk-ha-929592
	I0914 17:12:45.778085   32814 host.go:66] Checking if "ha-929592-m02" exists ...
	I0914 17:12:45.778528   32814 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 17:12:45.778588   32814 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 17:12:45.793923   32814 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42641
	I0914 17:12:45.794384   32814 main.go:141] libmachine: () Calling .GetVersion
	I0914 17:12:45.794807   32814 main.go:141] libmachine: Using API Version  1
	I0914 17:12:45.794828   32814 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 17:12:45.795197   32814 main.go:141] libmachine: () Calling .GetMachineName
	I0914 17:12:45.795404   32814 main.go:141] libmachine: (ha-929592-m02) Calling .DriverName
	I0914 17:12:45.795595   32814 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0914 17:12:45.795614   32814 main.go:141] libmachine: (ha-929592-m02) Calling .GetSSHHostname
	I0914 17:12:45.799006   32814 main.go:141] libmachine: (ha-929592-m02) DBG | domain ha-929592-m02 has defined MAC address 52:54:00:23:9e:43 in network mk-ha-929592
	I0914 17:12:45.799503   32814 main.go:141] libmachine: (ha-929592-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:9e:43", ip: ""} in network mk-ha-929592: {Iface:virbr1 ExpiryTime:2024-09-14 18:05:49 +0000 UTC Type:0 Mac:52:54:00:23:9e:43 Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:ha-929592-m02 Clientid:01:52:54:00:23:9e:43}
	I0914 17:12:45.799527   32814 main.go:141] libmachine: (ha-929592-m02) DBG | domain ha-929592-m02 has defined IP address 192.168.39.148 and MAC address 52:54:00:23:9e:43 in network mk-ha-929592
	I0914 17:12:45.799639   32814 main.go:141] libmachine: (ha-929592-m02) Calling .GetSSHPort
	I0914 17:12:45.799816   32814 main.go:141] libmachine: (ha-929592-m02) Calling .GetSSHKeyPath
	I0914 17:12:45.799949   32814 main.go:141] libmachine: (ha-929592-m02) Calling .GetSSHUsername
	I0914 17:12:45.800050   32814 sshutil.go:53] new ssh client: &{IP:192.168.39.148 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/ha-929592-m02/id_rsa Username:docker}
	W0914 17:12:48.866414   32814 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.148:22: connect: no route to host
	W0914 17:12:48.866523   32814 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.148:22: connect: no route to host
	E0914 17:12:48.866558   32814 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.148:22: connect: no route to host
	I0914 17:12:48.866566   32814 status.go:257] ha-929592-m02 status: &{Name:ha-929592-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0914 17:12:48.866583   32814 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.148:22: connect: no route to host
	I0914 17:12:48.866592   32814 status.go:255] checking status of ha-929592-m03 ...
	I0914 17:12:48.866929   32814 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 17:12:48.866978   32814 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 17:12:48.882367   32814 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42637
	I0914 17:12:48.882857   32814 main.go:141] libmachine: () Calling .GetVersion
	I0914 17:12:48.883357   32814 main.go:141] libmachine: Using API Version  1
	I0914 17:12:48.883378   32814 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 17:12:48.883713   32814 main.go:141] libmachine: () Calling .GetMachineName
	I0914 17:12:48.883880   32814 main.go:141] libmachine: (ha-929592-m03) Calling .GetState
	I0914 17:12:48.885666   32814 status.go:330] ha-929592-m03 host status = "Running" (err=<nil>)
	I0914 17:12:48.885683   32814 host.go:66] Checking if "ha-929592-m03" exists ...
	I0914 17:12:48.886012   32814 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 17:12:48.886049   32814 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 17:12:48.900647   32814 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37043
	I0914 17:12:48.901136   32814 main.go:141] libmachine: () Calling .GetVersion
	I0914 17:12:48.901671   32814 main.go:141] libmachine: Using API Version  1
	I0914 17:12:48.901689   32814 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 17:12:48.901987   32814 main.go:141] libmachine: () Calling .GetMachineName
	I0914 17:12:48.902211   32814 main.go:141] libmachine: (ha-929592-m03) Calling .GetIP
	I0914 17:12:48.905067   32814 main.go:141] libmachine: (ha-929592-m03) DBG | domain ha-929592-m03 has defined MAC address 52:54:00:49:df:f1 in network mk-ha-929592
	I0914 17:12:48.905430   32814 main.go:141] libmachine: (ha-929592-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:df:f1", ip: ""} in network mk-ha-929592: {Iface:virbr1 ExpiryTime:2024-09-14 18:07:04 +0000 UTC Type:0 Mac:52:54:00:49:df:f1 Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:ha-929592-m03 Clientid:01:52:54:00:49:df:f1}
	I0914 17:12:48.905457   32814 main.go:141] libmachine: (ha-929592-m03) DBG | domain ha-929592-m03 has defined IP address 192.168.39.39 and MAC address 52:54:00:49:df:f1 in network mk-ha-929592
	I0914 17:12:48.905578   32814 host.go:66] Checking if "ha-929592-m03" exists ...
	I0914 17:12:48.905878   32814 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 17:12:48.905913   32814 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 17:12:48.920261   32814 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36555
	I0914 17:12:48.920686   32814 main.go:141] libmachine: () Calling .GetVersion
	I0914 17:12:48.921159   32814 main.go:141] libmachine: Using API Version  1
	I0914 17:12:48.921180   32814 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 17:12:48.921494   32814 main.go:141] libmachine: () Calling .GetMachineName
	I0914 17:12:48.921661   32814 main.go:141] libmachine: (ha-929592-m03) Calling .DriverName
	I0914 17:12:48.921854   32814 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0914 17:12:48.921874   32814 main.go:141] libmachine: (ha-929592-m03) Calling .GetSSHHostname
	I0914 17:12:48.924909   32814 main.go:141] libmachine: (ha-929592-m03) DBG | domain ha-929592-m03 has defined MAC address 52:54:00:49:df:f1 in network mk-ha-929592
	I0914 17:12:48.925390   32814 main.go:141] libmachine: (ha-929592-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:df:f1", ip: ""} in network mk-ha-929592: {Iface:virbr1 ExpiryTime:2024-09-14 18:07:04 +0000 UTC Type:0 Mac:52:54:00:49:df:f1 Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:ha-929592-m03 Clientid:01:52:54:00:49:df:f1}
	I0914 17:12:48.925404   32814 main.go:141] libmachine: (ha-929592-m03) DBG | domain ha-929592-m03 has defined IP address 192.168.39.39 and MAC address 52:54:00:49:df:f1 in network mk-ha-929592
	I0914 17:12:48.925620   32814 main.go:141] libmachine: (ha-929592-m03) Calling .GetSSHPort
	I0914 17:12:48.925816   32814 main.go:141] libmachine: (ha-929592-m03) Calling .GetSSHKeyPath
	I0914 17:12:48.926007   32814 main.go:141] libmachine: (ha-929592-m03) Calling .GetSSHUsername
	I0914 17:12:48.926179   32814 sshutil.go:53] new ssh client: &{IP:192.168.39.39 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/ha-929592-m03/id_rsa Username:docker}
	I0914 17:12:49.005643   32814 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 17:12:49.021400   32814 kubeconfig.go:125] found "ha-929592" server: "https://192.168.39.254:8443"
	I0914 17:12:49.021430   32814 api_server.go:166] Checking apiserver status ...
	I0914 17:12:49.021470   32814 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 17:12:49.035695   32814 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1387/cgroup
	W0914 17:12:49.044909   32814 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1387/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0914 17:12:49.044972   32814 ssh_runner.go:195] Run: ls
	I0914 17:12:49.048910   32814 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0914 17:12:49.057934   32814 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0914 17:12:49.057958   32814 status.go:422] ha-929592-m03 apiserver status = Running (err=<nil>)
	I0914 17:12:49.057966   32814 status.go:257] ha-929592-m03 status: &{Name:ha-929592-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0914 17:12:49.057984   32814 status.go:255] checking status of ha-929592-m04 ...
	I0914 17:12:49.058315   32814 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 17:12:49.058351   32814 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 17:12:49.073121   32814 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42217
	I0914 17:12:49.073537   32814 main.go:141] libmachine: () Calling .GetVersion
	I0914 17:12:49.074063   32814 main.go:141] libmachine: Using API Version  1
	I0914 17:12:49.074084   32814 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 17:12:49.074410   32814 main.go:141] libmachine: () Calling .GetMachineName
	I0914 17:12:49.074610   32814 main.go:141] libmachine: (ha-929592-m04) Calling .GetState
	I0914 17:12:49.076190   32814 status.go:330] ha-929592-m04 host status = "Running" (err=<nil>)
	I0914 17:12:49.076205   32814 host.go:66] Checking if "ha-929592-m04" exists ...
	I0914 17:12:49.076544   32814 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 17:12:49.076584   32814 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 17:12:49.091101   32814 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35677
	I0914 17:12:49.091547   32814 main.go:141] libmachine: () Calling .GetVersion
	I0914 17:12:49.092005   32814 main.go:141] libmachine: Using API Version  1
	I0914 17:12:49.092023   32814 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 17:12:49.092294   32814 main.go:141] libmachine: () Calling .GetMachineName
	I0914 17:12:49.092463   32814 main.go:141] libmachine: (ha-929592-m04) Calling .GetIP
	I0914 17:12:49.095203   32814 main.go:141] libmachine: (ha-929592-m04) DBG | domain ha-929592-m04 has defined MAC address 52:54:00:7a:18:a1 in network mk-ha-929592
	I0914 17:12:49.095659   32814 main.go:141] libmachine: (ha-929592-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:18:a1", ip: ""} in network mk-ha-929592: {Iface:virbr1 ExpiryTime:2024-09-14 18:08:29 +0000 UTC Type:0 Mac:52:54:00:7a:18:a1 Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:ha-929592-m04 Clientid:01:52:54:00:7a:18:a1}
	I0914 17:12:49.095688   32814 main.go:141] libmachine: (ha-929592-m04) DBG | domain ha-929592-m04 has defined IP address 192.168.39.51 and MAC address 52:54:00:7a:18:a1 in network mk-ha-929592
	I0914 17:12:49.095904   32814 host.go:66] Checking if "ha-929592-m04" exists ...
	I0914 17:12:49.096179   32814 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 17:12:49.096213   32814 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 17:12:49.110951   32814 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40261
	I0914 17:12:49.111408   32814 main.go:141] libmachine: () Calling .GetVersion
	I0914 17:12:49.111893   32814 main.go:141] libmachine: Using API Version  1
	I0914 17:12:49.111914   32814 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 17:12:49.112245   32814 main.go:141] libmachine: () Calling .GetMachineName
	I0914 17:12:49.112451   32814 main.go:141] libmachine: (ha-929592-m04) Calling .DriverName
	I0914 17:12:49.112660   32814 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0914 17:12:49.112680   32814 main.go:141] libmachine: (ha-929592-m04) Calling .GetSSHHostname
	I0914 17:12:49.115879   32814 main.go:141] libmachine: (ha-929592-m04) DBG | domain ha-929592-m04 has defined MAC address 52:54:00:7a:18:a1 in network mk-ha-929592
	I0914 17:12:49.116295   32814 main.go:141] libmachine: (ha-929592-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:18:a1", ip: ""} in network mk-ha-929592: {Iface:virbr1 ExpiryTime:2024-09-14 18:08:29 +0000 UTC Type:0 Mac:52:54:00:7a:18:a1 Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:ha-929592-m04 Clientid:01:52:54:00:7a:18:a1}
	I0914 17:12:49.116347   32814 main.go:141] libmachine: (ha-929592-m04) DBG | domain ha-929592-m04 has defined IP address 192.168.39.51 and MAC address 52:54:00:7a:18:a1 in network mk-ha-929592
	I0914 17:12:49.116481   32814 main.go:141] libmachine: (ha-929592-m04) Calling .GetSSHPort
	I0914 17:12:49.116659   32814 main.go:141] libmachine: (ha-929592-m04) Calling .GetSSHKeyPath
	I0914 17:12:49.116800   32814 main.go:141] libmachine: (ha-929592-m04) Calling .GetSSHUsername
	I0914 17:12:49.116917   32814 sshutil.go:53] new ssh client: &{IP:192.168.39.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/ha-929592-m04/id_rsa Username:docker}
	I0914 17:12:49.196944   32814 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 17:12:49.210818   32814 status.go:257] ha-929592-m04 status: &{Name:ha-929592-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-929592 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-929592 status -v=7 --alsologtostderr: exit status 7 (615.639608ms)

-- stdout --
	ha-929592
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-929592-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-929592-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-929592-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I0914 17:12:57.521407   32973 out.go:345] Setting OutFile to fd 1 ...
	I0914 17:12:57.521692   32973 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 17:12:57.521702   32973 out.go:358] Setting ErrFile to fd 2...
	I0914 17:12:57.521708   32973 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 17:12:57.521890   32973 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19643-8806/.minikube/bin
	I0914 17:12:57.522102   32973 out.go:352] Setting JSON to false
	I0914 17:12:57.522134   32973 mustload.go:65] Loading cluster: ha-929592
	I0914 17:12:57.522207   32973 notify.go:220] Checking for updates...
	I0914 17:12:57.522582   32973 config.go:182] Loaded profile config "ha-929592": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0914 17:12:57.522600   32973 status.go:255] checking status of ha-929592 ...
	I0914 17:12:57.523024   32973 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 17:12:57.523089   32973 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 17:12:57.542301   32973 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42127
	I0914 17:12:57.542811   32973 main.go:141] libmachine: () Calling .GetVersion
	I0914 17:12:57.543484   32973 main.go:141] libmachine: Using API Version  1
	I0914 17:12:57.543519   32973 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 17:12:57.543847   32973 main.go:141] libmachine: () Calling .GetMachineName
	I0914 17:12:57.544046   32973 main.go:141] libmachine: (ha-929592) Calling .GetState
	I0914 17:12:57.546081   32973 status.go:330] ha-929592 host status = "Running" (err=<nil>)
	I0914 17:12:57.546094   32973 host.go:66] Checking if "ha-929592" exists ...
	I0914 17:12:57.546423   32973 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 17:12:57.546458   32973 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 17:12:57.561335   32973 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33545
	I0914 17:12:57.561759   32973 main.go:141] libmachine: () Calling .GetVersion
	I0914 17:12:57.562206   32973 main.go:141] libmachine: Using API Version  1
	I0914 17:12:57.562228   32973 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 17:12:57.562579   32973 main.go:141] libmachine: () Calling .GetMachineName
	I0914 17:12:57.562774   32973 main.go:141] libmachine: (ha-929592) Calling .GetIP
	I0914 17:12:57.566008   32973 main.go:141] libmachine: (ha-929592) DBG | domain ha-929592 has defined MAC address 52:54:00:5c:cb:09 in network mk-ha-929592
	I0914 17:12:57.566445   32973 main.go:141] libmachine: (ha-929592) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:cb:09", ip: ""} in network mk-ha-929592: {Iface:virbr1 ExpiryTime:2024-09-14 18:05:06 +0000 UTC Type:0 Mac:52:54:00:5c:cb:09 Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:ha-929592 Clientid:01:52:54:00:5c:cb:09}
	I0914 17:12:57.566468   32973 main.go:141] libmachine: (ha-929592) DBG | domain ha-929592 has defined IP address 192.168.39.54 and MAC address 52:54:00:5c:cb:09 in network mk-ha-929592
	I0914 17:12:57.566633   32973 host.go:66] Checking if "ha-929592" exists ...
	I0914 17:12:57.567025   32973 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 17:12:57.567068   32973 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 17:12:57.582044   32973 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32909
	I0914 17:12:57.582543   32973 main.go:141] libmachine: () Calling .GetVersion
	I0914 17:12:57.583107   32973 main.go:141] libmachine: Using API Version  1
	I0914 17:12:57.583129   32973 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 17:12:57.583429   32973 main.go:141] libmachine: () Calling .GetMachineName
	I0914 17:12:57.583672   32973 main.go:141] libmachine: (ha-929592) Calling .DriverName
	I0914 17:12:57.583856   32973 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0914 17:12:57.583876   32973 main.go:141] libmachine: (ha-929592) Calling .GetSSHHostname
	I0914 17:12:57.586593   32973 main.go:141] libmachine: (ha-929592) DBG | domain ha-929592 has defined MAC address 52:54:00:5c:cb:09 in network mk-ha-929592
	I0914 17:12:57.587066   32973 main.go:141] libmachine: (ha-929592) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:cb:09", ip: ""} in network mk-ha-929592: {Iface:virbr1 ExpiryTime:2024-09-14 18:05:06 +0000 UTC Type:0 Mac:52:54:00:5c:cb:09 Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:ha-929592 Clientid:01:52:54:00:5c:cb:09}
	I0914 17:12:57.587109   32973 main.go:141] libmachine: (ha-929592) DBG | domain ha-929592 has defined IP address 192.168.39.54 and MAC address 52:54:00:5c:cb:09 in network mk-ha-929592
	I0914 17:12:57.587220   32973 main.go:141] libmachine: (ha-929592) Calling .GetSSHPort
	I0914 17:12:57.587400   32973 main.go:141] libmachine: (ha-929592) Calling .GetSSHKeyPath
	I0914 17:12:57.587530   32973 main.go:141] libmachine: (ha-929592) Calling .GetSSHUsername
	I0914 17:12:57.587667   32973 sshutil.go:53] new ssh client: &{IP:192.168.39.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/ha-929592/id_rsa Username:docker}
	I0914 17:12:57.674037   32973 ssh_runner.go:195] Run: systemctl --version
	I0914 17:12:57.684649   32973 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 17:12:57.700723   32973 kubeconfig.go:125] found "ha-929592" server: "https://192.168.39.254:8443"
	I0914 17:12:57.700763   32973 api_server.go:166] Checking apiserver status ...
	I0914 17:12:57.700812   32973 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 17:12:57.714861   32973 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1079/cgroup
	W0914 17:12:57.726016   32973 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1079/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0914 17:12:57.726081   32973 ssh_runner.go:195] Run: ls
	I0914 17:12:57.730354   32973 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0914 17:12:57.736241   32973 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0914 17:12:57.736270   32973 status.go:422] ha-929592 apiserver status = Running (err=<nil>)
	I0914 17:12:57.736280   32973 status.go:257] ha-929592 status: &{Name:ha-929592 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0914 17:12:57.736303   32973 status.go:255] checking status of ha-929592-m02 ...
	I0914 17:12:57.736597   32973 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 17:12:57.736639   32973 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 17:12:57.751347   32973 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40879
	I0914 17:12:57.751843   32973 main.go:141] libmachine: () Calling .GetVersion
	I0914 17:12:57.752409   32973 main.go:141] libmachine: Using API Version  1
	I0914 17:12:57.752431   32973 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 17:12:57.752758   32973 main.go:141] libmachine: () Calling .GetMachineName
	I0914 17:12:57.752935   32973 main.go:141] libmachine: (ha-929592-m02) Calling .GetState
	I0914 17:12:57.754589   32973 status.go:330] ha-929592-m02 host status = "Stopped" (err=<nil>)
	I0914 17:12:57.754604   32973 status.go:343] host is not running, skipping remaining checks
	I0914 17:12:57.754612   32973 status.go:257] ha-929592-m02 status: &{Name:ha-929592-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0914 17:12:57.754634   32973 status.go:255] checking status of ha-929592-m03 ...
	I0914 17:12:57.755082   32973 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 17:12:57.755132   32973 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 17:12:57.771369   32973 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39687
	I0914 17:12:57.771890   32973 main.go:141] libmachine: () Calling .GetVersion
	I0914 17:12:57.772454   32973 main.go:141] libmachine: Using API Version  1
	I0914 17:12:57.772479   32973 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 17:12:57.772830   32973 main.go:141] libmachine: () Calling .GetMachineName
	I0914 17:12:57.773001   32973 main.go:141] libmachine: (ha-929592-m03) Calling .GetState
	I0914 17:12:57.774481   32973 status.go:330] ha-929592-m03 host status = "Running" (err=<nil>)
	I0914 17:12:57.774495   32973 host.go:66] Checking if "ha-929592-m03" exists ...
	I0914 17:12:57.774856   32973 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 17:12:57.774895   32973 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 17:12:57.789597   32973 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40573
	I0914 17:12:57.790077   32973 main.go:141] libmachine: () Calling .GetVersion
	I0914 17:12:57.790549   32973 main.go:141] libmachine: Using API Version  1
	I0914 17:12:57.790574   32973 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 17:12:57.790882   32973 main.go:141] libmachine: () Calling .GetMachineName
	I0914 17:12:57.791038   32973 main.go:141] libmachine: (ha-929592-m03) Calling .GetIP
	I0914 17:12:57.793832   32973 main.go:141] libmachine: (ha-929592-m03) DBG | domain ha-929592-m03 has defined MAC address 52:54:00:49:df:f1 in network mk-ha-929592
	I0914 17:12:57.794261   32973 main.go:141] libmachine: (ha-929592-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:df:f1", ip: ""} in network mk-ha-929592: {Iface:virbr1 ExpiryTime:2024-09-14 18:07:04 +0000 UTC Type:0 Mac:52:54:00:49:df:f1 Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:ha-929592-m03 Clientid:01:52:54:00:49:df:f1}
	I0914 17:12:57.794309   32973 main.go:141] libmachine: (ha-929592-m03) DBG | domain ha-929592-m03 has defined IP address 192.168.39.39 and MAC address 52:54:00:49:df:f1 in network mk-ha-929592
	I0914 17:12:57.794430   32973 host.go:66] Checking if "ha-929592-m03" exists ...
	I0914 17:12:57.794847   32973 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 17:12:57.794897   32973 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 17:12:57.809830   32973 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33581
	I0914 17:12:57.810287   32973 main.go:141] libmachine: () Calling .GetVersion
	I0914 17:12:57.810773   32973 main.go:141] libmachine: Using API Version  1
	I0914 17:12:57.810792   32973 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 17:12:57.811124   32973 main.go:141] libmachine: () Calling .GetMachineName
	I0914 17:12:57.811337   32973 main.go:141] libmachine: (ha-929592-m03) Calling .DriverName
	I0914 17:12:57.811506   32973 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0914 17:12:57.811523   32973 main.go:141] libmachine: (ha-929592-m03) Calling .GetSSHHostname
	I0914 17:12:57.814638   32973 main.go:141] libmachine: (ha-929592-m03) DBG | domain ha-929592-m03 has defined MAC address 52:54:00:49:df:f1 in network mk-ha-929592
	I0914 17:12:57.815104   32973 main.go:141] libmachine: (ha-929592-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:df:f1", ip: ""} in network mk-ha-929592: {Iface:virbr1 ExpiryTime:2024-09-14 18:07:04 +0000 UTC Type:0 Mac:52:54:00:49:df:f1 Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:ha-929592-m03 Clientid:01:52:54:00:49:df:f1}
	I0914 17:12:57.815125   32973 main.go:141] libmachine: (ha-929592-m03) DBG | domain ha-929592-m03 has defined IP address 192.168.39.39 and MAC address 52:54:00:49:df:f1 in network mk-ha-929592
	I0914 17:12:57.815307   32973 main.go:141] libmachine: (ha-929592-m03) Calling .GetSSHPort
	I0914 17:12:57.815457   32973 main.go:141] libmachine: (ha-929592-m03) Calling .GetSSHKeyPath
	I0914 17:12:57.815621   32973 main.go:141] libmachine: (ha-929592-m03) Calling .GetSSHUsername
	I0914 17:12:57.815742   32973 sshutil.go:53] new ssh client: &{IP:192.168.39.39 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/ha-929592-m03/id_rsa Username:docker}
	I0914 17:12:57.897607   32973 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 17:12:57.911327   32973 kubeconfig.go:125] found "ha-929592" server: "https://192.168.39.254:8443"
	I0914 17:12:57.911353   32973 api_server.go:166] Checking apiserver status ...
	I0914 17:12:57.911384   32973 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 17:12:57.923655   32973 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1387/cgroup
	W0914 17:12:57.933516   32973 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1387/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0914 17:12:57.933565   32973 ssh_runner.go:195] Run: ls
	I0914 17:12:57.937609   32973 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0914 17:12:57.941959   32973 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0914 17:12:57.941982   32973 status.go:422] ha-929592-m03 apiserver status = Running (err=<nil>)
	I0914 17:12:57.941990   32973 status.go:257] ha-929592-m03 status: &{Name:ha-929592-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0914 17:12:57.942004   32973 status.go:255] checking status of ha-929592-m04 ...
	I0914 17:12:57.942361   32973 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 17:12:57.942397   32973 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 17:12:57.957049   32973 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34897
	I0914 17:12:57.957467   32973 main.go:141] libmachine: () Calling .GetVersion
	I0914 17:12:57.958051   32973 main.go:141] libmachine: Using API Version  1
	I0914 17:12:57.958071   32973 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 17:12:57.958395   32973 main.go:141] libmachine: () Calling .GetMachineName
	I0914 17:12:57.958562   32973 main.go:141] libmachine: (ha-929592-m04) Calling .GetState
	I0914 17:12:57.960175   32973 status.go:330] ha-929592-m04 host status = "Running" (err=<nil>)
	I0914 17:12:57.960189   32973 host.go:66] Checking if "ha-929592-m04" exists ...
	I0914 17:12:57.960510   32973 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 17:12:57.960544   32973 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 17:12:57.976409   32973 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41081
	I0914 17:12:57.976849   32973 main.go:141] libmachine: () Calling .GetVersion
	I0914 17:12:57.977267   32973 main.go:141] libmachine: Using API Version  1
	I0914 17:12:57.977289   32973 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 17:12:57.977702   32973 main.go:141] libmachine: () Calling .GetMachineName
	I0914 17:12:57.977894   32973 main.go:141] libmachine: (ha-929592-m04) Calling .GetIP
	I0914 17:12:57.980852   32973 main.go:141] libmachine: (ha-929592-m04) DBG | domain ha-929592-m04 has defined MAC address 52:54:00:7a:18:a1 in network mk-ha-929592
	I0914 17:12:57.981273   32973 main.go:141] libmachine: (ha-929592-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:18:a1", ip: ""} in network mk-ha-929592: {Iface:virbr1 ExpiryTime:2024-09-14 18:08:29 +0000 UTC Type:0 Mac:52:54:00:7a:18:a1 Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:ha-929592-m04 Clientid:01:52:54:00:7a:18:a1}
	I0914 17:12:57.981295   32973 main.go:141] libmachine: (ha-929592-m04) DBG | domain ha-929592-m04 has defined IP address 192.168.39.51 and MAC address 52:54:00:7a:18:a1 in network mk-ha-929592
	I0914 17:12:57.981526   32973 host.go:66] Checking if "ha-929592-m04" exists ...
	I0914 17:12:57.981949   32973 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 17:12:57.981995   32973 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 17:12:57.996898   32973 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34075
	I0914 17:12:57.997336   32973 main.go:141] libmachine: () Calling .GetVersion
	I0914 17:12:57.997854   32973 main.go:141] libmachine: Using API Version  1
	I0914 17:12:57.997875   32973 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 17:12:57.998223   32973 main.go:141] libmachine: () Calling .GetMachineName
	I0914 17:12:57.998438   32973 main.go:141] libmachine: (ha-929592-m04) Calling .DriverName
	I0914 17:12:57.998618   32973 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0914 17:12:57.998640   32973 main.go:141] libmachine: (ha-929592-m04) Calling .GetSSHHostname
	I0914 17:12:58.001307   32973 main.go:141] libmachine: (ha-929592-m04) DBG | domain ha-929592-m04 has defined MAC address 52:54:00:7a:18:a1 in network mk-ha-929592
	I0914 17:12:58.001762   32973 main.go:141] libmachine: (ha-929592-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:18:a1", ip: ""} in network mk-ha-929592: {Iface:virbr1 ExpiryTime:2024-09-14 18:08:29 +0000 UTC Type:0 Mac:52:54:00:7a:18:a1 Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:ha-929592-m04 Clientid:01:52:54:00:7a:18:a1}
	I0914 17:12:58.001789   32973 main.go:141] libmachine: (ha-929592-m04) DBG | domain ha-929592-m04 has defined IP address 192.168.39.51 and MAC address 52:54:00:7a:18:a1 in network mk-ha-929592
	I0914 17:12:58.001940   32973 main.go:141] libmachine: (ha-929592-m04) Calling .GetSSHPort
	I0914 17:12:58.002097   32973 main.go:141] libmachine: (ha-929592-m04) Calling .GetSSHKeyPath
	I0914 17:12:58.002260   32973 main.go:141] libmachine: (ha-929592-m04) Calling .GetSSHUsername
	I0914 17:12:58.002379   32973 sshutil.go:53] new ssh client: &{IP:192.168.39.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/ha-929592-m04/id_rsa Username:docker}
	I0914 17:12:58.081237   32973 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 17:12:58.095728   32973 status.go:257] ha-929592-m04 status: &{Name:ha-929592-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
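The stderr block above shows the path `minikube status` takes for each node of the ha-929592 profile: it asks the kvm2 libmachine plugin for host state, SSHes in to check kubelet, and probes the shared apiserver endpoint at https://192.168.39.254:8443/healthz; because ha-929592-m02 is reported as Stopped, the command exits non-zero. Below is a minimal sketch of re-running the same probe outside the harness; the binary path and profile name are copied from the log above and are assumptions for illustration, not part of the test code.

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Run the same status command the test invokes; path and profile are the
	// ones seen in this report and may differ in another environment.
	cmd := exec.Command("out/minikube-linux-amd64", "-p", "ha-929592",
		"status", "-v=7", "--alsologtostderr")
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out))
	if exitErr, ok := err.(*exec.ExitError); ok {
		// In the run above, the stopped ha-929592-m02 host produced exit status 7.
		fmt.Println("exit status:", exitErr.ExitCode())
	}
}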
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-929592 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-929592 status -v=7 --alsologtostderr: exit status 7 (625.874844ms)

                                                
                                                
-- stdout --
	ha-929592
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-929592-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-929592-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-929592-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0914 17:13:05.547627   33062 out.go:345] Setting OutFile to fd 1 ...
	I0914 17:13:05.547764   33062 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 17:13:05.547776   33062 out.go:358] Setting ErrFile to fd 2...
	I0914 17:13:05.547783   33062 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 17:13:05.547988   33062 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19643-8806/.minikube/bin
	I0914 17:13:05.548168   33062 out.go:352] Setting JSON to false
	I0914 17:13:05.548200   33062 mustload.go:65] Loading cluster: ha-929592
	I0914 17:13:05.548318   33062 notify.go:220] Checking for updates...
	I0914 17:13:05.548760   33062 config.go:182] Loaded profile config "ha-929592": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0914 17:13:05.548782   33062 status.go:255] checking status of ha-929592 ...
	I0914 17:13:05.549240   33062 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 17:13:05.549297   33062 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 17:13:05.569041   33062 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46009
	I0914 17:13:05.569585   33062 main.go:141] libmachine: () Calling .GetVersion
	I0914 17:13:05.570295   33062 main.go:141] libmachine: Using API Version  1
	I0914 17:13:05.570327   33062 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 17:13:05.570730   33062 main.go:141] libmachine: () Calling .GetMachineName
	I0914 17:13:05.570891   33062 main.go:141] libmachine: (ha-929592) Calling .GetState
	I0914 17:13:05.572632   33062 status.go:330] ha-929592 host status = "Running" (err=<nil>)
	I0914 17:13:05.572651   33062 host.go:66] Checking if "ha-929592" exists ...
	I0914 17:13:05.573079   33062 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 17:13:05.573126   33062 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 17:13:05.588595   33062 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46049
	I0914 17:13:05.589067   33062 main.go:141] libmachine: () Calling .GetVersion
	I0914 17:13:05.589554   33062 main.go:141] libmachine: Using API Version  1
	I0914 17:13:05.589580   33062 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 17:13:05.589877   33062 main.go:141] libmachine: () Calling .GetMachineName
	I0914 17:13:05.590075   33062 main.go:141] libmachine: (ha-929592) Calling .GetIP
	I0914 17:13:05.592786   33062 main.go:141] libmachine: (ha-929592) DBG | domain ha-929592 has defined MAC address 52:54:00:5c:cb:09 in network mk-ha-929592
	I0914 17:13:05.593232   33062 main.go:141] libmachine: (ha-929592) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:cb:09", ip: ""} in network mk-ha-929592: {Iface:virbr1 ExpiryTime:2024-09-14 18:05:06 +0000 UTC Type:0 Mac:52:54:00:5c:cb:09 Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:ha-929592 Clientid:01:52:54:00:5c:cb:09}
	I0914 17:13:05.593267   33062 main.go:141] libmachine: (ha-929592) DBG | domain ha-929592 has defined IP address 192.168.39.54 and MAC address 52:54:00:5c:cb:09 in network mk-ha-929592
	I0914 17:13:05.593409   33062 host.go:66] Checking if "ha-929592" exists ...
	I0914 17:13:05.593811   33062 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 17:13:05.593856   33062 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 17:13:05.609705   33062 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42317
	I0914 17:13:05.610240   33062 main.go:141] libmachine: () Calling .GetVersion
	I0914 17:13:05.610747   33062 main.go:141] libmachine: Using API Version  1
	I0914 17:13:05.610766   33062 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 17:13:05.611141   33062 main.go:141] libmachine: () Calling .GetMachineName
	I0914 17:13:05.611305   33062 main.go:141] libmachine: (ha-929592) Calling .DriverName
	I0914 17:13:05.611504   33062 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0914 17:13:05.611526   33062 main.go:141] libmachine: (ha-929592) Calling .GetSSHHostname
	I0914 17:13:05.614640   33062 main.go:141] libmachine: (ha-929592) DBG | domain ha-929592 has defined MAC address 52:54:00:5c:cb:09 in network mk-ha-929592
	I0914 17:13:05.615057   33062 main.go:141] libmachine: (ha-929592) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:cb:09", ip: ""} in network mk-ha-929592: {Iface:virbr1 ExpiryTime:2024-09-14 18:05:06 +0000 UTC Type:0 Mac:52:54:00:5c:cb:09 Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:ha-929592 Clientid:01:52:54:00:5c:cb:09}
	I0914 17:13:05.615077   33062 main.go:141] libmachine: (ha-929592) DBG | domain ha-929592 has defined IP address 192.168.39.54 and MAC address 52:54:00:5c:cb:09 in network mk-ha-929592
	I0914 17:13:05.615234   33062 main.go:141] libmachine: (ha-929592) Calling .GetSSHPort
	I0914 17:13:05.615429   33062 main.go:141] libmachine: (ha-929592) Calling .GetSSHKeyPath
	I0914 17:13:05.615557   33062 main.go:141] libmachine: (ha-929592) Calling .GetSSHUsername
	I0914 17:13:05.615701   33062 sshutil.go:53] new ssh client: &{IP:192.168.39.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/ha-929592/id_rsa Username:docker}
	I0914 17:13:05.705852   33062 ssh_runner.go:195] Run: systemctl --version
	I0914 17:13:05.715637   33062 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 17:13:05.732023   33062 kubeconfig.go:125] found "ha-929592" server: "https://192.168.39.254:8443"
	I0914 17:13:05.732065   33062 api_server.go:166] Checking apiserver status ...
	I0914 17:13:05.732113   33062 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 17:13:05.746632   33062 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1079/cgroup
	W0914 17:13:05.756711   33062 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1079/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0914 17:13:05.756762   33062 ssh_runner.go:195] Run: ls
	I0914 17:13:05.761736   33062 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0914 17:13:05.765730   33062 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0914 17:13:05.765751   33062 status.go:422] ha-929592 apiserver status = Running (err=<nil>)
	I0914 17:13:05.765763   33062 status.go:257] ha-929592 status: &{Name:ha-929592 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0914 17:13:05.765794   33062 status.go:255] checking status of ha-929592-m02 ...
	I0914 17:13:05.766200   33062 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 17:13:05.766240   33062 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 17:13:05.782741   33062 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38137
	I0914 17:13:05.783173   33062 main.go:141] libmachine: () Calling .GetVersion
	I0914 17:13:05.783599   33062 main.go:141] libmachine: Using API Version  1
	I0914 17:13:05.783619   33062 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 17:13:05.783874   33062 main.go:141] libmachine: () Calling .GetMachineName
	I0914 17:13:05.784063   33062 main.go:141] libmachine: (ha-929592-m02) Calling .GetState
	I0914 17:13:05.785660   33062 status.go:330] ha-929592-m02 host status = "Stopped" (err=<nil>)
	I0914 17:13:05.785672   33062 status.go:343] host is not running, skipping remaining checks
	I0914 17:13:05.785677   33062 status.go:257] ha-929592-m02 status: &{Name:ha-929592-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0914 17:13:05.785693   33062 status.go:255] checking status of ha-929592-m03 ...
	I0914 17:13:05.786063   33062 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 17:13:05.786110   33062 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 17:13:05.800671   33062 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46085
	I0914 17:13:05.801196   33062 main.go:141] libmachine: () Calling .GetVersion
	I0914 17:13:05.801637   33062 main.go:141] libmachine: Using API Version  1
	I0914 17:13:05.801659   33062 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 17:13:05.802031   33062 main.go:141] libmachine: () Calling .GetMachineName
	I0914 17:13:05.802226   33062 main.go:141] libmachine: (ha-929592-m03) Calling .GetState
	I0914 17:13:05.803675   33062 status.go:330] ha-929592-m03 host status = "Running" (err=<nil>)
	I0914 17:13:05.803690   33062 host.go:66] Checking if "ha-929592-m03" exists ...
	I0914 17:13:05.804075   33062 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 17:13:05.804120   33062 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 17:13:05.818758   33062 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36875
	I0914 17:13:05.819183   33062 main.go:141] libmachine: () Calling .GetVersion
	I0914 17:13:05.819714   33062 main.go:141] libmachine: Using API Version  1
	I0914 17:13:05.819741   33062 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 17:13:05.820049   33062 main.go:141] libmachine: () Calling .GetMachineName
	I0914 17:13:05.820253   33062 main.go:141] libmachine: (ha-929592-m03) Calling .GetIP
	I0914 17:13:05.822992   33062 main.go:141] libmachine: (ha-929592-m03) DBG | domain ha-929592-m03 has defined MAC address 52:54:00:49:df:f1 in network mk-ha-929592
	I0914 17:13:05.823338   33062 main.go:141] libmachine: (ha-929592-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:df:f1", ip: ""} in network mk-ha-929592: {Iface:virbr1 ExpiryTime:2024-09-14 18:07:04 +0000 UTC Type:0 Mac:52:54:00:49:df:f1 Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:ha-929592-m03 Clientid:01:52:54:00:49:df:f1}
	I0914 17:13:05.823358   33062 main.go:141] libmachine: (ha-929592-m03) DBG | domain ha-929592-m03 has defined IP address 192.168.39.39 and MAC address 52:54:00:49:df:f1 in network mk-ha-929592
	I0914 17:13:05.823483   33062 host.go:66] Checking if "ha-929592-m03" exists ...
	I0914 17:13:05.823897   33062 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 17:13:05.823942   33062 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 17:13:05.840399   33062 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46589
	I0914 17:13:05.840838   33062 main.go:141] libmachine: () Calling .GetVersion
	I0914 17:13:05.841294   33062 main.go:141] libmachine: Using API Version  1
	I0914 17:13:05.841315   33062 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 17:13:05.841610   33062 main.go:141] libmachine: () Calling .GetMachineName
	I0914 17:13:05.841792   33062 main.go:141] libmachine: (ha-929592-m03) Calling .DriverName
	I0914 17:13:05.841978   33062 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0914 17:13:05.841999   33062 main.go:141] libmachine: (ha-929592-m03) Calling .GetSSHHostname
	I0914 17:13:05.844663   33062 main.go:141] libmachine: (ha-929592-m03) DBG | domain ha-929592-m03 has defined MAC address 52:54:00:49:df:f1 in network mk-ha-929592
	I0914 17:13:05.845023   33062 main.go:141] libmachine: (ha-929592-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:df:f1", ip: ""} in network mk-ha-929592: {Iface:virbr1 ExpiryTime:2024-09-14 18:07:04 +0000 UTC Type:0 Mac:52:54:00:49:df:f1 Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:ha-929592-m03 Clientid:01:52:54:00:49:df:f1}
	I0914 17:13:05.845041   33062 main.go:141] libmachine: (ha-929592-m03) DBG | domain ha-929592-m03 has defined IP address 192.168.39.39 and MAC address 52:54:00:49:df:f1 in network mk-ha-929592
	I0914 17:13:05.845126   33062 main.go:141] libmachine: (ha-929592-m03) Calling .GetSSHPort
	I0914 17:13:05.845308   33062 main.go:141] libmachine: (ha-929592-m03) Calling .GetSSHKeyPath
	I0914 17:13:05.845463   33062 main.go:141] libmachine: (ha-929592-m03) Calling .GetSSHUsername
	I0914 17:13:05.845593   33062 sshutil.go:53] new ssh client: &{IP:192.168.39.39 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/ha-929592-m03/id_rsa Username:docker}
	I0914 17:13:05.925453   33062 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 17:13:05.940927   33062 kubeconfig.go:125] found "ha-929592" server: "https://192.168.39.254:8443"
	I0914 17:13:05.940953   33062 api_server.go:166] Checking apiserver status ...
	I0914 17:13:05.940982   33062 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 17:13:05.954048   33062 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1387/cgroup
	W0914 17:13:05.963868   33062 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1387/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0914 17:13:05.963917   33062 ssh_runner.go:195] Run: ls
	I0914 17:13:05.968655   33062 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0914 17:13:05.972987   33062 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0914 17:13:05.973013   33062 status.go:422] ha-929592-m03 apiserver status = Running (err=<nil>)
	I0914 17:13:05.973023   33062 status.go:257] ha-929592-m03 status: &{Name:ha-929592-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0914 17:13:05.973042   33062 status.go:255] checking status of ha-929592-m04 ...
	I0914 17:13:05.973389   33062 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 17:13:05.973431   33062 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 17:13:05.988232   33062 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39399
	I0914 17:13:05.988655   33062 main.go:141] libmachine: () Calling .GetVersion
	I0914 17:13:05.989138   33062 main.go:141] libmachine: Using API Version  1
	I0914 17:13:05.989158   33062 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 17:13:05.989467   33062 main.go:141] libmachine: () Calling .GetMachineName
	I0914 17:13:05.989644   33062 main.go:141] libmachine: (ha-929592-m04) Calling .GetState
	I0914 17:13:05.991072   33062 status.go:330] ha-929592-m04 host status = "Running" (err=<nil>)
	I0914 17:13:05.991085   33062 host.go:66] Checking if "ha-929592-m04" exists ...
	I0914 17:13:05.991393   33062 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 17:13:05.991433   33062 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 17:13:06.006759   33062 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43103
	I0914 17:13:06.007236   33062 main.go:141] libmachine: () Calling .GetVersion
	I0914 17:13:06.007701   33062 main.go:141] libmachine: Using API Version  1
	I0914 17:13:06.007726   33062 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 17:13:06.008033   33062 main.go:141] libmachine: () Calling .GetMachineName
	I0914 17:13:06.008207   33062 main.go:141] libmachine: (ha-929592-m04) Calling .GetIP
	I0914 17:13:06.010835   33062 main.go:141] libmachine: (ha-929592-m04) DBG | domain ha-929592-m04 has defined MAC address 52:54:00:7a:18:a1 in network mk-ha-929592
	I0914 17:13:06.011223   33062 main.go:141] libmachine: (ha-929592-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:18:a1", ip: ""} in network mk-ha-929592: {Iface:virbr1 ExpiryTime:2024-09-14 18:08:29 +0000 UTC Type:0 Mac:52:54:00:7a:18:a1 Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:ha-929592-m04 Clientid:01:52:54:00:7a:18:a1}
	I0914 17:13:06.011248   33062 main.go:141] libmachine: (ha-929592-m04) DBG | domain ha-929592-m04 has defined IP address 192.168.39.51 and MAC address 52:54:00:7a:18:a1 in network mk-ha-929592
	I0914 17:13:06.011412   33062 host.go:66] Checking if "ha-929592-m04" exists ...
	I0914 17:13:06.011746   33062 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 17:13:06.011786   33062 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 17:13:06.026698   33062 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38317
	I0914 17:13:06.027138   33062 main.go:141] libmachine: () Calling .GetVersion
	I0914 17:13:06.027625   33062 main.go:141] libmachine: Using API Version  1
	I0914 17:13:06.027644   33062 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 17:13:06.027977   33062 main.go:141] libmachine: () Calling .GetMachineName
	I0914 17:13:06.028128   33062 main.go:141] libmachine: (ha-929592-m04) Calling .DriverName
	I0914 17:13:06.028264   33062 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0914 17:13:06.028284   33062 main.go:141] libmachine: (ha-929592-m04) Calling .GetSSHHostname
	I0914 17:13:06.031034   33062 main.go:141] libmachine: (ha-929592-m04) DBG | domain ha-929592-m04 has defined MAC address 52:54:00:7a:18:a1 in network mk-ha-929592
	I0914 17:13:06.031400   33062 main.go:141] libmachine: (ha-929592-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:18:a1", ip: ""} in network mk-ha-929592: {Iface:virbr1 ExpiryTime:2024-09-14 18:08:29 +0000 UTC Type:0 Mac:52:54:00:7a:18:a1 Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:ha-929592-m04 Clientid:01:52:54:00:7a:18:a1}
	I0914 17:13:06.031430   33062 main.go:141] libmachine: (ha-929592-m04) DBG | domain ha-929592-m04 has defined IP address 192.168.39.51 and MAC address 52:54:00:7a:18:a1 in network mk-ha-929592
	I0914 17:13:06.031570   33062 main.go:141] libmachine: (ha-929592-m04) Calling .GetSSHPort
	I0914 17:13:06.031720   33062 main.go:141] libmachine: (ha-929592-m04) Calling .GetSSHKeyPath
	I0914 17:13:06.031859   33062 main.go:141] libmachine: (ha-929592-m04) Calling .GetSSHUsername
	I0914 17:13:06.031980   33062 sshutil.go:53] new ssh client: &{IP:192.168.39.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/ha-929592-m04/id_rsa Username:docker}
	I0914 17:13:06.114529   33062 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 17:13:06.129812   33062 status.go:257] ha-929592-m04 status: &{Name:ha-929592-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:432: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-929592 status -v=7 --alsologtostderr" : exit status 7
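Two details in the output above are worth separating. The repeated "unable to find freezer cgroup" warnings are consistent with a cgroup v2 guest, where /proc/&lt;pid&gt;/cgroup carries no per-controller "freezer" line, so the egrep exits 1 and minikube falls back to the apiserver /healthz probe, which returns 200 here. The actual failure is that ha-929592-m02 still reports host: Stopped after `node start m02`. A small sketch of that cgroup check, assuming a Linux guest like the VMs in this run:

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	// On cgroup v2 this file exists and /proc/self/cgroup holds a single
	// "0::/..." entry, so a grep for ":freezer:" finds nothing, matching the
	// warning seen in the logs above.
	if _, err := os.Stat("/sys/fs/cgroup/cgroup.controllers"); err == nil {
		fmt.Println("cgroup v2 host: no dedicated freezer hierarchy")
	}
	data, err := os.ReadFile("/proc/self/cgroup")
	if err == nil {
		fmt.Println("freezer entry present:", strings.Contains(string(data), ":freezer:"))
	}
}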
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-929592 -n ha-929592
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-929592 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-929592 logs -n 25: (1.336962952s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                      Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-929592 ssh -n                                                                | ha-929592 | jenkins | v1.34.0 | 14 Sep 24 17:09 UTC | 14 Sep 24 17:09 UTC |
	|         | ha-929592-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| cp      | ha-929592 cp ha-929592-m03:/home/docker/cp-test.txt                             | ha-929592 | jenkins | v1.34.0 | 14 Sep 24 17:09 UTC | 14 Sep 24 17:09 UTC |
	|         | ha-929592:/home/docker/cp-test_ha-929592-m03_ha-929592.txt                      |           |         |         |                     |                     |
	| ssh     | ha-929592 ssh -n                                                                | ha-929592 | jenkins | v1.34.0 | 14 Sep 24 17:09 UTC | 14 Sep 24 17:09 UTC |
	|         | ha-929592-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-929592 ssh -n ha-929592 sudo cat                                             | ha-929592 | jenkins | v1.34.0 | 14 Sep 24 17:09 UTC | 14 Sep 24 17:09 UTC |
	|         | /home/docker/cp-test_ha-929592-m03_ha-929592.txt                                |           |         |         |                     |                     |
	| cp      | ha-929592 cp ha-929592-m03:/home/docker/cp-test.txt                             | ha-929592 | jenkins | v1.34.0 | 14 Sep 24 17:09 UTC | 14 Sep 24 17:09 UTC |
	|         | ha-929592-m02:/home/docker/cp-test_ha-929592-m03_ha-929592-m02.txt              |           |         |         |                     |                     |
	| ssh     | ha-929592 ssh -n                                                                | ha-929592 | jenkins | v1.34.0 | 14 Sep 24 17:09 UTC | 14 Sep 24 17:09 UTC |
	|         | ha-929592-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-929592 ssh -n ha-929592-m02 sudo cat                                         | ha-929592 | jenkins | v1.34.0 | 14 Sep 24 17:09 UTC | 14 Sep 24 17:09 UTC |
	|         | /home/docker/cp-test_ha-929592-m03_ha-929592-m02.txt                            |           |         |         |                     |                     |
	| cp      | ha-929592 cp ha-929592-m03:/home/docker/cp-test.txt                             | ha-929592 | jenkins | v1.34.0 | 14 Sep 24 17:09 UTC | 14 Sep 24 17:09 UTC |
	|         | ha-929592-m04:/home/docker/cp-test_ha-929592-m03_ha-929592-m04.txt              |           |         |         |                     |                     |
	| ssh     | ha-929592 ssh -n                                                                | ha-929592 | jenkins | v1.34.0 | 14 Sep 24 17:09 UTC | 14 Sep 24 17:09 UTC |
	|         | ha-929592-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-929592 ssh -n ha-929592-m04 sudo cat                                         | ha-929592 | jenkins | v1.34.0 | 14 Sep 24 17:09 UTC | 14 Sep 24 17:09 UTC |
	|         | /home/docker/cp-test_ha-929592-m03_ha-929592-m04.txt                            |           |         |         |                     |                     |
	| cp      | ha-929592 cp testdata/cp-test.txt                                               | ha-929592 | jenkins | v1.34.0 | 14 Sep 24 17:09 UTC | 14 Sep 24 17:09 UTC |
	|         | ha-929592-m04:/home/docker/cp-test.txt                                          |           |         |         |                     |                     |
	| ssh     | ha-929592 ssh -n                                                                | ha-929592 | jenkins | v1.34.0 | 14 Sep 24 17:09 UTC | 14 Sep 24 17:09 UTC |
	|         | ha-929592-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| cp      | ha-929592 cp ha-929592-m04:/home/docker/cp-test.txt                             | ha-929592 | jenkins | v1.34.0 | 14 Sep 24 17:09 UTC | 14 Sep 24 17:09 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile183020175/001/cp-test_ha-929592-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-929592 ssh -n                                                                | ha-929592 | jenkins | v1.34.0 | 14 Sep 24 17:09 UTC | 14 Sep 24 17:09 UTC |
	|         | ha-929592-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| cp      | ha-929592 cp ha-929592-m04:/home/docker/cp-test.txt                             | ha-929592 | jenkins | v1.34.0 | 14 Sep 24 17:09 UTC | 14 Sep 24 17:09 UTC |
	|         | ha-929592:/home/docker/cp-test_ha-929592-m04_ha-929592.txt                      |           |         |         |                     |                     |
	| ssh     | ha-929592 ssh -n                                                                | ha-929592 | jenkins | v1.34.0 | 14 Sep 24 17:09 UTC | 14 Sep 24 17:09 UTC |
	|         | ha-929592-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-929592 ssh -n ha-929592 sudo cat                                             | ha-929592 | jenkins | v1.34.0 | 14 Sep 24 17:09 UTC | 14 Sep 24 17:09 UTC |
	|         | /home/docker/cp-test_ha-929592-m04_ha-929592.txt                                |           |         |         |                     |                     |
	| cp      | ha-929592 cp ha-929592-m04:/home/docker/cp-test.txt                             | ha-929592 | jenkins | v1.34.0 | 14 Sep 24 17:09 UTC | 14 Sep 24 17:09 UTC |
	|         | ha-929592-m02:/home/docker/cp-test_ha-929592-m04_ha-929592-m02.txt              |           |         |         |                     |                     |
	| ssh     | ha-929592 ssh -n                                                                | ha-929592 | jenkins | v1.34.0 | 14 Sep 24 17:09 UTC | 14 Sep 24 17:09 UTC |
	|         | ha-929592-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-929592 ssh -n ha-929592-m02 sudo cat                                         | ha-929592 | jenkins | v1.34.0 | 14 Sep 24 17:09 UTC | 14 Sep 24 17:09 UTC |
	|         | /home/docker/cp-test_ha-929592-m04_ha-929592-m02.txt                            |           |         |         |                     |                     |
	| cp      | ha-929592 cp ha-929592-m04:/home/docker/cp-test.txt                             | ha-929592 | jenkins | v1.34.0 | 14 Sep 24 17:09 UTC | 14 Sep 24 17:09 UTC |
	|         | ha-929592-m03:/home/docker/cp-test_ha-929592-m04_ha-929592-m03.txt              |           |         |         |                     |                     |
	| ssh     | ha-929592 ssh -n                                                                | ha-929592 | jenkins | v1.34.0 | 14 Sep 24 17:09 UTC | 14 Sep 24 17:09 UTC |
	|         | ha-929592-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-929592 ssh -n ha-929592-m03 sudo cat                                         | ha-929592 | jenkins | v1.34.0 | 14 Sep 24 17:09 UTC | 14 Sep 24 17:09 UTC |
	|         | /home/docker/cp-test_ha-929592-m04_ha-929592-m03.txt                            |           |         |         |                     |                     |
	| node    | ha-929592 node stop m02 -v=7                                                    | ha-929592 | jenkins | v1.34.0 | 14 Sep 24 17:09 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| node    | ha-929592 node start m02 -v=7                                                   | ha-929592 | jenkins | v1.34.0 | 14 Sep 24 17:12 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
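	In the audit table, both `node stop m02` and `node start m02` have an empty End Time column, meaning neither command was recorded as completing, which matches the Stopped state reported for ha-929592-m02 above. A hypothetical polling helper (not part of the suite; binary path, profile name, and timeout are assumptions) that waits for all hosts to come back before re-checking status:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	func main() {
		deadline := time.Now().Add(3 * time.Minute)
		for time.Now().Before(deadline) {
			// Re-run `minikube status` for the profile and stop once no node
			// reports a stopped host any more.
			out, _ := exec.Command("out/minikube-linux-amd64", "-p", "ha-929592",
				"status").CombinedOutput()
			if !strings.Contains(string(out), "host: Stopped") {
				fmt.Println("all hosts running")
				return
			}
			time.Sleep(10 * time.Second)
		}
		fmt.Println("a node is still stopped after the deadline")
	}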
	
	
	==> Last Start <==
	Log file created at: 2024/09/14 17:04:52
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0914 17:04:52.362054   27433 out.go:345] Setting OutFile to fd 1 ...
	I0914 17:04:52.362146   27433 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 17:04:52.362153   27433 out.go:358] Setting ErrFile to fd 2...
	I0914 17:04:52.362178   27433 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 17:04:52.362345   27433 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19643-8806/.minikube/bin
	I0914 17:04:52.362903   27433 out.go:352] Setting JSON to false
	I0914 17:04:52.363751   27433 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":2836,"bootTime":1726330656,"procs":179,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0914 17:04:52.363836   27433 start.go:139] virtualization: kvm guest
	I0914 17:04:52.365931   27433 out.go:177] * [ha-929592] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0914 17:04:52.367340   27433 out.go:177]   - MINIKUBE_LOCATION=19643
	I0914 17:04:52.367368   27433 notify.go:220] Checking for updates...
	I0914 17:04:52.369803   27433 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0914 17:04:52.371197   27433 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19643-8806/kubeconfig
	I0914 17:04:52.372343   27433 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19643-8806/.minikube
	I0914 17:04:52.373702   27433 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0914 17:04:52.375185   27433 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0914 17:04:52.376686   27433 driver.go:394] Setting default libvirt URI to qemu:///system
	I0914 17:04:52.411200   27433 out.go:177] * Using the kvm2 driver based on user configuration
	I0914 17:04:52.412455   27433 start.go:297] selected driver: kvm2
	I0914 17:04:52.412471   27433 start.go:901] validating driver "kvm2" against <nil>
	I0914 17:04:52.412482   27433 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0914 17:04:52.413158   27433 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 17:04:52.413241   27433 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19643-8806/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0914 17:04:52.428264   27433 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0914 17:04:52.428311   27433 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0914 17:04:52.428555   27433 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0914 17:04:52.428590   27433 cni.go:84] Creating CNI manager for ""
	I0914 17:04:52.428628   27433 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0914 17:04:52.428637   27433 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0914 17:04:52.428695   27433 start.go:340] cluster config:
	{Name:ha-929592 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726281268-19643@sha256:cce8be0c1ac4e3d852132008ef1cc1dcf5b79f708d025db83f146ae65db32e8e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-929592 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0914 17:04:52.428780   27433 iso.go:125] acquiring lock: {Name:mk538f7a4abc7956f82511183088cbfac1e66ebb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 17:04:52.430437   27433 out.go:177] * Starting "ha-929592" primary control-plane node in "ha-929592" cluster
	I0914 17:04:52.431767   27433 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0914 17:04:52.431815   27433 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19643-8806/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0914 17:04:52.431830   27433 cache.go:56] Caching tarball of preloaded images
	I0914 17:04:52.431915   27433 preload.go:172] Found /home/jenkins/minikube-integration/19643-8806/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0914 17:04:52.431928   27433 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0914 17:04:52.432228   27433 profile.go:143] Saving config to /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/ha-929592/config.json ...
	I0914 17:04:52.432252   27433 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/ha-929592/config.json: {Name:mk927977c49e49be76a6abcc15d8cb1926577c9b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 17:04:52.432402   27433 start.go:360] acquireMachinesLock for ha-929592: {Name:mk26748ac63472c3f8b6f3848c12e76160c2970c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0914 17:04:52.432445   27433 start.go:364] duration metric: took 26.853µs to acquireMachinesLock for "ha-929592"
	I0914 17:04:52.432468   27433 start.go:93] Provisioning new machine with config: &{Name:ha-929592 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19643/minikube-v1.34.0-1726281733-19643-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726281268-19643@sha256:cce8be0c1ac4e3d852132008ef1cc1dcf5b79f708d025db83f146ae65db32e8e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-929592 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0914 17:04:52.432530   27433 start.go:125] createHost starting for "" (driver="kvm2")
	I0914 17:04:52.434080   27433 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0914 17:04:52.434231   27433 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 17:04:52.434275   27433 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 17:04:52.448453   27433 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45659
	I0914 17:04:52.448925   27433 main.go:141] libmachine: () Calling .GetVersion
	I0914 17:04:52.449473   27433 main.go:141] libmachine: Using API Version  1
	I0914 17:04:52.449492   27433 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 17:04:52.449795   27433 main.go:141] libmachine: () Calling .GetMachineName
	I0914 17:04:52.449949   27433 main.go:141] libmachine: (ha-929592) Calling .GetMachineName
	I0914 17:04:52.450074   27433 main.go:141] libmachine: (ha-929592) Calling .DriverName
	I0914 17:04:52.450204   27433 start.go:159] libmachine.API.Create for "ha-929592" (driver="kvm2")
	I0914 17:04:52.450257   27433 client.go:168] LocalClient.Create starting
	I0914 17:04:52.450297   27433 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca.pem
	I0914 17:04:52.450339   27433 main.go:141] libmachine: Decoding PEM data...
	I0914 17:04:52.450352   27433 main.go:141] libmachine: Parsing certificate...
	I0914 17:04:52.450410   27433 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19643-8806/.minikube/certs/cert.pem
	I0914 17:04:52.450428   27433 main.go:141] libmachine: Decoding PEM data...
	I0914 17:04:52.450446   27433 main.go:141] libmachine: Parsing certificate...
	I0914 17:04:52.450462   27433 main.go:141] libmachine: Running pre-create checks...
	I0914 17:04:52.450469   27433 main.go:141] libmachine: (ha-929592) Calling .PreCreateCheck
	I0914 17:04:52.450755   27433 main.go:141] libmachine: (ha-929592) Calling .GetConfigRaw
	I0914 17:04:52.451089   27433 main.go:141] libmachine: Creating machine...
	I0914 17:04:52.451101   27433 main.go:141] libmachine: (ha-929592) Calling .Create
	I0914 17:04:52.451265   27433 main.go:141] libmachine: (ha-929592) Creating KVM machine...
	I0914 17:04:52.452544   27433 main.go:141] libmachine: (ha-929592) DBG | found existing default KVM network
	I0914 17:04:52.453240   27433 main.go:141] libmachine: (ha-929592) DBG | I0914 17:04:52.453090   27456 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0001231e0}
	I0914 17:04:52.453254   27433 main.go:141] libmachine: (ha-929592) DBG | created network xml: 
	I0914 17:04:52.453263   27433 main.go:141] libmachine: (ha-929592) DBG | <network>
	I0914 17:04:52.453268   27433 main.go:141] libmachine: (ha-929592) DBG |   <name>mk-ha-929592</name>
	I0914 17:04:52.453273   27433 main.go:141] libmachine: (ha-929592) DBG |   <dns enable='no'/>
	I0914 17:04:52.453277   27433 main.go:141] libmachine: (ha-929592) DBG |   
	I0914 17:04:52.453282   27433 main.go:141] libmachine: (ha-929592) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0914 17:04:52.453287   27433 main.go:141] libmachine: (ha-929592) DBG |     <dhcp>
	I0914 17:04:52.453296   27433 main.go:141] libmachine: (ha-929592) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0914 17:04:52.453305   27433 main.go:141] libmachine: (ha-929592) DBG |     </dhcp>
	I0914 17:04:52.453332   27433 main.go:141] libmachine: (ha-929592) DBG |   </ip>
	I0914 17:04:52.453342   27433 main.go:141] libmachine: (ha-929592) DBG |   
	I0914 17:04:52.453348   27433 main.go:141] libmachine: (ha-929592) DBG | </network>
	I0914 17:04:52.453354   27433 main.go:141] libmachine: (ha-929592) DBG | 
	I0914 17:04:52.458689   27433 main.go:141] libmachine: (ha-929592) DBG | trying to create private KVM network mk-ha-929592 192.168.39.0/24...
	I0914 17:04:52.525127   27433 main.go:141] libmachine: (ha-929592) DBG | private KVM network mk-ha-929592 192.168.39.0/24 created
	I0914 17:04:52.525229   27433 main.go:141] libmachine: (ha-929592) DBG | I0914 17:04:52.525091   27456 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19643-8806/.minikube
	I0914 17:04:52.525274   27433 main.go:141] libmachine: (ha-929592) Setting up store path in /home/jenkins/minikube-integration/19643-8806/.minikube/machines/ha-929592 ...
	I0914 17:04:52.525325   27433 main.go:141] libmachine: (ha-929592) Building disk image from file:///home/jenkins/minikube-integration/19643-8806/.minikube/cache/iso/amd64/minikube-v1.34.0-1726281733-19643-amd64.iso
	I0914 17:04:52.525357   27433 main.go:141] libmachine: (ha-929592) Downloading /home/jenkins/minikube-integration/19643-8806/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19643-8806/.minikube/cache/iso/amd64/minikube-v1.34.0-1726281733-19643-amd64.iso...
	I0914 17:04:52.774096   27433 main.go:141] libmachine: (ha-929592) DBG | I0914 17:04:52.773983   27456 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19643-8806/.minikube/machines/ha-929592/id_rsa...
	I0914 17:04:52.881126   27433 main.go:141] libmachine: (ha-929592) DBG | I0914 17:04:52.880973   27456 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19643-8806/.minikube/machines/ha-929592/ha-929592.rawdisk...
	I0914 17:04:52.881154   27433 main.go:141] libmachine: (ha-929592) DBG | Writing magic tar header
	I0914 17:04:52.881164   27433 main.go:141] libmachine: (ha-929592) DBG | Writing SSH key tar header
	I0914 17:04:52.881177   27433 main.go:141] libmachine: (ha-929592) DBG | I0914 17:04:52.881094   27456 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19643-8806/.minikube/machines/ha-929592 ...
	I0914 17:04:52.881188   27433 main.go:141] libmachine: (ha-929592) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19643-8806/.minikube/machines/ha-929592
	I0914 17:04:52.881234   27433 main.go:141] libmachine: (ha-929592) Setting executable bit set on /home/jenkins/minikube-integration/19643-8806/.minikube/machines/ha-929592 (perms=drwx------)
	I0914 17:04:52.881256   27433 main.go:141] libmachine: (ha-929592) Setting executable bit set on /home/jenkins/minikube-integration/19643-8806/.minikube/machines (perms=drwxr-xr-x)
	I0914 17:04:52.881264   27433 main.go:141] libmachine: (ha-929592) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19643-8806/.minikube/machines
	I0914 17:04:52.881273   27433 main.go:141] libmachine: (ha-929592) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19643-8806/.minikube
	I0914 17:04:52.881279   27433 main.go:141] libmachine: (ha-929592) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19643-8806
	I0914 17:04:52.881285   27433 main.go:141] libmachine: (ha-929592) Setting executable bit set on /home/jenkins/minikube-integration/19643-8806/.minikube (perms=drwxr-xr-x)
	I0914 17:04:52.881291   27433 main.go:141] libmachine: (ha-929592) Setting executable bit set on /home/jenkins/minikube-integration/19643-8806 (perms=drwxrwxr-x)
	I0914 17:04:52.881298   27433 main.go:141] libmachine: (ha-929592) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0914 17:04:52.881309   27433 main.go:141] libmachine: (ha-929592) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0914 17:04:52.881316   27433 main.go:141] libmachine: (ha-929592) Creating domain...
	I0914 17:04:52.881324   27433 main.go:141] libmachine: (ha-929592) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0914 17:04:52.881329   27433 main.go:141] libmachine: (ha-929592) DBG | Checking permissions on dir: /home/jenkins
	I0914 17:04:52.881354   27433 main.go:141] libmachine: (ha-929592) DBG | Checking permissions on dir: /home
	I0914 17:04:52.881378   27433 main.go:141] libmachine: (ha-929592) DBG | Skipping /home - not owner
	I0914 17:04:52.882446   27433 main.go:141] libmachine: (ha-929592) define libvirt domain using xml: 
	I0914 17:04:52.882460   27433 main.go:141] libmachine: (ha-929592) <domain type='kvm'>
	I0914 17:04:52.882465   27433 main.go:141] libmachine: (ha-929592)   <name>ha-929592</name>
	I0914 17:04:52.882470   27433 main.go:141] libmachine: (ha-929592)   <memory unit='MiB'>2200</memory>
	I0914 17:04:52.882475   27433 main.go:141] libmachine: (ha-929592)   <vcpu>2</vcpu>
	I0914 17:04:52.882479   27433 main.go:141] libmachine: (ha-929592)   <features>
	I0914 17:04:52.882483   27433 main.go:141] libmachine: (ha-929592)     <acpi/>
	I0914 17:04:52.882486   27433 main.go:141] libmachine: (ha-929592)     <apic/>
	I0914 17:04:52.882491   27433 main.go:141] libmachine: (ha-929592)     <pae/>
	I0914 17:04:52.882499   27433 main.go:141] libmachine: (ha-929592)     
	I0914 17:04:52.882504   27433 main.go:141] libmachine: (ha-929592)   </features>
	I0914 17:04:52.882510   27433 main.go:141] libmachine: (ha-929592)   <cpu mode='host-passthrough'>
	I0914 17:04:52.882515   27433 main.go:141] libmachine: (ha-929592)   
	I0914 17:04:52.882521   27433 main.go:141] libmachine: (ha-929592)   </cpu>
	I0914 17:04:52.882528   27433 main.go:141] libmachine: (ha-929592)   <os>
	I0914 17:04:52.882537   27433 main.go:141] libmachine: (ha-929592)     <type>hvm</type>
	I0914 17:04:52.882571   27433 main.go:141] libmachine: (ha-929592)     <boot dev='cdrom'/>
	I0914 17:04:52.882588   27433 main.go:141] libmachine: (ha-929592)     <boot dev='hd'/>
	I0914 17:04:52.882595   27433 main.go:141] libmachine: (ha-929592)     <bootmenu enable='no'/>
	I0914 17:04:52.882600   27433 main.go:141] libmachine: (ha-929592)   </os>
	I0914 17:04:52.882605   27433 main.go:141] libmachine: (ha-929592)   <devices>
	I0914 17:04:52.882628   27433 main.go:141] libmachine: (ha-929592)     <disk type='file' device='cdrom'>
	I0914 17:04:52.882647   27433 main.go:141] libmachine: (ha-929592)       <source file='/home/jenkins/minikube-integration/19643-8806/.minikube/machines/ha-929592/boot2docker.iso'/>
	I0914 17:04:52.882656   27433 main.go:141] libmachine: (ha-929592)       <target dev='hdc' bus='scsi'/>
	I0914 17:04:52.882665   27433 main.go:141] libmachine: (ha-929592)       <readonly/>
	I0914 17:04:52.882672   27433 main.go:141] libmachine: (ha-929592)     </disk>
	I0914 17:04:52.882686   27433 main.go:141] libmachine: (ha-929592)     <disk type='file' device='disk'>
	I0914 17:04:52.882693   27433 main.go:141] libmachine: (ha-929592)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0914 17:04:52.882714   27433 main.go:141] libmachine: (ha-929592)       <source file='/home/jenkins/minikube-integration/19643-8806/.minikube/machines/ha-929592/ha-929592.rawdisk'/>
	I0914 17:04:52.882722   27433 main.go:141] libmachine: (ha-929592)       <target dev='hda' bus='virtio'/>
	I0914 17:04:52.882743   27433 main.go:141] libmachine: (ha-929592)     </disk>
	I0914 17:04:52.882758   27433 main.go:141] libmachine: (ha-929592)     <interface type='network'>
	I0914 17:04:52.882772   27433 main.go:141] libmachine: (ha-929592)       <source network='mk-ha-929592'/>
	I0914 17:04:52.882783   27433 main.go:141] libmachine: (ha-929592)       <model type='virtio'/>
	I0914 17:04:52.882792   27433 main.go:141] libmachine: (ha-929592)     </interface>
	I0914 17:04:52.882799   27433 main.go:141] libmachine: (ha-929592)     <interface type='network'>
	I0914 17:04:52.882806   27433 main.go:141] libmachine: (ha-929592)       <source network='default'/>
	I0914 17:04:52.882813   27433 main.go:141] libmachine: (ha-929592)       <model type='virtio'/>
	I0914 17:04:52.882825   27433 main.go:141] libmachine: (ha-929592)     </interface>
	I0914 17:04:52.882838   27433 main.go:141] libmachine: (ha-929592)     <serial type='pty'>
	I0914 17:04:52.882854   27433 main.go:141] libmachine: (ha-929592)       <target port='0'/>
	I0914 17:04:52.882873   27433 main.go:141] libmachine: (ha-929592)     </serial>
	I0914 17:04:52.882886   27433 main.go:141] libmachine: (ha-929592)     <console type='pty'>
	I0914 17:04:52.882898   27433 main.go:141] libmachine: (ha-929592)       <target type='serial' port='0'/>
	I0914 17:04:52.882913   27433 main.go:141] libmachine: (ha-929592)     </console>
	I0914 17:04:52.882926   27433 main.go:141] libmachine: (ha-929592)     <rng model='virtio'>
	I0914 17:04:52.882934   27433 main.go:141] libmachine: (ha-929592)       <backend model='random'>/dev/random</backend>
	I0914 17:04:52.882945   27433 main.go:141] libmachine: (ha-929592)     </rng>
	I0914 17:04:52.882959   27433 main.go:141] libmachine: (ha-929592)     
	I0914 17:04:52.882968   27433 main.go:141] libmachine: (ha-929592)     
	I0914 17:04:52.882983   27433 main.go:141] libmachine: (ha-929592)   </devices>
	I0914 17:04:52.883000   27433 main.go:141] libmachine: (ha-929592) </domain>
	I0914 17:04:52.883015   27433 main.go:141] libmachine: (ha-929592) 
	I0914 17:04:52.887250   27433 main.go:141] libmachine: (ha-929592) DBG | domain ha-929592 has defined MAC address 52:54:00:22:db:e9 in network default
	I0914 17:04:52.887768   27433 main.go:141] libmachine: (ha-929592) Ensuring networks are active...
	I0914 17:04:52.887783   27433 main.go:141] libmachine: (ha-929592) DBG | domain ha-929592 has defined MAC address 52:54:00:5c:cb:09 in network mk-ha-929592
	I0914 17:04:52.888465   27433 main.go:141] libmachine: (ha-929592) Ensuring network default is active
	I0914 17:04:52.888708   27433 main.go:141] libmachine: (ha-929592) Ensuring network mk-ha-929592 is active
	I0914 17:04:52.889130   27433 main.go:141] libmachine: (ha-929592) Getting domain xml...
	I0914 17:04:52.889771   27433 main.go:141] libmachine: (ha-929592) Creating domain...
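
minikube defines and starts this domain through its libvirt Go bindings, but the same XML can be exercised by hand. The sketch below is an outside-the-tool approximation that shells out to virsh; ha-929592.xml is a hypothetical local copy of the domain definition printed above.

package main

import (
	"log"
	"os/exec"
)

func run(name string, args ...string) {
	out, err := exec.Command(name, args...).CombinedOutput()
	if err != nil {
		log.Fatalf("%s %v: %v\n%s", name, args, err, out)
	}
}

func main() {
	// ha-929592.xml is an assumed local copy of the <domain> XML above.
	run("virsh", "define", "ha-929592.xml")
	run("virsh", "start", "ha-929592")
}
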
	I0914 17:04:54.076007   27433 main.go:141] libmachine: (ha-929592) Waiting to get IP...
	I0914 17:04:54.076817   27433 main.go:141] libmachine: (ha-929592) DBG | domain ha-929592 has defined MAC address 52:54:00:5c:cb:09 in network mk-ha-929592
	I0914 17:04:54.077204   27433 main.go:141] libmachine: (ha-929592) DBG | unable to find current IP address of domain ha-929592 in network mk-ha-929592
	I0914 17:04:54.077232   27433 main.go:141] libmachine: (ha-929592) DBG | I0914 17:04:54.077176   27456 retry.go:31] will retry after 289.776154ms: waiting for machine to come up
	I0914 17:04:54.368800   27433 main.go:141] libmachine: (ha-929592) DBG | domain ha-929592 has defined MAC address 52:54:00:5c:cb:09 in network mk-ha-929592
	I0914 17:04:54.369197   27433 main.go:141] libmachine: (ha-929592) DBG | unable to find current IP address of domain ha-929592 in network mk-ha-929592
	I0914 17:04:54.369231   27433 main.go:141] libmachine: (ha-929592) DBG | I0914 17:04:54.369159   27456 retry.go:31] will retry after 265.691042ms: waiting for machine to come up
	I0914 17:04:54.636587   27433 main.go:141] libmachine: (ha-929592) DBG | domain ha-929592 has defined MAC address 52:54:00:5c:cb:09 in network mk-ha-929592
	I0914 17:04:54.637014   27433 main.go:141] libmachine: (ha-929592) DBG | unable to find current IP address of domain ha-929592 in network mk-ha-929592
	I0914 17:04:54.637035   27433 main.go:141] libmachine: (ha-929592) DBG | I0914 17:04:54.636957   27456 retry.go:31] will retry after 390.775829ms: waiting for machine to come up
	I0914 17:04:55.029563   27433 main.go:141] libmachine: (ha-929592) DBG | domain ha-929592 has defined MAC address 52:54:00:5c:cb:09 in network mk-ha-929592
	I0914 17:04:55.030053   27433 main.go:141] libmachine: (ha-929592) DBG | unable to find current IP address of domain ha-929592 in network mk-ha-929592
	I0914 17:04:55.030087   27433 main.go:141] libmachine: (ha-929592) DBG | I0914 17:04:55.030001   27456 retry.go:31] will retry after 506.591115ms: waiting for machine to come up
	I0914 17:04:55.538684   27433 main.go:141] libmachine: (ha-929592) DBG | domain ha-929592 has defined MAC address 52:54:00:5c:cb:09 in network mk-ha-929592
	I0914 17:04:55.539180   27433 main.go:141] libmachine: (ha-929592) DBG | unable to find current IP address of domain ha-929592 in network mk-ha-929592
	I0914 17:04:55.539200   27433 main.go:141] libmachine: (ha-929592) DBG | I0914 17:04:55.539139   27456 retry.go:31] will retry after 621.472095ms: waiting for machine to come up
	I0914 17:04:56.162029   27433 main.go:141] libmachine: (ha-929592) DBG | domain ha-929592 has defined MAC address 52:54:00:5c:cb:09 in network mk-ha-929592
	I0914 17:04:56.162541   27433 main.go:141] libmachine: (ha-929592) DBG | unable to find current IP address of domain ha-929592 in network mk-ha-929592
	I0914 17:04:56.162566   27433 main.go:141] libmachine: (ha-929592) DBG | I0914 17:04:56.162479   27456 retry.go:31] will retry after 848.82904ms: waiting for machine to come up
	I0914 17:04:57.013633   27433 main.go:141] libmachine: (ha-929592) DBG | domain ha-929592 has defined MAC address 52:54:00:5c:cb:09 in network mk-ha-929592
	I0914 17:04:57.014033   27433 main.go:141] libmachine: (ha-929592) DBG | unable to find current IP address of domain ha-929592 in network mk-ha-929592
	I0914 17:04:57.014061   27433 main.go:141] libmachine: (ha-929592) DBG | I0914 17:04:57.013991   27456 retry.go:31] will retry after 880.018076ms: waiting for machine to come up
	I0914 17:04:57.895459   27433 main.go:141] libmachine: (ha-929592) DBG | domain ha-929592 has defined MAC address 52:54:00:5c:cb:09 in network mk-ha-929592
	I0914 17:04:57.895811   27433 main.go:141] libmachine: (ha-929592) DBG | unable to find current IP address of domain ha-929592 in network mk-ha-929592
	I0914 17:04:57.895841   27433 main.go:141] libmachine: (ha-929592) DBG | I0914 17:04:57.895774   27456 retry.go:31] will retry after 1.44160062s: waiting for machine to come up
	I0914 17:04:59.339444   27433 main.go:141] libmachine: (ha-929592) DBG | domain ha-929592 has defined MAC address 52:54:00:5c:cb:09 in network mk-ha-929592
	I0914 17:04:59.339868   27433 main.go:141] libmachine: (ha-929592) DBG | unable to find current IP address of domain ha-929592 in network mk-ha-929592
	I0914 17:04:59.339895   27433 main.go:141] libmachine: (ha-929592) DBG | I0914 17:04:59.339826   27456 retry.go:31] will retry after 1.541818405s: waiting for machine to come up
	I0914 17:05:00.883498   27433 main.go:141] libmachine: (ha-929592) DBG | domain ha-929592 has defined MAC address 52:54:00:5c:cb:09 in network mk-ha-929592
	I0914 17:05:00.883924   27433 main.go:141] libmachine: (ha-929592) DBG | unable to find current IP address of domain ha-929592 in network mk-ha-929592
	I0914 17:05:00.883952   27433 main.go:141] libmachine: (ha-929592) DBG | I0914 17:05:00.883880   27456 retry.go:31] will retry after 1.975015362s: waiting for machine to come up
	I0914 17:05:02.860808   27433 main.go:141] libmachine: (ha-929592) DBG | domain ha-929592 has defined MAC address 52:54:00:5c:cb:09 in network mk-ha-929592
	I0914 17:05:02.861230   27433 main.go:141] libmachine: (ha-929592) DBG | unable to find current IP address of domain ha-929592 in network mk-ha-929592
	I0914 17:05:02.861255   27433 main.go:141] libmachine: (ha-929592) DBG | I0914 17:05:02.861183   27456 retry.go:31] will retry after 2.375239154s: waiting for machine to come up
	I0914 17:05:05.239145   27433 main.go:141] libmachine: (ha-929592) DBG | domain ha-929592 has defined MAC address 52:54:00:5c:cb:09 in network mk-ha-929592
	I0914 17:05:05.239513   27433 main.go:141] libmachine: (ha-929592) DBG | unable to find current IP address of domain ha-929592 in network mk-ha-929592
	I0914 17:05:05.239541   27433 main.go:141] libmachine: (ha-929592) DBG | I0914 17:05:05.239466   27456 retry.go:31] will retry after 3.274936242s: waiting for machine to come up
	I0914 17:05:08.516310   27433 main.go:141] libmachine: (ha-929592) DBG | domain ha-929592 has defined MAC address 52:54:00:5c:cb:09 in network mk-ha-929592
	I0914 17:05:08.516591   27433 main.go:141] libmachine: (ha-929592) DBG | unable to find current IP address of domain ha-929592 in network mk-ha-929592
	I0914 17:05:08.516616   27433 main.go:141] libmachine: (ha-929592) DBG | I0914 17:05:08.516555   27456 retry.go:31] will retry after 3.972681773s: waiting for machine to come up
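
The retry.go lines above poll libvirt's DHCP leases with a growing delay until the new domain reports an address. A self-contained version of that wait loop might look like the following (Go, stdlib only; lookupIP is a placeholder for the lease query, and the backoff growth is only an approximation of the intervals in the log):

package main

import (
	"errors"
	"fmt"
	"time"
)

// lookupIP stands in for querying the libvirt DHCP leases for the domain's
// MAC address; it keeps returning an error until a lease appears.
func lookupIP() (string, error) { return "", errors.New("no lease yet") }

func waitForIP(timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 250 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, err := lookupIP(); err == nil {
			return ip, nil
		}
		fmt.Printf("will retry after %v: waiting for machine to come up\n", delay)
		time.Sleep(delay)
		if delay < 4*time.Second {
			delay += delay / 2 // grow the wait, roughly as the log shows
		}
	}
	return "", errors.New("timed out waiting for IP")
}

func main() {
	if _, err := waitForIP(20 * time.Second); err != nil {
		fmt.Println(err)
	}
}
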
	I0914 17:05:12.490473   27433 main.go:141] libmachine: (ha-929592) DBG | domain ha-929592 has defined MAC address 52:54:00:5c:cb:09 in network mk-ha-929592
	I0914 17:05:12.490970   27433 main.go:141] libmachine: (ha-929592) Found IP for machine: 192.168.39.54
	I0914 17:05:12.490998   27433 main.go:141] libmachine: (ha-929592) DBG | domain ha-929592 has current primary IP address 192.168.39.54 and MAC address 52:54:00:5c:cb:09 in network mk-ha-929592
	I0914 17:05:12.491007   27433 main.go:141] libmachine: (ha-929592) Reserving static IP address...
	I0914 17:05:12.491334   27433 main.go:141] libmachine: (ha-929592) DBG | unable to find host DHCP lease matching {name: "ha-929592", mac: "52:54:00:5c:cb:09", ip: "192.168.39.54"} in network mk-ha-929592
	I0914 17:05:12.563614   27433 main.go:141] libmachine: (ha-929592) DBG | Getting to WaitForSSH function...
	I0914 17:05:12.563645   27433 main.go:141] libmachine: (ha-929592) Reserved static IP address: 192.168.39.54
	I0914 17:05:12.563685   27433 main.go:141] libmachine: (ha-929592) Waiting for SSH to be available...
	I0914 17:05:12.566031   27433 main.go:141] libmachine: (ha-929592) DBG | domain ha-929592 has defined MAC address 52:54:00:5c:cb:09 in network mk-ha-929592
	I0914 17:05:12.566381   27433 main.go:141] libmachine: (ha-929592) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:cb:09", ip: ""} in network mk-ha-929592: {Iface:virbr1 ExpiryTime:2024-09-14 18:05:06 +0000 UTC Type:0 Mac:52:54:00:5c:cb:09 Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:minikube Clientid:01:52:54:00:5c:cb:09}
	I0914 17:05:12.566408   27433 main.go:141] libmachine: (ha-929592) DBG | domain ha-929592 has defined IP address 192.168.39.54 and MAC address 52:54:00:5c:cb:09 in network mk-ha-929592
	I0914 17:05:12.566585   27433 main.go:141] libmachine: (ha-929592) DBG | Using SSH client type: external
	I0914 17:05:12.566611   27433 main.go:141] libmachine: (ha-929592) DBG | Using SSH private key: /home/jenkins/minikube-integration/19643-8806/.minikube/machines/ha-929592/id_rsa (-rw-------)
	I0914 17:05:12.566652   27433 main.go:141] libmachine: (ha-929592) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.54 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19643-8806/.minikube/machines/ha-929592/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0914 17:05:12.566667   27433 main.go:141] libmachine: (ha-929592) DBG | About to run SSH command:
	I0914 17:05:12.566679   27433 main.go:141] libmachine: (ha-929592) DBG | exit 0
	I0914 17:05:12.693896   27433 main.go:141] libmachine: (ha-929592) DBG | SSH cmd err, output: <nil>: 
	I0914 17:05:12.694183   27433 main.go:141] libmachine: (ha-929592) KVM machine creation complete!
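
The WaitForSSH step is nothing more than running `exit 0` through an external ssh client with host-key checking disabled, retried until it succeeds. An equivalent probe outside minikube (address and key path taken from the log; the retry count and interval are assumptions):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// sshReady returns true once a trivial command succeeds over SSH.
func sshReady(addr, keyPath string) bool {
	cmd := exec.Command("ssh",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "ConnectTimeout=10",
		"-i", keyPath,
		"docker@"+addr, "exit 0")
	return cmd.Run() == nil
}

func main() {
	key := "/home/jenkins/minikube-integration/19643-8806/.minikube/machines/ha-929592/id_rsa"
	for i := 0; i < 10; i++ {
		if sshReady("192.168.39.54", key) {
			fmt.Println("SSH is available")
			return
		}
		time.Sleep(2 * time.Second)
	}
}
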
	I0914 17:05:12.694564   27433 main.go:141] libmachine: (ha-929592) Calling .GetConfigRaw
	I0914 17:05:12.695129   27433 main.go:141] libmachine: (ha-929592) Calling .DriverName
	I0914 17:05:12.695377   27433 main.go:141] libmachine: (ha-929592) Calling .DriverName
	I0914 17:05:12.695534   27433 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0914 17:05:12.695545   27433 main.go:141] libmachine: (ha-929592) Calling .GetState
	I0914 17:05:12.696807   27433 main.go:141] libmachine: Detecting operating system of created instance...
	I0914 17:05:12.696834   27433 main.go:141] libmachine: Waiting for SSH to be available...
	I0914 17:05:12.696840   27433 main.go:141] libmachine: Getting to WaitForSSH function...
	I0914 17:05:12.696848   27433 main.go:141] libmachine: (ha-929592) Calling .GetSSHHostname
	I0914 17:05:12.699238   27433 main.go:141] libmachine: (ha-929592) DBG | domain ha-929592 has defined MAC address 52:54:00:5c:cb:09 in network mk-ha-929592
	I0914 17:05:12.699685   27433 main.go:141] libmachine: (ha-929592) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:cb:09", ip: ""} in network mk-ha-929592: {Iface:virbr1 ExpiryTime:2024-09-14 18:05:06 +0000 UTC Type:0 Mac:52:54:00:5c:cb:09 Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:ha-929592 Clientid:01:52:54:00:5c:cb:09}
	I0914 17:05:12.699706   27433 main.go:141] libmachine: (ha-929592) DBG | domain ha-929592 has defined IP address 192.168.39.54 and MAC address 52:54:00:5c:cb:09 in network mk-ha-929592
	I0914 17:05:12.699954   27433 main.go:141] libmachine: (ha-929592) Calling .GetSSHPort
	I0914 17:05:12.700173   27433 main.go:141] libmachine: (ha-929592) Calling .GetSSHKeyPath
	I0914 17:05:12.700340   27433 main.go:141] libmachine: (ha-929592) Calling .GetSSHKeyPath
	I0914 17:05:12.700444   27433 main.go:141] libmachine: (ha-929592) Calling .GetSSHUsername
	I0914 17:05:12.700611   27433 main.go:141] libmachine: Using SSH client type: native
	I0914 17:05:12.700834   27433 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.54 22 <nil> <nil>}
	I0914 17:05:12.700846   27433 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0914 17:05:12.813402   27433 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0914 17:05:12.813419   27433 main.go:141] libmachine: Detecting the provisioner...
	I0914 17:05:12.813429   27433 main.go:141] libmachine: (ha-929592) Calling .GetSSHHostname
	I0914 17:05:12.816165   27433 main.go:141] libmachine: (ha-929592) DBG | domain ha-929592 has defined MAC address 52:54:00:5c:cb:09 in network mk-ha-929592
	I0914 17:05:12.816480   27433 main.go:141] libmachine: (ha-929592) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:cb:09", ip: ""} in network mk-ha-929592: {Iface:virbr1 ExpiryTime:2024-09-14 18:05:06 +0000 UTC Type:0 Mac:52:54:00:5c:cb:09 Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:ha-929592 Clientid:01:52:54:00:5c:cb:09}
	I0914 17:05:12.816510   27433 main.go:141] libmachine: (ha-929592) DBG | domain ha-929592 has defined IP address 192.168.39.54 and MAC address 52:54:00:5c:cb:09 in network mk-ha-929592
	I0914 17:05:12.816646   27433 main.go:141] libmachine: (ha-929592) Calling .GetSSHPort
	I0914 17:05:12.816829   27433 main.go:141] libmachine: (ha-929592) Calling .GetSSHKeyPath
	I0914 17:05:12.816985   27433 main.go:141] libmachine: (ha-929592) Calling .GetSSHKeyPath
	I0914 17:05:12.817152   27433 main.go:141] libmachine: (ha-929592) Calling .GetSSHUsername
	I0914 17:05:12.817395   27433 main.go:141] libmachine: Using SSH client type: native
	I0914 17:05:12.817600   27433 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.54 22 <nil> <nil>}
	I0914 17:05:12.817612   27433 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0914 17:05:12.930731   27433 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0914 17:05:12.930824   27433 main.go:141] libmachine: found compatible host: buildroot
	I0914 17:05:12.930835   27433 main.go:141] libmachine: Provisioning with buildroot...
	I0914 17:05:12.930843   27433 main.go:141] libmachine: (ha-929592) Calling .GetMachineName
	I0914 17:05:12.931142   27433 buildroot.go:166] provisioning hostname "ha-929592"
	I0914 17:05:12.931171   27433 main.go:141] libmachine: (ha-929592) Calling .GetMachineName
	I0914 17:05:12.931415   27433 main.go:141] libmachine: (ha-929592) Calling .GetSSHHostname
	I0914 17:05:12.933748   27433 main.go:141] libmachine: (ha-929592) DBG | domain ha-929592 has defined MAC address 52:54:00:5c:cb:09 in network mk-ha-929592
	I0914 17:05:12.934109   27433 main.go:141] libmachine: (ha-929592) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:cb:09", ip: ""} in network mk-ha-929592: {Iface:virbr1 ExpiryTime:2024-09-14 18:05:06 +0000 UTC Type:0 Mac:52:54:00:5c:cb:09 Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:ha-929592 Clientid:01:52:54:00:5c:cb:09}
	I0914 17:05:12.934135   27433 main.go:141] libmachine: (ha-929592) DBG | domain ha-929592 has defined IP address 192.168.39.54 and MAC address 52:54:00:5c:cb:09 in network mk-ha-929592
	I0914 17:05:12.934298   27433 main.go:141] libmachine: (ha-929592) Calling .GetSSHPort
	I0914 17:05:12.934477   27433 main.go:141] libmachine: (ha-929592) Calling .GetSSHKeyPath
	I0914 17:05:12.934649   27433 main.go:141] libmachine: (ha-929592) Calling .GetSSHKeyPath
	I0914 17:05:12.934767   27433 main.go:141] libmachine: (ha-929592) Calling .GetSSHUsername
	I0914 17:05:12.934902   27433 main.go:141] libmachine: Using SSH client type: native
	I0914 17:05:12.935083   27433 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.54 22 <nil> <nil>}
	I0914 17:05:12.935094   27433 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-929592 && echo "ha-929592" | sudo tee /etc/hostname
	I0914 17:05:13.059342   27433 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-929592
	
	I0914 17:05:13.059386   27433 main.go:141] libmachine: (ha-929592) Calling .GetSSHHostname
	I0914 17:05:13.061780   27433 main.go:141] libmachine: (ha-929592) DBG | domain ha-929592 has defined MAC address 52:54:00:5c:cb:09 in network mk-ha-929592
	I0914 17:05:13.062095   27433 main.go:141] libmachine: (ha-929592) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:cb:09", ip: ""} in network mk-ha-929592: {Iface:virbr1 ExpiryTime:2024-09-14 18:05:06 +0000 UTC Type:0 Mac:52:54:00:5c:cb:09 Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:ha-929592 Clientid:01:52:54:00:5c:cb:09}
	I0914 17:05:13.062117   27433 main.go:141] libmachine: (ha-929592) DBG | domain ha-929592 has defined IP address 192.168.39.54 and MAC address 52:54:00:5c:cb:09 in network mk-ha-929592
	I0914 17:05:13.062309   27433 main.go:141] libmachine: (ha-929592) Calling .GetSSHPort
	I0914 17:05:13.062487   27433 main.go:141] libmachine: (ha-929592) Calling .GetSSHKeyPath
	I0914 17:05:13.062631   27433 main.go:141] libmachine: (ha-929592) Calling .GetSSHKeyPath
	I0914 17:05:13.062767   27433 main.go:141] libmachine: (ha-929592) Calling .GetSSHUsername
	I0914 17:05:13.062932   27433 main.go:141] libmachine: Using SSH client type: native
	I0914 17:05:13.063135   27433 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.54 22 <nil> <nil>}
	I0914 17:05:13.063150   27433 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-929592' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-929592/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-929592' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0914 17:05:13.182217   27433 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0914 17:05:13.182265   27433 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19643-8806/.minikube CaCertPath:/home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19643-8806/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19643-8806/.minikube}
	I0914 17:05:13.182300   27433 buildroot.go:174] setting up certificates
	I0914 17:05:13.182319   27433 provision.go:84] configureAuth start
	I0914 17:05:13.182336   27433 main.go:141] libmachine: (ha-929592) Calling .GetMachineName
	I0914 17:05:13.182615   27433 main.go:141] libmachine: (ha-929592) Calling .GetIP
	I0914 17:05:13.184832   27433 main.go:141] libmachine: (ha-929592) DBG | domain ha-929592 has defined MAC address 52:54:00:5c:cb:09 in network mk-ha-929592
	I0914 17:05:13.185124   27433 main.go:141] libmachine: (ha-929592) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:cb:09", ip: ""} in network mk-ha-929592: {Iface:virbr1 ExpiryTime:2024-09-14 18:05:06 +0000 UTC Type:0 Mac:52:54:00:5c:cb:09 Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:ha-929592 Clientid:01:52:54:00:5c:cb:09}
	I0914 17:05:13.185140   27433 main.go:141] libmachine: (ha-929592) DBG | domain ha-929592 has defined IP address 192.168.39.54 and MAC address 52:54:00:5c:cb:09 in network mk-ha-929592
	I0914 17:05:13.185249   27433 main.go:141] libmachine: (ha-929592) Calling .GetSSHHostname
	I0914 17:05:13.187224   27433 main.go:141] libmachine: (ha-929592) DBG | domain ha-929592 has defined MAC address 52:54:00:5c:cb:09 in network mk-ha-929592
	I0914 17:05:13.187592   27433 main.go:141] libmachine: (ha-929592) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:cb:09", ip: ""} in network mk-ha-929592: {Iface:virbr1 ExpiryTime:2024-09-14 18:05:06 +0000 UTC Type:0 Mac:52:54:00:5c:cb:09 Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:ha-929592 Clientid:01:52:54:00:5c:cb:09}
	I0914 17:05:13.187634   27433 main.go:141] libmachine: (ha-929592) DBG | domain ha-929592 has defined IP address 192.168.39.54 and MAC address 52:54:00:5c:cb:09 in network mk-ha-929592
	I0914 17:05:13.187774   27433 provision.go:143] copyHostCerts
	I0914 17:05:13.187801   27433 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19643-8806/.minikube/ca.pem
	I0914 17:05:13.187836   27433 exec_runner.go:144] found /home/jenkins/minikube-integration/19643-8806/.minikube/ca.pem, removing ...
	I0914 17:05:13.187882   27433 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19643-8806/.minikube/ca.pem
	I0914 17:05:13.187999   27433 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19643-8806/.minikube/ca.pem (1082 bytes)
	I0914 17:05:13.188102   27433 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19643-8806/.minikube/cert.pem
	I0914 17:05:13.188128   27433 exec_runner.go:144] found /home/jenkins/minikube-integration/19643-8806/.minikube/cert.pem, removing ...
	I0914 17:05:13.188137   27433 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19643-8806/.minikube/cert.pem
	I0914 17:05:13.188175   27433 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19643-8806/.minikube/cert.pem (1123 bytes)
	I0914 17:05:13.188246   27433 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19643-8806/.minikube/key.pem
	I0914 17:05:13.188294   27433 exec_runner.go:144] found /home/jenkins/minikube-integration/19643-8806/.minikube/key.pem, removing ...
	I0914 17:05:13.188303   27433 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19643-8806/.minikube/key.pem
	I0914 17:05:13.188351   27433 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19643-8806/.minikube/key.pem (1675 bytes)
	I0914 17:05:13.188419   27433 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19643-8806/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca-key.pem org=jenkins.ha-929592 san=[127.0.0.1 192.168.39.54 ha-929592 localhost minikube]
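
The server certificate generated here carries the SANs listed on the provision.go line above (127.0.0.1, 192.168.39.54, ha-929592, localhost, minikube) and is signed by the profile's CA. As a rough illustration only, the Go sketch below builds a certificate with the same SANs; unlike the real provisioner it self-signs with a fresh key instead of using ca-key.pem, and the 26280h lifetime is copied from the CertExpiration value that appears later in the log.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-929592"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"ha-929592", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.54")},
	}
	// Self-signed for brevity; the real flow uses the CA cert/key as parent/signer.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
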
	I0914 17:05:13.281204   27433 provision.go:177] copyRemoteCerts
	I0914 17:05:13.281259   27433 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0914 17:05:13.281281   27433 main.go:141] libmachine: (ha-929592) Calling .GetSSHHostname
	I0914 17:05:13.283676   27433 main.go:141] libmachine: (ha-929592) DBG | domain ha-929592 has defined MAC address 52:54:00:5c:cb:09 in network mk-ha-929592
	I0914 17:05:13.283872   27433 main.go:141] libmachine: (ha-929592) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:cb:09", ip: ""} in network mk-ha-929592: {Iface:virbr1 ExpiryTime:2024-09-14 18:05:06 +0000 UTC Type:0 Mac:52:54:00:5c:cb:09 Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:ha-929592 Clientid:01:52:54:00:5c:cb:09}
	I0914 17:05:13.283891   27433 main.go:141] libmachine: (ha-929592) DBG | domain ha-929592 has defined IP address 192.168.39.54 and MAC address 52:54:00:5c:cb:09 in network mk-ha-929592
	I0914 17:05:13.284055   27433 main.go:141] libmachine: (ha-929592) Calling .GetSSHPort
	I0914 17:05:13.284221   27433 main.go:141] libmachine: (ha-929592) Calling .GetSSHKeyPath
	I0914 17:05:13.284422   27433 main.go:141] libmachine: (ha-929592) Calling .GetSSHUsername
	I0914 17:05:13.284519   27433 sshutil.go:53] new ssh client: &{IP:192.168.39.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/ha-929592/id_rsa Username:docker}
	I0914 17:05:13.372119   27433 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19643-8806/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0914 17:05:13.372192   27433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0914 17:05:13.395483   27433 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19643-8806/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0914 17:05:13.395565   27433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0914 17:05:13.418066   27433 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0914 17:05:13.418142   27433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0914 17:05:13.440380   27433 provision.go:87] duration metric: took 258.044352ms to configureAuth
	I0914 17:05:13.440405   27433 buildroot.go:189] setting minikube options for container-runtime
	I0914 17:05:13.440613   27433 config.go:182] Loaded profile config "ha-929592": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0914 17:05:13.440692   27433 main.go:141] libmachine: (ha-929592) Calling .GetSSHHostname
	I0914 17:05:13.442993   27433 main.go:141] libmachine: (ha-929592) DBG | domain ha-929592 has defined MAC address 52:54:00:5c:cb:09 in network mk-ha-929592
	I0914 17:05:13.443286   27433 main.go:141] libmachine: (ha-929592) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:cb:09", ip: ""} in network mk-ha-929592: {Iface:virbr1 ExpiryTime:2024-09-14 18:05:06 +0000 UTC Type:0 Mac:52:54:00:5c:cb:09 Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:ha-929592 Clientid:01:52:54:00:5c:cb:09}
	I0914 17:05:13.443318   27433 main.go:141] libmachine: (ha-929592) DBG | domain ha-929592 has defined IP address 192.168.39.54 and MAC address 52:54:00:5c:cb:09 in network mk-ha-929592
	I0914 17:05:13.443526   27433 main.go:141] libmachine: (ha-929592) Calling .GetSSHPort
	I0914 17:05:13.443705   27433 main.go:141] libmachine: (ha-929592) Calling .GetSSHKeyPath
	I0914 17:05:13.443810   27433 main.go:141] libmachine: (ha-929592) Calling .GetSSHKeyPath
	I0914 17:05:13.443949   27433 main.go:141] libmachine: (ha-929592) Calling .GetSSHUsername
	I0914 17:05:13.444095   27433 main.go:141] libmachine: Using SSH client type: native
	I0914 17:05:13.444283   27433 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.54 22 <nil> <nil>}
	I0914 17:05:13.444306   27433 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0914 17:05:13.668767   27433 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0914 17:05:13.668796   27433 main.go:141] libmachine: Checking connection to Docker...
	I0914 17:05:13.668809   27433 main.go:141] libmachine: (ha-929592) Calling .GetURL
	I0914 17:05:13.670071   27433 main.go:141] libmachine: (ha-929592) DBG | Using libvirt version 6000000
	I0914 17:05:13.672133   27433 main.go:141] libmachine: (ha-929592) DBG | domain ha-929592 has defined MAC address 52:54:00:5c:cb:09 in network mk-ha-929592
	I0914 17:05:13.672425   27433 main.go:141] libmachine: (ha-929592) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:cb:09", ip: ""} in network mk-ha-929592: {Iface:virbr1 ExpiryTime:2024-09-14 18:05:06 +0000 UTC Type:0 Mac:52:54:00:5c:cb:09 Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:ha-929592 Clientid:01:52:54:00:5c:cb:09}
	I0914 17:05:13.672453   27433 main.go:141] libmachine: (ha-929592) DBG | domain ha-929592 has defined IP address 192.168.39.54 and MAC address 52:54:00:5c:cb:09 in network mk-ha-929592
	I0914 17:05:13.672635   27433 main.go:141] libmachine: Docker is up and running!
	I0914 17:05:13.672649   27433 main.go:141] libmachine: Reticulating splines...
	I0914 17:05:13.672655   27433 client.go:171] duration metric: took 21.222387818s to LocalClient.Create
	I0914 17:05:13.672674   27433 start.go:167] duration metric: took 21.222472014s to libmachine.API.Create "ha-929592"
	I0914 17:05:13.672682   27433 start.go:293] postStartSetup for "ha-929592" (driver="kvm2")
	I0914 17:05:13.672691   27433 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0914 17:05:13.672705   27433 main.go:141] libmachine: (ha-929592) Calling .DriverName
	I0914 17:05:13.672956   27433 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0914 17:05:13.672979   27433 main.go:141] libmachine: (ha-929592) Calling .GetSSHHostname
	I0914 17:05:13.674989   27433 main.go:141] libmachine: (ha-929592) DBG | domain ha-929592 has defined MAC address 52:54:00:5c:cb:09 in network mk-ha-929592
	I0914 17:05:13.675256   27433 main.go:141] libmachine: (ha-929592) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:cb:09", ip: ""} in network mk-ha-929592: {Iface:virbr1 ExpiryTime:2024-09-14 18:05:06 +0000 UTC Type:0 Mac:52:54:00:5c:cb:09 Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:ha-929592 Clientid:01:52:54:00:5c:cb:09}
	I0914 17:05:13.675278   27433 main.go:141] libmachine: (ha-929592) DBG | domain ha-929592 has defined IP address 192.168.39.54 and MAC address 52:54:00:5c:cb:09 in network mk-ha-929592
	I0914 17:05:13.675426   27433 main.go:141] libmachine: (ha-929592) Calling .GetSSHPort
	I0914 17:05:13.675576   27433 main.go:141] libmachine: (ha-929592) Calling .GetSSHKeyPath
	I0914 17:05:13.675699   27433 main.go:141] libmachine: (ha-929592) Calling .GetSSHUsername
	I0914 17:05:13.675809   27433 sshutil.go:53] new ssh client: &{IP:192.168.39.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/ha-929592/id_rsa Username:docker}
	I0914 17:05:13.760460   27433 ssh_runner.go:195] Run: cat /etc/os-release
	I0914 17:05:13.764480   27433 info.go:137] Remote host: Buildroot 2023.02.9
	I0914 17:05:13.764512   27433 filesync.go:126] Scanning /home/jenkins/minikube-integration/19643-8806/.minikube/addons for local assets ...
	I0914 17:05:13.764574   27433 filesync.go:126] Scanning /home/jenkins/minikube-integration/19643-8806/.minikube/files for local assets ...
	I0914 17:05:13.764675   27433 filesync.go:149] local asset: /home/jenkins/minikube-integration/19643-8806/.minikube/files/etc/ssl/certs/160162.pem -> 160162.pem in /etc/ssl/certs
	I0914 17:05:13.764689   27433 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19643-8806/.minikube/files/etc/ssl/certs/160162.pem -> /etc/ssl/certs/160162.pem
	I0914 17:05:13.764796   27433 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0914 17:05:13.773804   27433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/files/etc/ssl/certs/160162.pem --> /etc/ssl/certs/160162.pem (1708 bytes)
	I0914 17:05:13.802132   27433 start.go:296] duration metric: took 129.43692ms for postStartSetup
	I0914 17:05:13.802201   27433 main.go:141] libmachine: (ha-929592) Calling .GetConfigRaw
	I0914 17:05:13.802929   27433 main.go:141] libmachine: (ha-929592) Calling .GetIP
	I0914 17:05:13.805341   27433 main.go:141] libmachine: (ha-929592) DBG | domain ha-929592 has defined MAC address 52:54:00:5c:cb:09 in network mk-ha-929592
	I0914 17:05:13.805638   27433 main.go:141] libmachine: (ha-929592) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:cb:09", ip: ""} in network mk-ha-929592: {Iface:virbr1 ExpiryTime:2024-09-14 18:05:06 +0000 UTC Type:0 Mac:52:54:00:5c:cb:09 Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:ha-929592 Clientid:01:52:54:00:5c:cb:09}
	I0914 17:05:13.805665   27433 main.go:141] libmachine: (ha-929592) DBG | domain ha-929592 has defined IP address 192.168.39.54 and MAC address 52:54:00:5c:cb:09 in network mk-ha-929592
	I0914 17:05:13.805869   27433 profile.go:143] Saving config to /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/ha-929592/config.json ...
	I0914 17:05:13.806035   27433 start.go:128] duration metric: took 21.373494072s to createHost
	I0914 17:05:13.806054   27433 main.go:141] libmachine: (ha-929592) Calling .GetSSHHostname
	I0914 17:05:13.808526   27433 main.go:141] libmachine: (ha-929592) DBG | domain ha-929592 has defined MAC address 52:54:00:5c:cb:09 in network mk-ha-929592
	I0914 17:05:13.808873   27433 main.go:141] libmachine: (ha-929592) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:cb:09", ip: ""} in network mk-ha-929592: {Iface:virbr1 ExpiryTime:2024-09-14 18:05:06 +0000 UTC Type:0 Mac:52:54:00:5c:cb:09 Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:ha-929592 Clientid:01:52:54:00:5c:cb:09}
	I0914 17:05:13.808900   27433 main.go:141] libmachine: (ha-929592) DBG | domain ha-929592 has defined IP address 192.168.39.54 and MAC address 52:54:00:5c:cb:09 in network mk-ha-929592
	I0914 17:05:13.809020   27433 main.go:141] libmachine: (ha-929592) Calling .GetSSHPort
	I0914 17:05:13.809200   27433 main.go:141] libmachine: (ha-929592) Calling .GetSSHKeyPath
	I0914 17:05:13.809343   27433 main.go:141] libmachine: (ha-929592) Calling .GetSSHKeyPath
	I0914 17:05:13.809458   27433 main.go:141] libmachine: (ha-929592) Calling .GetSSHUsername
	I0914 17:05:13.809615   27433 main.go:141] libmachine: Using SSH client type: native
	I0914 17:05:13.809793   27433 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.54 22 <nil> <nil>}
	I0914 17:05:13.809806   27433 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0914 17:05:13.922612   27433 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726333513.897189242
	
	I0914 17:05:13.922637   27433 fix.go:216] guest clock: 1726333513.897189242
	I0914 17:05:13.922645   27433 fix.go:229] Guest: 2024-09-14 17:05:13.897189242 +0000 UTC Remote: 2024-09-14 17:05:13.806045002 +0000 UTC m=+21.477242677 (delta=91.14424ms)
	I0914 17:05:13.922688   27433 fix.go:200] guest clock delta is within tolerance: 91.14424ms
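
The guest-clock check compares the `date +%s.%N` reading from the VM with the host's wall clock and accepts the clock if the difference stays inside a tolerance. A bare-bones version of that comparison, using the two timestamps from the fix.go lines above (the 2s tolerance is an assumption, not minikube's exact threshold):

package main

import (
	"fmt"
	"math"
	"strconv"
	"time"
)

func main() {
	// Guest reading comes from `date +%s.%N` over SSH; the host reading is the
	// local wall clock captured as the command returned (both values from the log above).
	guestSecs, _ := strconv.ParseFloat("1726333513.897189242", 64)
	guest := time.Unix(0, int64(guestSecs*float64(time.Second)))
	host := time.Date(2024, 9, 14, 17, 5, 13, 806045002, time.UTC)

	delta := time.Duration(math.Abs(float64(guest.Sub(host))))
	const tolerance = 2 * time.Second // assumed threshold
	fmt.Printf("guest clock delta: %v (within tolerance: %v)\n", delta, delta <= tolerance)
}
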
	I0914 17:05:13.922696   27433 start.go:83] releasing machines lock for "ha-929592", held for 21.490239455s
	I0914 17:05:13.922722   27433 main.go:141] libmachine: (ha-929592) Calling .DriverName
	I0914 17:05:13.922955   27433 main.go:141] libmachine: (ha-929592) Calling .GetIP
	I0914 17:05:13.925674   27433 main.go:141] libmachine: (ha-929592) DBG | domain ha-929592 has defined MAC address 52:54:00:5c:cb:09 in network mk-ha-929592
	I0914 17:05:13.926017   27433 main.go:141] libmachine: (ha-929592) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:cb:09", ip: ""} in network mk-ha-929592: {Iface:virbr1 ExpiryTime:2024-09-14 18:05:06 +0000 UTC Type:0 Mac:52:54:00:5c:cb:09 Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:ha-929592 Clientid:01:52:54:00:5c:cb:09}
	I0914 17:05:13.926040   27433 main.go:141] libmachine: (ha-929592) DBG | domain ha-929592 has defined IP address 192.168.39.54 and MAC address 52:54:00:5c:cb:09 in network mk-ha-929592
	I0914 17:05:13.926209   27433 main.go:141] libmachine: (ha-929592) Calling .DriverName
	I0914 17:05:13.926806   27433 main.go:141] libmachine: (ha-929592) Calling .DriverName
	I0914 17:05:13.926983   27433 main.go:141] libmachine: (ha-929592) Calling .DriverName
	I0914 17:05:13.927099   27433 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0914 17:05:13.927145   27433 main.go:141] libmachine: (ha-929592) Calling .GetSSHHostname
	I0914 17:05:13.927189   27433 ssh_runner.go:195] Run: cat /version.json
	I0914 17:05:13.927212   27433 main.go:141] libmachine: (ha-929592) Calling .GetSSHHostname
	I0914 17:05:13.929964   27433 main.go:141] libmachine: (ha-929592) DBG | domain ha-929592 has defined MAC address 52:54:00:5c:cb:09 in network mk-ha-929592
	I0914 17:05:13.930096   27433 main.go:141] libmachine: (ha-929592) DBG | domain ha-929592 has defined MAC address 52:54:00:5c:cb:09 in network mk-ha-929592
	I0914 17:05:13.930382   27433 main.go:141] libmachine: (ha-929592) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:cb:09", ip: ""} in network mk-ha-929592: {Iface:virbr1 ExpiryTime:2024-09-14 18:05:06 +0000 UTC Type:0 Mac:52:54:00:5c:cb:09 Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:ha-929592 Clientid:01:52:54:00:5c:cb:09}
	I0914 17:05:13.930410   27433 main.go:141] libmachine: (ha-929592) DBG | domain ha-929592 has defined IP address 192.168.39.54 and MAC address 52:54:00:5c:cb:09 in network mk-ha-929592
	I0914 17:05:13.930523   27433 main.go:141] libmachine: (ha-929592) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:cb:09", ip: ""} in network mk-ha-929592: {Iface:virbr1 ExpiryTime:2024-09-14 18:05:06 +0000 UTC Type:0 Mac:52:54:00:5c:cb:09 Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:ha-929592 Clientid:01:52:54:00:5c:cb:09}
	I0914 17:05:13.930546   27433 main.go:141] libmachine: (ha-929592) DBG | domain ha-929592 has defined IP address 192.168.39.54 and MAC address 52:54:00:5c:cb:09 in network mk-ha-929592
	I0914 17:05:13.930575   27433 main.go:141] libmachine: (ha-929592) Calling .GetSSHPort
	I0914 17:05:13.930693   27433 main.go:141] libmachine: (ha-929592) Calling .GetSSHPort
	I0914 17:05:13.930769   27433 main.go:141] libmachine: (ha-929592) Calling .GetSSHKeyPath
	I0914 17:05:13.930789   27433 main.go:141] libmachine: (ha-929592) Calling .GetSSHKeyPath
	I0914 17:05:13.930927   27433 main.go:141] libmachine: (ha-929592) Calling .GetSSHUsername
	I0914 17:05:13.930932   27433 main.go:141] libmachine: (ha-929592) Calling .GetSSHUsername
	I0914 17:05:13.931033   27433 sshutil.go:53] new ssh client: &{IP:192.168.39.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/ha-929592/id_rsa Username:docker}
	I0914 17:05:13.931078   27433 sshutil.go:53] new ssh client: &{IP:192.168.39.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/ha-929592/id_rsa Username:docker}
	I0914 17:05:14.039985   27433 ssh_runner.go:195] Run: systemctl --version
	I0914 17:05:14.045861   27433 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0914 17:05:14.202332   27433 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0914 17:05:14.208032   27433 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0914 17:05:14.208097   27433 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0914 17:05:14.224174   27433 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0914 17:05:14.224197   27433 start.go:495] detecting cgroup driver to use...
	I0914 17:05:14.224263   27433 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0914 17:05:14.240804   27433 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0914 17:05:14.254062   27433 docker.go:217] disabling cri-docker service (if available) ...
	I0914 17:05:14.254113   27433 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0914 17:05:14.267269   27433 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0914 17:05:14.280412   27433 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0914 17:05:14.389375   27433 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0914 17:05:14.542112   27433 docker.go:233] disabling docker service ...
	I0914 17:05:14.542194   27433 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0914 17:05:14.555724   27433 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0914 17:05:14.567773   27433 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0914 17:05:14.695885   27433 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0914 17:05:14.828486   27433 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0914 17:05:14.841740   27433 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0914 17:05:14.859848   27433 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0914 17:05:14.859924   27433 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 17:05:14.870387   27433 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0914 17:05:14.870468   27433 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 17:05:14.880584   27433 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 17:05:14.890449   27433 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 17:05:14.900203   27433 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0914 17:05:14.910750   27433 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 17:05:14.920469   27433 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 17:05:14.936981   27433 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
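
The sed pipeline above rewrites a handful of keys in /etc/crio/crio.conf.d/02-crio.conf: the pause image, the cgroup manager, the conmon cgroup, and the unprivileged-port sysctl. The Go sketch below mimics the first three edits on an in-memory copy of the file (the starting values are assumptions; only the substitutions mirror the log):

package main

import (
	"fmt"
	"regexp"
)

func main() {
	// Stand-in for 02-crio.conf before the edits; starting values are assumed.
	conf := `pause_image = "registry.k8s.io/pause:3.9"
cgroup_manager = "systemd"
conmon_cgroup = "system.slice"
`
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10"`)
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
	// Drop any existing conmon_cgroup line, then re-add it after cgroup_manager.
	conf = regexp.MustCompile(`(?m)^conmon_cgroup = .*\n`).ReplaceAllString(conf, "")
	conf = regexp.MustCompile(`(?m)^(cgroup_manager = .*)$`).
		ReplaceAllString(conf, "$1\nconmon_cgroup = \"pod\"")
	fmt.Print(conf)
}
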
	I0914 17:05:14.947452   27433 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0914 17:05:14.956918   27433 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0914 17:05:14.956978   27433 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0914 17:05:14.968884   27433 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
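
crio.go first probes the bridge netfilter sysctl, and when /proc/sys/net/bridge is missing it falls back to loading br_netfilter, then enables IPv4 forwarding. A compact check-then-fallback sketch of that sequence (Go; must run as root, and it shells out to modprobe just as the log does):

package main

import (
	"log"
	"os"
	"os/exec"
)

func main() {
	if _, err := os.Stat("/proc/sys/net/bridge/bridge-nf-call-iptables"); err != nil {
		// Key not present yet: load the module that creates it.
		if out, err := exec.Command("modprobe", "br_netfilter").CombinedOutput(); err != nil {
			log.Fatalf("modprobe br_netfilter: %v\n%s", err, out)
		}
	}
	if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1\n"), 0o644); err != nil {
		log.Fatal(err)
	}
}
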
	I0914 17:05:14.978656   27433 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 17:05:15.098602   27433 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0914 17:05:15.183490   27433 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0914 17:05:15.183560   27433 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0914 17:05:15.187992   27433 start.go:563] Will wait 60s for crictl version
	I0914 17:05:15.188052   27433 ssh_runner.go:195] Run: which crictl
	I0914 17:05:15.191667   27433 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0914 17:05:15.229963   27433 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0914 17:05:15.230059   27433 ssh_runner.go:195] Run: crio --version
	I0914 17:05:15.259743   27433 ssh_runner.go:195] Run: crio --version
	I0914 17:05:15.289467   27433 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0914 17:05:15.291045   27433 main.go:141] libmachine: (ha-929592) Calling .GetIP
	I0914 17:05:15.293584   27433 main.go:141] libmachine: (ha-929592) DBG | domain ha-929592 has defined MAC address 52:54:00:5c:cb:09 in network mk-ha-929592
	I0914 17:05:15.293883   27433 main.go:141] libmachine: (ha-929592) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:cb:09", ip: ""} in network mk-ha-929592: {Iface:virbr1 ExpiryTime:2024-09-14 18:05:06 +0000 UTC Type:0 Mac:52:54:00:5c:cb:09 Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:ha-929592 Clientid:01:52:54:00:5c:cb:09}
	I0914 17:05:15.293901   27433 main.go:141] libmachine: (ha-929592) DBG | domain ha-929592 has defined IP address 192.168.39.54 and MAC address 52:54:00:5c:cb:09 in network mk-ha-929592
	I0914 17:05:15.294141   27433 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0914 17:05:15.298491   27433 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0914 17:05:15.311225   27433 kubeadm.go:883] updating cluster {Name:ha-929592 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19643/minikube-v1.34.0-1726281733-19643-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726281268-19643@sha256:cce8be0c1ac4e3d852132008ef1cc1dcf5b79f708d025db83f146ae65db32e8e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-929592 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.54 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0914 17:05:15.311331   27433 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0914 17:05:15.311373   27433 ssh_runner.go:195] Run: sudo crictl images --output json
	I0914 17:05:15.343052   27433 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0914 17:05:15.343113   27433 ssh_runner.go:195] Run: which lz4
	I0914 17:05:15.346935   27433 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19643-8806/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0914 17:05:15.347018   27433 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0914 17:05:15.351018   27433 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0914 17:05:15.351055   27433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I0914 17:05:16.543497   27433 crio.go:462] duration metric: took 1.196498878s to copy over tarball
	I0914 17:05:16.543571   27433 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0914 17:05:18.520730   27433 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.977128894s)
	I0914 17:05:18.520768   27433 crio.go:469] duration metric: took 1.977245938s to extract the tarball
	I0914 17:05:18.520779   27433 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0914 17:05:18.556314   27433 ssh_runner.go:195] Run: sudo crictl images --output json
	I0914 17:05:18.598630   27433 crio.go:514] all images are preloaded for cri-o runtime.
	I0914 17:05:18.598656   27433 cache_images.go:84] Images are preloaded, skipping loading
	I0914 17:05:18.598666   27433 kubeadm.go:934] updating node { 192.168.39.54 8443 v1.31.1 crio true true} ...
	I0914 17:05:18.598778   27433 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-929592 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.54
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-929592 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0914 17:05:18.598841   27433 ssh_runner.go:195] Run: crio config
	I0914 17:05:18.643561   27433 cni.go:84] Creating CNI manager for ""
	I0914 17:05:18.643580   27433 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0914 17:05:18.643589   27433 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0914 17:05:18.643609   27433 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.54 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-929592 NodeName:ha-929592 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.54"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.54 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0914 17:05:18.643735   27433 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.54
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-929592"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.54
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.54"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0914 17:05:18.643764   27433 kube-vip.go:115] generating kube-vip config ...
	I0914 17:05:18.643803   27433 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0914 17:05:18.659498   27433 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0914 17:05:18.659626   27433 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
	I0914 17:05:18.659687   27433 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0914 17:05:18.669124   27433 binaries.go:44] Found k8s binaries, skipping transfer
	I0914 17:05:18.669186   27433 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0914 17:05:18.678492   27433 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (308 bytes)
	I0914 17:05:18.694270   27433 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0914 17:05:18.709635   27433 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2150 bytes)
	I0914 17:05:18.725145   27433 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I0914 17:05:18.740755   27433 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0914 17:05:18.744332   27433 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0914 17:05:18.755630   27433 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 17:05:18.868873   27433 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0914 17:05:18.885268   27433 certs.go:68] Setting up /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/ha-929592 for IP: 192.168.39.54
	I0914 17:05:18.885293   27433 certs.go:194] generating shared ca certs ...
	I0914 17:05:18.885315   27433 certs.go:226] acquiring lock for ca certs: {Name:mkb663a3180967f5f94f0c355b2cd55067394331 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 17:05:18.885509   27433 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19643-8806/.minikube/ca.key
	I0914 17:05:18.885567   27433 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19643-8806/.minikube/proxy-client-ca.key
	I0914 17:05:18.885580   27433 certs.go:256] generating profile certs ...
	I0914 17:05:18.885640   27433 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/ha-929592/client.key
	I0914 17:05:18.885667   27433 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/ha-929592/client.crt with IP's: []
	I0914 17:05:19.132478   27433 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/ha-929592/client.crt ...
	I0914 17:05:19.132513   27433 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/ha-929592/client.crt: {Name:mk54c9566b78ae48c2ae4c2a1b029e7d573c0c86 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 17:05:19.132674   27433 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/ha-929592/client.key ...
	I0914 17:05:19.132683   27433 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/ha-929592/client.key: {Name:mk4627546c29d8132adefa948bb74cf246c39702 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 17:05:19.132757   27433 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/ha-929592/apiserver.key.90aea383
	I0914 17:05:19.132771   27433 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/ha-929592/apiserver.crt.90aea383 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.54 192.168.39.254]
	I0914 17:05:19.378339   27433 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/ha-929592/apiserver.crt.90aea383 ...
	I0914 17:05:19.378369   27433 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/ha-929592/apiserver.crt.90aea383: {Name:mk917bd493eb4252b59420c304591247a8797944 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 17:05:19.378528   27433 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/ha-929592/apiserver.key.90aea383 ...
	I0914 17:05:19.378542   27433 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/ha-929592/apiserver.key.90aea383: {Name:mk063999a82be1870a27e4e9637b0675bcfe2750 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 17:05:19.378613   27433 certs.go:381] copying /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/ha-929592/apiserver.crt.90aea383 -> /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/ha-929592/apiserver.crt
	I0914 17:05:19.378702   27433 certs.go:385] copying /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/ha-929592/apiserver.key.90aea383 -> /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/ha-929592/apiserver.key
	I0914 17:05:19.378755   27433 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/ha-929592/proxy-client.key
	I0914 17:05:19.378770   27433 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/ha-929592/proxy-client.crt with IP's: []
	I0914 17:05:19.519778   27433 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/ha-929592/proxy-client.crt ...
	I0914 17:05:19.519809   27433 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/ha-929592/proxy-client.crt: {Name:mk26ab7b30268ecdbdb0a5c3970d6da8a5fc24f3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 17:05:19.519957   27433 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/ha-929592/proxy-client.key ...
	I0914 17:05:19.519967   27433 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/ha-929592/proxy-client.key: {Name:mkd9e9e56ad626cbe3ea15682b1f7c52cdbd81c5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 17:05:19.520072   27433 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19643-8806/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0914 17:05:19.520088   27433 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19643-8806/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0914 17:05:19.520099   27433 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19643-8806/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0914 17:05:19.520113   27433 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19643-8806/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0914 17:05:19.520145   27433 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/ha-929592/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0914 17:05:19.520159   27433 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/ha-929592/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0914 17:05:19.520171   27433 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/ha-929592/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0914 17:05:19.520184   27433 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/ha-929592/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0914 17:05:19.520229   27433 certs.go:484] found cert: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/16016.pem (1338 bytes)
	W0914 17:05:19.520260   27433 certs.go:480] ignoring /home/jenkins/minikube-integration/19643-8806/.minikube/certs/16016_empty.pem, impossibly tiny 0 bytes
	I0914 17:05:19.520269   27433 certs.go:484] found cert: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca-key.pem (1679 bytes)
	I0914 17:05:19.520295   27433 certs.go:484] found cert: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca.pem (1082 bytes)
	I0914 17:05:19.520322   27433 certs.go:484] found cert: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/cert.pem (1123 bytes)
	I0914 17:05:19.520343   27433 certs.go:484] found cert: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/key.pem (1675 bytes)
	I0914 17:05:19.520380   27433 certs.go:484] found cert: /home/jenkins/minikube-integration/19643-8806/.minikube/files/etc/ssl/certs/160162.pem (1708 bytes)
	I0914 17:05:19.520404   27433 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19643-8806/.minikube/files/etc/ssl/certs/160162.pem -> /usr/share/ca-certificates/160162.pem
	I0914 17:05:19.520422   27433 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19643-8806/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0914 17:05:19.520437   27433 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/16016.pem -> /usr/share/ca-certificates/16016.pem
	I0914 17:05:19.520976   27433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0914 17:05:19.545186   27433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0914 17:05:19.568589   27433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0914 17:05:19.593254   27433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0914 17:05:19.616378   27433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/ha-929592/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0914 17:05:19.641070   27433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/ha-929592/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0914 17:05:19.667208   27433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/ha-929592/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0914 17:05:19.700432   27433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/ha-929592/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0914 17:05:19.725080   27433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/files/etc/ssl/certs/160162.pem --> /usr/share/ca-certificates/160162.pem (1708 bytes)
	I0914 17:05:19.747406   27433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0914 17:05:19.770099   27433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/certs/16016.pem --> /usr/share/ca-certificates/16016.pem (1338 bytes)
	I0914 17:05:19.793257   27433 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0914 17:05:19.809344   27433 ssh_runner.go:195] Run: openssl version
	I0914 17:05:19.815041   27433 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16016.pem && ln -fs /usr/share/ca-certificates/16016.pem /etc/ssl/certs/16016.pem"
	I0914 17:05:19.825746   27433 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16016.pem
	I0914 17:05:19.829941   27433 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 14 17:01 /usr/share/ca-certificates/16016.pem
	I0914 17:05:19.829998   27433 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16016.pem
	I0914 17:05:19.835403   27433 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16016.pem /etc/ssl/certs/51391683.0"
	I0914 17:05:19.846249   27433 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/160162.pem && ln -fs /usr/share/ca-certificates/160162.pem /etc/ssl/certs/160162.pem"
	I0914 17:05:19.857197   27433 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/160162.pem
	I0914 17:05:19.861444   27433 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 14 17:01 /usr/share/ca-certificates/160162.pem
	I0914 17:05:19.861493   27433 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/160162.pem
	I0914 17:05:19.866827   27433 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/160162.pem /etc/ssl/certs/3ec20f2e.0"
	I0914 17:05:19.877466   27433 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0914 17:05:19.888231   27433 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0914 17:05:19.892457   27433 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 14 16:44 /usr/share/ca-certificates/minikubeCA.pem
	I0914 17:05:19.892517   27433 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0914 17:05:19.898027   27433 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0914 17:05:19.909064   27433 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0914 17:05:19.913022   27433 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0914 17:05:19.913080   27433 kubeadm.go:392] StartCluster: {Name:ha-929592 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19643/minikube-v1.34.0-1726281733-19643-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726281268-19643@sha256:cce8be0c1ac4e3d852132008ef1cc1dcf5b79f708d025db83f146ae65db32e8e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-929592 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.54 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0914 17:05:19.913140   27433 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0914 17:05:19.913197   27433 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0914 17:05:19.953075   27433 cri.go:89] found id: ""
	I0914 17:05:19.953159   27433 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0914 17:05:19.962939   27433 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0914 17:05:19.972418   27433 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0914 17:05:19.981720   27433 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0914 17:05:19.981739   27433 kubeadm.go:157] found existing configuration files:
	
	I0914 17:05:19.981779   27433 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0914 17:05:19.990455   27433 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0914 17:05:19.990520   27433 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0914 17:05:19.999755   27433 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0914 17:05:20.008502   27433 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0914 17:05:20.008558   27433 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0914 17:05:20.017608   27433 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0914 17:05:20.026183   27433 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0914 17:05:20.026237   27433 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0914 17:05:20.035009   27433 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0914 17:05:20.043331   27433 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0914 17:05:20.043381   27433 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0914 17:05:20.052637   27433 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0914 17:05:20.151886   27433 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0914 17:05:20.152003   27433 kubeadm.go:310] [preflight] Running pre-flight checks
	I0914 17:05:20.270747   27433 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0914 17:05:20.270932   27433 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0914 17:05:20.271051   27433 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0914 17:05:20.279190   27433 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0914 17:05:20.331860   27433 out.go:235]   - Generating certificates and keys ...
	I0914 17:05:20.331982   27433 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0914 17:05:20.332065   27433 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0914 17:05:20.378810   27433 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0914 17:05:20.487711   27433 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0914 17:05:20.688491   27433 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0914 17:05:20.981539   27433 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0914 17:05:21.067314   27433 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0914 17:05:21.067685   27433 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-929592 localhost] and IPs [192.168.39.54 127.0.0.1 ::1]
	I0914 17:05:21.216228   27433 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0914 17:05:21.216639   27433 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-929592 localhost] and IPs [192.168.39.54 127.0.0.1 ::1]
	I0914 17:05:21.378027   27433 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0914 17:05:21.815304   27433 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0914 17:05:21.898368   27433 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0914 17:05:21.898707   27433 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0914 17:05:22.029236   27433 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0914 17:05:22.119811   27433 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0914 17:05:22.386426   27433 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0914 17:05:22.439748   27433 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0914 17:05:22.702524   27433 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0914 17:05:22.703297   27433 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0914 17:05:22.706959   27433 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0914 17:05:22.708786   27433 out.go:235]   - Booting up control plane ...
	I0914 17:05:22.708887   27433 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0914 17:05:22.710820   27433 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0914 17:05:22.711656   27433 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0914 17:05:22.726607   27433 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0914 17:05:22.732633   27433 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0914 17:05:22.732708   27433 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0914 17:05:22.872776   27433 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0914 17:05:22.872910   27433 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0914 17:05:23.374803   27433 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.315802ms
	I0914 17:05:23.374911   27433 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0914 17:05:29.331478   27433 kubeadm.go:310] [api-check] The API server is healthy after 5.958547603s
	I0914 17:05:29.341859   27433 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0914 17:05:29.355652   27433 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0914 17:05:29.384741   27433 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0914 17:05:29.384956   27433 kubeadm.go:310] [mark-control-plane] Marking the node ha-929592 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0914 17:05:29.403402   27433 kubeadm.go:310] [bootstrap-token] Using token: kz9zjv.9vz6qx71da3375jr
	I0914 17:05:29.404608   27433 out.go:235]   - Configuring RBAC rules ...
	I0914 17:05:29.404755   27433 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0914 17:05:29.412435   27433 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0914 17:05:29.425683   27433 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0914 17:05:29.432156   27433 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0914 17:05:29.435728   27433 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0914 17:05:29.441992   27433 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0914 17:05:29.741459   27433 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0914 17:05:30.169011   27433 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0914 17:05:30.739086   27433 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0914 17:05:30.739909   27433 kubeadm.go:310] 
	I0914 17:05:30.739982   27433 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0914 17:05:30.739991   27433 kubeadm.go:310] 
	I0914 17:05:30.740112   27433 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0914 17:05:30.740140   27433 kubeadm.go:310] 
	I0914 17:05:30.740172   27433 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0914 17:05:30.740248   27433 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0914 17:05:30.740313   27433 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0914 17:05:30.740322   27433 kubeadm.go:310] 
	I0914 17:05:30.740400   27433 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0914 17:05:30.740414   27433 kubeadm.go:310] 
	I0914 17:05:30.740485   27433 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0914 17:05:30.740499   27433 kubeadm.go:310] 
	I0914 17:05:30.740586   27433 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0914 17:05:30.740708   27433 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0914 17:05:30.740812   27433 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0914 17:05:30.740820   27433 kubeadm.go:310] 
	I0914 17:05:30.740920   27433 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0914 17:05:30.741030   27433 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0914 17:05:30.741040   27433 kubeadm.go:310] 
	I0914 17:05:30.741163   27433 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token kz9zjv.9vz6qx71da3375jr \
	I0914 17:05:30.741331   27433 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:30a8b6e98108730ec92a246ad3f4e746211e79f910581e46cd69c4cd78384600 \
	I0914 17:05:30.741381   27433 kubeadm.go:310] 	--control-plane 
	I0914 17:05:30.741391   27433 kubeadm.go:310] 
	I0914 17:05:30.741480   27433 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0914 17:05:30.741490   27433 kubeadm.go:310] 
	I0914 17:05:30.741610   27433 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token kz9zjv.9vz6qx71da3375jr \
	I0914 17:05:30.741767   27433 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:30a8b6e98108730ec92a246ad3f4e746211e79f910581e46cd69c4cd78384600 
	I0914 17:05:30.742153   27433 kubeadm.go:310] W0914 17:05:20.130501     831 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0914 17:05:30.742508   27433 kubeadm.go:310] W0914 17:05:20.131686     831 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0914 17:05:30.742657   27433 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0914 17:05:30.742707   27433 cni.go:84] Creating CNI manager for ""
	I0914 17:05:30.742723   27433 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0914 17:05:30.744429   27433 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0914 17:05:30.745679   27433 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0914 17:05:30.751964   27433 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.1/kubectl ...
	I0914 17:05:30.751988   27433 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0914 17:05:30.770060   27433 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0914 17:05:31.164465   27433 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0914 17:05:31.164521   27433 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 17:05:31.164615   27433 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-929592 minikube.k8s.io/updated_at=2024_09_14T17_05_31_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=fbeeb744274463b05401c917e5ab21bbaf5ef95a minikube.k8s.io/name=ha-929592 minikube.k8s.io/primary=true
	I0914 17:05:31.316903   27433 ops.go:34] apiserver oom_adj: -16
	I0914 17:05:31.320198   27433 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 17:05:31.820892   27433 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 17:05:32.321061   27433 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 17:05:32.821075   27433 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 17:05:33.321063   27433 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 17:05:33.821156   27433 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 17:05:34.320520   27433 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 17:05:34.439647   27433 kubeadm.go:1113] duration metric: took 3.27517461s to wait for elevateKubeSystemPrivileges
	I0914 17:05:34.439682   27433 kubeadm.go:394] duration metric: took 14.526605759s to StartCluster
	I0914 17:05:34.439701   27433 settings.go:142] acquiring lock: {Name:mkee31cd57c3a91eff581c48ba7961460fb3e0b1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 17:05:34.439783   27433 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19643-8806/kubeconfig
	I0914 17:05:34.440673   27433 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19643-8806/kubeconfig: {Name:mk850f3e2f3c2ef1e81c795ae72e22e459e27140 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 17:05:34.440870   27433 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.54 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0914 17:05:34.440890   27433 start.go:241] waiting for startup goroutines ...
	I0914 17:05:34.440898   27433 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0914 17:05:34.440903   27433 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0914 17:05:34.440974   27433 addons.go:69] Setting storage-provisioner=true in profile "ha-929592"
	I0914 17:05:34.440989   27433 addons.go:234] Setting addon storage-provisioner=true in "ha-929592"
	I0914 17:05:34.440994   27433 addons.go:69] Setting default-storageclass=true in profile "ha-929592"
	I0914 17:05:34.441011   27433 host.go:66] Checking if "ha-929592" exists ...
	I0914 17:05:34.441013   27433 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-929592"
	I0914 17:05:34.441090   27433 config.go:182] Loaded profile config "ha-929592": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0914 17:05:34.441463   27433 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 17:05:34.441470   27433 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 17:05:34.441510   27433 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 17:05:34.441513   27433 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 17:05:34.457224   27433 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43221
	I0914 17:05:34.457313   27433 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38559
	I0914 17:05:34.457897   27433 main.go:141] libmachine: () Calling .GetVersion
	I0914 17:05:34.457910   27433 main.go:141] libmachine: () Calling .GetVersion
	I0914 17:05:34.458408   27433 main.go:141] libmachine: Using API Version  1
	I0914 17:05:34.458429   27433 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 17:05:34.458552   27433 main.go:141] libmachine: Using API Version  1
	I0914 17:05:34.458575   27433 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 17:05:34.458783   27433 main.go:141] libmachine: () Calling .GetMachineName
	I0914 17:05:34.458907   27433 main.go:141] libmachine: () Calling .GetMachineName
	I0914 17:05:34.459076   27433 main.go:141] libmachine: (ha-929592) Calling .GetState
	I0914 17:05:34.459303   27433 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 17:05:34.459339   27433 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 17:05:34.461237   27433 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19643-8806/kubeconfig
	I0914 17:05:34.461492   27433 kapi.go:59] client config for ha-929592: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19643-8806/.minikube/profiles/ha-929592/client.crt", KeyFile:"/home/jenkins/minikube-integration/19643-8806/.minikube/profiles/ha-929592/client.key", CAFile:"/home/jenkins/minikube-integration/19643-8806/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f6fca0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0914 17:05:34.461940   27433 cert_rotation.go:140] Starting client certificate rotation controller
	I0914 17:05:34.462181   27433 addons.go:234] Setting addon default-storageclass=true in "ha-929592"
	I0914 17:05:34.462219   27433 host.go:66] Checking if "ha-929592" exists ...
	I0914 17:05:34.462500   27433 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 17:05:34.462531   27433 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 17:05:34.475058   27433 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35815
	I0914 17:05:34.475673   27433 main.go:141] libmachine: () Calling .GetVersion
	I0914 17:05:34.476165   27433 main.go:141] libmachine: Using API Version  1
	I0914 17:05:34.476191   27433 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 17:05:34.476576   27433 main.go:141] libmachine: () Calling .GetMachineName
	I0914 17:05:34.476759   27433 main.go:141] libmachine: (ha-929592) Calling .GetState
	I0914 17:05:34.477824   27433 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37035
	I0914 17:05:34.478369   27433 main.go:141] libmachine: () Calling .GetVersion
	I0914 17:05:34.478505   27433 main.go:141] libmachine: (ha-929592) Calling .DriverName
	I0914 17:05:34.479023   27433 main.go:141] libmachine: Using API Version  1
	I0914 17:05:34.479047   27433 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 17:05:34.479367   27433 main.go:141] libmachine: () Calling .GetMachineName
	I0914 17:05:34.479964   27433 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 17:05:34.480011   27433 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 17:05:34.480339   27433 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 17:05:34.481456   27433 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0914 17:05:34.481468   27433 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0914 17:05:34.481482   27433 main.go:141] libmachine: (ha-929592) Calling .GetSSHHostname
	I0914 17:05:34.484719   27433 main.go:141] libmachine: (ha-929592) DBG | domain ha-929592 has defined MAC address 52:54:00:5c:cb:09 in network mk-ha-929592
	I0914 17:05:34.485183   27433 main.go:141] libmachine: (ha-929592) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:cb:09", ip: ""} in network mk-ha-929592: {Iface:virbr1 ExpiryTime:2024-09-14 18:05:06 +0000 UTC Type:0 Mac:52:54:00:5c:cb:09 Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:ha-929592 Clientid:01:52:54:00:5c:cb:09}
	I0914 17:05:34.485211   27433 main.go:141] libmachine: (ha-929592) DBG | domain ha-929592 has defined IP address 192.168.39.54 and MAC address 52:54:00:5c:cb:09 in network mk-ha-929592
	I0914 17:05:34.485528   27433 main.go:141] libmachine: (ha-929592) Calling .GetSSHPort
	I0914 17:05:34.485758   27433 main.go:141] libmachine: (ha-929592) Calling .GetSSHKeyPath
	I0914 17:05:34.485917   27433 main.go:141] libmachine: (ha-929592) Calling .GetSSHUsername
	I0914 17:05:34.486074   27433 sshutil.go:53] new ssh client: &{IP:192.168.39.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/ha-929592/id_rsa Username:docker}
	I0914 17:05:34.496123   27433 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34511
	I0914 17:05:34.496519   27433 main.go:141] libmachine: () Calling .GetVersion
	I0914 17:05:34.497055   27433 main.go:141] libmachine: Using API Version  1
	I0914 17:05:34.497093   27433 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 17:05:34.497461   27433 main.go:141] libmachine: () Calling .GetMachineName
	I0914 17:05:34.497647   27433 main.go:141] libmachine: (ha-929592) Calling .GetState
	I0914 17:05:34.499133   27433 main.go:141] libmachine: (ha-929592) Calling .DriverName
	I0914 17:05:34.499313   27433 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0914 17:05:34.499330   27433 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0914 17:05:34.499348   27433 main.go:141] libmachine: (ha-929592) Calling .GetSSHHostname
	I0914 17:05:34.502134   27433 main.go:141] libmachine: (ha-929592) DBG | domain ha-929592 has defined MAC address 52:54:00:5c:cb:09 in network mk-ha-929592
	I0914 17:05:34.502557   27433 main.go:141] libmachine: (ha-929592) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:cb:09", ip: ""} in network mk-ha-929592: {Iface:virbr1 ExpiryTime:2024-09-14 18:05:06 +0000 UTC Type:0 Mac:52:54:00:5c:cb:09 Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:ha-929592 Clientid:01:52:54:00:5c:cb:09}
	I0914 17:05:34.502574   27433 main.go:141] libmachine: (ha-929592) DBG | domain ha-929592 has defined IP address 192.168.39.54 and MAC address 52:54:00:5c:cb:09 in network mk-ha-929592
	I0914 17:05:34.502826   27433 main.go:141] libmachine: (ha-929592) Calling .GetSSHPort
	I0914 17:05:34.502965   27433 main.go:141] libmachine: (ha-929592) Calling .GetSSHKeyPath
	I0914 17:05:34.503093   27433 main.go:141] libmachine: (ha-929592) Calling .GetSSHUsername
	I0914 17:05:34.503200   27433 sshutil.go:53] new ssh client: &{IP:192.168.39.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/ha-929592/id_rsa Username:docker}
	I0914 17:05:34.631749   27433 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0914 17:05:34.652199   27433 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0914 17:05:34.665217   27433 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0914 17:05:35.192931   27433 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0914 17:05:35.482652   27433 main.go:141] libmachine: Making call to close driver server
	I0914 17:05:35.482678   27433 main.go:141] libmachine: (ha-929592) Calling .Close
	I0914 17:05:35.482753   27433 main.go:141] libmachine: Making call to close driver server
	I0914 17:05:35.482773   27433 main.go:141] libmachine: (ha-929592) Calling .Close
	I0914 17:05:35.482982   27433 main.go:141] libmachine: (ha-929592) DBG | Closing plugin on server side
	I0914 17:05:35.483014   27433 main.go:141] libmachine: (ha-929592) DBG | Closing plugin on server side
	I0914 17:05:35.483021   27433 main.go:141] libmachine: Successfully made call to close driver server
	I0914 17:05:35.483035   27433 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 17:05:35.483040   27433 main.go:141] libmachine: Successfully made call to close driver server
	I0914 17:05:35.483044   27433 main.go:141] libmachine: Making call to close driver server
	I0914 17:05:35.483048   27433 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 17:05:35.483051   27433 main.go:141] libmachine: (ha-929592) Calling .Close
	I0914 17:05:35.483056   27433 main.go:141] libmachine: Making call to close driver server
	I0914 17:05:35.483062   27433 main.go:141] libmachine: (ha-929592) Calling .Close
	I0914 17:05:35.483277   27433 main.go:141] libmachine: Successfully made call to close driver server
	I0914 17:05:35.483283   27433 main.go:141] libmachine: Successfully made call to close driver server
	I0914 17:05:35.483291   27433 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 17:05:35.483296   27433 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 17:05:35.483354   27433 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0914 17:05:35.483369   27433 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0914 17:05:35.483453   27433 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0914 17:05:35.483459   27433 round_trippers.go:469] Request Headers:
	I0914 17:05:35.483469   27433 round_trippers.go:473]     Accept: application/json, */*
	I0914 17:05:35.483475   27433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 17:05:35.500271   27433 round_trippers.go:574] Response Status: 200 OK in 16 milliseconds
	I0914 17:05:35.501063   27433 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0914 17:05:35.501088   27433 round_trippers.go:469] Request Headers:
	I0914 17:05:35.501100   27433 round_trippers.go:473]     Content-Type: application/json
	I0914 17:05:35.501106   27433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 17:05:35.501110   27433 round_trippers.go:473]     Accept: application/json, */*
	I0914 17:05:35.503856   27433 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 17:05:35.504029   27433 main.go:141] libmachine: Making call to close driver server
	I0914 17:05:35.504042   27433 main.go:141] libmachine: (ha-929592) Calling .Close
	I0914 17:05:35.504335   27433 main.go:141] libmachine: Successfully made call to close driver server
	I0914 17:05:35.504354   27433 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 17:05:35.506137   27433 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0914 17:05:35.507243   27433 addons.go:510] duration metric: took 1.066342353s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0914 17:05:35.507275   27433 start.go:246] waiting for cluster config update ...
	I0914 17:05:35.507290   27433 start.go:255] writing updated cluster config ...
	I0914 17:05:35.508881   27433 out.go:201] 
	I0914 17:05:35.510437   27433 config.go:182] Loaded profile config "ha-929592": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0914 17:05:35.510514   27433 profile.go:143] Saving config to /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/ha-929592/config.json ...
	I0914 17:05:35.511986   27433 out.go:177] * Starting "ha-929592-m02" control-plane node in "ha-929592" cluster
	I0914 17:05:35.513065   27433 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0914 17:05:35.513082   27433 cache.go:56] Caching tarball of preloaded images
	I0914 17:05:35.513171   27433 preload.go:172] Found /home/jenkins/minikube-integration/19643-8806/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0914 17:05:35.513187   27433 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0914 17:05:35.513256   27433 profile.go:143] Saving config to /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/ha-929592/config.json ...
	I0914 17:05:35.513422   27433 start.go:360] acquireMachinesLock for ha-929592-m02: {Name:mk26748ac63472c3f8b6f3848c12e76160c2970c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0914 17:05:35.513465   27433 start.go:364] duration metric: took 25.163µs to acquireMachinesLock for "ha-929592-m02"
	I0914 17:05:35.513486   27433 start.go:93] Provisioning new machine with config: &{Name:ha-929592 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19643/minikube-v1.34.0-1726281733-19643-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726281268-19643@sha256:cce8be0c1ac4e3d852132008ef1cc1dcf5b79f708d025db83f146ae65db32e8e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-929592 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.54 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0914 17:05:35.513547   27433 start.go:125] createHost starting for "m02" (driver="kvm2")
	I0914 17:05:35.515605   27433 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0914 17:05:35.515683   27433 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 17:05:35.515725   27433 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 17:05:35.530477   27433 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46259
	I0914 17:05:35.530959   27433 main.go:141] libmachine: () Calling .GetVersion
	I0914 17:05:35.531458   27433 main.go:141] libmachine: Using API Version  1
	I0914 17:05:35.531487   27433 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 17:05:35.531834   27433 main.go:141] libmachine: () Calling .GetMachineName
	I0914 17:05:35.532065   27433 main.go:141] libmachine: (ha-929592-m02) Calling .GetMachineName
	I0914 17:05:35.532193   27433 main.go:141] libmachine: (ha-929592-m02) Calling .DriverName
	I0914 17:05:35.532395   27433 start.go:159] libmachine.API.Create for "ha-929592" (driver="kvm2")
	I0914 17:05:35.532430   27433 client.go:168] LocalClient.Create starting
	I0914 17:05:35.532464   27433 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca.pem
	I0914 17:05:35.532508   27433 main.go:141] libmachine: Decoding PEM data...
	I0914 17:05:35.532527   27433 main.go:141] libmachine: Parsing certificate...
	I0914 17:05:35.532592   27433 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19643-8806/.minikube/certs/cert.pem
	I0914 17:05:35.532623   27433 main.go:141] libmachine: Decoding PEM data...
	I0914 17:05:35.532638   27433 main.go:141] libmachine: Parsing certificate...
	I0914 17:05:35.532664   27433 main.go:141] libmachine: Running pre-create checks...
	I0914 17:05:35.532676   27433 main.go:141] libmachine: (ha-929592-m02) Calling .PreCreateCheck
	I0914 17:05:35.532839   27433 main.go:141] libmachine: (ha-929592-m02) Calling .GetConfigRaw
	I0914 17:05:35.533284   27433 main.go:141] libmachine: Creating machine...
	I0914 17:05:35.533303   27433 main.go:141] libmachine: (ha-929592-m02) Calling .Create
	I0914 17:05:35.533445   27433 main.go:141] libmachine: (ha-929592-m02) Creating KVM machine...
	I0914 17:05:35.534813   27433 main.go:141] libmachine: (ha-929592-m02) DBG | found existing default KVM network
	I0914 17:05:35.534987   27433 main.go:141] libmachine: (ha-929592-m02) DBG | found existing private KVM network mk-ha-929592
	I0914 17:05:35.535101   27433 main.go:141] libmachine: (ha-929592-m02) Setting up store path in /home/jenkins/minikube-integration/19643-8806/.minikube/machines/ha-929592-m02 ...
	I0914 17:05:35.535124   27433 main.go:141] libmachine: (ha-929592-m02) Building disk image from file:///home/jenkins/minikube-integration/19643-8806/.minikube/cache/iso/amd64/minikube-v1.34.0-1726281733-19643-amd64.iso
	I0914 17:05:35.535202   27433 main.go:141] libmachine: (ha-929592-m02) DBG | I0914 17:05:35.535089   27764 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19643-8806/.minikube
	I0914 17:05:35.535308   27433 main.go:141] libmachine: (ha-929592-m02) Downloading /home/jenkins/minikube-integration/19643-8806/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19643-8806/.minikube/cache/iso/amd64/minikube-v1.34.0-1726281733-19643-amd64.iso...
	I0914 17:05:35.773131   27433 main.go:141] libmachine: (ha-929592-m02) DBG | I0914 17:05:35.772998   27764 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19643-8806/.minikube/machines/ha-929592-m02/id_rsa...
	I0914 17:05:35.915180   27433 main.go:141] libmachine: (ha-929592-m02) DBG | I0914 17:05:35.915050   27764 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19643-8806/.minikube/machines/ha-929592-m02/ha-929592-m02.rawdisk...
	I0914 17:05:35.915215   27433 main.go:141] libmachine: (ha-929592-m02) DBG | Writing magic tar header
	I0914 17:05:35.915230   27433 main.go:141] libmachine: (ha-929592-m02) DBG | Writing SSH key tar header
	I0914 17:05:35.915247   27433 main.go:141] libmachine: (ha-929592-m02) DBG | I0914 17:05:35.915202   27764 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19643-8806/.minikube/machines/ha-929592-m02 ...
	I0914 17:05:35.915330   27433 main.go:141] libmachine: (ha-929592-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19643-8806/.minikube/machines/ha-929592-m02
	I0914 17:05:35.915359   27433 main.go:141] libmachine: (ha-929592-m02) Setting executable bit set on /home/jenkins/minikube-integration/19643-8806/.minikube/machines/ha-929592-m02 (perms=drwx------)
	I0914 17:05:35.915376   27433 main.go:141] libmachine: (ha-929592-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19643-8806/.minikube/machines
	I0914 17:05:35.915391   27433 main.go:141] libmachine: (ha-929592-m02) Setting executable bit set on /home/jenkins/minikube-integration/19643-8806/.minikube/machines (perms=drwxr-xr-x)
	I0914 17:05:35.915408   27433 main.go:141] libmachine: (ha-929592-m02) Setting executable bit set on /home/jenkins/minikube-integration/19643-8806/.minikube (perms=drwxr-xr-x)
	I0914 17:05:35.915418   27433 main.go:141] libmachine: (ha-929592-m02) Setting executable bit set on /home/jenkins/minikube-integration/19643-8806 (perms=drwxrwxr-x)
	I0914 17:05:35.915426   27433 main.go:141] libmachine: (ha-929592-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0914 17:05:35.915435   27433 main.go:141] libmachine: (ha-929592-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0914 17:05:35.915451   27433 main.go:141] libmachine: (ha-929592-m02) Creating domain...
	I0914 17:05:35.915462   27433 main.go:141] libmachine: (ha-929592-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19643-8806/.minikube
	I0914 17:05:35.915474   27433 main.go:141] libmachine: (ha-929592-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19643-8806
	I0914 17:05:35.915485   27433 main.go:141] libmachine: (ha-929592-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0914 17:05:35.915494   27433 main.go:141] libmachine: (ha-929592-m02) DBG | Checking permissions on dir: /home/jenkins
	I0914 17:05:35.915502   27433 main.go:141] libmachine: (ha-929592-m02) DBG | Checking permissions on dir: /home
	I0914 17:05:35.915509   27433 main.go:141] libmachine: (ha-929592-m02) DBG | Skipping /home - not owner
	I0914 17:05:35.916419   27433 main.go:141] libmachine: (ha-929592-m02) define libvirt domain using xml: 
	I0914 17:05:35.916437   27433 main.go:141] libmachine: (ha-929592-m02) <domain type='kvm'>
	I0914 17:05:35.916445   27433 main.go:141] libmachine: (ha-929592-m02)   <name>ha-929592-m02</name>
	I0914 17:05:35.916452   27433 main.go:141] libmachine: (ha-929592-m02)   <memory unit='MiB'>2200</memory>
	I0914 17:05:35.916460   27433 main.go:141] libmachine: (ha-929592-m02)   <vcpu>2</vcpu>
	I0914 17:05:35.916473   27433 main.go:141] libmachine: (ha-929592-m02)   <features>
	I0914 17:05:35.916483   27433 main.go:141] libmachine: (ha-929592-m02)     <acpi/>
	I0914 17:05:35.916494   27433 main.go:141] libmachine: (ha-929592-m02)     <apic/>
	I0914 17:05:35.916502   27433 main.go:141] libmachine: (ha-929592-m02)     <pae/>
	I0914 17:05:35.916510   27433 main.go:141] libmachine: (ha-929592-m02)     
	I0914 17:05:35.916518   27433 main.go:141] libmachine: (ha-929592-m02)   </features>
	I0914 17:05:35.916524   27433 main.go:141] libmachine: (ha-929592-m02)   <cpu mode='host-passthrough'>
	I0914 17:05:35.916529   27433 main.go:141] libmachine: (ha-929592-m02)   
	I0914 17:05:35.916536   27433 main.go:141] libmachine: (ha-929592-m02)   </cpu>
	I0914 17:05:35.916543   27433 main.go:141] libmachine: (ha-929592-m02)   <os>
	I0914 17:05:35.916550   27433 main.go:141] libmachine: (ha-929592-m02)     <type>hvm</type>
	I0914 17:05:35.916558   27433 main.go:141] libmachine: (ha-929592-m02)     <boot dev='cdrom'/>
	I0914 17:05:35.916567   27433 main.go:141] libmachine: (ha-929592-m02)     <boot dev='hd'/>
	I0914 17:05:35.916584   27433 main.go:141] libmachine: (ha-929592-m02)     <bootmenu enable='no'/>
	I0914 17:05:35.916596   27433 main.go:141] libmachine: (ha-929592-m02)   </os>
	I0914 17:05:35.916604   27433 main.go:141] libmachine: (ha-929592-m02)   <devices>
	I0914 17:05:35.916609   27433 main.go:141] libmachine: (ha-929592-m02)     <disk type='file' device='cdrom'>
	I0914 17:05:35.916617   27433 main.go:141] libmachine: (ha-929592-m02)       <source file='/home/jenkins/minikube-integration/19643-8806/.minikube/machines/ha-929592-m02/boot2docker.iso'/>
	I0914 17:05:35.916626   27433 main.go:141] libmachine: (ha-929592-m02)       <target dev='hdc' bus='scsi'/>
	I0914 17:05:35.916631   27433 main.go:141] libmachine: (ha-929592-m02)       <readonly/>
	I0914 17:05:35.916635   27433 main.go:141] libmachine: (ha-929592-m02)     </disk>
	I0914 17:05:35.916640   27433 main.go:141] libmachine: (ha-929592-m02)     <disk type='file' device='disk'>
	I0914 17:05:35.916645   27433 main.go:141] libmachine: (ha-929592-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0914 17:05:35.916652   27433 main.go:141] libmachine: (ha-929592-m02)       <source file='/home/jenkins/minikube-integration/19643-8806/.minikube/machines/ha-929592-m02/ha-929592-m02.rawdisk'/>
	I0914 17:05:35.916657   27433 main.go:141] libmachine: (ha-929592-m02)       <target dev='hda' bus='virtio'/>
	I0914 17:05:35.916661   27433 main.go:141] libmachine: (ha-929592-m02)     </disk>
	I0914 17:05:35.916666   27433 main.go:141] libmachine: (ha-929592-m02)     <interface type='network'>
	I0914 17:05:35.916671   27433 main.go:141] libmachine: (ha-929592-m02)       <source network='mk-ha-929592'/>
	I0914 17:05:35.916676   27433 main.go:141] libmachine: (ha-929592-m02)       <model type='virtio'/>
	I0914 17:05:35.916680   27433 main.go:141] libmachine: (ha-929592-m02)     </interface>
	I0914 17:05:35.916688   27433 main.go:141] libmachine: (ha-929592-m02)     <interface type='network'>
	I0914 17:05:35.916728   27433 main.go:141] libmachine: (ha-929592-m02)       <source network='default'/>
	I0914 17:05:35.916754   27433 main.go:141] libmachine: (ha-929592-m02)       <model type='virtio'/>
	I0914 17:05:35.916768   27433 main.go:141] libmachine: (ha-929592-m02)     </interface>
	I0914 17:05:35.916779   27433 main.go:141] libmachine: (ha-929592-m02)     <serial type='pty'>
	I0914 17:05:35.916793   27433 main.go:141] libmachine: (ha-929592-m02)       <target port='0'/>
	I0914 17:05:35.916803   27433 main.go:141] libmachine: (ha-929592-m02)     </serial>
	I0914 17:05:35.916814   27433 main.go:141] libmachine: (ha-929592-m02)     <console type='pty'>
	I0914 17:05:35.916825   27433 main.go:141] libmachine: (ha-929592-m02)       <target type='serial' port='0'/>
	I0914 17:05:35.916833   27433 main.go:141] libmachine: (ha-929592-m02)     </console>
	I0914 17:05:35.916846   27433 main.go:141] libmachine: (ha-929592-m02)     <rng model='virtio'>
	I0914 17:05:35.916858   27433 main.go:141] libmachine: (ha-929592-m02)       <backend model='random'>/dev/random</backend>
	I0914 17:05:35.916869   27433 main.go:141] libmachine: (ha-929592-m02)     </rng>
	I0914 17:05:35.916878   27433 main.go:141] libmachine: (ha-929592-m02)     
	I0914 17:05:35.916887   27433 main.go:141] libmachine: (ha-929592-m02)     
	I0914 17:05:35.916897   27433 main.go:141] libmachine: (ha-929592-m02)   </devices>
	I0914 17:05:35.916908   27433 main.go:141] libmachine: (ha-929592-m02) </domain>
	I0914 17:05:35.916921   27433 main.go:141] libmachine: (ha-929592-m02) 
	I0914 17:05:35.923775   27433 main.go:141] libmachine: (ha-929592-m02) DBG | domain ha-929592-m02 has defined MAC address 52:54:00:f0:50:13 in network default
	I0914 17:05:35.924413   27433 main.go:141] libmachine: (ha-929592-m02) DBG | domain ha-929592-m02 has defined MAC address 52:54:00:23:9e:43 in network mk-ha-929592
	I0914 17:05:35.924431   27433 main.go:141] libmachine: (ha-929592-m02) Ensuring networks are active...
	I0914 17:05:35.925240   27433 main.go:141] libmachine: (ha-929592-m02) Ensuring network default is active
	I0914 17:05:35.925508   27433 main.go:141] libmachine: (ha-929592-m02) Ensuring network mk-ha-929592 is active
	I0914 17:05:35.925994   27433 main.go:141] libmachine: (ha-929592-m02) Getting domain xml...
	I0914 17:05:35.926731   27433 main.go:141] libmachine: (ha-929592-m02) Creating domain...
	I0914 17:05:37.161131   27433 main.go:141] libmachine: (ha-929592-m02) Waiting to get IP...
	I0914 17:05:37.161868   27433 main.go:141] libmachine: (ha-929592-m02) DBG | domain ha-929592-m02 has defined MAC address 52:54:00:23:9e:43 in network mk-ha-929592
	I0914 17:05:37.162235   27433 main.go:141] libmachine: (ha-929592-m02) DBG | unable to find current IP address of domain ha-929592-m02 in network mk-ha-929592
	I0914 17:05:37.162266   27433 main.go:141] libmachine: (ha-929592-m02) DBG | I0914 17:05:37.162221   27764 retry.go:31] will retry after 210.008934ms: waiting for machine to come up
	I0914 17:05:37.373575   27433 main.go:141] libmachine: (ha-929592-m02) DBG | domain ha-929592-m02 has defined MAC address 52:54:00:23:9e:43 in network mk-ha-929592
	I0914 17:05:37.374028   27433 main.go:141] libmachine: (ha-929592-m02) DBG | unable to find current IP address of domain ha-929592-m02 in network mk-ha-929592
	I0914 17:05:37.374056   27433 main.go:141] libmachine: (ha-929592-m02) DBG | I0914 17:05:37.373981   27764 retry.go:31] will retry after 387.717032ms: waiting for machine to come up
	I0914 17:05:37.763659   27433 main.go:141] libmachine: (ha-929592-m02) DBG | domain ha-929592-m02 has defined MAC address 52:54:00:23:9e:43 in network mk-ha-929592
	I0914 17:05:37.764117   27433 main.go:141] libmachine: (ha-929592-m02) DBG | unable to find current IP address of domain ha-929592-m02 in network mk-ha-929592
	I0914 17:05:37.764155   27433 main.go:141] libmachine: (ha-929592-m02) DBG | I0914 17:05:37.764041   27764 retry.go:31] will retry after 296.557307ms: waiting for machine to come up
	I0914 17:05:38.063231   27433 main.go:141] libmachine: (ha-929592-m02) DBG | domain ha-929592-m02 has defined MAC address 52:54:00:23:9e:43 in network mk-ha-929592
	I0914 17:05:38.063653   27433 main.go:141] libmachine: (ha-929592-m02) DBG | unable to find current IP address of domain ha-929592-m02 in network mk-ha-929592
	I0914 17:05:38.063682   27433 main.go:141] libmachine: (ha-929592-m02) DBG | I0914 17:05:38.063596   27764 retry.go:31] will retry after 575.323007ms: waiting for machine to come up
	I0914 17:05:38.640355   27433 main.go:141] libmachine: (ha-929592-m02) DBG | domain ha-929592-m02 has defined MAC address 52:54:00:23:9e:43 in network mk-ha-929592
	I0914 17:05:38.640798   27433 main.go:141] libmachine: (ha-929592-m02) DBG | unable to find current IP address of domain ha-929592-m02 in network mk-ha-929592
	I0914 17:05:38.640836   27433 main.go:141] libmachine: (ha-929592-m02) DBG | I0914 17:05:38.640752   27764 retry.go:31] will retry after 534.390905ms: waiting for machine to come up
	I0914 17:05:39.176461   27433 main.go:141] libmachine: (ha-929592-m02) DBG | domain ha-929592-m02 has defined MAC address 52:54:00:23:9e:43 in network mk-ha-929592
	I0914 17:05:39.176910   27433 main.go:141] libmachine: (ha-929592-m02) DBG | unable to find current IP address of domain ha-929592-m02 in network mk-ha-929592
	I0914 17:05:39.176993   27433 main.go:141] libmachine: (ha-929592-m02) DBG | I0914 17:05:39.176864   27764 retry.go:31] will retry after 701.303758ms: waiting for machine to come up
	I0914 17:05:39.879456   27433 main.go:141] libmachine: (ha-929592-m02) DBG | domain ha-929592-m02 has defined MAC address 52:54:00:23:9e:43 in network mk-ha-929592
	I0914 17:05:39.879939   27433 main.go:141] libmachine: (ha-929592-m02) DBG | unable to find current IP address of domain ha-929592-m02 in network mk-ha-929592
	I0914 17:05:39.879964   27433 main.go:141] libmachine: (ha-929592-m02) DBG | I0914 17:05:39.879880   27764 retry.go:31] will retry after 1.123994818s: waiting for machine to come up
	I0914 17:05:41.005662   27433 main.go:141] libmachine: (ha-929592-m02) DBG | domain ha-929592-m02 has defined MAC address 52:54:00:23:9e:43 in network mk-ha-929592
	I0914 17:05:41.005979   27433 main.go:141] libmachine: (ha-929592-m02) DBG | unable to find current IP address of domain ha-929592-m02 in network mk-ha-929592
	I0914 17:05:41.006009   27433 main.go:141] libmachine: (ha-929592-m02) DBG | I0914 17:05:41.005931   27764 retry.go:31] will retry after 1.069436048s: waiting for machine to come up
	I0914 17:05:42.077062   27433 main.go:141] libmachine: (ha-929592-m02) DBG | domain ha-929592-m02 has defined MAC address 52:54:00:23:9e:43 in network mk-ha-929592
	I0914 17:05:42.077364   27433 main.go:141] libmachine: (ha-929592-m02) DBG | unable to find current IP address of domain ha-929592-m02 in network mk-ha-929592
	I0914 17:05:42.077410   27433 main.go:141] libmachine: (ha-929592-m02) DBG | I0914 17:05:42.077345   27764 retry.go:31] will retry after 1.46285432s: waiting for machine to come up
	I0914 17:05:43.541612   27433 main.go:141] libmachine: (ha-929592-m02) DBG | domain ha-929592-m02 has defined MAC address 52:54:00:23:9e:43 in network mk-ha-929592
	I0914 17:05:43.542119   27433 main.go:141] libmachine: (ha-929592-m02) DBG | unable to find current IP address of domain ha-929592-m02 in network mk-ha-929592
	I0914 17:05:43.542142   27433 main.go:141] libmachine: (ha-929592-m02) DBG | I0914 17:05:43.542096   27764 retry.go:31] will retry after 2.129066139s: waiting for machine to come up
	I0914 17:05:45.672329   27433 main.go:141] libmachine: (ha-929592-m02) DBG | domain ha-929592-m02 has defined MAC address 52:54:00:23:9e:43 in network mk-ha-929592
	I0914 17:05:45.672756   27433 main.go:141] libmachine: (ha-929592-m02) DBG | unable to find current IP address of domain ha-929592-m02 in network mk-ha-929592
	I0914 17:05:45.672787   27433 main.go:141] libmachine: (ha-929592-m02) DBG | I0914 17:05:45.672709   27764 retry.go:31] will retry after 2.11667218s: waiting for machine to come up
	I0914 17:05:47.791959   27433 main.go:141] libmachine: (ha-929592-m02) DBG | domain ha-929592-m02 has defined MAC address 52:54:00:23:9e:43 in network mk-ha-929592
	I0914 17:05:47.792398   27433 main.go:141] libmachine: (ha-929592-m02) DBG | unable to find current IP address of domain ha-929592-m02 in network mk-ha-929592
	I0914 17:05:47.792421   27433 main.go:141] libmachine: (ha-929592-m02) DBG | I0914 17:05:47.792360   27764 retry.go:31] will retry after 3.267136095s: waiting for machine to come up
	I0914 17:05:51.061117   27433 main.go:141] libmachine: (ha-929592-m02) DBG | domain ha-929592-m02 has defined MAC address 52:54:00:23:9e:43 in network mk-ha-929592
	I0914 17:05:51.061619   27433 main.go:141] libmachine: (ha-929592-m02) DBG | unable to find current IP address of domain ha-929592-m02 in network mk-ha-929592
	I0914 17:05:51.061653   27433 main.go:141] libmachine: (ha-929592-m02) DBG | I0914 17:05:51.061567   27764 retry.go:31] will retry after 3.623977804s: waiting for machine to come up
	I0914 17:05:54.688326   27433 main.go:141] libmachine: (ha-929592-m02) DBG | domain ha-929592-m02 has defined MAC address 52:54:00:23:9e:43 in network mk-ha-929592
	I0914 17:05:54.688750   27433 main.go:141] libmachine: (ha-929592-m02) DBG | unable to find current IP address of domain ha-929592-m02 in network mk-ha-929592
	I0914 17:05:54.688779   27433 main.go:141] libmachine: (ha-929592-m02) DBG | I0914 17:05:54.688708   27764 retry.go:31] will retry after 4.926570221s: waiting for machine to come up
	I0914 17:05:59.619920   27433 main.go:141] libmachine: (ha-929592-m02) DBG | domain ha-929592-m02 has defined MAC address 52:54:00:23:9e:43 in network mk-ha-929592
	I0914 17:05:59.620387   27433 main.go:141] libmachine: (ha-929592-m02) Found IP for machine: 192.168.39.148
	I0914 17:05:59.620415   27433 main.go:141] libmachine: (ha-929592-m02) DBG | domain ha-929592-m02 has current primary IP address 192.168.39.148 and MAC address 52:54:00:23:9e:43 in network mk-ha-929592
	I0914 17:05:59.620428   27433 main.go:141] libmachine: (ha-929592-m02) Reserving static IP address...
	I0914 17:05:59.620759   27433 main.go:141] libmachine: (ha-929592-m02) DBG | unable to find host DHCP lease matching {name: "ha-929592-m02", mac: "52:54:00:23:9e:43", ip: "192.168.39.148"} in network mk-ha-929592
	I0914 17:05:59.692746   27433 main.go:141] libmachine: (ha-929592-m02) Reserved static IP address: 192.168.39.148
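The "will retry after ..." lines above show the kvm2 driver polling libvirt for a DHCP lease with a steadily growing delay until the new domain reports an IP address. Below is a minimal Go sketch of that retry pattern; it is not minikube's actual retry.go, and lookupIP plus the delay growth factor are assumptions for illustration only.

// Sketch of the grow-and-retry wait seen above (hypothetical, not minikube code).
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// lookupIP is a hypothetical stand-in for querying the libvirt DHCP leases;
// it keeps failing until the lease finally appears (here, on the 5th attempt).
func lookupIP(attempt int) (string, error) {
	if attempt < 5 {
		return "", errors.New("unable to find current IP address")
	}
	return "192.168.39.148", nil
}

func main() {
	delay := 200 * time.Millisecond
	for attempt := 1; ; attempt++ {
		ip, err := lookupIP(attempt)
		if err == nil {
			fmt.Println("Found IP for machine:", ip)
			return
		}
		// Add jitter and grow the base delay, roughly like the
		// 210ms -> 387ms -> ... -> 4.9s intervals in the log above.
		wait := delay + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", wait)
		time.Sleep(wait)
		delay = delay * 3 / 2
	}
}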
	I0914 17:05:59.692768   27433 main.go:141] libmachine: (ha-929592-m02) Waiting for SSH to be available...
	I0914 17:05:59.692778   27433 main.go:141] libmachine: (ha-929592-m02) DBG | Getting to WaitForSSH function...
	I0914 17:05:59.695628   27433 main.go:141] libmachine: (ha-929592-m02) DBG | domain ha-929592-m02 has defined MAC address 52:54:00:23:9e:43 in network mk-ha-929592
	I0914 17:05:59.696183   27433 main.go:141] libmachine: (ha-929592-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:9e:43", ip: ""} in network mk-ha-929592: {Iface:virbr1 ExpiryTime:2024-09-14 18:05:49 +0000 UTC Type:0 Mac:52:54:00:23:9e:43 Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:minikube Clientid:01:52:54:00:23:9e:43}
	I0914 17:05:59.696213   27433 main.go:141] libmachine: (ha-929592-m02) DBG | domain ha-929592-m02 has defined IP address 192.168.39.148 and MAC address 52:54:00:23:9e:43 in network mk-ha-929592
	I0914 17:05:59.696414   27433 main.go:141] libmachine: (ha-929592-m02) DBG | Using SSH client type: external
	I0914 17:05:59.696512   27433 main.go:141] libmachine: (ha-929592-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19643-8806/.minikube/machines/ha-929592-m02/id_rsa (-rw-------)
	I0914 17:05:59.696582   27433 main.go:141] libmachine: (ha-929592-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.148 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19643-8806/.minikube/machines/ha-929592-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0914 17:05:59.696602   27433 main.go:141] libmachine: (ha-929592-m02) DBG | About to run SSH command:
	I0914 17:05:59.696614   27433 main.go:141] libmachine: (ha-929592-m02) DBG | exit 0
	I0914 17:05:59.822260   27433 main.go:141] libmachine: (ha-929592-m02) DBG | SSH cmd err, output: <nil>: 
	I0914 17:05:59.822527   27433 main.go:141] libmachine: (ha-929592-m02) KVM machine creation complete!
	I0914 17:05:59.822904   27433 main.go:141] libmachine: (ha-929592-m02) Calling .GetConfigRaw
	I0914 17:05:59.823568   27433 main.go:141] libmachine: (ha-929592-m02) Calling .DriverName
	I0914 17:05:59.823762   27433 main.go:141] libmachine: (ha-929592-m02) Calling .DriverName
	I0914 17:05:59.823958   27433 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0914 17:05:59.823973   27433 main.go:141] libmachine: (ha-929592-m02) Calling .GetState
	I0914 17:05:59.825060   27433 main.go:141] libmachine: Detecting operating system of created instance...
	I0914 17:05:59.825083   27433 main.go:141] libmachine: Waiting for SSH to be available...
	I0914 17:05:59.825094   27433 main.go:141] libmachine: Getting to WaitForSSH function...
	I0914 17:05:59.825104   27433 main.go:141] libmachine: (ha-929592-m02) Calling .GetSSHHostname
	I0914 17:05:59.827539   27433 main.go:141] libmachine: (ha-929592-m02) DBG | domain ha-929592-m02 has defined MAC address 52:54:00:23:9e:43 in network mk-ha-929592
	I0914 17:05:59.827896   27433 main.go:141] libmachine: (ha-929592-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:9e:43", ip: ""} in network mk-ha-929592: {Iface:virbr1 ExpiryTime:2024-09-14 18:05:49 +0000 UTC Type:0 Mac:52:54:00:23:9e:43 Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:ha-929592-m02 Clientid:01:52:54:00:23:9e:43}
	I0914 17:05:59.827924   27433 main.go:141] libmachine: (ha-929592-m02) DBG | domain ha-929592-m02 has defined IP address 192.168.39.148 and MAC address 52:54:00:23:9e:43 in network mk-ha-929592
	I0914 17:05:59.828060   27433 main.go:141] libmachine: (ha-929592-m02) Calling .GetSSHPort
	I0914 17:05:59.828191   27433 main.go:141] libmachine: (ha-929592-m02) Calling .GetSSHKeyPath
	I0914 17:05:59.828313   27433 main.go:141] libmachine: (ha-929592-m02) Calling .GetSSHKeyPath
	I0914 17:05:59.828438   27433 main.go:141] libmachine: (ha-929592-m02) Calling .GetSSHUsername
	I0914 17:05:59.828607   27433 main.go:141] libmachine: Using SSH client type: native
	I0914 17:05:59.828944   27433 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.148 22 <nil> <nil>}
	I0914 17:05:59.828962   27433 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0914 17:05:59.937315   27433 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0914 17:05:59.937336   27433 main.go:141] libmachine: Detecting the provisioner...
	I0914 17:05:59.937345   27433 main.go:141] libmachine: (ha-929592-m02) Calling .GetSSHHostname
	I0914 17:05:59.940018   27433 main.go:141] libmachine: (ha-929592-m02) DBG | domain ha-929592-m02 has defined MAC address 52:54:00:23:9e:43 in network mk-ha-929592
	I0914 17:05:59.940354   27433 main.go:141] libmachine: (ha-929592-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:9e:43", ip: ""} in network mk-ha-929592: {Iface:virbr1 ExpiryTime:2024-09-14 18:05:49 +0000 UTC Type:0 Mac:52:54:00:23:9e:43 Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:ha-929592-m02 Clientid:01:52:54:00:23:9e:43}
	I0914 17:05:59.940376   27433 main.go:141] libmachine: (ha-929592-m02) DBG | domain ha-929592-m02 has defined IP address 192.168.39.148 and MAC address 52:54:00:23:9e:43 in network mk-ha-929592
	I0914 17:05:59.940584   27433 main.go:141] libmachine: (ha-929592-m02) Calling .GetSSHPort
	I0914 17:05:59.940793   27433 main.go:141] libmachine: (ha-929592-m02) Calling .GetSSHKeyPath
	I0914 17:05:59.940946   27433 main.go:141] libmachine: (ha-929592-m02) Calling .GetSSHKeyPath
	I0914 17:05:59.941095   27433 main.go:141] libmachine: (ha-929592-m02) Calling .GetSSHUsername
	I0914 17:05:59.941291   27433 main.go:141] libmachine: Using SSH client type: native
	I0914 17:05:59.941455   27433 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.148 22 <nil> <nil>}
	I0914 17:05:59.941466   27433 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0914 17:06:00.051065   27433 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0914 17:06:00.051190   27433 main.go:141] libmachine: found compatible host: buildroot
	I0914 17:06:00.051205   27433 main.go:141] libmachine: Provisioning with buildroot...
	I0914 17:06:00.051218   27433 main.go:141] libmachine: (ha-929592-m02) Calling .GetMachineName
	I0914 17:06:00.051471   27433 buildroot.go:166] provisioning hostname "ha-929592-m02"
	I0914 17:06:00.051503   27433 main.go:141] libmachine: (ha-929592-m02) Calling .GetMachineName
	I0914 17:06:00.051704   27433 main.go:141] libmachine: (ha-929592-m02) Calling .GetSSHHostname
	I0914 17:06:00.054191   27433 main.go:141] libmachine: (ha-929592-m02) DBG | domain ha-929592-m02 has defined MAC address 52:54:00:23:9e:43 in network mk-ha-929592
	I0914 17:06:00.054504   27433 main.go:141] libmachine: (ha-929592-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:9e:43", ip: ""} in network mk-ha-929592: {Iface:virbr1 ExpiryTime:2024-09-14 18:05:49 +0000 UTC Type:0 Mac:52:54:00:23:9e:43 Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:ha-929592-m02 Clientid:01:52:54:00:23:9e:43}
	I0914 17:06:00.054531   27433 main.go:141] libmachine: (ha-929592-m02) DBG | domain ha-929592-m02 has defined IP address 192.168.39.148 and MAC address 52:54:00:23:9e:43 in network mk-ha-929592
	I0914 17:06:00.054677   27433 main.go:141] libmachine: (ha-929592-m02) Calling .GetSSHPort
	I0914 17:06:00.054869   27433 main.go:141] libmachine: (ha-929592-m02) Calling .GetSSHKeyPath
	I0914 17:06:00.055049   27433 main.go:141] libmachine: (ha-929592-m02) Calling .GetSSHKeyPath
	I0914 17:06:00.055206   27433 main.go:141] libmachine: (ha-929592-m02) Calling .GetSSHUsername
	I0914 17:06:00.055386   27433 main.go:141] libmachine: Using SSH client type: native
	I0914 17:06:00.055566   27433 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.148 22 <nil> <nil>}
	I0914 17:06:00.055579   27433 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-929592-m02 && echo "ha-929592-m02" | sudo tee /etc/hostname
	I0914 17:06:00.175884   27433 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-929592-m02
	
	I0914 17:06:00.175913   27433 main.go:141] libmachine: (ha-929592-m02) Calling .GetSSHHostname
	I0914 17:06:00.178888   27433 main.go:141] libmachine: (ha-929592-m02) DBG | domain ha-929592-m02 has defined MAC address 52:54:00:23:9e:43 in network mk-ha-929592
	I0914 17:06:00.179268   27433 main.go:141] libmachine: (ha-929592-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:9e:43", ip: ""} in network mk-ha-929592: {Iface:virbr1 ExpiryTime:2024-09-14 18:05:49 +0000 UTC Type:0 Mac:52:54:00:23:9e:43 Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:ha-929592-m02 Clientid:01:52:54:00:23:9e:43}
	I0914 17:06:00.179305   27433 main.go:141] libmachine: (ha-929592-m02) DBG | domain ha-929592-m02 has defined IP address 192.168.39.148 and MAC address 52:54:00:23:9e:43 in network mk-ha-929592
	I0914 17:06:00.179468   27433 main.go:141] libmachine: (ha-929592-m02) Calling .GetSSHPort
	I0914 17:06:00.179633   27433 main.go:141] libmachine: (ha-929592-m02) Calling .GetSSHKeyPath
	I0914 17:06:00.179780   27433 main.go:141] libmachine: (ha-929592-m02) Calling .GetSSHKeyPath
	I0914 17:06:00.179900   27433 main.go:141] libmachine: (ha-929592-m02) Calling .GetSSHUsername
	I0914 17:06:00.180070   27433 main.go:141] libmachine: Using SSH client type: native
	I0914 17:06:00.180271   27433 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.148 22 <nil> <nil>}
	I0914 17:06:00.180288   27433 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-929592-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-929592-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-929592-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0914 17:06:00.295528   27433 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0914 17:06:00.295570   27433 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19643-8806/.minikube CaCertPath:/home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19643-8806/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19643-8806/.minikube}
	I0914 17:06:00.295592   27433 buildroot.go:174] setting up certificates
	I0914 17:06:00.295605   27433 provision.go:84] configureAuth start
	I0914 17:06:00.295614   27433 main.go:141] libmachine: (ha-929592-m02) Calling .GetMachineName
	I0914 17:06:00.295987   27433 main.go:141] libmachine: (ha-929592-m02) Calling .GetIP
	I0914 17:06:00.299234   27433 main.go:141] libmachine: (ha-929592-m02) DBG | domain ha-929592-m02 has defined MAC address 52:54:00:23:9e:43 in network mk-ha-929592
	I0914 17:06:00.299663   27433 main.go:141] libmachine: (ha-929592-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:9e:43", ip: ""} in network mk-ha-929592: {Iface:virbr1 ExpiryTime:2024-09-14 18:05:49 +0000 UTC Type:0 Mac:52:54:00:23:9e:43 Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:ha-929592-m02 Clientid:01:52:54:00:23:9e:43}
	I0914 17:06:00.299696   27433 main.go:141] libmachine: (ha-929592-m02) DBG | domain ha-929592-m02 has defined IP address 192.168.39.148 and MAC address 52:54:00:23:9e:43 in network mk-ha-929592
	I0914 17:06:00.299841   27433 main.go:141] libmachine: (ha-929592-m02) Calling .GetSSHHostname
	I0914 17:06:00.302288   27433 main.go:141] libmachine: (ha-929592-m02) DBG | domain ha-929592-m02 has defined MAC address 52:54:00:23:9e:43 in network mk-ha-929592
	I0914 17:06:00.302662   27433 main.go:141] libmachine: (ha-929592-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:9e:43", ip: ""} in network mk-ha-929592: {Iface:virbr1 ExpiryTime:2024-09-14 18:05:49 +0000 UTC Type:0 Mac:52:54:00:23:9e:43 Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:ha-929592-m02 Clientid:01:52:54:00:23:9e:43}
	I0914 17:06:00.302693   27433 main.go:141] libmachine: (ha-929592-m02) DBG | domain ha-929592-m02 has defined IP address 192.168.39.148 and MAC address 52:54:00:23:9e:43 in network mk-ha-929592
	I0914 17:06:00.302864   27433 provision.go:143] copyHostCerts
	I0914 17:06:00.302911   27433 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19643-8806/.minikube/cert.pem
	I0914 17:06:00.302950   27433 exec_runner.go:144] found /home/jenkins/minikube-integration/19643-8806/.minikube/cert.pem, removing ...
	I0914 17:06:00.302961   27433 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19643-8806/.minikube/cert.pem
	I0914 17:06:00.303093   27433 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19643-8806/.minikube/cert.pem (1123 bytes)
	I0914 17:06:00.303183   27433 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19643-8806/.minikube/key.pem
	I0914 17:06:00.303209   27433 exec_runner.go:144] found /home/jenkins/minikube-integration/19643-8806/.minikube/key.pem, removing ...
	I0914 17:06:00.303217   27433 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19643-8806/.minikube/key.pem
	I0914 17:06:00.303242   27433 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19643-8806/.minikube/key.pem (1675 bytes)
	I0914 17:06:00.303288   27433 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19643-8806/.minikube/ca.pem
	I0914 17:06:00.303306   27433 exec_runner.go:144] found /home/jenkins/minikube-integration/19643-8806/.minikube/ca.pem, removing ...
	I0914 17:06:00.303311   27433 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19643-8806/.minikube/ca.pem
	I0914 17:06:00.303332   27433 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19643-8806/.minikube/ca.pem (1082 bytes)
	I0914 17:06:00.303383   27433 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19643-8806/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca-key.pem org=jenkins.ha-929592-m02 san=[127.0.0.1 192.168.39.148 ha-929592-m02 localhost minikube]
	I0914 17:06:00.538356   27433 provision.go:177] copyRemoteCerts
	I0914 17:06:00.538412   27433 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0914 17:06:00.538434   27433 main.go:141] libmachine: (ha-929592-m02) Calling .GetSSHHostname
	I0914 17:06:00.540910   27433 main.go:141] libmachine: (ha-929592-m02) DBG | domain ha-929592-m02 has defined MAC address 52:54:00:23:9e:43 in network mk-ha-929592
	I0914 17:06:00.541329   27433 main.go:141] libmachine: (ha-929592-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:9e:43", ip: ""} in network mk-ha-929592: {Iface:virbr1 ExpiryTime:2024-09-14 18:05:49 +0000 UTC Type:0 Mac:52:54:00:23:9e:43 Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:ha-929592-m02 Clientid:01:52:54:00:23:9e:43}
	I0914 17:06:00.541350   27433 main.go:141] libmachine: (ha-929592-m02) DBG | domain ha-929592-m02 has defined IP address 192.168.39.148 and MAC address 52:54:00:23:9e:43 in network mk-ha-929592
	I0914 17:06:00.541555   27433 main.go:141] libmachine: (ha-929592-m02) Calling .GetSSHPort
	I0914 17:06:00.541741   27433 main.go:141] libmachine: (ha-929592-m02) Calling .GetSSHKeyPath
	I0914 17:06:00.541914   27433 main.go:141] libmachine: (ha-929592-m02) Calling .GetSSHUsername
	I0914 17:06:00.542066   27433 sshutil.go:53] new ssh client: &{IP:192.168.39.148 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/ha-929592-m02/id_rsa Username:docker}
	I0914 17:06:00.623831   27433 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0914 17:06:00.623907   27433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0914 17:06:00.647803   27433 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19643-8806/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0914 17:06:00.647883   27433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0914 17:06:00.671875   27433 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19643-8806/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0914 17:06:00.671937   27433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0914 17:06:00.696316   27433 provision.go:87] duration metric: took 400.698997ms to configureAuth
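The configureAuth step above generates a server certificate whose subject alternative names cover the new node (san=[127.0.0.1 192.168.39.148 ha-929592-m02 localhost minikube]) before the certs are copied to the machine. The standalone crypto/x509 sketch below illustrates the idea; it is not minikube's provision code, it self-signs instead of signing with the minikube CA, and the key size and certificate fields are assumptions.

// Hypothetical illustration of generating a server cert with the SANs logged above.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-929592-m02"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the cluster config
		KeyUsage:     x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"ha-929592-m02", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.148")},
	}
	// Self-signed here for brevity; the real flow signs with the minikube CA key.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	fmt.Print(string(pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})))
}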
	I0914 17:06:00.696347   27433 buildroot.go:189] setting minikube options for container-runtime
	I0914 17:06:00.696612   27433 config.go:182] Loaded profile config "ha-929592": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0914 17:06:00.696747   27433 main.go:141] libmachine: (ha-929592-m02) Calling .GetSSHHostname
	I0914 17:06:00.699617   27433 main.go:141] libmachine: (ha-929592-m02) DBG | domain ha-929592-m02 has defined MAC address 52:54:00:23:9e:43 in network mk-ha-929592
	I0914 17:06:00.699975   27433 main.go:141] libmachine: (ha-929592-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:9e:43", ip: ""} in network mk-ha-929592: {Iface:virbr1 ExpiryTime:2024-09-14 18:05:49 +0000 UTC Type:0 Mac:52:54:00:23:9e:43 Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:ha-929592-m02 Clientid:01:52:54:00:23:9e:43}
	I0914 17:06:00.700001   27433 main.go:141] libmachine: (ha-929592-m02) DBG | domain ha-929592-m02 has defined IP address 192.168.39.148 and MAC address 52:54:00:23:9e:43 in network mk-ha-929592
	I0914 17:06:00.700178   27433 main.go:141] libmachine: (ha-929592-m02) Calling .GetSSHPort
	I0914 17:06:00.700332   27433 main.go:141] libmachine: (ha-929592-m02) Calling .GetSSHKeyPath
	I0914 17:06:00.700583   27433 main.go:141] libmachine: (ha-929592-m02) Calling .GetSSHKeyPath
	I0914 17:06:00.700744   27433 main.go:141] libmachine: (ha-929592-m02) Calling .GetSSHUsername
	I0914 17:06:00.700901   27433 main.go:141] libmachine: Using SSH client type: native
	I0914 17:06:00.701096   27433 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.148 22 <nil> <nil>}
	I0914 17:06:00.701110   27433 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0914 17:06:00.927452   27433 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0914 17:06:00.927475   27433 main.go:141] libmachine: Checking connection to Docker...
	I0914 17:06:00.927492   27433 main.go:141] libmachine: (ha-929592-m02) Calling .GetURL
	I0914 17:06:00.928693   27433 main.go:141] libmachine: (ha-929592-m02) DBG | Using libvirt version 6000000
	I0914 17:06:00.931091   27433 main.go:141] libmachine: (ha-929592-m02) DBG | domain ha-929592-m02 has defined MAC address 52:54:00:23:9e:43 in network mk-ha-929592
	I0914 17:06:00.931467   27433 main.go:141] libmachine: (ha-929592-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:9e:43", ip: ""} in network mk-ha-929592: {Iface:virbr1 ExpiryTime:2024-09-14 18:05:49 +0000 UTC Type:0 Mac:52:54:00:23:9e:43 Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:ha-929592-m02 Clientid:01:52:54:00:23:9e:43}
	I0914 17:06:00.931495   27433 main.go:141] libmachine: (ha-929592-m02) DBG | domain ha-929592-m02 has defined IP address 192.168.39.148 and MAC address 52:54:00:23:9e:43 in network mk-ha-929592
	I0914 17:06:00.931675   27433 main.go:141] libmachine: Docker is up and running!
	I0914 17:06:00.931693   27433 main.go:141] libmachine: Reticulating splines...
	I0914 17:06:00.931704   27433 client.go:171] duration metric: took 25.39926256s to LocalClient.Create
	I0914 17:06:00.931728   27433 start.go:167] duration metric: took 25.399335014s to libmachine.API.Create "ha-929592"
	I0914 17:06:00.931739   27433 start.go:293] postStartSetup for "ha-929592-m02" (driver="kvm2")
	I0914 17:06:00.931753   27433 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0914 17:06:00.931771   27433 main.go:141] libmachine: (ha-929592-m02) Calling .DriverName
	I0914 17:06:00.932001   27433 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0914 17:06:00.932038   27433 main.go:141] libmachine: (ha-929592-m02) Calling .GetSSHHostname
	I0914 17:06:00.934290   27433 main.go:141] libmachine: (ha-929592-m02) DBG | domain ha-929592-m02 has defined MAC address 52:54:00:23:9e:43 in network mk-ha-929592
	I0914 17:06:00.934650   27433 main.go:141] libmachine: (ha-929592-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:9e:43", ip: ""} in network mk-ha-929592: {Iface:virbr1 ExpiryTime:2024-09-14 18:05:49 +0000 UTC Type:0 Mac:52:54:00:23:9e:43 Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:ha-929592-m02 Clientid:01:52:54:00:23:9e:43}
	I0914 17:06:00.934671   27433 main.go:141] libmachine: (ha-929592-m02) DBG | domain ha-929592-m02 has defined IP address 192.168.39.148 and MAC address 52:54:00:23:9e:43 in network mk-ha-929592
	I0914 17:06:00.934788   27433 main.go:141] libmachine: (ha-929592-m02) Calling .GetSSHPort
	I0914 17:06:00.934945   27433 main.go:141] libmachine: (ha-929592-m02) Calling .GetSSHKeyPath
	I0914 17:06:00.935073   27433 main.go:141] libmachine: (ha-929592-m02) Calling .GetSSHUsername
	I0914 17:06:00.935173   27433 sshutil.go:53] new ssh client: &{IP:192.168.39.148 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/ha-929592-m02/id_rsa Username:docker}
	I0914 17:06:01.020366   27433 ssh_runner.go:195] Run: cat /etc/os-release
	I0914 17:06:01.024445   27433 info.go:137] Remote host: Buildroot 2023.02.9
	I0914 17:06:01.024474   27433 filesync.go:126] Scanning /home/jenkins/minikube-integration/19643-8806/.minikube/addons for local assets ...
	I0914 17:06:01.024535   27433 filesync.go:126] Scanning /home/jenkins/minikube-integration/19643-8806/.minikube/files for local assets ...
	I0914 17:06:01.024612   27433 filesync.go:149] local asset: /home/jenkins/minikube-integration/19643-8806/.minikube/files/etc/ssl/certs/160162.pem -> 160162.pem in /etc/ssl/certs
	I0914 17:06:01.024621   27433 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19643-8806/.minikube/files/etc/ssl/certs/160162.pem -> /etc/ssl/certs/160162.pem
	I0914 17:06:01.024697   27433 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0914 17:06:01.033524   27433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/files/etc/ssl/certs/160162.pem --> /etc/ssl/certs/160162.pem (1708 bytes)
	I0914 17:06:01.055501   27433 start.go:296] duration metric: took 123.750654ms for postStartSetup
	I0914 17:06:01.055544   27433 main.go:141] libmachine: (ha-929592-m02) Calling .GetConfigRaw
	I0914 17:06:01.056168   27433 main.go:141] libmachine: (ha-929592-m02) Calling .GetIP
	I0914 17:06:01.058924   27433 main.go:141] libmachine: (ha-929592-m02) DBG | domain ha-929592-m02 has defined MAC address 52:54:00:23:9e:43 in network mk-ha-929592
	I0914 17:06:01.059289   27433 main.go:141] libmachine: (ha-929592-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:9e:43", ip: ""} in network mk-ha-929592: {Iface:virbr1 ExpiryTime:2024-09-14 18:05:49 +0000 UTC Type:0 Mac:52:54:00:23:9e:43 Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:ha-929592-m02 Clientid:01:52:54:00:23:9e:43}
	I0914 17:06:01.059318   27433 main.go:141] libmachine: (ha-929592-m02) DBG | domain ha-929592-m02 has defined IP address 192.168.39.148 and MAC address 52:54:00:23:9e:43 in network mk-ha-929592
	I0914 17:06:01.059556   27433 profile.go:143] Saving config to /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/ha-929592/config.json ...
	I0914 17:06:01.059787   27433 start.go:128] duration metric: took 25.546229359s to createHost
	I0914 17:06:01.059820   27433 main.go:141] libmachine: (ha-929592-m02) Calling .GetSSHHostname
	I0914 17:06:01.062065   27433 main.go:141] libmachine: (ha-929592-m02) DBG | domain ha-929592-m02 has defined MAC address 52:54:00:23:9e:43 in network mk-ha-929592
	I0914 17:06:01.062470   27433 main.go:141] libmachine: (ha-929592-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:9e:43", ip: ""} in network mk-ha-929592: {Iface:virbr1 ExpiryTime:2024-09-14 18:05:49 +0000 UTC Type:0 Mac:52:54:00:23:9e:43 Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:ha-929592-m02 Clientid:01:52:54:00:23:9e:43}
	I0914 17:06:01.062490   27433 main.go:141] libmachine: (ha-929592-m02) DBG | domain ha-929592-m02 has defined IP address 192.168.39.148 and MAC address 52:54:00:23:9e:43 in network mk-ha-929592
	I0914 17:06:01.062604   27433 main.go:141] libmachine: (ha-929592-m02) Calling .GetSSHPort
	I0914 17:06:01.062769   27433 main.go:141] libmachine: (ha-929592-m02) Calling .GetSSHKeyPath
	I0914 17:06:01.062908   27433 main.go:141] libmachine: (ha-929592-m02) Calling .GetSSHKeyPath
	I0914 17:06:01.063007   27433 main.go:141] libmachine: (ha-929592-m02) Calling .GetSSHUsername
	I0914 17:06:01.063136   27433 main.go:141] libmachine: Using SSH client type: native
	I0914 17:06:01.063334   27433 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.148 22 <nil> <nil>}
	I0914 17:06:01.063346   27433 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0914 17:06:01.170835   27433 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726333561.132255588
	
	I0914 17:06:01.170865   27433 fix.go:216] guest clock: 1726333561.132255588
	I0914 17:06:01.170883   27433 fix.go:229] Guest: 2024-09-14 17:06:01.132255588 +0000 UTC Remote: 2024-09-14 17:06:01.059806988 +0000 UTC m=+68.731004663 (delta=72.4486ms)
	I0914 17:06:01.170908   27433 fix.go:200] guest clock delta is within tolerance: 72.4486ms
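The guest clock check above is a straight subtraction of the host-side reference time from the time reported by the guest; the values below are copied from the log and reproduce the reported 72.4486ms delta. The 2-second tolerance is an assumed value for illustration, not one taken from minikube.

// Reproducing the clock-delta arithmetic from the log (tolerance is assumed).
package main

import (
	"fmt"
	"time"
)

func main() {
	guest := time.Unix(0, 1726333561132255588)  // guest clock, nanoseconds
	remote := time.Unix(0, 1726333561059806988) // host-side reference, nanoseconds
	delta := guest.Sub(remote)
	fmt.Println("guest clock delta:", delta)                // 72.4486ms
	fmt.Println("within assumed tolerance:", delta < 2*time.Second)
}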
	I0914 17:06:01.170915   27433 start.go:83] releasing machines lock for "ha-929592-m02", held for 25.65743831s
	I0914 17:06:01.170947   27433 main.go:141] libmachine: (ha-929592-m02) Calling .DriverName
	I0914 17:06:01.171190   27433 main.go:141] libmachine: (ha-929592-m02) Calling .GetIP
	I0914 17:06:01.173690   27433 main.go:141] libmachine: (ha-929592-m02) DBG | domain ha-929592-m02 has defined MAC address 52:54:00:23:9e:43 in network mk-ha-929592
	I0914 17:06:01.174044   27433 main.go:141] libmachine: (ha-929592-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:9e:43", ip: ""} in network mk-ha-929592: {Iface:virbr1 ExpiryTime:2024-09-14 18:05:49 +0000 UTC Type:0 Mac:52:54:00:23:9e:43 Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:ha-929592-m02 Clientid:01:52:54:00:23:9e:43}
	I0914 17:06:01.174086   27433 main.go:141] libmachine: (ha-929592-m02) DBG | domain ha-929592-m02 has defined IP address 192.168.39.148 and MAC address 52:54:00:23:9e:43 in network mk-ha-929592
	I0914 17:06:01.176439   27433 out.go:177] * Found network options:
	I0914 17:06:01.177882   27433 out.go:177]   - NO_PROXY=192.168.39.54
	W0914 17:06:01.178995   27433 proxy.go:119] fail to check proxy env: Error ip not in block
	I0914 17:06:01.179041   27433 main.go:141] libmachine: (ha-929592-m02) Calling .DriverName
	I0914 17:06:01.179577   27433 main.go:141] libmachine: (ha-929592-m02) Calling .DriverName
	I0914 17:06:01.179750   27433 main.go:141] libmachine: (ha-929592-m02) Calling .DriverName
	I0914 17:06:01.179818   27433 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0914 17:06:01.179854   27433 main.go:141] libmachine: (ha-929592-m02) Calling .GetSSHHostname
	W0914 17:06:01.179902   27433 proxy.go:119] fail to check proxy env: Error ip not in block
	I0914 17:06:01.179998   27433 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0914 17:06:01.180020   27433 main.go:141] libmachine: (ha-929592-m02) Calling .GetSSHHostname
	I0914 17:06:01.182388   27433 main.go:141] libmachine: (ha-929592-m02) DBG | domain ha-929592-m02 has defined MAC address 52:54:00:23:9e:43 in network mk-ha-929592
	I0914 17:06:01.182620   27433 main.go:141] libmachine: (ha-929592-m02) DBG | domain ha-929592-m02 has defined MAC address 52:54:00:23:9e:43 in network mk-ha-929592
	I0914 17:06:01.182761   27433 main.go:141] libmachine: (ha-929592-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:9e:43", ip: ""} in network mk-ha-929592: {Iface:virbr1 ExpiryTime:2024-09-14 18:05:49 +0000 UTC Type:0 Mac:52:54:00:23:9e:43 Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:ha-929592-m02 Clientid:01:52:54:00:23:9e:43}
	I0914 17:06:01.182784   27433 main.go:141] libmachine: (ha-929592-m02) DBG | domain ha-929592-m02 has defined IP address 192.168.39.148 and MAC address 52:54:00:23:9e:43 in network mk-ha-929592
	I0914 17:06:01.182922   27433 main.go:141] libmachine: (ha-929592-m02) Calling .GetSSHPort
	I0914 17:06:01.183042   27433 main.go:141] libmachine: (ha-929592-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:9e:43", ip: ""} in network mk-ha-929592: {Iface:virbr1 ExpiryTime:2024-09-14 18:05:49 +0000 UTC Type:0 Mac:52:54:00:23:9e:43 Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:ha-929592-m02 Clientid:01:52:54:00:23:9e:43}
	I0914 17:06:01.183065   27433 main.go:141] libmachine: (ha-929592-m02) DBG | domain ha-929592-m02 has defined IP address 192.168.39.148 and MAC address 52:54:00:23:9e:43 in network mk-ha-929592
	I0914 17:06:01.183100   27433 main.go:141] libmachine: (ha-929592-m02) Calling .GetSSHKeyPath
	I0914 17:06:01.183219   27433 main.go:141] libmachine: (ha-929592-m02) Calling .GetSSHPort
	I0914 17:06:01.183286   27433 main.go:141] libmachine: (ha-929592-m02) Calling .GetSSHUsername
	I0914 17:06:01.183439   27433 main.go:141] libmachine: (ha-929592-m02) Calling .GetSSHKeyPath
	I0914 17:06:01.183446   27433 sshutil.go:53] new ssh client: &{IP:192.168.39.148 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/ha-929592-m02/id_rsa Username:docker}
	I0914 17:06:01.183586   27433 main.go:141] libmachine: (ha-929592-m02) Calling .GetSSHUsername
	I0914 17:06:01.183708   27433 sshutil.go:53] new ssh client: &{IP:192.168.39.148 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/ha-929592-m02/id_rsa Username:docker}
	I0914 17:06:01.424976   27433 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0914 17:06:01.430825   27433 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0914 17:06:01.430885   27433 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0914 17:06:01.445943   27433 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
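Note: the two steps above locate any bridge/podman CNI configs under /etc/cni/net.d and rename them with a .mk_disabled suffix so they cannot conflict with the CNI minikube installs (here, 87-podman-bridge.conflist was disabled). A minimal standalone Go sketch of that rename step; the glob patterns and suffix come from the log, while main() and the error handling are illustrative, not minikube's code:

```go
// Disable conflicting bridge/podman CNI configs by renaming them, as in the log.
package main

import (
	"fmt"
	"log"
	"os"
	"path/filepath"
)

func main() {
	patterns := []string{"/etc/cni/net.d/*bridge*", "/etc/cni/net.d/*podman*"}
	for _, p := range patterns {
		matches, err := filepath.Glob(p)
		if err != nil {
			log.Fatal(err)
		}
		for _, m := range matches {
			if filepath.Ext(m) == ".mk_disabled" {
				continue // already disabled on a previous run
			}
			if err := os.Rename(m, m+".mk_disabled"); err != nil {
				log.Fatalf("rename %s: %v", m, err)
			}
			fmt.Printf("disabled %s\n", m)
		}
	}
}
```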
	I0914 17:06:01.445965   27433 start.go:495] detecting cgroup driver to use...
	I0914 17:06:01.446044   27433 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0914 17:06:01.465516   27433 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0914 17:06:01.481232   27433 docker.go:217] disabling cri-docker service (if available) ...
	I0914 17:06:01.481292   27433 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0914 17:06:01.496727   27433 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0914 17:06:01.510206   27433 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0914 17:06:01.626699   27433 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0914 17:06:01.778807   27433 docker.go:233] disabling docker service ...
	I0914 17:06:01.778872   27433 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0914 17:06:01.792872   27433 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0914 17:06:01.805145   27433 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0914 17:06:01.954030   27433 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0914 17:06:02.076503   27433 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0914 17:06:02.090192   27433 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0914 17:06:02.108104   27433 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0914 17:06:02.108165   27433 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 17:06:02.118586   27433 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0914 17:06:02.118659   27433 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 17:06:02.129037   27433 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 17:06:02.139271   27433 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 17:06:02.149307   27433 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0914 17:06:02.160226   27433 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 17:06:02.170053   27433 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 17:06:02.186445   27433 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 17:06:02.196545   27433 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0914 17:06:02.205667   27433 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0914 17:06:02.205727   27433 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0914 17:06:02.218845   27433 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0914 17:06:02.228051   27433 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 17:06:02.335821   27433 ssh_runner.go:195] Run: sudo systemctl restart crio
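Note: the cri-o preparation above is a handful of in-place edits to /etc/crio/crio.conf.d/02-crio.conf (pause image, cgroup_manager, conmon_cgroup, default_sysctls) followed by a daemon-reload and restart. A hedged sketch of the core edits using os/exec; it assumes root and mirrors only the sed commands shown in the log, not minikube's actual ssh_runner:

```go
// Apply the cri-o config edits from the log, then restart the service.
package main

import (
	"log"
	"os/exec"
)

func run(name string, args ...string) {
	if out, err := exec.Command(name, args...).CombinedOutput(); err != nil {
		log.Fatalf("%s %v: %v\n%s", name, args, err, out)
	}
}

func main() {
	conf := "/etc/crio/crio.conf.d/02-crio.conf"
	run("sed", "-i", `s|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|`, conf)
	run("sed", "-i", `s|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|`, conf)
	run("sed", "-i", `/conmon_cgroup = .*/d`, conf)
	run("sed", "-i", `/cgroup_manager = .*/a conmon_cgroup = "pod"`, conf)
	run("systemctl", "daemon-reload")
	run("systemctl", "restart", "crio")
}
```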
	I0914 17:06:02.426353   27433 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0914 17:06:02.426415   27433 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0914 17:06:02.430922   27433 start.go:563] Will wait 60s for crictl version
	I0914 17:06:02.430986   27433 ssh_runner.go:195] Run: which crictl
	I0914 17:06:02.434438   27433 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0914 17:06:02.473078   27433 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
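Note: both 60-second waits above reduce to "stat the CRI socket until it appears, then ask crictl for the runtime version". A small hedged Go sketch of the socket wait (path and timeout from the log; the polling interval is an assumption):

```go
// Wait up to 60s for the cri-o socket to exist before calling crictl.
package main

import (
	"fmt"
	"log"
	"os"
	"time"
)

func main() {
	const sock = "/var/run/crio/crio.sock"
	deadline := time.Now().Add(60 * time.Second)
	for {
		if _, err := os.Stat(sock); err == nil {
			fmt.Println("CRI socket is up:", sock)
			return
		}
		if time.Now().After(deadline) {
			log.Fatalf("timed out waiting for %s", sock)
		}
		time.Sleep(500 * time.Millisecond) // interval is illustrative
	}
}
```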
	I0914 17:06:02.473163   27433 ssh_runner.go:195] Run: crio --version
	I0914 17:06:02.505224   27433 ssh_runner.go:195] Run: crio --version
	I0914 17:06:02.534429   27433 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0914 17:06:02.535775   27433 out.go:177]   - env NO_PROXY=192.168.39.54
	I0914 17:06:02.536938   27433 main.go:141] libmachine: (ha-929592-m02) Calling .GetIP
	I0914 17:06:02.539641   27433 main.go:141] libmachine: (ha-929592-m02) DBG | domain ha-929592-m02 has defined MAC address 52:54:00:23:9e:43 in network mk-ha-929592
	I0914 17:06:02.539999   27433 main.go:141] libmachine: (ha-929592-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:9e:43", ip: ""} in network mk-ha-929592: {Iface:virbr1 ExpiryTime:2024-09-14 18:05:49 +0000 UTC Type:0 Mac:52:54:00:23:9e:43 Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:ha-929592-m02 Clientid:01:52:54:00:23:9e:43}
	I0914 17:06:02.540031   27433 main.go:141] libmachine: (ha-929592-m02) DBG | domain ha-929592-m02 has defined IP address 192.168.39.148 and MAC address 52:54:00:23:9e:43 in network mk-ha-929592
	I0914 17:06:02.540212   27433 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0914 17:06:02.544021   27433 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
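Note: the grep/echo pipeline above drops any stale host.minikube.internal entry from /etc/hosts and appends the current gateway mapping (192.168.39.1). The same idea as a hedged standalone Go snippet; it writes /etc/hosts directly and therefore needs root:

```go
// Replace the host.minikube.internal mapping in /etc/hosts, as the shell one-liner does.
package main

import (
	"log"
	"os"
	"strings"
)

func main() {
	const entry = "192.168.39.1\thost.minikube.internal"
	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		log.Fatal(err)
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\thost.minikube.internal") {
			continue // drop any stale mapping before re-adding it
		}
		kept = append(kept, line)
	}
	kept = append(kept, entry)
	if err := os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
		log.Fatal(err)
	}
}
```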
	I0914 17:06:02.556167   27433 mustload.go:65] Loading cluster: ha-929592
	I0914 17:06:02.556379   27433 config.go:182] Loaded profile config "ha-929592": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0914 17:06:02.556641   27433 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 17:06:02.556680   27433 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 17:06:02.573001   27433 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36985
	I0914 17:06:02.573569   27433 main.go:141] libmachine: () Calling .GetVersion
	I0914 17:06:02.574085   27433 main.go:141] libmachine: Using API Version  1
	I0914 17:06:02.574117   27433 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 17:06:02.574551   27433 main.go:141] libmachine: () Calling .GetMachineName
	I0914 17:06:02.574748   27433 main.go:141] libmachine: (ha-929592) Calling .GetState
	I0914 17:06:02.576363   27433 host.go:66] Checking if "ha-929592" exists ...
	I0914 17:06:02.576647   27433 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 17:06:02.576690   27433 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 17:06:02.591896   27433 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34363
	I0914 17:06:02.592362   27433 main.go:141] libmachine: () Calling .GetVersion
	I0914 17:06:02.592910   27433 main.go:141] libmachine: Using API Version  1
	I0914 17:06:02.592930   27433 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 17:06:02.593281   27433 main.go:141] libmachine: () Calling .GetMachineName
	I0914 17:06:02.593447   27433 main.go:141] libmachine: (ha-929592) Calling .DriverName
	I0914 17:06:02.593604   27433 certs.go:68] Setting up /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/ha-929592 for IP: 192.168.39.148
	I0914 17:06:02.593619   27433 certs.go:194] generating shared ca certs ...
	I0914 17:06:02.593645   27433 certs.go:226] acquiring lock for ca certs: {Name:mkb663a3180967f5f94f0c355b2cd55067394331 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 17:06:02.593773   27433 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19643-8806/.minikube/ca.key
	I0914 17:06:02.593810   27433 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19643-8806/.minikube/proxy-client-ca.key
	I0914 17:06:02.593821   27433 certs.go:256] generating profile certs ...
	I0914 17:06:02.593889   27433 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/ha-929592/client.key
	I0914 17:06:02.593911   27433 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/ha-929592/apiserver.key.a7b427e9
	I0914 17:06:02.593924   27433 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/ha-929592/apiserver.crt.a7b427e9 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.54 192.168.39.148 192.168.39.254]
	I0914 17:06:02.674183   27433 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/ha-929592/apiserver.crt.a7b427e9 ...
	I0914 17:06:02.674215   27433 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/ha-929592/apiserver.crt.a7b427e9: {Name:mk7b0abf9bde6718910e40cf89b039fc62438027 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 17:06:02.674380   27433 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/ha-929592/apiserver.key.a7b427e9 ...
	I0914 17:06:02.674392   27433 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/ha-929592/apiserver.key.a7b427e9: {Name:mkf46cb15e9565b29650076ca2280885cae50778 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 17:06:02.674460   27433 certs.go:381] copying /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/ha-929592/apiserver.crt.a7b427e9 -> /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/ha-929592/apiserver.crt
	I0914 17:06:02.674597   27433 certs.go:385] copying /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/ha-929592/apiserver.key.a7b427e9 -> /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/ha-929592/apiserver.key
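Note: the apiserver profile cert generated above is an ordinary x509 server certificate whose IP SANs cover the service ClusterIP (10.96.0.1), localhost, both control-plane node IPs and the HA VIP (192.168.39.254), signed by minikubeCA. A hedged crypto/x509 sketch of issuing such a cert; key sizes, lifetimes and the freshly generated CA are illustrative (minikube reuses its cached CA and writes the result into the profile directory):

```go
// Issue an apiserver serving cert with the SAN IPs seen in the log, signed by a CA.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	caKey, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	caCert, err := x509.ParseCertificate(caDER)
	if err != nil {
		log.Fatal(err)
	}

	srvKey, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
			net.ParseIP("192.168.39.54"), net.ParseIP("192.168.39.148"), net.ParseIP("192.168.39.254"),
		},
	}
	srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}
```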
	I0914 17:06:02.674719   27433 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/ha-929592/proxy-client.key
	I0914 17:06:02.674735   27433 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19643-8806/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0914 17:06:02.674748   27433 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19643-8806/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0914 17:06:02.674762   27433 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19643-8806/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0914 17:06:02.674774   27433 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19643-8806/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0914 17:06:02.674787   27433 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/ha-929592/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0914 17:06:02.674800   27433 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/ha-929592/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0914 17:06:02.674811   27433 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/ha-929592/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0914 17:06:02.674823   27433 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/ha-929592/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0914 17:06:02.674877   27433 certs.go:484] found cert: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/16016.pem (1338 bytes)
	W0914 17:06:02.674904   27433 certs.go:480] ignoring /home/jenkins/minikube-integration/19643-8806/.minikube/certs/16016_empty.pem, impossibly tiny 0 bytes
	I0914 17:06:02.674915   27433 certs.go:484] found cert: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca-key.pem (1679 bytes)
	I0914 17:06:02.674942   27433 certs.go:484] found cert: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca.pem (1082 bytes)
	I0914 17:06:02.674964   27433 certs.go:484] found cert: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/cert.pem (1123 bytes)
	I0914 17:06:02.674984   27433 certs.go:484] found cert: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/key.pem (1675 bytes)
	I0914 17:06:02.675019   27433 certs.go:484] found cert: /home/jenkins/minikube-integration/19643-8806/.minikube/files/etc/ssl/certs/160162.pem (1708 bytes)
	I0914 17:06:02.675052   27433 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19643-8806/.minikube/files/etc/ssl/certs/160162.pem -> /usr/share/ca-certificates/160162.pem
	I0914 17:06:02.675066   27433 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19643-8806/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0914 17:06:02.675078   27433 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/16016.pem -> /usr/share/ca-certificates/16016.pem
	I0914 17:06:02.675106   27433 main.go:141] libmachine: (ha-929592) Calling .GetSSHHostname
	I0914 17:06:02.678197   27433 main.go:141] libmachine: (ha-929592) DBG | domain ha-929592 has defined MAC address 52:54:00:5c:cb:09 in network mk-ha-929592
	I0914 17:06:02.678611   27433 main.go:141] libmachine: (ha-929592) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:cb:09", ip: ""} in network mk-ha-929592: {Iface:virbr1 ExpiryTime:2024-09-14 18:05:06 +0000 UTC Type:0 Mac:52:54:00:5c:cb:09 Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:ha-929592 Clientid:01:52:54:00:5c:cb:09}
	I0914 17:06:02.678637   27433 main.go:141] libmachine: (ha-929592) DBG | domain ha-929592 has defined IP address 192.168.39.54 and MAC address 52:54:00:5c:cb:09 in network mk-ha-929592
	I0914 17:06:02.678799   27433 main.go:141] libmachine: (ha-929592) Calling .GetSSHPort
	I0914 17:06:02.678987   27433 main.go:141] libmachine: (ha-929592) Calling .GetSSHKeyPath
	I0914 17:06:02.679150   27433 main.go:141] libmachine: (ha-929592) Calling .GetSSHUsername
	I0914 17:06:02.679293   27433 sshutil.go:53] new ssh client: &{IP:192.168.39.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/ha-929592/id_rsa Username:docker}
	I0914 17:06:02.754596   27433 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0914 17:06:02.759290   27433 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0914 17:06:02.769849   27433 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0914 17:06:02.774219   27433 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0914 17:06:02.784759   27433 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0914 17:06:02.788750   27433 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0914 17:06:02.799025   27433 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0914 17:06:02.802760   27433 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0914 17:06:02.812026   27433 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0914 17:06:02.815883   27433 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0914 17:06:02.825239   27433 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0914 17:06:02.828987   27433 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0914 17:06:02.839073   27433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0914 17:06:02.862561   27433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0914 17:06:02.885092   27433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0914 17:06:02.907879   27433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0914 17:06:02.931262   27433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/ha-929592/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0914 17:06:02.953838   27433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/ha-929592/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0914 17:06:02.977311   27433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/ha-929592/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0914 17:06:03.000261   27433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/ha-929592/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0914 17:06:03.022914   27433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/files/etc/ssl/certs/160162.pem --> /usr/share/ca-certificates/160162.pem (1708 bytes)
	I0914 17:06:03.045556   27433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0914 17:06:03.072140   27433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/certs/16016.pem --> /usr/share/ca-certificates/16016.pem (1338 bytes)
	I0914 17:06:03.097354   27433 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0914 17:06:03.113627   27433 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0914 17:06:03.129914   27433 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0914 17:06:03.145634   27433 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0914 17:06:03.161520   27433 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0914 17:06:03.177503   27433 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0914 17:06:03.193586   27433 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0914 17:06:03.210279   27433 ssh_runner.go:195] Run: openssl version
	I0914 17:06:03.215862   27433 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16016.pem && ln -fs /usr/share/ca-certificates/16016.pem /etc/ssl/certs/16016.pem"
	I0914 17:06:03.226494   27433 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16016.pem
	I0914 17:06:03.230749   27433 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 14 17:01 /usr/share/ca-certificates/16016.pem
	I0914 17:06:03.230811   27433 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16016.pem
	I0914 17:06:03.236532   27433 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16016.pem /etc/ssl/certs/51391683.0"
	I0914 17:06:03.247348   27433 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/160162.pem && ln -fs /usr/share/ca-certificates/160162.pem /etc/ssl/certs/160162.pem"
	I0914 17:06:03.258810   27433 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/160162.pem
	I0914 17:06:03.263294   27433 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 14 17:01 /usr/share/ca-certificates/160162.pem
	I0914 17:06:03.263368   27433 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/160162.pem
	I0914 17:06:03.268900   27433 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/160162.pem /etc/ssl/certs/3ec20f2e.0"
	I0914 17:06:03.279654   27433 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0914 17:06:03.289942   27433 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0914 17:06:03.294193   27433 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 14 16:44 /usr/share/ca-certificates/minikubeCA.pem
	I0914 17:06:03.294243   27433 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0914 17:06:03.299592   27433 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
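Note: each certificate staged under /usr/share/ca-certificates is made visible to OpenSSL-based clients by symlinking it into /etc/ssl/certs under its subject hash (e.g. b5213941.0 for minikubeCA.pem above). A hedged sketch that shells out to openssl for the hash and creates the link if it is missing:

```go
// Create the /etc/ssl/certs/<subject-hash>.0 symlink for a CA certificate.
package main

import (
	"fmt"
	"log"
	"os"
	"os/exec"
	"strings"
)

func main() {
	cert := "/usr/share/ca-certificates/minikubeCA.pem"
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
	if err != nil {
		log.Fatal(err)
	}
	hash := strings.TrimSpace(string(out)) // e.g. b5213941, as in the log
	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
	if _, err := os.Lstat(link); err == nil {
		return // link already exists
	}
	if err := os.Symlink(cert, link); err != nil {
		log.Fatal(err)
	}
}
```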
	I0914 17:06:03.309907   27433 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0914 17:06:03.314010   27433 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0914 17:06:03.314056   27433 kubeadm.go:934] updating node {m02 192.168.39.148 8443 v1.31.1 crio true true} ...
	I0914 17:06:03.314182   27433 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-929592-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.148
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-929592 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0914 17:06:03.314209   27433 kube-vip.go:115] generating kube-vip config ...
	I0914 17:06:03.314241   27433 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0914 17:06:03.332773   27433 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0914 17:06:03.332844   27433 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
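Note: the static pod manifest above runs kube-vip on every control-plane node: it advertises 192.168.39.254 via ARP, uses a Lease (plndr-cp-lock) for leader election, and load-balances port 8443 so the VIP stays a valid apiserver endpoint as control planes come and go. A hedged, standalone probe of that endpoint (TLS verification is skipped only to keep the sketch self-contained; a real client would trust minikubeCA, and anonymous access to /healthz depends on the cluster's default RBAC):

```go
// Probe the HA VIP apiserver endpoint generated for this cluster.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"log"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://192.168.39.254:8443/healthz")
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("%s: %s\n", resp.Status, body)
}
```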
	I0914 17:06:03.332892   27433 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0914 17:06:03.346197   27433 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.1': No such file or directory
	
	Initiating transfer...
	I0914 17:06:03.346254   27433 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.1
	I0914 17:06:03.361915   27433 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256
	I0914 17:06:03.361949   27433 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19643-8806/.minikube/cache/linux/amd64/v1.31.1/kubectl -> /var/lib/minikube/binaries/v1.31.1/kubectl
	I0914 17:06:03.362005   27433 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl
	I0914 17:06:03.362034   27433 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/19643-8806/.minikube/cache/linux/amd64/v1.31.1/kubelet
	I0914 17:06:03.362057   27433 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/19643-8806/.minikube/cache/linux/amd64/v1.31.1/kubeadm
	I0914 17:06:03.366263   27433 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubectl': No such file or directory
	I0914 17:06:03.366294   27433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/cache/linux/amd64/v1.31.1/kubectl --> /var/lib/minikube/binaries/v1.31.1/kubectl (56381592 bytes)
	I0914 17:06:04.306352   27433 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19643-8806/.minikube/cache/linux/amd64/v1.31.1/kubeadm -> /var/lib/minikube/binaries/v1.31.1/kubeadm
	I0914 17:06:04.306428   27433 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm
	I0914 17:06:04.310986   27433 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubeadm': No such file or directory
	I0914 17:06:04.311021   27433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/cache/linux/amd64/v1.31.1/kubeadm --> /var/lib/minikube/binaries/v1.31.1/kubeadm (58290328 bytes)
	I0914 17:06:04.437086   27433 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 17:06:04.472561   27433 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19643-8806/.minikube/cache/linux/amd64/v1.31.1/kubelet -> /var/lib/minikube/binaries/v1.31.1/kubelet
	I0914 17:06:04.472652   27433 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet
	I0914 17:06:04.481645   27433 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubelet': No such file or directory
	I0914 17:06:04.481689   27433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/cache/linux/amd64/v1.31.1/kubelet --> /var/lib/minikube/binaries/v1.31.1/kubelet (76869944 bytes)
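Note: the kubectl/kubeadm/kubelet binaries above are fetched from dl.k8s.io with a checksum URL of the form `...kubelet?checksum=file:...kubelet.sha256`, cached locally, and scp'd into /var/lib/minikube/binaries/v1.31.1 only when missing on the node. A hedged sketch of the download-and-verify part (URLs from the log; caching and the SSH copy are omitted):

```go
// Download the v1.31.1 binaries and verify each against its published sha256.
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"log"
	"net/http"
	"os"
	"strings"
)

func fetch(url string) []byte {
	resp, err := http.Get(url)
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		log.Fatalf("GET %s: %s", url, resp.Status)
	}
	data, err := io.ReadAll(resp.Body)
	if err != nil {
		log.Fatal(err)
	}
	return data
}

func main() {
	const base = "https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/"
	for _, bin := range []string{"kubectl", "kubeadm", "kubelet"} {
		data := fetch(base + bin)
		want := strings.Fields(string(fetch(base + bin + ".sha256")))[0]
		sum := sha256.Sum256(data)
		if got := hex.EncodeToString(sum[:]); got != want {
			log.Fatalf("%s: checksum mismatch: got %s want %s", bin, got, want)
		}
		if err := os.WriteFile(bin, data, 0o755); err != nil {
			log.Fatal(err)
		}
		fmt.Printf("verified %s (%d bytes)\n", bin, len(data))
	}
}
```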
	I0914 17:06:04.894100   27433 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0914 17:06:04.906172   27433 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0914 17:06:04.923934   27433 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0914 17:06:04.943429   27433 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0914 17:06:04.960902   27433 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0914 17:06:04.965096   27433 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0914 17:06:04.977142   27433 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 17:06:05.100919   27433 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0914 17:06:05.118791   27433 host.go:66] Checking if "ha-929592" exists ...
	I0914 17:06:05.119235   27433 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 17:06:05.119291   27433 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 17:06:05.134754   27433 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42231
	I0914 17:06:05.135388   27433 main.go:141] libmachine: () Calling .GetVersion
	I0914 17:06:05.135932   27433 main.go:141] libmachine: Using API Version  1
	I0914 17:06:05.135953   27433 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 17:06:05.136295   27433 main.go:141] libmachine: () Calling .GetMachineName
	I0914 17:06:05.136514   27433 main.go:141] libmachine: (ha-929592) Calling .DriverName
	I0914 17:06:05.136651   27433 start.go:317] joinCluster: &{Name:ha-929592 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19643/minikube-v1.34.0-1726281733-19643-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726281268-19643@sha256:cce8be0c1ac4e3d852132008ef1cc1dcf5b79f708d025db83f146ae65db32e8e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 Cluster
Name:ha-929592 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.54 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.148 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration
:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0914 17:06:05.136779   27433 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0914 17:06:05.136798   27433 main.go:141] libmachine: (ha-929592) Calling .GetSSHHostname
	I0914 17:06:05.140027   27433 main.go:141] libmachine: (ha-929592) DBG | domain ha-929592 has defined MAC address 52:54:00:5c:cb:09 in network mk-ha-929592
	I0914 17:06:05.140431   27433 main.go:141] libmachine: (ha-929592) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:cb:09", ip: ""} in network mk-ha-929592: {Iface:virbr1 ExpiryTime:2024-09-14 18:05:06 +0000 UTC Type:0 Mac:52:54:00:5c:cb:09 Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:ha-929592 Clientid:01:52:54:00:5c:cb:09}
	I0914 17:06:05.140456   27433 main.go:141] libmachine: (ha-929592) DBG | domain ha-929592 has defined IP address 192.168.39.54 and MAC address 52:54:00:5c:cb:09 in network mk-ha-929592
	I0914 17:06:05.140610   27433 main.go:141] libmachine: (ha-929592) Calling .GetSSHPort
	I0914 17:06:05.140777   27433 main.go:141] libmachine: (ha-929592) Calling .GetSSHKeyPath
	I0914 17:06:05.140973   27433 main.go:141] libmachine: (ha-929592) Calling .GetSSHUsername
	I0914 17:06:05.141108   27433 sshutil.go:53] new ssh client: &{IP:192.168.39.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/ha-929592/id_rsa Username:docker}
	I0914 17:06:05.305267   27433 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.148 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0914 17:06:05.305343   27433 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 69bgkx.t8gcp42bom698swe --discovery-token-ca-cert-hash sha256:30a8b6e98108730ec92a246ad3f4e746211e79f910581e46cd69c4cd78384600 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-929592-m02 --control-plane --apiserver-advertise-address=192.168.39.148 --apiserver-bind-port=8443"
	I0914 17:06:27.237304   27433 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 69bgkx.t8gcp42bom698swe --discovery-token-ca-cert-hash sha256:30a8b6e98108730ec92a246ad3f4e746211e79f910581e46cd69c4cd78384600 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-929592-m02 --control-plane --apiserver-advertise-address=192.168.39.148 --apiserver-bind-port=8443": (21.931933299s)
	I0914 17:06:27.237345   27433 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0914 17:06:27.810007   27433 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-929592-m02 minikube.k8s.io/updated_at=2024_09_14T17_06_27_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=fbeeb744274463b05401c917e5ab21bbaf5ef95a minikube.k8s.io/name=ha-929592 minikube.k8s.io/primary=false
	I0914 17:06:27.964976   27433 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-929592-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0914 17:06:28.142890   27433 start.go:319] duration metric: took 23.006235295s to joinCluster
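Note: joining m02 therefore reduces to three commands: `kubeadm token create --print-join-command` on the primary, the printed `kubeadm join ... --control-plane` on m02, and a label/taint pass marking the new node as a secondary control plane. The labeling step is roughly equivalent to this hedged client-go snippet (kubeconfig path and label values taken from the log; a production caller would typically use a patch rather than a full update):

```go
// Label the freshly joined control-plane node, as the kubectl step above does.
package main

import (
	"context"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19643-8806/kubeconfig")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	ctx := context.Background()
	node, err := cs.CoreV1().Nodes().Get(ctx, "ha-929592-m02", metav1.GetOptions{})
	if err != nil {
		log.Fatal(err)
	}
	if node.Labels == nil {
		node.Labels = map[string]string{}
	}
	node.Labels["minikube.k8s.io/name"] = "ha-929592"
	node.Labels["minikube.k8s.io/primary"] = "false"
	if _, err := cs.CoreV1().Nodes().Update(ctx, node, metav1.UpdateOptions{}); err != nil {
		log.Fatal(err)
	}
}
```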
	I0914 17:06:28.142975   27433 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.39.148 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0914 17:06:28.143287   27433 config.go:182] Loaded profile config "ha-929592": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0914 17:06:28.144710   27433 out.go:177] * Verifying Kubernetes components...
	I0914 17:06:28.145892   27433 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 17:06:28.400701   27433 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0914 17:06:28.443879   27433 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19643-8806/kubeconfig
	I0914 17:06:28.444188   27433 kapi.go:59] client config for ha-929592: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19643-8806/.minikube/profiles/ha-929592/client.crt", KeyFile:"/home/jenkins/minikube-integration/19643-8806/.minikube/profiles/ha-929592/client.key", CAFile:"/home/jenkins/minikube-integration/19643-8806/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)},
UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f6fca0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0914 17:06:28.444306   27433 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.54:8443
	I0914 17:06:28.444625   27433 node_ready.go:35] waiting up to 6m0s for node "ha-929592-m02" to be "Ready" ...
	I0914 17:06:28.444789   27433 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-929592-m02
	I0914 17:06:28.444800   27433 round_trippers.go:469] Request Headers:
	I0914 17:06:28.444813   27433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 17:06:28.444822   27433 round_trippers.go:473]     Accept: application/json, */*
	I0914 17:06:28.454874   27433 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0914 17:06:28.945857   27433 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-929592-m02
	I0914 17:06:28.945881   27433 round_trippers.go:469] Request Headers:
	I0914 17:06:28.945889   27433 round_trippers.go:473]     Accept: application/json, */*
	I0914 17:06:28.945894   27433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 17:06:28.950053   27433 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0914 17:06:29.444967   27433 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-929592-m02
	I0914 17:06:29.444987   27433 round_trippers.go:469] Request Headers:
	I0914 17:06:29.444995   27433 round_trippers.go:473]     Accept: application/json, */*
	I0914 17:06:29.445000   27433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 17:06:29.448785   27433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 17:06:29.945744   27433 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-929592-m02
	I0914 17:06:29.945767   27433 round_trippers.go:469] Request Headers:
	I0914 17:06:29.945774   27433 round_trippers.go:473]     Accept: application/json, */*
	I0914 17:06:29.945778   27433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 17:06:29.949007   27433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 17:06:30.445350   27433 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-929592-m02
	I0914 17:06:30.445391   27433 round_trippers.go:469] Request Headers:
	I0914 17:06:30.445400   27433 round_trippers.go:473]     Accept: application/json, */*
	I0914 17:06:30.445405   27433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 17:06:30.448516   27433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 17:06:30.449150   27433 node_ready.go:53] node "ha-929592-m02" has status "Ready":"False"
	I0914 17:06:30.944823   27433 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-929592-m02
	I0914 17:06:30.944842   27433 round_trippers.go:469] Request Headers:
	I0914 17:06:30.944852   27433 round_trippers.go:473]     Accept: application/json, */*
	I0914 17:06:30.944856   27433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 17:06:30.948489   27433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 17:06:31.445403   27433 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-929592-m02
	I0914 17:06:31.445423   27433 round_trippers.go:469] Request Headers:
	I0914 17:06:31.445430   27433 round_trippers.go:473]     Accept: application/json, */*
	I0914 17:06:31.445434   27433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 17:06:31.450120   27433 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0914 17:06:31.945219   27433 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-929592-m02
	I0914 17:06:31.945252   27433 round_trippers.go:469] Request Headers:
	I0914 17:06:31.945263   27433 round_trippers.go:473]     Accept: application/json, */*
	I0914 17:06:31.945269   27433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 17:06:31.948193   27433 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 17:06:32.445454   27433 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-929592-m02
	I0914 17:06:32.445474   27433 round_trippers.go:469] Request Headers:
	I0914 17:06:32.445485   27433 round_trippers.go:473]     Accept: application/json, */*
	I0914 17:06:32.445489   27433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 17:06:32.448956   27433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 17:06:32.449653   27433 node_ready.go:53] node "ha-929592-m02" has status "Ready":"False"
	I0914 17:06:32.945507   27433 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-929592-m02
	I0914 17:06:32.945528   27433 round_trippers.go:469] Request Headers:
	I0914 17:06:32.945536   27433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 17:06:32.945539   27433 round_trippers.go:473]     Accept: application/json, */*
	I0914 17:06:32.948974   27433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 17:06:33.445218   27433 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-929592-m02
	I0914 17:06:33.445259   27433 round_trippers.go:469] Request Headers:
	I0914 17:06:33.445266   27433 round_trippers.go:473]     Accept: application/json, */*
	I0914 17:06:33.445270   27433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 17:06:33.448638   27433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 17:06:33.945669   27433 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-929592-m02
	I0914 17:06:33.945690   27433 round_trippers.go:469] Request Headers:
	I0914 17:06:33.945699   27433 round_trippers.go:473]     Accept: application/json, */*
	I0914 17:06:33.945702   27433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 17:06:33.949250   27433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 17:06:34.445298   27433 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-929592-m02
	I0914 17:06:34.445336   27433 round_trippers.go:469] Request Headers:
	I0914 17:06:34.445344   27433 round_trippers.go:473]     Accept: application/json, */*
	I0914 17:06:34.445349   27433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 17:06:34.448841   27433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 17:06:34.945131   27433 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-929592-m02
	I0914 17:06:34.945155   27433 round_trippers.go:469] Request Headers:
	I0914 17:06:34.945163   27433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 17:06:34.945169   27433 round_trippers.go:473]     Accept: application/json, */*
	I0914 17:06:34.948811   27433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 17:06:34.949307   27433 node_ready.go:53] node "ha-929592-m02" has status "Ready":"False"
	I0914 17:06:35.445126   27433 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-929592-m02
	I0914 17:06:35.445155   27433 round_trippers.go:469] Request Headers:
	I0914 17:06:35.445167   27433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 17:06:35.445173   27433 round_trippers.go:473]     Accept: application/json, */*
	I0914 17:06:35.448787   27433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 17:06:35.945782   27433 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-929592-m02
	I0914 17:06:35.945808   27433 round_trippers.go:469] Request Headers:
	I0914 17:06:35.945816   27433 round_trippers.go:473]     Accept: application/json, */*
	I0914 17:06:35.945820   27433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 17:06:35.949787   27433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 17:06:36.445729   27433 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-929592-m02
	I0914 17:06:36.445754   27433 round_trippers.go:469] Request Headers:
	I0914 17:06:36.445762   27433 round_trippers.go:473]     Accept: application/json, */*
	I0914 17:06:36.445770   27433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 17:06:36.449051   27433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 17:06:36.945857   27433 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-929592-m02
	I0914 17:06:36.945889   27433 round_trippers.go:469] Request Headers:
	I0914 17:06:36.945898   27433 round_trippers.go:473]     Accept: application/json, */*
	I0914 17:06:36.945902   27433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 17:06:36.949623   27433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 17:06:36.950179   27433 node_ready.go:53] node "ha-929592-m02" has status "Ready":"False"
	I0914 17:06:37.445701   27433 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-929592-m02
	I0914 17:06:37.445724   27433 round_trippers.go:469] Request Headers:
	I0914 17:06:37.445733   27433 round_trippers.go:473]     Accept: application/json, */*
	I0914 17:06:37.445737   27433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 17:06:37.449415   27433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 17:06:37.945822   27433 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-929592-m02
	I0914 17:06:37.945843   27433 round_trippers.go:469] Request Headers:
	I0914 17:06:37.945851   27433 round_trippers.go:473]     Accept: application/json, */*
	I0914 17:06:37.945855   27433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 17:06:37.949294   27433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 17:06:38.445253   27433 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-929592-m02
	I0914 17:06:38.445277   27433 round_trippers.go:469] Request Headers:
	I0914 17:06:38.445286   27433 round_trippers.go:473]     Accept: application/json, */*
	I0914 17:06:38.445292   27433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 17:06:38.448999   27433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 17:06:38.945059   27433 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-929592-m02
	I0914 17:06:38.945082   27433 round_trippers.go:469] Request Headers:
	I0914 17:06:38.945090   27433 round_trippers.go:473]     Accept: application/json, */*
	I0914 17:06:38.945095   27433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 17:06:38.948829   27433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 17:06:39.444999   27433 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-929592-m02
	I0914 17:06:39.445021   27433 round_trippers.go:469] Request Headers:
	I0914 17:06:39.445029   27433 round_trippers.go:473]     Accept: application/json, */*
	I0914 17:06:39.445033   27433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 17:06:39.448760   27433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 17:06:39.449370   27433 node_ready.go:53] node "ha-929592-m02" has status "Ready":"False"
	I0914 17:06:39.945847   27433 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-929592-m02
	I0914 17:06:39.945871   27433 round_trippers.go:469] Request Headers:
	I0914 17:06:39.945879   27433 round_trippers.go:473]     Accept: application/json, */*
	I0914 17:06:39.945883   27433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 17:06:39.949527   27433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 17:06:40.444905   27433 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-929592-m02
	I0914 17:06:40.444928   27433 round_trippers.go:469] Request Headers:
	I0914 17:06:40.444935   27433 round_trippers.go:473]     Accept: application/json, */*
	I0914 17:06:40.444938   27433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 17:06:40.448294   27433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 17:06:40.945759   27433 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-929592-m02
	I0914 17:06:40.945782   27433 round_trippers.go:469] Request Headers:
	I0914 17:06:40.945789   27433 round_trippers.go:473]     Accept: application/json, */*
	I0914 17:06:40.945794   27433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 17:06:40.949593   27433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 17:06:41.445825   27433 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-929592-m02
	I0914 17:06:41.445854   27433 round_trippers.go:469] Request Headers:
	I0914 17:06:41.445865   27433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 17:06:41.445871   27433 round_trippers.go:473]     Accept: application/json, */*
	I0914 17:06:41.449510   27433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 17:06:41.449939   27433 node_ready.go:53] node "ha-929592-m02" has status "Ready":"False"
	I0914 17:06:41.945333   27433 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-929592-m02
	I0914 17:06:41.945357   27433 round_trippers.go:469] Request Headers:
	I0914 17:06:41.945369   27433 round_trippers.go:473]     Accept: application/json, */*
	I0914 17:06:41.945376   27433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 17:06:41.948965   27433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 17:06:42.445259   27433 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-929592-m02
	I0914 17:06:42.445281   27433 round_trippers.go:469] Request Headers:
	I0914 17:06:42.445296   27433 round_trippers.go:473]     Accept: application/json, */*
	I0914 17:06:42.445300   27433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 17:06:42.448678   27433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 17:06:42.945096   27433 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-929592-m02
	I0914 17:06:42.945118   27433 round_trippers.go:469] Request Headers:
	I0914 17:06:42.945126   27433 round_trippers.go:473]     Accept: application/json, */*
	I0914 17:06:42.945130   27433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 17:06:42.948381   27433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 17:06:43.445351   27433 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-929592-m02
	I0914 17:06:43.445373   27433 round_trippers.go:469] Request Headers:
	I0914 17:06:43.445382   27433 round_trippers.go:473]     Accept: application/json, */*
	I0914 17:06:43.445385   27433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 17:06:43.449853   27433 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0914 17:06:43.450410   27433 node_ready.go:53] node "ha-929592-m02" has status "Ready":"False"
	I0914 17:06:43.944892   27433 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-929592-m02
	I0914 17:06:43.944915   27433 round_trippers.go:469] Request Headers:
	I0914 17:06:43.944923   27433 round_trippers.go:473]     Accept: application/json, */*
	I0914 17:06:43.944927   27433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 17:06:43.948315   27433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 17:06:44.445368   27433 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-929592-m02
	I0914 17:06:44.445392   27433 round_trippers.go:469] Request Headers:
	I0914 17:06:44.445400   27433 round_trippers.go:473]     Accept: application/json, */*
	I0914 17:06:44.445404   27433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 17:06:44.448455   27433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 17:06:44.945534   27433 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-929592-m02
	I0914 17:06:44.945557   27433 round_trippers.go:469] Request Headers:
	I0914 17:06:44.945565   27433 round_trippers.go:473]     Accept: application/json, */*
	I0914 17:06:44.945569   27433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 17:06:44.949438   27433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 17:06:45.445324   27433 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-929592-m02
	I0914 17:06:45.445348   27433 round_trippers.go:469] Request Headers:
	I0914 17:06:45.445356   27433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 17:06:45.445360   27433 round_trippers.go:473]     Accept: application/json, */*
	I0914 17:06:45.448989   27433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 17:06:45.945405   27433 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-929592-m02
	I0914 17:06:45.945431   27433 round_trippers.go:469] Request Headers:
	I0914 17:06:45.945443   27433 round_trippers.go:473]     Accept: application/json, */*
	I0914 17:06:45.945453   27433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 17:06:45.952479   27433 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0914 17:06:45.953028   27433 node_ready.go:49] node "ha-929592-m02" has status "Ready":"True"
	I0914 17:06:45.953060   27433 node_ready.go:38] duration metric: took 17.508397098s for node "ha-929592-m02" to be "Ready" ...
	I0914 17:06:45.953073   27433 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 17:06:45.953195   27433 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/namespaces/kube-system/pods
	I0914 17:06:45.953210   27433 round_trippers.go:469] Request Headers:
	I0914 17:06:45.953222   27433 round_trippers.go:473]     Accept: application/json, */*
	I0914 17:06:45.953229   27433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 17:06:45.959166   27433 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0914 17:06:45.966388   27433 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-66txm" in "kube-system" namespace to be "Ready" ...
	I0914 17:06:45.966505   27433 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-66txm
	I0914 17:06:45.966516   27433 round_trippers.go:469] Request Headers:
	I0914 17:06:45.966527   27433 round_trippers.go:473]     Accept: application/json, */*
	I0914 17:06:45.966534   27433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 17:06:45.970133   27433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 17:06:45.970846   27433 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-929592
	I0914 17:06:45.970863   27433 round_trippers.go:469] Request Headers:
	I0914 17:06:45.970871   27433 round_trippers.go:473]     Accept: application/json, */*
	I0914 17:06:45.970875   27433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 17:06:45.974296   27433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 17:06:45.974856   27433 pod_ready.go:93] pod "coredns-7c65d6cfc9-66txm" in "kube-system" namespace has status "Ready":"True"
	I0914 17:06:45.974879   27433 pod_ready.go:82] duration metric: took 8.463909ms for pod "coredns-7c65d6cfc9-66txm" in "kube-system" namespace to be "Ready" ...
	I0914 17:06:45.974890   27433 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-dpdz4" in "kube-system" namespace to be "Ready" ...
	I0914 17:06:45.974954   27433 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-dpdz4
	I0914 17:06:45.974961   27433 round_trippers.go:469] Request Headers:
	I0914 17:06:45.974969   27433 round_trippers.go:473]     Accept: application/json, */*
	I0914 17:06:45.974974   27433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 17:06:45.978204   27433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 17:06:45.978916   27433 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-929592
	I0914 17:06:45.978937   27433 round_trippers.go:469] Request Headers:
	I0914 17:06:45.978945   27433 round_trippers.go:473]     Accept: application/json, */*
	I0914 17:06:45.978949   27433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 17:06:45.982392   27433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 17:06:45.982929   27433 pod_ready.go:93] pod "coredns-7c65d6cfc9-dpdz4" in "kube-system" namespace has status "Ready":"True"
	I0914 17:06:45.982957   27433 pod_ready.go:82] duration metric: took 8.060115ms for pod "coredns-7c65d6cfc9-dpdz4" in "kube-system" namespace to be "Ready" ...
	I0914 17:06:45.982975   27433 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-929592" in "kube-system" namespace to be "Ready" ...
	I0914 17:06:45.983054   27433 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/namespaces/kube-system/pods/etcd-ha-929592
	I0914 17:06:45.983066   27433 round_trippers.go:469] Request Headers:
	I0914 17:06:45.983076   27433 round_trippers.go:473]     Accept: application/json, */*
	I0914 17:06:45.983085   27433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 17:06:45.985873   27433 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 17:06:45.986599   27433 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-929592
	I0914 17:06:45.986616   27433 round_trippers.go:469] Request Headers:
	I0914 17:06:45.986624   27433 round_trippers.go:473]     Accept: application/json, */*
	I0914 17:06:45.986627   27433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 17:06:45.989772   27433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 17:06:45.990261   27433 pod_ready.go:93] pod "etcd-ha-929592" in "kube-system" namespace has status "Ready":"True"
	I0914 17:06:45.990277   27433 pod_ready.go:82] duration metric: took 7.295414ms for pod "etcd-ha-929592" in "kube-system" namespace to be "Ready" ...
	I0914 17:06:45.990290   27433 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-929592-m02" in "kube-system" namespace to be "Ready" ...
	I0914 17:06:45.990343   27433 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/namespaces/kube-system/pods/etcd-ha-929592-m02
	I0914 17:06:45.990350   27433 round_trippers.go:469] Request Headers:
	I0914 17:06:45.990365   27433 round_trippers.go:473]     Accept: application/json, */*
	I0914 17:06:45.990372   27433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 17:06:45.993331   27433 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 17:06:45.993937   27433 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-929592-m02
	I0914 17:06:45.993954   27433 round_trippers.go:469] Request Headers:
	I0914 17:06:45.993962   27433 round_trippers.go:473]     Accept: application/json, */*
	I0914 17:06:45.993966   27433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 17:06:45.996680   27433 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 17:06:45.997261   27433 pod_ready.go:93] pod "etcd-ha-929592-m02" in "kube-system" namespace has status "Ready":"True"
	I0914 17:06:45.997278   27433 pod_ready.go:82] duration metric: took 6.982458ms for pod "etcd-ha-929592-m02" in "kube-system" namespace to be "Ready" ...
	I0914 17:06:45.997291   27433 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-929592" in "kube-system" namespace to be "Ready" ...
	I0914 17:06:46.145678   27433 request.go:632] Waited for 148.305068ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.54:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-929592
	I0914 17:06:46.145735   27433 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-929592
	I0914 17:06:46.145740   27433 round_trippers.go:469] Request Headers:
	I0914 17:06:46.145747   27433 round_trippers.go:473]     Accept: application/json, */*
	I0914 17:06:46.145751   27433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 17:06:46.149090   27433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 17:06:46.346002   27433 request.go:632] Waited for 196.36158ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.54:8443/api/v1/nodes/ha-929592
	I0914 17:06:46.346068   27433 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-929592
	I0914 17:06:46.346074   27433 round_trippers.go:469] Request Headers:
	I0914 17:06:46.346081   27433 round_trippers.go:473]     Accept: application/json, */*
	I0914 17:06:46.346086   27433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 17:06:46.349259   27433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 17:06:46.349868   27433 pod_ready.go:93] pod "kube-apiserver-ha-929592" in "kube-system" namespace has status "Ready":"True"
	I0914 17:06:46.349892   27433 pod_ready.go:82] duration metric: took 352.59431ms for pod "kube-apiserver-ha-929592" in "kube-system" namespace to be "Ready" ...
	I0914 17:06:46.349905   27433 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-929592-m02" in "kube-system" namespace to be "Ready" ...
	I0914 17:06:46.545900   27433 request.go:632] Waited for 195.922909ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.54:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-929592-m02
	I0914 17:06:46.545976   27433 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-929592-m02
	I0914 17:06:46.545984   27433 round_trippers.go:469] Request Headers:
	I0914 17:06:46.545991   27433 round_trippers.go:473]     Accept: application/json, */*
	I0914 17:06:46.545997   27433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 17:06:46.549133   27433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 17:06:46.746357   27433 request.go:632] Waited for 196.373892ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.54:8443/api/v1/nodes/ha-929592-m02
	I0914 17:06:46.746413   27433 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-929592-m02
	I0914 17:06:46.746431   27433 round_trippers.go:469] Request Headers:
	I0914 17:06:46.746439   27433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 17:06:46.746445   27433 round_trippers.go:473]     Accept: application/json, */*
	I0914 17:06:46.749770   27433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 17:06:46.750286   27433 pod_ready.go:93] pod "kube-apiserver-ha-929592-m02" in "kube-system" namespace has status "Ready":"True"
	I0914 17:06:46.750330   27433 pod_ready.go:82] duration metric: took 400.417297ms for pod "kube-apiserver-ha-929592-m02" in "kube-system" namespace to be "Ready" ...
	I0914 17:06:46.750343   27433 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-929592" in "kube-system" namespace to be "Ready" ...
	I0914 17:06:46.946421   27433 request.go:632] Waited for 196.010926ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.54:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-929592
	I0914 17:06:46.946536   27433 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-929592
	I0914 17:06:46.946547   27433 round_trippers.go:469] Request Headers:
	I0914 17:06:46.946558   27433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 17:06:46.946564   27433 round_trippers.go:473]     Accept: application/json, */*
	I0914 17:06:46.950460   27433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 17:06:47.146420   27433 request.go:632] Waited for 195.341813ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.54:8443/api/v1/nodes/ha-929592
	I0914 17:06:47.146484   27433 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-929592
	I0914 17:06:47.146508   27433 round_trippers.go:469] Request Headers:
	I0914 17:06:47.146521   27433 round_trippers.go:473]     Accept: application/json, */*
	I0914 17:06:47.146532   27433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 17:06:47.150451   27433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 17:06:47.150991   27433 pod_ready.go:93] pod "kube-controller-manager-ha-929592" in "kube-system" namespace has status "Ready":"True"
	I0914 17:06:47.151009   27433 pod_ready.go:82] duration metric: took 400.660338ms for pod "kube-controller-manager-ha-929592" in "kube-system" namespace to be "Ready" ...
	I0914 17:06:47.151018   27433 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-929592-m02" in "kube-system" namespace to be "Ready" ...
	I0914 17:06:47.346097   27433 request.go:632] Waited for 195.00805ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.54:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-929592-m02
	I0914 17:06:47.346151   27433 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-929592-m02
	I0914 17:06:47.346177   27433 round_trippers.go:469] Request Headers:
	I0914 17:06:47.346188   27433 round_trippers.go:473]     Accept: application/json, */*
	I0914 17:06:47.346213   27433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 17:06:47.350098   27433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 17:06:47.546350   27433 request.go:632] Waited for 195.435197ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.54:8443/api/v1/nodes/ha-929592-m02
	I0914 17:06:47.546414   27433 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-929592-m02
	I0914 17:06:47.546421   27433 round_trippers.go:469] Request Headers:
	I0914 17:06:47.546430   27433 round_trippers.go:473]     Accept: application/json, */*
	I0914 17:06:47.546434   27433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 17:06:47.550244   27433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 17:06:47.550787   27433 pod_ready.go:93] pod "kube-controller-manager-ha-929592-m02" in "kube-system" namespace has status "Ready":"True"
	I0914 17:06:47.550809   27433 pod_ready.go:82] duration metric: took 399.783639ms for pod "kube-controller-manager-ha-929592-m02" in "kube-system" namespace to be "Ready" ...
	I0914 17:06:47.550822   27433 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-6zqmd" in "kube-system" namespace to be "Ready" ...
	I0914 17:06:47.745770   27433 request.go:632] Waited for 194.872367ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.54:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6zqmd
	I0914 17:06:47.745867   27433 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6zqmd
	I0914 17:06:47.745875   27433 round_trippers.go:469] Request Headers:
	I0914 17:06:47.745886   27433 round_trippers.go:473]     Accept: application/json, */*
	I0914 17:06:47.745894   27433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 17:06:47.751396   27433 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0914 17:06:47.946402   27433 request.go:632] Waited for 194.394241ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.54:8443/api/v1/nodes/ha-929592
	I0914 17:06:47.946466   27433 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-929592
	I0914 17:06:47.946474   27433 round_trippers.go:469] Request Headers:
	I0914 17:06:47.946483   27433 round_trippers.go:473]     Accept: application/json, */*
	I0914 17:06:47.946489   27433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 17:06:47.950180   27433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 17:06:47.950824   27433 pod_ready.go:93] pod "kube-proxy-6zqmd" in "kube-system" namespace has status "Ready":"True"
	I0914 17:06:47.950847   27433 pod_ready.go:82] duration metric: took 400.017562ms for pod "kube-proxy-6zqmd" in "kube-system" namespace to be "Ready" ...
	I0914 17:06:47.950862   27433 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-bcfkb" in "kube-system" namespace to be "Ready" ...
	I0914 17:06:48.145816   27433 request.go:632] Waited for 194.86879ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.54:8443/api/v1/namespaces/kube-system/pods/kube-proxy-bcfkb
	I0914 17:06:48.145884   27433 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/namespaces/kube-system/pods/kube-proxy-bcfkb
	I0914 17:06:48.145892   27433 round_trippers.go:469] Request Headers:
	I0914 17:06:48.145902   27433 round_trippers.go:473]     Accept: application/json, */*
	I0914 17:06:48.145909   27433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 17:06:48.149564   27433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 17:06:48.345823   27433 request.go:632] Waited for 195.354267ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.54:8443/api/v1/nodes/ha-929592-m02
	I0914 17:06:48.345906   27433 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-929592-m02
	I0914 17:06:48.345915   27433 round_trippers.go:469] Request Headers:
	I0914 17:06:48.345926   27433 round_trippers.go:473]     Accept: application/json, */*
	I0914 17:06:48.345934   27433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 17:06:48.349290   27433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 17:06:48.349859   27433 pod_ready.go:93] pod "kube-proxy-bcfkb" in "kube-system" namespace has status "Ready":"True"
	I0914 17:06:48.349882   27433 pod_ready.go:82] duration metric: took 399.010862ms for pod "kube-proxy-bcfkb" in "kube-system" namespace to be "Ready" ...
	I0914 17:06:48.349895   27433 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-929592" in "kube-system" namespace to be "Ready" ...
	I0914 17:06:48.545948   27433 request.go:632] Waited for 195.969543ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.54:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-929592
	I0914 17:06:48.546065   27433 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-929592
	I0914 17:06:48.546078   27433 round_trippers.go:469] Request Headers:
	I0914 17:06:48.546096   27433 round_trippers.go:473]     Accept: application/json, */*
	I0914 17:06:48.546105   27433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 17:06:48.550543   27433 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0914 17:06:48.745476   27433 request.go:632] Waited for 194.30038ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.54:8443/api/v1/nodes/ha-929592
	I0914 17:06:48.745563   27433 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-929592
	I0914 17:06:48.745572   27433 round_trippers.go:469] Request Headers:
	I0914 17:06:48.745587   27433 round_trippers.go:473]     Accept: application/json, */*
	I0914 17:06:48.745597   27433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 17:06:48.748682   27433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 17:06:48.749284   27433 pod_ready.go:93] pod "kube-scheduler-ha-929592" in "kube-system" namespace has status "Ready":"True"
	I0914 17:06:48.749319   27433 pod_ready.go:82] duration metric: took 399.412284ms for pod "kube-scheduler-ha-929592" in "kube-system" namespace to be "Ready" ...
	I0914 17:06:48.749333   27433 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-929592-m02" in "kube-system" namespace to be "Ready" ...
	I0914 17:06:48.946336   27433 request.go:632] Waited for 196.916046ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.54:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-929592-m02
	I0914 17:06:48.946388   27433 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-929592-m02
	I0914 17:06:48.946393   27433 round_trippers.go:469] Request Headers:
	I0914 17:06:48.946401   27433 round_trippers.go:473]     Accept: application/json, */*
	I0914 17:06:48.946406   27433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 17:06:48.950272   27433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 17:06:49.146231   27433 request.go:632] Waited for 195.356604ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.54:8443/api/v1/nodes/ha-929592-m02
	I0914 17:06:49.146295   27433 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-929592-m02
	I0914 17:06:49.146302   27433 round_trippers.go:469] Request Headers:
	I0914 17:06:49.146313   27433 round_trippers.go:473]     Accept: application/json, */*
	I0914 17:06:49.146318   27433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 17:06:49.149605   27433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 17:06:49.150177   27433 pod_ready.go:93] pod "kube-scheduler-ha-929592-m02" in "kube-system" namespace has status "Ready":"True"
	I0914 17:06:49.150197   27433 pod_ready.go:82] duration metric: took 400.852186ms for pod "kube-scheduler-ha-929592-m02" in "kube-system" namespace to be "Ready" ...
	I0914 17:06:49.150210   27433 pod_ready.go:39] duration metric: took 3.197122081s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 17:06:49.150234   27433 api_server.go:52] waiting for apiserver process to appear ...
	I0914 17:06:49.150301   27433 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 17:06:49.168129   27433 api_server.go:72] duration metric: took 21.025118313s to wait for apiserver process to appear ...
	I0914 17:06:49.168155   27433 api_server.go:88] waiting for apiserver healthz status ...
	I0914 17:06:49.168188   27433 api_server.go:253] Checking apiserver healthz at https://192.168.39.54:8443/healthz ...
	I0914 17:06:49.174137   27433 api_server.go:279] https://192.168.39.54:8443/healthz returned 200:
	ok
	I0914 17:06:49.174234   27433 round_trippers.go:463] GET https://192.168.39.54:8443/version
	I0914 17:06:49.174243   27433 round_trippers.go:469] Request Headers:
	I0914 17:06:49.174251   27433 round_trippers.go:473]     Accept: application/json, */*
	I0914 17:06:49.174256   27433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 17:06:49.175044   27433 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0914 17:06:49.175141   27433 api_server.go:141] control plane version: v1.31.1
	I0914 17:06:49.175162   27433 api_server.go:131] duration metric: took 6.99529ms to wait for apiserver health ...
	I0914 17:06:49.175174   27433 system_pods.go:43] waiting for kube-system pods to appear ...
	I0914 17:06:49.345500   27433 request.go:632] Waited for 170.24343ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.54:8443/api/v1/namespaces/kube-system/pods
	I0914 17:06:49.345594   27433 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/namespaces/kube-system/pods
	I0914 17:06:49.345606   27433 round_trippers.go:469] Request Headers:
	I0914 17:06:49.345618   27433 round_trippers.go:473]     Accept: application/json, */*
	I0914 17:06:49.345627   27433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 17:06:49.350636   27433 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0914 17:06:49.356629   27433 system_pods.go:59] 17 kube-system pods found
	I0914 17:06:49.356665   27433 system_pods.go:61] "coredns-7c65d6cfc9-66txm" [abf3ed52-ab5a-4415-a8a9-78e567d60348] Running
	I0914 17:06:49.356671   27433 system_pods.go:61] "coredns-7c65d6cfc9-dpdz4" [2a751c8d-890c-402e-846f-8f61e3fd1965] Running
	I0914 17:06:49.356675   27433 system_pods.go:61] "etcd-ha-929592" [44b8df66-0b5f-4b5b-a901-92161d29df28] Running
	I0914 17:06:49.356678   27433 system_pods.go:61] "etcd-ha-929592-m02" [fe6343ec-40b1-4808-8902-041b935081bf] Running
	I0914 17:06:49.356682   27433 system_pods.go:61] "kindnet-fw757" [51a38d95-fd50-4c05-a75d-a3dfeae127bd] Running
	I0914 17:06:49.356686   27433 system_pods.go:61] "kindnet-tnjsl" [ec9f109d-14b3-4e4d-9530-4ae493984cc5] Running
	I0914 17:06:49.356689   27433 system_pods.go:61] "kube-apiserver-ha-929592" [fe3e7895-32dc-4542-879c-9bb609604c69] Running
	I0914 17:06:49.356693   27433 system_pods.go:61] "kube-apiserver-ha-929592-m02" [4544a586-c111-4461-8f25-a3843da19bfb] Running
	I0914 17:06:49.356696   27433 system_pods.go:61] "kube-controller-manager-ha-929592" [12a2c768-5d90-4036-aff7-d80da243c602] Running
	I0914 17:06:49.356699   27433 system_pods.go:61] "kube-controller-manager-ha-929592-m02" [bb5d3040-c09e-4eb6-94a3-4bdb34e4e658] Running
	I0914 17:06:49.356702   27433 system_pods.go:61] "kube-proxy-6zqmd" [b7beddc8-ce6a-44ed-b3e8-423baf620bbb] Running
	I0914 17:06:49.356705   27433 system_pods.go:61] "kube-proxy-bcfkb" [f2ed6784-8935-4b20-9321-650ffb8dacda] Running
	I0914 17:06:49.356709   27433 system_pods.go:61] "kube-scheduler-ha-929592" [02b347db-39cc-49d5-a736-05957f446708] Running
	I0914 17:06:49.356711   27433 system_pods.go:61] "kube-scheduler-ha-929592-m02" [a5dde5dc-208f-47c3-903f-ce811cb58f56] Running
	I0914 17:06:49.356714   27433 system_pods.go:61] "kube-vip-ha-929592" [8bec83fe-1516-467a-9575-3c55dbcbda23] Running
	I0914 17:06:49.356717   27433 system_pods.go:61] "kube-vip-ha-929592-m02" [852625cb-9e2b-4a4f-9471-80d275a6697b] Running
	I0914 17:06:49.356720   27433 system_pods.go:61] "storage-provisioner" [4f486484-9641-4e23-8bc9-4dcae57b621a] Running
	I0914 17:06:49.356725   27433 system_pods.go:74] duration metric: took 181.542581ms to wait for pod list to return data ...
	I0914 17:06:49.356734   27433 default_sa.go:34] waiting for default service account to be created ...
	I0914 17:06:49.546151   27433 request.go:632] Waited for 189.322413ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.54:8443/api/v1/namespaces/default/serviceaccounts
	I0914 17:06:49.546248   27433 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/namespaces/default/serviceaccounts
	I0914 17:06:49.546257   27433 round_trippers.go:469] Request Headers:
	I0914 17:06:49.546271   27433 round_trippers.go:473]     Accept: application/json, */*
	I0914 17:06:49.546282   27433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 17:06:49.549850   27433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 17:06:49.550069   27433 default_sa.go:45] found service account: "default"
	I0914 17:06:49.550087   27433 default_sa.go:55] duration metric: took 193.346862ms for default service account to be created ...
	I0914 17:06:49.550098   27433 system_pods.go:116] waiting for k8s-apps to be running ...
	I0914 17:06:49.745487   27433 request.go:632] Waited for 195.316949ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.54:8443/api/v1/namespaces/kube-system/pods
	I0914 17:06:49.745564   27433 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/namespaces/kube-system/pods
	I0914 17:06:49.745570   27433 round_trippers.go:469] Request Headers:
	I0914 17:06:49.745577   27433 round_trippers.go:473]     Accept: application/json, */*
	I0914 17:06:49.745582   27433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 17:06:49.750700   27433 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0914 17:06:49.755500   27433 system_pods.go:86] 17 kube-system pods found
	I0914 17:06:49.755544   27433 system_pods.go:89] "coredns-7c65d6cfc9-66txm" [abf3ed52-ab5a-4415-a8a9-78e567d60348] Running
	I0914 17:06:49.755553   27433 system_pods.go:89] "coredns-7c65d6cfc9-dpdz4" [2a751c8d-890c-402e-846f-8f61e3fd1965] Running
	I0914 17:06:49.755560   27433 system_pods.go:89] "etcd-ha-929592" [44b8df66-0b5f-4b5b-a901-92161d29df28] Running
	I0914 17:06:49.755565   27433 system_pods.go:89] "etcd-ha-929592-m02" [fe6343ec-40b1-4808-8902-041b935081bf] Running
	I0914 17:06:49.755570   27433 system_pods.go:89] "kindnet-fw757" [51a38d95-fd50-4c05-a75d-a3dfeae127bd] Running
	I0914 17:06:49.755576   27433 system_pods.go:89] "kindnet-tnjsl" [ec9f109d-14b3-4e4d-9530-4ae493984cc5] Running
	I0914 17:06:49.755583   27433 system_pods.go:89] "kube-apiserver-ha-929592" [fe3e7895-32dc-4542-879c-9bb609604c69] Running
	I0914 17:06:49.755589   27433 system_pods.go:89] "kube-apiserver-ha-929592-m02" [4544a586-c111-4461-8f25-a3843da19bfb] Running
	I0914 17:06:49.755595   27433 system_pods.go:89] "kube-controller-manager-ha-929592" [12a2c768-5d90-4036-aff7-d80da243c602] Running
	I0914 17:06:49.755602   27433 system_pods.go:89] "kube-controller-manager-ha-929592-m02" [bb5d3040-c09e-4eb6-94a3-4bdb34e4e658] Running
	I0914 17:06:49.755608   27433 system_pods.go:89] "kube-proxy-6zqmd" [b7beddc8-ce6a-44ed-b3e8-423baf620bbb] Running
	I0914 17:06:49.755614   27433 system_pods.go:89] "kube-proxy-bcfkb" [f2ed6784-8935-4b20-9321-650ffb8dacda] Running
	I0914 17:06:49.755623   27433 system_pods.go:89] "kube-scheduler-ha-929592" [02b347db-39cc-49d5-a736-05957f446708] Running
	I0914 17:06:49.755630   27433 system_pods.go:89] "kube-scheduler-ha-929592-m02" [a5dde5dc-208f-47c3-903f-ce811cb58f56] Running
	I0914 17:06:49.755635   27433 system_pods.go:89] "kube-vip-ha-929592" [8bec83fe-1516-467a-9575-3c55dbcbda23] Running
	I0914 17:06:49.755644   27433 system_pods.go:89] "kube-vip-ha-929592-m02" [852625cb-9e2b-4a4f-9471-80d275a6697b] Running
	I0914 17:06:49.755652   27433 system_pods.go:89] "storage-provisioner" [4f486484-9641-4e23-8bc9-4dcae57b621a] Running
	I0914 17:06:49.755663   27433 system_pods.go:126] duration metric: took 205.557487ms to wait for k8s-apps to be running ...
	I0914 17:06:49.755684   27433 system_svc.go:44] waiting for kubelet service to be running ....
	I0914 17:06:49.755743   27433 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 17:06:49.776244   27433 system_svc.go:56] duration metric: took 20.525134ms WaitForService to wait for kubelet
	I0914 17:06:49.776289   27433 kubeadm.go:582] duration metric: took 21.633280125s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0914 17:06:49.776315   27433 node_conditions.go:102] verifying NodePressure condition ...
	I0914 17:06:49.945798   27433 request.go:632] Waited for 169.394423ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.54:8443/api/v1/nodes
	I0914 17:06:49.945879   27433 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes
	I0914 17:06:49.945887   27433 round_trippers.go:469] Request Headers:
	I0914 17:06:49.945897   27433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 17:06:49.945905   27433 round_trippers.go:473]     Accept: application/json, */*
	I0914 17:06:49.950712   27433 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0914 17:06:49.951592   27433 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0914 17:06:49.951629   27433 node_conditions.go:123] node cpu capacity is 2
	I0914 17:06:49.951650   27433 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0914 17:06:49.951653   27433 node_conditions.go:123] node cpu capacity is 2
	I0914 17:06:49.951658   27433 node_conditions.go:105] duration metric: took 175.335321ms to run NodePressure ...
	I0914 17:06:49.951669   27433 start.go:241] waiting for startup goroutines ...
	I0914 17:06:49.951696   27433 start.go:255] writing updated cluster config ...
	I0914 17:06:49.953949   27433 out.go:201] 
	I0914 17:06:49.955877   27433 config.go:182] Loaded profile config "ha-929592": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0914 17:06:49.956002   27433 profile.go:143] Saving config to /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/ha-929592/config.json ...
	I0914 17:06:49.957813   27433 out.go:177] * Starting "ha-929592-m03" control-plane node in "ha-929592" cluster
	I0914 17:06:49.959068   27433 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0914 17:06:49.959099   27433 cache.go:56] Caching tarball of preloaded images
	I0914 17:06:49.959215   27433 preload.go:172] Found /home/jenkins/minikube-integration/19643-8806/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0914 17:06:49.959228   27433 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0914 17:06:49.959357   27433 profile.go:143] Saving config to /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/ha-929592/config.json ...
	I0914 17:06:49.959556   27433 start.go:360] acquireMachinesLock for ha-929592-m03: {Name:mk26748ac63472c3f8b6f3848c12e76160c2970c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0914 17:06:49.959616   27433 start.go:364] duration metric: took 37.328µs to acquireMachinesLock for "ha-929592-m03"
	I0914 17:06:49.959640   27433 start.go:93] Provisioning new machine with config: &{Name:ha-929592 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19643/minikube-v1.34.0-1726281733-19643-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726281268-19643@sha256:cce8be0c1ac4e3d852132008ef1cc1dcf5b79f708d025db83f146ae65db32e8e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-929592 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.54 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.148 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0914 17:06:49.959751   27433 start.go:125] createHost starting for "m03" (driver="kvm2")
	I0914 17:06:49.961439   27433 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0914 17:06:49.961570   27433 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 17:06:49.961615   27433 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 17:06:49.977719   27433 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33881
	I0914 17:06:49.978311   27433 main.go:141] libmachine: () Calling .GetVersion
	I0914 17:06:49.978858   27433 main.go:141] libmachine: Using API Version  1
	I0914 17:06:49.978877   27433 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 17:06:49.979166   27433 main.go:141] libmachine: () Calling .GetMachineName
	I0914 17:06:49.979367   27433 main.go:141] libmachine: (ha-929592-m03) Calling .GetMachineName
	I0914 17:06:49.979530   27433 main.go:141] libmachine: (ha-929592-m03) Calling .DriverName
	I0914 17:06:49.979697   27433 start.go:159] libmachine.API.Create for "ha-929592" (driver="kvm2")
	I0914 17:06:49.979724   27433 client.go:168] LocalClient.Create starting
	I0914 17:06:49.979757   27433 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca.pem
	I0914 17:06:49.979794   27433 main.go:141] libmachine: Decoding PEM data...
	I0914 17:06:49.979808   27433 main.go:141] libmachine: Parsing certificate...
	I0914 17:06:49.979856   27433 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19643-8806/.minikube/certs/cert.pem
	I0914 17:06:49.979874   27433 main.go:141] libmachine: Decoding PEM data...
	I0914 17:06:49.979897   27433 main.go:141] libmachine: Parsing certificate...
	I0914 17:06:49.979913   27433 main.go:141] libmachine: Running pre-create checks...
	I0914 17:06:49.979920   27433 main.go:141] libmachine: (ha-929592-m03) Calling .PreCreateCheck
	I0914 17:06:49.980055   27433 main.go:141] libmachine: (ha-929592-m03) Calling .GetConfigRaw
	I0914 17:06:49.980434   27433 main.go:141] libmachine: Creating machine...
	I0914 17:06:49.980448   27433 main.go:141] libmachine: (ha-929592-m03) Calling .Create
	I0914 17:06:49.980624   27433 main.go:141] libmachine: (ha-929592-m03) Creating KVM machine...
	I0914 17:06:49.982264   27433 main.go:141] libmachine: (ha-929592-m03) DBG | found existing default KVM network
	I0914 17:06:49.982455   27433 main.go:141] libmachine: (ha-929592-m03) DBG | found existing private KVM network mk-ha-929592
	I0914 17:06:49.982685   27433 main.go:141] libmachine: (ha-929592-m03) Setting up store path in /home/jenkins/minikube-integration/19643-8806/.minikube/machines/ha-929592-m03 ...
	I0914 17:06:49.982713   27433 main.go:141] libmachine: (ha-929592-m03) Building disk image from file:///home/jenkins/minikube-integration/19643-8806/.minikube/cache/iso/amd64/minikube-v1.34.0-1726281733-19643-amd64.iso
	I0914 17:06:49.982795   27433 main.go:141] libmachine: (ha-929592-m03) DBG | I0914 17:06:49.982674   28182 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19643-8806/.minikube
	I0914 17:06:49.982892   27433 main.go:141] libmachine: (ha-929592-m03) Downloading /home/jenkins/minikube-integration/19643-8806/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19643-8806/.minikube/cache/iso/amd64/minikube-v1.34.0-1726281733-19643-amd64.iso...
	I0914 17:06:50.221371   27433 main.go:141] libmachine: (ha-929592-m03) DBG | I0914 17:06:50.221237   28182 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19643-8806/.minikube/machines/ha-929592-m03/id_rsa...
	I0914 17:06:50.314576   27433 main.go:141] libmachine: (ha-929592-m03) DBG | I0914 17:06:50.314467   28182 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19643-8806/.minikube/machines/ha-929592-m03/ha-929592-m03.rawdisk...
	I0914 17:06:50.314603   27433 main.go:141] libmachine: (ha-929592-m03) DBG | Writing magic tar header
	I0914 17:06:50.314615   27433 main.go:141] libmachine: (ha-929592-m03) DBG | Writing SSH key tar header
	I0914 17:06:50.314623   27433 main.go:141] libmachine: (ha-929592-m03) DBG | I0914 17:06:50.314588   28182 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19643-8806/.minikube/machines/ha-929592-m03 ...
	I0914 17:06:50.314739   27433 main.go:141] libmachine: (ha-929592-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19643-8806/.minikube/machines/ha-929592-m03
	I0914 17:06:50.314763   27433 main.go:141] libmachine: (ha-929592-m03) Setting executable bit set on /home/jenkins/minikube-integration/19643-8806/.minikube/machines/ha-929592-m03 (perms=drwx------)
	I0914 17:06:50.314777   27433 main.go:141] libmachine: (ha-929592-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19643-8806/.minikube/machines
	I0914 17:06:50.314793   27433 main.go:141] libmachine: (ha-929592-m03) Setting executable bit set on /home/jenkins/minikube-integration/19643-8806/.minikube/machines (perms=drwxr-xr-x)
	I0914 17:06:50.314811   27433 main.go:141] libmachine: (ha-929592-m03) Setting executable bit set on /home/jenkins/minikube-integration/19643-8806/.minikube (perms=drwxr-xr-x)
	I0914 17:06:50.314826   27433 main.go:141] libmachine: (ha-929592-m03) Setting executable bit set on /home/jenkins/minikube-integration/19643-8806 (perms=drwxrwxr-x)
	I0914 17:06:50.314888   27433 main.go:141] libmachine: (ha-929592-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0914 17:06:50.314913   27433 main.go:141] libmachine: (ha-929592-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19643-8806/.minikube
	I0914 17:06:50.314923   27433 main.go:141] libmachine: (ha-929592-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0914 17:06:50.314949   27433 main.go:141] libmachine: (ha-929592-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19643-8806
	I0914 17:06:50.314970   27433 main.go:141] libmachine: (ha-929592-m03) Creating domain...
	I0914 17:06:50.314981   27433 main.go:141] libmachine: (ha-929592-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0914 17:06:50.314998   27433 main.go:141] libmachine: (ha-929592-m03) DBG | Checking permissions on dir: /home/jenkins
	I0914 17:06:50.315018   27433 main.go:141] libmachine: (ha-929592-m03) DBG | Checking permissions on dir: /home
	I0914 17:06:50.315033   27433 main.go:141] libmachine: (ha-929592-m03) DBG | Skipping /home - not owner
	I0914 17:06:50.315929   27433 main.go:141] libmachine: (ha-929592-m03) define libvirt domain using xml: 
	I0914 17:06:50.315943   27433 main.go:141] libmachine: (ha-929592-m03) <domain type='kvm'>
	I0914 17:06:50.315952   27433 main.go:141] libmachine: (ha-929592-m03)   <name>ha-929592-m03</name>
	I0914 17:06:50.315959   27433 main.go:141] libmachine: (ha-929592-m03)   <memory unit='MiB'>2200</memory>
	I0914 17:06:50.315966   27433 main.go:141] libmachine: (ha-929592-m03)   <vcpu>2</vcpu>
	I0914 17:06:50.315972   27433 main.go:141] libmachine: (ha-929592-m03)   <features>
	I0914 17:06:50.315980   27433 main.go:141] libmachine: (ha-929592-m03)     <acpi/>
	I0914 17:06:50.315988   27433 main.go:141] libmachine: (ha-929592-m03)     <apic/>
	I0914 17:06:50.315999   27433 main.go:141] libmachine: (ha-929592-m03)     <pae/>
	I0914 17:06:50.316006   27433 main.go:141] libmachine: (ha-929592-m03)     
	I0914 17:06:50.316017   27433 main.go:141] libmachine: (ha-929592-m03)   </features>
	I0914 17:06:50.316033   27433 main.go:141] libmachine: (ha-929592-m03)   <cpu mode='host-passthrough'>
	I0914 17:06:50.316058   27433 main.go:141] libmachine: (ha-929592-m03)   
	I0914 17:06:50.316093   27433 main.go:141] libmachine: (ha-929592-m03)   </cpu>
	I0914 17:06:50.316102   27433 main.go:141] libmachine: (ha-929592-m03)   <os>
	I0914 17:06:50.316108   27433 main.go:141] libmachine: (ha-929592-m03)     <type>hvm</type>
	I0914 17:06:50.316115   27433 main.go:141] libmachine: (ha-929592-m03)     <boot dev='cdrom'/>
	I0914 17:06:50.316122   27433 main.go:141] libmachine: (ha-929592-m03)     <boot dev='hd'/>
	I0914 17:06:50.316131   27433 main.go:141] libmachine: (ha-929592-m03)     <bootmenu enable='no'/>
	I0914 17:06:50.316137   27433 main.go:141] libmachine: (ha-929592-m03)   </os>
	I0914 17:06:50.316145   27433 main.go:141] libmachine: (ha-929592-m03)   <devices>
	I0914 17:06:50.316152   27433 main.go:141] libmachine: (ha-929592-m03)     <disk type='file' device='cdrom'>
	I0914 17:06:50.316164   27433 main.go:141] libmachine: (ha-929592-m03)       <source file='/home/jenkins/minikube-integration/19643-8806/.minikube/machines/ha-929592-m03/boot2docker.iso'/>
	I0914 17:06:50.316176   27433 main.go:141] libmachine: (ha-929592-m03)       <target dev='hdc' bus='scsi'/>
	I0914 17:06:50.316184   27433 main.go:141] libmachine: (ha-929592-m03)       <readonly/>
	I0914 17:06:50.316190   27433 main.go:141] libmachine: (ha-929592-m03)     </disk>
	I0914 17:06:50.316199   27433 main.go:141] libmachine: (ha-929592-m03)     <disk type='file' device='disk'>
	I0914 17:06:50.316208   27433 main.go:141] libmachine: (ha-929592-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0914 17:06:50.316219   27433 main.go:141] libmachine: (ha-929592-m03)       <source file='/home/jenkins/minikube-integration/19643-8806/.minikube/machines/ha-929592-m03/ha-929592-m03.rawdisk'/>
	I0914 17:06:50.316227   27433 main.go:141] libmachine: (ha-929592-m03)       <target dev='hda' bus='virtio'/>
	I0914 17:06:50.316234   27433 main.go:141] libmachine: (ha-929592-m03)     </disk>
	I0914 17:06:50.316241   27433 main.go:141] libmachine: (ha-929592-m03)     <interface type='network'>
	I0914 17:06:50.316257   27433 main.go:141] libmachine: (ha-929592-m03)       <source network='mk-ha-929592'/>
	I0914 17:06:50.316268   27433 main.go:141] libmachine: (ha-929592-m03)       <model type='virtio'/>
	I0914 17:06:50.316281   27433 main.go:141] libmachine: (ha-929592-m03)     </interface>
	I0914 17:06:50.316293   27433 main.go:141] libmachine: (ha-929592-m03)     <interface type='network'>
	I0914 17:06:50.316301   27433 main.go:141] libmachine: (ha-929592-m03)       <source network='default'/>
	I0914 17:06:50.316311   27433 main.go:141] libmachine: (ha-929592-m03)       <model type='virtio'/>
	I0914 17:06:50.316319   27433 main.go:141] libmachine: (ha-929592-m03)     </interface>
	I0914 17:06:50.316326   27433 main.go:141] libmachine: (ha-929592-m03)     <serial type='pty'>
	I0914 17:06:50.316334   27433 main.go:141] libmachine: (ha-929592-m03)       <target port='0'/>
	I0914 17:06:50.316340   27433 main.go:141] libmachine: (ha-929592-m03)     </serial>
	I0914 17:06:50.316349   27433 main.go:141] libmachine: (ha-929592-m03)     <console type='pty'>
	I0914 17:06:50.316356   27433 main.go:141] libmachine: (ha-929592-m03)       <target type='serial' port='0'/>
	I0914 17:06:50.316364   27433 main.go:141] libmachine: (ha-929592-m03)     </console>
	I0914 17:06:50.316373   27433 main.go:141] libmachine: (ha-929592-m03)     <rng model='virtio'>
	I0914 17:06:50.316394   27433 main.go:141] libmachine: (ha-929592-m03)       <backend model='random'>/dev/random</backend>
	I0914 17:06:50.316406   27433 main.go:141] libmachine: (ha-929592-m03)     </rng>
	I0914 17:06:50.316414   27433 main.go:141] libmachine: (ha-929592-m03)     
	I0914 17:06:50.316419   27433 main.go:141] libmachine: (ha-929592-m03)     
	I0914 17:06:50.316427   27433 main.go:141] libmachine: (ha-929592-m03)   </devices>
	I0914 17:06:50.316433   27433 main.go:141] libmachine: (ha-929592-m03) </domain>
	I0914 17:06:50.316443   27433 main.go:141] libmachine: (ha-929592-m03) 
	I0914 17:06:50.323266   27433 main.go:141] libmachine: (ha-929592-m03) DBG | domain ha-929592-m03 has defined MAC address 52:54:00:e5:cc:6e in network default
	I0914 17:06:50.323896   27433 main.go:141] libmachine: (ha-929592-m03) Ensuring networks are active...
	I0914 17:06:50.323918   27433 main.go:141] libmachine: (ha-929592-m03) DBG | domain ha-929592-m03 has defined MAC address 52:54:00:49:df:f1 in network mk-ha-929592
	I0914 17:06:50.324700   27433 main.go:141] libmachine: (ha-929592-m03) Ensuring network default is active
	I0914 17:06:50.324980   27433 main.go:141] libmachine: (ha-929592-m03) Ensuring network mk-ha-929592 is active
	I0914 17:06:50.325386   27433 main.go:141] libmachine: (ha-929592-m03) Getting domain xml...
	I0914 17:06:50.326282   27433 main.go:141] libmachine: (ha-929592-m03) Creating domain...
	I0914 17:06:51.593541   27433 main.go:141] libmachine: (ha-929592-m03) Waiting to get IP...
	I0914 17:06:51.594409   27433 main.go:141] libmachine: (ha-929592-m03) DBG | domain ha-929592-m03 has defined MAC address 52:54:00:49:df:f1 in network mk-ha-929592
	I0914 17:06:51.594884   27433 main.go:141] libmachine: (ha-929592-m03) DBG | unable to find current IP address of domain ha-929592-m03 in network mk-ha-929592
	I0914 17:06:51.594904   27433 main.go:141] libmachine: (ha-929592-m03) DBG | I0914 17:06:51.594870   28182 retry.go:31] will retry after 200.838126ms: waiting for machine to come up
	I0914 17:06:51.797364   27433 main.go:141] libmachine: (ha-929592-m03) DBG | domain ha-929592-m03 has defined MAC address 52:54:00:49:df:f1 in network mk-ha-929592
	I0914 17:06:51.798009   27433 main.go:141] libmachine: (ha-929592-m03) DBG | unable to find current IP address of domain ha-929592-m03 in network mk-ha-929592
	I0914 17:06:51.798034   27433 main.go:141] libmachine: (ha-929592-m03) DBG | I0914 17:06:51.797969   28182 retry.go:31] will retry after 313.647709ms: waiting for machine to come up
	I0914 17:06:52.113496   27433 main.go:141] libmachine: (ha-929592-m03) DBG | domain ha-929592-m03 has defined MAC address 52:54:00:49:df:f1 in network mk-ha-929592
	I0914 17:06:52.113947   27433 main.go:141] libmachine: (ha-929592-m03) DBG | unable to find current IP address of domain ha-929592-m03 in network mk-ha-929592
	I0914 17:06:52.113966   27433 main.go:141] libmachine: (ha-929592-m03) DBG | I0914 17:06:52.113898   28182 retry.go:31] will retry after 439.40481ms: waiting for machine to come up
	I0914 17:06:52.554781   27433 main.go:141] libmachine: (ha-929592-m03) DBG | domain ha-929592-m03 has defined MAC address 52:54:00:49:df:f1 in network mk-ha-929592
	I0914 17:06:52.555216   27433 main.go:141] libmachine: (ha-929592-m03) DBG | unable to find current IP address of domain ha-929592-m03 in network mk-ha-929592
	I0914 17:06:52.555242   27433 main.go:141] libmachine: (ha-929592-m03) DBG | I0914 17:06:52.555170   28182 retry.go:31] will retry after 393.848614ms: waiting for machine to come up
	I0914 17:06:52.950598   27433 main.go:141] libmachine: (ha-929592-m03) DBG | domain ha-929592-m03 has defined MAC address 52:54:00:49:df:f1 in network mk-ha-929592
	I0914 17:06:52.951214   27433 main.go:141] libmachine: (ha-929592-m03) DBG | unable to find current IP address of domain ha-929592-m03 in network mk-ha-929592
	I0914 17:06:52.951231   27433 main.go:141] libmachine: (ha-929592-m03) DBG | I0914 17:06:52.951168   28182 retry.go:31] will retry after 639.308693ms: waiting for machine to come up
	I0914 17:06:53.592100   27433 main.go:141] libmachine: (ha-929592-m03) DBG | domain ha-929592-m03 has defined MAC address 52:54:00:49:df:f1 in network mk-ha-929592
	I0914 17:06:53.592559   27433 main.go:141] libmachine: (ha-929592-m03) DBG | unable to find current IP address of domain ha-929592-m03 in network mk-ha-929592
	I0914 17:06:53.592592   27433 main.go:141] libmachine: (ha-929592-m03) DBG | I0914 17:06:53.592518   28182 retry.go:31] will retry after 835.193764ms: waiting for machine to come up
	I0914 17:06:54.428935   27433 main.go:141] libmachine: (ha-929592-m03) DBG | domain ha-929592-m03 has defined MAC address 52:54:00:49:df:f1 in network mk-ha-929592
	I0914 17:06:54.429451   27433 main.go:141] libmachine: (ha-929592-m03) DBG | unable to find current IP address of domain ha-929592-m03 in network mk-ha-929592
	I0914 17:06:54.429475   27433 main.go:141] libmachine: (ha-929592-m03) DBG | I0914 17:06:54.429380   28182 retry.go:31] will retry after 964.193112ms: waiting for machine to come up
	I0914 17:06:55.395171   27433 main.go:141] libmachine: (ha-929592-m03) DBG | domain ha-929592-m03 has defined MAC address 52:54:00:49:df:f1 in network mk-ha-929592
	I0914 17:06:55.395685   27433 main.go:141] libmachine: (ha-929592-m03) DBG | unable to find current IP address of domain ha-929592-m03 in network mk-ha-929592
	I0914 17:06:55.395709   27433 main.go:141] libmachine: (ha-929592-m03) DBG | I0914 17:06:55.395634   28182 retry.go:31] will retry after 1.437960076s: waiting for machine to come up
	I0914 17:06:56.835169   27433 main.go:141] libmachine: (ha-929592-m03) DBG | domain ha-929592-m03 has defined MAC address 52:54:00:49:df:f1 in network mk-ha-929592
	I0914 17:06:56.835619   27433 main.go:141] libmachine: (ha-929592-m03) DBG | unable to find current IP address of domain ha-929592-m03 in network mk-ha-929592
	I0914 17:06:56.835641   27433 main.go:141] libmachine: (ha-929592-m03) DBG | I0914 17:06:56.835566   28182 retry.go:31] will retry after 1.133546596s: waiting for machine to come up
	I0914 17:06:57.970597   27433 main.go:141] libmachine: (ha-929592-m03) DBG | domain ha-929592-m03 has defined MAC address 52:54:00:49:df:f1 in network mk-ha-929592
	I0914 17:06:57.971032   27433 main.go:141] libmachine: (ha-929592-m03) DBG | unable to find current IP address of domain ha-929592-m03 in network mk-ha-929592
	I0914 17:06:57.971063   27433 main.go:141] libmachine: (ha-929592-m03) DBG | I0914 17:06:57.970987   28182 retry.go:31] will retry after 2.230904983s: waiting for machine to come up
	I0914 17:07:00.204031   27433 main.go:141] libmachine: (ha-929592-m03) DBG | domain ha-929592-m03 has defined MAC address 52:54:00:49:df:f1 in network mk-ha-929592
	I0914 17:07:00.204476   27433 main.go:141] libmachine: (ha-929592-m03) DBG | unable to find current IP address of domain ha-929592-m03 in network mk-ha-929592
	I0914 17:07:00.204520   27433 main.go:141] libmachine: (ha-929592-m03) DBG | I0914 17:07:00.204458   28182 retry.go:31] will retry after 2.124636032s: waiting for machine to come up
	I0914 17:07:02.331821   27433 main.go:141] libmachine: (ha-929592-m03) DBG | domain ha-929592-m03 has defined MAC address 52:54:00:49:df:f1 in network mk-ha-929592
	I0914 17:07:02.332427   27433 main.go:141] libmachine: (ha-929592-m03) DBG | unable to find current IP address of domain ha-929592-m03 in network mk-ha-929592
	I0914 17:07:02.332454   27433 main.go:141] libmachine: (ha-929592-m03) DBG | I0914 17:07:02.332384   28182 retry.go:31] will retry after 2.29694632s: waiting for machine to come up
	I0914 17:07:04.631296   27433 main.go:141] libmachine: (ha-929592-m03) DBG | domain ha-929592-m03 has defined MAC address 52:54:00:49:df:f1 in network mk-ha-929592
	I0914 17:07:04.631779   27433 main.go:141] libmachine: (ha-929592-m03) DBG | unable to find current IP address of domain ha-929592-m03 in network mk-ha-929592
	I0914 17:07:04.631806   27433 main.go:141] libmachine: (ha-929592-m03) DBG | I0914 17:07:04.631744   28182 retry.go:31] will retry after 3.91983763s: waiting for machine to come up
	I0914 17:07:08.555144   27433 main.go:141] libmachine: (ha-929592-m03) DBG | domain ha-929592-m03 has defined MAC address 52:54:00:49:df:f1 in network mk-ha-929592
	I0914 17:07:08.555537   27433 main.go:141] libmachine: (ha-929592-m03) DBG | unable to find current IP address of domain ha-929592-m03 in network mk-ha-929592
	I0914 17:07:08.555559   27433 main.go:141] libmachine: (ha-929592-m03) DBG | I0914 17:07:08.555505   28182 retry.go:31] will retry after 4.766828714s: waiting for machine to come up
	I0914 17:07:13.324664   27433 main.go:141] libmachine: (ha-929592-m03) DBG | domain ha-929592-m03 has defined MAC address 52:54:00:49:df:f1 in network mk-ha-929592
	I0914 17:07:13.325434   27433 main.go:141] libmachine: (ha-929592-m03) Found IP for machine: 192.168.39.39
	I0914 17:07:13.325460   27433 main.go:141] libmachine: (ha-929592-m03) DBG | domain ha-929592-m03 has current primary IP address 192.168.39.39 and MAC address 52:54:00:49:df:f1 in network mk-ha-929592
	I0914 17:07:13.325469   27433 main.go:141] libmachine: (ha-929592-m03) Reserving static IP address...
	I0914 17:07:13.325740   27433 main.go:141] libmachine: (ha-929592-m03) DBG | unable to find host DHCP lease matching {name: "ha-929592-m03", mac: "52:54:00:49:df:f1", ip: "192.168.39.39"} in network mk-ha-929592
	I0914 17:07:13.401574   27433 main.go:141] libmachine: (ha-929592-m03) DBG | Getting to WaitForSSH function...
	I0914 17:07:13.401603   27433 main.go:141] libmachine: (ha-929592-m03) Reserved static IP address: 192.168.39.39
	I0914 17:07:13.401615   27433 main.go:141] libmachine: (ha-929592-m03) Waiting for SSH to be available...
	I0914 17:07:13.404445   27433 main.go:141] libmachine: (ha-929592-m03) DBG | domain ha-929592-m03 has defined MAC address 52:54:00:49:df:f1 in network mk-ha-929592
	I0914 17:07:13.404909   27433 main.go:141] libmachine: (ha-929592-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:df:f1", ip: ""} in network mk-ha-929592: {Iface:virbr1 ExpiryTime:2024-09-14 18:07:04 +0000 UTC Type:0 Mac:52:54:00:49:df:f1 Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:minikube Clientid:01:52:54:00:49:df:f1}
	I0914 17:07:13.404940   27433 main.go:141] libmachine: (ha-929592-m03) DBG | domain ha-929592-m03 has defined IP address 192.168.39.39 and MAC address 52:54:00:49:df:f1 in network mk-ha-929592
	I0914 17:07:13.405056   27433 main.go:141] libmachine: (ha-929592-m03) DBG | Using SSH client type: external
	I0914 17:07:13.405094   27433 main.go:141] libmachine: (ha-929592-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19643-8806/.minikube/machines/ha-929592-m03/id_rsa (-rw-------)
	I0914 17:07:13.405147   27433 main.go:141] libmachine: (ha-929592-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.39 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19643-8806/.minikube/machines/ha-929592-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0914 17:07:13.405170   27433 main.go:141] libmachine: (ha-929592-m03) DBG | About to run SSH command:
	I0914 17:07:13.405224   27433 main.go:141] libmachine: (ha-929592-m03) DBG | exit 0
	I0914 17:07:13.530202   27433 main.go:141] libmachine: (ha-929592-m03) DBG | SSH cmd err, output: <nil>: 
	I0914 17:07:13.530466   27433 main.go:141] libmachine: (ha-929592-m03) KVM machine creation complete!
	I0914 17:07:13.530781   27433 main.go:141] libmachine: (ha-929592-m03) Calling .GetConfigRaw
	I0914 17:07:13.531380   27433 main.go:141] libmachine: (ha-929592-m03) Calling .DriverName
	I0914 17:07:13.531612   27433 main.go:141] libmachine: (ha-929592-m03) Calling .DriverName
	I0914 17:07:13.531756   27433 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0914 17:07:13.531768   27433 main.go:141] libmachine: (ha-929592-m03) Calling .GetState
	I0914 17:07:13.533021   27433 main.go:141] libmachine: Detecting operating system of created instance...
	I0914 17:07:13.533034   27433 main.go:141] libmachine: Waiting for SSH to be available...
	I0914 17:07:13.533040   27433 main.go:141] libmachine: Getting to WaitForSSH function...
	I0914 17:07:13.533045   27433 main.go:141] libmachine: (ha-929592-m03) Calling .GetSSHHostname
	I0914 17:07:13.535327   27433 main.go:141] libmachine: (ha-929592-m03) DBG | domain ha-929592-m03 has defined MAC address 52:54:00:49:df:f1 in network mk-ha-929592
	I0914 17:07:13.535730   27433 main.go:141] libmachine: (ha-929592-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:df:f1", ip: ""} in network mk-ha-929592: {Iface:virbr1 ExpiryTime:2024-09-14 18:07:04 +0000 UTC Type:0 Mac:52:54:00:49:df:f1 Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:ha-929592-m03 Clientid:01:52:54:00:49:df:f1}
	I0914 17:07:13.535757   27433 main.go:141] libmachine: (ha-929592-m03) DBG | domain ha-929592-m03 has defined IP address 192.168.39.39 and MAC address 52:54:00:49:df:f1 in network mk-ha-929592
	I0914 17:07:13.535889   27433 main.go:141] libmachine: (ha-929592-m03) Calling .GetSSHPort
	I0914 17:07:13.536046   27433 main.go:141] libmachine: (ha-929592-m03) Calling .GetSSHKeyPath
	I0914 17:07:13.536188   27433 main.go:141] libmachine: (ha-929592-m03) Calling .GetSSHKeyPath
	I0914 17:07:13.536356   27433 main.go:141] libmachine: (ha-929592-m03) Calling .GetSSHUsername
	I0914 17:07:13.536501   27433 main.go:141] libmachine: Using SSH client type: native
	I0914 17:07:13.536699   27433 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.39 22 <nil> <nil>}
	I0914 17:07:13.536709   27433 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0914 17:07:13.641272   27433 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0914 17:07:13.641296   27433 main.go:141] libmachine: Detecting the provisioner...
	I0914 17:07:13.641308   27433 main.go:141] libmachine: (ha-929592-m03) Calling .GetSSHHostname
	I0914 17:07:13.643788   27433 main.go:141] libmachine: (ha-929592-m03) DBG | domain ha-929592-m03 has defined MAC address 52:54:00:49:df:f1 in network mk-ha-929592
	I0914 17:07:13.644117   27433 main.go:141] libmachine: (ha-929592-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:df:f1", ip: ""} in network mk-ha-929592: {Iface:virbr1 ExpiryTime:2024-09-14 18:07:04 +0000 UTC Type:0 Mac:52:54:00:49:df:f1 Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:ha-929592-m03 Clientid:01:52:54:00:49:df:f1}
	I0914 17:07:13.644149   27433 main.go:141] libmachine: (ha-929592-m03) DBG | domain ha-929592-m03 has defined IP address 192.168.39.39 and MAC address 52:54:00:49:df:f1 in network mk-ha-929592
	I0914 17:07:13.644268   27433 main.go:141] libmachine: (ha-929592-m03) Calling .GetSSHPort
	I0914 17:07:13.644457   27433 main.go:141] libmachine: (ha-929592-m03) Calling .GetSSHKeyPath
	I0914 17:07:13.644620   27433 main.go:141] libmachine: (ha-929592-m03) Calling .GetSSHKeyPath
	I0914 17:07:13.644732   27433 main.go:141] libmachine: (ha-929592-m03) Calling .GetSSHUsername
	I0914 17:07:13.645034   27433 main.go:141] libmachine: Using SSH client type: native
	I0914 17:07:13.645191   27433 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.39 22 <nil> <nil>}
	I0914 17:07:13.645202   27433 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0914 17:07:13.750656   27433 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0914 17:07:13.750730   27433 main.go:141] libmachine: found compatible host: buildroot
	I0914 17:07:13.750740   27433 main.go:141] libmachine: Provisioning with buildroot...
	I0914 17:07:13.750748   27433 main.go:141] libmachine: (ha-929592-m03) Calling .GetMachineName
	I0914 17:07:13.750984   27433 buildroot.go:166] provisioning hostname "ha-929592-m03"
	I0914 17:07:13.751012   27433 main.go:141] libmachine: (ha-929592-m03) Calling .GetMachineName
	I0914 17:07:13.751184   27433 main.go:141] libmachine: (ha-929592-m03) Calling .GetSSHHostname
	I0914 17:07:13.754244   27433 main.go:141] libmachine: (ha-929592-m03) DBG | domain ha-929592-m03 has defined MAC address 52:54:00:49:df:f1 in network mk-ha-929592
	I0914 17:07:13.754720   27433 main.go:141] libmachine: (ha-929592-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:df:f1", ip: ""} in network mk-ha-929592: {Iface:virbr1 ExpiryTime:2024-09-14 18:07:04 +0000 UTC Type:0 Mac:52:54:00:49:df:f1 Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:ha-929592-m03 Clientid:01:52:54:00:49:df:f1}
	I0914 17:07:13.754749   27433 main.go:141] libmachine: (ha-929592-m03) DBG | domain ha-929592-m03 has defined IP address 192.168.39.39 and MAC address 52:54:00:49:df:f1 in network mk-ha-929592
	I0914 17:07:13.754907   27433 main.go:141] libmachine: (ha-929592-m03) Calling .GetSSHPort
	I0914 17:07:13.755117   27433 main.go:141] libmachine: (ha-929592-m03) Calling .GetSSHKeyPath
	I0914 17:07:13.755296   27433 main.go:141] libmachine: (ha-929592-m03) Calling .GetSSHKeyPath
	I0914 17:07:13.755467   27433 main.go:141] libmachine: (ha-929592-m03) Calling .GetSSHUsername
	I0914 17:07:13.755674   27433 main.go:141] libmachine: Using SSH client type: native
	I0914 17:07:13.755831   27433 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.39 22 <nil> <nil>}
	I0914 17:07:13.755843   27433 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-929592-m03 && echo "ha-929592-m03" | sudo tee /etc/hostname
	I0914 17:07:13.876961   27433 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-929592-m03
	
	I0914 17:07:13.876988   27433 main.go:141] libmachine: (ha-929592-m03) Calling .GetSSHHostname
	I0914 17:07:13.879711   27433 main.go:141] libmachine: (ha-929592-m03) DBG | domain ha-929592-m03 has defined MAC address 52:54:00:49:df:f1 in network mk-ha-929592
	I0914 17:07:13.880064   27433 main.go:141] libmachine: (ha-929592-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:df:f1", ip: ""} in network mk-ha-929592: {Iface:virbr1 ExpiryTime:2024-09-14 18:07:04 +0000 UTC Type:0 Mac:52:54:00:49:df:f1 Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:ha-929592-m03 Clientid:01:52:54:00:49:df:f1}
	I0914 17:07:13.880084   27433 main.go:141] libmachine: (ha-929592-m03) DBG | domain ha-929592-m03 has defined IP address 192.168.39.39 and MAC address 52:54:00:49:df:f1 in network mk-ha-929592
	I0914 17:07:13.880284   27433 main.go:141] libmachine: (ha-929592-m03) Calling .GetSSHPort
	I0914 17:07:13.880457   27433 main.go:141] libmachine: (ha-929592-m03) Calling .GetSSHKeyPath
	I0914 17:07:13.880588   27433 main.go:141] libmachine: (ha-929592-m03) Calling .GetSSHKeyPath
	I0914 17:07:13.880672   27433 main.go:141] libmachine: (ha-929592-m03) Calling .GetSSHUsername
	I0914 17:07:13.880841   27433 main.go:141] libmachine: Using SSH client type: native
	I0914 17:07:13.881036   27433 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.39 22 <nil> <nil>}
	I0914 17:07:13.881058   27433 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-929592-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-929592-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-929592-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0914 17:07:13.994801   27433 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0914 17:07:13.994834   27433 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19643-8806/.minikube CaCertPath:/home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19643-8806/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19643-8806/.minikube}
	I0914 17:07:13.994853   27433 buildroot.go:174] setting up certificates
	I0914 17:07:13.994863   27433 provision.go:84] configureAuth start
	I0914 17:07:13.994872   27433 main.go:141] libmachine: (ha-929592-m03) Calling .GetMachineName
	I0914 17:07:13.995128   27433 main.go:141] libmachine: (ha-929592-m03) Calling .GetIP
	I0914 17:07:13.997466   27433 main.go:141] libmachine: (ha-929592-m03) DBG | domain ha-929592-m03 has defined MAC address 52:54:00:49:df:f1 in network mk-ha-929592
	I0914 17:07:13.997846   27433 main.go:141] libmachine: (ha-929592-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:df:f1", ip: ""} in network mk-ha-929592: {Iface:virbr1 ExpiryTime:2024-09-14 18:07:04 +0000 UTC Type:0 Mac:52:54:00:49:df:f1 Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:ha-929592-m03 Clientid:01:52:54:00:49:df:f1}
	I0914 17:07:13.997878   27433 main.go:141] libmachine: (ha-929592-m03) DBG | domain ha-929592-m03 has defined IP address 192.168.39.39 and MAC address 52:54:00:49:df:f1 in network mk-ha-929592
	I0914 17:07:13.998074   27433 main.go:141] libmachine: (ha-929592-m03) Calling .GetSSHHostname
	I0914 17:07:14.000477   27433 main.go:141] libmachine: (ha-929592-m03) DBG | domain ha-929592-m03 has defined MAC address 52:54:00:49:df:f1 in network mk-ha-929592
	I0914 17:07:14.000823   27433 main.go:141] libmachine: (ha-929592-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:df:f1", ip: ""} in network mk-ha-929592: {Iface:virbr1 ExpiryTime:2024-09-14 18:07:04 +0000 UTC Type:0 Mac:52:54:00:49:df:f1 Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:ha-929592-m03 Clientid:01:52:54:00:49:df:f1}
	I0914 17:07:14.000849   27433 main.go:141] libmachine: (ha-929592-m03) DBG | domain ha-929592-m03 has defined IP address 192.168.39.39 and MAC address 52:54:00:49:df:f1 in network mk-ha-929592
	I0914 17:07:14.001022   27433 provision.go:143] copyHostCerts
	I0914 17:07:14.001054   27433 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19643-8806/.minikube/cert.pem
	I0914 17:07:14.001086   27433 exec_runner.go:144] found /home/jenkins/minikube-integration/19643-8806/.minikube/cert.pem, removing ...
	I0914 17:07:14.001096   27433 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19643-8806/.minikube/cert.pem
	I0914 17:07:14.001164   27433 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19643-8806/.minikube/cert.pem (1123 bytes)
	I0914 17:07:14.001239   27433 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19643-8806/.minikube/key.pem
	I0914 17:07:14.001257   27433 exec_runner.go:144] found /home/jenkins/minikube-integration/19643-8806/.minikube/key.pem, removing ...
	I0914 17:07:14.001263   27433 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19643-8806/.minikube/key.pem
	I0914 17:07:14.001286   27433 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19643-8806/.minikube/key.pem (1675 bytes)
	I0914 17:07:14.001344   27433 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19643-8806/.minikube/ca.pem
	I0914 17:07:14.001361   27433 exec_runner.go:144] found /home/jenkins/minikube-integration/19643-8806/.minikube/ca.pem, removing ...
	I0914 17:07:14.001367   27433 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19643-8806/.minikube/ca.pem
	I0914 17:07:14.001388   27433 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19643-8806/.minikube/ca.pem (1082 bytes)
	I0914 17:07:14.001437   27433 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19643-8806/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca-key.pem org=jenkins.ha-929592-m03 san=[127.0.0.1 192.168.39.39 ha-929592-m03 localhost minikube]
	I0914 17:07:14.186720   27433 provision.go:177] copyRemoteCerts
	I0914 17:07:14.186780   27433 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0914 17:07:14.186804   27433 main.go:141] libmachine: (ha-929592-m03) Calling .GetSSHHostname
	I0914 17:07:14.189322   27433 main.go:141] libmachine: (ha-929592-m03) DBG | domain ha-929592-m03 has defined MAC address 52:54:00:49:df:f1 in network mk-ha-929592
	I0914 17:07:14.189635   27433 main.go:141] libmachine: (ha-929592-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:df:f1", ip: ""} in network mk-ha-929592: {Iface:virbr1 ExpiryTime:2024-09-14 18:07:04 +0000 UTC Type:0 Mac:52:54:00:49:df:f1 Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:ha-929592-m03 Clientid:01:52:54:00:49:df:f1}
	I0914 17:07:14.189665   27433 main.go:141] libmachine: (ha-929592-m03) DBG | domain ha-929592-m03 has defined IP address 192.168.39.39 and MAC address 52:54:00:49:df:f1 in network mk-ha-929592
	I0914 17:07:14.189807   27433 main.go:141] libmachine: (ha-929592-m03) Calling .GetSSHPort
	I0914 17:07:14.190094   27433 main.go:141] libmachine: (ha-929592-m03) Calling .GetSSHKeyPath
	I0914 17:07:14.190290   27433 main.go:141] libmachine: (ha-929592-m03) Calling .GetSSHUsername
	I0914 17:07:14.190499   27433 sshutil.go:53] new ssh client: &{IP:192.168.39.39 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/ha-929592-m03/id_rsa Username:docker}
	I0914 17:07:14.273407   27433 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0914 17:07:14.273472   27433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0914 17:07:14.298629   27433 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19643-8806/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0914 17:07:14.298702   27433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0914 17:07:14.323719   27433 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19643-8806/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0914 17:07:14.323790   27433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0914 17:07:14.349008   27433 provision.go:87] duration metric: took 354.131771ms to configureAuth
	I0914 17:07:14.349042   27433 buildroot.go:189] setting minikube options for container-runtime
	I0914 17:07:14.349265   27433 config.go:182] Loaded profile config "ha-929592": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0914 17:07:14.349341   27433 main.go:141] libmachine: (ha-929592-m03) Calling .GetSSHHostname
	I0914 17:07:14.351884   27433 main.go:141] libmachine: (ha-929592-m03) DBG | domain ha-929592-m03 has defined MAC address 52:54:00:49:df:f1 in network mk-ha-929592
	I0914 17:07:14.352193   27433 main.go:141] libmachine: (ha-929592-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:df:f1", ip: ""} in network mk-ha-929592: {Iface:virbr1 ExpiryTime:2024-09-14 18:07:04 +0000 UTC Type:0 Mac:52:54:00:49:df:f1 Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:ha-929592-m03 Clientid:01:52:54:00:49:df:f1}
	I0914 17:07:14.352228   27433 main.go:141] libmachine: (ha-929592-m03) DBG | domain ha-929592-m03 has defined IP address 192.168.39.39 and MAC address 52:54:00:49:df:f1 in network mk-ha-929592
	I0914 17:07:14.352371   27433 main.go:141] libmachine: (ha-929592-m03) Calling .GetSSHPort
	I0914 17:07:14.352615   27433 main.go:141] libmachine: (ha-929592-m03) Calling .GetSSHKeyPath
	I0914 17:07:14.352788   27433 main.go:141] libmachine: (ha-929592-m03) Calling .GetSSHKeyPath
	I0914 17:07:14.352934   27433 main.go:141] libmachine: (ha-929592-m03) Calling .GetSSHUsername
	I0914 17:07:14.353086   27433 main.go:141] libmachine: Using SSH client type: native
	I0914 17:07:14.353238   27433 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.39 22 <nil> <nil>}
	I0914 17:07:14.353252   27433 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0914 17:07:14.581057   27433 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0914 17:07:14.581084   27433 main.go:141] libmachine: Checking connection to Docker...
	I0914 17:07:14.581094   27433 main.go:141] libmachine: (ha-929592-m03) Calling .GetURL
	I0914 17:07:14.582388   27433 main.go:141] libmachine: (ha-929592-m03) DBG | Using libvirt version 6000000
	I0914 17:07:14.585025   27433 main.go:141] libmachine: (ha-929592-m03) DBG | domain ha-929592-m03 has defined MAC address 52:54:00:49:df:f1 in network mk-ha-929592
	I0914 17:07:14.585421   27433 main.go:141] libmachine: (ha-929592-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:df:f1", ip: ""} in network mk-ha-929592: {Iface:virbr1 ExpiryTime:2024-09-14 18:07:04 +0000 UTC Type:0 Mac:52:54:00:49:df:f1 Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:ha-929592-m03 Clientid:01:52:54:00:49:df:f1}
	I0914 17:07:14.585455   27433 main.go:141] libmachine: (ha-929592-m03) DBG | domain ha-929592-m03 has defined IP address 192.168.39.39 and MAC address 52:54:00:49:df:f1 in network mk-ha-929592
	I0914 17:07:14.585617   27433 main.go:141] libmachine: Docker is up and running!
	I0914 17:07:14.585632   27433 main.go:141] libmachine: Reticulating splines...
	I0914 17:07:14.585640   27433 client.go:171] duration metric: took 24.605908814s to LocalClient.Create
	I0914 17:07:14.585666   27433 start.go:167] duration metric: took 24.605970622s to libmachine.API.Create "ha-929592"
	I0914 17:07:14.585677   27433 start.go:293] postStartSetup for "ha-929592-m03" (driver="kvm2")
	I0914 17:07:14.585692   27433 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0914 17:07:14.585743   27433 main.go:141] libmachine: (ha-929592-m03) Calling .DriverName
	I0914 17:07:14.585965   27433 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0914 17:07:14.585987   27433 main.go:141] libmachine: (ha-929592-m03) Calling .GetSSHHostname
	I0914 17:07:14.588146   27433 main.go:141] libmachine: (ha-929592-m03) DBG | domain ha-929592-m03 has defined MAC address 52:54:00:49:df:f1 in network mk-ha-929592
	I0914 17:07:14.588465   27433 main.go:141] libmachine: (ha-929592-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:df:f1", ip: ""} in network mk-ha-929592: {Iface:virbr1 ExpiryTime:2024-09-14 18:07:04 +0000 UTC Type:0 Mac:52:54:00:49:df:f1 Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:ha-929592-m03 Clientid:01:52:54:00:49:df:f1}
	I0914 17:07:14.588487   27433 main.go:141] libmachine: (ha-929592-m03) DBG | domain ha-929592-m03 has defined IP address 192.168.39.39 and MAC address 52:54:00:49:df:f1 in network mk-ha-929592
	I0914 17:07:14.588623   27433 main.go:141] libmachine: (ha-929592-m03) Calling .GetSSHPort
	I0914 17:07:14.588789   27433 main.go:141] libmachine: (ha-929592-m03) Calling .GetSSHKeyPath
	I0914 17:07:14.589040   27433 main.go:141] libmachine: (ha-929592-m03) Calling .GetSSHUsername
	I0914 17:07:14.589255   27433 sshutil.go:53] new ssh client: &{IP:192.168.39.39 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/ha-929592-m03/id_rsa Username:docker}
	I0914 17:07:14.672938   27433 ssh_runner.go:195] Run: cat /etc/os-release
	I0914 17:07:14.677354   27433 info.go:137] Remote host: Buildroot 2023.02.9
	I0914 17:07:14.677381   27433 filesync.go:126] Scanning /home/jenkins/minikube-integration/19643-8806/.minikube/addons for local assets ...
	I0914 17:07:14.677450   27433 filesync.go:126] Scanning /home/jenkins/minikube-integration/19643-8806/.minikube/files for local assets ...
	I0914 17:07:14.677518   27433 filesync.go:149] local asset: /home/jenkins/minikube-integration/19643-8806/.minikube/files/etc/ssl/certs/160162.pem -> 160162.pem in /etc/ssl/certs
	I0914 17:07:14.677527   27433 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19643-8806/.minikube/files/etc/ssl/certs/160162.pem -> /etc/ssl/certs/160162.pem
	I0914 17:07:14.677625   27433 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0914 17:07:14.687459   27433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/files/etc/ssl/certs/160162.pem --> /etc/ssl/certs/160162.pem (1708 bytes)
	I0914 17:07:14.714644   27433 start.go:296] duration metric: took 128.952663ms for postStartSetup
	I0914 17:07:14.714698   27433 main.go:141] libmachine: (ha-929592-m03) Calling .GetConfigRaw
	I0914 17:07:14.715290   27433 main.go:141] libmachine: (ha-929592-m03) Calling .GetIP
	I0914 17:07:14.718212   27433 main.go:141] libmachine: (ha-929592-m03) DBG | domain ha-929592-m03 has defined MAC address 52:54:00:49:df:f1 in network mk-ha-929592
	I0914 17:07:14.718594   27433 main.go:141] libmachine: (ha-929592-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:df:f1", ip: ""} in network mk-ha-929592: {Iface:virbr1 ExpiryTime:2024-09-14 18:07:04 +0000 UTC Type:0 Mac:52:54:00:49:df:f1 Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:ha-929592-m03 Clientid:01:52:54:00:49:df:f1}
	I0914 17:07:14.718622   27433 main.go:141] libmachine: (ha-929592-m03) DBG | domain ha-929592-m03 has defined IP address 192.168.39.39 and MAC address 52:54:00:49:df:f1 in network mk-ha-929592
	I0914 17:07:14.719033   27433 profile.go:143] Saving config to /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/ha-929592/config.json ...
	I0914 17:07:14.719244   27433 start.go:128] duration metric: took 24.759482258s to createHost
	I0914 17:07:14.719273   27433 main.go:141] libmachine: (ha-929592-m03) Calling .GetSSHHostname
	I0914 17:07:14.721996   27433 main.go:141] libmachine: (ha-929592-m03) DBG | domain ha-929592-m03 has defined MAC address 52:54:00:49:df:f1 in network mk-ha-929592
	I0914 17:07:14.722410   27433 main.go:141] libmachine: (ha-929592-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:df:f1", ip: ""} in network mk-ha-929592: {Iface:virbr1 ExpiryTime:2024-09-14 18:07:04 +0000 UTC Type:0 Mac:52:54:00:49:df:f1 Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:ha-929592-m03 Clientid:01:52:54:00:49:df:f1}
	I0914 17:07:14.722437   27433 main.go:141] libmachine: (ha-929592-m03) DBG | domain ha-929592-m03 has defined IP address 192.168.39.39 and MAC address 52:54:00:49:df:f1 in network mk-ha-929592
	I0914 17:07:14.722588   27433 main.go:141] libmachine: (ha-929592-m03) Calling .GetSSHPort
	I0914 17:07:14.722810   27433 main.go:141] libmachine: (ha-929592-m03) Calling .GetSSHKeyPath
	I0914 17:07:14.722949   27433 main.go:141] libmachine: (ha-929592-m03) Calling .GetSSHKeyPath
	I0914 17:07:14.723063   27433 main.go:141] libmachine: (ha-929592-m03) Calling .GetSSHUsername
	I0914 17:07:14.723268   27433 main.go:141] libmachine: Using SSH client type: native
	I0914 17:07:14.723475   27433 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.39 22 <nil> <nil>}
	I0914 17:07:14.723490   27433 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0914 17:07:14.830713   27433 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726333634.808024922
	
	I0914 17:07:14.830732   27433 fix.go:216] guest clock: 1726333634.808024922
	I0914 17:07:14.830740   27433 fix.go:229] Guest: 2024-09-14 17:07:14.808024922 +0000 UTC Remote: 2024-09-14 17:07:14.719257775 +0000 UTC m=+142.390455536 (delta=88.767147ms)
	I0914 17:07:14.830754   27433 fix.go:200] guest clock delta is within tolerance: 88.767147ms
	I0914 17:07:14.830759   27433 start.go:83] releasing machines lock for "ha-929592-m03", held for 24.871132115s
	I0914 17:07:14.830776   27433 main.go:141] libmachine: (ha-929592-m03) Calling .DriverName
	I0914 17:07:14.831059   27433 main.go:141] libmachine: (ha-929592-m03) Calling .GetIP
	I0914 17:07:14.833686   27433 main.go:141] libmachine: (ha-929592-m03) DBG | domain ha-929592-m03 has defined MAC address 52:54:00:49:df:f1 in network mk-ha-929592
	I0914 17:07:14.834135   27433 main.go:141] libmachine: (ha-929592-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:df:f1", ip: ""} in network mk-ha-929592: {Iface:virbr1 ExpiryTime:2024-09-14 18:07:04 +0000 UTC Type:0 Mac:52:54:00:49:df:f1 Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:ha-929592-m03 Clientid:01:52:54:00:49:df:f1}
	I0914 17:07:14.834181   27433 main.go:141] libmachine: (ha-929592-m03) DBG | domain ha-929592-m03 has defined IP address 192.168.39.39 and MAC address 52:54:00:49:df:f1 in network mk-ha-929592
	I0914 17:07:14.836475   27433 out.go:177] * Found network options:
	I0914 17:07:14.837543   27433 out.go:177]   - NO_PROXY=192.168.39.54,192.168.39.148
	W0914 17:07:14.838926   27433 proxy.go:119] fail to check proxy env: Error ip not in block
	W0914 17:07:14.838951   27433 proxy.go:119] fail to check proxy env: Error ip not in block
	I0914 17:07:14.838967   27433 main.go:141] libmachine: (ha-929592-m03) Calling .DriverName
	I0914 17:07:14.839606   27433 main.go:141] libmachine: (ha-929592-m03) Calling .DriverName
	I0914 17:07:14.839788   27433 main.go:141] libmachine: (ha-929592-m03) Calling .DriverName
	I0914 17:07:14.839890   27433 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0914 17:07:14.839932   27433 main.go:141] libmachine: (ha-929592-m03) Calling .GetSSHHostname
	W0914 17:07:14.840000   27433 proxy.go:119] fail to check proxy env: Error ip not in block
	W0914 17:07:14.840039   27433 proxy.go:119] fail to check proxy env: Error ip not in block
	I0914 17:07:14.840105   27433 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0914 17:07:14.840131   27433 main.go:141] libmachine: (ha-929592-m03) Calling .GetSSHHostname
	I0914 17:07:14.842687   27433 main.go:141] libmachine: (ha-929592-m03) DBG | domain ha-929592-m03 has defined MAC address 52:54:00:49:df:f1 in network mk-ha-929592
	I0914 17:07:14.842834   27433 main.go:141] libmachine: (ha-929592-m03) DBG | domain ha-929592-m03 has defined MAC address 52:54:00:49:df:f1 in network mk-ha-929592
	I0914 17:07:14.843104   27433 main.go:141] libmachine: (ha-929592-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:df:f1", ip: ""} in network mk-ha-929592: {Iface:virbr1 ExpiryTime:2024-09-14 18:07:04 +0000 UTC Type:0 Mac:52:54:00:49:df:f1 Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:ha-929592-m03 Clientid:01:52:54:00:49:df:f1}
	I0914 17:07:14.843135   27433 main.go:141] libmachine: (ha-929592-m03) DBG | domain ha-929592-m03 has defined IP address 192.168.39.39 and MAC address 52:54:00:49:df:f1 in network mk-ha-929592
	I0914 17:07:14.843272   27433 main.go:141] libmachine: (ha-929592-m03) Calling .GetSSHPort
	I0914 17:07:14.843373   27433 main.go:141] libmachine: (ha-929592-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:df:f1", ip: ""} in network mk-ha-929592: {Iface:virbr1 ExpiryTime:2024-09-14 18:07:04 +0000 UTC Type:0 Mac:52:54:00:49:df:f1 Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:ha-929592-m03 Clientid:01:52:54:00:49:df:f1}
	I0914 17:07:14.843396   27433 main.go:141] libmachine: (ha-929592-m03) DBG | domain ha-929592-m03 has defined IP address 192.168.39.39 and MAC address 52:54:00:49:df:f1 in network mk-ha-929592
	I0914 17:07:14.843439   27433 main.go:141] libmachine: (ha-929592-m03) Calling .GetSSHKeyPath
	I0914 17:07:14.843587   27433 main.go:141] libmachine: (ha-929592-m03) Calling .GetSSHPort
	I0914 17:07:14.843632   27433 main.go:141] libmachine: (ha-929592-m03) Calling .GetSSHUsername
	I0914 17:07:14.843708   27433 main.go:141] libmachine: (ha-929592-m03) Calling .GetSSHKeyPath
	I0914 17:07:14.843750   27433 sshutil.go:53] new ssh client: &{IP:192.168.39.39 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/ha-929592-m03/id_rsa Username:docker}
	I0914 17:07:14.843874   27433 main.go:141] libmachine: (ha-929592-m03) Calling .GetSSHUsername
	I0914 17:07:14.844012   27433 sshutil.go:53] new ssh client: &{IP:192.168.39.39 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/ha-929592-m03/id_rsa Username:docker}
	I0914 17:07:15.088977   27433 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0914 17:07:15.094790   27433 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0914 17:07:15.094865   27433 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0914 17:07:15.110819   27433 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0914 17:07:15.110845   27433 start.go:495] detecting cgroup driver to use...
	I0914 17:07:15.110902   27433 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0914 17:07:15.129575   27433 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0914 17:07:15.144157   27433 docker.go:217] disabling cri-docker service (if available) ...
	I0914 17:07:15.144209   27433 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0914 17:07:15.158840   27433 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0914 17:07:15.172747   27433 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0914 17:07:15.286758   27433 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0914 17:07:15.433698   27433 docker.go:233] disabling docker service ...
	I0914 17:07:15.433766   27433 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0914 17:07:15.448613   27433 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0914 17:07:15.462147   27433 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0914 17:07:15.599607   27433 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0914 17:07:15.723635   27433 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0914 17:07:15.738666   27433 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0914 17:07:15.758494   27433 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0914 17:07:15.758555   27433 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 17:07:15.772003   27433 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0914 17:07:15.772077   27433 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 17:07:15.783795   27433 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 17:07:15.795318   27433 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 17:07:15.806340   27433 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0914 17:07:15.816626   27433 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 17:07:15.827989   27433 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 17:07:15.844682   27433 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 17:07:15.854673   27433 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0914 17:07:15.864167   27433 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0914 17:07:15.864218   27433 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0914 17:07:15.878610   27433 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0914 17:07:15.888865   27433 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 17:07:15.996873   27433 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0914 17:07:16.084308   27433 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0914 17:07:16.084378   27433 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0914 17:07:16.089222   27433 start.go:563] Will wait 60s for crictl version
	I0914 17:07:16.089276   27433 ssh_runner.go:195] Run: which crictl
	I0914 17:07:16.092822   27433 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0914 17:07:16.128255   27433 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0914 17:07:16.128362   27433 ssh_runner.go:195] Run: crio --version
	I0914 17:07:16.156435   27433 ssh_runner.go:195] Run: crio --version
	I0914 17:07:16.185307   27433 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0914 17:07:16.186498   27433 out.go:177]   - env NO_PROXY=192.168.39.54
	I0914 17:07:16.187780   27433 out.go:177]   - env NO_PROXY=192.168.39.54,192.168.39.148
	I0914 17:07:16.189038   27433 main.go:141] libmachine: (ha-929592-m03) Calling .GetIP
	I0914 17:07:16.191764   27433 main.go:141] libmachine: (ha-929592-m03) DBG | domain ha-929592-m03 has defined MAC address 52:54:00:49:df:f1 in network mk-ha-929592
	I0914 17:07:16.192143   27433 main.go:141] libmachine: (ha-929592-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:df:f1", ip: ""} in network mk-ha-929592: {Iface:virbr1 ExpiryTime:2024-09-14 18:07:04 +0000 UTC Type:0 Mac:52:54:00:49:df:f1 Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:ha-929592-m03 Clientid:01:52:54:00:49:df:f1}
	I0914 17:07:16.192166   27433 main.go:141] libmachine: (ha-929592-m03) DBG | domain ha-929592-m03 has defined IP address 192.168.39.39 and MAC address 52:54:00:49:df:f1 in network mk-ha-929592
	I0914 17:07:16.192408   27433 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0914 17:07:16.196706   27433 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0914 17:07:16.209144   27433 mustload.go:65] Loading cluster: ha-929592
	I0914 17:07:16.209417   27433 config.go:182] Loaded profile config "ha-929592": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0914 17:07:16.209682   27433 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 17:07:16.209721   27433 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 17:07:16.224831   27433 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44103
	I0914 17:07:16.225273   27433 main.go:141] libmachine: () Calling .GetVersion
	I0914 17:07:16.225816   27433 main.go:141] libmachine: Using API Version  1
	I0914 17:07:16.225843   27433 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 17:07:16.226138   27433 main.go:141] libmachine: () Calling .GetMachineName
	I0914 17:07:16.226315   27433 main.go:141] libmachine: (ha-929592) Calling .GetState
	I0914 17:07:16.227704   27433 host.go:66] Checking if "ha-929592" exists ...
	I0914 17:07:16.228102   27433 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 17:07:16.228146   27433 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 17:07:16.242690   27433 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36155
	I0914 17:07:16.243081   27433 main.go:141] libmachine: () Calling .GetVersion
	I0914 17:07:16.243552   27433 main.go:141] libmachine: Using API Version  1
	I0914 17:07:16.243573   27433 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 17:07:16.243935   27433 main.go:141] libmachine: () Calling .GetMachineName
	I0914 17:07:16.244132   27433 main.go:141] libmachine: (ha-929592) Calling .DriverName
	I0914 17:07:16.244309   27433 certs.go:68] Setting up /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/ha-929592 for IP: 192.168.39.39
	I0914 17:07:16.244323   27433 certs.go:194] generating shared ca certs ...
	I0914 17:07:16.244339   27433 certs.go:226] acquiring lock for ca certs: {Name:mkb663a3180967f5f94f0c355b2cd55067394331 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 17:07:16.244469   27433 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19643-8806/.minikube/ca.key
	I0914 17:07:16.244521   27433 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19643-8806/.minikube/proxy-client-ca.key
	I0914 17:07:16.244533   27433 certs.go:256] generating profile certs ...
	I0914 17:07:16.244631   27433 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/ha-929592/client.key
	I0914 17:07:16.244662   27433 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/ha-929592/apiserver.key.3049b98d
	I0914 17:07:16.244680   27433 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/ha-929592/apiserver.crt.3049b98d with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.54 192.168.39.148 192.168.39.39 192.168.39.254]
	I0914 17:07:16.555188   27433 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/ha-929592/apiserver.crt.3049b98d ...
	I0914 17:07:16.555218   27433 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/ha-929592/apiserver.crt.3049b98d: {Name:mk293944dbe0571c4a4a3bd4d63886ec79fd8aae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 17:07:16.555415   27433 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/ha-929592/apiserver.key.3049b98d ...
	I0914 17:07:16.555435   27433 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/ha-929592/apiserver.key.3049b98d: {Name:mkab68f22df16a01bf03af3d7236b02f34cdef65 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 17:07:16.555543   27433 certs.go:381] copying /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/ha-929592/apiserver.crt.3049b98d -> /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/ha-929592/apiserver.crt
	I0914 17:07:16.555702   27433 certs.go:385] copying /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/ha-929592/apiserver.key.3049b98d -> /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/ha-929592/apiserver.key
	I0914 17:07:16.555858   27433 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/ha-929592/proxy-client.key
	I0914 17:07:16.555875   27433 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19643-8806/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0914 17:07:16.555893   27433 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19643-8806/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0914 17:07:16.555910   27433 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19643-8806/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0914 17:07:16.555930   27433 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19643-8806/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0914 17:07:16.555949   27433 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/ha-929592/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0914 17:07:16.555968   27433 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/ha-929592/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0914 17:07:16.555986   27433 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/ha-929592/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0914 17:07:16.570279   27433 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/ha-929592/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0914 17:07:16.570409   27433 certs.go:484] found cert: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/16016.pem (1338 bytes)
	W0914 17:07:16.570460   27433 certs.go:480] ignoring /home/jenkins/minikube-integration/19643-8806/.minikube/certs/16016_empty.pem, impossibly tiny 0 bytes
	I0914 17:07:16.570473   27433 certs.go:484] found cert: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca-key.pem (1679 bytes)
	I0914 17:07:16.570507   27433 certs.go:484] found cert: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca.pem (1082 bytes)
	I0914 17:07:16.570540   27433 certs.go:484] found cert: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/cert.pem (1123 bytes)
	I0914 17:07:16.570572   27433 certs.go:484] found cert: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/key.pem (1675 bytes)
	I0914 17:07:16.570629   27433 certs.go:484] found cert: /home/jenkins/minikube-integration/19643-8806/.minikube/files/etc/ssl/certs/160162.pem (1708 bytes)
	I0914 17:07:16.570680   27433 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19643-8806/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0914 17:07:16.570702   27433 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/16016.pem -> /usr/share/ca-certificates/16016.pem
	I0914 17:07:16.570724   27433 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19643-8806/.minikube/files/etc/ssl/certs/160162.pem -> /usr/share/ca-certificates/160162.pem
	I0914 17:07:16.570772   27433 main.go:141] libmachine: (ha-929592) Calling .GetSSHHostname
	I0914 17:07:16.573823   27433 main.go:141] libmachine: (ha-929592) DBG | domain ha-929592 has defined MAC address 52:54:00:5c:cb:09 in network mk-ha-929592
	I0914 17:07:16.574264   27433 main.go:141] libmachine: (ha-929592) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:cb:09", ip: ""} in network mk-ha-929592: {Iface:virbr1 ExpiryTime:2024-09-14 18:05:06 +0000 UTC Type:0 Mac:52:54:00:5c:cb:09 Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:ha-929592 Clientid:01:52:54:00:5c:cb:09}
	I0914 17:07:16.574292   27433 main.go:141] libmachine: (ha-929592) DBG | domain ha-929592 has defined IP address 192.168.39.54 and MAC address 52:54:00:5c:cb:09 in network mk-ha-929592
	I0914 17:07:16.574464   27433 main.go:141] libmachine: (ha-929592) Calling .GetSSHPort
	I0914 17:07:16.574669   27433 main.go:141] libmachine: (ha-929592) Calling .GetSSHKeyPath
	I0914 17:07:16.574848   27433 main.go:141] libmachine: (ha-929592) Calling .GetSSHUsername
	I0914 17:07:16.574961   27433 sshutil.go:53] new ssh client: &{IP:192.168.39.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/ha-929592/id_rsa Username:docker}
	I0914 17:07:16.654584   27433 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0914 17:07:16.660317   27433 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0914 17:07:16.671440   27433 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0914 17:07:16.677084   27433 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0914 17:07:16.687544   27433 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0914 17:07:16.691970   27433 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0914 17:07:16.703302   27433 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0914 17:07:16.707644   27433 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0914 17:07:16.719098   27433 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0914 17:07:16.723753   27433 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0914 17:07:16.742558   27433 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0914 17:07:16.746769   27433 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0914 17:07:16.759625   27433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0914 17:07:16.787721   27433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0914 17:07:16.812656   27433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0914 17:07:16.835889   27433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0914 17:07:16.860258   27433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/ha-929592/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0914 17:07:16.884399   27433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/ha-929592/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0914 17:07:16.909899   27433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/ha-929592/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0914 17:07:16.934622   27433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/ha-929592/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0914 17:07:16.959438   27433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0914 17:07:16.982628   27433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/certs/16016.pem --> /usr/share/ca-certificates/16016.pem (1338 bytes)
	I0914 17:07:17.005524   27433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/files/etc/ssl/certs/160162.pem --> /usr/share/ca-certificates/160162.pem (1708 bytes)
	I0914 17:07:17.031425   27433 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0914 17:07:17.047634   27433 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0914 17:07:17.064668   27433 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0914 17:07:17.080829   27433 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0914 17:07:17.097388   27433 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0914 17:07:17.113555   27433 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0914 17:07:17.131406   27433 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0914 17:07:17.148831   27433 ssh_runner.go:195] Run: openssl version
	I0914 17:07:17.155139   27433 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/160162.pem && ln -fs /usr/share/ca-certificates/160162.pem /etc/ssl/certs/160162.pem"
	I0914 17:07:17.166934   27433 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/160162.pem
	I0914 17:07:17.171390   27433 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 14 17:01 /usr/share/ca-certificates/160162.pem
	I0914 17:07:17.171450   27433 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/160162.pem
	I0914 17:07:17.177195   27433 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/160162.pem /etc/ssl/certs/3ec20f2e.0"
	I0914 17:07:17.187600   27433 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0914 17:07:17.198704   27433 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0914 17:07:17.203174   27433 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 14 16:44 /usr/share/ca-certificates/minikubeCA.pem
	I0914 17:07:17.203227   27433 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0914 17:07:17.208809   27433 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0914 17:07:17.219464   27433 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16016.pem && ln -fs /usr/share/ca-certificates/16016.pem /etc/ssl/certs/16016.pem"
	I0914 17:07:17.230052   27433 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16016.pem
	I0914 17:07:17.234895   27433 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 14 17:01 /usr/share/ca-certificates/16016.pem
	I0914 17:07:17.234970   27433 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16016.pem
	I0914 17:07:17.241057   27433 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16016.pem /etc/ssl/certs/51391683.0"
	I0914 17:07:17.253229   27433 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0914 17:07:17.257647   27433 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0914 17:07:17.257708   27433 kubeadm.go:934] updating node {m03 192.168.39.39 8443 v1.31.1 crio true true} ...
	I0914 17:07:17.257784   27433 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-929592-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.39
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-929592 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0914 17:07:17.257809   27433 kube-vip.go:115] generating kube-vip config ...
	I0914 17:07:17.257843   27433 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0914 17:07:17.274638   27433 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0914 17:07:17.274697   27433 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
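
The YAML dumped above is the kube-vip static-pod manifest that is later copied to /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes, a few lines further down), giving the control-plane nodes a shared virtual IP of 192.168.39.254 on port 8443. The sketch below shows one way such a manifest could be rendered with text/template; it is not minikube's kube-vip.go template and is trimmed to a handful of the env vars seen in the log, purely for illustration.

package main

import (
	"os"
	"text/template"
)

// A deliberately simplified kube-vip static-pod template. Only a few of the
// env vars from the real manifest are kept.
const manifestTmpl = `apiVersion: v1
kind: Pod
metadata:
  name: kube-vip
  namespace: kube-system
spec:
  containers:
  - name: kube-vip
    image: {{.Image}}
    args: ["manager"]
    env:
    - name: address
      value: "{{.VIP}}"
    - name: port
      value: "{{.Port}}"
    - name: cp_enable
      value: "true"
  hostNetwork: true
`

type vipConfig struct {
	Image string
	VIP   string
	Port  string
}

func main() {
	t := template.Must(template.New("kube-vip").Parse(manifestTmpl))
	// Values taken from the log: VIP 192.168.39.254 on port 8443.
	cfg := vipConfig{Image: "ghcr.io/kube-vip/kube-vip:v0.8.0", VIP: "192.168.39.254", Port: "8443"}
	if err := t.Execute(os.Stdout, cfg); err != nil {
		os.Exit(1)
	}
}
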
	I0914 17:07:17.274742   27433 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0914 17:07:17.284442   27433 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.1': No such file or directory
	
	Initiating transfer...
	I0914 17:07:17.284516   27433 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.1
	I0914 17:07:17.293975   27433 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256
	I0914 17:07:17.294003   27433 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19643-8806/.minikube/cache/linux/amd64/v1.31.1/kubectl -> /var/lib/minikube/binaries/v1.31.1/kubectl
	I0914 17:07:17.294035   27433 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm.sha256
	I0914 17:07:17.294058   27433 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19643-8806/.minikube/cache/linux/amd64/v1.31.1/kubeadm -> /var/lib/minikube/binaries/v1.31.1/kubeadm
	I0914 17:07:17.294061   27433 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl
	I0914 17:07:17.294114   27433 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm
	I0914 17:07:17.294034   27433 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet.sha256
	I0914 17:07:17.294185   27433 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 17:07:17.307956   27433 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubeadm': No such file or directory
	I0914 17:07:17.307987   27433 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19643-8806/.minikube/cache/linux/amd64/v1.31.1/kubelet -> /var/lib/minikube/binaries/v1.31.1/kubelet
	I0914 17:07:17.307990   27433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/cache/linux/amd64/v1.31.1/kubeadm --> /var/lib/minikube/binaries/v1.31.1/kubeadm (58290328 bytes)
	I0914 17:07:17.308030   27433 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubectl': No such file or directory
	I0914 17:07:17.308057   27433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/cache/linux/amd64/v1.31.1/kubectl --> /var/lib/minikube/binaries/v1.31.1/kubectl (56381592 bytes)
	I0914 17:07:17.308068   27433 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet
	I0914 17:07:17.340653   27433 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubelet': No such file or directory
	I0914 17:07:17.340701   27433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/cache/linux/amd64/v1.31.1/kubelet --> /var/lib/minikube/binaries/v1.31.1/kubelet (76869944 bytes)
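
The "Not caching binary, using https://dl.k8s.io/...?checksum=file:...sha256" lines above show kubectl, kubeadm and kubelet being fetched from the Kubernetes release bucket, verified against the published .sha256 files, and scp'd into /var/lib/minikube/binaries on the node. A minimal Go sketch of that download-and-verify step follows; minikube's own download and cache helpers differ, and this assumes a direct HTTP fetch of a single binary.

package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"os"
	"strings"
)

// fetch downloads url into memory.
func fetch(url string) ([]byte, error) {
	resp, err := http.Get(url)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return nil, fmt.Errorf("GET %s: %s", url, resp.Status)
	}
	return io.ReadAll(resp.Body)
}

func main() {
	base := "https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl"

	bin, err := fetch(base)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	sumFile, err := fetch(base + ".sha256")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}

	// The .sha256 file carries the hex digest; compare before installing.
	got := sha256.Sum256(bin)
	want := strings.Fields(string(sumFile))[0]
	if hex.EncodeToString(got[:]) != want {
		fmt.Fprintln(os.Stderr, "checksum mismatch")
		os.Exit(1)
	}
	if err := os.WriteFile("kubectl", bin, 0o755); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
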
	I0914 17:07:18.120396   27433 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0914 17:07:18.130120   27433 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0914 17:07:18.147144   27433 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0914 17:07:18.163645   27433 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0914 17:07:18.179930   27433 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0914 17:07:18.183757   27433 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0914 17:07:18.195632   27433 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 17:07:18.309959   27433 ssh_runner.go:195] Run: sudo systemctl start kubelet
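
The grep/echo pipeline two lines above keeps the /etc/hosts entry for control-plane.minikube.internal idempotent: any stale line for that hostname is dropped and the current VIP is appended, so repeated provisioning runs do not accumulate duplicates. A small Go sketch of the same edit, with the path, IP and hostname taken from the log and the helper itself purely illustrative:

package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostsEntry removes any existing line for host and appends "ip\thost".
func ensureHostsEntry(path, ip, host string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		fields := strings.Fields(line)
		if len(fields) >= 2 && fields[len(fields)-1] == host {
			continue // drop any previous entry for this hostname
		}
		kept = append(kept, line)
	}
	kept = append(kept, fmt.Sprintf("%s\t%s", ip, host))
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
}

func main() {
	if err := ensureHostsEntry("/etc/hosts", "192.168.39.254", "control-plane.minikube.internal"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
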
	I0914 17:07:18.327594   27433 host.go:66] Checking if "ha-929592" exists ...
	I0914 17:07:18.327934   27433 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 17:07:18.327995   27433 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 17:07:18.344958   27433 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39999
	I0914 17:07:18.345522   27433 main.go:141] libmachine: () Calling .GetVersion
	I0914 17:07:18.346106   27433 main.go:141] libmachine: Using API Version  1
	I0914 17:07:18.346127   27433 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 17:07:18.346507   27433 main.go:141] libmachine: () Calling .GetMachineName
	I0914 17:07:18.346686   27433 main.go:141] libmachine: (ha-929592) Calling .DriverName
	I0914 17:07:18.346847   27433 start.go:317] joinCluster: &{Name:ha-929592 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19643/minikube-v1.34.0-1726281733-19643-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726281268-19643@sha256:cce8be0c1ac4e3d852132008ef1cc1dcf5b79f708d025db83f146ae65db32e8e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-929592 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.54 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.148 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.39 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0914 17:07:18.346974   27433 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0914 17:07:18.346995   27433 main.go:141] libmachine: (ha-929592) Calling .GetSSHHostname
	I0914 17:07:18.350241   27433 main.go:141] libmachine: (ha-929592) DBG | domain ha-929592 has defined MAC address 52:54:00:5c:cb:09 in network mk-ha-929592
	I0914 17:07:18.350751   27433 main.go:141] libmachine: (ha-929592) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:cb:09", ip: ""} in network mk-ha-929592: {Iface:virbr1 ExpiryTime:2024-09-14 18:05:06 +0000 UTC Type:0 Mac:52:54:00:5c:cb:09 Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:ha-929592 Clientid:01:52:54:00:5c:cb:09}
	I0914 17:07:18.350781   27433 main.go:141] libmachine: (ha-929592) DBG | domain ha-929592 has defined IP address 192.168.39.54 and MAC address 52:54:00:5c:cb:09 in network mk-ha-929592
	I0914 17:07:18.350984   27433 main.go:141] libmachine: (ha-929592) Calling .GetSSHPort
	I0914 17:07:18.351165   27433 main.go:141] libmachine: (ha-929592) Calling .GetSSHKeyPath
	I0914 17:07:18.351322   27433 main.go:141] libmachine: (ha-929592) Calling .GetSSHUsername
	I0914 17:07:18.351493   27433 sshutil.go:53] new ssh client: &{IP:192.168.39.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/ha-929592/id_rsa Username:docker}
	I0914 17:07:18.506210   27433 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.39 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0914 17:07:18.506264   27433 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token u5ad97.nitviectgjwmq8kn --discovery-token-ca-cert-hash sha256:30a8b6e98108730ec92a246ad3f4e746211e79f910581e46cd69c4cd78384600 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-929592-m03 --control-plane --apiserver-advertise-address=192.168.39.39 --apiserver-bind-port=8443"
	I0914 17:07:41.528053   27433 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token u5ad97.nitviectgjwmq8kn --discovery-token-ca-cert-hash sha256:30a8b6e98108730ec92a246ad3f4e746211e79f910581e46cd69c4cd78384600 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-929592-m03 --control-plane --apiserver-advertise-address=192.168.39.39 --apiserver-bind-port=8443": (23.021765461s)
	I0914 17:07:41.528091   27433 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0914 17:07:42.019670   27433 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-929592-m03 minikube.k8s.io/updated_at=2024_09_14T17_07_42_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=fbeeb744274463b05401c917e5ab21bbaf5ef95a minikube.k8s.io/name=ha-929592 minikube.k8s.io/primary=false
	I0914 17:07:42.171268   27433 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-929592-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0914 17:07:42.295912   27433 start.go:319] duration metric: took 23.949060276s to joinCluster
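
Once kubeadm join completes, the two kubectl invocations above stamp the new node with minikube's metadata labels (minikube.k8s.io/name, minikube.k8s.io/primary=false, etc.) and remove the control-plane NoSchedule taint so the node can also run workloads (it is registered with Worker:true). A minimal client-go sketch of an equivalent label-and-untaint step follows; it is not how minikube performs it (the log shows it shelling out to kubectl), and the kubeconfig path is illustrative.

package main

import (
	"context"
	"fmt"
	"os"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// labelAndUntaint applies a couple of the labels from the log and drops the
// control-plane NoSchedule taint on the named node.
func labelAndUntaint(ctx context.Context, cs *kubernetes.Clientset, name string) error {
	node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return err
	}
	if node.Labels == nil {
		node.Labels = map[string]string{}
	}
	node.Labels["minikube.k8s.io/name"] = "ha-929592"
	node.Labels["minikube.k8s.io/primary"] = "false"

	// Mirror "taint nodes ... node-role.kubernetes.io/control-plane:NoSchedule-".
	kept := node.Spec.Taints[:0]
	for _, t := range node.Spec.Taints {
		if t.Key == "node-role.kubernetes.io/control-plane" && t.Effect == corev1.TaintEffectNoSchedule {
			continue
		}
		kept = append(kept, t)
	}
	node.Spec.Taints = kept

	_, err = cs.CoreV1().Nodes().Update(ctx, node, metav1.UpdateOptions{})
	return err
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", os.ExpandEnv("$HOME/.kube/config"))
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	if err := labelAndUntaint(context.Background(), cs, "ha-929592-m03"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
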
	I0914 17:07:42.295986   27433 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.39.39 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0914 17:07:42.296305   27433 config.go:182] Loaded profile config "ha-929592": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0914 17:07:42.297747   27433 out.go:177] * Verifying Kubernetes components...
	I0914 17:07:42.299464   27433 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 17:07:42.487043   27433 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0914 17:07:42.509749   27433 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19643-8806/kubeconfig
	I0914 17:07:42.510103   27433 kapi.go:59] client config for ha-929592: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19643-8806/.minikube/profiles/ha-929592/client.crt", KeyFile:"/home/jenkins/minikube-integration/19643-8806/.minikube/profiles/ha-929592/client.key", CAFile:"/home/jenkins/minikube-integration/19643-8806/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f6fca0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0914 17:07:42.510224   27433 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.54:8443
	I0914 17:07:42.510505   27433 node_ready.go:35] waiting up to 6m0s for node "ha-929592-m03" to be "Ready" ...
	I0914 17:07:42.510592   27433 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-929592-m03
	I0914 17:07:42.510603   27433 round_trippers.go:469] Request Headers:
	I0914 17:07:42.510615   27433 round_trippers.go:473]     Accept: application/json, */*
	I0914 17:07:42.510623   27433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 17:07:42.514443   27433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 17:07:43.011413   27433 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-929592-m03
	I0914 17:07:43.011440   27433 round_trippers.go:469] Request Headers:
	I0914 17:07:43.011450   27433 round_trippers.go:473]     Accept: application/json, */*
	I0914 17:07:43.011455   27433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 17:07:43.014989   27433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 17:07:43.511236   27433 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-929592-m03
	I0914 17:07:43.511266   27433 round_trippers.go:469] Request Headers:
	I0914 17:07:43.511275   27433 round_trippers.go:473]     Accept: application/json, */*
	I0914 17:07:43.511279   27433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 17:07:43.514916   27433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 17:07:44.010785   27433 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-929592-m03
	I0914 17:07:44.010812   27433 round_trippers.go:469] Request Headers:
	I0914 17:07:44.010823   27433 round_trippers.go:473]     Accept: application/json, */*
	I0914 17:07:44.010833   27433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 17:07:44.014331   27433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 17:07:44.511105   27433 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-929592-m03
	I0914 17:07:44.511126   27433 round_trippers.go:469] Request Headers:
	I0914 17:07:44.511136   27433 round_trippers.go:473]     Accept: application/json, */*
	I0914 17:07:44.511141   27433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 17:07:44.515073   27433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 17:07:44.515807   27433 node_ready.go:53] node "ha-929592-m03" has status "Ready":"False"
	I0914 17:07:45.011166   27433 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-929592-m03
	I0914 17:07:45.011189   27433 round_trippers.go:469] Request Headers:
	I0914 17:07:45.011199   27433 round_trippers.go:473]     Accept: application/json, */*
	I0914 17:07:45.011205   27433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 17:07:45.014925   27433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 17:07:45.511405   27433 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-929592-m03
	I0914 17:07:45.511441   27433 round_trippers.go:469] Request Headers:
	I0914 17:07:45.511453   27433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 17:07:45.511460   27433 round_trippers.go:473]     Accept: application/json, */*
	I0914 17:07:45.515149   27433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 17:07:46.011420   27433 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-929592-m03
	I0914 17:07:46.011446   27433 round_trippers.go:469] Request Headers:
	I0914 17:07:46.011454   27433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 17:07:46.011458   27433 round_trippers.go:473]     Accept: application/json, */*
	I0914 17:07:46.016666   27433 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0914 17:07:46.511346   27433 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-929592-m03
	I0914 17:07:46.511372   27433 round_trippers.go:469] Request Headers:
	I0914 17:07:46.511384   27433 round_trippers.go:473]     Accept: application/json, */*
	I0914 17:07:46.511390   27433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 17:07:46.514823   27433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 17:07:47.010782   27433 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-929592-m03
	I0914 17:07:47.010803   27433 round_trippers.go:469] Request Headers:
	I0914 17:07:47.010811   27433 round_trippers.go:473]     Accept: application/json, */*
	I0914 17:07:47.010815   27433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 17:07:47.014205   27433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 17:07:47.015167   27433 node_ready.go:53] node "ha-929592-m03" has status "Ready":"False"
	I0914 17:07:47.511176   27433 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-929592-m03
	I0914 17:07:47.511204   27433 round_trippers.go:469] Request Headers:
	I0914 17:07:47.511215   27433 round_trippers.go:473]     Accept: application/json, */*
	I0914 17:07:47.511220   27433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 17:07:47.514771   27433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 17:07:48.011464   27433 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-929592-m03
	I0914 17:07:48.011495   27433 round_trippers.go:469] Request Headers:
	I0914 17:07:48.011508   27433 round_trippers.go:473]     Accept: application/json, */*
	I0914 17:07:48.011513   27433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 17:07:48.014851   27433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 17:07:48.510761   27433 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-929592-m03
	I0914 17:07:48.510781   27433 round_trippers.go:469] Request Headers:
	I0914 17:07:48.510790   27433 round_trippers.go:473]     Accept: application/json, */*
	I0914 17:07:48.510798   27433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 17:07:48.514178   27433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 17:07:49.010982   27433 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-929592-m03
	I0914 17:07:49.011004   27433 round_trippers.go:469] Request Headers:
	I0914 17:07:49.011012   27433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 17:07:49.011015   27433 round_trippers.go:473]     Accept: application/json, */*
	I0914 17:07:49.014046   27433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 17:07:49.510942   27433 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-929592-m03
	I0914 17:07:49.510965   27433 round_trippers.go:469] Request Headers:
	I0914 17:07:49.510973   27433 round_trippers.go:473]     Accept: application/json, */*
	I0914 17:07:49.510977   27433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 17:07:49.514316   27433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 17:07:49.515138   27433 node_ready.go:53] node "ha-929592-m03" has status "Ready":"False"
	I0914 17:07:50.011544   27433 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-929592-m03
	I0914 17:07:50.011568   27433 round_trippers.go:469] Request Headers:
	I0914 17:07:50.011581   27433 round_trippers.go:473]     Accept: application/json, */*
	I0914 17:07:50.011586   27433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 17:07:50.015427   27433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 17:07:50.510672   27433 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-929592-m03
	I0914 17:07:50.510694   27433 round_trippers.go:469] Request Headers:
	I0914 17:07:50.510702   27433 round_trippers.go:473]     Accept: application/json, */*
	I0914 17:07:50.510710   27433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 17:07:50.513629   27433 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 17:07:51.011048   27433 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-929592-m03
	I0914 17:07:51.011070   27433 round_trippers.go:469] Request Headers:
	I0914 17:07:51.011078   27433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 17:07:51.011084   27433 round_trippers.go:473]     Accept: application/json, */*
	I0914 17:07:51.014109   27433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 17:07:51.511653   27433 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-929592-m03
	I0914 17:07:51.511678   27433 round_trippers.go:469] Request Headers:
	I0914 17:07:51.511689   27433 round_trippers.go:473]     Accept: application/json, */*
	I0914 17:07:51.511695   27433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 17:07:51.515229   27433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 17:07:51.515942   27433 node_ready.go:53] node "ha-929592-m03" has status "Ready":"False"
	I0914 17:07:52.011425   27433 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-929592-m03
	I0914 17:07:52.011452   27433 round_trippers.go:469] Request Headers:
	I0914 17:07:52.011464   27433 round_trippers.go:473]     Accept: application/json, */*
	I0914 17:07:52.011469   27433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 17:07:52.019846   27433 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0914 17:07:52.510858   27433 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-929592-m03
	I0914 17:07:52.510880   27433 round_trippers.go:469] Request Headers:
	I0914 17:07:52.510891   27433 round_trippers.go:473]     Accept: application/json, */*
	I0914 17:07:52.510898   27433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 17:07:52.514404   27433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 17:07:53.011440   27433 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-929592-m03
	I0914 17:07:53.011465   27433 round_trippers.go:469] Request Headers:
	I0914 17:07:53.011477   27433 round_trippers.go:473]     Accept: application/json, */*
	I0914 17:07:53.011485   27433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 17:07:53.014917   27433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 17:07:53.511224   27433 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-929592-m03
	I0914 17:07:53.511245   27433 round_trippers.go:469] Request Headers:
	I0914 17:07:53.511253   27433 round_trippers.go:473]     Accept: application/json, */*
	I0914 17:07:53.511257   27433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 17:07:53.514437   27433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 17:07:54.011402   27433 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-929592-m03
	I0914 17:07:54.011428   27433 round_trippers.go:469] Request Headers:
	I0914 17:07:54.011440   27433 round_trippers.go:473]     Accept: application/json, */*
	I0914 17:07:54.011448   27433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 17:07:54.015375   27433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 17:07:54.015952   27433 node_ready.go:53] node "ha-929592-m03" has status "Ready":"False"
	I0914 17:07:54.511426   27433 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-929592-m03
	I0914 17:07:54.511452   27433 round_trippers.go:469] Request Headers:
	I0914 17:07:54.511463   27433 round_trippers.go:473]     Accept: application/json, */*
	I0914 17:07:54.511472   27433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 17:07:54.514757   27433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 17:07:55.011159   27433 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-929592-m03
	I0914 17:07:55.011198   27433 round_trippers.go:469] Request Headers:
	I0914 17:07:55.011209   27433 round_trippers.go:473]     Accept: application/json, */*
	I0914 17:07:55.011214   27433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 17:07:55.015773   27433 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0914 17:07:55.511126   27433 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-929592-m03
	I0914 17:07:55.511150   27433 round_trippers.go:469] Request Headers:
	I0914 17:07:55.511157   27433 round_trippers.go:473]     Accept: application/json, */*
	I0914 17:07:55.511162   27433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 17:07:55.514253   27433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 17:07:56.011556   27433 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-929592-m03
	I0914 17:07:56.011580   27433 round_trippers.go:469] Request Headers:
	I0914 17:07:56.011591   27433 round_trippers.go:473]     Accept: application/json, */*
	I0914 17:07:56.011597   27433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 17:07:56.014897   27433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 17:07:56.510753   27433 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-929592-m03
	I0914 17:07:56.510778   27433 round_trippers.go:469] Request Headers:
	I0914 17:07:56.510788   27433 round_trippers.go:473]     Accept: application/json, */*
	I0914 17:07:56.510793   27433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 17:07:56.513948   27433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 17:07:56.514410   27433 node_ready.go:53] node "ha-929592-m03" has status "Ready":"False"
	I0914 17:07:57.010683   27433 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-929592-m03
	I0914 17:07:57.010707   27433 round_trippers.go:469] Request Headers:
	I0914 17:07:57.010717   27433 round_trippers.go:473]     Accept: application/json, */*
	I0914 17:07:57.010721   27433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 17:07:57.014048   27433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 17:07:57.511695   27433 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-929592-m03
	I0914 17:07:57.511717   27433 round_trippers.go:469] Request Headers:
	I0914 17:07:57.511726   27433 round_trippers.go:473]     Accept: application/json, */*
	I0914 17:07:57.511731   27433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 17:07:57.515681   27433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 17:07:58.011422   27433 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-929592-m03
	I0914 17:07:58.011444   27433 round_trippers.go:469] Request Headers:
	I0914 17:07:58.011452   27433 round_trippers.go:473]     Accept: application/json, */*
	I0914 17:07:58.011457   27433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 17:07:58.014905   27433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 17:07:58.511392   27433 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-929592-m03
	I0914 17:07:58.511414   27433 round_trippers.go:469] Request Headers:
	I0914 17:07:58.511423   27433 round_trippers.go:473]     Accept: application/json, */*
	I0914 17:07:58.511431   27433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 17:07:58.514718   27433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 17:07:58.515272   27433 node_ready.go:53] node "ha-929592-m03" has status "Ready":"False"
	I0914 17:07:59.010735   27433 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-929592-m03
	I0914 17:07:59.010761   27433 round_trippers.go:469] Request Headers:
	I0914 17:07:59.010769   27433 round_trippers.go:473]     Accept: application/json, */*
	I0914 17:07:59.010772   27433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 17:07:59.014521   27433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 17:07:59.511489   27433 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-929592-m03
	I0914 17:07:59.511513   27433 round_trippers.go:469] Request Headers:
	I0914 17:07:59.511523   27433 round_trippers.go:473]     Accept: application/json, */*
	I0914 17:07:59.511530   27433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 17:07:59.514753   27433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 17:07:59.515339   27433 node_ready.go:49] node "ha-929592-m03" has status "Ready":"True"
	I0914 17:07:59.515357   27433 node_ready.go:38] duration metric: took 17.004834009s for node "ha-929592-m03" to be "Ready" ...
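
The long run of GET /api/v1/nodes/ha-929592-m03 requests above is the readiness wait: the node is polled roughly every 500ms until its Ready condition turns True, within a 6m budget (here it took about 17s). A minimal client-go sketch of the same loop follows; it is not minikube's node_ready.go, and the kubeconfig path and node name are taken from the log for illustration.

package main

import (
	"context"
	"fmt"
	"os"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitNodeReady polls the node until its Ready condition is True or the
// timeout elapses, mirroring the GET loop in the log above.
func waitNodeReady(ctx context.Context, cs *kubernetes.Clientset, name string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true,
		func(ctx context.Context) (bool, error) {
			node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, nil // treat errors as transient and keep polling
			}
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", os.ExpandEnv("$HOME/.kube/config"))
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	if err := waitNodeReady(context.Background(), cs, "ha-929592-m03", 6*time.Minute); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
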
	I0914 17:07:59.515365   27433 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 17:07:59.515434   27433 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/namespaces/kube-system/pods
	I0914 17:07:59.515444   27433 round_trippers.go:469] Request Headers:
	I0914 17:07:59.515450   27433 round_trippers.go:473]     Accept: application/json, */*
	I0914 17:07:59.515455   27433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 17:07:59.522045   27433 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0914 17:07:59.528668   27433 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-66txm" in "kube-system" namespace to be "Ready" ...
	I0914 17:07:59.528756   27433 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-66txm
	I0914 17:07:59.528767   27433 round_trippers.go:469] Request Headers:
	I0914 17:07:59.528774   27433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 17:07:59.528781   27433 round_trippers.go:473]     Accept: application/json, */*
	I0914 17:07:59.531693   27433 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 17:07:59.532279   27433 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-929592
	I0914 17:07:59.532294   27433 round_trippers.go:469] Request Headers:
	I0914 17:07:59.532303   27433 round_trippers.go:473]     Accept: application/json, */*
	I0914 17:07:59.532308   27433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 17:07:59.534773   27433 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 17:07:59.535256   27433 pod_ready.go:93] pod "coredns-7c65d6cfc9-66txm" in "kube-system" namespace has status "Ready":"True"
	I0914 17:07:59.535273   27433 pod_ready.go:82] duration metric: took 6.579112ms for pod "coredns-7c65d6cfc9-66txm" in "kube-system" namespace to be "Ready" ...
	I0914 17:07:59.535288   27433 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-dpdz4" in "kube-system" namespace to be "Ready" ...
	I0914 17:07:59.535372   27433 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-dpdz4
	I0914 17:07:59.535382   27433 round_trippers.go:469] Request Headers:
	I0914 17:07:59.535394   27433 round_trippers.go:473]     Accept: application/json, */*
	I0914 17:07:59.535404   27433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 17:07:59.537717   27433 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 17:07:59.538650   27433 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-929592
	I0914 17:07:59.538663   27433 round_trippers.go:469] Request Headers:
	I0914 17:07:59.538673   27433 round_trippers.go:473]     Accept: application/json, */*
	I0914 17:07:59.538682   27433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 17:07:59.541062   27433 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 17:07:59.541448   27433 pod_ready.go:93] pod "coredns-7c65d6cfc9-dpdz4" in "kube-system" namespace has status "Ready":"True"
	I0914 17:07:59.541466   27433 pod_ready.go:82] duration metric: took 6.151987ms for pod "coredns-7c65d6cfc9-dpdz4" in "kube-system" namespace to be "Ready" ...
	I0914 17:07:59.541478   27433 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-929592" in "kube-system" namespace to be "Ready" ...
	I0914 17:07:59.541535   27433 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/namespaces/kube-system/pods/etcd-ha-929592
	I0914 17:07:59.541545   27433 round_trippers.go:469] Request Headers:
	I0914 17:07:59.541555   27433 round_trippers.go:473]     Accept: application/json, */*
	I0914 17:07:59.541564   27433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 17:07:59.544527   27433 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 17:07:59.545638   27433 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-929592
	I0914 17:07:59.545655   27433 round_trippers.go:469] Request Headers:
	I0914 17:07:59.545665   27433 round_trippers.go:473]     Accept: application/json, */*
	I0914 17:07:59.545671   27433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 17:07:59.548376   27433 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 17:07:59.548981   27433 pod_ready.go:93] pod "etcd-ha-929592" in "kube-system" namespace has status "Ready":"True"
	I0914 17:07:59.548996   27433 pod_ready.go:82] duration metric: took 7.512177ms for pod "etcd-ha-929592" in "kube-system" namespace to be "Ready" ...
	I0914 17:07:59.549005   27433 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-929592-m02" in "kube-system" namespace to be "Ready" ...
	I0914 17:07:59.549051   27433 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/namespaces/kube-system/pods/etcd-ha-929592-m02
	I0914 17:07:59.549058   27433 round_trippers.go:469] Request Headers:
	I0914 17:07:59.549065   27433 round_trippers.go:473]     Accept: application/json, */*
	I0914 17:07:59.549070   27433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 17:07:59.551588   27433 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 17:07:59.552368   27433 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-929592-m02
	I0914 17:07:59.552383   27433 round_trippers.go:469] Request Headers:
	I0914 17:07:59.552390   27433 round_trippers.go:473]     Accept: application/json, */*
	I0914 17:07:59.552394   27433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 17:07:59.554872   27433 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 17:07:59.555856   27433 pod_ready.go:93] pod "etcd-ha-929592-m02" in "kube-system" namespace has status "Ready":"True"
	I0914 17:07:59.555876   27433 pod_ready.go:82] duration metric: took 6.864629ms for pod "etcd-ha-929592-m02" in "kube-system" namespace to be "Ready" ...
	I0914 17:07:59.555887   27433 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-929592-m03" in "kube-system" namespace to be "Ready" ...
	I0914 17:07:59.712256   27433 request.go:632] Waited for 156.310735ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.54:8443/api/v1/namespaces/kube-system/pods/etcd-ha-929592-m03
	I0914 17:07:59.712343   27433 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/namespaces/kube-system/pods/etcd-ha-929592-m03
	I0914 17:07:59.712353   27433 round_trippers.go:469] Request Headers:
	I0914 17:07:59.712361   27433 round_trippers.go:473]     Accept: application/json, */*
	I0914 17:07:59.712365   27433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 17:07:59.715318   27433 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 17:07:59.912436   27433 request.go:632] Waited for 196.378799ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.54:8443/api/v1/nodes/ha-929592-m03
	I0914 17:07:59.912490   27433 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-929592-m03
	I0914 17:07:59.912496   27433 round_trippers.go:469] Request Headers:
	I0914 17:07:59.912506   27433 round_trippers.go:473]     Accept: application/json, */*
	I0914 17:07:59.912516   27433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 17:07:59.915904   27433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 17:07:59.916335   27433 pod_ready.go:93] pod "etcd-ha-929592-m03" in "kube-system" namespace has status "Ready":"True"
	I0914 17:07:59.916351   27433 pod_ready.go:82] duration metric: took 360.458353ms for pod "etcd-ha-929592-m03" in "kube-system" namespace to be "Ready" ...
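
The "Waited for ... due to client-side throttling, not priority and fairness" messages that start appearing here come from client-go's default rate limiter: the rest.Config dumped earlier has QPS:0 and Burst:0, which client-go treats as 5 QPS with a burst of 10, so a burst of small GETs gets spaced out. A minimal sketch of raising those limits for a client that issues many such requests (values are illustrative, not minikube's):

package main

import (
	"fmt"
	"os"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// newFastClient builds a clientset with a higher client-side rate limit so
// bursts of GETs are not throttled into the waits seen in this report.
func newFastClient(kubeconfig string) (*kubernetes.Clientset, error) {
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		return nil, err
	}
	// Zero values fall back to client-go's defaults (5 QPS / burst 10).
	cfg.QPS = 50
	cfg.Burst = 100
	return kubernetes.NewForConfig(cfg)
}

func main() {
	cs, err := newFastClient(os.ExpandEnv("$HOME/.kube/config"))
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	_ = cs // use the clientset as usual
}
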
	I0914 17:07:59.916366   27433 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-929592" in "kube-system" namespace to be "Ready" ...
	I0914 17:08:00.111821   27433 request.go:632] Waited for 195.355844ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.54:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-929592
	I0914 17:08:00.111900   27433 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-929592
	I0914 17:08:00.111946   27433 round_trippers.go:469] Request Headers:
	I0914 17:08:00.111962   27433 round_trippers.go:473]     Accept: application/json, */*
	I0914 17:08:00.111970   27433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 17:08:00.115605   27433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 17:08:00.311543   27433 request.go:632] Waited for 195.332136ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.54:8443/api/v1/nodes/ha-929592
	I0914 17:08:00.311595   27433 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-929592
	I0914 17:08:00.311602   27433 round_trippers.go:469] Request Headers:
	I0914 17:08:00.311610   27433 round_trippers.go:473]     Accept: application/json, */*
	I0914 17:08:00.311615   27433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 17:08:00.314945   27433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 17:08:00.315616   27433 pod_ready.go:93] pod "kube-apiserver-ha-929592" in "kube-system" namespace has status "Ready":"True"
	I0914 17:08:00.315636   27433 pod_ready.go:82] duration metric: took 399.261529ms for pod "kube-apiserver-ha-929592" in "kube-system" namespace to be "Ready" ...
	I0914 17:08:00.315645   27433 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-929592-m02" in "kube-system" namespace to be "Ready" ...
	I0914 17:08:00.511723   27433 request.go:632] Waited for 196.0201ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.54:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-929592-m02
	I0914 17:08:00.511801   27433 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-929592-m02
	I0914 17:08:00.511808   27433 round_trippers.go:469] Request Headers:
	I0914 17:08:00.511816   27433 round_trippers.go:473]     Accept: application/json, */*
	I0914 17:08:00.511821   27433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 17:08:00.515903   27433 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0914 17:08:00.711977   27433 request.go:632] Waited for 195.376236ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.54:8443/api/v1/nodes/ha-929592-m02
	I0914 17:08:00.712065   27433 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-929592-m02
	I0914 17:08:00.712075   27433 round_trippers.go:469] Request Headers:
	I0914 17:08:00.712086   27433 round_trippers.go:473]     Accept: application/json, */*
	I0914 17:08:00.712110   27433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 17:08:00.715693   27433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 17:08:00.716183   27433 pod_ready.go:93] pod "kube-apiserver-ha-929592-m02" in "kube-system" namespace has status "Ready":"True"
	I0914 17:08:00.716205   27433 pod_ready.go:82] duration metric: took 400.553404ms for pod "kube-apiserver-ha-929592-m02" in "kube-system" namespace to be "Ready" ...
	I0914 17:08:00.716214   27433 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-929592-m03" in "kube-system" namespace to be "Ready" ...
	I0914 17:08:00.912270   27433 request.go:632] Waited for 195.977695ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.54:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-929592-m03
	I0914 17:08:00.912360   27433 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-929592-m03
	I0914 17:08:00.912372   27433 round_trippers.go:469] Request Headers:
	I0914 17:08:00.912384   27433 round_trippers.go:473]     Accept: application/json, */*
	I0914 17:08:00.912391   27433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 17:08:00.915823   27433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 17:08:01.111913   27433 request.go:632] Waited for 195.353778ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.54:8443/api/v1/nodes/ha-929592-m03
	I0914 17:08:01.111967   27433 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-929592-m03
	I0914 17:08:01.111972   27433 round_trippers.go:469] Request Headers:
	I0914 17:08:01.111980   27433 round_trippers.go:473]     Accept: application/json, */*
	I0914 17:08:01.111987   27433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 17:08:01.115411   27433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 17:08:01.115930   27433 pod_ready.go:93] pod "kube-apiserver-ha-929592-m03" in "kube-system" namespace has status "Ready":"True"
	I0914 17:08:01.115948   27433 pod_ready.go:82] duration metric: took 399.728067ms for pod "kube-apiserver-ha-929592-m03" in "kube-system" namespace to be "Ready" ...
	I0914 17:08:01.115959   27433 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-929592" in "kube-system" namespace to be "Ready" ...
	I0914 17:08:01.312018   27433 request.go:632] Waited for 196.000899ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.54:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-929592
	I0914 17:08:01.312096   27433 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-929592
	I0914 17:08:01.312102   27433 round_trippers.go:469] Request Headers:
	I0914 17:08:01.312109   27433 round_trippers.go:473]     Accept: application/json, */*
	I0914 17:08:01.312118   27433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 17:08:01.315329   27433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 17:08:01.512459   27433 request.go:632] Waited for 196.354283ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.54:8443/api/v1/nodes/ha-929592
	I0914 17:08:01.512516   27433 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-929592
	I0914 17:08:01.512523   27433 round_trippers.go:469] Request Headers:
	I0914 17:08:01.512540   27433 round_trippers.go:473]     Accept: application/json, */*
	I0914 17:08:01.512551   27433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 17:08:01.515821   27433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 17:08:01.516343   27433 pod_ready.go:93] pod "kube-controller-manager-ha-929592" in "kube-system" namespace has status "Ready":"True"
	I0914 17:08:01.516360   27433 pod_ready.go:82] duration metric: took 400.394788ms for pod "kube-controller-manager-ha-929592" in "kube-system" namespace to be "Ready" ...
	I0914 17:08:01.516369   27433 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-929592-m02" in "kube-system" namespace to be "Ready" ...
	I0914 17:08:01.712409   27433 request.go:632] Waited for 195.9831ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.54:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-929592-m02
	I0914 17:08:01.712468   27433 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-929592-m02
	I0914 17:08:01.712473   27433 round_trippers.go:469] Request Headers:
	I0914 17:08:01.712480   27433 round_trippers.go:473]     Accept: application/json, */*
	I0914 17:08:01.712494   27433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 17:08:01.715865   27433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 17:08:01.911801   27433 request.go:632] Waited for 195.22504ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.54:8443/api/v1/nodes/ha-929592-m02
	I0914 17:08:01.911855   27433 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-929592-m02
	I0914 17:08:01.911860   27433 round_trippers.go:469] Request Headers:
	I0914 17:08:01.911868   27433 round_trippers.go:473]     Accept: application/json, */*
	I0914 17:08:01.911872   27433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 17:08:01.914916   27433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 17:08:01.915735   27433 pod_ready.go:93] pod "kube-controller-manager-ha-929592-m02" in "kube-system" namespace has status "Ready":"True"
	I0914 17:08:01.915756   27433 pod_ready.go:82] duration metric: took 399.381165ms for pod "kube-controller-manager-ha-929592-m02" in "kube-system" namespace to be "Ready" ...
	I0914 17:08:01.915766   27433 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-929592-m03" in "kube-system" namespace to be "Ready" ...
	I0914 17:08:02.111729   27433 request.go:632] Waited for 195.905392ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.54:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-929592-m03
	I0914 17:08:02.111808   27433 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-929592-m03
	I0914 17:08:02.111813   27433 round_trippers.go:469] Request Headers:
	I0914 17:08:02.111820   27433 round_trippers.go:473]     Accept: application/json, */*
	I0914 17:08:02.111825   27433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 17:08:02.115762   27433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 17:08:02.311712   27433 request.go:632] Waited for 195.305414ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.54:8443/api/v1/nodes/ha-929592-m03
	I0914 17:08:02.311765   27433 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-929592-m03
	I0914 17:08:02.311771   27433 round_trippers.go:469] Request Headers:
	I0914 17:08:02.311778   27433 round_trippers.go:473]     Accept: application/json, */*
	I0914 17:08:02.311782   27433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 17:08:02.315533   27433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 17:08:02.316362   27433 pod_ready.go:93] pod "kube-controller-manager-ha-929592-m03" in "kube-system" namespace has status "Ready":"True"
	I0914 17:08:02.316379   27433 pod_ready.go:82] duration metric: took 400.606521ms for pod "kube-controller-manager-ha-929592-m03" in "kube-system" namespace to be "Ready" ...
	I0914 17:08:02.316388   27433 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-59tn8" in "kube-system" namespace to be "Ready" ...
	I0914 17:08:02.512364   27433 request.go:632] Waited for 195.91592ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.54:8443/api/v1/namespaces/kube-system/pods/kube-proxy-59tn8
	I0914 17:08:02.512416   27433 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/namespaces/kube-system/pods/kube-proxy-59tn8
	I0914 17:08:02.512421   27433 round_trippers.go:469] Request Headers:
	I0914 17:08:02.512432   27433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 17:08:02.512435   27433 round_trippers.go:473]     Accept: application/json, */*
	I0914 17:08:02.515841   27433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 17:08:02.712317   27433 request.go:632] Waited for 195.69444ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.54:8443/api/v1/nodes/ha-929592-m03
	I0914 17:08:02.712371   27433 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-929592-m03
	I0914 17:08:02.712376   27433 round_trippers.go:469] Request Headers:
	I0914 17:08:02.712387   27433 round_trippers.go:473]     Accept: application/json, */*
	I0914 17:08:02.712391   27433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 17:08:02.715600   27433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 17:08:02.716105   27433 pod_ready.go:93] pod "kube-proxy-59tn8" in "kube-system" namespace has status "Ready":"True"
	I0914 17:08:02.716120   27433 pod_ready.go:82] duration metric: took 399.72639ms for pod "kube-proxy-59tn8" in "kube-system" namespace to be "Ready" ...
	I0914 17:08:02.716129   27433 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-6zqmd" in "kube-system" namespace to be "Ready" ...
	I0914 17:08:02.912231   27433 request.go:632] Waited for 196.029636ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.54:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6zqmd
	I0914 17:08:02.912304   27433 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6zqmd
	I0914 17:08:02.912312   27433 round_trippers.go:469] Request Headers:
	I0914 17:08:02.912331   27433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 17:08:02.912340   27433 round_trippers.go:473]     Accept: application/json, */*
	I0914 17:08:02.915878   27433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 17:08:03.111910   27433 request.go:632] Waited for 195.368033ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.54:8443/api/v1/nodes/ha-929592
	I0914 17:08:03.111964   27433 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-929592
	I0914 17:08:03.111970   27433 round_trippers.go:469] Request Headers:
	I0914 17:08:03.111980   27433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 17:08:03.111986   27433 round_trippers.go:473]     Accept: application/json, */*
	I0914 17:08:03.115005   27433 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 17:08:03.115607   27433 pod_ready.go:93] pod "kube-proxy-6zqmd" in "kube-system" namespace has status "Ready":"True"
	I0914 17:08:03.115625   27433 pod_ready.go:82] duration metric: took 399.488925ms for pod "kube-proxy-6zqmd" in "kube-system" namespace to be "Ready" ...
	I0914 17:08:03.115638   27433 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-bcfkb" in "kube-system" namespace to be "Ready" ...
	I0914 17:08:03.311737   27433 request.go:632] Waited for 196.030438ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.54:8443/api/v1/namespaces/kube-system/pods/kube-proxy-bcfkb
	I0914 17:08:03.311790   27433 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/namespaces/kube-system/pods/kube-proxy-bcfkb
	I0914 17:08:03.311805   27433 round_trippers.go:469] Request Headers:
	I0914 17:08:03.311829   27433 round_trippers.go:473]     Accept: application/json, */*
	I0914 17:08:03.311838   27433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 17:08:03.315138   27433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 17:08:03.512204   27433 request.go:632] Waited for 196.423291ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.54:8443/api/v1/nodes/ha-929592-m02
	I0914 17:08:03.512312   27433 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-929592-m02
	I0914 17:08:03.512324   27433 round_trippers.go:469] Request Headers:
	I0914 17:08:03.512334   27433 round_trippers.go:473]     Accept: application/json, */*
	I0914 17:08:03.512342   27433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 17:08:03.515939   27433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 17:08:03.516428   27433 pod_ready.go:93] pod "kube-proxy-bcfkb" in "kube-system" namespace has status "Ready":"True"
	I0914 17:08:03.516445   27433 pod_ready.go:82] duration metric: took 400.79981ms for pod "kube-proxy-bcfkb" in "kube-system" namespace to be "Ready" ...
	I0914 17:08:03.516453   27433 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-929592" in "kube-system" namespace to be "Ready" ...
	I0914 17:08:03.712532   27433 request.go:632] Waited for 196.016889ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.54:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-929592
	I0914 17:08:03.712629   27433 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-929592
	I0914 17:08:03.712640   27433 round_trippers.go:469] Request Headers:
	I0914 17:08:03.712658   27433 round_trippers.go:473]     Accept: application/json, */*
	I0914 17:08:03.712681   27433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 17:08:03.715857   27433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 17:08:03.911726   27433 request.go:632] Waited for 195.299661ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.54:8443/api/v1/nodes/ha-929592
	I0914 17:08:03.911809   27433 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-929592
	I0914 17:08:03.911815   27433 round_trippers.go:469] Request Headers:
	I0914 17:08:03.911823   27433 round_trippers.go:473]     Accept: application/json, */*
	I0914 17:08:03.911826   27433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 17:08:03.915494   27433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 17:08:03.916421   27433 pod_ready.go:93] pod "kube-scheduler-ha-929592" in "kube-system" namespace has status "Ready":"True"
	I0914 17:08:03.916442   27433 pod_ready.go:82] duration metric: took 399.98128ms for pod "kube-scheduler-ha-929592" in "kube-system" namespace to be "Ready" ...
	I0914 17:08:03.916454   27433 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-929592-m02" in "kube-system" namespace to be "Ready" ...
	I0914 17:08:04.112507   27433 request.go:632] Waited for 195.977843ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.54:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-929592-m02
	I0914 17:08:04.112577   27433 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-929592-m02
	I0914 17:08:04.112583   27433 round_trippers.go:469] Request Headers:
	I0914 17:08:04.112591   27433 round_trippers.go:473]     Accept: application/json, */*
	I0914 17:08:04.112595   27433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 17:08:04.116079   27433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 17:08:04.311999   27433 request.go:632] Waited for 195.359722ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.54:8443/api/v1/nodes/ha-929592-m02
	I0914 17:08:04.312069   27433 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-929592-m02
	I0914 17:08:04.312075   27433 round_trippers.go:469] Request Headers:
	I0914 17:08:04.312084   27433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 17:08:04.312092   27433 round_trippers.go:473]     Accept: application/json, */*
	I0914 17:08:04.315519   27433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 17:08:04.316009   27433 pod_ready.go:93] pod "kube-scheduler-ha-929592-m02" in "kube-system" namespace has status "Ready":"True"
	I0914 17:08:04.316029   27433 pod_ready.go:82] duration metric: took 399.567246ms for pod "kube-scheduler-ha-929592-m02" in "kube-system" namespace to be "Ready" ...
	I0914 17:08:04.316039   27433 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-929592-m03" in "kube-system" namespace to be "Ready" ...
	I0914 17:08:04.512295   27433 request.go:632] Waited for 196.193669ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.54:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-929592-m03
	I0914 17:08:04.512364   27433 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-929592-m03
	I0914 17:08:04.512370   27433 round_trippers.go:469] Request Headers:
	I0914 17:08:04.512378   27433 round_trippers.go:473]     Accept: application/json, */*
	I0914 17:08:04.512382   27433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 17:08:04.515471   27433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 17:08:04.712510   27433 request.go:632] Waited for 196.379641ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.54:8443/api/v1/nodes/ha-929592-m03
	I0914 17:08:04.712573   27433 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes/ha-929592-m03
	I0914 17:08:04.712578   27433 round_trippers.go:469] Request Headers:
	I0914 17:08:04.712586   27433 round_trippers.go:473]     Accept: application/json, */*
	I0914 17:08:04.712590   27433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 17:08:04.715934   27433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 17:08:04.716488   27433 pod_ready.go:93] pod "kube-scheduler-ha-929592-m03" in "kube-system" namespace has status "Ready":"True"
	I0914 17:08:04.716509   27433 pod_ready.go:82] duration metric: took 400.462713ms for pod "kube-scheduler-ha-929592-m03" in "kube-system" namespace to be "Ready" ...
	I0914 17:08:04.716525   27433 pod_ready.go:39] duration metric: took 5.201150381s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
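	(The pod_ready wait logged above cycles through each system-critical pod and checks its Ready condition before moving on. A minimal client-go sketch of that idea follows; it is illustrative only, not minikube's actual helper, and it assumes a kubeconfig at the default path. The pod name is taken from the log purely as an example.)

	// isPodReady reports whether a pod's PodReady condition is True,
	// mirroring the per-pod check the pod_ready.go lines above describe.
	// Sketch only; not minikube's implementation.
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func isPodReady(p *corev1.Pod) bool {
		for _, c := range p.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		// Poll one pod until Ready or timeout, as the log does for each pod in turn.
		deadline := time.Now().Add(6 * time.Minute)
		for time.Now().Before(deadline) {
			pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "kube-scheduler-ha-929592", metav1.GetOptions{})
			if err == nil && isPodReady(pod) {
				fmt.Println("pod is Ready")
				return
			}
			time.Sleep(2 * time.Second)
		}
		fmt.Println("timed out waiting for pod to be Ready")
	}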
	I0914 17:08:04.716544   27433 api_server.go:52] waiting for apiserver process to appear ...
	I0914 17:08:04.716616   27433 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 17:08:04.733284   27433 api_server.go:72] duration metric: took 22.437250379s to wait for apiserver process to appear ...
	I0914 17:08:04.733311   27433 api_server.go:88] waiting for apiserver healthz status ...
	I0914 17:08:04.733349   27433 api_server.go:253] Checking apiserver healthz at https://192.168.39.54:8443/healthz ...
	I0914 17:08:04.738026   27433 api_server.go:279] https://192.168.39.54:8443/healthz returned 200:
	ok
	I0914 17:08:04.738103   27433 round_trippers.go:463] GET https://192.168.39.54:8443/version
	I0914 17:08:04.738113   27433 round_trippers.go:469] Request Headers:
	I0914 17:08:04.738124   27433 round_trippers.go:473]     Accept: application/json, */*
	I0914 17:08:04.738134   27433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 17:08:04.739076   27433 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0914 17:08:04.739139   27433 api_server.go:141] control plane version: v1.31.1
	I0914 17:08:04.739154   27433 api_server.go:131] duration metric: took 5.836544ms to wait for apiserver health ...
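	(The healthz wait above only expects a 200 response with body "ok" from the apiserver's /healthz endpoint. An equivalent probe with client-go's discovery REST client is sketched below; it assumes a kubeconfig pointing at this cluster and is not the exact code path minikube uses.)

	// Probe the apiserver /healthz endpoint, as in the api_server.go lines above.
	// Sketch only; minikube's own health check differs in detail.
	package main

	import (
		"context"
		"fmt"

		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		body, err := cs.Discovery().RESTClient().Get().AbsPath("/healthz").DoRaw(context.TODO())
		if err != nil {
			panic(err)
		}
		fmt.Printf("healthz: %s\n", body) // a healthy apiserver returns "ok"
	}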
	I0914 17:08:04.739161   27433 system_pods.go:43] waiting for kube-system pods to appear ...
	I0914 17:08:04.911477   27433 request.go:632] Waited for 172.249655ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.54:8443/api/v1/namespaces/kube-system/pods
	I0914 17:08:04.911556   27433 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/namespaces/kube-system/pods
	I0914 17:08:04.911563   27433 round_trippers.go:469] Request Headers:
	I0914 17:08:04.911571   27433 round_trippers.go:473]     Accept: application/json, */*
	I0914 17:08:04.911578   27433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 17:08:04.924316   27433 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0914 17:08:04.931607   27433 system_pods.go:59] 24 kube-system pods found
	I0914 17:08:04.931637   27433 system_pods.go:61] "coredns-7c65d6cfc9-66txm" [abf3ed52-ab5a-4415-a8a9-78e567d60348] Running
	I0914 17:08:04.931643   27433 system_pods.go:61] "coredns-7c65d6cfc9-dpdz4" [2a751c8d-890c-402e-846f-8f61e3fd1965] Running
	I0914 17:08:04.931648   27433 system_pods.go:61] "etcd-ha-929592" [44b8df66-0b5f-4b5b-a901-92161d29df28] Running
	I0914 17:08:04.931651   27433 system_pods.go:61] "etcd-ha-929592-m02" [fe6343ec-40b1-4808-8902-041b935081bf] Running
	I0914 17:08:04.931654   27433 system_pods.go:61] "etcd-ha-929592-m03" [2542afd7-8c6a-4c02-aa3e-915d68aae931] Running
	I0914 17:08:04.931657   27433 system_pods.go:61] "kindnet-fw757" [51a38d95-fd50-4c05-a75d-a3dfeae127bd] Running
	I0914 17:08:04.931660   27433 system_pods.go:61] "kindnet-j7mjh" [8d1280e5-c9aa-4625-9dfc-14da09ba4849] Running
	I0914 17:08:04.931663   27433 system_pods.go:61] "kindnet-tnjsl" [ec9f109d-14b3-4e4d-9530-4ae493984cc5] Running
	I0914 17:08:04.931666   27433 system_pods.go:61] "kube-apiserver-ha-929592" [fe3e7895-32dc-4542-879c-9bb609604c69] Running
	I0914 17:08:04.931669   27433 system_pods.go:61] "kube-apiserver-ha-929592-m02" [4544a586-c111-4461-8f25-a3843da19bfb] Running
	I0914 17:08:04.931672   27433 system_pods.go:61] "kube-apiserver-ha-929592-m03" [07b3480d-6b12-42c7-a18f-587f6b55ec3d] Running
	I0914 17:08:04.931676   27433 system_pods.go:61] "kube-controller-manager-ha-929592" [12a2c768-5d90-4036-aff7-d80da243c602] Running
	I0914 17:08:04.931679   27433 system_pods.go:61] "kube-controller-manager-ha-929592-m02" [bb5d3040-c09e-4eb6-94a3-4bdb34e4e658] Running
	I0914 17:08:04.931682   27433 system_pods.go:61] "kube-controller-manager-ha-929592-m03" [e0390d32-83b3-473c-a451-ea8d75b17d27] Running
	I0914 17:08:04.931685   27433 system_pods.go:61] "kube-proxy-59tn8" [fcc0929a-58ed-4bd8-9e93-b14e6d49eeef] Running
	I0914 17:08:04.931687   27433 system_pods.go:61] "kube-proxy-6zqmd" [b7beddc8-ce6a-44ed-b3e8-423baf620bbb] Running
	I0914 17:08:04.931691   27433 system_pods.go:61] "kube-proxy-bcfkb" [f2ed6784-8935-4b20-9321-650ffb8dacda] Running
	I0914 17:08:04.931693   27433 system_pods.go:61] "kube-scheduler-ha-929592" [02b347db-39cc-49d5-a736-05957f446708] Running
	I0914 17:08:04.931696   27433 system_pods.go:61] "kube-scheduler-ha-929592-m02" [a5dde5dc-208f-47c3-903f-ce811cb58f56] Running
	I0914 17:08:04.931699   27433 system_pods.go:61] "kube-scheduler-ha-929592-m03" [a27d6148-c5d7-487e-bf9d-4625d432957b] Running
	I0914 17:08:04.931702   27433 system_pods.go:61] "kube-vip-ha-929592" [8bec83fe-1516-467a-9575-3c55dbcbda23] Running
	I0914 17:08:04.931706   27433 system_pods.go:61] "kube-vip-ha-929592-m02" [852625cb-9e2b-4a4f-9471-80d275a6697b] Running
	I0914 17:08:04.931709   27433 system_pods.go:61] "kube-vip-ha-929592-m03" [9a6742f3-75d2-4630-bf31-fabb4040c533] Running
	I0914 17:08:04.931712   27433 system_pods.go:61] "storage-provisioner" [4f486484-9641-4e23-8bc9-4dcae57b621a] Running
	I0914 17:08:04.931718   27433 system_pods.go:74] duration metric: took 192.548327ms to wait for pod list to return data ...
	I0914 17:08:04.931729   27433 default_sa.go:34] waiting for default service account to be created ...
	I0914 17:08:05.112535   27433 request.go:632] Waited for 180.737287ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.54:8443/api/v1/namespaces/default/serviceaccounts
	I0914 17:08:05.112589   27433 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/namespaces/default/serviceaccounts
	I0914 17:08:05.112594   27433 round_trippers.go:469] Request Headers:
	I0914 17:08:05.112606   27433 round_trippers.go:473]     Accept: application/json, */*
	I0914 17:08:05.112610   27433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 17:08:05.116810   27433 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0914 17:08:05.116919   27433 default_sa.go:45] found service account: "default"
	I0914 17:08:05.116932   27433 default_sa.go:55] duration metric: took 185.197585ms for default service account to be created ...
	I0914 17:08:05.116940   27433 system_pods.go:116] waiting for k8s-apps to be running ...
	I0914 17:08:05.311806   27433 request.go:632] Waited for 194.786419ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.54:8443/api/v1/namespaces/kube-system/pods
	I0914 17:08:05.311878   27433 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/namespaces/kube-system/pods
	I0914 17:08:05.311886   27433 round_trippers.go:469] Request Headers:
	I0914 17:08:05.311899   27433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 17:08:05.311906   27433 round_trippers.go:473]     Accept: application/json, */*
	I0914 17:08:05.317165   27433 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0914 17:08:05.323927   27433 system_pods.go:86] 24 kube-system pods found
	I0914 17:08:05.323957   27433 system_pods.go:89] "coredns-7c65d6cfc9-66txm" [abf3ed52-ab5a-4415-a8a9-78e567d60348] Running
	I0914 17:08:05.323963   27433 system_pods.go:89] "coredns-7c65d6cfc9-dpdz4" [2a751c8d-890c-402e-846f-8f61e3fd1965] Running
	I0914 17:08:05.323967   27433 system_pods.go:89] "etcd-ha-929592" [44b8df66-0b5f-4b5b-a901-92161d29df28] Running
	I0914 17:08:05.323971   27433 system_pods.go:89] "etcd-ha-929592-m02" [fe6343ec-40b1-4808-8902-041b935081bf] Running
	I0914 17:08:05.323974   27433 system_pods.go:89] "etcd-ha-929592-m03" [2542afd7-8c6a-4c02-aa3e-915d68aae931] Running
	I0914 17:08:05.323979   27433 system_pods.go:89] "kindnet-fw757" [51a38d95-fd50-4c05-a75d-a3dfeae127bd] Running
	I0914 17:08:05.323983   27433 system_pods.go:89] "kindnet-j7mjh" [8d1280e5-c9aa-4625-9dfc-14da09ba4849] Running
	I0914 17:08:05.323986   27433 system_pods.go:89] "kindnet-tnjsl" [ec9f109d-14b3-4e4d-9530-4ae493984cc5] Running
	I0914 17:08:05.323990   27433 system_pods.go:89] "kube-apiserver-ha-929592" [fe3e7895-32dc-4542-879c-9bb609604c69] Running
	I0914 17:08:05.323994   27433 system_pods.go:89] "kube-apiserver-ha-929592-m02" [4544a586-c111-4461-8f25-a3843da19bfb] Running
	I0914 17:08:05.323997   27433 system_pods.go:89] "kube-apiserver-ha-929592-m03" [07b3480d-6b12-42c7-a18f-587f6b55ec3d] Running
	I0914 17:08:05.324001   27433 system_pods.go:89] "kube-controller-manager-ha-929592" [12a2c768-5d90-4036-aff7-d80da243c602] Running
	I0914 17:08:05.324008   27433 system_pods.go:89] "kube-controller-manager-ha-929592-m02" [bb5d3040-c09e-4eb6-94a3-4bdb34e4e658] Running
	I0914 17:08:05.324011   27433 system_pods.go:89] "kube-controller-manager-ha-929592-m03" [e0390d32-83b3-473c-a451-ea8d75b17d27] Running
	I0914 17:08:05.324014   27433 system_pods.go:89] "kube-proxy-59tn8" [fcc0929a-58ed-4bd8-9e93-b14e6d49eeef] Running
	I0914 17:08:05.324018   27433 system_pods.go:89] "kube-proxy-6zqmd" [b7beddc8-ce6a-44ed-b3e8-423baf620bbb] Running
	I0914 17:08:05.324021   27433 system_pods.go:89] "kube-proxy-bcfkb" [f2ed6784-8935-4b20-9321-650ffb8dacda] Running
	I0914 17:08:05.324027   27433 system_pods.go:89] "kube-scheduler-ha-929592" [02b347db-39cc-49d5-a736-05957f446708] Running
	I0914 17:08:05.324030   27433 system_pods.go:89] "kube-scheduler-ha-929592-m02" [a5dde5dc-208f-47c3-903f-ce811cb58f56] Running
	I0914 17:08:05.324036   27433 system_pods.go:89] "kube-scheduler-ha-929592-m03" [a27d6148-c5d7-487e-bf9d-4625d432957b] Running
	I0914 17:08:05.324039   27433 system_pods.go:89] "kube-vip-ha-929592" [8bec83fe-1516-467a-9575-3c55dbcbda23] Running
	I0914 17:08:05.324044   27433 system_pods.go:89] "kube-vip-ha-929592-m02" [852625cb-9e2b-4a4f-9471-80d275a6697b] Running
	I0914 17:08:05.324048   27433 system_pods.go:89] "kube-vip-ha-929592-m03" [9a6742f3-75d2-4630-bf31-fabb4040c533] Running
	I0914 17:08:05.324054   27433 system_pods.go:89] "storage-provisioner" [4f486484-9641-4e23-8bc9-4dcae57b621a] Running
	I0914 17:08:05.324061   27433 system_pods.go:126] duration metric: took 207.11334ms to wait for k8s-apps to be running ...
	I0914 17:08:05.324070   27433 system_svc.go:44] waiting for kubelet service to be running ....
	I0914 17:08:05.324112   27433 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 17:08:05.339239   27433 system_svc.go:56] duration metric: took 15.157926ms WaitForService to wait for kubelet
	I0914 17:08:05.339272   27433 kubeadm.go:582] duration metric: took 23.0432452s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0914 17:08:05.339289   27433 node_conditions.go:102] verifying NodePressure condition ...
	I0914 17:08:05.511638   27433 request.go:632] Waited for 172.263852ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.54:8443/api/v1/nodes
	I0914 17:08:05.511691   27433 round_trippers.go:463] GET https://192.168.39.54:8443/api/v1/nodes
	I0914 17:08:05.511696   27433 round_trippers.go:469] Request Headers:
	I0914 17:08:05.511704   27433 round_trippers.go:473]     Accept: application/json, */*
	I0914 17:08:05.511707   27433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 17:08:05.515995   27433 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0914 17:08:05.517005   27433 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0914 17:08:05.517028   27433 node_conditions.go:123] node cpu capacity is 2
	I0914 17:08:05.517037   27433 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0914 17:08:05.517041   27433 node_conditions.go:123] node cpu capacity is 2
	I0914 17:08:05.517045   27433 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0914 17:08:05.517048   27433 node_conditions.go:123] node cpu capacity is 2
	I0914 17:08:05.517052   27433 node_conditions.go:105] duration metric: took 177.759414ms to run NodePressure ...
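	(The node_conditions check above reads each node's capacity and reports the ephemeral-storage and cpu figures seen in the log. A short client-go sketch of the same lookup follows; it is an illustration under the same kubeconfig assumption as the snippets above, not minikube's code.)

	// List nodes and print the capacity fields the node_conditions.go lines report.
	// Sketch only.
	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		for _, n := range nodes.Items {
			storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
			cpu := n.Status.Capacity[corev1.ResourceCPU]
			fmt.Printf("%s: ephemeral-storage=%s cpu=%s\n", n.Name, storage.String(), cpu.String())
		}
	}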
	I0914 17:08:05.517064   27433 start.go:241] waiting for startup goroutines ...
	I0914 17:08:05.517085   27433 start.go:255] writing updated cluster config ...
	I0914 17:08:05.517375   27433 ssh_runner.go:195] Run: rm -f paused
	I0914 17:08:05.568912   27433 start.go:600] kubectl: 1.31.0, cluster: 1.31.1 (minor skew: 0)
	I0914 17:08:05.572001   27433 out.go:177] * Done! kubectl is now configured to use "ha-929592" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Sep 14 17:13:06 ha-929592 crio[661]: time="2024-09-14 17:13:06.861154559Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726333986861134073,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2944ca17-f99b-44df-a66b-106f77bbbdf3 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 14 17:13:06 ha-929592 crio[661]: time="2024-09-14 17:13:06.861704705Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7bb14932-d408-44a9-b58f-e515a2ae6326 name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 17:13:06 ha-929592 crio[661]: time="2024-09-14 17:13:06.861759092Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7bb14932-d408-44a9-b58f-e515a2ae6326 name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 17:13:06 ha-929592 crio[661]: time="2024-09-14 17:13:06.861994630Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:34c6ad67896f3d9a7c0b0b60a5733acdf8c55ff8eae5d674e973f29c8f92f81a,PodSandboxId:e605a9e0100e5d708dfcb136f97211271e8f47299b54252e343df013be857601,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726333690210089103,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-49mwg,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9f3ed79c-66ac-429d-bbd6-4956eab3be98,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b0fcec21062afa8fcdfb822dced5eca45ebd403ba221182e4abdd623f53635ca,PodSandboxId:a615bca1c01216b9cf3d06e083d8c0ceae410e28322104032143f15a7a94115c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726333546868880012,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f486484-9641-4e23-8bc9-4dcae57b621a,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9eb824a3acd106dd672eb9b88186825642d229cad673bfd46e35ff45d82c0e17,PodSandboxId:69d86428b72f0c107b00bf1918d30f35a1c7612dbe8c1cc199ffcf24b11fb6b5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726333546846633325,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-dpdz4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2a751c8d-890c-402e-846f-8f61e3fd1965,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:06ffbf30c8c13ffdba9a832583a4629b5a821662986c777e9cca57100ed3fd9f,PodSandboxId:9b615a9a43e5968869cd39a175e3298d58f96b3d25f4c6e821a1c05d0e960e2f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726333546840035162,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-66txm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: abf3ed52-ab
5a-4415-a8a9-78e567d60348,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd34a54170b251f984dd373097036615e33493d9c7c91970296beedc2d507931,PodSandboxId:fc9e9c48c04bef982ef404b9c77cb73c2aecd644824a80cd95ac86a749ae2de2,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:17263335
35088260594,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-fw757,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51a38d95-fd50-4c05-a75d-a3dfeae127bd,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c1571fb1d1d1f39fe29d5063d77b72dcce6a459a4bf16a25bbc24fd1a9c53849,PodSandboxId:de29821ef5ba3c9a10a8426ece9a602feabbc039c6332faaf6599d1ac2fd5ecd,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726333534777560737,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6zqmd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b7beddc8-ce6a-44ed-b3e8-423baf620bbb,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b409821346de2b42e8ebbff82396df9fc0d7ac3db8b76d586c5c80922f9c0b8,PodSandboxId:383d700a7d746f2e9f7ceb35686a4630128c8524969a84641cd1c16713902f43,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1726333526970281307,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-929592,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e150edac5dabfa6dae6d65966a1e0a7,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:972f797d7355465fa2cd15940a0b34684d83e7ad453b631aa9e06639881c09fb,PodSandboxId:dbb138fdd1472faa7f5001e471dfabc3b867c54c89ceeca12f2aa61eff79f487,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726333523910232398,Labels:map[string]string{io.kubernetes.container.name: kub
e-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-929592,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 95065ad67a4f1610671e72fcaed57954,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac425bd016fb1dd04caf8514eda430f13287990b5a6398599221210bb254390a,PodSandboxId:282b521b3dea81924246e09b66f30e12327d4a1bccd61d21819ea1716b126a09,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726333523925784581,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-
ha-929592,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d7c84dd075d4f7e4fd5febc189940f4e,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ab1e607cdf424b9d9eb961bb3bd75bc16cb8b8f30e2c1fb579f52deb60857d00,PodSandboxId:2cb1c0532ae95d9a90ad1f8b984fb95a8bdda3b4bb844295f285009d3d4636b6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726333523808944878,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-929592,io.kubernet
es.pod.namespace: kube-system,io.kubernetes.pod.uid: a3520d0a4b75398d9e9e72bfdcfc4f4f,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:363e6bc276fd6311c477eb4e17cd12efc2e0822fb67680e1cef01b26c295126c,PodSandboxId:a5a14538e219ebcd5abb61a37ffc184fe8f53c4b08117618bfa5e2ec8c0d75a5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726333523800169396,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-929592,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 21e24f7df5d7099b0f0b2dba49446d51,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7bb14932-d408-44a9-b58f-e515a2ae6326 name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 17:13:06 ha-929592 crio[661]: time="2024-09-14 17:13:06.898485040Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=dc4064a6-20b4-4835-97e0-f9728f3ea25a name=/runtime.v1.RuntimeService/Version
	Sep 14 17:13:06 ha-929592 crio[661]: time="2024-09-14 17:13:06.898559514Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=dc4064a6-20b4-4835-97e0-f9728f3ea25a name=/runtime.v1.RuntimeService/Version
	Sep 14 17:13:06 ha-929592 crio[661]: time="2024-09-14 17:13:06.899808900Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ebbc52eb-c05f-494f-be04-44a0057e4e8e name=/runtime.v1.ImageService/ImageFsInfo
	Sep 14 17:13:06 ha-929592 crio[661]: time="2024-09-14 17:13:06.900215193Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726333986900194312,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ebbc52eb-c05f-494f-be04-44a0057e4e8e name=/runtime.v1.ImageService/ImageFsInfo
	Sep 14 17:13:06 ha-929592 crio[661]: time="2024-09-14 17:13:06.900750405Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=05351d80-9d18-4e0e-ba2b-536f2f7433ce name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 17:13:06 ha-929592 crio[661]: time="2024-09-14 17:13:06.900798751Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=05351d80-9d18-4e0e-ba2b-536f2f7433ce name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 17:13:06 ha-929592 crio[661]: time="2024-09-14 17:13:06.901044597Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:34c6ad67896f3d9a7c0b0b60a5733acdf8c55ff8eae5d674e973f29c8f92f81a,PodSandboxId:e605a9e0100e5d708dfcb136f97211271e8f47299b54252e343df013be857601,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726333690210089103,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-49mwg,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9f3ed79c-66ac-429d-bbd6-4956eab3be98,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b0fcec21062afa8fcdfb822dced5eca45ebd403ba221182e4abdd623f53635ca,PodSandboxId:a615bca1c01216b9cf3d06e083d8c0ceae410e28322104032143f15a7a94115c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726333546868880012,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f486484-9641-4e23-8bc9-4dcae57b621a,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9eb824a3acd106dd672eb9b88186825642d229cad673bfd46e35ff45d82c0e17,PodSandboxId:69d86428b72f0c107b00bf1918d30f35a1c7612dbe8c1cc199ffcf24b11fb6b5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726333546846633325,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-dpdz4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2a751c8d-890c-402e-846f-8f61e3fd1965,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:06ffbf30c8c13ffdba9a832583a4629b5a821662986c777e9cca57100ed3fd9f,PodSandboxId:9b615a9a43e5968869cd39a175e3298d58f96b3d25f4c6e821a1c05d0e960e2f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726333546840035162,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-66txm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: abf3ed52-ab
5a-4415-a8a9-78e567d60348,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd34a54170b251f984dd373097036615e33493d9c7c91970296beedc2d507931,PodSandboxId:fc9e9c48c04bef982ef404b9c77cb73c2aecd644824a80cd95ac86a749ae2de2,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:17263335
35088260594,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-fw757,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51a38d95-fd50-4c05-a75d-a3dfeae127bd,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c1571fb1d1d1f39fe29d5063d77b72dcce6a459a4bf16a25bbc24fd1a9c53849,PodSandboxId:de29821ef5ba3c9a10a8426ece9a602feabbc039c6332faaf6599d1ac2fd5ecd,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726333534777560737,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6zqmd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b7beddc8-ce6a-44ed-b3e8-423baf620bbb,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b409821346de2b42e8ebbff82396df9fc0d7ac3db8b76d586c5c80922f9c0b8,PodSandboxId:383d700a7d746f2e9f7ceb35686a4630128c8524969a84641cd1c16713902f43,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1726333526970281307,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-929592,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e150edac5dabfa6dae6d65966a1e0a7,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:972f797d7355465fa2cd15940a0b34684d83e7ad453b631aa9e06639881c09fb,PodSandboxId:dbb138fdd1472faa7f5001e471dfabc3b867c54c89ceeca12f2aa61eff79f487,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726333523910232398,Labels:map[string]string{io.kubernetes.container.name: kub
e-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-929592,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 95065ad67a4f1610671e72fcaed57954,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac425bd016fb1dd04caf8514eda430f13287990b5a6398599221210bb254390a,PodSandboxId:282b521b3dea81924246e09b66f30e12327d4a1bccd61d21819ea1716b126a09,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726333523925784581,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-
ha-929592,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d7c84dd075d4f7e4fd5febc189940f4e,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ab1e607cdf424b9d9eb961bb3bd75bc16cb8b8f30e2c1fb579f52deb60857d00,PodSandboxId:2cb1c0532ae95d9a90ad1f8b984fb95a8bdda3b4bb844295f285009d3d4636b6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726333523808944878,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-929592,io.kubernet
es.pod.namespace: kube-system,io.kubernetes.pod.uid: a3520d0a4b75398d9e9e72bfdcfc4f4f,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:363e6bc276fd6311c477eb4e17cd12efc2e0822fb67680e1cef01b26c295126c,PodSandboxId:a5a14538e219ebcd5abb61a37ffc184fe8f53c4b08117618bfa5e2ec8c0d75a5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726333523800169396,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-929592,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 21e24f7df5d7099b0f0b2dba49446d51,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=05351d80-9d18-4e0e-ba2b-536f2f7433ce name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 17:13:06 ha-929592 crio[661]: time="2024-09-14 17:13:06.940531845Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e6233c0e-4bbe-454b-b04e-69070406fc77 name=/runtime.v1.RuntimeService/Version
	Sep 14 17:13:06 ha-929592 crio[661]: time="2024-09-14 17:13:06.940662539Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e6233c0e-4bbe-454b-b04e-69070406fc77 name=/runtime.v1.RuntimeService/Version
	Sep 14 17:13:06 ha-929592 crio[661]: time="2024-09-14 17:13:06.942421403Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=550aad96-dab8-4a9b-87d1-e51e0a645130 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 14 17:13:06 ha-929592 crio[661]: time="2024-09-14 17:13:06.943090104Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726333986943061139,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=550aad96-dab8-4a9b-87d1-e51e0a645130 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 14 17:13:06 ha-929592 crio[661]: time="2024-09-14 17:13:06.943892945Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f1f125cb-34ff-402e-8405-aa9976e33694 name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 17:13:06 ha-929592 crio[661]: time="2024-09-14 17:13:06.943948573Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f1f125cb-34ff-402e-8405-aa9976e33694 name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 17:13:06 ha-929592 crio[661]: time="2024-09-14 17:13:06.944170678Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:34c6ad67896f3d9a7c0b0b60a5733acdf8c55ff8eae5d674e973f29c8f92f81a,PodSandboxId:e605a9e0100e5d708dfcb136f97211271e8f47299b54252e343df013be857601,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726333690210089103,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-49mwg,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9f3ed79c-66ac-429d-bbd6-4956eab3be98,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b0fcec21062afa8fcdfb822dced5eca45ebd403ba221182e4abdd623f53635ca,PodSandboxId:a615bca1c01216b9cf3d06e083d8c0ceae410e28322104032143f15a7a94115c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726333546868880012,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f486484-9641-4e23-8bc9-4dcae57b621a,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9eb824a3acd106dd672eb9b88186825642d229cad673bfd46e35ff45d82c0e17,PodSandboxId:69d86428b72f0c107b00bf1918d30f35a1c7612dbe8c1cc199ffcf24b11fb6b5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726333546846633325,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-dpdz4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2a751c8d-890c-402e-846f-8f61e3fd1965,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:06ffbf30c8c13ffdba9a832583a4629b5a821662986c777e9cca57100ed3fd9f,PodSandboxId:9b615a9a43e5968869cd39a175e3298d58f96b3d25f4c6e821a1c05d0e960e2f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726333546840035162,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-66txm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: abf3ed52-ab
5a-4415-a8a9-78e567d60348,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd34a54170b251f984dd373097036615e33493d9c7c91970296beedc2d507931,PodSandboxId:fc9e9c48c04bef982ef404b9c77cb73c2aecd644824a80cd95ac86a749ae2de2,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:17263335
35088260594,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-fw757,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51a38d95-fd50-4c05-a75d-a3dfeae127bd,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c1571fb1d1d1f39fe29d5063d77b72dcce6a459a4bf16a25bbc24fd1a9c53849,PodSandboxId:de29821ef5ba3c9a10a8426ece9a602feabbc039c6332faaf6599d1ac2fd5ecd,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726333534777560737,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6zqmd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b7beddc8-ce6a-44ed-b3e8-423baf620bbb,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b409821346de2b42e8ebbff82396df9fc0d7ac3db8b76d586c5c80922f9c0b8,PodSandboxId:383d700a7d746f2e9f7ceb35686a4630128c8524969a84641cd1c16713902f43,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1726333526970281307,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-929592,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e150edac5dabfa6dae6d65966a1e0a7,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:972f797d7355465fa2cd15940a0b34684d83e7ad453b631aa9e06639881c09fb,PodSandboxId:dbb138fdd1472faa7f5001e471dfabc3b867c54c89ceeca12f2aa61eff79f487,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726333523910232398,Labels:map[string]string{io.kubernetes.container.name: kub
e-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-929592,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 95065ad67a4f1610671e72fcaed57954,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac425bd016fb1dd04caf8514eda430f13287990b5a6398599221210bb254390a,PodSandboxId:282b521b3dea81924246e09b66f30e12327d4a1bccd61d21819ea1716b126a09,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726333523925784581,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-
ha-929592,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d7c84dd075d4f7e4fd5febc189940f4e,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ab1e607cdf424b9d9eb961bb3bd75bc16cb8b8f30e2c1fb579f52deb60857d00,PodSandboxId:2cb1c0532ae95d9a90ad1f8b984fb95a8bdda3b4bb844295f285009d3d4636b6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726333523808944878,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-929592,io.kubernet
es.pod.namespace: kube-system,io.kubernetes.pod.uid: a3520d0a4b75398d9e9e72bfdcfc4f4f,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:363e6bc276fd6311c477eb4e17cd12efc2e0822fb67680e1cef01b26c295126c,PodSandboxId:a5a14538e219ebcd5abb61a37ffc184fe8f53c4b08117618bfa5e2ec8c0d75a5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726333523800169396,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-929592,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 21e24f7df5d7099b0f0b2dba49446d51,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f1f125cb-34ff-402e-8405-aa9976e33694 name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 17:13:06 ha-929592 crio[661]: time="2024-09-14 17:13:06.980637579Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=32f5974f-cf39-44e5-8a02-4aea11852dc7 name=/runtime.v1.RuntimeService/Version
	Sep 14 17:13:06 ha-929592 crio[661]: time="2024-09-14 17:13:06.980713888Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=32f5974f-cf39-44e5-8a02-4aea11852dc7 name=/runtime.v1.RuntimeService/Version
	Sep 14 17:13:06 ha-929592 crio[661]: time="2024-09-14 17:13:06.981789076Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=fec2c439-91d3-4fc5-9fa9-cf0f8306778f name=/runtime.v1.ImageService/ImageFsInfo
	Sep 14 17:13:06 ha-929592 crio[661]: time="2024-09-14 17:13:06.982243754Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726333986982219821,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=fec2c439-91d3-4fc5-9fa9-cf0f8306778f name=/runtime.v1.ImageService/ImageFsInfo
	Sep 14 17:13:06 ha-929592 crio[661]: time="2024-09-14 17:13:06.982743974Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3b4b193e-5d4b-49e4-9c87-797429d01888 name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 17:13:06 ha-929592 crio[661]: time="2024-09-14 17:13:06.982813044Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3b4b193e-5d4b-49e4-9c87-797429d01888 name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 17:13:06 ha-929592 crio[661]: time="2024-09-14 17:13:06.983054776Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:34c6ad67896f3d9a7c0b0b60a5733acdf8c55ff8eae5d674e973f29c8f92f81a,PodSandboxId:e605a9e0100e5d708dfcb136f97211271e8f47299b54252e343df013be857601,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726333690210089103,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-49mwg,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9f3ed79c-66ac-429d-bbd6-4956eab3be98,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b0fcec21062afa8fcdfb822dced5eca45ebd403ba221182e4abdd623f53635ca,PodSandboxId:a615bca1c01216b9cf3d06e083d8c0ceae410e28322104032143f15a7a94115c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726333546868880012,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f486484-9641-4e23-8bc9-4dcae57b621a,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9eb824a3acd106dd672eb9b88186825642d229cad673bfd46e35ff45d82c0e17,PodSandboxId:69d86428b72f0c107b00bf1918d30f35a1c7612dbe8c1cc199ffcf24b11fb6b5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726333546846633325,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-dpdz4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2a751c8d-890c-402e-846f-8f61e3fd1965,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:06ffbf30c8c13ffdba9a832583a4629b5a821662986c777e9cca57100ed3fd9f,PodSandboxId:9b615a9a43e5968869cd39a175e3298d58f96b3d25f4c6e821a1c05d0e960e2f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726333546840035162,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-66txm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: abf3ed52-ab
5a-4415-a8a9-78e567d60348,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd34a54170b251f984dd373097036615e33493d9c7c91970296beedc2d507931,PodSandboxId:fc9e9c48c04bef982ef404b9c77cb73c2aecd644824a80cd95ac86a749ae2de2,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:17263335
35088260594,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-fw757,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51a38d95-fd50-4c05-a75d-a3dfeae127bd,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c1571fb1d1d1f39fe29d5063d77b72dcce6a459a4bf16a25bbc24fd1a9c53849,PodSandboxId:de29821ef5ba3c9a10a8426ece9a602feabbc039c6332faaf6599d1ac2fd5ecd,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726333534777560737,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6zqmd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b7beddc8-ce6a-44ed-b3e8-423baf620bbb,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b409821346de2b42e8ebbff82396df9fc0d7ac3db8b76d586c5c80922f9c0b8,PodSandboxId:383d700a7d746f2e9f7ceb35686a4630128c8524969a84641cd1c16713902f43,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1726333526970281307,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-929592,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e150edac5dabfa6dae6d65966a1e0a7,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:972f797d7355465fa2cd15940a0b34684d83e7ad453b631aa9e06639881c09fb,PodSandboxId:dbb138fdd1472faa7f5001e471dfabc3b867c54c89ceeca12f2aa61eff79f487,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726333523910232398,Labels:map[string]string{io.kubernetes.container.name: kub
e-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-929592,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 95065ad67a4f1610671e72fcaed57954,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac425bd016fb1dd04caf8514eda430f13287990b5a6398599221210bb254390a,PodSandboxId:282b521b3dea81924246e09b66f30e12327d4a1bccd61d21819ea1716b126a09,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726333523925784581,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-
ha-929592,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d7c84dd075d4f7e4fd5febc189940f4e,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ab1e607cdf424b9d9eb961bb3bd75bc16cb8b8f30e2c1fb579f52deb60857d00,PodSandboxId:2cb1c0532ae95d9a90ad1f8b984fb95a8bdda3b4bb844295f285009d3d4636b6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726333523808944878,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-929592,io.kubernet
es.pod.namespace: kube-system,io.kubernetes.pod.uid: a3520d0a4b75398d9e9e72bfdcfc4f4f,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:363e6bc276fd6311c477eb4e17cd12efc2e0822fb67680e1cef01b26c295126c,PodSandboxId:a5a14538e219ebcd5abb61a37ffc184fe8f53c4b08117618bfa5e2ec8c0d75a5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726333523800169396,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-929592,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 21e24f7df5d7099b0f0b2dba49446d51,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=3b4b193e-5d4b-49e4-9c87-797429d01888 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	34c6ad67896f3       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   4 minutes ago       Running             busybox                   0                   e605a9e0100e5       busybox-7dff88458-49mwg
	b0fcec21062af       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      7 minutes ago       Running             storage-provisioner       0                   a615bca1c0121       storage-provisioner
	9eb824a3acd10       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      7 minutes ago       Running             coredns                   0                   69d86428b72f0       coredns-7c65d6cfc9-dpdz4
	06ffbf30c8c13       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      7 minutes ago       Running             coredns                   0                   9b615a9a43e59       coredns-7c65d6cfc9-66txm
	fd34a54170b25       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      7 minutes ago       Running             kindnet-cni               0                   fc9e9c48c04be       kindnet-fw757
	c1571fb1d1d1f       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      7 minutes ago       Running             kube-proxy                0                   de29821ef5ba3       kube-proxy-6zqmd
	7b409821346de       ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f     7 minutes ago       Running             kube-vip                  0                   383d700a7d746       kube-vip-ha-929592
	ac425bd016fb1       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      7 minutes ago       Running             etcd                      0                   282b521b3dea8       etcd-ha-929592
	972f797d73554       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      7 minutes ago       Running             kube-scheduler            0                   dbb138fdd1472       kube-scheduler-ha-929592
	ab1e607cdf424       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      7 minutes ago       Running             kube-apiserver            0                   2cb1c0532ae95       kube-apiserver-ha-929592
	363e6bc276fd6       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      7 minutes ago       Running             kube-controller-manager   0                   a5a14538e219e       kube-controller-manager-ha-929592
	
	
	==> coredns [06ffbf30c8c13ffdba9a832583a4629b5a821662986c777e9cca57100ed3fd9f] <==
	[INFO] 10.244.1.2:56119 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 60 0.000163563s
	[INFO] 10.244.1.2:55772 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.00176312s
	[INFO] 10.244.0.4:42918 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000163348s
	[INFO] 10.244.0.4:42643 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.003969379s
	[INFO] 10.244.0.4:59436 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.003594097s
	[INFO] 10.244.0.4:42742 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000196447s
	[INFO] 10.244.2.2:34834 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000264331s
	[INFO] 10.244.2.2:59462 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000156407s
	[INFO] 10.244.2.2:42619 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001326596s
	[INFO] 10.244.2.2:44804 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000179359s
	[INFO] 10.244.2.2:41911 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000132469s
	[INFO] 10.244.2.2:33102 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000102993s
	[INFO] 10.244.1.2:55754 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000139996s
	[INFO] 10.244.1.2:43056 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.00122452s
	[INFO] 10.244.1.2:48145 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000077043s
	[INFO] 10.244.0.4:52337 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000165468s
	[INFO] 10.244.0.4:42536 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000091889s
	[INFO] 10.244.0.4:44365 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000064388s
	[INFO] 10.244.2.2:55168 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000124822s
	[INFO] 10.244.0.4:38549 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000137185s
	[INFO] 10.244.0.4:50003 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000132872s
	[INFO] 10.244.2.2:52393 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000098256s
	[INFO] 10.244.2.2:57699 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000088711s
	[INFO] 10.244.1.2:46863 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.00018617s
	[INFO] 10.244.1.2:35487 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000119162s
	
	
	==> coredns [9eb824a3acd106dd672eb9b88186825642d229cad673bfd46e35ff45d82c0e17] <==
	[INFO] 10.244.0.4:51005 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000187399s
	[INFO] 10.244.0.4:48604 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.001531016s
	[INFO] 10.244.0.4:52034 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000144239s
	[INFO] 10.244.0.4:59604 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00010094s
	[INFO] 10.244.2.2:44822 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000134857s
	[INFO] 10.244.2.2:33999 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.00156764s
	[INFO] 10.244.1.2:33236 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000120988s
	[INFO] 10.244.1.2:56330 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001720435s
	[INFO] 10.244.1.2:55436 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00009185s
	[INFO] 10.244.1.2:57342 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00009326s
	[INFO] 10.244.1.2:54076 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000109267s
	[INFO] 10.244.0.4:39214 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000088174s
	[INFO] 10.244.2.2:52535 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000132429s
	[INFO] 10.244.2.2:57308 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000131665s
	[INFO] 10.244.2.2:55789 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000060892s
	[INFO] 10.244.1.2:51494 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000124082s
	[INFO] 10.244.1.2:52382 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000214777s
	[INFO] 10.244.1.2:43073 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000088643s
	[INFO] 10.244.1.2:44985 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000084521s
	[INFO] 10.244.0.4:58067 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000132438s
	[INFO] 10.244.0.4:49916 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000488329s
	[INFO] 10.244.2.2:49651 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000189629s
	[INFO] 10.244.2.2:55778 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000106781s
	[INFO] 10.244.1.2:40770 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000160687s
	[INFO] 10.244.1.2:44082 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000162642s
	
	
	==> describe nodes <==
	Name:               ha-929592
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-929592
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fbeeb744274463b05401c917e5ab21bbaf5ef95a
	                    minikube.k8s.io/name=ha-929592
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_14T17_05_31_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 14 Sep 2024 17:05:27 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-929592
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 14 Sep 2024 17:13:00 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 14 Sep 2024 17:08:33 +0000   Sat, 14 Sep 2024 17:05:26 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 14 Sep 2024 17:08:33 +0000   Sat, 14 Sep 2024 17:05:26 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 14 Sep 2024 17:08:33 +0000   Sat, 14 Sep 2024 17:05:26 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 14 Sep 2024 17:08:33 +0000   Sat, 14 Sep 2024 17:05:46 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.54
	  Hostname:    ha-929592
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 ca5487ccf56549d9a2987da2958ebdfe
	  System UUID:                ca5487cc-f565-49d9-a298-7da2958ebdfe
	  Boot ID:                    b416a941-f6c5-4da6-ab3c-4ac7463bcedd
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-49mwg              0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m1s
	  kube-system                 coredns-7c65d6cfc9-66txm             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     7m33s
	  kube-system                 coredns-7c65d6cfc9-dpdz4             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     7m33s
	  kube-system                 etcd-ha-929592                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         7m37s
	  kube-system                 kindnet-fw757                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      7m33s
	  kube-system                 kube-apiserver-ha-929592             250m (12%)    0 (0%)      0 (0%)           0 (0%)         7m39s
	  kube-system                 kube-controller-manager-ha-929592    200m (10%)    0 (0%)      0 (0%)           0 (0%)         7m37s
	  kube-system                 kube-proxy-6zqmd                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m33s
	  kube-system                 kube-scheduler-ha-929592             100m (5%)     0 (0%)      0 (0%)           0 (0%)         7m37s
	  kube-system                 kube-vip-ha-929592                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m39s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m32s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 7m31s  kube-proxy       
	  Normal  Starting                 7m37s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  7m37s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  7m37s  kubelet          Node ha-929592 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m37s  kubelet          Node ha-929592 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m37s  kubelet          Node ha-929592 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           7m34s  node-controller  Node ha-929592 event: Registered Node ha-929592 in Controller
	  Normal  NodeReady                7m21s  kubelet          Node ha-929592 status is now: NodeReady
	  Normal  RegisteredNode           6m35s  node-controller  Node ha-929592 event: Registered Node ha-929592 in Controller
	  Normal  RegisteredNode           5m21s  node-controller  Node ha-929592 event: Registered Node ha-929592 in Controller
	
	
	Name:               ha-929592-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-929592-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fbeeb744274463b05401c917e5ab21bbaf5ef95a
	                    minikube.k8s.io/name=ha-929592
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_14T17_06_27_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 14 Sep 2024 17:06:24 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-929592-m02
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 14 Sep 2024 17:09:48 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Sat, 14 Sep 2024 17:08:26 +0000   Sat, 14 Sep 2024 17:10:31 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Sat, 14 Sep 2024 17:08:26 +0000   Sat, 14 Sep 2024 17:10:31 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Sat, 14 Sep 2024 17:08:26 +0000   Sat, 14 Sep 2024 17:10:31 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Sat, 14 Sep 2024 17:08:26 +0000   Sat, 14 Sep 2024 17:10:31 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.148
	  Hostname:    ha-929592-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 bba17c21a65b42848fb2de3d914ef47e
	  System UUID:                bba17c21-a65b-4284-8fb2-de3d914ef47e
	  Boot ID:                    a9008c31-c184-44c6-a236-ef722ef0e219
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-kvmx7                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m1s
	  kube-system                 etcd-ha-929592-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m42s
	  kube-system                 kindnet-tnjsl                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m43s
	  kube-system                 kube-apiserver-ha-929592-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m42s
	  kube-system                 kube-controller-manager-ha-929592-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m42s
	  kube-system                 kube-proxy-bcfkb                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m43s
	  kube-system                 kube-scheduler-ha-929592-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m42s
	  kube-system                 kube-vip-ha-929592-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m39s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 6m38s                  kube-proxy       
	  Normal  CIDRAssignmentFailed     6m43s                  cidrAllocator    Node ha-929592-m02 status is now: CIDRAssignmentFailed
	  Normal  NodeHasSufficientMemory  6m43s (x8 over 6m43s)  kubelet          Node ha-929592-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m43s (x8 over 6m43s)  kubelet          Node ha-929592-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m43s (x7 over 6m43s)  kubelet          Node ha-929592-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m43s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           6m39s                  node-controller  Node ha-929592-m02 event: Registered Node ha-929592-m02 in Controller
	  Normal  RegisteredNode           6m35s                  node-controller  Node ha-929592-m02 event: Registered Node ha-929592-m02 in Controller
	  Normal  RegisteredNode           5m21s                  node-controller  Node ha-929592-m02 event: Registered Node ha-929592-m02 in Controller
	  Normal  NodeNotReady             2m36s                  node-controller  Node ha-929592-m02 status is now: NodeNotReady
	
	
	Name:               ha-929592-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-929592-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fbeeb744274463b05401c917e5ab21bbaf5ef95a
	                    minikube.k8s.io/name=ha-929592
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_14T17_07_42_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 14 Sep 2024 17:07:38 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-929592-m03
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 14 Sep 2024 17:13:04 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 14 Sep 2024 17:08:39 +0000   Sat, 14 Sep 2024 17:07:38 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 14 Sep 2024 17:08:39 +0000   Sat, 14 Sep 2024 17:07:38 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 14 Sep 2024 17:08:39 +0000   Sat, 14 Sep 2024 17:07:38 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 14 Sep 2024 17:08:39 +0000   Sat, 14 Sep 2024 17:07:59 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.39
	  Hostname:    ha-929592-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 5bbc24177e214149a9c82a3c54652b96
	  System UUID:                5bbc2417-7e21-4149-a9c8-2a3c54652b96
	  Boot ID:                    1443bf49-c348-4dcc-9582-d986b3eb4cd0
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-4gtfl                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m1s
	  kube-system                 etcd-ha-929592-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m28s
	  kube-system                 kindnet-j7mjh                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m29s
	  kube-system                 kube-apiserver-ha-929592-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m28s
	  kube-system                 kube-controller-manager-ha-929592-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m28s
	  kube-system                 kube-proxy-59tn8                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m29s
	  kube-system                 kube-scheduler-ha-929592-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m28s
	  kube-system                 kube-vip-ha-929592-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m25s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m25s                  kube-proxy       
	  Normal  CIDRAssignmentFailed     5m29s                  cidrAllocator    Node ha-929592-m03 status is now: CIDRAssignmentFailed
	  Normal  RegisteredNode           5m29s                  node-controller  Node ha-929592-m03 event: Registered Node ha-929592-m03 in Controller
	  Normal  NodeHasSufficientMemory  5m29s (x8 over 5m29s)  kubelet          Node ha-929592-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m29s (x8 over 5m29s)  kubelet          Node ha-929592-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m29s (x7 over 5m29s)  kubelet          Node ha-929592-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m29s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5m25s                  node-controller  Node ha-929592-m03 event: Registered Node ha-929592-m03 in Controller
	  Normal  RegisteredNode           5m21s                  node-controller  Node ha-929592-m03 event: Registered Node ha-929592-m03 in Controller
	
	
	Name:               ha-929592-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-929592-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fbeeb744274463b05401c917e5ab21bbaf5ef95a
	                    minikube.k8s.io/name=ha-929592
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_14T17_08_41_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 14 Sep 2024 17:08:41 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-929592-m04
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 14 Sep 2024 17:13:06 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 14 Sep 2024 17:09:29 +0000   Sat, 14 Sep 2024 17:08:40 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 14 Sep 2024 17:09:29 +0000   Sat, 14 Sep 2024 17:08:40 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 14 Sep 2024 17:09:29 +0000   Sat, 14 Sep 2024 17:08:40 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 14 Sep 2024 17:09:29 +0000   Sat, 14 Sep 2024 17:09:29 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.51
	  Hostname:    ha-929592-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 b38c12dc6ad945c88a69c031beae5593
	  System UUID:                b38c12dc-6ad9-45c8-8a69-c031beae5593
	  Boot ID:                    e7b0339d-a020-4a02-9bae-4dd87180fa45
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-x76g8       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m26s
	  kube-system                 kube-proxy-l7g8d    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m23s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m16s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  4m27s (x2 over 4m27s)  kubelet          Node ha-929592-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m27s (x2 over 4m27s)  kubelet          Node ha-929592-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m27s (x2 over 4m27s)  kubelet          Node ha-929592-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m27s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  CIDRAssignmentFailed     4m26s                  cidrAllocator    Node ha-929592-m04 status is now: CIDRAssignmentFailed
	  Normal  RegisteredNode           4m26s                  node-controller  Node ha-929592-m04 event: Registered Node ha-929592-m04 in Controller
	  Normal  RegisteredNode           4m25s                  node-controller  Node ha-929592-m04 event: Registered Node ha-929592-m04 in Controller
	  Normal  RegisteredNode           4m24s                  node-controller  Node ha-929592-m04 event: Registered Node ha-929592-m04 in Controller
	  Normal  NodeReady                3m38s                  kubelet          Node ha-929592-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[Sep14 17:04] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.051137] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.036788] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[Sep14 17:05] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.891093] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +4.559623] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.846823] systemd-fstab-generator[582]: Ignoring "noauto" option for root device
	[  +0.055031] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.061916] systemd-fstab-generator[595]: Ignoring "noauto" option for root device
	[  +0.180150] systemd-fstab-generator[608]: Ignoring "noauto" option for root device
	[  +0.131339] systemd-fstab-generator[620]: Ignoring "noauto" option for root device
	[  +0.280240] systemd-fstab-generator[650]: Ignoring "noauto" option for root device
	[  +3.763196] systemd-fstab-generator[746]: Ignoring "noauto" option for root device
	[  +3.977772] systemd-fstab-generator[877]: Ignoring "noauto" option for root device
	[  +0.069092] kauditd_printk_skb: 158 callbacks suppressed
	[  +6.951305] systemd-fstab-generator[1297]: Ignoring "noauto" option for root device
	[  +0.081826] kauditd_printk_skb: 79 callbacks suppressed
	[  +5.069011] kauditd_printk_skb: 28 callbacks suppressed
	[ +11.752479] kauditd_printk_skb: 31 callbacks suppressed
	[Sep14 17:06] kauditd_printk_skb: 24 callbacks suppressed
	
	
	==> etcd [ac425bd016fb1dd04caf8514eda430f13287990b5a6398599221210bb254390a] <==
	{"level":"warn","ts":"2024-09-14T17:13:07.235703Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"731f5c40d4af6217","from":"731f5c40d4af6217","remote-peer-id":"5fb5e21af24b18aa","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-14T17:13:07.243544Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"731f5c40d4af6217","from":"731f5c40d4af6217","remote-peer-id":"5fb5e21af24b18aa","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-14T17:13:07.249266Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"731f5c40d4af6217","from":"731f5c40d4af6217","remote-peer-id":"5fb5e21af24b18aa","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-14T17:13:07.262419Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"731f5c40d4af6217","from":"731f5c40d4af6217","remote-peer-id":"5fb5e21af24b18aa","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-14T17:13:07.273056Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"731f5c40d4af6217","from":"731f5c40d4af6217","remote-peer-id":"5fb5e21af24b18aa","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-14T17:13:07.281265Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"731f5c40d4af6217","from":"731f5c40d4af6217","remote-peer-id":"5fb5e21af24b18aa","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-14T17:13:07.285984Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"731f5c40d4af6217","from":"731f5c40d4af6217","remote-peer-id":"5fb5e21af24b18aa","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-14T17:13:07.289694Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"731f5c40d4af6217","from":"731f5c40d4af6217","remote-peer-id":"5fb5e21af24b18aa","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-14T17:13:07.296239Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"731f5c40d4af6217","from":"731f5c40d4af6217","remote-peer-id":"5fb5e21af24b18aa","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-14T17:13:07.302295Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"731f5c40d4af6217","from":"731f5c40d4af6217","remote-peer-id":"5fb5e21af24b18aa","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-14T17:13:07.308400Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"731f5c40d4af6217","from":"731f5c40d4af6217","remote-peer-id":"5fb5e21af24b18aa","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-14T17:13:07.308568Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"731f5c40d4af6217","from":"731f5c40d4af6217","remote-peer-id":"5fb5e21af24b18aa","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-14T17:13:07.312174Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"731f5c40d4af6217","from":"731f5c40d4af6217","remote-peer-id":"5fb5e21af24b18aa","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-14T17:13:07.315357Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"731f5c40d4af6217","from":"731f5c40d4af6217","remote-peer-id":"5fb5e21af24b18aa","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-14T17:13:07.321892Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"731f5c40d4af6217","from":"731f5c40d4af6217","remote-peer-id":"5fb5e21af24b18aa","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-14T17:13:07.327824Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"731f5c40d4af6217","from":"731f5c40d4af6217","remote-peer-id":"5fb5e21af24b18aa","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-14T17:13:07.334893Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"731f5c40d4af6217","from":"731f5c40d4af6217","remote-peer-id":"5fb5e21af24b18aa","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-14T17:13:07.338803Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"731f5c40d4af6217","from":"731f5c40d4af6217","remote-peer-id":"5fb5e21af24b18aa","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-14T17:13:07.341715Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"731f5c40d4af6217","from":"731f5c40d4af6217","remote-peer-id":"5fb5e21af24b18aa","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-14T17:13:07.345999Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"731f5c40d4af6217","from":"731f5c40d4af6217","remote-peer-id":"5fb5e21af24b18aa","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-14T17:13:07.351749Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"731f5c40d4af6217","from":"731f5c40d4af6217","remote-peer-id":"5fb5e21af24b18aa","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-14T17:13:07.359066Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"731f5c40d4af6217","from":"731f5c40d4af6217","remote-peer-id":"5fb5e21af24b18aa","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-14T17:13:07.381511Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"731f5c40d4af6217","from":"731f5c40d4af6217","remote-peer-id":"5fb5e21af24b18aa","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-14T17:13:07.383031Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"731f5c40d4af6217","from":"731f5c40d4af6217","remote-peer-id":"5fb5e21af24b18aa","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-14T17:13:07.409237Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"731f5c40d4af6217","from":"731f5c40d4af6217","remote-peer-id":"5fb5e21af24b18aa","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 17:13:07 up 8 min,  0 users,  load average: 0.29, 0.35, 0.20
	Linux ha-929592 5.10.207 #1 SMP Sat Sep 14 07:35:46 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [fd34a54170b251f984dd373097036615e33493d9c7c91970296beedc2d507931] <==
	I0914 17:12:36.124776       1 main.go:322] Node ha-929592-m02 has CIDR [10.244.1.0/24] 
	I0914 17:12:46.132291       1 main.go:295] Handling node with IPs: map[192.168.39.148:{}]
	I0914 17:12:46.132405       1 main.go:322] Node ha-929592-m02 has CIDR [10.244.1.0/24] 
	I0914 17:12:46.132562       1 main.go:295] Handling node with IPs: map[192.168.39.39:{}]
	I0914 17:12:46.132673       1 main.go:322] Node ha-929592-m03 has CIDR [10.244.2.0/24] 
	I0914 17:12:46.132769       1 main.go:295] Handling node with IPs: map[192.168.39.51:{}]
	I0914 17:12:46.132790       1 main.go:322] Node ha-929592-m04 has CIDR [10.244.3.0/24] 
	I0914 17:12:46.132865       1 main.go:295] Handling node with IPs: map[192.168.39.54:{}]
	I0914 17:12:46.132884       1 main.go:299] handling current node
	I0914 17:12:56.133521       1 main.go:295] Handling node with IPs: map[192.168.39.54:{}]
	I0914 17:12:56.133812       1 main.go:299] handling current node
	I0914 17:12:56.133851       1 main.go:295] Handling node with IPs: map[192.168.39.148:{}]
	I0914 17:12:56.133871       1 main.go:322] Node ha-929592-m02 has CIDR [10.244.1.0/24] 
	I0914 17:12:56.134075       1 main.go:295] Handling node with IPs: map[192.168.39.39:{}]
	I0914 17:12:56.134099       1 main.go:322] Node ha-929592-m03 has CIDR [10.244.2.0/24] 
	I0914 17:12:56.134164       1 main.go:295] Handling node with IPs: map[192.168.39.51:{}]
	I0914 17:12:56.134182       1 main.go:322] Node ha-929592-m04 has CIDR [10.244.3.0/24] 
	I0914 17:13:06.123570       1 main.go:295] Handling node with IPs: map[192.168.39.54:{}]
	I0914 17:13:06.123729       1 main.go:299] handling current node
	I0914 17:13:06.123774       1 main.go:295] Handling node with IPs: map[192.168.39.148:{}]
	I0914 17:13:06.123794       1 main.go:322] Node ha-929592-m02 has CIDR [10.244.1.0/24] 
	I0914 17:13:06.123964       1 main.go:295] Handling node with IPs: map[192.168.39.39:{}]
	I0914 17:13:06.123988       1 main.go:322] Node ha-929592-m03 has CIDR [10.244.2.0/24] 
	I0914 17:13:06.124048       1 main.go:295] Handling node with IPs: map[192.168.39.51:{}]
	I0914 17:13:06.124069       1 main.go:322] Node ha-929592-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [ab1e607cdf424b9d9eb961bb3bd75bc16cb8b8f30e2c1fb579f52deb60857d00] <==
	I0914 17:05:28.283962       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0914 17:05:28.360959       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0914 17:05:28.367666       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.54]
	I0914 17:05:28.368556       1 controller.go:615] quota admission added evaluator for: endpoints
	I0914 17:05:28.373343       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0914 17:05:28.680051       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0914 17:05:30.135829       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0914 17:05:30.156471       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0914 17:05:30.173207       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0914 17:05:34.177806       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I0914 17:05:34.384683       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	E0914 17:08:11.520363       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:52798: use of closed network connection
	E0914 17:08:11.694195       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:52806: use of closed network connection
	E0914 17:08:12.070905       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:52830: use of closed network connection
	E0914 17:08:12.256251       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:52860: use of closed network connection
	E0914 17:08:12.443924       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:52876: use of closed network connection
	E0914 17:08:12.639918       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:52890: use of closed network connection
	E0914 17:08:12.824843       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:52904: use of closed network connection
	E0914 17:08:13.006503       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:52908: use of closed network connection
	E0914 17:08:13.285252       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:52948: use of closed network connection
	E0914 17:08:13.460709       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:52966: use of closed network connection
	E0914 17:08:13.647974       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:52986: use of closed network connection
	E0914 17:08:13.836122       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:53010: use of closed network connection
	E0914 17:08:14.208702       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:53052: use of closed network connection
	W0914 17:09:58.389176       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.39 192.168.39.54]
	
	
	==> kube-controller-manager [363e6bc276fd6311c477eb4e17cd12efc2e0822fb67680e1cef01b26c295126c] <==
	E0914 17:08:41.186992       1 range_allocator.go:433] "CIDR assignment for node failed. Releasing allocated CIDR" err="failed to patch node CIDR: Node \"ha-929592-m04\" is invalid: [spec.podCIDRs: Invalid value: []string{\"10.244.4.0/24\", \"10.244.3.0/24\"}: may specify no more than one CIDR for each IP family, spec.podCIDRs: Forbidden: node updates may not change podCIDR except from \"\" to valid]" logger="node-ipam-controller" node="ha-929592-m04"
	E0914 17:08:41.187053       1 range_allocator.go:246] "Unhandled Error" err="error syncing 'ha-929592-m04': failed to patch node CIDR: Node \"ha-929592-m04\" is invalid: [spec.podCIDRs: Invalid value: []string{\"10.244.4.0/24\", \"10.244.3.0/24\"}: may specify no more than one CIDR for each IP family, spec.podCIDRs: Forbidden: node updates may not change podCIDR except from \"\" to valid], requeuing" logger="UnhandledError"
	I0914 17:08:41.187117       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-929592-m04"
	I0914 17:08:41.190381       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-929592-m04"
	I0914 17:08:41.192704       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-929592-m04"
	I0914 17:08:41.614378       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-929592-m04"
	I0914 17:08:41.901831       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-929592-m04"
	I0914 17:08:42.647216       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-929592-m04"
	I0914 17:08:42.739995       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-929592-m04"
	I0914 17:08:43.473186       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-929592-m04"
	I0914 17:08:43.474308       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-929592-m04"
	I0914 17:08:43.516355       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-929592-m04"
	I0914 17:08:51.290158       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-929592-m04"
	I0914 17:09:11.731747       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-929592-m04"
	I0914 17:09:29.404151       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-929592-m04"
	I0914 17:09:29.404271       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-929592-m04"
	I0914 17:09:29.419134       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-929592-m04"
	I0914 17:09:31.811684       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-929592-m04"
	I0914 17:10:31.840886       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-929592-m04"
	I0914 17:10:31.841208       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-929592-m02"
	I0914 17:10:31.860016       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-929592-m02"
	I0914 17:10:31.882928       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="11.524786ms"
	I0914 17:10:31.883642       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="142.211µs"
	I0914 17:10:33.526882       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-929592-m02"
	I0914 17:10:37.164911       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-929592-m02"
	
	
	==> kube-proxy [c1571fb1d1d1f39fe29d5063d77b72dcce6a459a4bf16a25bbc24fd1a9c53849] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0914 17:05:35.310113       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0914 17:05:35.359217       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.54"]
	E0914 17:05:35.359342       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0914 17:05:35.435955       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0914 17:05:35.436015       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0914 17:05:35.436044       1 server_linux.go:169] "Using iptables Proxier"
	I0914 17:05:35.449663       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0914 17:05:35.452038       1 server.go:483] "Version info" version="v1.31.1"
	I0914 17:05:35.452091       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0914 17:05:35.454722       1 config.go:199] "Starting service config controller"
	I0914 17:05:35.455148       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0914 17:05:35.455408       1 config.go:105] "Starting endpoint slice config controller"
	I0914 17:05:35.455433       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0914 17:05:35.456374       1 config.go:328] "Starting node config controller"
	I0914 17:05:35.456414       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0914 17:05:35.556032       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0914 17:05:35.556124       1 shared_informer.go:320] Caches are synced for service config
	I0914 17:05:35.556760       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [972f797d7355465fa2cd15940a0b34684d83e7ad453b631aa9e06639881c09fb] <==
	I0914 17:08:41.224233       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-lhrb9" node="ha-929592-m04"
	E0914 17:08:41.260975       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-bkp56\": pod kindnet-bkp56 is already assigned to node \"ha-929592-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-bkp56" node="ha-929592-m04"
	E0914 17:08:41.261124       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 25f166f1-e3c8-47e5-808f-f7057f6dd633(kube-system/kindnet-bkp56) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-bkp56"
	E0914 17:08:41.261165       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-bkp56\": pod kindnet-bkp56 is already assigned to node \"ha-929592-m04\"" pod="kube-system/kindnet-bkp56"
	I0914 17:08:41.261207       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-bkp56" node="ha-929592-m04"
	E0914 17:08:41.270235       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-skw76\": pod kube-proxy-skw76 is already assigned to node \"ha-929592-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-skw76" node="ha-929592-m04"
	E0914 17:08:41.270537       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod c4480281-6939-4653-9697-9041a678e870(kube-system/kube-proxy-skw76) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-skw76"
	E0914 17:08:41.270636       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-skw76\": pod kube-proxy-skw76 is already assigned to node \"ha-929592-m04\"" pod="kube-system/kube-proxy-skw76"
	I0914 17:08:41.270678       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-skw76" node="ha-929592-m04"
	E0914 17:08:42.972713       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-phnll\": pod kube-proxy-phnll is already assigned to node \"ha-929592-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-phnll" node="ha-929592-m04"
	E0914 17:08:42.972802       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-phnll\": pod kube-proxy-phnll is already assigned to node \"ha-929592-m04\"" pod="kube-system/kube-proxy-phnll"
	E0914 17:08:42.973360       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-ll6r9\": pod kube-proxy-ll6r9 is already assigned to node \"ha-929592-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-ll6r9" node="ha-929592-m04"
	E0914 17:08:42.977406       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod ae77fbbd-0eba-4e1d-add0-d894e73795c1(kube-system/kube-proxy-ll6r9) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-ll6r9"
	E0914 17:08:42.977758       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-ll6r9\": pod kube-proxy-ll6r9 is already assigned to node \"ha-929592-m04\"" pod="kube-system/kube-proxy-ll6r9"
	I0914 17:08:42.977890       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-ll6r9" node="ha-929592-m04"
	E0914 17:08:44.830679       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-lrzhr\": pod kube-proxy-lrzhr is already assigned to node \"ha-929592-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-lrzhr" node="ha-929592-m04"
	E0914 17:08:44.830996       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-lrzhr\": pod kube-proxy-lrzhr is already assigned to node \"ha-929592-m04\"" pod="kube-system/kube-proxy-lrzhr"
	E0914 17:08:44.831750       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-thwhv\": pod kube-proxy-thwhv is already assigned to node \"ha-929592-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-thwhv" node="ha-929592-m04"
	E0914 17:08:44.837068       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 858b1075-344d-4b2d-baed-8eea46a2f708(kube-system/kube-proxy-thwhv) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-thwhv"
	E0914 17:08:44.837157       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-thwhv\": pod kube-proxy-thwhv is already assigned to node \"ha-929592-m04\"" pod="kube-system/kube-proxy-thwhv"
	I0914 17:08:44.837232       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-thwhv" node="ha-929592-m04"
	E0914 17:08:44.837022       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-l7g8d\": pod kube-proxy-l7g8d is already assigned to node \"ha-929592-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-l7g8d" node="ha-929592-m04"
	E0914 17:08:44.839305       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod bdb91643-a0e4-4162-aeb3-0d94749f04df(kube-system/kube-proxy-l7g8d) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-l7g8d"
	E0914 17:08:44.839486       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-l7g8d\": pod kube-proxy-l7g8d is already assigned to node \"ha-929592-m04\"" pod="kube-system/kube-proxy-l7g8d"
	I0914 17:08:44.839536       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-l7g8d" node="ha-929592-m04"
	
	
	==> kubelet <==
	Sep 14 17:11:30 ha-929592 kubelet[1305]: E0914 17:11:30.190859    1305 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726333890190380736,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 17:11:30 ha-929592 kubelet[1305]: E0914 17:11:30.191027    1305 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726333890190380736,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 17:11:40 ha-929592 kubelet[1305]: E0914 17:11:40.193360    1305 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726333900192824251,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 17:11:40 ha-929592 kubelet[1305]: E0914 17:11:40.193807    1305 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726333900192824251,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 17:11:50 ha-929592 kubelet[1305]: E0914 17:11:50.196749    1305 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726333910195877198,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 17:11:50 ha-929592 kubelet[1305]: E0914 17:11:50.197231    1305 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726333910195877198,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 17:12:00 ha-929592 kubelet[1305]: E0914 17:12:00.199819    1305 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726333920199330262,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 17:12:00 ha-929592 kubelet[1305]: E0914 17:12:00.199882    1305 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726333920199330262,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 17:12:10 ha-929592 kubelet[1305]: E0914 17:12:10.201533    1305 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726333930201009794,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 17:12:10 ha-929592 kubelet[1305]: E0914 17:12:10.201610    1305 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726333930201009794,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 17:12:20 ha-929592 kubelet[1305]: E0914 17:12:20.203308    1305 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726333940202944621,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 17:12:20 ha-929592 kubelet[1305]: E0914 17:12:20.203359    1305 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726333940202944621,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 17:12:30 ha-929592 kubelet[1305]: E0914 17:12:30.079870    1305 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 14 17:12:30 ha-929592 kubelet[1305]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 14 17:12:30 ha-929592 kubelet[1305]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 14 17:12:30 ha-929592 kubelet[1305]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 14 17:12:30 ha-929592 kubelet[1305]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 14 17:12:30 ha-929592 kubelet[1305]: E0914 17:12:30.205478    1305 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726333950205161882,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 17:12:30 ha-929592 kubelet[1305]: E0914 17:12:30.205502    1305 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726333950205161882,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 17:12:40 ha-929592 kubelet[1305]: E0914 17:12:40.207565    1305 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726333960207183867,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 17:12:40 ha-929592 kubelet[1305]: E0914 17:12:40.207668    1305 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726333960207183867,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 17:12:50 ha-929592 kubelet[1305]: E0914 17:12:50.209154    1305 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726333970208784041,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 17:12:50 ha-929592 kubelet[1305]: E0914 17:12:50.209219    1305 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726333970208784041,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 17:13:00 ha-929592 kubelet[1305]: E0914 17:13:00.210737    1305 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726333980210357526,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 17:13:00 ha-929592 kubelet[1305]: E0914 17:13:00.210770    1305 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726333980210357526,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-929592 -n ha-929592
helpers_test.go:261: (dbg) Run:  kubectl --context ha-929592 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartSecondaryNode (53.26s)

                                                
                                    
TestMultiControlPlane/serial/RestartClusterKeepsNodes (349.8s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-929592 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-linux-amd64 stop -p ha-929592 -v=7 --alsologtostderr
E0914 17:14:04.946882   16016 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/functional-764671/client.crt: no such file or directory" logger="UnhandledError"
E0914 17:14:32.649670   16016 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/functional-764671/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:462: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p ha-929592 -v=7 --alsologtostderr: exit status 82 (2m1.788090776s)

                                                
                                                
-- stdout --
	* Stopping node "ha-929592-m04"  ...
	* Stopping node "ha-929592-m03"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0914 17:13:08.811586   33307 out.go:345] Setting OutFile to fd 1 ...
	I0914 17:13:08.811818   33307 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 17:13:08.811826   33307 out.go:358] Setting ErrFile to fd 2...
	I0914 17:13:08.811830   33307 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 17:13:08.812034   33307 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19643-8806/.minikube/bin
	I0914 17:13:08.812253   33307 out.go:352] Setting JSON to false
	I0914 17:13:08.812340   33307 mustload.go:65] Loading cluster: ha-929592
	I0914 17:13:08.812715   33307 config.go:182] Loaded profile config "ha-929592": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0914 17:13:08.812795   33307 profile.go:143] Saving config to /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/ha-929592/config.json ...
	I0914 17:13:08.812961   33307 mustload.go:65] Loading cluster: ha-929592
	I0914 17:13:08.813093   33307 config.go:182] Loaded profile config "ha-929592": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0914 17:13:08.813122   33307 stop.go:39] StopHost: ha-929592-m04
	I0914 17:13:08.813483   33307 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 17:13:08.813525   33307 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 17:13:08.828691   33307 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35995
	I0914 17:13:08.829194   33307 main.go:141] libmachine: () Calling .GetVersion
	I0914 17:13:08.829743   33307 main.go:141] libmachine: Using API Version  1
	I0914 17:13:08.829765   33307 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 17:13:08.830058   33307 main.go:141] libmachine: () Calling .GetMachineName
	I0914 17:13:08.832733   33307 out.go:177] * Stopping node "ha-929592-m04"  ...
	I0914 17:13:08.834137   33307 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0914 17:13:08.834202   33307 main.go:141] libmachine: (ha-929592-m04) Calling .DriverName
	I0914 17:13:08.834418   33307 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0914 17:13:08.834441   33307 main.go:141] libmachine: (ha-929592-m04) Calling .GetSSHHostname
	I0914 17:13:08.837130   33307 main.go:141] libmachine: (ha-929592-m04) DBG | domain ha-929592-m04 has defined MAC address 52:54:00:7a:18:a1 in network mk-ha-929592
	I0914 17:13:08.837604   33307 main.go:141] libmachine: (ha-929592-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:18:a1", ip: ""} in network mk-ha-929592: {Iface:virbr1 ExpiryTime:2024-09-14 18:08:29 +0000 UTC Type:0 Mac:52:54:00:7a:18:a1 Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:ha-929592-m04 Clientid:01:52:54:00:7a:18:a1}
	I0914 17:13:08.837628   33307 main.go:141] libmachine: (ha-929592-m04) DBG | domain ha-929592-m04 has defined IP address 192.168.39.51 and MAC address 52:54:00:7a:18:a1 in network mk-ha-929592
	I0914 17:13:08.837809   33307 main.go:141] libmachine: (ha-929592-m04) Calling .GetSSHPort
	I0914 17:13:08.837972   33307 main.go:141] libmachine: (ha-929592-m04) Calling .GetSSHKeyPath
	I0914 17:13:08.838099   33307 main.go:141] libmachine: (ha-929592-m04) Calling .GetSSHUsername
	I0914 17:13:08.838278   33307 sshutil.go:53] new ssh client: &{IP:192.168.39.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/ha-929592-m04/id_rsa Username:docker}
	I0914 17:13:08.920986   33307 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0914 17:13:08.974879   33307 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0914 17:13:09.028238   33307 main.go:141] libmachine: Stopping "ha-929592-m04"...
	I0914 17:13:09.028304   33307 main.go:141] libmachine: (ha-929592-m04) Calling .GetState
	I0914 17:13:09.029860   33307 main.go:141] libmachine: (ha-929592-m04) Calling .Stop
	I0914 17:13:09.033330   33307 main.go:141] libmachine: (ha-929592-m04) Waiting for machine to stop 0/120
	I0914 17:13:10.130803   33307 main.go:141] libmachine: (ha-929592-m04) Calling .GetState
	I0914 17:13:10.132135   33307 main.go:141] libmachine: Machine "ha-929592-m04" was stopped.
	I0914 17:13:10.132150   33307 stop.go:75] duration metric: took 1.298015912s to stop
	I0914 17:13:10.132168   33307 stop.go:39] StopHost: ha-929592-m03
	I0914 17:13:10.132460   33307 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 17:13:10.132503   33307 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 17:13:10.148152   33307 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38341
	I0914 17:13:10.148569   33307 main.go:141] libmachine: () Calling .GetVersion
	I0914 17:13:10.149066   33307 main.go:141] libmachine: Using API Version  1
	I0914 17:13:10.149097   33307 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 17:13:10.149425   33307 main.go:141] libmachine: () Calling .GetMachineName
	I0914 17:13:10.151189   33307 out.go:177] * Stopping node "ha-929592-m03"  ...
	I0914 17:13:10.152213   33307 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0914 17:13:10.152234   33307 main.go:141] libmachine: (ha-929592-m03) Calling .DriverName
	I0914 17:13:10.152453   33307 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0914 17:13:10.152486   33307 main.go:141] libmachine: (ha-929592-m03) Calling .GetSSHHostname
	I0914 17:13:10.155809   33307 main.go:141] libmachine: (ha-929592-m03) DBG | domain ha-929592-m03 has defined MAC address 52:54:00:49:df:f1 in network mk-ha-929592
	I0914 17:13:10.156266   33307 main.go:141] libmachine: (ha-929592-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:df:f1", ip: ""} in network mk-ha-929592: {Iface:virbr1 ExpiryTime:2024-09-14 18:07:04 +0000 UTC Type:0 Mac:52:54:00:49:df:f1 Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:ha-929592-m03 Clientid:01:52:54:00:49:df:f1}
	I0914 17:13:10.156292   33307 main.go:141] libmachine: (ha-929592-m03) DBG | domain ha-929592-m03 has defined IP address 192.168.39.39 and MAC address 52:54:00:49:df:f1 in network mk-ha-929592
	I0914 17:13:10.156544   33307 main.go:141] libmachine: (ha-929592-m03) Calling .GetSSHPort
	I0914 17:13:10.156705   33307 main.go:141] libmachine: (ha-929592-m03) Calling .GetSSHKeyPath
	I0914 17:13:10.156886   33307 main.go:141] libmachine: (ha-929592-m03) Calling .GetSSHUsername
	I0914 17:13:10.157022   33307 sshutil.go:53] new ssh client: &{IP:192.168.39.39 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/ha-929592-m03/id_rsa Username:docker}
	I0914 17:13:10.250525   33307 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0914 17:13:10.304092   33307 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0914 17:13:10.358299   33307 main.go:141] libmachine: Stopping "ha-929592-m03"...
	I0914 17:13:10.358350   33307 main.go:141] libmachine: (ha-929592-m03) Calling .GetState
	I0914 17:13:10.359888   33307 main.go:141] libmachine: (ha-929592-m03) Calling .Stop
	I0914 17:13:10.363148   33307 main.go:141] libmachine: (ha-929592-m03) Waiting for machine to stop 0/120
	I0914 17:13:11.364437   33307 main.go:141] libmachine: (ha-929592-m03) Waiting for machine to stop 1/120
	I0914 17:13:12.365892   33307 main.go:141] libmachine: (ha-929592-m03) Waiting for machine to stop 2/120
	I0914 17:13:13.367216   33307 main.go:141] libmachine: (ha-929592-m03) Waiting for machine to stop 3/120
	I0914 17:13:14.369048   33307 main.go:141] libmachine: (ha-929592-m03) Waiting for machine to stop 4/120
	I0914 17:13:15.370857   33307 main.go:141] libmachine: (ha-929592-m03) Waiting for machine to stop 5/120
	I0914 17:13:16.372620   33307 main.go:141] libmachine: (ha-929592-m03) Waiting for machine to stop 6/120
	I0914 17:13:17.374417   33307 main.go:141] libmachine: (ha-929592-m03) Waiting for machine to stop 7/120
	I0914 17:13:18.376021   33307 main.go:141] libmachine: (ha-929592-m03) Waiting for machine to stop 8/120
	I0914 17:13:19.377719   33307 main.go:141] libmachine: (ha-929592-m03) Waiting for machine to stop 9/120
	I0914 17:13:20.379735   33307 main.go:141] libmachine: (ha-929592-m03) Waiting for machine to stop 10/120
	I0914 17:13:21.381048   33307 main.go:141] libmachine: (ha-929592-m03) Waiting for machine to stop 11/120
	I0914 17:13:22.382853   33307 main.go:141] libmachine: (ha-929592-m03) Waiting for machine to stop 12/120
	I0914 17:13:23.384116   33307 main.go:141] libmachine: (ha-929592-m03) Waiting for machine to stop 13/120
	I0914 17:13:24.385619   33307 main.go:141] libmachine: (ha-929592-m03) Waiting for machine to stop 14/120
	I0914 17:13:25.387614   33307 main.go:141] libmachine: (ha-929592-m03) Waiting for machine to stop 15/120
	I0914 17:13:26.389338   33307 main.go:141] libmachine: (ha-929592-m03) Waiting for machine to stop 16/120
	I0914 17:13:27.390658   33307 main.go:141] libmachine: (ha-929592-m03) Waiting for machine to stop 17/120
	I0914 17:13:28.392376   33307 main.go:141] libmachine: (ha-929592-m03) Waiting for machine to stop 18/120
	I0914 17:13:29.393760   33307 main.go:141] libmachine: (ha-929592-m03) Waiting for machine to stop 19/120
	I0914 17:13:30.396136   33307 main.go:141] libmachine: (ha-929592-m03) Waiting for machine to stop 20/120
	I0914 17:13:31.397306   33307 main.go:141] libmachine: (ha-929592-m03) Waiting for machine to stop 21/120
	I0914 17:13:32.399025   33307 main.go:141] libmachine: (ha-929592-m03) Waiting for machine to stop 22/120
	I0914 17:13:33.400327   33307 main.go:141] libmachine: (ha-929592-m03) Waiting for machine to stop 23/120
	I0914 17:13:34.402232   33307 main.go:141] libmachine: (ha-929592-m03) Waiting for machine to stop 24/120
	I0914 17:13:35.404069   33307 main.go:141] libmachine: (ha-929592-m03) Waiting for machine to stop 25/120
	I0914 17:13:36.405692   33307 main.go:141] libmachine: (ha-929592-m03) Waiting for machine to stop 26/120
	I0914 17:13:37.407383   33307 main.go:141] libmachine: (ha-929592-m03) Waiting for machine to stop 27/120
	I0914 17:13:38.409284   33307 main.go:141] libmachine: (ha-929592-m03) Waiting for machine to stop 28/120
	I0914 17:13:39.410936   33307 main.go:141] libmachine: (ha-929592-m03) Waiting for machine to stop 29/120
	I0914 17:13:40.412981   33307 main.go:141] libmachine: (ha-929592-m03) Waiting for machine to stop 30/120
	I0914 17:13:41.414353   33307 main.go:141] libmachine: (ha-929592-m03) Waiting for machine to stop 31/120
	I0914 17:13:42.416235   33307 main.go:141] libmachine: (ha-929592-m03) Waiting for machine to stop 32/120
	I0914 17:13:43.417972   33307 main.go:141] libmachine: (ha-929592-m03) Waiting for machine to stop 33/120
	I0914 17:13:44.419391   33307 main.go:141] libmachine: (ha-929592-m03) Waiting for machine to stop 34/120
	I0914 17:13:45.421575   33307 main.go:141] libmachine: (ha-929592-m03) Waiting for machine to stop 35/120
	I0914 17:13:46.423089   33307 main.go:141] libmachine: (ha-929592-m03) Waiting for machine to stop 36/120
	I0914 17:13:47.424583   33307 main.go:141] libmachine: (ha-929592-m03) Waiting for machine to stop 37/120
	I0914 17:13:48.426147   33307 main.go:141] libmachine: (ha-929592-m03) Waiting for machine to stop 38/120
	I0914 17:13:49.427715   33307 main.go:141] libmachine: (ha-929592-m03) Waiting for machine to stop 39/120
	I0914 17:13:50.429695   33307 main.go:141] libmachine: (ha-929592-m03) Waiting for machine to stop 40/120
	I0914 17:13:51.431079   33307 main.go:141] libmachine: (ha-929592-m03) Waiting for machine to stop 41/120
	I0914 17:13:52.432757   33307 main.go:141] libmachine: (ha-929592-m03) Waiting for machine to stop 42/120
	I0914 17:13:53.434212   33307 main.go:141] libmachine: (ha-929592-m03) Waiting for machine to stop 43/120
	I0914 17:13:54.435800   33307 main.go:141] libmachine: (ha-929592-m03) Waiting for machine to stop 44/120
	I0914 17:13:55.437561   33307 main.go:141] libmachine: (ha-929592-m03) Waiting for machine to stop 45/120
	I0914 17:13:56.438989   33307 main.go:141] libmachine: (ha-929592-m03) Waiting for machine to stop 46/120
	I0914 17:13:57.440462   33307 main.go:141] libmachine: (ha-929592-m03) Waiting for machine to stop 47/120
	I0914 17:13:58.441751   33307 main.go:141] libmachine: (ha-929592-m03) Waiting for machine to stop 48/120
	I0914 17:13:59.442960   33307 main.go:141] libmachine: (ha-929592-m03) Waiting for machine to stop 49/120
	I0914 17:14:00.444832   33307 main.go:141] libmachine: (ha-929592-m03) Waiting for machine to stop 50/120
	I0914 17:14:01.446145   33307 main.go:141] libmachine: (ha-929592-m03) Waiting for machine to stop 51/120
	I0914 17:14:02.447386   33307 main.go:141] libmachine: (ha-929592-m03) Waiting for machine to stop 52/120
	I0914 17:14:03.448642   33307 main.go:141] libmachine: (ha-929592-m03) Waiting for machine to stop 53/120
	I0914 17:14:04.450067   33307 main.go:141] libmachine: (ha-929592-m03) Waiting for machine to stop 54/120
	I0914 17:14:05.452126   33307 main.go:141] libmachine: (ha-929592-m03) Waiting for machine to stop 55/120
	I0914 17:14:06.453647   33307 main.go:141] libmachine: (ha-929592-m03) Waiting for machine to stop 56/120
	I0914 17:14:07.455083   33307 main.go:141] libmachine: (ha-929592-m03) Waiting for machine to stop 57/120
	I0914 17:14:08.456835   33307 main.go:141] libmachine: (ha-929592-m03) Waiting for machine to stop 58/120
	I0914 17:14:09.458194   33307 main.go:141] libmachine: (ha-929592-m03) Waiting for machine to stop 59/120
	I0914 17:14:10.460098   33307 main.go:141] libmachine: (ha-929592-m03) Waiting for machine to stop 60/120
	I0914 17:14:11.461384   33307 main.go:141] libmachine: (ha-929592-m03) Waiting for machine to stop 61/120
	I0914 17:14:12.463094   33307 main.go:141] libmachine: (ha-929592-m03) Waiting for machine to stop 62/120
	I0914 17:14:13.464498   33307 main.go:141] libmachine: (ha-929592-m03) Waiting for machine to stop 63/120
	I0914 17:14:14.465768   33307 main.go:141] libmachine: (ha-929592-m03) Waiting for machine to stop 64/120
	I0914 17:14:15.467545   33307 main.go:141] libmachine: (ha-929592-m03) Waiting for machine to stop 65/120
	I0914 17:14:16.468971   33307 main.go:141] libmachine: (ha-929592-m03) Waiting for machine to stop 66/120
	I0914 17:14:17.470393   33307 main.go:141] libmachine: (ha-929592-m03) Waiting for machine to stop 67/120
	I0914 17:14:18.471751   33307 main.go:141] libmachine: (ha-929592-m03) Waiting for machine to stop 68/120
	I0914 17:14:19.473153   33307 main.go:141] libmachine: (ha-929592-m03) Waiting for machine to stop 69/120
	I0914 17:14:20.475191   33307 main.go:141] libmachine: (ha-929592-m03) Waiting for machine to stop 70/120
	I0914 17:14:21.476470   33307 main.go:141] libmachine: (ha-929592-m03) Waiting for machine to stop 71/120
	I0914 17:14:22.477837   33307 main.go:141] libmachine: (ha-929592-m03) Waiting for machine to stop 72/120
	I0914 17:14:23.479204   33307 main.go:141] libmachine: (ha-929592-m03) Waiting for machine to stop 73/120
	I0914 17:14:24.480696   33307 main.go:141] libmachine: (ha-929592-m03) Waiting for machine to stop 74/120
	I0914 17:14:25.482431   33307 main.go:141] libmachine: (ha-929592-m03) Waiting for machine to stop 75/120
	I0914 17:14:26.483773   33307 main.go:141] libmachine: (ha-929592-m03) Waiting for machine to stop 76/120
	I0914 17:14:27.485119   33307 main.go:141] libmachine: (ha-929592-m03) Waiting for machine to stop 77/120
	I0914 17:14:28.486545   33307 main.go:141] libmachine: (ha-929592-m03) Waiting for machine to stop 78/120
	I0914 17:14:29.488570   33307 main.go:141] libmachine: (ha-929592-m03) Waiting for machine to stop 79/120
	I0914 17:14:30.490339   33307 main.go:141] libmachine: (ha-929592-m03) Waiting for machine to stop 80/120
	I0914 17:14:31.491829   33307 main.go:141] libmachine: (ha-929592-m03) Waiting for machine to stop 81/120
	I0914 17:14:32.493197   33307 main.go:141] libmachine: (ha-929592-m03) Waiting for machine to stop 82/120
	I0914 17:14:33.494504   33307 main.go:141] libmachine: (ha-929592-m03) Waiting for machine to stop 83/120
	I0914 17:14:34.496561   33307 main.go:141] libmachine: (ha-929592-m03) Waiting for machine to stop 84/120
	I0914 17:14:35.498490   33307 main.go:141] libmachine: (ha-929592-m03) Waiting for machine to stop 85/120
	I0914 17:14:36.499690   33307 main.go:141] libmachine: (ha-929592-m03) Waiting for machine to stop 86/120
	I0914 17:14:37.501250   33307 main.go:141] libmachine: (ha-929592-m03) Waiting for machine to stop 87/120
	I0914 17:14:38.502527   33307 main.go:141] libmachine: (ha-929592-m03) Waiting for machine to stop 88/120
	I0914 17:14:39.503911   33307 main.go:141] libmachine: (ha-929592-m03) Waiting for machine to stop 89/120
	I0914 17:14:40.505637   33307 main.go:141] libmachine: (ha-929592-m03) Waiting for machine to stop 90/120
	I0914 17:14:41.507054   33307 main.go:141] libmachine: (ha-929592-m03) Waiting for machine to stop 91/120
	I0914 17:14:42.508305   33307 main.go:141] libmachine: (ha-929592-m03) Waiting for machine to stop 92/120
	I0914 17:14:43.509488   33307 main.go:141] libmachine: (ha-929592-m03) Waiting for machine to stop 93/120
	I0914 17:14:44.510742   33307 main.go:141] libmachine: (ha-929592-m03) Waiting for machine to stop 94/120
	I0914 17:14:45.512384   33307 main.go:141] libmachine: (ha-929592-m03) Waiting for machine to stop 95/120
	I0914 17:14:46.513826   33307 main.go:141] libmachine: (ha-929592-m03) Waiting for machine to stop 96/120
	I0914 17:14:47.515134   33307 main.go:141] libmachine: (ha-929592-m03) Waiting for machine to stop 97/120
	I0914 17:14:48.516550   33307 main.go:141] libmachine: (ha-929592-m03) Waiting for machine to stop 98/120
	I0914 17:14:49.517749   33307 main.go:141] libmachine: (ha-929592-m03) Waiting for machine to stop 99/120
	I0914 17:14:50.519269   33307 main.go:141] libmachine: (ha-929592-m03) Waiting for machine to stop 100/120
	I0914 17:14:51.520586   33307 main.go:141] libmachine: (ha-929592-m03) Waiting for machine to stop 101/120
	I0914 17:14:52.521871   33307 main.go:141] libmachine: (ha-929592-m03) Waiting for machine to stop 102/120
	I0914 17:14:53.523092   33307 main.go:141] libmachine: (ha-929592-m03) Waiting for machine to stop 103/120
	I0914 17:14:54.524563   33307 main.go:141] libmachine: (ha-929592-m03) Waiting for machine to stop 104/120
	I0914 17:14:55.526352   33307 main.go:141] libmachine: (ha-929592-m03) Waiting for machine to stop 105/120
	I0914 17:14:56.527689   33307 main.go:141] libmachine: (ha-929592-m03) Waiting for machine to stop 106/120
	I0914 17:14:57.529971   33307 main.go:141] libmachine: (ha-929592-m03) Waiting for machine to stop 107/120
	I0914 17:14:58.531490   33307 main.go:141] libmachine: (ha-929592-m03) Waiting for machine to stop 108/120
	I0914 17:14:59.532831   33307 main.go:141] libmachine: (ha-929592-m03) Waiting for machine to stop 109/120
	I0914 17:15:00.534824   33307 main.go:141] libmachine: (ha-929592-m03) Waiting for machine to stop 110/120
	I0914 17:15:01.536568   33307 main.go:141] libmachine: (ha-929592-m03) Waiting for machine to stop 111/120
	I0914 17:15:02.538032   33307 main.go:141] libmachine: (ha-929592-m03) Waiting for machine to stop 112/120
	I0914 17:15:03.539648   33307 main.go:141] libmachine: (ha-929592-m03) Waiting for machine to stop 113/120
	I0914 17:15:04.541168   33307 main.go:141] libmachine: (ha-929592-m03) Waiting for machine to stop 114/120
	I0914 17:15:05.543210   33307 main.go:141] libmachine: (ha-929592-m03) Waiting for machine to stop 115/120
	I0914 17:15:06.544633   33307 main.go:141] libmachine: (ha-929592-m03) Waiting for machine to stop 116/120
	I0914 17:15:07.546209   33307 main.go:141] libmachine: (ha-929592-m03) Waiting for machine to stop 117/120
	I0914 17:15:08.547647   33307 main.go:141] libmachine: (ha-929592-m03) Waiting for machine to stop 118/120
	I0914 17:15:09.548902   33307 main.go:141] libmachine: (ha-929592-m03) Waiting for machine to stop 119/120
	I0914 17:15:10.549704   33307 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0914 17:15:10.549758   33307 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0914 17:15:10.551761   33307 out.go:201] 
	W0914 17:15:10.553023   33307 out.go:270] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0914 17:15:10.553046   33307 out.go:270] * 
	W0914 17:15:10.555466   33307 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0914 17:15:10.556732   33307 out.go:201] 

                                                
                                                
** /stderr **
ha_test.go:464: failed to run minikube stop. args "out/minikube-linux-amd64 node list -p ha-929592 -v=7 --alsologtostderr" : exit status 82
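Editor's note: the stderr above shows the stop path re-checking the VM state roughly once per second and giving up after 120 attempts, which is what surfaces as GUEST_STOP_TIMEOUT and exit status 82. The following is a minimal Go sketch of that poll-with-cap pattern only; the helper names (getState, waitForStop) are hypothetical stand-ins, not minikube's actual implementation.

package main

import (
	"errors"
	"fmt"
	"time"
)

// getState is a hypothetical stand-in for the libmachine driver's state query.
func getState() string { return "Running" }

// waitForStop polls the VM state once per second, up to maxAttempts times,
// mirroring the "Waiting for machine to stop N/120" lines in the log.
func waitForStop(maxAttempts int) error {
	for i := 0; i < maxAttempts; i++ {
		if getState() != "Running" {
			return nil
		}
		fmt.Printf("Waiting for machine to stop %d/%d\n", i, maxAttempts)
		time.Sleep(time.Second)
	}
	return errors.New(`unable to stop vm, current state "Running"`)
}

func main() {
	if err := waitForStop(120); err != nil {
		// Corresponds to the "stop err" / GUEST_STOP_TIMEOUT exit path seen above.
		fmt.Println("stop err:", err)
	}
}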
ha_test.go:467: (dbg) Run:  out/minikube-linux-amd64 start -p ha-929592 --wait=true -v=7 --alsologtostderr
E0914 17:16:45.625657   16016 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/addons-996992/client.crt: no such file or directory" logger="UnhandledError"
E0914 17:18:08.691094   16016 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/addons-996992/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:467: (dbg) Done: out/minikube-linux-amd64 start -p ha-929592 --wait=true -v=7 --alsologtostderr: (3m45.189417551s)
ha_test.go:472: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-929592
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-929592 -n ha-929592
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartClusterKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-929592 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-929592 logs -n 25: (2.038103818s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartClusterKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                      Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-929592 cp ha-929592-m03:/home/docker/cp-test.txt                             | ha-929592 | jenkins | v1.34.0 | 14 Sep 24 17:09 UTC | 14 Sep 24 17:09 UTC |
	|         | ha-929592-m02:/home/docker/cp-test_ha-929592-m03_ha-929592-m02.txt              |           |         |         |                     |                     |
	| ssh     | ha-929592 ssh -n                                                                | ha-929592 | jenkins | v1.34.0 | 14 Sep 24 17:09 UTC | 14 Sep 24 17:09 UTC |
	|         | ha-929592-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-929592 ssh -n ha-929592-m02 sudo cat                                         | ha-929592 | jenkins | v1.34.0 | 14 Sep 24 17:09 UTC | 14 Sep 24 17:09 UTC |
	|         | /home/docker/cp-test_ha-929592-m03_ha-929592-m02.txt                            |           |         |         |                     |                     |
	| cp      | ha-929592 cp ha-929592-m03:/home/docker/cp-test.txt                             | ha-929592 | jenkins | v1.34.0 | 14 Sep 24 17:09 UTC | 14 Sep 24 17:09 UTC |
	|         | ha-929592-m04:/home/docker/cp-test_ha-929592-m03_ha-929592-m04.txt              |           |         |         |                     |                     |
	| ssh     | ha-929592 ssh -n                                                                | ha-929592 | jenkins | v1.34.0 | 14 Sep 24 17:09 UTC | 14 Sep 24 17:09 UTC |
	|         | ha-929592-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-929592 ssh -n ha-929592-m04 sudo cat                                         | ha-929592 | jenkins | v1.34.0 | 14 Sep 24 17:09 UTC | 14 Sep 24 17:09 UTC |
	|         | /home/docker/cp-test_ha-929592-m03_ha-929592-m04.txt                            |           |         |         |                     |                     |
	| cp      | ha-929592 cp testdata/cp-test.txt                                               | ha-929592 | jenkins | v1.34.0 | 14 Sep 24 17:09 UTC | 14 Sep 24 17:09 UTC |
	|         | ha-929592-m04:/home/docker/cp-test.txt                                          |           |         |         |                     |                     |
	| ssh     | ha-929592 ssh -n                                                                | ha-929592 | jenkins | v1.34.0 | 14 Sep 24 17:09 UTC | 14 Sep 24 17:09 UTC |
	|         | ha-929592-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| cp      | ha-929592 cp ha-929592-m04:/home/docker/cp-test.txt                             | ha-929592 | jenkins | v1.34.0 | 14 Sep 24 17:09 UTC | 14 Sep 24 17:09 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile183020175/001/cp-test_ha-929592-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-929592 ssh -n                                                                | ha-929592 | jenkins | v1.34.0 | 14 Sep 24 17:09 UTC | 14 Sep 24 17:09 UTC |
	|         | ha-929592-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| cp      | ha-929592 cp ha-929592-m04:/home/docker/cp-test.txt                             | ha-929592 | jenkins | v1.34.0 | 14 Sep 24 17:09 UTC | 14 Sep 24 17:09 UTC |
	|         | ha-929592:/home/docker/cp-test_ha-929592-m04_ha-929592.txt                      |           |         |         |                     |                     |
	| ssh     | ha-929592 ssh -n                                                                | ha-929592 | jenkins | v1.34.0 | 14 Sep 24 17:09 UTC | 14 Sep 24 17:09 UTC |
	|         | ha-929592-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-929592 ssh -n ha-929592 sudo cat                                             | ha-929592 | jenkins | v1.34.0 | 14 Sep 24 17:09 UTC | 14 Sep 24 17:09 UTC |
	|         | /home/docker/cp-test_ha-929592-m04_ha-929592.txt                                |           |         |         |                     |                     |
	| cp      | ha-929592 cp ha-929592-m04:/home/docker/cp-test.txt                             | ha-929592 | jenkins | v1.34.0 | 14 Sep 24 17:09 UTC | 14 Sep 24 17:09 UTC |
	|         | ha-929592-m02:/home/docker/cp-test_ha-929592-m04_ha-929592-m02.txt              |           |         |         |                     |                     |
	| ssh     | ha-929592 ssh -n                                                                | ha-929592 | jenkins | v1.34.0 | 14 Sep 24 17:09 UTC | 14 Sep 24 17:09 UTC |
	|         | ha-929592-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-929592 ssh -n ha-929592-m02 sudo cat                                         | ha-929592 | jenkins | v1.34.0 | 14 Sep 24 17:09 UTC | 14 Sep 24 17:09 UTC |
	|         | /home/docker/cp-test_ha-929592-m04_ha-929592-m02.txt                            |           |         |         |                     |                     |
	| cp      | ha-929592 cp ha-929592-m04:/home/docker/cp-test.txt                             | ha-929592 | jenkins | v1.34.0 | 14 Sep 24 17:09 UTC | 14 Sep 24 17:09 UTC |
	|         | ha-929592-m03:/home/docker/cp-test_ha-929592-m04_ha-929592-m03.txt              |           |         |         |                     |                     |
	| ssh     | ha-929592 ssh -n                                                                | ha-929592 | jenkins | v1.34.0 | 14 Sep 24 17:09 UTC | 14 Sep 24 17:09 UTC |
	|         | ha-929592-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-929592 ssh -n ha-929592-m03 sudo cat                                         | ha-929592 | jenkins | v1.34.0 | 14 Sep 24 17:09 UTC | 14 Sep 24 17:09 UTC |
	|         | /home/docker/cp-test_ha-929592-m04_ha-929592-m03.txt                            |           |         |         |                     |                     |
	| node    | ha-929592 node stop m02 -v=7                                                    | ha-929592 | jenkins | v1.34.0 | 14 Sep 24 17:09 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| node    | ha-929592 node start m02 -v=7                                                   | ha-929592 | jenkins | v1.34.0 | 14 Sep 24 17:12 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| node    | list -p ha-929592 -v=7                                                          | ha-929592 | jenkins | v1.34.0 | 14 Sep 24 17:13 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| stop    | -p ha-929592 -v=7                                                               | ha-929592 | jenkins | v1.34.0 | 14 Sep 24 17:13 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| start   | -p ha-929592 --wait=true -v=7                                                   | ha-929592 | jenkins | v1.34.0 | 14 Sep 24 17:15 UTC | 14 Sep 24 17:18 UTC |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| node    | list -p ha-929592                                                               | ha-929592 | jenkins | v1.34.0 | 14 Sep 24 17:18 UTC |                     |
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/14 17:15:10
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0914 17:15:10.602753   33797 out.go:345] Setting OutFile to fd 1 ...
	I0914 17:15:10.602849   33797 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 17:15:10.602854   33797 out.go:358] Setting ErrFile to fd 2...
	I0914 17:15:10.602858   33797 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 17:15:10.603035   33797 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19643-8806/.minikube/bin
	I0914 17:15:10.603549   33797 out.go:352] Setting JSON to false
	I0914 17:15:10.604450   33797 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":3455,"bootTime":1726330656,"procs":187,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0914 17:15:10.604539   33797 start.go:139] virtualization: kvm guest
	I0914 17:15:10.606694   33797 out.go:177] * [ha-929592] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0914 17:15:10.607843   33797 out.go:177]   - MINIKUBE_LOCATION=19643
	I0914 17:15:10.607848   33797 notify.go:220] Checking for updates...
	I0914 17:15:10.610014   33797 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0914 17:15:10.611077   33797 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19643-8806/kubeconfig
	I0914 17:15:10.612167   33797 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19643-8806/.minikube
	I0914 17:15:10.613267   33797 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0914 17:15:10.614470   33797 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0914 17:15:10.616166   33797 config.go:182] Loaded profile config "ha-929592": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0914 17:15:10.616290   33797 driver.go:394] Setting default libvirt URI to qemu:///system
	I0914 17:15:10.616765   33797 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 17:15:10.616809   33797 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 17:15:10.634573   33797 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38725
	I0914 17:15:10.635108   33797 main.go:141] libmachine: () Calling .GetVersion
	I0914 17:15:10.635658   33797 main.go:141] libmachine: Using API Version  1
	I0914 17:15:10.635677   33797 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 17:15:10.636064   33797 main.go:141] libmachine: () Calling .GetMachineName
	I0914 17:15:10.636258   33797 main.go:141] libmachine: (ha-929592) Calling .DriverName
	I0914 17:15:10.672119   33797 out.go:177] * Using the kvm2 driver based on existing profile
	I0914 17:15:10.673127   33797 start.go:297] selected driver: kvm2
	I0914 17:15:10.673140   33797 start.go:901] validating driver "kvm2" against &{Name:ha-929592 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19643/minikube-v1.34.0-1726281733-19643-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726281268-19643@sha256:cce8be0c1ac4e3d852132008ef1cc1dcf5b79f708d025db83f146ae65db32e8e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVer
sion:v1.31.1 ClusterName:ha-929592 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.54 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.148 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.39 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.51 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:
false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2
000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0914 17:15:10.673277   33797 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0914 17:15:10.673618   33797 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 17:15:10.673694   33797 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19643-8806/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0914 17:15:10.689269   33797 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0914 17:15:10.689980   33797 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0914 17:15:10.690018   33797 cni.go:84] Creating CNI manager for ""
	I0914 17:15:10.690064   33797 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0914 17:15:10.690122   33797 start.go:340] cluster config:
	{Name:ha-929592 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19643/minikube-v1.34.0-1726281733-19643-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726281268-19643@sha256:cce8be0c1ac4e3d852132008ef1cc1dcf5b79f708d025db83f146ae65db32e8e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-929592 Namespace:default APIServerHAVIP:192.168.39
.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.54 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.148 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.39 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.51 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tille
r:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort
:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0914 17:15:10.690298   33797 iso.go:125] acquiring lock: {Name:mk538f7a4abc7956f82511183088cbfac1e66ebb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 17:15:10.691995   33797 out.go:177] * Starting "ha-929592" primary control-plane node in "ha-929592" cluster
	I0914 17:15:10.692880   33797 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0914 17:15:10.692923   33797 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19643-8806/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0914 17:15:10.692930   33797 cache.go:56] Caching tarball of preloaded images
	I0914 17:15:10.693013   33797 preload.go:172] Found /home/jenkins/minikube-integration/19643-8806/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0914 17:15:10.693026   33797 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0914 17:15:10.693156   33797 profile.go:143] Saving config to /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/ha-929592/config.json ...
	I0914 17:15:10.693347   33797 start.go:360] acquireMachinesLock for ha-929592: {Name:mk26748ac63472c3f8b6f3848c12e76160c2970c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0914 17:15:10.693391   33797 start.go:364] duration metric: took 26.138µs to acquireMachinesLock for "ha-929592"
	I0914 17:15:10.693409   33797 start.go:96] Skipping create...Using existing machine configuration
	I0914 17:15:10.693423   33797 fix.go:54] fixHost starting: 
	I0914 17:15:10.693699   33797 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 17:15:10.693736   33797 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 17:15:10.709624   33797 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45623
	I0914 17:15:10.709986   33797 main.go:141] libmachine: () Calling .GetVersion
	I0914 17:15:10.710523   33797 main.go:141] libmachine: Using API Version  1
	I0914 17:15:10.710551   33797 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 17:15:10.710858   33797 main.go:141] libmachine: () Calling .GetMachineName
	I0914 17:15:10.711072   33797 main.go:141] libmachine: (ha-929592) Calling .DriverName
	I0914 17:15:10.711185   33797 main.go:141] libmachine: (ha-929592) Calling .GetState
	I0914 17:15:10.712907   33797 fix.go:112] recreateIfNeeded on ha-929592: state=Running err=<nil>
	W0914 17:15:10.712942   33797 fix.go:138] unexpected machine state, will restart: <nil>
	I0914 17:15:10.714848   33797 out.go:177] * Updating the running kvm2 "ha-929592" VM ...
	I0914 17:15:10.715854   33797 machine.go:93] provisionDockerMachine start ...
	I0914 17:15:10.715875   33797 main.go:141] libmachine: (ha-929592) Calling .DriverName
	I0914 17:15:10.716054   33797 main.go:141] libmachine: (ha-929592) Calling .GetSSHHostname
	I0914 17:15:10.718601   33797 main.go:141] libmachine: (ha-929592) DBG | domain ha-929592 has defined MAC address 52:54:00:5c:cb:09 in network mk-ha-929592
	I0914 17:15:10.719090   33797 main.go:141] libmachine: (ha-929592) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:cb:09", ip: ""} in network mk-ha-929592: {Iface:virbr1 ExpiryTime:2024-09-14 18:05:06 +0000 UTC Type:0 Mac:52:54:00:5c:cb:09 Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:ha-929592 Clientid:01:52:54:00:5c:cb:09}
	I0914 17:15:10.719111   33797 main.go:141] libmachine: (ha-929592) DBG | domain ha-929592 has defined IP address 192.168.39.54 and MAC address 52:54:00:5c:cb:09 in network mk-ha-929592
	I0914 17:15:10.719254   33797 main.go:141] libmachine: (ha-929592) Calling .GetSSHPort
	I0914 17:15:10.719412   33797 main.go:141] libmachine: (ha-929592) Calling .GetSSHKeyPath
	I0914 17:15:10.719559   33797 main.go:141] libmachine: (ha-929592) Calling .GetSSHKeyPath
	I0914 17:15:10.719672   33797 main.go:141] libmachine: (ha-929592) Calling .GetSSHUsername
	I0914 17:15:10.719819   33797 main.go:141] libmachine: Using SSH client type: native
	I0914 17:15:10.720047   33797 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.54 22 <nil> <nil>}
	I0914 17:15:10.720062   33797 main.go:141] libmachine: About to run SSH command:
	hostname
	I0914 17:15:10.838978   33797 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-929592
	
	I0914 17:15:10.839012   33797 main.go:141] libmachine: (ha-929592) Calling .GetMachineName
	I0914 17:15:10.839228   33797 buildroot.go:166] provisioning hostname "ha-929592"
	I0914 17:15:10.839250   33797 main.go:141] libmachine: (ha-929592) Calling .GetMachineName
	I0914 17:15:10.839408   33797 main.go:141] libmachine: (ha-929592) Calling .GetSSHHostname
	I0914 17:15:10.841950   33797 main.go:141] libmachine: (ha-929592) DBG | domain ha-929592 has defined MAC address 52:54:00:5c:cb:09 in network mk-ha-929592
	I0914 17:15:10.842336   33797 main.go:141] libmachine: (ha-929592) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:cb:09", ip: ""} in network mk-ha-929592: {Iface:virbr1 ExpiryTime:2024-09-14 18:05:06 +0000 UTC Type:0 Mac:52:54:00:5c:cb:09 Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:ha-929592 Clientid:01:52:54:00:5c:cb:09}
	I0914 17:15:10.842365   33797 main.go:141] libmachine: (ha-929592) DBG | domain ha-929592 has defined IP address 192.168.39.54 and MAC address 52:54:00:5c:cb:09 in network mk-ha-929592
	I0914 17:15:10.842479   33797 main.go:141] libmachine: (ha-929592) Calling .GetSSHPort
	I0914 17:15:10.842637   33797 main.go:141] libmachine: (ha-929592) Calling .GetSSHKeyPath
	I0914 17:15:10.842752   33797 main.go:141] libmachine: (ha-929592) Calling .GetSSHKeyPath
	I0914 17:15:10.842840   33797 main.go:141] libmachine: (ha-929592) Calling .GetSSHUsername
	I0914 17:15:10.842946   33797 main.go:141] libmachine: Using SSH client type: native
	I0914 17:15:10.843182   33797 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.54 22 <nil> <nil>}
	I0914 17:15:10.843204   33797 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-929592 && echo "ha-929592" | sudo tee /etc/hostname
	I0914 17:15:10.973633   33797 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-929592
	
	I0914 17:15:10.973660   33797 main.go:141] libmachine: (ha-929592) Calling .GetSSHHostname
	I0914 17:15:10.976238   33797 main.go:141] libmachine: (ha-929592) DBG | domain ha-929592 has defined MAC address 52:54:00:5c:cb:09 in network mk-ha-929592
	I0914 17:15:10.976669   33797 main.go:141] libmachine: (ha-929592) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:cb:09", ip: ""} in network mk-ha-929592: {Iface:virbr1 ExpiryTime:2024-09-14 18:05:06 +0000 UTC Type:0 Mac:52:54:00:5c:cb:09 Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:ha-929592 Clientid:01:52:54:00:5c:cb:09}
	I0914 17:15:10.976697   33797 main.go:141] libmachine: (ha-929592) DBG | domain ha-929592 has defined IP address 192.168.39.54 and MAC address 52:54:00:5c:cb:09 in network mk-ha-929592
	I0914 17:15:10.976880   33797 main.go:141] libmachine: (ha-929592) Calling .GetSSHPort
	I0914 17:15:10.977071   33797 main.go:141] libmachine: (ha-929592) Calling .GetSSHKeyPath
	I0914 17:15:10.977229   33797 main.go:141] libmachine: (ha-929592) Calling .GetSSHKeyPath
	I0914 17:15:10.977344   33797 main.go:141] libmachine: (ha-929592) Calling .GetSSHUsername
	I0914 17:15:10.977532   33797 main.go:141] libmachine: Using SSH client type: native
	I0914 17:15:10.977718   33797 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.54 22 <nil> <nil>}
	I0914 17:15:10.977739   33797 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-929592' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-929592/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-929592' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0914 17:15:11.090972   33797 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0914 17:15:11.090995   33797 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19643-8806/.minikube CaCertPath:/home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19643-8806/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19643-8806/.minikube}
	I0914 17:15:11.091028   33797 buildroot.go:174] setting up certificates
	I0914 17:15:11.091036   33797 provision.go:84] configureAuth start
	I0914 17:15:11.091046   33797 main.go:141] libmachine: (ha-929592) Calling .GetMachineName
	I0914 17:15:11.091295   33797 main.go:141] libmachine: (ha-929592) Calling .GetIP
	I0914 17:15:11.093895   33797 main.go:141] libmachine: (ha-929592) DBG | domain ha-929592 has defined MAC address 52:54:00:5c:cb:09 in network mk-ha-929592
	I0914 17:15:11.094260   33797 main.go:141] libmachine: (ha-929592) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:cb:09", ip: ""} in network mk-ha-929592: {Iface:virbr1 ExpiryTime:2024-09-14 18:05:06 +0000 UTC Type:0 Mac:52:54:00:5c:cb:09 Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:ha-929592 Clientid:01:52:54:00:5c:cb:09}
	I0914 17:15:11.094299   33797 main.go:141] libmachine: (ha-929592) DBG | domain ha-929592 has defined IP address 192.168.39.54 and MAC address 52:54:00:5c:cb:09 in network mk-ha-929592
	I0914 17:15:11.094430   33797 main.go:141] libmachine: (ha-929592) Calling .GetSSHHostname
	I0914 17:15:11.096417   33797 main.go:141] libmachine: (ha-929592) DBG | domain ha-929592 has defined MAC address 52:54:00:5c:cb:09 in network mk-ha-929592
	I0914 17:15:11.096750   33797 main.go:141] libmachine: (ha-929592) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:cb:09", ip: ""} in network mk-ha-929592: {Iface:virbr1 ExpiryTime:2024-09-14 18:05:06 +0000 UTC Type:0 Mac:52:54:00:5c:cb:09 Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:ha-929592 Clientid:01:52:54:00:5c:cb:09}
	I0914 17:15:11.096769   33797 main.go:141] libmachine: (ha-929592) DBG | domain ha-929592 has defined IP address 192.168.39.54 and MAC address 52:54:00:5c:cb:09 in network mk-ha-929592
	I0914 17:15:11.096859   33797 provision.go:143] copyHostCerts
	I0914 17:15:11.096900   33797 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19643-8806/.minikube/cert.pem
	I0914 17:15:11.096932   33797 exec_runner.go:144] found /home/jenkins/minikube-integration/19643-8806/.minikube/cert.pem, removing ...
	I0914 17:15:11.096941   33797 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19643-8806/.minikube/cert.pem
	I0914 17:15:11.097003   33797 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19643-8806/.minikube/cert.pem (1123 bytes)
	I0914 17:15:11.097083   33797 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19643-8806/.minikube/key.pem
	I0914 17:15:11.097104   33797 exec_runner.go:144] found /home/jenkins/minikube-integration/19643-8806/.minikube/key.pem, removing ...
	I0914 17:15:11.097108   33797 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19643-8806/.minikube/key.pem
	I0914 17:15:11.097132   33797 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19643-8806/.minikube/key.pem (1675 bytes)
	I0914 17:15:11.097172   33797 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19643-8806/.minikube/ca.pem
	I0914 17:15:11.097188   33797 exec_runner.go:144] found /home/jenkins/minikube-integration/19643-8806/.minikube/ca.pem, removing ...
	I0914 17:15:11.097192   33797 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19643-8806/.minikube/ca.pem
	I0914 17:15:11.097217   33797 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19643-8806/.minikube/ca.pem (1082 bytes)
	I0914 17:15:11.097261   33797 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19643-8806/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca-key.pem org=jenkins.ha-929592 san=[127.0.0.1 192.168.39.54 ha-929592 localhost minikube]
	I0914 17:15:11.263507   33797 provision.go:177] copyRemoteCerts
	I0914 17:15:11.263571   33797 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0914 17:15:11.263593   33797 main.go:141] libmachine: (ha-929592) Calling .GetSSHHostname
	I0914 17:15:11.266211   33797 main.go:141] libmachine: (ha-929592) DBG | domain ha-929592 has defined MAC address 52:54:00:5c:cb:09 in network mk-ha-929592
	I0914 17:15:11.266533   33797 main.go:141] libmachine: (ha-929592) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:cb:09", ip: ""} in network mk-ha-929592: {Iface:virbr1 ExpiryTime:2024-09-14 18:05:06 +0000 UTC Type:0 Mac:52:54:00:5c:cb:09 Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:ha-929592 Clientid:01:52:54:00:5c:cb:09}
	I0914 17:15:11.266571   33797 main.go:141] libmachine: (ha-929592) DBG | domain ha-929592 has defined IP address 192.168.39.54 and MAC address 52:54:00:5c:cb:09 in network mk-ha-929592
	I0914 17:15:11.266695   33797 main.go:141] libmachine: (ha-929592) Calling .GetSSHPort
	I0914 17:15:11.266848   33797 main.go:141] libmachine: (ha-929592) Calling .GetSSHKeyPath
	I0914 17:15:11.266968   33797 main.go:141] libmachine: (ha-929592) Calling .GetSSHUsername
	I0914 17:15:11.267069   33797 sshutil.go:53] new ssh client: &{IP:192.168.39.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/ha-929592/id_rsa Username:docker}
	I0914 17:15:11.352949   33797 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0914 17:15:11.353011   33797 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0914 17:15:11.376038   33797 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19643-8806/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0914 17:15:11.376128   33797 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0914 17:15:11.400362   33797 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19643-8806/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0914 17:15:11.400429   33797 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0914 17:15:11.425531   33797 provision.go:87] duration metric: took 334.483325ms to configureAuth
	I0914 17:15:11.425561   33797 buildroot.go:189] setting minikube options for container-runtime
	I0914 17:15:11.425778   33797 config.go:182] Loaded profile config "ha-929592": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0914 17:15:11.425862   33797 main.go:141] libmachine: (ha-929592) Calling .GetSSHHostname
	I0914 17:15:11.428475   33797 main.go:141] libmachine: (ha-929592) DBG | domain ha-929592 has defined MAC address 52:54:00:5c:cb:09 in network mk-ha-929592
	I0914 17:15:11.428870   33797 main.go:141] libmachine: (ha-929592) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:cb:09", ip: ""} in network mk-ha-929592: {Iface:virbr1 ExpiryTime:2024-09-14 18:05:06 +0000 UTC Type:0 Mac:52:54:00:5c:cb:09 Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:ha-929592 Clientid:01:52:54:00:5c:cb:09}
	I0914 17:15:11.428895   33797 main.go:141] libmachine: (ha-929592) DBG | domain ha-929592 has defined IP address 192.168.39.54 and MAC address 52:54:00:5c:cb:09 in network mk-ha-929592
	I0914 17:15:11.429127   33797 main.go:141] libmachine: (ha-929592) Calling .GetSSHPort
	I0914 17:15:11.429294   33797 main.go:141] libmachine: (ha-929592) Calling .GetSSHKeyPath
	I0914 17:15:11.429503   33797 main.go:141] libmachine: (ha-929592) Calling .GetSSHKeyPath
	I0914 17:15:11.429646   33797 main.go:141] libmachine: (ha-929592) Calling .GetSSHUsername
	I0914 17:15:11.429874   33797 main.go:141] libmachine: Using SSH client type: native
	I0914 17:15:11.430064   33797 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.54 22 <nil> <nil>}
	I0914 17:15:11.430082   33797 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0914 17:16:42.114577   33797 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0914 17:16:42.114621   33797 machine.go:96] duration metric: took 1m31.398754249s to provisionDockerMachine
	I0914 17:16:42.114634   33797 start.go:293] postStartSetup for "ha-929592" (driver="kvm2")
	I0914 17:16:42.114648   33797 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0914 17:16:42.114674   33797 main.go:141] libmachine: (ha-929592) Calling .DriverName
	I0914 17:16:42.114982   33797 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0914 17:16:42.115009   33797 main.go:141] libmachine: (ha-929592) Calling .GetSSHHostname
	I0914 17:16:42.118220   33797 main.go:141] libmachine: (ha-929592) DBG | domain ha-929592 has defined MAC address 52:54:00:5c:cb:09 in network mk-ha-929592
	I0914 17:16:42.118791   33797 main.go:141] libmachine: (ha-929592) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:cb:09", ip: ""} in network mk-ha-929592: {Iface:virbr1 ExpiryTime:2024-09-14 18:05:06 +0000 UTC Type:0 Mac:52:54:00:5c:cb:09 Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:ha-929592 Clientid:01:52:54:00:5c:cb:09}
	I0914 17:16:42.118818   33797 main.go:141] libmachine: (ha-929592) DBG | domain ha-929592 has defined IP address 192.168.39.54 and MAC address 52:54:00:5c:cb:09 in network mk-ha-929592
	I0914 17:16:42.119088   33797 main.go:141] libmachine: (ha-929592) Calling .GetSSHPort
	I0914 17:16:42.119254   33797 main.go:141] libmachine: (ha-929592) Calling .GetSSHKeyPath
	I0914 17:16:42.119403   33797 main.go:141] libmachine: (ha-929592) Calling .GetSSHUsername
	I0914 17:16:42.119539   33797 sshutil.go:53] new ssh client: &{IP:192.168.39.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/ha-929592/id_rsa Username:docker}
	I0914 17:16:42.209845   33797 ssh_runner.go:195] Run: cat /etc/os-release
	I0914 17:16:42.214105   33797 info.go:137] Remote host: Buildroot 2023.02.9
	I0914 17:16:42.214135   33797 filesync.go:126] Scanning /home/jenkins/minikube-integration/19643-8806/.minikube/addons for local assets ...
	I0914 17:16:42.214213   33797 filesync.go:126] Scanning /home/jenkins/minikube-integration/19643-8806/.minikube/files for local assets ...
	I0914 17:16:42.214306   33797 filesync.go:149] local asset: /home/jenkins/minikube-integration/19643-8806/.minikube/files/etc/ssl/certs/160162.pem -> 160162.pem in /etc/ssl/certs
	I0914 17:16:42.214315   33797 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19643-8806/.minikube/files/etc/ssl/certs/160162.pem -> /etc/ssl/certs/160162.pem
	I0914 17:16:42.214400   33797 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0914 17:16:42.223501   33797 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/files/etc/ssl/certs/160162.pem --> /etc/ssl/certs/160162.pem (1708 bytes)
	I0914 17:16:42.245903   33797 start.go:296] duration metric: took 131.252389ms for postStartSetup
	I0914 17:16:42.245942   33797 main.go:141] libmachine: (ha-929592) Calling .DriverName
	I0914 17:16:42.246240   33797 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0914 17:16:42.246272   33797 main.go:141] libmachine: (ha-929592) Calling .GetSSHHostname
	I0914 17:16:42.248809   33797 main.go:141] libmachine: (ha-929592) DBG | domain ha-929592 has defined MAC address 52:54:00:5c:cb:09 in network mk-ha-929592
	I0914 17:16:42.249260   33797 main.go:141] libmachine: (ha-929592) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:cb:09", ip: ""} in network mk-ha-929592: {Iface:virbr1 ExpiryTime:2024-09-14 18:05:06 +0000 UTC Type:0 Mac:52:54:00:5c:cb:09 Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:ha-929592 Clientid:01:52:54:00:5c:cb:09}
	I0914 17:16:42.249281   33797 main.go:141] libmachine: (ha-929592) DBG | domain ha-929592 has defined IP address 192.168.39.54 and MAC address 52:54:00:5c:cb:09 in network mk-ha-929592
	I0914 17:16:42.249454   33797 main.go:141] libmachine: (ha-929592) Calling .GetSSHPort
	I0914 17:16:42.249660   33797 main.go:141] libmachine: (ha-929592) Calling .GetSSHKeyPath
	I0914 17:16:42.249812   33797 main.go:141] libmachine: (ha-929592) Calling .GetSSHUsername
	I0914 17:16:42.249947   33797 sshutil.go:53] new ssh client: &{IP:192.168.39.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/ha-929592/id_rsa Username:docker}
	W0914 17:16:42.336372   33797 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I0914 17:16:42.336398   33797 fix.go:56] duration metric: took 1m31.642974944s for fixHost
	I0914 17:16:42.336420   33797 main.go:141] libmachine: (ha-929592) Calling .GetSSHHostname
	I0914 17:16:42.339350   33797 main.go:141] libmachine: (ha-929592) DBG | domain ha-929592 has defined MAC address 52:54:00:5c:cb:09 in network mk-ha-929592
	I0914 17:16:42.339840   33797 main.go:141] libmachine: (ha-929592) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:cb:09", ip: ""} in network mk-ha-929592: {Iface:virbr1 ExpiryTime:2024-09-14 18:05:06 +0000 UTC Type:0 Mac:52:54:00:5c:cb:09 Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:ha-929592 Clientid:01:52:54:00:5c:cb:09}
	I0914 17:16:42.339862   33797 main.go:141] libmachine: (ha-929592) DBG | domain ha-929592 has defined IP address 192.168.39.54 and MAC address 52:54:00:5c:cb:09 in network mk-ha-929592
	I0914 17:16:42.340052   33797 main.go:141] libmachine: (ha-929592) Calling .GetSSHPort
	I0914 17:16:42.340265   33797 main.go:141] libmachine: (ha-929592) Calling .GetSSHKeyPath
	I0914 17:16:42.340399   33797 main.go:141] libmachine: (ha-929592) Calling .GetSSHKeyPath
	I0914 17:16:42.340516   33797 main.go:141] libmachine: (ha-929592) Calling .GetSSHUsername
	I0914 17:16:42.340670   33797 main.go:141] libmachine: Using SSH client type: native
	I0914 17:16:42.340875   33797 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.54 22 <nil> <nil>}
	I0914 17:16:42.340890   33797 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0914 17:16:42.454511   33797 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726334202.418081292
	
	I0914 17:16:42.454533   33797 fix.go:216] guest clock: 1726334202.418081292
	I0914 17:16:42.454541   33797 fix.go:229] Guest: 2024-09-14 17:16:42.418081292 +0000 UTC Remote: 2024-09-14 17:16:42.336405227 +0000 UTC m=+91.769197256 (delta=81.676065ms)
	I0914 17:16:42.454576   33797 fix.go:200] guest clock delta is within tolerance: 81.676065ms
	I0914 17:16:42.454581   33797 start.go:83] releasing machines lock for "ha-929592", held for 1m31.76118071s
	I0914 17:16:42.454600   33797 main.go:141] libmachine: (ha-929592) Calling .DriverName
	I0914 17:16:42.454845   33797 main.go:141] libmachine: (ha-929592) Calling .GetIP
	I0914 17:16:42.457270   33797 main.go:141] libmachine: (ha-929592) DBG | domain ha-929592 has defined MAC address 52:54:00:5c:cb:09 in network mk-ha-929592
	I0914 17:16:42.457846   33797 main.go:141] libmachine: (ha-929592) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:cb:09", ip: ""} in network mk-ha-929592: {Iface:virbr1 ExpiryTime:2024-09-14 18:05:06 +0000 UTC Type:0 Mac:52:54:00:5c:cb:09 Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:ha-929592 Clientid:01:52:54:00:5c:cb:09}
	I0914 17:16:42.457869   33797 main.go:141] libmachine: (ha-929592) DBG | domain ha-929592 has defined IP address 192.168.39.54 and MAC address 52:54:00:5c:cb:09 in network mk-ha-929592
	I0914 17:16:42.458066   33797 main.go:141] libmachine: (ha-929592) Calling .DriverName
	I0914 17:16:42.458663   33797 main.go:141] libmachine: (ha-929592) Calling .DriverName
	I0914 17:16:42.458831   33797 main.go:141] libmachine: (ha-929592) Calling .DriverName
	I0914 17:16:42.458929   33797 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0914 17:16:42.458970   33797 main.go:141] libmachine: (ha-929592) Calling .GetSSHHostname
	I0914 17:16:42.459028   33797 ssh_runner.go:195] Run: cat /version.json
	I0914 17:16:42.459053   33797 main.go:141] libmachine: (ha-929592) Calling .GetSSHHostname
	I0914 17:16:42.461742   33797 main.go:141] libmachine: (ha-929592) DBG | domain ha-929592 has defined MAC address 52:54:00:5c:cb:09 in network mk-ha-929592
	I0914 17:16:42.462045   33797 main.go:141] libmachine: (ha-929592) DBG | domain ha-929592 has defined MAC address 52:54:00:5c:cb:09 in network mk-ha-929592
	I0914 17:16:42.462224   33797 main.go:141] libmachine: (ha-929592) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:cb:09", ip: ""} in network mk-ha-929592: {Iface:virbr1 ExpiryTime:2024-09-14 18:05:06 +0000 UTC Type:0 Mac:52:54:00:5c:cb:09 Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:ha-929592 Clientid:01:52:54:00:5c:cb:09}
	I0914 17:16:42.462250   33797 main.go:141] libmachine: (ha-929592) DBG | domain ha-929592 has defined IP address 192.168.39.54 and MAC address 52:54:00:5c:cb:09 in network mk-ha-929592
	I0914 17:16:42.462397   33797 main.go:141] libmachine: (ha-929592) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:cb:09", ip: ""} in network mk-ha-929592: {Iface:virbr1 ExpiryTime:2024-09-14 18:05:06 +0000 UTC Type:0 Mac:52:54:00:5c:cb:09 Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:ha-929592 Clientid:01:52:54:00:5c:cb:09}
	I0914 17:16:42.462411   33797 main.go:141] libmachine: (ha-929592) Calling .GetSSHPort
	I0914 17:16:42.462423   33797 main.go:141] libmachine: (ha-929592) DBG | domain ha-929592 has defined IP address 192.168.39.54 and MAC address 52:54:00:5c:cb:09 in network mk-ha-929592
	I0914 17:16:42.462572   33797 main.go:141] libmachine: (ha-929592) Calling .GetSSHPort
	I0914 17:16:42.462590   33797 main.go:141] libmachine: (ha-929592) Calling .GetSSHKeyPath
	I0914 17:16:42.462729   33797 main.go:141] libmachine: (ha-929592) Calling .GetSSHUsername
	I0914 17:16:42.462752   33797 main.go:141] libmachine: (ha-929592) Calling .GetSSHKeyPath
	I0914 17:16:42.462811   33797 sshutil.go:53] new ssh client: &{IP:192.168.39.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/ha-929592/id_rsa Username:docker}
	I0914 17:16:42.462904   33797 main.go:141] libmachine: (ha-929592) Calling .GetSSHUsername
	I0914 17:16:42.463011   33797 sshutil.go:53] new ssh client: &{IP:192.168.39.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/ha-929592/id_rsa Username:docker}
	I0914 17:16:42.543338   33797 ssh_runner.go:195] Run: systemctl --version
	I0914 17:16:42.579305   33797 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0914 17:16:42.745322   33797 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0914 17:16:42.752599   33797 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0914 17:16:42.752663   33797 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0914 17:16:42.761511   33797 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0914 17:16:42.761531   33797 start.go:495] detecting cgroup driver to use...
	I0914 17:16:42.761592   33797 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0914 17:16:42.777948   33797 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0914 17:16:42.792470   33797 docker.go:217] disabling cri-docker service (if available) ...
	I0914 17:16:42.792531   33797 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0914 17:16:42.806346   33797 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0914 17:16:42.820060   33797 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0914 17:16:42.971162   33797 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0914 17:16:43.121104   33797 docker.go:233] disabling docker service ...
	I0914 17:16:43.121170   33797 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0914 17:16:43.137672   33797 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0914 17:16:43.151261   33797 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0914 17:16:43.296068   33797 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0914 17:16:43.444321   33797 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0914 17:16:43.474544   33797 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0914 17:16:43.492833   33797 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0914 17:16:43.492895   33797 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 17:16:43.503326   33797 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0914 17:16:43.503397   33797 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 17:16:43.513553   33797 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 17:16:43.523501   33797 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 17:16:43.533609   33797 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0914 17:16:43.543625   33797 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 17:16:43.553720   33797 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 17:16:43.564651   33797 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 17:16:43.574688   33797 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0914 17:16:43.583809   33797 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0914 17:16:43.592901   33797 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 17:16:43.735079   33797 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0914 17:16:50.498812   33797 ssh_runner.go:235] Completed: sudo systemctl restart crio: (6.763701185s)
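The run above configures CRI-O entirely through sed edits to /etc/crio/crio.conf.d/02-crio.conf (pause image, cgroup manager, conmon cgroup, the unprivileged-port sysctl) and then restarts the service, which accounts for the ~6.8s pause. Purely as an illustration of what those substitutions do — this is not minikube's crio.go, and the starting file contents below are made up — the key edits can be reproduced in Go against an in-memory copy of the file:

package main

import (
	"fmt"
	"regexp"
)

func main() {
	// Hypothetical starting contents of 02-crio.conf; the real file has more keys.
	conf := "pause_image = \"registry.k8s.io/pause:3.9\"\ncgroup_manager = \"systemd\"\nconmon_cgroup = \"system.slice\"\n"

	// sed 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|'
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10"`)

	// sed 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|'
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)

	// sed '/conmon_cgroup = .*/d' followed by '/cgroup_manager = .*/a conmon_cgroup = "pod"'
	conf = regexp.MustCompile(`(?m)^conmon_cgroup = .*\n`).ReplaceAllString(conf, "")
	conf = regexp.MustCompile(`(?m)^cgroup_manager = .*$`).
		ReplaceAllString(conf, "$0\nconmon_cgroup = \"pod\"")

	fmt.Print(conf)
}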
	I0914 17:16:50.498837   33797 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0914 17:16:50.498878   33797 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0914 17:16:50.504217   33797 start.go:563] Will wait 60s for crictl version
	I0914 17:16:50.504267   33797 ssh_runner.go:195] Run: which crictl
	I0914 17:16:50.507855   33797 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0914 17:16:50.550085   33797 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0914 17:16:50.550152   33797 ssh_runner.go:195] Run: crio --version
	I0914 17:16:50.578849   33797 ssh_runner.go:195] Run: crio --version
	I0914 17:16:50.607421   33797 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0914 17:16:50.608673   33797 main.go:141] libmachine: (ha-929592) Calling .GetIP
	I0914 17:16:50.611777   33797 main.go:141] libmachine: (ha-929592) DBG | domain ha-929592 has defined MAC address 52:54:00:5c:cb:09 in network mk-ha-929592
	I0914 17:16:50.612196   33797 main.go:141] libmachine: (ha-929592) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:cb:09", ip: ""} in network mk-ha-929592: {Iface:virbr1 ExpiryTime:2024-09-14 18:05:06 +0000 UTC Type:0 Mac:52:54:00:5c:cb:09 Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:ha-929592 Clientid:01:52:54:00:5c:cb:09}
	I0914 17:16:50.612223   33797 main.go:141] libmachine: (ha-929592) DBG | domain ha-929592 has defined IP address 192.168.39.54 and MAC address 52:54:00:5c:cb:09 in network mk-ha-929592
	I0914 17:16:50.612421   33797 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0914 17:16:50.616787   33797 kubeadm.go:883] updating cluster {Name:ha-929592 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19643/minikube-v1.34.0-1726281733-19643-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726281268-19643@sha256:cce8be0c1ac4e3d852132008ef1cc1dcf5b79f708d025db83f146ae65db32e8e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 Cl
usterName:ha-929592 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.54 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.148 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.39 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.51 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false fresh
pod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L Mount
GID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0914 17:16:50.616935   33797 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0914 17:16:50.616988   33797 ssh_runner.go:195] Run: sudo crictl images --output json
	I0914 17:16:50.661040   33797 crio.go:514] all images are preloaded for cri-o runtime.
	I0914 17:16:50.661062   33797 crio.go:433] Images already preloaded, skipping extraction
	I0914 17:16:50.661116   33797 ssh_runner.go:195] Run: sudo crictl images --output json
	I0914 17:16:50.697632   33797 crio.go:514] all images are preloaded for cri-o runtime.
	I0914 17:16:50.697654   33797 cache_images.go:84] Images are preloaded, skipping loading
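The preload check uses `sudo crictl images --output json` to decide that every required image is already present, so tarball extraction is skipped. A rough stand-alone equivalent of that listing step is sketched below; the JSON field names are an assumption based on crictl's ListImages output and may differ between versions:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// Ask CRI-O which images it already has and print their tags.
func main() {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		fmt.Println("crictl failed:", err)
		return
	}
	// Assumed output shape: {"images":[{"repoTags":[...]}, ...]}.
	var resp struct {
		Images []struct {
			RepoTags []string `json:"repoTags"`
		} `json:"images"`
	}
	if err := json.Unmarshal(out, &resp); err != nil {
		fmt.Println("unexpected output:", err)
		return
	}
	for _, img := range resp.Images {
		fmt.Println(img.RepoTags)
	}
}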
	I0914 17:16:50.697662   33797 kubeadm.go:934] updating node { 192.168.39.54 8443 v1.31.1 crio true true} ...
	I0914 17:16:50.697809   33797 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-929592 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.54
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-929592 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0914 17:16:50.697891   33797 ssh_runner.go:195] Run: crio config
	I0914 17:16:50.744713   33797 cni.go:84] Creating CNI manager for ""
	I0914 17:16:50.744734   33797 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0914 17:16:50.744749   33797 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0914 17:16:50.744769   33797 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.54 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-929592 NodeName:ha-929592 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.54"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.54 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/m
anifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0914 17:16:50.744895   33797 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.54
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-929592"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.54
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.54"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
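The YAML above is rendered from the kubeadm options logged at kubeadm.go:181 (advertise address, pod and service CIDRs, admission plugins, and so on). A minimal, hypothetical sketch of that kind of template-driven generation — not the real bootstrapper template, and covering only a handful of the fields shown — could look like this:

package main

import (
	"os"
	"text/template"
)

// Hypothetical subset of the options that feed the generated kubeadm config.
type kubeadmOpts struct {
	AdvertiseAddress  string
	BindPort          int
	KubernetesVersion string
	PodSubnet         string
	ServiceSubnet     string
}

const tmpl = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.BindPort}}
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: {{.KubernetesVersion}}
controlPlaneEndpoint: control-plane.minikube.internal:{{.BindPort}}
networking:
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceSubnet}}
`

func main() {
	opts := kubeadmOpts{
		AdvertiseAddress:  "192.168.39.54",
		BindPort:          8443,
		KubernetesVersion: "v1.31.1",
		PodSubnet:         "10.244.0.0/16",
		ServiceSubnet:     "10.96.0.0/12",
	}
	t := template.Must(template.New("kubeadm").Parse(tmpl))
	if err := t.Execute(os.Stdout, opts); err != nil {
		panic(err)
	}
}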
	
	I0914 17:16:50.744915   33797 kube-vip.go:115] generating kube-vip config ...
	I0914 17:16:50.744955   33797 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0914 17:16:50.756057   33797 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0914 17:16:50.756221   33797 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
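With that static pod in place, kube-vip owns the HA VIP 192.168.39.254 and load-balances port 8443 across the control planes, which is why the kubeadm config points controlPlaneEndpoint at control-plane.minikube.internal:8443. As a quick manual probe of the VIP (a diagnostic suggestion, not something the test harness runs), any HTTP response from the endpoint shows kube-vip is answering:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// Probe the kube-vip fronted API server. /healthz may answer 401/403 without
// credentials depending on cluster settings, so treat any HTTP response at
// all as "the VIP is reachable".
func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get("https://192.168.39.254:8443/healthz")
	if err != nil {
		fmt.Println("VIP not reachable:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("VIP answered with status:", resp.Status)
}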
	I0914 17:16:50.756282   33797 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0914 17:16:50.766084   33797 binaries.go:44] Found k8s binaries, skipping transfer
	I0914 17:16:50.766256   33797 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0914 17:16:50.775698   33797 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (308 bytes)
	I0914 17:16:50.793878   33797 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0914 17:16:50.810872   33797 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2150 bytes)
	I0914 17:16:50.827707   33797 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0914 17:16:50.843961   33797 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0914 17:16:50.848904   33797 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 17:16:51.000864   33797 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0914 17:16:51.015424   33797 certs.go:68] Setting up /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/ha-929592 for IP: 192.168.39.54
	I0914 17:16:51.015452   33797 certs.go:194] generating shared ca certs ...
	I0914 17:16:51.015468   33797 certs.go:226] acquiring lock for ca certs: {Name:mkb663a3180967f5f94f0c355b2cd55067394331 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 17:16:51.015647   33797 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19643-8806/.minikube/ca.key
	I0914 17:16:51.015694   33797 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19643-8806/.minikube/proxy-client-ca.key
	I0914 17:16:51.015705   33797 certs.go:256] generating profile certs ...
	I0914 17:16:51.015824   33797 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/ha-929592/client.key
	I0914 17:16:51.015857   33797 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/ha-929592/apiserver.key.ffe1cdf3
	I0914 17:16:51.015871   33797 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/ha-929592/apiserver.crt.ffe1cdf3 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.54 192.168.39.148 192.168.39.39 192.168.39.254]
	I0914 17:16:51.226810   33797 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/ha-929592/apiserver.crt.ffe1cdf3 ...
	I0914 17:16:51.226840   33797 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/ha-929592/apiserver.crt.ffe1cdf3: {Name:mk49551671edffb505318317557bb2d26c619ca5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 17:16:51.227032   33797 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/ha-929592/apiserver.key.ffe1cdf3 ...
	I0914 17:16:51.227047   33797 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/ha-929592/apiserver.key.ffe1cdf3: {Name:mkcec55d318b985531b1667f704cc2b12d9e93c9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 17:16:51.227144   33797 certs.go:381] copying /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/ha-929592/apiserver.crt.ffe1cdf3 -> /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/ha-929592/apiserver.crt
	I0914 17:16:51.227292   33797 certs.go:385] copying /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/ha-929592/apiserver.key.ffe1cdf3 -> /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/ha-929592/apiserver.key
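The apiserver certificate issued here carries IP SANs for the service VIP 10.96.0.1, loopback, all three control-plane node IPs and the kube-vip address 192.168.39.254, so the same cert is valid no matter which endpoint a client dials. A self-contained sketch of issuing a cert with that SAN list (standard library only; minikube's crypto.go loads the existing minikubeCA from disk rather than creating one, and most error handling is elided here for brevity):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	// Stand-in CA; the real flow reuses the existing minikubeCA key pair.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(3 * 365 * 24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server cert whose IP SANs mirror the list logged above.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
			net.ParseIP("192.168.39.54"), net.ParseIP("192.168.39.148"),
			net.ParseIP("192.168.39.39"), net.ParseIP("192.168.39.254"),
		},
	}
	der, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, srvKey)
	if err != nil {
		panic(err)
	}
	fmt.Printf("issued apiserver cert, %d bytes DER, SANs: %v\n", len(der), srvTmpl.IPAddresses)
}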
	I0914 17:16:51.227439   33797 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/ha-929592/proxy-client.key
	I0914 17:16:51.227454   33797 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19643-8806/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0914 17:16:51.227466   33797 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19643-8806/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0914 17:16:51.227484   33797 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19643-8806/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0914 17:16:51.227497   33797 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19643-8806/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0914 17:16:51.227508   33797 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/ha-929592/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0914 17:16:51.227520   33797 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/ha-929592/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0914 17:16:51.227535   33797 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/ha-929592/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0914 17:16:51.227547   33797 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/ha-929592/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0914 17:16:51.227587   33797 certs.go:484] found cert: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/16016.pem (1338 bytes)
	W0914 17:16:51.227617   33797 certs.go:480] ignoring /home/jenkins/minikube-integration/19643-8806/.minikube/certs/16016_empty.pem, impossibly tiny 0 bytes
	I0914 17:16:51.227627   33797 certs.go:484] found cert: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca-key.pem (1679 bytes)
	I0914 17:16:51.227649   33797 certs.go:484] found cert: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca.pem (1082 bytes)
	I0914 17:16:51.227678   33797 certs.go:484] found cert: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/cert.pem (1123 bytes)
	I0914 17:16:51.227701   33797 certs.go:484] found cert: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/key.pem (1675 bytes)
	I0914 17:16:51.227736   33797 certs.go:484] found cert: /home/jenkins/minikube-integration/19643-8806/.minikube/files/etc/ssl/certs/160162.pem (1708 bytes)
	I0914 17:16:51.227768   33797 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19643-8806/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0914 17:16:51.227779   33797 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/16016.pem -> /usr/share/ca-certificates/16016.pem
	I0914 17:16:51.227788   33797 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19643-8806/.minikube/files/etc/ssl/certs/160162.pem -> /usr/share/ca-certificates/160162.pem
	I0914 17:16:51.228324   33797 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0914 17:16:51.254005   33797 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0914 17:16:51.278983   33797 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0914 17:16:51.302603   33797 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0914 17:16:51.326139   33797 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/ha-929592/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0914 17:16:51.349921   33797 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/ha-929592/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0914 17:16:51.374639   33797 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/ha-929592/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0914 17:16:51.399584   33797 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/ha-929592/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0914 17:16:51.424156   33797 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0914 17:16:51.449012   33797 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/certs/16016.pem --> /usr/share/ca-certificates/16016.pem (1338 bytes)
	I0914 17:16:51.474172   33797 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/files/etc/ssl/certs/160162.pem --> /usr/share/ca-certificates/160162.pem (1708 bytes)
	I0914 17:16:51.498590   33797 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0914 17:16:51.514922   33797 ssh_runner.go:195] Run: openssl version
	I0914 17:16:51.520852   33797 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16016.pem && ln -fs /usr/share/ca-certificates/16016.pem /etc/ssl/certs/16016.pem"
	I0914 17:16:51.532308   33797 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16016.pem
	I0914 17:16:51.536588   33797 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 14 17:01 /usr/share/ca-certificates/16016.pem
	I0914 17:16:51.536643   33797 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16016.pem
	I0914 17:16:51.542469   33797 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16016.pem /etc/ssl/certs/51391683.0"
	I0914 17:16:51.552925   33797 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/160162.pem && ln -fs /usr/share/ca-certificates/160162.pem /etc/ssl/certs/160162.pem"
	I0914 17:16:51.563415   33797 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/160162.pem
	I0914 17:16:51.567698   33797 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 14 17:01 /usr/share/ca-certificates/160162.pem
	I0914 17:16:51.567749   33797 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/160162.pem
	I0914 17:16:51.573285   33797 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/160162.pem /etc/ssl/certs/3ec20f2e.0"
	I0914 17:16:51.582388   33797 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0914 17:16:51.592689   33797 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0914 17:16:51.596946   33797 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 14 16:44 /usr/share/ca-certificates/minikubeCA.pem
	I0914 17:16:51.597008   33797 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0914 17:16:51.602219   33797 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0914 17:16:51.611369   33797 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0914 17:16:51.615534   33797 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0914 17:16:51.621039   33797 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0914 17:16:51.626208   33797 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0914 17:16:51.631302   33797 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0914 17:16:51.636949   33797 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0914 17:16:51.641942   33797 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
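Each of the `openssl x509 -noout -checkend 86400` runs above simply asks whether the certificate will still be valid 24 hours from now; a non-zero exit would force regeneration. The equivalent check in Go — a stand-alone illustration, not the code minikube executes — is just a comparison against NotAfter:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// Equivalent of `openssl x509 -noout -checkend 86400 -in <file>`:
// exit non-zero if the certificate expires within the next 24 hours.
func main() {
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		fmt.Fprintln(os.Stderr, "no PEM data found")
		os.Exit(1)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
		fmt.Println("certificate will expire within 24h")
		os.Exit(1)
	}
	fmt.Println("certificate valid for at least another 24h, NotAfter:", cert.NotAfter)
}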
	I0914 17:16:51.647577   33797 kubeadm.go:392] StartCluster: {Name:ha-929592 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19643/minikube-v1.34.0-1726281733-19643-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726281268-19643@sha256:cce8be0c1ac4e3d852132008ef1cc1dcf5b79f708d025db83f146ae65db32e8e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 Clust
erName:ha-929592 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.54 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.148 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.39 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.51 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod
:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID
:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0914 17:16:51.647687   33797 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0914 17:16:51.647740   33797 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0914 17:16:51.684650   33797 cri.go:89] found id: "c502bdacde6a009b8e37ac816ac0d18a8c294173ed43571a08f6a3fb9872a029"
	I0914 17:16:51.684670   33797 cri.go:89] found id: "633f9a7a14ee23e2b2563bcf87fe984a400cda9e672a9b4139a99b35379778dc"
	I0914 17:16:51.684674   33797 cri.go:89] found id: "2043f3cb542985d356c0d6c975b5e4a1045314ef85fa2f34f938e81e0b7bcc5a"
	I0914 17:16:51.684677   33797 cri.go:89] found id: "bf42a0f089bcb4101b354ccb3043ff584fbe5acbcec991c7c6f00fbc21db5dd7"
	I0914 17:16:51.684680   33797 cri.go:89] found id: "9eb824a3acd106dd672eb9b88186825642d229cad673bfd46e35ff45d82c0e17"
	I0914 17:16:51.684683   33797 cri.go:89] found id: "06ffbf30c8c13ffdba9a832583a4629b5a821662986c777e9cca57100ed3fd9f"
	I0914 17:16:51.684685   33797 cri.go:89] found id: "fd34a54170b251f984dd373097036615e33493d9c7c91970296beedc2d507931"
	I0914 17:16:51.684687   33797 cri.go:89] found id: "c1571fb1d1d1f39fe29d5063d77b72dcce6a459a4bf16a25bbc24fd1a9c53849"
	I0914 17:16:51.684689   33797 cri.go:89] found id: "7b409821346de2b42e8ebbff82396df9fc0d7ac3db8b76d586c5c80922f9c0b8"
	I0914 17:16:51.684695   33797 cri.go:89] found id: "ac425bd016fb1dd04caf8514eda430f13287990b5a6398599221210bb254390a"
	I0914 17:16:51.684697   33797 cri.go:89] found id: "972f797d7355465fa2cd15940a0b34684d83e7ad453b631aa9e06639881c09fb"
	I0914 17:16:51.684703   33797 cri.go:89] found id: "ab1e607cdf424b9d9eb961bb3bd75bc16cb8b8f30e2c1fb579f52deb60857d00"
	I0914 17:16:51.684708   33797 cri.go:89] found id: "363e6bc276fd6311c477eb4e17cd12efc2e0822fb67680e1cef01b26c295126c"
	I0914 17:16:51.684711   33797 cri.go:89] found id: ""
	I0914 17:16:51.684747   33797 ssh_runner.go:195] Run: sudo runc list -f json
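StartCluster begins by inventorying every kube-system container, running or exited, via the `crictl ps -a --quiet --label ...` call above; the "found id" lines are that inventory. A rough stand-alone equivalent of the listing step (assuming crictl is on PATH and the CRI-O socket is up) is:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// List the IDs of all containers whose pods live in kube-system,
// mirroring the crictl invocation in the log above.
func main() {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		fmt.Println("crictl failed:", err)
		return
	}
	for _, id := range strings.Fields(string(out)) {
		fmt.Println("found id:", id)
	}
}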
	
	
	==> CRI-O <==
	Sep 14 17:18:56 ha-929592 crio[3714]: time="2024-09-14 17:18:56.550459846Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=8ec23167-f7da-4650-ba9e-2285e56f0b7a name=/runtime.v1.RuntimeService/Version
	Sep 14 17:18:56 ha-929592 crio[3714]: time="2024-09-14 17:18:56.551780955Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a216861e-6fca-4b77-b374-3113ec52e4e7 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 14 17:18:56 ha-929592 crio[3714]: time="2024-09-14 17:18:56.552222323Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726334336552196229,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a216861e-6fca-4b77-b374-3113ec52e4e7 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 14 17:18:56 ha-929592 crio[3714]: time="2024-09-14 17:18:56.552847456Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0526d245-60b4-4097-9e9b-02b397d50967 name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 17:18:56 ha-929592 crio[3714]: time="2024-09-14 17:18:56.552921804Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0526d245-60b4-4097-9e9b-02b397d50967 name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 17:18:56 ha-929592 crio[3714]: time="2024-09-14 17:18:56.554001809Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1e3dd648bccf3344a86805e6a12abe9113ff924a52d51dba22a7dd0a72c0df48,PodSandboxId:b273a10472206d3e61466d817dbc2082e74b8a07e53c8e46d8e08c47165c44a3,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726334251735423940,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-49mwg,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9f3ed79c-66ac-429d-bbd6-4956eab3be98,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /
dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:876c20b82135dc9e3c36bbd198419de043cb1ef47c203583e228ea1289377803,PodSandboxId:6e984099257ad2ed58634d13a332c6b8e51aa221520d7f49c092a55ad2a59f3b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726334250429723078,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-929592,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 21e24f7df5d7099b0f0b2dba49446d51,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:451a416ccbf4edb1f2ee529934698e4d7d06257670bfa83420a9afba6589ffda,PodSandboxId:6d35c1613fd15e5d6eeebc06360e4dc3c0a083b150c4583027993addfe1753f9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726334249545742486,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-929592,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a3520d0a4b75398d9e9e72bfdcfc4f4f,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination
-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:59ca6347386b44e1dec1fe951406a82aab23a83a85a9728f4d7a15c9fb99c528,PodSandboxId:bc3029e83ceac1792089510c06c95489b820d85fa0fa6902f88b5a61b0fe4dbd,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1726334232225179139,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-929592,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 517e581b944b0c79eed2314533ce0ca8,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminatio
nMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ab7a7b73a44c28a6353fc7334491855caa043bee9b4c0d4d190f7e0edc2cf7d1,PodSandboxId:b99429327f50f00b60175ace0289cb0d74aa0deada649b05989a232a2941b070,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726334219339779168,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6zqmd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b7beddc8-ce6a-44ed-b3e8-423baf620bbb,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernete
s.pod.terminationGracePeriod: 30,},},&Container{Id:df6e98168d3f80c6fa7f00ec963667bbd59e85b054d057fc873161c0b811c690,PodSandboxId:0cc8e7f5a7b7afb6fba3f65631241eb133a3fd4efa559f490732ecfbfad440e4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:5,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726334219127722524,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f486484-9641-4e23-8bc9-4dcae57b621a,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 5,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.termina
tionGracePeriod: 30,},},&Container{Id:f4b6294601181df2221b3b2a9952e0864fbef7e69634d02dace316759c43e431,PodSandboxId:928ea8de33905030650eec466f93285921f446dda71bb2c17462bfcc260ac207,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1726334218203547585,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-fw757,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51a38d95-fd50-4c05-a75d-a3dfeae127bd,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container
{Id:429725720c04d774b8dd66b69992ef334c86360f500219e860192266a0d355bd,PodSandboxId:4564189eeba3b81e291de82a9ba45090a53935ef617393b174ecc86513ac4f1f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726334218089697202,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-dpdz4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2a751c8d-890c-402e-846f-8f61e3fd1965,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.rest
artCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b5ab23c2f12798f997dfd6f5b6ff3d84296f2731909b2b10adf2092755601fdd,PodSandboxId:238a7746658f1c6d05de966e4253d7cb775bb460fb2a75a60f61f847ce29cad0,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726334218088047377,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-66txm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: abf3ed52-ab5a-4415-a8a9-78e567d60348,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contain
erPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d7a8daefb0eaadaa969614a32c02514d6e1cc779d7c3c9e31540c61053fa965,PodSandboxId:6e984099257ad2ed58634d13a332c6b8e51aa221520d7f49c092a55ad2a59f3b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726334217989304764,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-929
592,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 21e24f7df5d7099b0f0b2dba49446d51,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b941bc429a5fde67708b36dc7f2b22c492e47a8748c222c948b2d663c89d4559,PodSandboxId:0edfbfa01ecb59b2373e0bba14228824cbb764ac1eeb467afc47561af1907ec3,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726334218010369299,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-929592,io.kubernetes.pod.namespace: kube-system,io.kub
ernetes.pod.uid: d7c84dd075d4f7e4fd5febc189940f4e,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e96c2e442fde740472da39a62a3d82c91995eed86608662cb709d81b508a09e,PodSandboxId:df2a40e486b685a3c47cb8eb4aebce2f03d8bea33f9b5219903618fa40c5866b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726334217781276748,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-929592,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9506
5ad67a4f1610671e72fcaed57954,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a61a19ee550f00c2e9ec2f6c3c2858f016509e649725e9030ffe238270c99ca7,PodSandboxId:6d35c1613fd15e5d6eeebc06360e4dc3c0a083b150c4583027993addfe1753f9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726334217838311043,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-929592,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a3520d0a4b75398d9e9e72bfdc
fc4f4f,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:34c6ad67896f3d9a7c0b0b60a5733acdf8c55ff8eae5d674e973f29c8f92f81a,PodSandboxId:e605a9e0100e5d708dfcb136f97211271e8f47299b54252e343df013be857601,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1726333690210167885,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-49mwg,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9f3ed79c-66ac-429d-bbd6-4956eab3
be98,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9eb824a3acd106dd672eb9b88186825642d229cad673bfd46e35ff45d82c0e17,PodSandboxId:69d86428b72f0c107b00bf1918d30f35a1c7612dbe8c1cc199ffcf24b11fb6b5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726333546846726317,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-dpdz4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2a751c8d-890c-402e-846f-8f61e3fd1965,},Annotations:map[string]st
ring{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:06ffbf30c8c13ffdba9a832583a4629b5a821662986c777e9cca57100ed3fd9f,PodSandboxId:9b615a9a43e5968869cd39a175e3298d58f96b3d25f4c6e821a1c05d0e960e2f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726333546840132895,Labels:map[string]string{io.kubernetes.contain
er.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-66txm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: abf3ed52-ab5a-4415-a8a9-78e567d60348,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd34a54170b251f984dd373097036615e33493d9c7c91970296beedc2d507931,PodSandboxId:fc9e9c48c04bef982ef404b9c77cb73c2aecd644824a80cd95ac86a749ae2de2,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecified
Image:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1726333535088419023,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-fw757,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51a38d95-fd50-4c05-a75d-a3dfeae127bd,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c1571fb1d1d1f39fe29d5063d77b72dcce6a459a4bf16a25bbc24fd1a9c53849,PodSandboxId:de29821ef5ba3c9a10a8426ece9a602feabbc039c6332faaf6599d1ac2fd5ecd,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60
c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726333534777571877,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6zqmd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b7beddc8-ce6a-44ed-b3e8-423baf620bbb,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:972f797d7355465fa2cd15940a0b34684d83e7ad453b631aa9e06639881c09fb,PodSandboxId:dbb138fdd1472faa7f5001e471dfabc3b867c54c89ceeca12f2aa61eff79f487,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5
b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726333523910309461,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-929592,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 95065ad67a4f1610671e72fcaed57954,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac425bd016fb1dd04caf8514eda430f13287990b5a6398599221210bb254390a,PodSandboxId:282b521b3dea81924246e09b66f30e12327d4a1bccd61d21819ea1716b126a09,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4
,State:CONTAINER_EXITED,CreatedAt:1726333523925855261,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-929592,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d7c84dd075d4f7e4fd5febc189940f4e,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=0526d245-60b4-4097-9e9b-02b397d50967 name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 17:18:56 ha-929592 crio[3714]: time="2024-09-14 17:18:56.600513253Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=8d25cb48-c1f4-4aba-8160-e4bf3bd0fa58 name=/runtime.v1.RuntimeService/Version
	Sep 14 17:18:56 ha-929592 crio[3714]: time="2024-09-14 17:18:56.600630195Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=8d25cb48-c1f4-4aba-8160-e4bf3bd0fa58 name=/runtime.v1.RuntimeService/Version
	Sep 14 17:18:56 ha-929592 crio[3714]: time="2024-09-14 17:18:56.602195888Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c97e8157-7ba0-43cc-a389-4d3e4cb6d856 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 14 17:18:56 ha-929592 crio[3714]: time="2024-09-14 17:18:56.602653694Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726334336602624247,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c97e8157-7ba0-43cc-a389-4d3e4cb6d856 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 14 17:18:56 ha-929592 crio[3714]: time="2024-09-14 17:18:56.603238769Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9adde002-41e8-418f-b9b5-aa3209390fd7 name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 17:18:56 ha-929592 crio[3714]: time="2024-09-14 17:18:56.603336793Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9adde002-41e8-418f-b9b5-aa3209390fd7 name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 17:18:56 ha-929592 crio[3714]: time="2024-09-14 17:18:56.603861892Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1e3dd648bccf3344a86805e6a12abe9113ff924a52d51dba22a7dd0a72c0df48,PodSandboxId:b273a10472206d3e61466d817dbc2082e74b8a07e53c8e46d8e08c47165c44a3,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726334251735423940,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-49mwg,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9f3ed79c-66ac-429d-bbd6-4956eab3be98,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /
dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:876c20b82135dc9e3c36bbd198419de043cb1ef47c203583e228ea1289377803,PodSandboxId:6e984099257ad2ed58634d13a332c6b8e51aa221520d7f49c092a55ad2a59f3b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726334250429723078,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-929592,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 21e24f7df5d7099b0f0b2dba49446d51,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:451a416ccbf4edb1f2ee529934698e4d7d06257670bfa83420a9afba6589ffda,PodSandboxId:6d35c1613fd15e5d6eeebc06360e4dc3c0a083b150c4583027993addfe1753f9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726334249545742486,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-929592,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a3520d0a4b75398d9e9e72bfdcfc4f4f,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination
-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:59ca6347386b44e1dec1fe951406a82aab23a83a85a9728f4d7a15c9fb99c528,PodSandboxId:bc3029e83ceac1792089510c06c95489b820d85fa0fa6902f88b5a61b0fe4dbd,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1726334232225179139,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-929592,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 517e581b944b0c79eed2314533ce0ca8,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminatio
nMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ab7a7b73a44c28a6353fc7334491855caa043bee9b4c0d4d190f7e0edc2cf7d1,PodSandboxId:b99429327f50f00b60175ace0289cb0d74aa0deada649b05989a232a2941b070,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726334219339779168,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6zqmd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b7beddc8-ce6a-44ed-b3e8-423baf620bbb,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernete
s.pod.terminationGracePeriod: 30,},},&Container{Id:df6e98168d3f80c6fa7f00ec963667bbd59e85b054d057fc873161c0b811c690,PodSandboxId:0cc8e7f5a7b7afb6fba3f65631241eb133a3fd4efa559f490732ecfbfad440e4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:5,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726334219127722524,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f486484-9641-4e23-8bc9-4dcae57b621a,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 5,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.termina
tionGracePeriod: 30,},},&Container{Id:f4b6294601181df2221b3b2a9952e0864fbef7e69634d02dace316759c43e431,PodSandboxId:928ea8de33905030650eec466f93285921f446dda71bb2c17462bfcc260ac207,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1726334218203547585,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-fw757,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51a38d95-fd50-4c05-a75d-a3dfeae127bd,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container
{Id:429725720c04d774b8dd66b69992ef334c86360f500219e860192266a0d355bd,PodSandboxId:4564189eeba3b81e291de82a9ba45090a53935ef617393b174ecc86513ac4f1f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726334218089697202,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-dpdz4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2a751c8d-890c-402e-846f-8f61e3fd1965,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.rest
artCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b5ab23c2f12798f997dfd6f5b6ff3d84296f2731909b2b10adf2092755601fdd,PodSandboxId:238a7746658f1c6d05de966e4253d7cb775bb460fb2a75a60f61f847ce29cad0,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726334218088047377,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-66txm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: abf3ed52-ab5a-4415-a8a9-78e567d60348,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contain
erPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d7a8daefb0eaadaa969614a32c02514d6e1cc779d7c3c9e31540c61053fa965,PodSandboxId:6e984099257ad2ed58634d13a332c6b8e51aa221520d7f49c092a55ad2a59f3b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726334217989304764,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-929
592,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 21e24f7df5d7099b0f0b2dba49446d51,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b941bc429a5fde67708b36dc7f2b22c492e47a8748c222c948b2d663c89d4559,PodSandboxId:0edfbfa01ecb59b2373e0bba14228824cbb764ac1eeb467afc47561af1907ec3,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726334218010369299,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-929592,io.kubernetes.pod.namespace: kube-system,io.kub
ernetes.pod.uid: d7c84dd075d4f7e4fd5febc189940f4e,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e96c2e442fde740472da39a62a3d82c91995eed86608662cb709d81b508a09e,PodSandboxId:df2a40e486b685a3c47cb8eb4aebce2f03d8bea33f9b5219903618fa40c5866b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726334217781276748,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-929592,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9506
5ad67a4f1610671e72fcaed57954,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a61a19ee550f00c2e9ec2f6c3c2858f016509e649725e9030ffe238270c99ca7,PodSandboxId:6d35c1613fd15e5d6eeebc06360e4dc3c0a083b150c4583027993addfe1753f9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726334217838311043,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-929592,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a3520d0a4b75398d9e9e72bfdc
fc4f4f,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:34c6ad67896f3d9a7c0b0b60a5733acdf8c55ff8eae5d674e973f29c8f92f81a,PodSandboxId:e605a9e0100e5d708dfcb136f97211271e8f47299b54252e343df013be857601,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1726333690210167885,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-49mwg,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9f3ed79c-66ac-429d-bbd6-4956eab3
be98,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9eb824a3acd106dd672eb9b88186825642d229cad673bfd46e35ff45d82c0e17,PodSandboxId:69d86428b72f0c107b00bf1918d30f35a1c7612dbe8c1cc199ffcf24b11fb6b5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726333546846726317,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-dpdz4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2a751c8d-890c-402e-846f-8f61e3fd1965,},Annotations:map[string]st
ring{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:06ffbf30c8c13ffdba9a832583a4629b5a821662986c777e9cca57100ed3fd9f,PodSandboxId:9b615a9a43e5968869cd39a175e3298d58f96b3d25f4c6e821a1c05d0e960e2f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726333546840132895,Labels:map[string]string{io.kubernetes.contain
er.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-66txm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: abf3ed52-ab5a-4415-a8a9-78e567d60348,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd34a54170b251f984dd373097036615e33493d9c7c91970296beedc2d507931,PodSandboxId:fc9e9c48c04bef982ef404b9c77cb73c2aecd644824a80cd95ac86a749ae2de2,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecified
Image:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1726333535088419023,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-fw757,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51a38d95-fd50-4c05-a75d-a3dfeae127bd,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c1571fb1d1d1f39fe29d5063d77b72dcce6a459a4bf16a25bbc24fd1a9c53849,PodSandboxId:de29821ef5ba3c9a10a8426ece9a602feabbc039c6332faaf6599d1ac2fd5ecd,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60
c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726333534777571877,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6zqmd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b7beddc8-ce6a-44ed-b3e8-423baf620bbb,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:972f797d7355465fa2cd15940a0b34684d83e7ad453b631aa9e06639881c09fb,PodSandboxId:dbb138fdd1472faa7f5001e471dfabc3b867c54c89ceeca12f2aa61eff79f487,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5
b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726333523910309461,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-929592,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 95065ad67a4f1610671e72fcaed57954,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac425bd016fb1dd04caf8514eda430f13287990b5a6398599221210bb254390a,PodSandboxId:282b521b3dea81924246e09b66f30e12327d4a1bccd61d21819ea1716b126a09,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4
,State:CONTAINER_EXITED,CreatedAt:1726333523925855261,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-929592,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d7c84dd075d4f7e4fd5febc189940f4e,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9adde002-41e8-418f-b9b5-aa3209390fd7 name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 17:18:56 ha-929592 crio[3714]: time="2024-09-14 17:18:56.647908248Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=5e8ac38f-352b-4587-8df7-e859af10baae name=/runtime.v1.RuntimeService/Version
	Sep 14 17:18:56 ha-929592 crio[3714]: time="2024-09-14 17:18:56.647997613Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=5e8ac38f-352b-4587-8df7-e859af10baae name=/runtime.v1.RuntimeService/Version
	Sep 14 17:18:56 ha-929592 crio[3714]: time="2024-09-14 17:18:56.649041693Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ceb95291-05d8-420f-b579-c826df00aaa6 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 14 17:18:56 ha-929592 crio[3714]: time="2024-09-14 17:18:56.649462496Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726334336649439697,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ceb95291-05d8-420f-b579-c826df00aaa6 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 14 17:18:56 ha-929592 crio[3714]: time="2024-09-14 17:18:56.650068739Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9b874251-4794-43d2-936f-1305c2bab039 name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 17:18:56 ha-929592 crio[3714]: time="2024-09-14 17:18:56.650144028Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9b874251-4794-43d2-936f-1305c2bab039 name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 17:18:56 ha-929592 crio[3714]: time="2024-09-14 17:18:56.650516231Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1e3dd648bccf3344a86805e6a12abe9113ff924a52d51dba22a7dd0a72c0df48,PodSandboxId:b273a10472206d3e61466d817dbc2082e74b8a07e53c8e46d8e08c47165c44a3,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726334251735423940,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-49mwg,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9f3ed79c-66ac-429d-bbd6-4956eab3be98,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /
dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:876c20b82135dc9e3c36bbd198419de043cb1ef47c203583e228ea1289377803,PodSandboxId:6e984099257ad2ed58634d13a332c6b8e51aa221520d7f49c092a55ad2a59f3b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726334250429723078,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-929592,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 21e24f7df5d7099b0f0b2dba49446d51,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:451a416ccbf4edb1f2ee529934698e4d7d06257670bfa83420a9afba6589ffda,PodSandboxId:6d35c1613fd15e5d6eeebc06360e4dc3c0a083b150c4583027993addfe1753f9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726334249545742486,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-929592,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a3520d0a4b75398d9e9e72bfdcfc4f4f,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination
-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:59ca6347386b44e1dec1fe951406a82aab23a83a85a9728f4d7a15c9fb99c528,PodSandboxId:bc3029e83ceac1792089510c06c95489b820d85fa0fa6902f88b5a61b0fe4dbd,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1726334232225179139,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-929592,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 517e581b944b0c79eed2314533ce0ca8,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminatio
nMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ab7a7b73a44c28a6353fc7334491855caa043bee9b4c0d4d190f7e0edc2cf7d1,PodSandboxId:b99429327f50f00b60175ace0289cb0d74aa0deada649b05989a232a2941b070,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726334219339779168,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6zqmd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b7beddc8-ce6a-44ed-b3e8-423baf620bbb,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernete
s.pod.terminationGracePeriod: 30,},},&Container{Id:df6e98168d3f80c6fa7f00ec963667bbd59e85b054d057fc873161c0b811c690,PodSandboxId:0cc8e7f5a7b7afb6fba3f65631241eb133a3fd4efa559f490732ecfbfad440e4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:5,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726334219127722524,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f486484-9641-4e23-8bc9-4dcae57b621a,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 5,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.termina
tionGracePeriod: 30,},},&Container{Id:f4b6294601181df2221b3b2a9952e0864fbef7e69634d02dace316759c43e431,PodSandboxId:928ea8de33905030650eec466f93285921f446dda71bb2c17462bfcc260ac207,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1726334218203547585,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-fw757,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51a38d95-fd50-4c05-a75d-a3dfeae127bd,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container
{Id:429725720c04d774b8dd66b69992ef334c86360f500219e860192266a0d355bd,PodSandboxId:4564189eeba3b81e291de82a9ba45090a53935ef617393b174ecc86513ac4f1f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726334218089697202,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-dpdz4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2a751c8d-890c-402e-846f-8f61e3fd1965,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.rest
artCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b5ab23c2f12798f997dfd6f5b6ff3d84296f2731909b2b10adf2092755601fdd,PodSandboxId:238a7746658f1c6d05de966e4253d7cb775bb460fb2a75a60f61f847ce29cad0,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726334218088047377,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-66txm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: abf3ed52-ab5a-4415-a8a9-78e567d60348,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contain
erPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d7a8daefb0eaadaa969614a32c02514d6e1cc779d7c3c9e31540c61053fa965,PodSandboxId:6e984099257ad2ed58634d13a332c6b8e51aa221520d7f49c092a55ad2a59f3b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726334217989304764,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-929
592,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 21e24f7df5d7099b0f0b2dba49446d51,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b941bc429a5fde67708b36dc7f2b22c492e47a8748c222c948b2d663c89d4559,PodSandboxId:0edfbfa01ecb59b2373e0bba14228824cbb764ac1eeb467afc47561af1907ec3,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726334218010369299,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-929592,io.kubernetes.pod.namespace: kube-system,io.kub
ernetes.pod.uid: d7c84dd075d4f7e4fd5febc189940f4e,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e96c2e442fde740472da39a62a3d82c91995eed86608662cb709d81b508a09e,PodSandboxId:df2a40e486b685a3c47cb8eb4aebce2f03d8bea33f9b5219903618fa40c5866b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726334217781276748,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-929592,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9506
5ad67a4f1610671e72fcaed57954,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a61a19ee550f00c2e9ec2f6c3c2858f016509e649725e9030ffe238270c99ca7,PodSandboxId:6d35c1613fd15e5d6eeebc06360e4dc3c0a083b150c4583027993addfe1753f9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726334217838311043,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-929592,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a3520d0a4b75398d9e9e72bfdc
fc4f4f,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:34c6ad67896f3d9a7c0b0b60a5733acdf8c55ff8eae5d674e973f29c8f92f81a,PodSandboxId:e605a9e0100e5d708dfcb136f97211271e8f47299b54252e343df013be857601,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1726333690210167885,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-49mwg,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9f3ed79c-66ac-429d-bbd6-4956eab3
be98,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9eb824a3acd106dd672eb9b88186825642d229cad673bfd46e35ff45d82c0e17,PodSandboxId:69d86428b72f0c107b00bf1918d30f35a1c7612dbe8c1cc199ffcf24b11fb6b5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726333546846726317,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-dpdz4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2a751c8d-890c-402e-846f-8f61e3fd1965,},Annotations:map[string]st
ring{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:06ffbf30c8c13ffdba9a832583a4629b5a821662986c777e9cca57100ed3fd9f,PodSandboxId:9b615a9a43e5968869cd39a175e3298d58f96b3d25f4c6e821a1c05d0e960e2f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726333546840132895,Labels:map[string]string{io.kubernetes.contain
er.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-66txm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: abf3ed52-ab5a-4415-a8a9-78e567d60348,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd34a54170b251f984dd373097036615e33493d9c7c91970296beedc2d507931,PodSandboxId:fc9e9c48c04bef982ef404b9c77cb73c2aecd644824a80cd95ac86a749ae2de2,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecified
Image:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1726333535088419023,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-fw757,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51a38d95-fd50-4c05-a75d-a3dfeae127bd,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c1571fb1d1d1f39fe29d5063d77b72dcce6a459a4bf16a25bbc24fd1a9c53849,PodSandboxId:de29821ef5ba3c9a10a8426ece9a602feabbc039c6332faaf6599d1ac2fd5ecd,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60
c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726333534777571877,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6zqmd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b7beddc8-ce6a-44ed-b3e8-423baf620bbb,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:972f797d7355465fa2cd15940a0b34684d83e7ad453b631aa9e06639881c09fb,PodSandboxId:dbb138fdd1472faa7f5001e471dfabc3b867c54c89ceeca12f2aa61eff79f487,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5
b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726333523910309461,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-929592,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 95065ad67a4f1610671e72fcaed57954,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac425bd016fb1dd04caf8514eda430f13287990b5a6398599221210bb254390a,PodSandboxId:282b521b3dea81924246e09b66f30e12327d4a1bccd61d21819ea1716b126a09,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4
,State:CONTAINER_EXITED,CreatedAt:1726333523925855261,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-929592,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d7c84dd075d4f7e4fd5febc189940f4e,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9b874251-4794-43d2-936f-1305c2bab039 name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 17:18:56 ha-929592 crio[3714]: time="2024-09-14 17:18:56.690996209Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=eb21d46e-5fb4-41e4-9830-5960f9f484fa name=/runtime.v1.RuntimeService/ListPodSandbox
	Sep 14 17:18:56 ha-929592 crio[3714]: time="2024-09-14 17:18:56.691381887Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:b273a10472206d3e61466d817dbc2082e74b8a07e53c8e46d8e08c47165c44a3,Metadata:&PodSandboxMetadata{Name:busybox-7dff88458-49mwg,Uid:9f3ed79c-66ac-429d-bbd6-4956eab3be98,Namespace:default,Attempt:1,},State:SANDBOX_READY,CreatedAt:1726334251209478479,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-7dff88458-49mwg,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9f3ed79c-66ac-429d-bbd6-4956eab3be98,pod-template-hash: 7dff88458,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-14T17:08:06.494431652Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:bc3029e83ceac1792089510c06c95489b820d85fa0fa6902f88b5a61b0fe4dbd,Metadata:&PodSandboxMetadata{Name:kube-vip-ha-929592,Uid:517e581b944b0c79eed2314533ce0ca8,Namespace:kube-system,Attempt:0,},State:SANDBOX_RE
ADY,CreatedAt:1726334232106323877,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-vip-ha-929592,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 517e581b944b0c79eed2314533ce0ca8,},Annotations:map[string]string{kubernetes.io/config.hash: 517e581b944b0c79eed2314533ce0ca8,kubernetes.io/config.seen: 2024-09-14T17:16:50.808417264Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:238a7746658f1c6d05de966e4253d7cb775bb460fb2a75a60f61f847ce29cad0,Metadata:&PodSandboxMetadata{Name:coredns-7c65d6cfc9-66txm,Uid:abf3ed52-ab5a-4415-a8a9-78e567d60348,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1726334217569801244,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7c65d6cfc9-66txm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: abf3ed52-ab5a-4415-a8a9-78e567d60348,k8s-app: kube-dns,pod-template-hash: 7c65d6cfc9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09
-14T17:05:46.281501018Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:4564189eeba3b81e291de82a9ba45090a53935ef617393b174ecc86513ac4f1f,Metadata:&PodSandboxMetadata{Name:coredns-7c65d6cfc9-dpdz4,Uid:2a751c8d-890c-402e-846f-8f61e3fd1965,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1726334217566817117,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7c65d6cfc9-dpdz4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2a751c8d-890c-402e-846f-8f61e3fd1965,k8s-app: kube-dns,pod-template-hash: 7c65d6cfc9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-14T17:05:46.296104407Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:6d35c1613fd15e5d6eeebc06360e4dc3c0a083b150c4583027993addfe1753f9,Metadata:&PodSandboxMetadata{Name:kube-apiserver-ha-929592,Uid:a3520d0a4b75398d9e9e72bfdcfc4f4f,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1726334217554332779,Labels:map[string]strin
g{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-ha-929592,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a3520d0a4b75398d9e9e72bfdcfc4f4f,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.54:8443,kubernetes.io/config.hash: a3520d0a4b75398d9e9e72bfdcfc4f4f,kubernetes.io/config.seen: 2024-09-14T17:05:30.022819604Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:0edfbfa01ecb59b2373e0bba14228824cbb764ac1eeb467afc47561af1907ec3,Metadata:&PodSandboxMetadata{Name:etcd-ha-929592,Uid:d7c84dd075d4f7e4fd5febc189940f4e,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1726334217552129689,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-ha-929592,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d7c84dd075d4f7e4fd5febc189940f4e,tier: control-plane,},Annotations:map[string]st
ring{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.54:2379,kubernetes.io/config.hash: d7c84dd075d4f7e4fd5febc189940f4e,kubernetes.io/config.seen: 2024-09-14T17:05:30.022818238Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:6e984099257ad2ed58634d13a332c6b8e51aa221520d7f49c092a55ad2a59f3b,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-ha-929592,Uid:21e24f7df5d7099b0f0b2dba49446d51,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1726334217552050934,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-ha-929592,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 21e24f7df5d7099b0f0b2dba49446d51,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 21e24f7df5d7099b0f0b2dba49446d51,kubernetes.io/config.seen: 2024-09-14T17:05:30.022820563Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:b99429327f50f
00b60175ace0289cb0d74aa0deada649b05989a232a2941b070,Metadata:&PodSandboxMetadata{Name:kube-proxy-6zqmd,Uid:b7beddc8-ce6a-44ed-b3e8-423baf620bbb,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1726334217510305950,Labels:map[string]string{controller-revision-hash: 648b489c5b,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-6zqmd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b7beddc8-ce6a-44ed-b3e8-423baf620bbb,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-14T17:05:34.218501479Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:0cc8e7f5a7b7afb6fba3f65631241eb133a3fd4efa559f490732ecfbfad440e4,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:4f486484-9641-4e23-8bc9-4dcae57b621a,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1726334217506311982,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.
kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f486484-9641-4e23-8bc9-4dcae57b621a,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-09-14T17:05:46.291789176Z,kubernetes.io/config.source: api,},RuntimeHand
ler:,},&PodSandbox{Id:928ea8de33905030650eec466f93285921f446dda71bb2c17462bfcc260ac207,Metadata:&PodSandboxMetadata{Name:kindnet-fw757,Uid:51a38d95-fd50-4c05-a75d-a3dfeae127bd,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1726334217466306171,Labels:map[string]string{app: kindnet,controller-revision-hash: 65cbdfc95f,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kindnet-fw757,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51a38d95-fd50-4c05-a75d-a3dfeae127bd,k8s-app: kindnet,pod-template-generation: 1,tier: node,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-14T17:05:34.227338304Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:df2a40e486b685a3c47cb8eb4aebce2f03d8bea33f9b5219903618fa40c5866b,Metadata:&PodSandboxMetadata{Name:kube-scheduler-ha-929592,Uid:95065ad67a4f1610671e72fcaed57954,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1726334217464185126,Labels:map[string]string{component: kube-scheduler,io.kube
rnetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-ha-929592,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 95065ad67a4f1610671e72fcaed57954,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 95065ad67a4f1610671e72fcaed57954,kubernetes.io/config.seen: 2024-09-14T17:05:30.022812287Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:e605a9e0100e5d708dfcb136f97211271e8f47299b54252e343df013be857601,Metadata:&PodSandboxMetadata{Name:busybox-7dff88458-49mwg,Uid:9f3ed79c-66ac-429d-bbd6-4956eab3be98,Namespace:default,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1726333686815953243,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-7dff88458-49mwg,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9f3ed79c-66ac-429d-bbd6-4956eab3be98,pod-template-hash: 7dff88458,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-14T17:08:06.494431652Z,kubernetes.io/config.source:
api,},RuntimeHandler:,},&PodSandbox{Id:69d86428b72f0c107b00bf1918d30f35a1c7612dbe8c1cc199ffcf24b11fb6b5,Metadata:&PodSandboxMetadata{Name:coredns-7c65d6cfc9-dpdz4,Uid:2a751c8d-890c-402e-846f-8f61e3fd1965,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1726333546607094377,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7c65d6cfc9-dpdz4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2a751c8d-890c-402e-846f-8f61e3fd1965,k8s-app: kube-dns,pod-template-hash: 7c65d6cfc9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-14T17:05:46.296104407Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:9b615a9a43e5968869cd39a175e3298d58f96b3d25f4c6e821a1c05d0e960e2f,Metadata:&PodSandboxMetadata{Name:coredns-7c65d6cfc9-66txm,Uid:abf3ed52-ab5a-4415-a8a9-78e567d60348,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1726333546592389893,Labels:map[string]string{io.kubernetes.container.name: POD,io.kube
rnetes.pod.name: coredns-7c65d6cfc9-66txm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: abf3ed52-ab5a-4415-a8a9-78e567d60348,k8s-app: kube-dns,pod-template-hash: 7c65d6cfc9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-14T17:05:46.281501018Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:fc9e9c48c04bef982ef404b9c77cb73c2aecd644824a80cd95ac86a749ae2de2,Metadata:&PodSandboxMetadata{Name:kindnet-fw757,Uid:51a38d95-fd50-4c05-a75d-a3dfeae127bd,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1726333534546390290,Labels:map[string]string{app: kindnet,controller-revision-hash: 65cbdfc95f,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kindnet-fw757,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51a38d95-fd50-4c05-a75d-a3dfeae127bd,k8s-app: kindnet,pod-template-generation: 1,tier: node,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-14T17:05:34.227338304Z,kubernetes.io/config.source: api,},Runtim
eHandler:,},&PodSandbox{Id:de29821ef5ba3c9a10a8426ece9a602feabbc039c6332faaf6599d1ac2fd5ecd,Metadata:&PodSandboxMetadata{Name:kube-proxy-6zqmd,Uid:b7beddc8-ce6a-44ed-b3e8-423baf620bbb,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1726333534542961746,Labels:map[string]string{controller-revision-hash: 648b489c5b,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-6zqmd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b7beddc8-ce6a-44ed-b3e8-423baf620bbb,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-14T17:05:34.218501479Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:dbb138fdd1472faa7f5001e471dfabc3b867c54c89ceeca12f2aa61eff79f487,Metadata:&PodSandboxMetadata{Name:kube-scheduler-ha-929592,Uid:95065ad67a4f1610671e72fcaed57954,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1726333523687243934,Labels:map[string]string{component: kube-scheduler,io.kubernet
es.container.name: POD,io.kubernetes.pod.name: kube-scheduler-ha-929592,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 95065ad67a4f1610671e72fcaed57954,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 95065ad67a4f1610671e72fcaed57954,kubernetes.io/config.seen: 2024-09-14T17:05:23.168492417Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:282b521b3dea81924246e09b66f30e12327d4a1bccd61d21819ea1716b126a09,Metadata:&PodSandboxMetadata{Name:etcd-ha-929592,Uid:d7c84dd075d4f7e4fd5febc189940f4e,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1726333523633171546,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-ha-929592,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d7c84dd075d4f7e4fd5febc189940f4e,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.54:2379,kubernetes.io/config.hash: d7c84dd075d
4f7e4fd5febc189940f4e,kubernetes.io/config.seen: 2024-09-14T17:05:23.168480593Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=eb21d46e-5fb4-41e4-9830-5960f9f484fa name=/runtime.v1.RuntimeService/ListPodSandbox
	Sep 14 17:18:56 ha-929592 crio[3714]: time="2024-09-14 17:18:56.692261994Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5e34f085-d23c-49f3-8f40-3304c4bb7aa3 name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 17:18:56 ha-929592 crio[3714]: time="2024-09-14 17:18:56.692329745Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5e34f085-d23c-49f3-8f40-3304c4bb7aa3 name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 17:18:56 ha-929592 crio[3714]: time="2024-09-14 17:18:56.692892554Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1e3dd648bccf3344a86805e6a12abe9113ff924a52d51dba22a7dd0a72c0df48,PodSandboxId:b273a10472206d3e61466d817dbc2082e74b8a07e53c8e46d8e08c47165c44a3,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726334251735423940,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-49mwg,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9f3ed79c-66ac-429d-bbd6-4956eab3be98,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /
dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:876c20b82135dc9e3c36bbd198419de043cb1ef47c203583e228ea1289377803,PodSandboxId:6e984099257ad2ed58634d13a332c6b8e51aa221520d7f49c092a55ad2a59f3b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726334250429723078,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-929592,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 21e24f7df5d7099b0f0b2dba49446d51,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:451a416ccbf4edb1f2ee529934698e4d7d06257670bfa83420a9afba6589ffda,PodSandboxId:6d35c1613fd15e5d6eeebc06360e4dc3c0a083b150c4583027993addfe1753f9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726334249545742486,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-929592,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a3520d0a4b75398d9e9e72bfdcfc4f4f,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination
-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:59ca6347386b44e1dec1fe951406a82aab23a83a85a9728f4d7a15c9fb99c528,PodSandboxId:bc3029e83ceac1792089510c06c95489b820d85fa0fa6902f88b5a61b0fe4dbd,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1726334232225179139,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-929592,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 517e581b944b0c79eed2314533ce0ca8,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminatio
nMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ab7a7b73a44c28a6353fc7334491855caa043bee9b4c0d4d190f7e0edc2cf7d1,PodSandboxId:b99429327f50f00b60175ace0289cb0d74aa0deada649b05989a232a2941b070,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726334219339779168,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6zqmd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b7beddc8-ce6a-44ed-b3e8-423baf620bbb,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernete
s.pod.terminationGracePeriod: 30,},},&Container{Id:df6e98168d3f80c6fa7f00ec963667bbd59e85b054d057fc873161c0b811c690,PodSandboxId:0cc8e7f5a7b7afb6fba3f65631241eb133a3fd4efa559f490732ecfbfad440e4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:5,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726334219127722524,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f486484-9641-4e23-8bc9-4dcae57b621a,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 5,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.termina
tionGracePeriod: 30,},},&Container{Id:f4b6294601181df2221b3b2a9952e0864fbef7e69634d02dace316759c43e431,PodSandboxId:928ea8de33905030650eec466f93285921f446dda71bb2c17462bfcc260ac207,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1726334218203547585,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-fw757,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51a38d95-fd50-4c05-a75d-a3dfeae127bd,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container
{Id:429725720c04d774b8dd66b69992ef334c86360f500219e860192266a0d355bd,PodSandboxId:4564189eeba3b81e291de82a9ba45090a53935ef617393b174ecc86513ac4f1f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726334218089697202,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-dpdz4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2a751c8d-890c-402e-846f-8f61e3fd1965,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.rest
artCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b5ab23c2f12798f997dfd6f5b6ff3d84296f2731909b2b10adf2092755601fdd,PodSandboxId:238a7746658f1c6d05de966e4253d7cb775bb460fb2a75a60f61f847ce29cad0,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726334218088047377,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-66txm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: abf3ed52-ab5a-4415-a8a9-78e567d60348,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contain
erPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d7a8daefb0eaadaa969614a32c02514d6e1cc779d7c3c9e31540c61053fa965,PodSandboxId:6e984099257ad2ed58634d13a332c6b8e51aa221520d7f49c092a55ad2a59f3b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726334217989304764,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-929
592,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 21e24f7df5d7099b0f0b2dba49446d51,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b941bc429a5fde67708b36dc7f2b22c492e47a8748c222c948b2d663c89d4559,PodSandboxId:0edfbfa01ecb59b2373e0bba14228824cbb764ac1eeb467afc47561af1907ec3,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726334218010369299,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-929592,io.kubernetes.pod.namespace: kube-system,io.kub
ernetes.pod.uid: d7c84dd075d4f7e4fd5febc189940f4e,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e96c2e442fde740472da39a62a3d82c91995eed86608662cb709d81b508a09e,PodSandboxId:df2a40e486b685a3c47cb8eb4aebce2f03d8bea33f9b5219903618fa40c5866b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726334217781276748,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-929592,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9506
5ad67a4f1610671e72fcaed57954,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a61a19ee550f00c2e9ec2f6c3c2858f016509e649725e9030ffe238270c99ca7,PodSandboxId:6d35c1613fd15e5d6eeebc06360e4dc3c0a083b150c4583027993addfe1753f9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726334217838311043,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-929592,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a3520d0a4b75398d9e9e72bfdc
fc4f4f,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:34c6ad67896f3d9a7c0b0b60a5733acdf8c55ff8eae5d674e973f29c8f92f81a,PodSandboxId:e605a9e0100e5d708dfcb136f97211271e8f47299b54252e343df013be857601,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1726333690210167885,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-49mwg,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9f3ed79c-66ac-429d-bbd6-4956eab3
be98,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9eb824a3acd106dd672eb9b88186825642d229cad673bfd46e35ff45d82c0e17,PodSandboxId:69d86428b72f0c107b00bf1918d30f35a1c7612dbe8c1cc199ffcf24b11fb6b5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726333546846726317,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-dpdz4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2a751c8d-890c-402e-846f-8f61e3fd1965,},Annotations:map[string]st
ring{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:06ffbf30c8c13ffdba9a832583a4629b5a821662986c777e9cca57100ed3fd9f,PodSandboxId:9b615a9a43e5968869cd39a175e3298d58f96b3d25f4c6e821a1c05d0e960e2f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726333546840132895,Labels:map[string]string{io.kubernetes.contain
er.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-66txm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: abf3ed52-ab5a-4415-a8a9-78e567d60348,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd34a54170b251f984dd373097036615e33493d9c7c91970296beedc2d507931,PodSandboxId:fc9e9c48c04bef982ef404b9c77cb73c2aecd644824a80cd95ac86a749ae2de2,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecified
Image:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1726333535088419023,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-fw757,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51a38d95-fd50-4c05-a75d-a3dfeae127bd,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c1571fb1d1d1f39fe29d5063d77b72dcce6a459a4bf16a25bbc24fd1a9c53849,PodSandboxId:de29821ef5ba3c9a10a8426ece9a602feabbc039c6332faaf6599d1ac2fd5ecd,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60
c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726333534777571877,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6zqmd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b7beddc8-ce6a-44ed-b3e8-423baf620bbb,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:972f797d7355465fa2cd15940a0b34684d83e7ad453b631aa9e06639881c09fb,PodSandboxId:dbb138fdd1472faa7f5001e471dfabc3b867c54c89ceeca12f2aa61eff79f487,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5
b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726333523910309461,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-929592,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 95065ad67a4f1610671e72fcaed57954,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac425bd016fb1dd04caf8514eda430f13287990b5a6398599221210bb254390a,PodSandboxId:282b521b3dea81924246e09b66f30e12327d4a1bccd61d21819ea1716b126a09,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4
,State:CONTAINER_EXITED,CreatedAt:1726333523925855261,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-929592,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d7c84dd075d4f7e4fd5febc189940f4e,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5e34f085-d23c-49f3-8f40-3304c4bb7aa3 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	1e3dd648bccf3       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      About a minute ago   Running             busybox                   1                   b273a10472206       busybox-7dff88458-49mwg
	876c20b82135d       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      About a minute ago   Running             kube-controller-manager   2                   6e984099257ad       kube-controller-manager-ha-929592
	451a416ccbf4e       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      About a minute ago   Running             kube-apiserver            3                   6d35c1613fd15       kube-apiserver-ha-929592
	59ca6347386b4       38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12                                      About a minute ago   Running             kube-vip                  0                   bc3029e83ceac       kube-vip-ha-929592
	ab7a7b73a44c2       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      About a minute ago   Running             kube-proxy                1                   b99429327f50f       kube-proxy-6zqmd
	df6e98168d3f8       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      About a minute ago   Exited              storage-provisioner       5                   0cc8e7f5a7b7a       storage-provisioner
	f4b6294601181       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      About a minute ago   Running             kindnet-cni               1                   928ea8de33905       kindnet-fw757
	429725720c04d       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      About a minute ago   Running             coredns                   1                   4564189eeba3b       coredns-7c65d6cfc9-dpdz4
	b5ab23c2f1279       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      About a minute ago   Running             coredns                   1                   238a7746658f1       coredns-7c65d6cfc9-66txm
	b941bc429a5fd       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      About a minute ago   Running             etcd                      1                   0edfbfa01ecb5       etcd-ha-929592
	7d7a8daefb0ea       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      About a minute ago   Exited              kube-controller-manager   1                   6e984099257ad       kube-controller-manager-ha-929592
	a61a19ee550f0       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      About a minute ago   Exited              kube-apiserver            2                   6d35c1613fd15       kube-apiserver-ha-929592
	8e96c2e442fde       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      About a minute ago   Running             kube-scheduler            1                   df2a40e486b68       kube-scheduler-ha-929592
	34c6ad67896f3       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   10 minutes ago       Exited              busybox                   0                   e605a9e0100e5       busybox-7dff88458-49mwg
	9eb824a3acd10       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      13 minutes ago       Exited              coredns                   0                   69d86428b72f0       coredns-7c65d6cfc9-dpdz4
	06ffbf30c8c13       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      13 minutes ago       Exited              coredns                   0                   9b615a9a43e59       coredns-7c65d6cfc9-66txm
	fd34a54170b25       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      13 minutes ago       Exited              kindnet-cni               0                   fc9e9c48c04be       kindnet-fw757
	c1571fb1d1d1f       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      13 minutes ago       Exited              kube-proxy                0                   de29821ef5ba3       kube-proxy-6zqmd
	ac425bd016fb1       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      13 minutes ago       Exited              etcd                      0                   282b521b3dea8       etcd-ha-929592
	972f797d73554       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      13 minutes ago       Exited              kube-scheduler            0                   dbb138fdd1472       kube-scheduler-ha-929592
	
	
	==> coredns [06ffbf30c8c13ffdba9a832583a4629b5a821662986c777e9cca57100ed3fd9f] <==
	[INFO] 10.244.0.4:42742 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000196447s
	[INFO] 10.244.2.2:34834 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000264331s
	[INFO] 10.244.2.2:59462 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000156407s
	[INFO] 10.244.2.2:42619 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001326596s
	[INFO] 10.244.2.2:44804 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000179359s
	[INFO] 10.244.2.2:41911 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000132469s
	[INFO] 10.244.2.2:33102 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000102993s
	[INFO] 10.244.1.2:55754 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000139996s
	[INFO] 10.244.1.2:43056 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.00122452s
	[INFO] 10.244.1.2:48145 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000077043s
	[INFO] 10.244.0.4:52337 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000165468s
	[INFO] 10.244.0.4:42536 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000091889s
	[INFO] 10.244.0.4:44365 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000064388s
	[INFO] 10.244.2.2:55168 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000124822s
	[INFO] 10.244.0.4:38549 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000137185s
	[INFO] 10.244.0.4:50003 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000132872s
	[INFO] 10.244.2.2:52393 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000098256s
	[INFO] 10.244.2.2:57699 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000088711s
	[INFO] 10.244.1.2:46863 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.00018617s
	[INFO] 10.244.1.2:35487 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000119162s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: Get "https://10.96.0.1:443/api/v1/services?allowWatchBookmarks=true&resourceVersion=1956&timeout=6m37s&timeoutSeconds=397&watch=true": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?allowWatchBookmarks=true&resourceVersion=1967&timeout=7m14s&timeoutSeconds=434&watch=true": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?allowWatchBookmarks=true&resourceVersion=1977&timeout=8m31s&timeoutSeconds=511&watch=true": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [429725720c04d774b8dd66b69992ef334c86360f500219e860192266a0d355bd] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: Trace[737210940]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (14-Sep-2024 17:17:02.812) (total time: 10001ms):
	Trace[737210940]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (17:17:12.814)
	Trace[737210940]: [10.001974656s] [10.001974656s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: Trace[1835578781]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (14-Sep-2024 17:17:02.939) (total time: 10001ms):
	Trace[1835578781]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (17:17:12.941)
	Trace[1835578781]: [10.001397415s] [10.001397415s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.6:35722->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.6:35722->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.6:35704->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.6:35704->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:35710->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:35710->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [9eb824a3acd106dd672eb9b88186825642d229cad673bfd46e35ff45d82c0e17] <==
	[INFO] 10.244.0.4:59604 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00010094s
	[INFO] 10.244.2.2:44822 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000134857s
	[INFO] 10.244.2.2:33999 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.00156764s
	[INFO] 10.244.1.2:33236 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000120988s
	[INFO] 10.244.1.2:56330 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001720435s
	[INFO] 10.244.1.2:55436 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00009185s
	[INFO] 10.244.1.2:57342 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00009326s
	[INFO] 10.244.1.2:54076 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000109267s
	[INFO] 10.244.0.4:39214 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000088174s
	[INFO] 10.244.2.2:52535 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000132429s
	[INFO] 10.244.2.2:57308 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000131665s
	[INFO] 10.244.2.2:55789 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000060892s
	[INFO] 10.244.1.2:51494 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000124082s
	[INFO] 10.244.1.2:52382 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000214777s
	[INFO] 10.244.1.2:43073 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000088643s
	[INFO] 10.244.1.2:44985 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000084521s
	[INFO] 10.244.0.4:58067 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000132438s
	[INFO] 10.244.0.4:49916 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000488329s
	[INFO] 10.244.2.2:49651 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000189629s
	[INFO] 10.244.2.2:55778 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000106781s
	[INFO] 10.244.1.2:40770 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000160687s
	[INFO] 10.244.1.2:44082 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000162642s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?allowWatchBookmarks=true&resourceVersion=1967&timeout=7m54s&timeoutSeconds=474&watch=true": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [b5ab23c2f12798f997dfd6f5b6ff3d84296f2731909b2b10adf2092755601fdd] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: Trace[184978772]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (14-Sep-2024 17:17:00.208) (total time: 10001ms):
	Trace[184978772]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (17:17:10.209)
	Trace[184978772]: [10.001175025s] [10.001175025s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: Trace[1477600604]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (14-Sep-2024 17:17:02.599) (total time: 10001ms):
	Trace[1477600604]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (17:17:12.600)
	Trace[1477600604]: [10.001678587s] [10.001678587s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.5:36454->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.5:36454->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.5:36460->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.5:36460->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               ha-929592
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-929592
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fbeeb744274463b05401c917e5ab21bbaf5ef95a
	                    minikube.k8s.io/name=ha-929592
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_14T17_05_31_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 14 Sep 2024 17:05:27 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-929592
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 14 Sep 2024 17:18:48 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 14 Sep 2024 17:17:36 +0000   Sat, 14 Sep 2024 17:05:26 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 14 Sep 2024 17:17:36 +0000   Sat, 14 Sep 2024 17:05:26 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 14 Sep 2024 17:17:36 +0000   Sat, 14 Sep 2024 17:05:26 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 14 Sep 2024 17:17:36 +0000   Sat, 14 Sep 2024 17:05:46 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.54
	  Hostname:    ha-929592
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 ca5487ccf56549d9a2987da2958ebdfe
	  System UUID:                ca5487cc-f565-49d9-a298-7da2958ebdfe
	  Boot ID:                    b416a941-f6c5-4da6-ab3c-4ac7463bcedd
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-49mwg              0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 coredns-7c65d6cfc9-66txm             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     13m
	  kube-system                 coredns-7c65d6cfc9-dpdz4             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     13m
	  kube-system                 etcd-ha-929592                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         13m
	  kube-system                 kindnet-fw757                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      13m
	  kube-system                 kube-apiserver-ha-929592             250m (12%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-controller-manager-ha-929592    200m (10%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-proxy-6zqmd                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-ha-929592             100m (5%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-vip-ha-929592                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         27s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 73s                    kube-proxy       
	  Normal   Starting                 13m                    kube-proxy       
	  Normal   NodeHasSufficientMemory  13m                    kubelet          Node ha-929592 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     13m                    kubelet          Node ha-929592 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    13m                    kubelet          Node ha-929592 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 13m                    kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  13m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           13m                    node-controller  Node ha-929592 event: Registered Node ha-929592 in Controller
	  Normal   NodeReady                13m                    kubelet          Node ha-929592 status is now: NodeReady
	  Normal   RegisteredNode           12m                    node-controller  Node ha-929592 event: Registered Node ha-929592 in Controller
	  Normal   RegisteredNode           11m                    node-controller  Node ha-929592 event: Registered Node ha-929592 in Controller
	  Warning  ContainerGCFailed        2m27s (x2 over 3m27s)  kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   NodeNotReady             2m18s (x3 over 3m7s)   kubelet          Node ha-929592 status is now: NodeNotReady
	  Normal   RegisteredNode           84s                    node-controller  Node ha-929592 event: Registered Node ha-929592 in Controller
	  Normal   RegisteredNode           82s                    node-controller  Node ha-929592 event: Registered Node ha-929592 in Controller
	  Normal   RegisteredNode           39s                    node-controller  Node ha-929592 event: Registered Node ha-929592 in Controller
	
	
	Name:               ha-929592-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-929592-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fbeeb744274463b05401c917e5ab21bbaf5ef95a
	                    minikube.k8s.io/name=ha-929592
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_14T17_06_27_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 14 Sep 2024 17:06:24 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-929592-m02
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 14 Sep 2024 17:18:55 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 14 Sep 2024 17:18:15 +0000   Sat, 14 Sep 2024 17:17:34 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 14 Sep 2024 17:18:15 +0000   Sat, 14 Sep 2024 17:17:34 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 14 Sep 2024 17:18:15 +0000   Sat, 14 Sep 2024 17:17:34 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 14 Sep 2024 17:18:15 +0000   Sat, 14 Sep 2024 17:17:34 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.148
	  Hostname:    ha-929592-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 bba17c21a65b42848fb2de3d914ef47e
	  System UUID:                bba17c21-a65b-4284-8fb2-de3d914ef47e
	  Boot ID:                    0a772f2d-56c8-463a-a563-f23ec15ee87f
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-kvmx7                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 etcd-ha-929592-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         12m
	  kube-system                 kindnet-tnjsl                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      12m
	  kube-system                 kube-apiserver-ha-929592-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-ha-929592-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-bcfkb                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-ha-929592-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-vip-ha-929592-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 12m                  kube-proxy       
	  Normal  Starting                 74s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  12m (x8 over 12m)    kubelet          Node ha-929592-m02 status is now: NodeHasSufficientMemory
	  Normal  CIDRAssignmentFailed     12m                  cidrAllocator    Node ha-929592-m02 status is now: CIDRAssignmentFailed
	  Normal  NodeHasNoDiskPressure    12m (x8 over 12m)    kubelet          Node ha-929592-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m (x7 over 12m)    kubelet          Node ha-929592-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  12m                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           12m                  node-controller  Node ha-929592-m02 event: Registered Node ha-929592-m02 in Controller
	  Normal  RegisteredNode           12m                  node-controller  Node ha-929592-m02 event: Registered Node ha-929592-m02 in Controller
	  Normal  RegisteredNode           11m                  node-controller  Node ha-929592-m02 event: Registered Node ha-929592-m02 in Controller
	  Normal  NodeNotReady             8m26s                node-controller  Node ha-929592-m02 status is now: NodeNotReady
	  Normal  Starting                 105s                 kubelet          Starting kubelet.
	  Normal  NodeHasNoDiskPressure    104s (x8 over 104s)  kubelet          Node ha-929592-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  104s (x8 over 104s)  kubelet          Node ha-929592-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     104s (x7 over 104s)  kubelet          Node ha-929592-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  104s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           84s                  node-controller  Node ha-929592-m02 event: Registered Node ha-929592-m02 in Controller
	  Normal  RegisteredNode           82s                  node-controller  Node ha-929592-m02 event: Registered Node ha-929592-m02 in Controller
	  Normal  RegisteredNode           39s                  node-controller  Node ha-929592-m02 event: Registered Node ha-929592-m02 in Controller
	
	
	Name:               ha-929592-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-929592-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fbeeb744274463b05401c917e5ab21bbaf5ef95a
	                    minikube.k8s.io/name=ha-929592
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_14T17_07_42_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 14 Sep 2024 17:07:38 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-929592-m03
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 14 Sep 2024 17:18:54 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 14 Sep 2024 17:18:34 +0000   Sat, 14 Sep 2024 17:07:38 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 14 Sep 2024 17:18:34 +0000   Sat, 14 Sep 2024 17:07:38 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 14 Sep 2024 17:18:34 +0000   Sat, 14 Sep 2024 17:07:38 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 14 Sep 2024 17:18:34 +0000   Sat, 14 Sep 2024 17:07:59 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.39
	  Hostname:    ha-929592-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 5bbc24177e214149a9c82a3c54652b96
	  System UUID:                5bbc2417-7e21-4149-a9c8-2a3c54652b96
	  Boot ID:                    3c23bc8a-2abf-4d34-816c-416114abeb74
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-4gtfl                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 etcd-ha-929592-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         11m
	  kube-system                 kindnet-j7mjh                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      11m
	  kube-system                 kube-apiserver-ha-929592-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-controller-manager-ha-929592-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-proxy-59tn8                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-scheduler-ha-929592-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-vip-ha-929592-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 37s                kube-proxy       
	  Normal   Starting                 11m                kube-proxy       
	  Normal   NodeAllocatableEnforced  11m                kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           11m                node-controller  Node ha-929592-m03 event: Registered Node ha-929592-m03 in Controller
	  Normal   CIDRAssignmentFailed     11m                cidrAllocator    Node ha-929592-m03 status is now: CIDRAssignmentFailed
	  Normal   NodeHasSufficientMemory  11m (x8 over 11m)  kubelet          Node ha-929592-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    11m (x8 over 11m)  kubelet          Node ha-929592-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     11m (x7 over 11m)  kubelet          Node ha-929592-m03 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           11m                node-controller  Node ha-929592-m03 event: Registered Node ha-929592-m03 in Controller
	  Normal   RegisteredNode           11m                node-controller  Node ha-929592-m03 event: Registered Node ha-929592-m03 in Controller
	  Normal   RegisteredNode           84s                node-controller  Node ha-929592-m03 event: Registered Node ha-929592-m03 in Controller
	  Normal   RegisteredNode           82s                node-controller  Node ha-929592-m03 event: Registered Node ha-929592-m03 in Controller
	  Normal   Starting                 54s                kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  54s                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  54s                kubelet          Node ha-929592-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    54s                kubelet          Node ha-929592-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     54s                kubelet          Node ha-929592-m03 status is now: NodeHasSufficientPID
	  Warning  Rebooted                 54s                kubelet          Node ha-929592-m03 has been rebooted, boot id: 3c23bc8a-2abf-4d34-816c-416114abeb74
	  Normal   RegisteredNode           39s                node-controller  Node ha-929592-m03 event: Registered Node ha-929592-m03 in Controller
	
	
	Name:               ha-929592-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-929592-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fbeeb744274463b05401c917e5ab21bbaf5ef95a
	                    minikube.k8s.io/name=ha-929592
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_14T17_08_41_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 14 Sep 2024 17:08:41 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-929592-m04
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 14 Sep 2024 17:18:49 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 14 Sep 2024 17:18:49 +0000   Sat, 14 Sep 2024 17:18:49 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 14 Sep 2024 17:18:49 +0000   Sat, 14 Sep 2024 17:18:49 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 14 Sep 2024 17:18:49 +0000   Sat, 14 Sep 2024 17:18:49 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 14 Sep 2024 17:18:49 +0000   Sat, 14 Sep 2024 17:18:49 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.51
	  Hostname:    ha-929592-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 b38c12dc6ad945c88a69c031beae5593
	  System UUID:                b38c12dc-6ad9-45c8-8a69-c031beae5593
	  Boot ID:                    b95e0ff1-0fb1-43fb-8ad9-7ae34c9be1e5
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-x76g8       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      10m
	  kube-system                 kube-proxy-l7g8d    0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 10m                kube-proxy       
	  Normal   Starting                 4s                 kube-proxy       
	  Normal   NodeHasSufficientMemory  10m (x2 over 10m)  kubelet          Node ha-929592-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    10m (x2 over 10m)  kubelet          Node ha-929592-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     10m (x2 over 10m)  kubelet          Node ha-929592-m04 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  10m                kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           10m                node-controller  Node ha-929592-m04 event: Registered Node ha-929592-m04 in Controller
	  Normal   CIDRAssignmentFailed     10m                cidrAllocator    Node ha-929592-m04 status is now: CIDRAssignmentFailed
	  Normal   RegisteredNode           10m                node-controller  Node ha-929592-m04 event: Registered Node ha-929592-m04 in Controller
	  Normal   RegisteredNode           10m                node-controller  Node ha-929592-m04 event: Registered Node ha-929592-m04 in Controller
	  Normal   NodeReady                9m28s              kubelet          Node ha-929592-m04 status is now: NodeReady
	  Normal   RegisteredNode           84s                node-controller  Node ha-929592-m04 event: Registered Node ha-929592-m04 in Controller
	  Normal   RegisteredNode           82s                node-controller  Node ha-929592-m04 event: Registered Node ha-929592-m04 in Controller
	  Normal   NodeNotReady             44s                node-controller  Node ha-929592-m04 status is now: NodeNotReady
	  Normal   RegisteredNode           39s                node-controller  Node ha-929592-m04 event: Registered Node ha-929592-m04 in Controller
	  Normal   Starting                 8s                 kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  8s                 kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  8s (x2 over 8s)    kubelet          Node ha-929592-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    8s (x2 over 8s)    kubelet          Node ha-929592-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     8s (x2 over 8s)    kubelet          Node ha-929592-m04 status is now: NodeHasSufficientPID
	  Warning  Rebooted                 8s                 kubelet          Node ha-929592-m04 has been rebooted, boot id: b95e0ff1-0fb1-43fb-8ad9-7ae34c9be1e5
	  Normal   NodeReady                8s                 kubelet          Node ha-929592-m04 status is now: NodeReady
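
The four node summaries above are plain "kubectl describe nodes" output captured at the point of failure. A minimal sketch of how the same snapshot could be re-captured, assuming the kubeconfig context carries the minikube profile name (ha-929592):

    # Hypothetical re-capture of the node dump above; the context name is an assumption.
    kubectl --context ha-929592 describe nodes
    # Equivalent, going through the bundled kubectl:
    minikube -p ha-929592 kubectl -- describe nodes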
	
	
	==> dmesg <==
	[  +0.055031] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.061916] systemd-fstab-generator[595]: Ignoring "noauto" option for root device
	[  +0.180150] systemd-fstab-generator[608]: Ignoring "noauto" option for root device
	[  +0.131339] systemd-fstab-generator[620]: Ignoring "noauto" option for root device
	[  +0.280240] systemd-fstab-generator[650]: Ignoring "noauto" option for root device
	[  +3.763196] systemd-fstab-generator[746]: Ignoring "noauto" option for root device
	[  +3.977772] systemd-fstab-generator[877]: Ignoring "noauto" option for root device
	[  +0.069092] kauditd_printk_skb: 158 callbacks suppressed
	[  +6.951305] systemd-fstab-generator[1297]: Ignoring "noauto" option for root device
	[  +0.081826] kauditd_printk_skb: 79 callbacks suppressed
	[  +5.069011] kauditd_printk_skb: 28 callbacks suppressed
	[ +11.752479] kauditd_printk_skb: 31 callbacks suppressed
	[Sep14 17:06] kauditd_printk_skb: 24 callbacks suppressed
	[Sep14 17:13] kauditd_printk_skb: 1 callbacks suppressed
	[Sep14 17:16] systemd-fstab-generator[3640]: Ignoring "noauto" option for root device
	[  +0.155122] systemd-fstab-generator[3652]: Ignoring "noauto" option for root device
	[  +0.181142] systemd-fstab-generator[3666]: Ignoring "noauto" option for root device
	[  +0.141257] systemd-fstab-generator[3678]: Ignoring "noauto" option for root device
	[  +0.294661] systemd-fstab-generator[3706]: Ignoring "noauto" option for root device
	[  +7.262565] systemd-fstab-generator[3799]: Ignoring "noauto" option for root device
	[  +0.086720] kauditd_printk_skb: 100 callbacks suppressed
	[  +6.527882] kauditd_printk_skb: 12 callbacks suppressed
	[Sep14 17:17] kauditd_printk_skb: 97 callbacks suppressed
	[ +10.057912] kauditd_printk_skb: 1 callbacks suppressed
	[ +23.971326] kauditd_printk_skb: 13 callbacks suppressed
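
The bracketed hex strings in the section headers below are cri-o container IDs, and each block is that container's log. A minimal sketch of pulling one of these logs again over SSH, assuming the profile name ha-929592; the <container-id> placeholder is hypothetical and must be replaced with an ID looked up on the node:

    # Hypothetical example; container IDs change whenever a container is recreated.
    minikube -p ha-929592 ssh
    sudo crictl ps -a --name etcd    # list etcd containers and their IDs
    sudo crictl logs <container-id>  # dump the selected container's log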
	
	
	==> etcd [ac425bd016fb1dd04caf8514eda430f13287990b5a6398599221210bb254390a] <==
	{"level":"warn","ts":"2024-09-14T17:15:11.603518Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-14T17:15:10.780672Z","time spent":"822.836675ms","remote":"127.0.0.1:56378","response type":"/etcdserverpb.KV/Range","request count":0,"request size":57,"response count":0,"response size":0,"request content":"key:\"/registry/runtimeclasses/\" range_end:\"/registry/runtimeclasses0\" limit:10000 "}
	2024/09/14 17:15:11 WARNING: [core] [Server #6] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-09-14T17:15:11.638832Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.54:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-14T17:15:11.638957Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.54:2379: use of closed network connection"}
	{"level":"info","ts":"2024-09-14T17:15:11.639156Z","caller":"etcdserver/server.go:1512","msg":"skipped leadership transfer; local server is not leader","local-member-id":"731f5c40d4af6217","current-leader-member-id":"0"}
	{"level":"info","ts":"2024-09-14T17:15:11.639557Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"5fb5e21af24b18aa"}
	{"level":"info","ts":"2024-09-14T17:15:11.639778Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"5fb5e21af24b18aa"}
	{"level":"info","ts":"2024-09-14T17:15:11.639872Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"5fb5e21af24b18aa"}
	{"level":"info","ts":"2024-09-14T17:15:11.640046Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"731f5c40d4af6217","remote-peer-id":"5fb5e21af24b18aa"}
	{"level":"info","ts":"2024-09-14T17:15:11.640142Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"731f5c40d4af6217","remote-peer-id":"5fb5e21af24b18aa"}
	{"level":"info","ts":"2024-09-14T17:15:11.640201Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"731f5c40d4af6217","remote-peer-id":"5fb5e21af24b18aa"}
	{"level":"info","ts":"2024-09-14T17:15:11.640223Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"5fb5e21af24b18aa"}
	{"level":"info","ts":"2024-09-14T17:15:11.640231Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"f7b50c386fd91100"}
	{"level":"info","ts":"2024-09-14T17:15:11.640244Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"f7b50c386fd91100"}
	{"level":"info","ts":"2024-09-14T17:15:11.640273Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"f7b50c386fd91100"}
	{"level":"info","ts":"2024-09-14T17:15:11.640323Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"731f5c40d4af6217","remote-peer-id":"f7b50c386fd91100"}
	{"level":"info","ts":"2024-09-14T17:15:11.640366Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"731f5c40d4af6217","remote-peer-id":"f7b50c386fd91100"}
	{"level":"info","ts":"2024-09-14T17:15:11.640407Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"731f5c40d4af6217","remote-peer-id":"f7b50c386fd91100"}
	{"level":"info","ts":"2024-09-14T17:15:11.640431Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"f7b50c386fd91100"}
	{"level":"info","ts":"2024-09-14T17:15:11.643890Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.39.54:2380"}
	{"level":"warn","ts":"2024-09-14T17:15:11.643941Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"8.889179795s","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"","error":"etcdserver: server stopped"}
	{"level":"info","ts":"2024-09-14T17:15:11.643990Z","caller":"traceutil/trace.go:171","msg":"trace[1102897918] range","detail":"{range_begin:; range_end:; }","duration":"8.889248444s","start":"2024-09-14T17:15:02.754732Z","end":"2024-09-14T17:15:11.643981Z","steps":["trace[1102897918] 'agreement among raft nodes before linearized reading'  (duration: 8.889177249s)"],"step_count":1}
	{"level":"error","ts":"2024-09-14T17:15:11.644040Z","caller":"etcdhttp/health.go:367","msg":"Health check error","path":"/readyz","reason":"[+]data_corruption ok\n[+]serializable_read ok\n[-]linearizable_read failed: etcdserver: server stopped\n","status-code":503,"stacktrace":"go.etcd.io/etcd/server/v3/etcdserver/api/etcdhttp.(*CheckRegistry).installRootHttpEndpoint.newHealthHandler.func2\n\tgo.etcd.io/etcd/server/v3/etcdserver/api/etcdhttp/health.go:367\nnet/http.HandlerFunc.ServeHTTP\n\tnet/http/server.go:2141\nnet/http.(*ServeMux).ServeHTTP\n\tnet/http/server.go:2519\nnet/http.serverHandler.ServeHTTP\n\tnet/http/server.go:2943\nnet/http.(*conn).serve\n\tnet/http/server.go:2014"}
	{"level":"info","ts":"2024-09-14T17:15:11.644155Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.39.54:2380"}
	{"level":"info","ts":"2024-09-14T17:15:11.644663Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"ha-929592","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.54:2380"],"advertise-client-urls":["https://192.168.39.54:2379"]}
	
	
	==> etcd [b941bc429a5fde67708b36dc7f2b22c492e47a8748c222c948b2d663c89d4559] <==
	{"level":"warn","ts":"2024-09-14T17:17:57.748130Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"731f5c40d4af6217","from":"731f5c40d4af6217","remote-peer-id":"f7b50c386fd91100","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-14T17:17:57.794286Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"731f5c40d4af6217","from":"731f5c40d4af6217","remote-peer-id":"f7b50c386fd91100","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-14T17:17:57.894376Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"731f5c40d4af6217","from":"731f5c40d4af6217","remote-peer-id":"f7b50c386fd91100","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-14T17:17:57.994390Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"731f5c40d4af6217","from":"731f5c40d4af6217","remote-peer-id":"f7b50c386fd91100","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-14T17:17:59.157328Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"f7b50c386fd91100","rtt":"0s","error":"dial tcp 192.168.39.39:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-14T17:17:59.157429Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"f7b50c386fd91100","rtt":"0s","error":"dial tcp 192.168.39.39:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-14T17:18:00.144328Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.39:2380/version","remote-member-id":"f7b50c386fd91100","error":"Get \"https://192.168.39.39:2380/version\": dial tcp 192.168.39.39:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-14T17:18:00.144469Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"f7b50c386fd91100","error":"Get \"https://192.168.39.39:2380/version\": dial tcp 192.168.39.39:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-14T17:18:04.147252Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.39:2380/version","remote-member-id":"f7b50c386fd91100","error":"Get \"https://192.168.39.39:2380/version\": dial tcp 192.168.39.39:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-14T17:18:04.147381Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"f7b50c386fd91100","error":"Get \"https://192.168.39.39:2380/version\": dial tcp 192.168.39.39:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-14T17:18:04.158272Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"f7b50c386fd91100","rtt":"0s","error":"dial tcp 192.168.39.39:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-14T17:18:04.158401Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"f7b50c386fd91100","rtt":"0s","error":"dial tcp 192.168.39.39:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-14T17:18:08.149748Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.39:2380/version","remote-member-id":"f7b50c386fd91100","error":"Get \"https://192.168.39.39:2380/version\": dial tcp 192.168.39.39:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-14T17:18:08.149911Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"f7b50c386fd91100","error":"Get \"https://192.168.39.39:2380/version\": dial tcp 192.168.39.39:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-14T17:18:09.158479Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"f7b50c386fd91100","rtt":"0s","error":"dial tcp 192.168.39.39:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-14T17:18:09.158513Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"f7b50c386fd91100","rtt":"0s","error":"dial tcp 192.168.39.39:2380: connect: connection refused"}
	{"level":"info","ts":"2024-09-14T17:18:10.790426Z","caller":"rafthttp/peer_status.go:53","msg":"peer became active","peer-id":"f7b50c386fd91100"}
	{"level":"info","ts":"2024-09-14T17:18:10.793723Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"731f5c40d4af6217","remote-peer-id":"f7b50c386fd91100"}
	{"level":"info","ts":"2024-09-14T17:18:10.805876Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"731f5c40d4af6217","remote-peer-id":"f7b50c386fd91100"}
	{"level":"info","ts":"2024-09-14T17:18:10.816265Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"731f5c40d4af6217","to":"f7b50c386fd91100","stream-type":"stream Message"}
	{"level":"info","ts":"2024-09-14T17:18:10.816333Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"731f5c40d4af6217","remote-peer-id":"f7b50c386fd91100"}
	{"level":"info","ts":"2024-09-14T17:18:10.816817Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"731f5c40d4af6217","to":"f7b50c386fd91100","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2024-09-14T17:18:10.816884Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"731f5c40d4af6217","remote-peer-id":"f7b50c386fd91100"}
	{"level":"info","ts":"2024-09-14T17:18:17.222959Z","caller":"traceutil/trace.go:171","msg":"trace[2049881545] transaction","detail":"{read_only:false; response_revision:2479; number_of_response:1; }","duration":"111.548732ms","start":"2024-09-14T17:18:17.111384Z","end":"2024-09-14T17:18:17.222933Z","steps":["trace[2049881545] 'process raft request'  (duration: 111.411244ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-14T17:18:25.880197Z","caller":"traceutil/trace.go:171","msg":"trace[629286354] transaction","detail":"{read_only:false; response_revision:2519; number_of_response:1; }","duration":"109.843563ms","start":"2024-09-14T17:18:25.770337Z","end":"2024-09-14T17:18:25.880181Z","steps":["trace[629286354] 'process raft request'  (duration: 109.746079ms)"],"step_count":1}
	
	
	==> kernel <==
	 17:18:57 up 14 min,  0 users,  load average: 0.60, 0.52, 0.34
	Linux ha-929592 5.10.207 #1 SMP Sat Sep 14 07:35:46 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [f4b6294601181df2221b3b2a9952e0864fbef7e69634d02dace316759c43e431] <==
	I0914 17:18:19.432054       1 main.go:322] Node ha-929592-m04 has CIDR [10.244.3.0/24] 
	I0914 17:18:29.433634       1 main.go:295] Handling node with IPs: map[192.168.39.54:{}]
	I0914 17:18:29.433848       1 main.go:299] handling current node
	I0914 17:18:29.433899       1 main.go:295] Handling node with IPs: map[192.168.39.148:{}]
	I0914 17:18:29.433922       1 main.go:322] Node ha-929592-m02 has CIDR [10.244.1.0/24] 
	I0914 17:18:29.434091       1 main.go:295] Handling node with IPs: map[192.168.39.39:{}]
	I0914 17:18:29.434117       1 main.go:322] Node ha-929592-m03 has CIDR [10.244.2.0/24] 
	I0914 17:18:29.434229       1 main.go:295] Handling node with IPs: map[192.168.39.51:{}]
	I0914 17:18:29.434259       1 main.go:322] Node ha-929592-m04 has CIDR [10.244.3.0/24] 
	I0914 17:18:39.433024       1 main.go:295] Handling node with IPs: map[192.168.39.54:{}]
	I0914 17:18:39.433109       1 main.go:299] handling current node
	I0914 17:18:39.433152       1 main.go:295] Handling node with IPs: map[192.168.39.148:{}]
	I0914 17:18:39.433178       1 main.go:322] Node ha-929592-m02 has CIDR [10.244.1.0/24] 
	I0914 17:18:39.433343       1 main.go:295] Handling node with IPs: map[192.168.39.39:{}]
	I0914 17:18:39.433408       1 main.go:322] Node ha-929592-m03 has CIDR [10.244.2.0/24] 
	I0914 17:18:39.433528       1 main.go:295] Handling node with IPs: map[192.168.39.51:{}]
	I0914 17:18:39.433557       1 main.go:322] Node ha-929592-m04 has CIDR [10.244.3.0/24] 
	I0914 17:18:49.430027       1 main.go:295] Handling node with IPs: map[192.168.39.54:{}]
	I0914 17:18:49.430196       1 main.go:299] handling current node
	I0914 17:18:49.430237       1 main.go:295] Handling node with IPs: map[192.168.39.148:{}]
	I0914 17:18:49.430263       1 main.go:322] Node ha-929592-m02 has CIDR [10.244.1.0/24] 
	I0914 17:18:49.430788       1 main.go:295] Handling node with IPs: map[192.168.39.39:{}]
	I0914 17:18:49.430874       1 main.go:322] Node ha-929592-m03 has CIDR [10.244.2.0/24] 
	I0914 17:18:49.431048       1 main.go:295] Handling node with IPs: map[192.168.39.51:{}]
	I0914 17:18:49.431075       1 main.go:322] Node ha-929592-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kindnet [fd34a54170b251f984dd373097036615e33493d9c7c91970296beedc2d507931] <==
	I0914 17:14:46.123684       1 main.go:295] Handling node with IPs: map[192.168.39.51:{}]
	I0914 17:14:46.123803       1 main.go:322] Node ha-929592-m04 has CIDR [10.244.3.0/24] 
	I0914 17:14:46.123960       1 main.go:295] Handling node with IPs: map[192.168.39.54:{}]
	I0914 17:14:46.123985       1 main.go:299] handling current node
	I0914 17:14:46.124006       1 main.go:295] Handling node with IPs: map[192.168.39.148:{}]
	I0914 17:14:46.124021       1 main.go:322] Node ha-929592-m02 has CIDR [10.244.1.0/24] 
	I0914 17:14:46.124103       1 main.go:295] Handling node with IPs: map[192.168.39.39:{}]
	I0914 17:14:46.124121       1 main.go:322] Node ha-929592-m03 has CIDR [10.244.2.0/24] 
	I0914 17:14:56.127429       1 main.go:295] Handling node with IPs: map[192.168.39.54:{}]
	I0914 17:14:56.127550       1 main.go:299] handling current node
	I0914 17:14:56.127632       1 main.go:295] Handling node with IPs: map[192.168.39.148:{}]
	I0914 17:14:56.127659       1 main.go:322] Node ha-929592-m02 has CIDR [10.244.1.0/24] 
	I0914 17:14:56.127866       1 main.go:295] Handling node with IPs: map[192.168.39.39:{}]
	I0914 17:14:56.127918       1 main.go:322] Node ha-929592-m03 has CIDR [10.244.2.0/24] 
	I0914 17:14:56.128035       1 main.go:295] Handling node with IPs: map[192.168.39.51:{}]
	I0914 17:14:56.128075       1 main.go:322] Node ha-929592-m04 has CIDR [10.244.3.0/24] 
	E0914 17:14:58.847174       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Node: Get "https://10.96.0.1:443/api/v1/nodes?allowWatchBookmarks=true&resourceVersion=2016&timeout=6m26s&timeoutSeconds=386&watch=true": dial tcp 10.96.0.1:443: connect: no route to host
	I0914 17:15:06.127815       1 main.go:295] Handling node with IPs: map[192.168.39.39:{}]
	I0914 17:15:06.127939       1 main.go:322] Node ha-929592-m03 has CIDR [10.244.2.0/24] 
	I0914 17:15:06.128135       1 main.go:295] Handling node with IPs: map[192.168.39.51:{}]
	I0914 17:15:06.128164       1 main.go:322] Node ha-929592-m04 has CIDR [10.244.3.0/24] 
	I0914 17:15:06.128226       1 main.go:295] Handling node with IPs: map[192.168.39.54:{}]
	I0914 17:15:06.128245       1 main.go:299] handling current node
	I0914 17:15:06.128296       1 main.go:295] Handling node with IPs: map[192.168.39.148:{}]
	I0914 17:15:06.128314       1 main.go:322] Node ha-929592-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kube-apiserver [451a416ccbf4edb1f2ee529934698e4d7d06257670bfa83420a9afba6589ffda] <==
	I0914 17:17:32.005210       1 controller.go:78] Starting OpenAPI AggregationController
	I0914 17:17:32.005761       1 controller.go:80] Starting OpenAPI V3 AggregationController
	I0914 17:17:32.151777       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0914 17:17:32.151819       1 policy_source.go:224] refreshing policies
	I0914 17:17:32.201690       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0914 17:17:32.201870       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0914 17:17:32.201897       1 shared_informer.go:320] Caches are synced for configmaps
	I0914 17:17:32.202396       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0914 17:17:32.202434       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0914 17:17:32.203324       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0914 17:17:32.205257       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0914 17:17:32.205402       1 aggregator.go:171] initial CRD sync complete...
	I0914 17:17:32.205487       1 autoregister_controller.go:144] Starting autoregister controller
	I0914 17:17:32.205518       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0914 17:17:32.205551       1 cache.go:39] Caches are synced for autoregister controller
	I0914 17:17:32.207057       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0914 17:17:32.207096       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	W0914 17:17:32.219141       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.148 192.168.39.39]
	I0914 17:17:32.222109       1 controller.go:615] quota admission added evaluator for: endpoints
	I0914 17:17:32.231642       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E0914 17:17:32.236955       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I0914 17:17:32.247881       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0914 17:17:32.262240       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0914 17:17:33.010023       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0914 17:17:33.355807       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.148 192.168.39.39 192.168.39.54]
	
	
	==> kube-apiserver [a61a19ee550f00c2e9ec2f6c3c2858f016509e649725e9030ffe238270c99ca7] <==
	I0914 17:16:58.629003       1 options.go:228] external host was not specified, using 192.168.39.54
	I0914 17:16:58.643250       1 server.go:142] Version: v1.31.1
	I0914 17:16:58.643313       1 server.go:144] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0914 17:16:59.138915       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0914 17:16:59.171144       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0914 17:16:59.177893       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0914 17:16:59.178737       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0914 17:16:59.179040       1 instance.go:232] Using reconciler: lease
	W0914 17:17:19.132807       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W0914 17:17:19.133691       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	F0914 17:17:19.180685       1 instance.go:225] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-controller-manager [7d7a8daefb0eaadaa969614a32c02514d6e1cc779d7c3c9e31540c61053fa965] <==
	I0914 17:16:59.457436       1 serving.go:386] Generated self-signed cert in-memory
	I0914 17:16:59.975746       1 controllermanager.go:197] "Starting" version="v1.31.1"
	I0914 17:16:59.975838       1 controllermanager.go:199] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0914 17:16:59.977668       1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0914 17:16:59.977910       1 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0914 17:16:59.978418       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0914 17:16:59.978467       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E0914 17:17:20.188675       1 controllermanager.go:242] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.39.54:8443/healthz\": dial tcp 192.168.39.54:8443: connect: connection refused"
	
	
	==> kube-controller-manager [876c20b82135dc9e3c36bbd198419de043cb1ef47c203583e228ea1289377803] <==
	I0914 17:17:46.653784       1 event.go:377] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"77c1f8c5-54e6-464a-975a-aa4d8c587d77", APIVersion:"v1", ResourceVersion:"254", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-cfz7c EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-cfz7c": the object has been modified; please apply your changes to the latest version and try again
	I0914 17:17:46.705105       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="111.946174ms"
	I0914 17:17:46.706746       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="184.408µs"
	I0914 17:17:46.719129       1 endpointslice_controller.go:344] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="failed to update kube-dns-cfz7c EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io \"kube-dns-cfz7c\": the object has been modified; please apply your changes to the latest version and try again"
	I0914 17:17:46.719551       1 event.go:377] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"77c1f8c5-54e6-464a-975a-aa4d8c587d77", APIVersion:"v1", ResourceVersion:"254", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-cfz7c EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-cfz7c": the object has been modified; please apply your changes to the latest version and try again
	I0914 17:17:46.724142       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="99.882799ms"
	I0914 17:17:46.803802       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="79.48116ms"
	I0914 17:17:46.804186       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="166.897µs"
	I0914 17:18:03.769343       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-929592-m03"
	I0914 17:18:04.767362       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="45.155326ms"
	I0914 17:18:04.767692       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="153.084µs"
	I0914 17:18:13.933247       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-929592-m04"
	I0914 17:18:13.961892       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-929592-m04"
	I0914 17:18:15.141028       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-929592-m02"
	I0914 17:18:15.672203       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-929592-m04"
	I0914 17:18:18.576407       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-929592-m04"
	I0914 17:18:18.663056       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-929592-m04"
	I0914 17:18:19.074019       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-929592-m04"
	I0914 17:18:23.101192       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="28.683964ms"
	I0914 17:18:23.101379       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="93.551µs"
	I0914 17:18:34.008473       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-929592-m03"
	I0914 17:18:49.611917       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-929592-m04"
	I0914 17:18:49.612071       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-929592-m04"
	I0914 17:18:49.626983       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-929592-m04"
	I0914 17:18:50.643201       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-929592-m04"
	
	
	==> kube-proxy [ab7a7b73a44c28a6353fc7334491855caa043bee9b4c0d4d190f7e0edc2cf7d1] <==
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0914 17:17:02.880632       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-929592\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0914 17:17:05.951737       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-929592\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0914 17:17:09.024281       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-929592\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0914 17:17:15.168772       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-929592\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0914 17:17:24.383143       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-929592\": dial tcp 192.168.39.254:8443: connect: no route to host"
	I0914 17:17:43.491896       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.54"]
	E0914 17:17:43.492032       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0914 17:17:43.525047       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0914 17:17:43.525110       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0914 17:17:43.525142       1 server_linux.go:169] "Using iptables Proxier"
	I0914 17:17:43.527204       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0914 17:17:43.527527       1 server.go:483] "Version info" version="v1.31.1"
	I0914 17:17:43.527557       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0914 17:17:43.530062       1 config.go:199] "Starting service config controller"
	I0914 17:17:43.530107       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0914 17:17:43.530140       1 config.go:105] "Starting endpoint slice config controller"
	I0914 17:17:43.530156       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0914 17:17:43.533483       1 config.go:328] "Starting node config controller"
	I0914 17:17:43.533533       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0914 17:17:43.630278       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0914 17:17:43.630384       1 shared_informer.go:320] Caches are synced for service config
	I0914 17:17:43.634198       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [c1571fb1d1d1f39fe29d5063d77b72dcce6a459a4bf16a25bbc24fd1a9c53849] <==
	E0914 17:13:58.559831       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1918\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0914 17:13:58.559891       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-929592&resourceVersion=2002": dial tcp 192.168.39.254:8443: connect: no route to host
	E0914 17:13:58.559932       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-929592&resourceVersion=2002\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0914 17:14:04.703968       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1945": dial tcp 192.168.39.254:8443: connect: no route to host
	E0914 17:14:04.704081       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1945\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0914 17:14:04.704454       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-929592&resourceVersion=2002": dial tcp 192.168.39.254:8443: connect: no route to host
	E0914 17:14:04.704650       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-929592&resourceVersion=2002\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0914 17:14:04.704753       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1918": dial tcp 192.168.39.254:8443: connect: no route to host
	E0914 17:14:04.704810       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1918\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0914 17:14:13.919996       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1945": dial tcp 192.168.39.254:8443: connect: no route to host
	E0914 17:14:13.920069       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1945\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0914 17:14:13.920099       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1918": dial tcp 192.168.39.254:8443: connect: no route to host
	E0914 17:14:13.920124       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1918\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0914 17:14:16.992435       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-929592&resourceVersion=2002": dial tcp 192.168.39.254:8443: connect: no route to host
	E0914 17:14:16.992926       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-929592&resourceVersion=2002\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0914 17:14:32.351451       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1918": dial tcp 192.168.39.254:8443: connect: no route to host
	E0914 17:14:32.351936       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1918\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0914 17:14:35.424275       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1945": dial tcp 192.168.39.254:8443: connect: no route to host
	E0914 17:14:35.424521       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1945\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0914 17:14:41.568495       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-929592&resourceVersion=2002": dial tcp 192.168.39.254:8443: connect: no route to host
	E0914 17:14:41.568688       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-929592&resourceVersion=2002\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0914 17:14:59.999214       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1918": dial tcp 192.168.39.254:8443: connect: no route to host
	E0914 17:14:59.999559       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1918\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0914 17:15:06.144453       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1945": dial tcp 192.168.39.254:8443: connect: no route to host
	E0914 17:15:06.144535       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1945\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	
	
	==> kube-scheduler [8e96c2e442fde740472da39a62a3d82c91995eed86608662cb709d81b508a09e] <==
	W0914 17:17:27.765846       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.39.54:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.39.54:8443: connect: connection refused
	E0914 17:17:27.765888       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get \"https://192.168.39.54:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0\": dial tcp 192.168.39.54:8443: connect: connection refused" logger="UnhandledError"
	W0914 17:17:27.942196       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://192.168.39.54:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.54:8443: connect: connection refused
	E0914 17:17:27.942250       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://192.168.39.54:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.39.54:8443: connect: connection refused" logger="UnhandledError"
	W0914 17:17:28.317534       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: Get "https://192.168.39.54:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.39.54:8443: connect: connection refused
	E0914 17:17:28.317639       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://192.168.39.54:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0\": dial tcp 192.168.39.54:8443: connect: connection refused" logger="UnhandledError"
	W0914 17:17:28.426963       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: Get "https://192.168.39.54:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.39.54:8443: connect: connection refused
	E0914 17:17:28.427017       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get \"https://192.168.39.54:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0\": dial tcp 192.168.39.54:8443: connect: connection refused" logger="UnhandledError"
	W0914 17:17:29.026659       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: Get "https://192.168.39.54:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.39.54:8443: connect: connection refused
	E0914 17:17:29.026711       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get \"https://192.168.39.54:8443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 192.168.39.54:8443: connect: connection refused" logger="UnhandledError"
	W0914 17:17:29.048530       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: Get "https://192.168.39.54:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.39.54:8443: connect: connection refused
	E0914 17:17:29.048712       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get \"https://192.168.39.54:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0\": dial tcp 192.168.39.54:8443: connect: connection refused" logger="UnhandledError"
	W0914 17:17:29.191539       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: Get "https://192.168.39.54:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.39.54:8443: connect: connection refused
	E0914 17:17:29.191716       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get \"https://192.168.39.54:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0\": dial tcp 192.168.39.54:8443: connect: connection refused" logger="UnhandledError"
	W0914 17:17:29.448797       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: Get "https://192.168.39.54:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.54:8443: connect: connection refused
	E0914 17:17:29.448916       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get \"https://192.168.39.54:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0\": dial tcp 192.168.39.54:8443: connect: connection refused" logger="UnhandledError"
	W0914 17:17:29.559423       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: Get "https://192.168.39.54:8443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.39.54:8443: connect: connection refused
	E0914 17:17:29.559503       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: Get \"https://192.168.39.54:8443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0\": dial tcp 192.168.39.54:8443: connect: connection refused" logger="UnhandledError"
	W0914 17:17:32.022266       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0914 17:17:32.022321       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0914 17:17:32.022433       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0914 17:17:32.022463       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0914 17:17:32.022515       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0914 17:17:32.022542       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	I0914 17:17:37.402904       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [972f797d7355465fa2cd15940a0b34684d83e7ad453b631aa9e06639881c09fb] <==
	E0914 17:08:42.973360       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-ll6r9\": pod kube-proxy-ll6r9 is already assigned to node \"ha-929592-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-ll6r9" node="ha-929592-m04"
	E0914 17:08:42.977406       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod ae77fbbd-0eba-4e1d-add0-d894e73795c1(kube-system/kube-proxy-ll6r9) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-ll6r9"
	E0914 17:08:42.977758       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-ll6r9\": pod kube-proxy-ll6r9 is already assigned to node \"ha-929592-m04\"" pod="kube-system/kube-proxy-ll6r9"
	I0914 17:08:42.977890       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-ll6r9" node="ha-929592-m04"
	E0914 17:08:44.830679       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-lrzhr\": pod kube-proxy-lrzhr is already assigned to node \"ha-929592-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-lrzhr" node="ha-929592-m04"
	E0914 17:08:44.830996       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-lrzhr\": pod kube-proxy-lrzhr is already assigned to node \"ha-929592-m04\"" pod="kube-system/kube-proxy-lrzhr"
	E0914 17:08:44.831750       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-thwhv\": pod kube-proxy-thwhv is already assigned to node \"ha-929592-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-thwhv" node="ha-929592-m04"
	E0914 17:08:44.837068       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 858b1075-344d-4b2d-baed-8eea46a2f708(kube-system/kube-proxy-thwhv) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-thwhv"
	E0914 17:08:44.837157       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-thwhv\": pod kube-proxy-thwhv is already assigned to node \"ha-929592-m04\"" pod="kube-system/kube-proxy-thwhv"
	I0914 17:08:44.837232       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-thwhv" node="ha-929592-m04"
	E0914 17:08:44.837022       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-l7g8d\": pod kube-proxy-l7g8d is already assigned to node \"ha-929592-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-l7g8d" node="ha-929592-m04"
	E0914 17:08:44.839305       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod bdb91643-a0e4-4162-aeb3-0d94749f04df(kube-system/kube-proxy-l7g8d) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-l7g8d"
	E0914 17:08:44.839486       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-l7g8d\": pod kube-proxy-l7g8d is already assigned to node \"ha-929592-m04\"" pod="kube-system/kube-proxy-l7g8d"
	I0914 17:08:44.839536       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-l7g8d" node="ha-929592-m04"
	E0914 17:15:04.055830       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: unknown (get nodes)" logger="UnhandledError"
	E0914 17:15:05.672559       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: unknown (get csistoragecapacities.storage.k8s.io)" logger="UnhandledError"
	E0914 17:15:06.570552       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: unknown (get poddisruptionbudgets.policy)" logger="UnhandledError"
	E0914 17:15:06.985854       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: unknown (get services)" logger="UnhandledError"
	E0914 17:15:07.641551       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: unknown (get storageclasses.storage.k8s.io)" logger="UnhandledError"
	E0914 17:15:08.241189       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: unknown (get replicationcontrollers)" logger="UnhandledError"
	E0914 17:15:08.759986       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: unknown (get csidrivers.storage.k8s.io)" logger="UnhandledError"
	E0914 17:15:09.957710       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: unknown (get persistentvolumeclaims)" logger="UnhandledError"
	E0914 17:15:10.100388       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: unknown (get pods)" logger="UnhandledError"
	E0914 17:15:10.526092       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: unknown (get statefulsets.apps)" logger="UnhandledError"
	E0914 17:15:11.550061       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Sep 14 17:18:05 ha-929592 kubelet[1305]: I0914 17:18:05.063024    1305 scope.go:117] "RemoveContainer" containerID="df6e98168d3f80c6fa7f00ec963667bbd59e85b054d057fc873161c0b811c690"
	Sep 14 17:18:05 ha-929592 kubelet[1305]: E0914 17:18:05.063513    1305 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(4f486484-9641-4e23-8bc9-4dcae57b621a)\"" pod="kube-system/storage-provisioner" podUID="4f486484-9641-4e23-8bc9-4dcae57b621a"
	Sep 14 17:18:10 ha-929592 kubelet[1305]: E0914 17:18:10.267725    1305 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726334290267440794,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 17:18:10 ha-929592 kubelet[1305]: E0914 17:18:10.267768    1305 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726334290267440794,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 17:18:19 ha-929592 kubelet[1305]: I0914 17:18:19.063070    1305 scope.go:117] "RemoveContainer" containerID="df6e98168d3f80c6fa7f00ec963667bbd59e85b054d057fc873161c0b811c690"
	Sep 14 17:18:19 ha-929592 kubelet[1305]: E0914 17:18:19.063210    1305 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(4f486484-9641-4e23-8bc9-4dcae57b621a)\"" pod="kube-system/storage-provisioner" podUID="4f486484-9641-4e23-8bc9-4dcae57b621a"
	Sep 14 17:18:20 ha-929592 kubelet[1305]: E0914 17:18:20.270223    1305 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726334300269624230,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 17:18:20 ha-929592 kubelet[1305]: E0914 17:18:20.270514    1305 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726334300269624230,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 17:18:30 ha-929592 kubelet[1305]: I0914 17:18:30.064429    1305 kubelet.go:1895] "Trying to delete pod" pod="kube-system/kube-vip-ha-929592" podUID="8bec83fe-1516-467a-9575-3c55dbcbda23"
	Sep 14 17:18:30 ha-929592 kubelet[1305]: E0914 17:18:30.088431    1305 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 14 17:18:30 ha-929592 kubelet[1305]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 14 17:18:30 ha-929592 kubelet[1305]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 14 17:18:30 ha-929592 kubelet[1305]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 14 17:18:30 ha-929592 kubelet[1305]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 14 17:18:30 ha-929592 kubelet[1305]: I0914 17:18:30.088737    1305 kubelet.go:1900] "Deleted mirror pod because it is outdated" pod="kube-system/kube-vip-ha-929592"
	Sep 14 17:18:30 ha-929592 kubelet[1305]: E0914 17:18:30.274477    1305 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726334310274132037,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 17:18:30 ha-929592 kubelet[1305]: E0914 17:18:30.274528    1305 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726334310274132037,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 17:18:31 ha-929592 kubelet[1305]: I0914 17:18:31.062930    1305 scope.go:117] "RemoveContainer" containerID="df6e98168d3f80c6fa7f00ec963667bbd59e85b054d057fc873161c0b811c690"
	Sep 14 17:18:31 ha-929592 kubelet[1305]: E0914 17:18:31.063089    1305 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(4f486484-9641-4e23-8bc9-4dcae57b621a)\"" pod="kube-system/storage-provisioner" podUID="4f486484-9641-4e23-8bc9-4dcae57b621a"
	Sep 14 17:18:40 ha-929592 kubelet[1305]: E0914 17:18:40.278813    1305 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726334320277781082,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 17:18:40 ha-929592 kubelet[1305]: E0914 17:18:40.279648    1305 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726334320277781082,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 17:18:46 ha-929592 kubelet[1305]: I0914 17:18:46.062924    1305 scope.go:117] "RemoveContainer" containerID="df6e98168d3f80c6fa7f00ec963667bbd59e85b054d057fc873161c0b811c690"
	Sep 14 17:18:46 ha-929592 kubelet[1305]: E0914 17:18:46.063119    1305 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(4f486484-9641-4e23-8bc9-4dcae57b621a)\"" pod="kube-system/storage-provisioner" podUID="4f486484-9641-4e23-8bc9-4dcae57b621a"
	Sep 14 17:18:50 ha-929592 kubelet[1305]: E0914 17:18:50.282639    1305 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726334330282231016,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 17:18:50 ha-929592 kubelet[1305]: E0914 17:18:50.282942    1305 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726334330282231016,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0914 17:18:56.153316   35491 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19643-8806/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-929592 -n ha-929592
helpers_test.go:261: (dbg) Run:  kubectl --context ha-929592 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartClusterKeepsNodes (349.80s)

                                                
                                    
TestMultiControlPlane/serial/StopCluster (141.85s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-linux-amd64 -p ha-929592 stop -v=7 --alsologtostderr
ha_test.go:531: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-929592 stop -v=7 --alsologtostderr: exit status 82 (2m0.473580292s)

                                                
                                                
-- stdout --
	* Stopping node "ha-929592-m04"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0914 17:19:15.760366   35905 out.go:345] Setting OutFile to fd 1 ...
	I0914 17:19:15.760625   35905 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 17:19:15.760636   35905 out.go:358] Setting ErrFile to fd 2...
	I0914 17:19:15.760641   35905 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 17:19:15.760875   35905 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19643-8806/.minikube/bin
	I0914 17:19:15.761165   35905 out.go:352] Setting JSON to false
	I0914 17:19:15.761266   35905 mustload.go:65] Loading cluster: ha-929592
	I0914 17:19:15.761680   35905 config.go:182] Loaded profile config "ha-929592": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0914 17:19:15.761772   35905 profile.go:143] Saving config to /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/ha-929592/config.json ...
	I0914 17:19:15.761968   35905 mustload.go:65] Loading cluster: ha-929592
	I0914 17:19:15.762143   35905 config.go:182] Loaded profile config "ha-929592": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0914 17:19:15.762206   35905 stop.go:39] StopHost: ha-929592-m04
	I0914 17:19:15.762636   35905 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 17:19:15.762682   35905 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 17:19:15.777689   35905 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42899
	I0914 17:19:15.778193   35905 main.go:141] libmachine: () Calling .GetVersion
	I0914 17:19:15.778854   35905 main.go:141] libmachine: Using API Version  1
	I0914 17:19:15.778877   35905 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 17:19:15.779315   35905 main.go:141] libmachine: () Calling .GetMachineName
	I0914 17:19:15.781977   35905 out.go:177] * Stopping node "ha-929592-m04"  ...
	I0914 17:19:15.783078   35905 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0914 17:19:15.783112   35905 main.go:141] libmachine: (ha-929592-m04) Calling .DriverName
	I0914 17:19:15.783416   35905 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0914 17:19:15.783441   35905 main.go:141] libmachine: (ha-929592-m04) Calling .GetSSHHostname
	I0914 17:19:15.786728   35905 main.go:141] libmachine: (ha-929592-m04) DBG | domain ha-929592-m04 has defined MAC address 52:54:00:7a:18:a1 in network mk-ha-929592
	I0914 17:19:15.787328   35905 main.go:141] libmachine: (ha-929592-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:18:a1", ip: ""} in network mk-ha-929592: {Iface:virbr1 ExpiryTime:2024-09-14 18:18:44 +0000 UTC Type:0 Mac:52:54:00:7a:18:a1 Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:ha-929592-m04 Clientid:01:52:54:00:7a:18:a1}
	I0914 17:19:15.787416   35905 main.go:141] libmachine: (ha-929592-m04) DBG | domain ha-929592-m04 has defined IP address 192.168.39.51 and MAC address 52:54:00:7a:18:a1 in network mk-ha-929592
	I0914 17:19:15.787507   35905 main.go:141] libmachine: (ha-929592-m04) Calling .GetSSHPort
	I0914 17:19:15.787721   35905 main.go:141] libmachine: (ha-929592-m04) Calling .GetSSHKeyPath
	I0914 17:19:15.787866   35905 main.go:141] libmachine: (ha-929592-m04) Calling .GetSSHUsername
	I0914 17:19:15.788053   35905 sshutil.go:53] new ssh client: &{IP:192.168.39.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/ha-929592-m04/id_rsa Username:docker}
	I0914 17:19:15.868346   35905 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0914 17:19:15.921561   35905 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0914 17:19:15.974270   35905 main.go:141] libmachine: Stopping "ha-929592-m04"...
	I0914 17:19:15.974298   35905 main.go:141] libmachine: (ha-929592-m04) Calling .GetState
	I0914 17:19:15.975782   35905 main.go:141] libmachine: (ha-929592-m04) Calling .Stop
	I0914 17:19:15.979293   35905 main.go:141] libmachine: (ha-929592-m04) Waiting for machine to stop 0/120
	I0914 17:19:16.980958   35905 main.go:141] libmachine: (ha-929592-m04) Waiting for machine to stop 1/120
	I0914 17:19:17.982605   35905 main.go:141] libmachine: (ha-929592-m04) Waiting for machine to stop 2/120
	I0914 17:19:18.983987   35905 main.go:141] libmachine: (ha-929592-m04) Waiting for machine to stop 3/120
	I0914 17:19:19.985431   35905 main.go:141] libmachine: (ha-929592-m04) Waiting for machine to stop 4/120
	I0914 17:19:20.987582   35905 main.go:141] libmachine: (ha-929592-m04) Waiting for machine to stop 5/120
	I0914 17:19:21.988975   35905 main.go:141] libmachine: (ha-929592-m04) Waiting for machine to stop 6/120
	I0914 17:19:22.990417   35905 main.go:141] libmachine: (ha-929592-m04) Waiting for machine to stop 7/120
	I0914 17:19:23.992491   35905 main.go:141] libmachine: (ha-929592-m04) Waiting for machine to stop 8/120
	I0914 17:19:24.993854   35905 main.go:141] libmachine: (ha-929592-m04) Waiting for machine to stop 9/120
	I0914 17:19:25.996052   35905 main.go:141] libmachine: (ha-929592-m04) Waiting for machine to stop 10/120
	I0914 17:19:26.998204   35905 main.go:141] libmachine: (ha-929592-m04) Waiting for machine to stop 11/120
	I0914 17:19:27.999531   35905 main.go:141] libmachine: (ha-929592-m04) Waiting for machine to stop 12/120
	I0914 17:19:29.001160   35905 main.go:141] libmachine: (ha-929592-m04) Waiting for machine to stop 13/120
	I0914 17:19:30.003551   35905 main.go:141] libmachine: (ha-929592-m04) Waiting for machine to stop 14/120
	I0914 17:19:31.005459   35905 main.go:141] libmachine: (ha-929592-m04) Waiting for machine to stop 15/120
	I0914 17:19:32.006659   35905 main.go:141] libmachine: (ha-929592-m04) Waiting for machine to stop 16/120
	I0914 17:19:33.008062   35905 main.go:141] libmachine: (ha-929592-m04) Waiting for machine to stop 17/120
	I0914 17:19:34.009303   35905 main.go:141] libmachine: (ha-929592-m04) Waiting for machine to stop 18/120
	I0914 17:19:35.010823   35905 main.go:141] libmachine: (ha-929592-m04) Waiting for machine to stop 19/120
	I0914 17:19:36.012722   35905 main.go:141] libmachine: (ha-929592-m04) Waiting for machine to stop 20/120
	I0914 17:19:37.014643   35905 main.go:141] libmachine: (ha-929592-m04) Waiting for machine to stop 21/120
	I0914 17:19:38.016688   35905 main.go:141] libmachine: (ha-929592-m04) Waiting for machine to stop 22/120
	I0914 17:19:39.018257   35905 main.go:141] libmachine: (ha-929592-m04) Waiting for machine to stop 23/120
	I0914 17:19:40.019659   35905 main.go:141] libmachine: (ha-929592-m04) Waiting for machine to stop 24/120
	I0914 17:19:41.021570   35905 main.go:141] libmachine: (ha-929592-m04) Waiting for machine to stop 25/120
	I0914 17:19:42.022885   35905 main.go:141] libmachine: (ha-929592-m04) Waiting for machine to stop 26/120
	I0914 17:19:43.024259   35905 main.go:141] libmachine: (ha-929592-m04) Waiting for machine to stop 27/120
	I0914 17:19:44.025852   35905 main.go:141] libmachine: (ha-929592-m04) Waiting for machine to stop 28/120
	I0914 17:19:45.028091   35905 main.go:141] libmachine: (ha-929592-m04) Waiting for machine to stop 29/120
	I0914 17:19:46.030442   35905 main.go:141] libmachine: (ha-929592-m04) Waiting for machine to stop 30/120
	I0914 17:19:47.032070   35905 main.go:141] libmachine: (ha-929592-m04) Waiting for machine to stop 31/120
	I0914 17:19:48.033407   35905 main.go:141] libmachine: (ha-929592-m04) Waiting for machine to stop 32/120
	I0914 17:19:49.034838   35905 main.go:141] libmachine: (ha-929592-m04) Waiting for machine to stop 33/120
	I0914 17:19:50.036489   35905 main.go:141] libmachine: (ha-929592-m04) Waiting for machine to stop 34/120
	I0914 17:19:51.038418   35905 main.go:141] libmachine: (ha-929592-m04) Waiting for machine to stop 35/120
	I0914 17:19:52.040677   35905 main.go:141] libmachine: (ha-929592-m04) Waiting for machine to stop 36/120
	I0914 17:19:53.042351   35905 main.go:141] libmachine: (ha-929592-m04) Waiting for machine to stop 37/120
	I0914 17:19:54.043865   35905 main.go:141] libmachine: (ha-929592-m04) Waiting for machine to stop 38/120
	I0914 17:19:55.045365   35905 main.go:141] libmachine: (ha-929592-m04) Waiting for machine to stop 39/120
	I0914 17:19:56.047421   35905 main.go:141] libmachine: (ha-929592-m04) Waiting for machine to stop 40/120
	I0914 17:19:57.048524   35905 main.go:141] libmachine: (ha-929592-m04) Waiting for machine to stop 41/120
	I0914 17:19:58.050113   35905 main.go:141] libmachine: (ha-929592-m04) Waiting for machine to stop 42/120
	I0914 17:19:59.051616   35905 main.go:141] libmachine: (ha-929592-m04) Waiting for machine to stop 43/120
	I0914 17:20:00.053969   35905 main.go:141] libmachine: (ha-929592-m04) Waiting for machine to stop 44/120
	I0914 17:20:01.055988   35905 main.go:141] libmachine: (ha-929592-m04) Waiting for machine to stop 45/120
	I0914 17:20:02.057321   35905 main.go:141] libmachine: (ha-929592-m04) Waiting for machine to stop 46/120
	I0914 17:20:03.059197   35905 main.go:141] libmachine: (ha-929592-m04) Waiting for machine to stop 47/120
	I0914 17:20:04.060963   35905 main.go:141] libmachine: (ha-929592-m04) Waiting for machine to stop 48/120
	I0914 17:20:05.062680   35905 main.go:141] libmachine: (ha-929592-m04) Waiting for machine to stop 49/120
	I0914 17:20:06.064573   35905 main.go:141] libmachine: (ha-929592-m04) Waiting for machine to stop 50/120
	I0914 17:20:07.066151   35905 main.go:141] libmachine: (ha-929592-m04) Waiting for machine to stop 51/120
	I0914 17:20:08.067314   35905 main.go:141] libmachine: (ha-929592-m04) Waiting for machine to stop 52/120
	I0914 17:20:09.068828   35905 main.go:141] libmachine: (ha-929592-m04) Waiting for machine to stop 53/120
	I0914 17:20:10.070308   35905 main.go:141] libmachine: (ha-929592-m04) Waiting for machine to stop 54/120
	I0914 17:20:11.071949   35905 main.go:141] libmachine: (ha-929592-m04) Waiting for machine to stop 55/120
	I0914 17:20:12.073717   35905 main.go:141] libmachine: (ha-929592-m04) Waiting for machine to stop 56/120
	I0914 17:20:13.075068   35905 main.go:141] libmachine: (ha-929592-m04) Waiting for machine to stop 57/120
	I0914 17:20:14.076601   35905 main.go:141] libmachine: (ha-929592-m04) Waiting for machine to stop 58/120
	I0914 17:20:15.078034   35905 main.go:141] libmachine: (ha-929592-m04) Waiting for machine to stop 59/120
	I0914 17:20:16.079751   35905 main.go:141] libmachine: (ha-929592-m04) Waiting for machine to stop 60/120
	I0914 17:20:17.081927   35905 main.go:141] libmachine: (ha-929592-m04) Waiting for machine to stop 61/120
	I0914 17:20:18.083360   35905 main.go:141] libmachine: (ha-929592-m04) Waiting for machine to stop 62/120
	I0914 17:20:19.084805   35905 main.go:141] libmachine: (ha-929592-m04) Waiting for machine to stop 63/120
	I0914 17:20:20.086267   35905 main.go:141] libmachine: (ha-929592-m04) Waiting for machine to stop 64/120
	I0914 17:20:21.088124   35905 main.go:141] libmachine: (ha-929592-m04) Waiting for machine to stop 65/120
	I0914 17:20:22.090630   35905 main.go:141] libmachine: (ha-929592-m04) Waiting for machine to stop 66/120
	I0914 17:20:23.092120   35905 main.go:141] libmachine: (ha-929592-m04) Waiting for machine to stop 67/120
	I0914 17:20:24.093604   35905 main.go:141] libmachine: (ha-929592-m04) Waiting for machine to stop 68/120
	I0914 17:20:25.094837   35905 main.go:141] libmachine: (ha-929592-m04) Waiting for machine to stop 69/120
	I0914 17:20:26.096811   35905 main.go:141] libmachine: (ha-929592-m04) Waiting for machine to stop 70/120
	I0914 17:20:27.098281   35905 main.go:141] libmachine: (ha-929592-m04) Waiting for machine to stop 71/120
	I0914 17:20:28.100388   35905 main.go:141] libmachine: (ha-929592-m04) Waiting for machine to stop 72/120
	I0914 17:20:29.102040   35905 main.go:141] libmachine: (ha-929592-m04) Waiting for machine to stop 73/120
	I0914 17:20:30.103899   35905 main.go:141] libmachine: (ha-929592-m04) Waiting for machine to stop 74/120
	I0914 17:20:31.105858   35905 main.go:141] libmachine: (ha-929592-m04) Waiting for machine to stop 75/120
	I0914 17:20:32.108355   35905 main.go:141] libmachine: (ha-929592-m04) Waiting for machine to stop 76/120
	I0914 17:20:33.109844   35905 main.go:141] libmachine: (ha-929592-m04) Waiting for machine to stop 77/120
	I0914 17:20:34.111265   35905 main.go:141] libmachine: (ha-929592-m04) Waiting for machine to stop 78/120
	I0914 17:20:35.112673   35905 main.go:141] libmachine: (ha-929592-m04) Waiting for machine to stop 79/120
	I0914 17:20:36.114876   35905 main.go:141] libmachine: (ha-929592-m04) Waiting for machine to stop 80/120
	I0914 17:20:37.116324   35905 main.go:141] libmachine: (ha-929592-m04) Waiting for machine to stop 81/120
	I0914 17:20:38.117839   35905 main.go:141] libmachine: (ha-929592-m04) Waiting for machine to stop 82/120
	I0914 17:20:39.119892   35905 main.go:141] libmachine: (ha-929592-m04) Waiting for machine to stop 83/120
	I0914 17:20:40.121294   35905 main.go:141] libmachine: (ha-929592-m04) Waiting for machine to stop 84/120
	I0914 17:20:41.123209   35905 main.go:141] libmachine: (ha-929592-m04) Waiting for machine to stop 85/120
	I0914 17:20:42.124811   35905 main.go:141] libmachine: (ha-929592-m04) Waiting for machine to stop 86/120
	I0914 17:20:43.126345   35905 main.go:141] libmachine: (ha-929592-m04) Waiting for machine to stop 87/120
	I0914 17:20:44.128684   35905 main.go:141] libmachine: (ha-929592-m04) Waiting for machine to stop 88/120
	I0914 17:20:45.130234   35905 main.go:141] libmachine: (ha-929592-m04) Waiting for machine to stop 89/120
	I0914 17:20:46.132751   35905 main.go:141] libmachine: (ha-929592-m04) Waiting for machine to stop 90/120
	I0914 17:20:47.134381   35905 main.go:141] libmachine: (ha-929592-m04) Waiting for machine to stop 91/120
	I0914 17:20:48.137028   35905 main.go:141] libmachine: (ha-929592-m04) Waiting for machine to stop 92/120
	I0914 17:20:49.138338   35905 main.go:141] libmachine: (ha-929592-m04) Waiting for machine to stop 93/120
	I0914 17:20:50.140848   35905 main.go:141] libmachine: (ha-929592-m04) Waiting for machine to stop 94/120
	I0914 17:20:51.143002   35905 main.go:141] libmachine: (ha-929592-m04) Waiting for machine to stop 95/120
	I0914 17:20:52.144488   35905 main.go:141] libmachine: (ha-929592-m04) Waiting for machine to stop 96/120
	I0914 17:20:53.145783   35905 main.go:141] libmachine: (ha-929592-m04) Waiting for machine to stop 97/120
	I0914 17:20:54.147350   35905 main.go:141] libmachine: (ha-929592-m04) Waiting for machine to stop 98/120
	I0914 17:20:55.148963   35905 main.go:141] libmachine: (ha-929592-m04) Waiting for machine to stop 99/120
	I0914 17:20:56.151131   35905 main.go:141] libmachine: (ha-929592-m04) Waiting for machine to stop 100/120
	I0914 17:20:57.152491   35905 main.go:141] libmachine: (ha-929592-m04) Waiting for machine to stop 101/120
	I0914 17:20:58.153685   35905 main.go:141] libmachine: (ha-929592-m04) Waiting for machine to stop 102/120
	I0914 17:20:59.155182   35905 main.go:141] libmachine: (ha-929592-m04) Waiting for machine to stop 103/120
	I0914 17:21:00.156716   35905 main.go:141] libmachine: (ha-929592-m04) Waiting for machine to stop 104/120
	I0914 17:21:01.158704   35905 main.go:141] libmachine: (ha-929592-m04) Waiting for machine to stop 105/120
	I0914 17:21:02.160755   35905 main.go:141] libmachine: (ha-929592-m04) Waiting for machine to stop 106/120
	I0914 17:21:03.162453   35905 main.go:141] libmachine: (ha-929592-m04) Waiting for machine to stop 107/120
	I0914 17:21:04.163962   35905 main.go:141] libmachine: (ha-929592-m04) Waiting for machine to stop 108/120
	I0914 17:21:05.165469   35905 main.go:141] libmachine: (ha-929592-m04) Waiting for machine to stop 109/120
	I0914 17:21:06.167817   35905 main.go:141] libmachine: (ha-929592-m04) Waiting for machine to stop 110/120
	I0914 17:21:07.169596   35905 main.go:141] libmachine: (ha-929592-m04) Waiting for machine to stop 111/120
	I0914 17:21:08.171854   35905 main.go:141] libmachine: (ha-929592-m04) Waiting for machine to stop 112/120
	I0914 17:21:09.173486   35905 main.go:141] libmachine: (ha-929592-m04) Waiting for machine to stop 113/120
	I0914 17:21:10.174896   35905 main.go:141] libmachine: (ha-929592-m04) Waiting for machine to stop 114/120
	I0914 17:21:11.176726   35905 main.go:141] libmachine: (ha-929592-m04) Waiting for machine to stop 115/120
	I0914 17:21:12.178017   35905 main.go:141] libmachine: (ha-929592-m04) Waiting for machine to stop 116/120
	I0914 17:21:13.179482   35905 main.go:141] libmachine: (ha-929592-m04) Waiting for machine to stop 117/120
	I0914 17:21:14.180990   35905 main.go:141] libmachine: (ha-929592-m04) Waiting for machine to stop 118/120
	I0914 17:21:15.182683   35905 main.go:141] libmachine: (ha-929592-m04) Waiting for machine to stop 119/120
	I0914 17:21:16.183320   35905 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0914 17:21:16.183397   35905 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0914 17:21:16.185226   35905 out.go:201] 
	W0914 17:21:16.186497   35905 out.go:270] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0914 17:21:16.186510   35905 out.go:270] * 
	* 
	W0914 17:21:16.188801   35905 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0914 17:21:16.189929   35905 out.go:201] 

                                                
                                                
** /stderr **
ha_test.go:533: failed to stop cluster. args "out/minikube-linux-amd64 -p ha-929592 stop -v=7 --alsologtostderr": exit status 82
ha_test.go:537: (dbg) Run:  out/minikube-linux-amd64 -p ha-929592 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-929592 status -v=7 --alsologtostderr: exit status 3 (19.075553519s)

                                                
                                                
-- stdout --
	ha-929592
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-929592-m02
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-929592-m04
	type: Worker
	host: Error
	kubelet: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0914 17:21:16.233962   36356 out.go:345] Setting OutFile to fd 1 ...
	I0914 17:21:16.234061   36356 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 17:21:16.234069   36356 out.go:358] Setting ErrFile to fd 2...
	I0914 17:21:16.234073   36356 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 17:21:16.234276   36356 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19643-8806/.minikube/bin
	I0914 17:21:16.234442   36356 out.go:352] Setting JSON to false
	I0914 17:21:16.234470   36356 mustload.go:65] Loading cluster: ha-929592
	I0914 17:21:16.234502   36356 notify.go:220] Checking for updates...
	I0914 17:21:16.234852   36356 config.go:182] Loaded profile config "ha-929592": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0914 17:21:16.234868   36356 status.go:255] checking status of ha-929592 ...
	I0914 17:21:16.235285   36356 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 17:21:16.235355   36356 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 17:21:16.253349   36356 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40951
	I0914 17:21:16.253842   36356 main.go:141] libmachine: () Calling .GetVersion
	I0914 17:21:16.254439   36356 main.go:141] libmachine: Using API Version  1
	I0914 17:21:16.254466   36356 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 17:21:16.254808   36356 main.go:141] libmachine: () Calling .GetMachineName
	I0914 17:21:16.255015   36356 main.go:141] libmachine: (ha-929592) Calling .GetState
	I0914 17:21:16.256556   36356 status.go:330] ha-929592 host status = "Running" (err=<nil>)
	I0914 17:21:16.256577   36356 host.go:66] Checking if "ha-929592" exists ...
	I0914 17:21:16.256863   36356 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 17:21:16.256899   36356 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 17:21:16.271241   36356 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36075
	I0914 17:21:16.271696   36356 main.go:141] libmachine: () Calling .GetVersion
	I0914 17:21:16.272183   36356 main.go:141] libmachine: Using API Version  1
	I0914 17:21:16.272209   36356 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 17:21:16.272570   36356 main.go:141] libmachine: () Calling .GetMachineName
	I0914 17:21:16.272723   36356 main.go:141] libmachine: (ha-929592) Calling .GetIP
	I0914 17:21:16.275541   36356 main.go:141] libmachine: (ha-929592) DBG | domain ha-929592 has defined MAC address 52:54:00:5c:cb:09 in network mk-ha-929592
	I0914 17:21:16.276038   36356 main.go:141] libmachine: (ha-929592) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:cb:09", ip: ""} in network mk-ha-929592: {Iface:virbr1 ExpiryTime:2024-09-14 18:05:06 +0000 UTC Type:0 Mac:52:54:00:5c:cb:09 Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:ha-929592 Clientid:01:52:54:00:5c:cb:09}
	I0914 17:21:16.276073   36356 main.go:141] libmachine: (ha-929592) DBG | domain ha-929592 has defined IP address 192.168.39.54 and MAC address 52:54:00:5c:cb:09 in network mk-ha-929592
	I0914 17:21:16.276158   36356 host.go:66] Checking if "ha-929592" exists ...
	I0914 17:21:16.276523   36356 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 17:21:16.276580   36356 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 17:21:16.291830   36356 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43917
	I0914 17:21:16.292311   36356 main.go:141] libmachine: () Calling .GetVersion
	I0914 17:21:16.292819   36356 main.go:141] libmachine: Using API Version  1
	I0914 17:21:16.292841   36356 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 17:21:16.293133   36356 main.go:141] libmachine: () Calling .GetMachineName
	I0914 17:21:16.293324   36356 main.go:141] libmachine: (ha-929592) Calling .DriverName
	I0914 17:21:16.293495   36356 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0914 17:21:16.293530   36356 main.go:141] libmachine: (ha-929592) Calling .GetSSHHostname
	I0914 17:21:16.296291   36356 main.go:141] libmachine: (ha-929592) DBG | domain ha-929592 has defined MAC address 52:54:00:5c:cb:09 in network mk-ha-929592
	I0914 17:21:16.296744   36356 main.go:141] libmachine: (ha-929592) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:cb:09", ip: ""} in network mk-ha-929592: {Iface:virbr1 ExpiryTime:2024-09-14 18:05:06 +0000 UTC Type:0 Mac:52:54:00:5c:cb:09 Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:ha-929592 Clientid:01:52:54:00:5c:cb:09}
	I0914 17:21:16.296766   36356 main.go:141] libmachine: (ha-929592) DBG | domain ha-929592 has defined IP address 192.168.39.54 and MAC address 52:54:00:5c:cb:09 in network mk-ha-929592
	I0914 17:21:16.296936   36356 main.go:141] libmachine: (ha-929592) Calling .GetSSHPort
	I0914 17:21:16.297133   36356 main.go:141] libmachine: (ha-929592) Calling .GetSSHKeyPath
	I0914 17:21:16.297270   36356 main.go:141] libmachine: (ha-929592) Calling .GetSSHUsername
	I0914 17:21:16.297425   36356 sshutil.go:53] new ssh client: &{IP:192.168.39.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/ha-929592/id_rsa Username:docker}
	I0914 17:21:16.387344   36356 ssh_runner.go:195] Run: systemctl --version
	I0914 17:21:16.394440   36356 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 17:21:16.414972   36356 kubeconfig.go:125] found "ha-929592" server: "https://192.168.39.254:8443"
	I0914 17:21:16.415008   36356 api_server.go:166] Checking apiserver status ...
	I0914 17:21:16.415041   36356 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 17:21:16.438998   36356 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/4927/cgroup
	W0914 17:21:16.450607   36356 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/4927/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0914 17:21:16.450669   36356 ssh_runner.go:195] Run: ls
	I0914 17:21:16.456105   36356 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0914 17:21:16.463230   36356 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0914 17:21:16.463252   36356 status.go:422] ha-929592 apiserver status = Running (err=<nil>)
	I0914 17:21:16.463262   36356 status.go:257] ha-929592 status: &{Name:ha-929592 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0914 17:21:16.463289   36356 status.go:255] checking status of ha-929592-m02 ...
	I0914 17:21:16.463580   36356 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 17:21:16.463628   36356 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 17:21:16.478759   36356 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41905
	I0914 17:21:16.479201   36356 main.go:141] libmachine: () Calling .GetVersion
	I0914 17:21:16.479647   36356 main.go:141] libmachine: Using API Version  1
	I0914 17:21:16.479666   36356 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 17:21:16.480056   36356 main.go:141] libmachine: () Calling .GetMachineName
	I0914 17:21:16.480251   36356 main.go:141] libmachine: (ha-929592-m02) Calling .GetState
	I0914 17:21:16.482045   36356 status.go:330] ha-929592-m02 host status = "Running" (err=<nil>)
	I0914 17:21:16.482060   36356 host.go:66] Checking if "ha-929592-m02" exists ...
	I0914 17:21:16.482400   36356 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 17:21:16.482443   36356 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 17:21:16.497031   36356 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36117
	I0914 17:21:16.497485   36356 main.go:141] libmachine: () Calling .GetVersion
	I0914 17:21:16.497930   36356 main.go:141] libmachine: Using API Version  1
	I0914 17:21:16.497955   36356 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 17:21:16.498255   36356 main.go:141] libmachine: () Calling .GetMachineName
	I0914 17:21:16.498438   36356 main.go:141] libmachine: (ha-929592-m02) Calling .GetIP
	I0914 17:21:16.500887   36356 main.go:141] libmachine: (ha-929592-m02) DBG | domain ha-929592-m02 has defined MAC address 52:54:00:23:9e:43 in network mk-ha-929592
	I0914 17:21:16.501266   36356 main.go:141] libmachine: (ha-929592-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:9e:43", ip: ""} in network mk-ha-929592: {Iface:virbr1 ExpiryTime:2024-09-14 18:17:02 +0000 UTC Type:0 Mac:52:54:00:23:9e:43 Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:ha-929592-m02 Clientid:01:52:54:00:23:9e:43}
	I0914 17:21:16.501282   36356 main.go:141] libmachine: (ha-929592-m02) DBG | domain ha-929592-m02 has defined IP address 192.168.39.148 and MAC address 52:54:00:23:9e:43 in network mk-ha-929592
	I0914 17:21:16.501388   36356 host.go:66] Checking if "ha-929592-m02" exists ...
	I0914 17:21:16.501690   36356 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 17:21:16.501742   36356 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 17:21:16.516326   36356 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44837
	I0914 17:21:16.516761   36356 main.go:141] libmachine: () Calling .GetVersion
	I0914 17:21:16.517242   36356 main.go:141] libmachine: Using API Version  1
	I0914 17:21:16.517262   36356 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 17:21:16.517592   36356 main.go:141] libmachine: () Calling .GetMachineName
	I0914 17:21:16.517744   36356 main.go:141] libmachine: (ha-929592-m02) Calling .DriverName
	I0914 17:21:16.517916   36356 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0914 17:21:16.517946   36356 main.go:141] libmachine: (ha-929592-m02) Calling .GetSSHHostname
	I0914 17:21:16.520939   36356 main.go:141] libmachine: (ha-929592-m02) DBG | domain ha-929592-m02 has defined MAC address 52:54:00:23:9e:43 in network mk-ha-929592
	I0914 17:21:16.521417   36356 main.go:141] libmachine: (ha-929592-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:9e:43", ip: ""} in network mk-ha-929592: {Iface:virbr1 ExpiryTime:2024-09-14 18:17:02 +0000 UTC Type:0 Mac:52:54:00:23:9e:43 Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:ha-929592-m02 Clientid:01:52:54:00:23:9e:43}
	I0914 17:21:16.521441   36356 main.go:141] libmachine: (ha-929592-m02) DBG | domain ha-929592-m02 has defined IP address 192.168.39.148 and MAC address 52:54:00:23:9e:43 in network mk-ha-929592
	I0914 17:21:16.521580   36356 main.go:141] libmachine: (ha-929592-m02) Calling .GetSSHPort
	I0914 17:21:16.521757   36356 main.go:141] libmachine: (ha-929592-m02) Calling .GetSSHKeyPath
	I0914 17:21:16.521886   36356 main.go:141] libmachine: (ha-929592-m02) Calling .GetSSHUsername
	I0914 17:21:16.522020   36356 sshutil.go:53] new ssh client: &{IP:192.168.39.148 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/ha-929592-m02/id_rsa Username:docker}
	I0914 17:21:16.607225   36356 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 17:21:16.623664   36356 kubeconfig.go:125] found "ha-929592" server: "https://192.168.39.254:8443"
	I0914 17:21:16.623687   36356 api_server.go:166] Checking apiserver status ...
	I0914 17:21:16.623715   36356 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 17:21:16.639613   36356 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1462/cgroup
	W0914 17:21:16.650061   36356 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1462/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0914 17:21:16.650122   36356 ssh_runner.go:195] Run: ls
	I0914 17:21:16.654185   36356 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0914 17:21:16.658316   36356 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0914 17:21:16.658344   36356 status.go:422] ha-929592-m02 apiserver status = Running (err=<nil>)
	I0914 17:21:16.658354   36356 status.go:257] ha-929592-m02 status: &{Name:ha-929592-m02 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0914 17:21:16.658371   36356 status.go:255] checking status of ha-929592-m04 ...
	I0914 17:21:16.658689   36356 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 17:21:16.658724   36356 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 17:21:16.673520   36356 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34305
	I0914 17:21:16.674013   36356 main.go:141] libmachine: () Calling .GetVersion
	I0914 17:21:16.674548   36356 main.go:141] libmachine: Using API Version  1
	I0914 17:21:16.674569   36356 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 17:21:16.674913   36356 main.go:141] libmachine: () Calling .GetMachineName
	I0914 17:21:16.675076   36356 main.go:141] libmachine: (ha-929592-m04) Calling .GetState
	I0914 17:21:16.676565   36356 status.go:330] ha-929592-m04 host status = "Running" (err=<nil>)
	I0914 17:21:16.676578   36356 host.go:66] Checking if "ha-929592-m04" exists ...
	I0914 17:21:16.676904   36356 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 17:21:16.676965   36356 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 17:21:16.691987   36356 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39507
	I0914 17:21:16.692480   36356 main.go:141] libmachine: () Calling .GetVersion
	I0914 17:21:16.692931   36356 main.go:141] libmachine: Using API Version  1
	I0914 17:21:16.692952   36356 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 17:21:16.693269   36356 main.go:141] libmachine: () Calling .GetMachineName
	I0914 17:21:16.693385   36356 main.go:141] libmachine: (ha-929592-m04) Calling .GetIP
	I0914 17:21:16.696320   36356 main.go:141] libmachine: (ha-929592-m04) DBG | domain ha-929592-m04 has defined MAC address 52:54:00:7a:18:a1 in network mk-ha-929592
	I0914 17:21:16.696883   36356 main.go:141] libmachine: (ha-929592-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:18:a1", ip: ""} in network mk-ha-929592: {Iface:virbr1 ExpiryTime:2024-09-14 18:18:44 +0000 UTC Type:0 Mac:52:54:00:7a:18:a1 Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:ha-929592-m04 Clientid:01:52:54:00:7a:18:a1}
	I0914 17:21:16.696910   36356 main.go:141] libmachine: (ha-929592-m04) DBG | domain ha-929592-m04 has defined IP address 192.168.39.51 and MAC address 52:54:00:7a:18:a1 in network mk-ha-929592
	I0914 17:21:16.697069   36356 host.go:66] Checking if "ha-929592-m04" exists ...
	I0914 17:21:16.697444   36356 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 17:21:16.697481   36356 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 17:21:16.712489   36356 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46143
	I0914 17:21:16.712915   36356 main.go:141] libmachine: () Calling .GetVersion
	I0914 17:21:16.713441   36356 main.go:141] libmachine: Using API Version  1
	I0914 17:21:16.713461   36356 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 17:21:16.713808   36356 main.go:141] libmachine: () Calling .GetMachineName
	I0914 17:21:16.714016   36356 main.go:141] libmachine: (ha-929592-m04) Calling .DriverName
	I0914 17:21:16.714293   36356 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0914 17:21:16.714316   36356 main.go:141] libmachine: (ha-929592-m04) Calling .GetSSHHostname
	I0914 17:21:16.716811   36356 main.go:141] libmachine: (ha-929592-m04) DBG | domain ha-929592-m04 has defined MAC address 52:54:00:7a:18:a1 in network mk-ha-929592
	I0914 17:21:16.717180   36356 main.go:141] libmachine: (ha-929592-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:18:a1", ip: ""} in network mk-ha-929592: {Iface:virbr1 ExpiryTime:2024-09-14 18:18:44 +0000 UTC Type:0 Mac:52:54:00:7a:18:a1 Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:ha-929592-m04 Clientid:01:52:54:00:7a:18:a1}
	I0914 17:21:16.717202   36356 main.go:141] libmachine: (ha-929592-m04) DBG | domain ha-929592-m04 has defined IP address 192.168.39.51 and MAC address 52:54:00:7a:18:a1 in network mk-ha-929592
	I0914 17:21:16.717332   36356 main.go:141] libmachine: (ha-929592-m04) Calling .GetSSHPort
	I0914 17:21:16.717469   36356 main.go:141] libmachine: (ha-929592-m04) Calling .GetSSHKeyPath
	I0914 17:21:16.717596   36356 main.go:141] libmachine: (ha-929592-m04) Calling .GetSSHUsername
	I0914 17:21:16.717721   36356 sshutil.go:53] new ssh client: &{IP:192.168.39.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/ha-929592-m04/id_rsa Username:docker}
	W0914 17:21:35.266459   36356 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.51:22: connect: no route to host
	W0914 17:21:35.266562   36356 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.51:22: connect: no route to host
	E0914 17:21:35.266576   36356 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.51:22: connect: no route to host
	I0914 17:21:35.266583   36356 status.go:257] ha-929592-m04 status: &{Name:ha-929592-m04 Host:Error Kubelet:Nonexistent APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	E0914 17:21:35.266603   36356 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.51:22: connect: no route to host

                                                
                                                
** /stderr **
ha_test.go:540: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-929592 status -v=7 --alsologtostderr" : exit status 3
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-929592 -n ha-929592
helpers_test.go:244: <<< TestMultiControlPlane/serial/StopCluster FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/StopCluster]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-929592 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-929592 logs -n 25: (1.657540783s)
helpers_test.go:252: TestMultiControlPlane/serial/StopCluster logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                      Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-929592 ssh -n ha-929592-m02 sudo cat                                         | ha-929592 | jenkins | v1.34.0 | 14 Sep 24 17:09 UTC | 14 Sep 24 17:09 UTC |
	|         | /home/docker/cp-test_ha-929592-m03_ha-929592-m02.txt                            |           |         |         |                     |                     |
	| cp      | ha-929592 cp ha-929592-m03:/home/docker/cp-test.txt                             | ha-929592 | jenkins | v1.34.0 | 14 Sep 24 17:09 UTC | 14 Sep 24 17:09 UTC |
	|         | ha-929592-m04:/home/docker/cp-test_ha-929592-m03_ha-929592-m04.txt              |           |         |         |                     |                     |
	| ssh     | ha-929592 ssh -n                                                                | ha-929592 | jenkins | v1.34.0 | 14 Sep 24 17:09 UTC | 14 Sep 24 17:09 UTC |
	|         | ha-929592-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-929592 ssh -n ha-929592-m04 sudo cat                                         | ha-929592 | jenkins | v1.34.0 | 14 Sep 24 17:09 UTC | 14 Sep 24 17:09 UTC |
	|         | /home/docker/cp-test_ha-929592-m03_ha-929592-m04.txt                            |           |         |         |                     |                     |
	| cp      | ha-929592 cp testdata/cp-test.txt                                               | ha-929592 | jenkins | v1.34.0 | 14 Sep 24 17:09 UTC | 14 Sep 24 17:09 UTC |
	|         | ha-929592-m04:/home/docker/cp-test.txt                                          |           |         |         |                     |                     |
	| ssh     | ha-929592 ssh -n                                                                | ha-929592 | jenkins | v1.34.0 | 14 Sep 24 17:09 UTC | 14 Sep 24 17:09 UTC |
	|         | ha-929592-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| cp      | ha-929592 cp ha-929592-m04:/home/docker/cp-test.txt                             | ha-929592 | jenkins | v1.34.0 | 14 Sep 24 17:09 UTC | 14 Sep 24 17:09 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile183020175/001/cp-test_ha-929592-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-929592 ssh -n                                                                | ha-929592 | jenkins | v1.34.0 | 14 Sep 24 17:09 UTC | 14 Sep 24 17:09 UTC |
	|         | ha-929592-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| cp      | ha-929592 cp ha-929592-m04:/home/docker/cp-test.txt                             | ha-929592 | jenkins | v1.34.0 | 14 Sep 24 17:09 UTC | 14 Sep 24 17:09 UTC |
	|         | ha-929592:/home/docker/cp-test_ha-929592-m04_ha-929592.txt                      |           |         |         |                     |                     |
	| ssh     | ha-929592 ssh -n                                                                | ha-929592 | jenkins | v1.34.0 | 14 Sep 24 17:09 UTC | 14 Sep 24 17:09 UTC |
	|         | ha-929592-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-929592 ssh -n ha-929592 sudo cat                                             | ha-929592 | jenkins | v1.34.0 | 14 Sep 24 17:09 UTC | 14 Sep 24 17:09 UTC |
	|         | /home/docker/cp-test_ha-929592-m04_ha-929592.txt                                |           |         |         |                     |                     |
	| cp      | ha-929592 cp ha-929592-m04:/home/docker/cp-test.txt                             | ha-929592 | jenkins | v1.34.0 | 14 Sep 24 17:09 UTC | 14 Sep 24 17:09 UTC |
	|         | ha-929592-m02:/home/docker/cp-test_ha-929592-m04_ha-929592-m02.txt              |           |         |         |                     |                     |
	| ssh     | ha-929592 ssh -n                                                                | ha-929592 | jenkins | v1.34.0 | 14 Sep 24 17:09 UTC | 14 Sep 24 17:09 UTC |
	|         | ha-929592-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-929592 ssh -n ha-929592-m02 sudo cat                                         | ha-929592 | jenkins | v1.34.0 | 14 Sep 24 17:09 UTC | 14 Sep 24 17:09 UTC |
	|         | /home/docker/cp-test_ha-929592-m04_ha-929592-m02.txt                            |           |         |         |                     |                     |
	| cp      | ha-929592 cp ha-929592-m04:/home/docker/cp-test.txt                             | ha-929592 | jenkins | v1.34.0 | 14 Sep 24 17:09 UTC | 14 Sep 24 17:09 UTC |
	|         | ha-929592-m03:/home/docker/cp-test_ha-929592-m04_ha-929592-m03.txt              |           |         |         |                     |                     |
	| ssh     | ha-929592 ssh -n                                                                | ha-929592 | jenkins | v1.34.0 | 14 Sep 24 17:09 UTC | 14 Sep 24 17:09 UTC |
	|         | ha-929592-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-929592 ssh -n ha-929592-m03 sudo cat                                         | ha-929592 | jenkins | v1.34.0 | 14 Sep 24 17:09 UTC | 14 Sep 24 17:09 UTC |
	|         | /home/docker/cp-test_ha-929592-m04_ha-929592-m03.txt                            |           |         |         |                     |                     |
	| node    | ha-929592 node stop m02 -v=7                                                    | ha-929592 | jenkins | v1.34.0 | 14 Sep 24 17:09 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| node    | ha-929592 node start m02 -v=7                                                   | ha-929592 | jenkins | v1.34.0 | 14 Sep 24 17:12 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| node    | list -p ha-929592 -v=7                                                          | ha-929592 | jenkins | v1.34.0 | 14 Sep 24 17:13 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| stop    | -p ha-929592 -v=7                                                               | ha-929592 | jenkins | v1.34.0 | 14 Sep 24 17:13 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| start   | -p ha-929592 --wait=true -v=7                                                   | ha-929592 | jenkins | v1.34.0 | 14 Sep 24 17:15 UTC | 14 Sep 24 17:18 UTC |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| node    | list -p ha-929592                                                               | ha-929592 | jenkins | v1.34.0 | 14 Sep 24 17:18 UTC |                     |
	| node    | ha-929592 node delete m03 -v=7                                                  | ha-929592 | jenkins | v1.34.0 | 14 Sep 24 17:18 UTC | 14 Sep 24 17:19 UTC |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| stop    | ha-929592 stop -v=7                                                             | ha-929592 | jenkins | v1.34.0 | 14 Sep 24 17:19 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/14 17:15:10
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0914 17:15:10.602753   33797 out.go:345] Setting OutFile to fd 1 ...
	I0914 17:15:10.602849   33797 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 17:15:10.602854   33797 out.go:358] Setting ErrFile to fd 2...
	I0914 17:15:10.602858   33797 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 17:15:10.603035   33797 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19643-8806/.minikube/bin
	I0914 17:15:10.603549   33797 out.go:352] Setting JSON to false
	I0914 17:15:10.604450   33797 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":3455,"bootTime":1726330656,"procs":187,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0914 17:15:10.604539   33797 start.go:139] virtualization: kvm guest
	I0914 17:15:10.606694   33797 out.go:177] * [ha-929592] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0914 17:15:10.607843   33797 out.go:177]   - MINIKUBE_LOCATION=19643
	I0914 17:15:10.607848   33797 notify.go:220] Checking for updates...
	I0914 17:15:10.610014   33797 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0914 17:15:10.611077   33797 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19643-8806/kubeconfig
	I0914 17:15:10.612167   33797 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19643-8806/.minikube
	I0914 17:15:10.613267   33797 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0914 17:15:10.614470   33797 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0914 17:15:10.616166   33797 config.go:182] Loaded profile config "ha-929592": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0914 17:15:10.616290   33797 driver.go:394] Setting default libvirt URI to qemu:///system
	I0914 17:15:10.616765   33797 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 17:15:10.616809   33797 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 17:15:10.634573   33797 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38725
	I0914 17:15:10.635108   33797 main.go:141] libmachine: () Calling .GetVersion
	I0914 17:15:10.635658   33797 main.go:141] libmachine: Using API Version  1
	I0914 17:15:10.635677   33797 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 17:15:10.636064   33797 main.go:141] libmachine: () Calling .GetMachineName
	I0914 17:15:10.636258   33797 main.go:141] libmachine: (ha-929592) Calling .DriverName
	I0914 17:15:10.672119   33797 out.go:177] * Using the kvm2 driver based on existing profile
	I0914 17:15:10.673127   33797 start.go:297] selected driver: kvm2
	I0914 17:15:10.673140   33797 start.go:901] validating driver "kvm2" against &{Name:ha-929592 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19643/minikube-v1.34.0-1726281733-19643-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726281268-19643@sha256:cce8be0c1ac4e3d852132008ef1cc1dcf5b79f708d025db83f146ae65db32e8e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVer
sion:v1.31.1 ClusterName:ha-929592 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.54 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.148 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.39 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.51 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:
false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2
000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0914 17:15:10.673277   33797 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0914 17:15:10.673618   33797 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 17:15:10.673694   33797 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19643-8806/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0914 17:15:10.689269   33797 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0914 17:15:10.689980   33797 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0914 17:15:10.690018   33797 cni.go:84] Creating CNI manager for ""
	I0914 17:15:10.690064   33797 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0914 17:15:10.690122   33797 start.go:340] cluster config:
	{Name:ha-929592 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19643/minikube-v1.34.0-1726281733-19643-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726281268-19643@sha256:cce8be0c1ac4e3d852132008ef1cc1dcf5b79f708d025db83f146ae65db32e8e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-929592 Namespace:default APIServerHAVIP:192.168.39
.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.54 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.148 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.39 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.51 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tille
r:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort
:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0914 17:15:10.690298   33797 iso.go:125] acquiring lock: {Name:mk538f7a4abc7956f82511183088cbfac1e66ebb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 17:15:10.691995   33797 out.go:177] * Starting "ha-929592" primary control-plane node in "ha-929592" cluster
	I0914 17:15:10.692880   33797 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0914 17:15:10.692923   33797 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19643-8806/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0914 17:15:10.692930   33797 cache.go:56] Caching tarball of preloaded images
	I0914 17:15:10.693013   33797 preload.go:172] Found /home/jenkins/minikube-integration/19643-8806/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0914 17:15:10.693026   33797 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0914 17:15:10.693156   33797 profile.go:143] Saving config to /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/ha-929592/config.json ...
	I0914 17:15:10.693347   33797 start.go:360] acquireMachinesLock for ha-929592: {Name:mk26748ac63472c3f8b6f3848c12e76160c2970c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0914 17:15:10.693391   33797 start.go:364] duration metric: took 26.138µs to acquireMachinesLock for "ha-929592"
	I0914 17:15:10.693409   33797 start.go:96] Skipping create...Using existing machine configuration
	I0914 17:15:10.693423   33797 fix.go:54] fixHost starting: 
	I0914 17:15:10.693699   33797 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 17:15:10.693736   33797 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 17:15:10.709624   33797 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45623
	I0914 17:15:10.709986   33797 main.go:141] libmachine: () Calling .GetVersion
	I0914 17:15:10.710523   33797 main.go:141] libmachine: Using API Version  1
	I0914 17:15:10.710551   33797 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 17:15:10.710858   33797 main.go:141] libmachine: () Calling .GetMachineName
	I0914 17:15:10.711072   33797 main.go:141] libmachine: (ha-929592) Calling .DriverName
	I0914 17:15:10.711185   33797 main.go:141] libmachine: (ha-929592) Calling .GetState
	I0914 17:15:10.712907   33797 fix.go:112] recreateIfNeeded on ha-929592: state=Running err=<nil>
	W0914 17:15:10.712942   33797 fix.go:138] unexpected machine state, will restart: <nil>
	I0914 17:15:10.714848   33797 out.go:177] * Updating the running kvm2 "ha-929592" VM ...
	I0914 17:15:10.715854   33797 machine.go:93] provisionDockerMachine start ...
	I0914 17:15:10.715875   33797 main.go:141] libmachine: (ha-929592) Calling .DriverName
	I0914 17:15:10.716054   33797 main.go:141] libmachine: (ha-929592) Calling .GetSSHHostname
	I0914 17:15:10.718601   33797 main.go:141] libmachine: (ha-929592) DBG | domain ha-929592 has defined MAC address 52:54:00:5c:cb:09 in network mk-ha-929592
	I0914 17:15:10.719090   33797 main.go:141] libmachine: (ha-929592) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:cb:09", ip: ""} in network mk-ha-929592: {Iface:virbr1 ExpiryTime:2024-09-14 18:05:06 +0000 UTC Type:0 Mac:52:54:00:5c:cb:09 Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:ha-929592 Clientid:01:52:54:00:5c:cb:09}
	I0914 17:15:10.719111   33797 main.go:141] libmachine: (ha-929592) DBG | domain ha-929592 has defined IP address 192.168.39.54 and MAC address 52:54:00:5c:cb:09 in network mk-ha-929592
	I0914 17:15:10.719254   33797 main.go:141] libmachine: (ha-929592) Calling .GetSSHPort
	I0914 17:15:10.719412   33797 main.go:141] libmachine: (ha-929592) Calling .GetSSHKeyPath
	I0914 17:15:10.719559   33797 main.go:141] libmachine: (ha-929592) Calling .GetSSHKeyPath
	I0914 17:15:10.719672   33797 main.go:141] libmachine: (ha-929592) Calling .GetSSHUsername
	I0914 17:15:10.719819   33797 main.go:141] libmachine: Using SSH client type: native
	I0914 17:15:10.720047   33797 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.54 22 <nil> <nil>}
	I0914 17:15:10.720062   33797 main.go:141] libmachine: About to run SSH command:
	hostname
	I0914 17:15:10.838978   33797 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-929592
	
	I0914 17:15:10.839012   33797 main.go:141] libmachine: (ha-929592) Calling .GetMachineName
	I0914 17:15:10.839228   33797 buildroot.go:166] provisioning hostname "ha-929592"
	I0914 17:15:10.839250   33797 main.go:141] libmachine: (ha-929592) Calling .GetMachineName
	I0914 17:15:10.839408   33797 main.go:141] libmachine: (ha-929592) Calling .GetSSHHostname
	I0914 17:15:10.841950   33797 main.go:141] libmachine: (ha-929592) DBG | domain ha-929592 has defined MAC address 52:54:00:5c:cb:09 in network mk-ha-929592
	I0914 17:15:10.842336   33797 main.go:141] libmachine: (ha-929592) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:cb:09", ip: ""} in network mk-ha-929592: {Iface:virbr1 ExpiryTime:2024-09-14 18:05:06 +0000 UTC Type:0 Mac:52:54:00:5c:cb:09 Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:ha-929592 Clientid:01:52:54:00:5c:cb:09}
	I0914 17:15:10.842365   33797 main.go:141] libmachine: (ha-929592) DBG | domain ha-929592 has defined IP address 192.168.39.54 and MAC address 52:54:00:5c:cb:09 in network mk-ha-929592
	I0914 17:15:10.842479   33797 main.go:141] libmachine: (ha-929592) Calling .GetSSHPort
	I0914 17:15:10.842637   33797 main.go:141] libmachine: (ha-929592) Calling .GetSSHKeyPath
	I0914 17:15:10.842752   33797 main.go:141] libmachine: (ha-929592) Calling .GetSSHKeyPath
	I0914 17:15:10.842840   33797 main.go:141] libmachine: (ha-929592) Calling .GetSSHUsername
	I0914 17:15:10.842946   33797 main.go:141] libmachine: Using SSH client type: native
	I0914 17:15:10.843182   33797 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.54 22 <nil> <nil>}
	I0914 17:15:10.843204   33797 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-929592 && echo "ha-929592" | sudo tee /etc/hostname
	I0914 17:15:10.973633   33797 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-929592
	
	I0914 17:15:10.973660   33797 main.go:141] libmachine: (ha-929592) Calling .GetSSHHostname
	I0914 17:15:10.976238   33797 main.go:141] libmachine: (ha-929592) DBG | domain ha-929592 has defined MAC address 52:54:00:5c:cb:09 in network mk-ha-929592
	I0914 17:15:10.976669   33797 main.go:141] libmachine: (ha-929592) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:cb:09", ip: ""} in network mk-ha-929592: {Iface:virbr1 ExpiryTime:2024-09-14 18:05:06 +0000 UTC Type:0 Mac:52:54:00:5c:cb:09 Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:ha-929592 Clientid:01:52:54:00:5c:cb:09}
	I0914 17:15:10.976697   33797 main.go:141] libmachine: (ha-929592) DBG | domain ha-929592 has defined IP address 192.168.39.54 and MAC address 52:54:00:5c:cb:09 in network mk-ha-929592
	I0914 17:15:10.976880   33797 main.go:141] libmachine: (ha-929592) Calling .GetSSHPort
	I0914 17:15:10.977071   33797 main.go:141] libmachine: (ha-929592) Calling .GetSSHKeyPath
	I0914 17:15:10.977229   33797 main.go:141] libmachine: (ha-929592) Calling .GetSSHKeyPath
	I0914 17:15:10.977344   33797 main.go:141] libmachine: (ha-929592) Calling .GetSSHUsername
	I0914 17:15:10.977532   33797 main.go:141] libmachine: Using SSH client type: native
	I0914 17:15:10.977718   33797 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.54 22 <nil> <nil>}
	I0914 17:15:10.977739   33797 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-929592' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-929592/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-929592' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0914 17:15:11.090972   33797 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0914 17:15:11.090995   33797 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19643-8806/.minikube CaCertPath:/home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19643-8806/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19643-8806/.minikube}
	I0914 17:15:11.091028   33797 buildroot.go:174] setting up certificates
	I0914 17:15:11.091036   33797 provision.go:84] configureAuth start
	I0914 17:15:11.091046   33797 main.go:141] libmachine: (ha-929592) Calling .GetMachineName
	I0914 17:15:11.091295   33797 main.go:141] libmachine: (ha-929592) Calling .GetIP
	I0914 17:15:11.093895   33797 main.go:141] libmachine: (ha-929592) DBG | domain ha-929592 has defined MAC address 52:54:00:5c:cb:09 in network mk-ha-929592
	I0914 17:15:11.094260   33797 main.go:141] libmachine: (ha-929592) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:cb:09", ip: ""} in network mk-ha-929592: {Iface:virbr1 ExpiryTime:2024-09-14 18:05:06 +0000 UTC Type:0 Mac:52:54:00:5c:cb:09 Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:ha-929592 Clientid:01:52:54:00:5c:cb:09}
	I0914 17:15:11.094299   33797 main.go:141] libmachine: (ha-929592) DBG | domain ha-929592 has defined IP address 192.168.39.54 and MAC address 52:54:00:5c:cb:09 in network mk-ha-929592
	I0914 17:15:11.094430   33797 main.go:141] libmachine: (ha-929592) Calling .GetSSHHostname
	I0914 17:15:11.096417   33797 main.go:141] libmachine: (ha-929592) DBG | domain ha-929592 has defined MAC address 52:54:00:5c:cb:09 in network mk-ha-929592
	I0914 17:15:11.096750   33797 main.go:141] libmachine: (ha-929592) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:cb:09", ip: ""} in network mk-ha-929592: {Iface:virbr1 ExpiryTime:2024-09-14 18:05:06 +0000 UTC Type:0 Mac:52:54:00:5c:cb:09 Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:ha-929592 Clientid:01:52:54:00:5c:cb:09}
	I0914 17:15:11.096769   33797 main.go:141] libmachine: (ha-929592) DBG | domain ha-929592 has defined IP address 192.168.39.54 and MAC address 52:54:00:5c:cb:09 in network mk-ha-929592
	I0914 17:15:11.096859   33797 provision.go:143] copyHostCerts
	I0914 17:15:11.096900   33797 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19643-8806/.minikube/cert.pem
	I0914 17:15:11.096932   33797 exec_runner.go:144] found /home/jenkins/minikube-integration/19643-8806/.minikube/cert.pem, removing ...
	I0914 17:15:11.096941   33797 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19643-8806/.minikube/cert.pem
	I0914 17:15:11.097003   33797 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19643-8806/.minikube/cert.pem (1123 bytes)
	I0914 17:15:11.097083   33797 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19643-8806/.minikube/key.pem
	I0914 17:15:11.097104   33797 exec_runner.go:144] found /home/jenkins/minikube-integration/19643-8806/.minikube/key.pem, removing ...
	I0914 17:15:11.097108   33797 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19643-8806/.minikube/key.pem
	I0914 17:15:11.097132   33797 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19643-8806/.minikube/key.pem (1675 bytes)
	I0914 17:15:11.097172   33797 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19643-8806/.minikube/ca.pem
	I0914 17:15:11.097188   33797 exec_runner.go:144] found /home/jenkins/minikube-integration/19643-8806/.minikube/ca.pem, removing ...
	I0914 17:15:11.097192   33797 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19643-8806/.minikube/ca.pem
	I0914 17:15:11.097217   33797 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19643-8806/.minikube/ca.pem (1082 bytes)
	I0914 17:15:11.097261   33797 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19643-8806/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca-key.pem org=jenkins.ha-929592 san=[127.0.0.1 192.168.39.54 ha-929592 localhost minikube]
	I0914 17:15:11.263507   33797 provision.go:177] copyRemoteCerts
	I0914 17:15:11.263571   33797 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0914 17:15:11.263593   33797 main.go:141] libmachine: (ha-929592) Calling .GetSSHHostname
	I0914 17:15:11.266211   33797 main.go:141] libmachine: (ha-929592) DBG | domain ha-929592 has defined MAC address 52:54:00:5c:cb:09 in network mk-ha-929592
	I0914 17:15:11.266533   33797 main.go:141] libmachine: (ha-929592) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:cb:09", ip: ""} in network mk-ha-929592: {Iface:virbr1 ExpiryTime:2024-09-14 18:05:06 +0000 UTC Type:0 Mac:52:54:00:5c:cb:09 Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:ha-929592 Clientid:01:52:54:00:5c:cb:09}
	I0914 17:15:11.266571   33797 main.go:141] libmachine: (ha-929592) DBG | domain ha-929592 has defined IP address 192.168.39.54 and MAC address 52:54:00:5c:cb:09 in network mk-ha-929592
	I0914 17:15:11.266695   33797 main.go:141] libmachine: (ha-929592) Calling .GetSSHPort
	I0914 17:15:11.266848   33797 main.go:141] libmachine: (ha-929592) Calling .GetSSHKeyPath
	I0914 17:15:11.266968   33797 main.go:141] libmachine: (ha-929592) Calling .GetSSHUsername
	I0914 17:15:11.267069   33797 sshutil.go:53] new ssh client: &{IP:192.168.39.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/ha-929592/id_rsa Username:docker}
	I0914 17:15:11.352949   33797 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0914 17:15:11.353011   33797 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0914 17:15:11.376038   33797 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19643-8806/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0914 17:15:11.376128   33797 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0914 17:15:11.400362   33797 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19643-8806/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0914 17:15:11.400429   33797 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0914 17:15:11.425531   33797 provision.go:87] duration metric: took 334.483325ms to configureAuth
	I0914 17:15:11.425561   33797 buildroot.go:189] setting minikube options for container-runtime
	I0914 17:15:11.425778   33797 config.go:182] Loaded profile config "ha-929592": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0914 17:15:11.425862   33797 main.go:141] libmachine: (ha-929592) Calling .GetSSHHostname
	I0914 17:15:11.428475   33797 main.go:141] libmachine: (ha-929592) DBG | domain ha-929592 has defined MAC address 52:54:00:5c:cb:09 in network mk-ha-929592
	I0914 17:15:11.428870   33797 main.go:141] libmachine: (ha-929592) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:cb:09", ip: ""} in network mk-ha-929592: {Iface:virbr1 ExpiryTime:2024-09-14 18:05:06 +0000 UTC Type:0 Mac:52:54:00:5c:cb:09 Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:ha-929592 Clientid:01:52:54:00:5c:cb:09}
	I0914 17:15:11.428895   33797 main.go:141] libmachine: (ha-929592) DBG | domain ha-929592 has defined IP address 192.168.39.54 and MAC address 52:54:00:5c:cb:09 in network mk-ha-929592
	I0914 17:15:11.429127   33797 main.go:141] libmachine: (ha-929592) Calling .GetSSHPort
	I0914 17:15:11.429294   33797 main.go:141] libmachine: (ha-929592) Calling .GetSSHKeyPath
	I0914 17:15:11.429503   33797 main.go:141] libmachine: (ha-929592) Calling .GetSSHKeyPath
	I0914 17:15:11.429646   33797 main.go:141] libmachine: (ha-929592) Calling .GetSSHUsername
	I0914 17:15:11.429874   33797 main.go:141] libmachine: Using SSH client type: native
	I0914 17:15:11.430064   33797 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.54 22 <nil> <nil>}
	I0914 17:15:11.430082   33797 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0914 17:16:42.114577   33797 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0914 17:16:42.114621   33797 machine.go:96] duration metric: took 1m31.398754249s to provisionDockerMachine
	I0914 17:16:42.114634   33797 start.go:293] postStartSetup for "ha-929592" (driver="kvm2")
	I0914 17:16:42.114648   33797 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0914 17:16:42.114674   33797 main.go:141] libmachine: (ha-929592) Calling .DriverName
	I0914 17:16:42.114982   33797 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0914 17:16:42.115009   33797 main.go:141] libmachine: (ha-929592) Calling .GetSSHHostname
	I0914 17:16:42.118220   33797 main.go:141] libmachine: (ha-929592) DBG | domain ha-929592 has defined MAC address 52:54:00:5c:cb:09 in network mk-ha-929592
	I0914 17:16:42.118791   33797 main.go:141] libmachine: (ha-929592) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:cb:09", ip: ""} in network mk-ha-929592: {Iface:virbr1 ExpiryTime:2024-09-14 18:05:06 +0000 UTC Type:0 Mac:52:54:00:5c:cb:09 Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:ha-929592 Clientid:01:52:54:00:5c:cb:09}
	I0914 17:16:42.118818   33797 main.go:141] libmachine: (ha-929592) DBG | domain ha-929592 has defined IP address 192.168.39.54 and MAC address 52:54:00:5c:cb:09 in network mk-ha-929592
	I0914 17:16:42.119088   33797 main.go:141] libmachine: (ha-929592) Calling .GetSSHPort
	I0914 17:16:42.119254   33797 main.go:141] libmachine: (ha-929592) Calling .GetSSHKeyPath
	I0914 17:16:42.119403   33797 main.go:141] libmachine: (ha-929592) Calling .GetSSHUsername
	I0914 17:16:42.119539   33797 sshutil.go:53] new ssh client: &{IP:192.168.39.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/ha-929592/id_rsa Username:docker}
	I0914 17:16:42.209845   33797 ssh_runner.go:195] Run: cat /etc/os-release
	I0914 17:16:42.214105   33797 info.go:137] Remote host: Buildroot 2023.02.9
	I0914 17:16:42.214135   33797 filesync.go:126] Scanning /home/jenkins/minikube-integration/19643-8806/.minikube/addons for local assets ...
	I0914 17:16:42.214213   33797 filesync.go:126] Scanning /home/jenkins/minikube-integration/19643-8806/.minikube/files for local assets ...
	I0914 17:16:42.214306   33797 filesync.go:149] local asset: /home/jenkins/minikube-integration/19643-8806/.minikube/files/etc/ssl/certs/160162.pem -> 160162.pem in /etc/ssl/certs
	I0914 17:16:42.214315   33797 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19643-8806/.minikube/files/etc/ssl/certs/160162.pem -> /etc/ssl/certs/160162.pem
	I0914 17:16:42.214400   33797 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0914 17:16:42.223501   33797 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/files/etc/ssl/certs/160162.pem --> /etc/ssl/certs/160162.pem (1708 bytes)
	I0914 17:16:42.245903   33797 start.go:296] duration metric: took 131.252389ms for postStartSetup
	I0914 17:16:42.245942   33797 main.go:141] libmachine: (ha-929592) Calling .DriverName
	I0914 17:16:42.246240   33797 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0914 17:16:42.246272   33797 main.go:141] libmachine: (ha-929592) Calling .GetSSHHostname
	I0914 17:16:42.248809   33797 main.go:141] libmachine: (ha-929592) DBG | domain ha-929592 has defined MAC address 52:54:00:5c:cb:09 in network mk-ha-929592
	I0914 17:16:42.249260   33797 main.go:141] libmachine: (ha-929592) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:cb:09", ip: ""} in network mk-ha-929592: {Iface:virbr1 ExpiryTime:2024-09-14 18:05:06 +0000 UTC Type:0 Mac:52:54:00:5c:cb:09 Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:ha-929592 Clientid:01:52:54:00:5c:cb:09}
	I0914 17:16:42.249281   33797 main.go:141] libmachine: (ha-929592) DBG | domain ha-929592 has defined IP address 192.168.39.54 and MAC address 52:54:00:5c:cb:09 in network mk-ha-929592
	I0914 17:16:42.249454   33797 main.go:141] libmachine: (ha-929592) Calling .GetSSHPort
	I0914 17:16:42.249660   33797 main.go:141] libmachine: (ha-929592) Calling .GetSSHKeyPath
	I0914 17:16:42.249812   33797 main.go:141] libmachine: (ha-929592) Calling .GetSSHUsername
	I0914 17:16:42.249947   33797 sshutil.go:53] new ssh client: &{IP:192.168.39.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/ha-929592/id_rsa Username:docker}
	W0914 17:16:42.336372   33797 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I0914 17:16:42.336398   33797 fix.go:56] duration metric: took 1m31.642974944s for fixHost
	I0914 17:16:42.336420   33797 main.go:141] libmachine: (ha-929592) Calling .GetSSHHostname
	I0914 17:16:42.339350   33797 main.go:141] libmachine: (ha-929592) DBG | domain ha-929592 has defined MAC address 52:54:00:5c:cb:09 in network mk-ha-929592
	I0914 17:16:42.339840   33797 main.go:141] libmachine: (ha-929592) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:cb:09", ip: ""} in network mk-ha-929592: {Iface:virbr1 ExpiryTime:2024-09-14 18:05:06 +0000 UTC Type:0 Mac:52:54:00:5c:cb:09 Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:ha-929592 Clientid:01:52:54:00:5c:cb:09}
	I0914 17:16:42.339862   33797 main.go:141] libmachine: (ha-929592) DBG | domain ha-929592 has defined IP address 192.168.39.54 and MAC address 52:54:00:5c:cb:09 in network mk-ha-929592
	I0914 17:16:42.340052   33797 main.go:141] libmachine: (ha-929592) Calling .GetSSHPort
	I0914 17:16:42.340265   33797 main.go:141] libmachine: (ha-929592) Calling .GetSSHKeyPath
	I0914 17:16:42.340399   33797 main.go:141] libmachine: (ha-929592) Calling .GetSSHKeyPath
	I0914 17:16:42.340516   33797 main.go:141] libmachine: (ha-929592) Calling .GetSSHUsername
	I0914 17:16:42.340670   33797 main.go:141] libmachine: Using SSH client type: native
	I0914 17:16:42.340875   33797 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.54 22 <nil> <nil>}
	I0914 17:16:42.340890   33797 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0914 17:16:42.454511   33797 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726334202.418081292
	
	I0914 17:16:42.454533   33797 fix.go:216] guest clock: 1726334202.418081292
	I0914 17:16:42.454541   33797 fix.go:229] Guest: 2024-09-14 17:16:42.418081292 +0000 UTC Remote: 2024-09-14 17:16:42.336405227 +0000 UTC m=+91.769197256 (delta=81.676065ms)
	I0914 17:16:42.454576   33797 fix.go:200] guest clock delta is within tolerance: 81.676065ms
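The step above compares the guest's clock against the host's wall clock and only forces a resync when the measured delta exceeds a tolerance; here the 81.676065ms drift is accepted. A minimal sketch of the same comparison done by hand over the SSH connection shown above (key path and IP taken from this log; the idea of a threshold is illustrative, not minikube's exact value):

	# compare guest and host clocks; minikube performs this check internally in fix.go
	guest=$(ssh -i /home/jenkins/minikube-integration/19643-8806/.minikube/machines/ha-929592/id_rsa docker@192.168.39.54 'date +%s.%N')
	host=$(date +%s.%N)
	echo "guest clock delta: $(echo "$guest - $host" | bc)s"   # resync the guest clock only if this exceeds the tolerance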
	I0914 17:16:42.454581   33797 start.go:83] releasing machines lock for "ha-929592", held for 1m31.76118071s
	I0914 17:16:42.454600   33797 main.go:141] libmachine: (ha-929592) Calling .DriverName
	I0914 17:16:42.454845   33797 main.go:141] libmachine: (ha-929592) Calling .GetIP
	I0914 17:16:42.457270   33797 main.go:141] libmachine: (ha-929592) DBG | domain ha-929592 has defined MAC address 52:54:00:5c:cb:09 in network mk-ha-929592
	I0914 17:16:42.457846   33797 main.go:141] libmachine: (ha-929592) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:cb:09", ip: ""} in network mk-ha-929592: {Iface:virbr1 ExpiryTime:2024-09-14 18:05:06 +0000 UTC Type:0 Mac:52:54:00:5c:cb:09 Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:ha-929592 Clientid:01:52:54:00:5c:cb:09}
	I0914 17:16:42.457869   33797 main.go:141] libmachine: (ha-929592) DBG | domain ha-929592 has defined IP address 192.168.39.54 and MAC address 52:54:00:5c:cb:09 in network mk-ha-929592
	I0914 17:16:42.458066   33797 main.go:141] libmachine: (ha-929592) Calling .DriverName
	I0914 17:16:42.458663   33797 main.go:141] libmachine: (ha-929592) Calling .DriverName
	I0914 17:16:42.458831   33797 main.go:141] libmachine: (ha-929592) Calling .DriverName
	I0914 17:16:42.458929   33797 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0914 17:16:42.458970   33797 main.go:141] libmachine: (ha-929592) Calling .GetSSHHostname
	I0914 17:16:42.459028   33797 ssh_runner.go:195] Run: cat /version.json
	I0914 17:16:42.459053   33797 main.go:141] libmachine: (ha-929592) Calling .GetSSHHostname
	I0914 17:16:42.461742   33797 main.go:141] libmachine: (ha-929592) DBG | domain ha-929592 has defined MAC address 52:54:00:5c:cb:09 in network mk-ha-929592
	I0914 17:16:42.462045   33797 main.go:141] libmachine: (ha-929592) DBG | domain ha-929592 has defined MAC address 52:54:00:5c:cb:09 in network mk-ha-929592
	I0914 17:16:42.462224   33797 main.go:141] libmachine: (ha-929592) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:cb:09", ip: ""} in network mk-ha-929592: {Iface:virbr1 ExpiryTime:2024-09-14 18:05:06 +0000 UTC Type:0 Mac:52:54:00:5c:cb:09 Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:ha-929592 Clientid:01:52:54:00:5c:cb:09}
	I0914 17:16:42.462250   33797 main.go:141] libmachine: (ha-929592) DBG | domain ha-929592 has defined IP address 192.168.39.54 and MAC address 52:54:00:5c:cb:09 in network mk-ha-929592
	I0914 17:16:42.462397   33797 main.go:141] libmachine: (ha-929592) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:cb:09", ip: ""} in network mk-ha-929592: {Iface:virbr1 ExpiryTime:2024-09-14 18:05:06 +0000 UTC Type:0 Mac:52:54:00:5c:cb:09 Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:ha-929592 Clientid:01:52:54:00:5c:cb:09}
	I0914 17:16:42.462411   33797 main.go:141] libmachine: (ha-929592) Calling .GetSSHPort
	I0914 17:16:42.462423   33797 main.go:141] libmachine: (ha-929592) DBG | domain ha-929592 has defined IP address 192.168.39.54 and MAC address 52:54:00:5c:cb:09 in network mk-ha-929592
	I0914 17:16:42.462572   33797 main.go:141] libmachine: (ha-929592) Calling .GetSSHPort
	I0914 17:16:42.462590   33797 main.go:141] libmachine: (ha-929592) Calling .GetSSHKeyPath
	I0914 17:16:42.462729   33797 main.go:141] libmachine: (ha-929592) Calling .GetSSHUsername
	I0914 17:16:42.462752   33797 main.go:141] libmachine: (ha-929592) Calling .GetSSHKeyPath
	I0914 17:16:42.462811   33797 sshutil.go:53] new ssh client: &{IP:192.168.39.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/ha-929592/id_rsa Username:docker}
	I0914 17:16:42.462904   33797 main.go:141] libmachine: (ha-929592) Calling .GetSSHUsername
	I0914 17:16:42.463011   33797 sshutil.go:53] new ssh client: &{IP:192.168.39.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/ha-929592/id_rsa Username:docker}
	I0914 17:16:42.543338   33797 ssh_runner.go:195] Run: systemctl --version
	I0914 17:16:42.579305   33797 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0914 17:16:42.745322   33797 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0914 17:16:42.752599   33797 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0914 17:16:42.752663   33797 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0914 17:16:42.761511   33797 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0914 17:16:42.761531   33797 start.go:495] detecting cgroup driver to use...
	I0914 17:16:42.761592   33797 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0914 17:16:42.777948   33797 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0914 17:16:42.792470   33797 docker.go:217] disabling cri-docker service (if available) ...
	I0914 17:16:42.792531   33797 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0914 17:16:42.806346   33797 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0914 17:16:42.820060   33797 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0914 17:16:42.971162   33797 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0914 17:16:43.121104   33797 docker.go:233] disabling docker service ...
	I0914 17:16:43.121170   33797 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0914 17:16:43.137672   33797 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0914 17:16:43.151261   33797 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0914 17:16:43.296068   33797 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0914 17:16:43.444321   33797 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0914 17:16:43.474544   33797 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0914 17:16:43.492833   33797 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0914 17:16:43.492895   33797 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 17:16:43.503326   33797 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0914 17:16:43.503397   33797 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 17:16:43.513553   33797 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 17:16:43.523501   33797 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 17:16:43.533609   33797 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0914 17:16:43.543625   33797 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 17:16:43.553720   33797 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 17:16:43.564651   33797 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 17:16:43.574688   33797 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0914 17:16:43.583809   33797 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0914 17:16:43.592901   33797 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 17:16:43.735079   33797 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0914 17:16:50.498812   33797 ssh_runner.go:235] Completed: sudo systemctl restart crio: (6.763701185s)
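The sed edits above rewrite /etc/crio/crio.conf.d/02-crio.conf (pause image, cgroupfs cgroup manager, conmon cgroup, unprivileged-port sysctl) before CRI-O is restarted. A quick, illustrative way to confirm on the guest that the rewritten settings took effect after the 6.7s restart:

	sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup' /etc/crio/crio.conf.d/02-crio.conf
	sudo systemctl is-active crio   # expect "active"
	sudo crictl version             # expect RuntimeName cri-o, RuntimeVersion 1.29.1, as logged below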
	I0914 17:16:50.498837   33797 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0914 17:16:50.498878   33797 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0914 17:16:50.504217   33797 start.go:563] Will wait 60s for crictl version
	I0914 17:16:50.504267   33797 ssh_runner.go:195] Run: which crictl
	I0914 17:16:50.507855   33797 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0914 17:16:50.550085   33797 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0914 17:16:50.550152   33797 ssh_runner.go:195] Run: crio --version
	I0914 17:16:50.578849   33797 ssh_runner.go:195] Run: crio --version
	I0914 17:16:50.607421   33797 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0914 17:16:50.608673   33797 main.go:141] libmachine: (ha-929592) Calling .GetIP
	I0914 17:16:50.611777   33797 main.go:141] libmachine: (ha-929592) DBG | domain ha-929592 has defined MAC address 52:54:00:5c:cb:09 in network mk-ha-929592
	I0914 17:16:50.612196   33797 main.go:141] libmachine: (ha-929592) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:cb:09", ip: ""} in network mk-ha-929592: {Iface:virbr1 ExpiryTime:2024-09-14 18:05:06 +0000 UTC Type:0 Mac:52:54:00:5c:cb:09 Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:ha-929592 Clientid:01:52:54:00:5c:cb:09}
	I0914 17:16:50.612223   33797 main.go:141] libmachine: (ha-929592) DBG | domain ha-929592 has defined IP address 192.168.39.54 and MAC address 52:54:00:5c:cb:09 in network mk-ha-929592
	I0914 17:16:50.612421   33797 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0914 17:16:50.616787   33797 kubeadm.go:883] updating cluster {Name:ha-929592 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19643/minikube-v1.34.0-1726281733-19643-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726281268-19643@sha256:cce8be0c1ac4e3d852132008ef1cc1dcf5b79f708d025db83f146ae65db32e8e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 Cl
usterName:ha-929592 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.54 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.148 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.39 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.51 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false fresh
pod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L Mount
GID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0914 17:16:50.616935   33797 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0914 17:16:50.616988   33797 ssh_runner.go:195] Run: sudo crictl images --output json
	I0914 17:16:50.661040   33797 crio.go:514] all images are preloaded for cri-o runtime.
	I0914 17:16:50.661062   33797 crio.go:433] Images already preloaded, skipping extraction
	I0914 17:16:50.661116   33797 ssh_runner.go:195] Run: sudo crictl images --output json
	I0914 17:16:50.697632   33797 crio.go:514] all images are preloaded for cri-o runtime.
	I0914 17:16:50.697654   33797 cache_images.go:84] Images are preloaded, skipping loading
	I0914 17:16:50.697662   33797 kubeadm.go:934] updating node { 192.168.39.54 8443 v1.31.1 crio true true} ...
	I0914 17:16:50.697809   33797 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-929592 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.54
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-929592 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0914 17:16:50.697891   33797 ssh_runner.go:195] Run: crio config
	I0914 17:16:50.744713   33797 cni.go:84] Creating CNI manager for ""
	I0914 17:16:50.744734   33797 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0914 17:16:50.744749   33797 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0914 17:16:50.744769   33797 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.54 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-929592 NodeName:ha-929592 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.54"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.54 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/m
anifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0914 17:16:50.744895   33797 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.54
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-929592"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.54
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.54"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
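The generated configuration bundles four documents (InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration) and is copied to /var/tmp/minikube/kubeadm.yaml.new a few lines further down. As an optional sanity check, assuming a kubeadm release that ships the "kubeadm config validate" subcommand (present in recent versions), the file can be validated in place on the guest:

	sudo /var/lib/minikube/binaries/v1.31.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new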
	
	I0914 17:16:50.744915   33797 kube-vip.go:115] generating kube-vip config ...
	I0914 17:16:50.744955   33797 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0914 17:16:50.756057   33797 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0914 17:16:50.756221   33797 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
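The static pod above runs kube-vip with ARP-based control-plane load balancing and leader election (lease name plndr-cp-lock, lease durations as listed), so exactly one control-plane node at a time announces the HA virtual IP 192.168.39.254. An illustrative way to see which node currently holds the VIP once the API server is reachable again:

	ip -4 addr show eth0 | grep 192.168.39.254   # on a guest: present only on the current leader
	kubectl -n kube-system get lease plndr-cp-lock -o jsonpath='{.spec.holderIdentity}'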
	I0914 17:16:50.756282   33797 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0914 17:16:50.766084   33797 binaries.go:44] Found k8s binaries, skipping transfer
	I0914 17:16:50.766256   33797 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0914 17:16:50.775698   33797 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (308 bytes)
	I0914 17:16:50.793878   33797 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0914 17:16:50.810872   33797 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2150 bytes)
	I0914 17:16:50.827707   33797 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0914 17:16:50.843961   33797 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0914 17:16:50.848904   33797 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 17:16:51.000864   33797 ssh_runner.go:195] Run: sudo systemctl start kubelet
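At this point the kubelet drop-in (10-kubeadm.conf), the kubelet unit, the kubeadm config and the kube-vip manifest have all been copied over, systemd has been reloaded, and kubelet has been started again, so it should begin recreating the static control-plane pods from /etc/kubernetes/manifests. An illustrative spot check on the guest:

	sudo systemctl is-active kubelet
	sudo ls /etc/kubernetes/manifests/           # expect etcd, kube-apiserver, kube-controller-manager, kube-scheduler and kube-vip manifests
	sudo journalctl -u kubelet --no-pager -n 20  # recent kubelet log lines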
	I0914 17:16:51.015424   33797 certs.go:68] Setting up /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/ha-929592 for IP: 192.168.39.54
	I0914 17:16:51.015452   33797 certs.go:194] generating shared ca certs ...
	I0914 17:16:51.015468   33797 certs.go:226] acquiring lock for ca certs: {Name:mkb663a3180967f5f94f0c355b2cd55067394331 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 17:16:51.015647   33797 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19643-8806/.minikube/ca.key
	I0914 17:16:51.015694   33797 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19643-8806/.minikube/proxy-client-ca.key
	I0914 17:16:51.015705   33797 certs.go:256] generating profile certs ...
	I0914 17:16:51.015824   33797 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/ha-929592/client.key
	I0914 17:16:51.015857   33797 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/ha-929592/apiserver.key.ffe1cdf3
	I0914 17:16:51.015871   33797 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/ha-929592/apiserver.crt.ffe1cdf3 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.54 192.168.39.148 192.168.39.39 192.168.39.254]
	I0914 17:16:51.226810   33797 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/ha-929592/apiserver.crt.ffe1cdf3 ...
	I0914 17:16:51.226840   33797 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/ha-929592/apiserver.crt.ffe1cdf3: {Name:mk49551671edffb505318317557bb2d26c619ca5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 17:16:51.227032   33797 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/ha-929592/apiserver.key.ffe1cdf3 ...
	I0914 17:16:51.227047   33797 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/ha-929592/apiserver.key.ffe1cdf3: {Name:mkcec55d318b985531b1667f704cc2b12d9e93c9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 17:16:51.227144   33797 certs.go:381] copying /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/ha-929592/apiserver.crt.ffe1cdf3 -> /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/ha-929592/apiserver.crt
	I0914 17:16:51.227292   33797 certs.go:385] copying /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/ha-929592/apiserver.key.ffe1cdf3 -> /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/ha-929592/apiserver.key
	I0914 17:16:51.227439   33797 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/ha-929592/proxy-client.key
	I0914 17:16:51.227454   33797 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19643-8806/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0914 17:16:51.227466   33797 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19643-8806/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0914 17:16:51.227484   33797 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19643-8806/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0914 17:16:51.227497   33797 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19643-8806/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0914 17:16:51.227508   33797 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/ha-929592/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0914 17:16:51.227520   33797 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/ha-929592/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0914 17:16:51.227535   33797 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/ha-929592/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0914 17:16:51.227547   33797 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/ha-929592/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0914 17:16:51.227587   33797 certs.go:484] found cert: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/16016.pem (1338 bytes)
	W0914 17:16:51.227617   33797 certs.go:480] ignoring /home/jenkins/minikube-integration/19643-8806/.minikube/certs/16016_empty.pem, impossibly tiny 0 bytes
	I0914 17:16:51.227627   33797 certs.go:484] found cert: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca-key.pem (1679 bytes)
	I0914 17:16:51.227649   33797 certs.go:484] found cert: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca.pem (1082 bytes)
	I0914 17:16:51.227678   33797 certs.go:484] found cert: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/cert.pem (1123 bytes)
	I0914 17:16:51.227701   33797 certs.go:484] found cert: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/key.pem (1675 bytes)
	I0914 17:16:51.227736   33797 certs.go:484] found cert: /home/jenkins/minikube-integration/19643-8806/.minikube/files/etc/ssl/certs/160162.pem (1708 bytes)
	I0914 17:16:51.227768   33797 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19643-8806/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0914 17:16:51.227779   33797 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/16016.pem -> /usr/share/ca-certificates/16016.pem
	I0914 17:16:51.227788   33797 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19643-8806/.minikube/files/etc/ssl/certs/160162.pem -> /usr/share/ca-certificates/160162.pem
	I0914 17:16:51.228324   33797 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0914 17:16:51.254005   33797 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0914 17:16:51.278983   33797 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0914 17:16:51.302603   33797 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0914 17:16:51.326139   33797 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/ha-929592/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0914 17:16:51.349921   33797 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/ha-929592/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0914 17:16:51.374639   33797 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/ha-929592/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0914 17:16:51.399584   33797 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/ha-929592/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0914 17:16:51.424156   33797 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0914 17:16:51.449012   33797 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/certs/16016.pem --> /usr/share/ca-certificates/16016.pem (1338 bytes)
	I0914 17:16:51.474172   33797 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/files/etc/ssl/certs/160162.pem --> /usr/share/ca-certificates/160162.pem (1708 bytes)
	I0914 17:16:51.498590   33797 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0914 17:16:51.514922   33797 ssh_runner.go:195] Run: openssl version
	I0914 17:16:51.520852   33797 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16016.pem && ln -fs /usr/share/ca-certificates/16016.pem /etc/ssl/certs/16016.pem"
	I0914 17:16:51.532308   33797 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16016.pem
	I0914 17:16:51.536588   33797 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 14 17:01 /usr/share/ca-certificates/16016.pem
	I0914 17:16:51.536643   33797 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16016.pem
	I0914 17:16:51.542469   33797 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16016.pem /etc/ssl/certs/51391683.0"
	I0914 17:16:51.552925   33797 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/160162.pem && ln -fs /usr/share/ca-certificates/160162.pem /etc/ssl/certs/160162.pem"
	I0914 17:16:51.563415   33797 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/160162.pem
	I0914 17:16:51.567698   33797 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 14 17:01 /usr/share/ca-certificates/160162.pem
	I0914 17:16:51.567749   33797 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/160162.pem
	I0914 17:16:51.573285   33797 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/160162.pem /etc/ssl/certs/3ec20f2e.0"
	I0914 17:16:51.582388   33797 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0914 17:16:51.592689   33797 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0914 17:16:51.596946   33797 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 14 16:44 /usr/share/ca-certificates/minikubeCA.pem
	I0914 17:16:51.597008   33797 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0914 17:16:51.602219   33797 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
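The link names used above (51391683.0, 3ec20f2e.0, b5213941.0) are the OpenSSL subject hashes of the corresponding CA certificates; OpenSSL looks up trust anchors in /etc/ssl/certs by that hash. A minimal sketch of how one of these links is derived, following the same pattern as the commands above:

	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	echo "$h"   # b5213941 for this CA, matching the symlink created above
	sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/${h}.0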
	I0914 17:16:51.611369   33797 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0914 17:16:51.615534   33797 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0914 17:16:51.621039   33797 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0914 17:16:51.626208   33797 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0914 17:16:51.631302   33797 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0914 17:16:51.636949   33797 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0914 17:16:51.641942   33797 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
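The -checkend 86400 flag makes openssl exit non-zero when a certificate will expire within the next 24 hours, which is how this restart path decides whether the existing control-plane certificates can be reused. An illustrative loop over the same files checked above:

	for c in apiserver-etcd-client apiserver-kubelet-client front-proxy-client etcd/server etcd/healthcheck-client etcd/peer; do
	  sudo openssl x509 -noout -in /var/lib/minikube/certs/${c}.crt -checkend 86400 \
	    && echo "${c}: valid for at least another 24h" || echo "${c}: expiring within 24h"
	done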
	I0914 17:16:51.647577   33797 kubeadm.go:392] StartCluster: {Name:ha-929592 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19643/minikube-v1.34.0-1726281733-19643-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726281268-19643@sha256:cce8be0c1ac4e3d852132008ef1cc1dcf5b79f708d025db83f146ae65db32e8e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 Clust
erName:ha-929592 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.54 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.148 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.39 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.51 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod
:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID
:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0914 17:16:51.647687   33797 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0914 17:16:51.647740   33797 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0914 17:16:51.684650   33797 cri.go:89] found id: "c502bdacde6a009b8e37ac816ac0d18a8c294173ed43571a08f6a3fb9872a029"
	I0914 17:16:51.684670   33797 cri.go:89] found id: "633f9a7a14ee23e2b2563bcf87fe984a400cda9e672a9b4139a99b35379778dc"
	I0914 17:16:51.684674   33797 cri.go:89] found id: "2043f3cb542985d356c0d6c975b5e4a1045314ef85fa2f34f938e81e0b7bcc5a"
	I0914 17:16:51.684677   33797 cri.go:89] found id: "bf42a0f089bcb4101b354ccb3043ff584fbe5acbcec991c7c6f00fbc21db5dd7"
	I0914 17:16:51.684680   33797 cri.go:89] found id: "9eb824a3acd106dd672eb9b88186825642d229cad673bfd46e35ff45d82c0e17"
	I0914 17:16:51.684683   33797 cri.go:89] found id: "06ffbf30c8c13ffdba9a832583a4629b5a821662986c777e9cca57100ed3fd9f"
	I0914 17:16:51.684685   33797 cri.go:89] found id: "fd34a54170b251f984dd373097036615e33493d9c7c91970296beedc2d507931"
	I0914 17:16:51.684687   33797 cri.go:89] found id: "c1571fb1d1d1f39fe29d5063d77b72dcce6a459a4bf16a25bbc24fd1a9c53849"
	I0914 17:16:51.684689   33797 cri.go:89] found id: "7b409821346de2b42e8ebbff82396df9fc0d7ac3db8b76d586c5c80922f9c0b8"
	I0914 17:16:51.684695   33797 cri.go:89] found id: "ac425bd016fb1dd04caf8514eda430f13287990b5a6398599221210bb254390a"
	I0914 17:16:51.684697   33797 cri.go:89] found id: "972f797d7355465fa2cd15940a0b34684d83e7ad453b631aa9e06639881c09fb"
	I0914 17:16:51.684703   33797 cri.go:89] found id: "ab1e607cdf424b9d9eb961bb3bd75bc16cb8b8f30e2c1fb579f52deb60857d00"
	I0914 17:16:51.684708   33797 cri.go:89] found id: "363e6bc276fd6311c477eb4e17cd12efc2e0822fb67680e1cef01b26c295126c"
	I0914 17:16:51.684711   33797 cri.go:89] found id: ""
	I0914 17:16:51.684747   33797 ssh_runner.go:195] Run: sudo runc list -f json
	
	
	==> CRI-O <==
	Sep 14 17:21:35 ha-929592 crio[3714]: time="2024-09-14 17:21:35.926754181Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c2ced478-c6ae-443c-ac85-78a87488b8d0 name=/runtime.v1.RuntimeService/Version
	Sep 14 17:21:35 ha-929592 crio[3714]: time="2024-09-14 17:21:35.930296383Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=7cea7b30-dd38-4e13-bfa5-7502eff230d7 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 14 17:21:35 ha-929592 crio[3714]: time="2024-09-14 17:21:35.931305422Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726334495931258666,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7cea7b30-dd38-4e13-bfa5-7502eff230d7 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 14 17:21:35 ha-929592 crio[3714]: time="2024-09-14 17:21:35.932279876Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9fd77c8b-7247-4955-be7d-7299b7810b32 name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 17:21:35 ha-929592 crio[3714]: time="2024-09-14 17:21:35.932381652Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9fd77c8b-7247-4955-be7d-7299b7810b32 name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 17:21:35 ha-929592 crio[3714]: time="2024-09-14 17:21:35.932869672Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9ab018b1b4c91075cb8514a5f1d910885be91963378596e37f676ad4b19ee4a2,PodSandboxId:0cc8e7f5a7b7afb6fba3f65631241eb133a3fd4efa559f490732ecfbfad440e4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:6,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726334394081744344,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f486484-9641-4e23-8bc9-4dcae57b621a,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 6,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1e3dd648bccf3344a86805e6a12abe9113ff924a52d51dba22a7dd0a72c0df48,PodSandboxId:b273a10472206d3e61466d817dbc2082e74b8a07e53c8e46d8e08c47165c44a3,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726334251735423940,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-49mwg,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9f3ed79c-66ac-429d-bbd6-4956eab3be98,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/terminat
ion-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:876c20b82135dc9e3c36bbd198419de043cb1ef47c203583e228ea1289377803,PodSandboxId:6e984099257ad2ed58634d13a332c6b8e51aa221520d7f49c092a55ad2a59f3b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726334250429723078,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-929592,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 21e24f7df5d7099b0f0b2dba49446d51,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/te
rmination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:451a416ccbf4edb1f2ee529934698e4d7d06257670bfa83420a9afba6589ffda,PodSandboxId:6d35c1613fd15e5d6eeebc06360e4dc3c0a083b150c4583027993addfe1753f9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726334249545742486,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-929592,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a3520d0a4b75398d9e9e72bfdcfc4f4f,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:59ca6347386b44e1dec1fe951406a82aab23a83a85a9728f4d7a15c9fb99c528,PodSandboxId:bc3029e83ceac1792089510c06c95489b820d85fa0fa6902f88b5a61b0fe4dbd,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1726334232225179139,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-929592,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 517e581b944b0c79eed2314533ce0ca8,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePoli
cy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ab7a7b73a44c28a6353fc7334491855caa043bee9b4c0d4d190f7e0edc2cf7d1,PodSandboxId:b99429327f50f00b60175ace0289cb0d74aa0deada649b05989a232a2941b070,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726334219339779168,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6zqmd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b7beddc8-ce6a-44ed-b3e8-423baf620bbb,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.termin
ationGracePeriod: 30,},},&Container{Id:df6e98168d3f80c6fa7f00ec963667bbd59e85b054d057fc873161c0b811c690,PodSandboxId:0cc8e7f5a7b7afb6fba3f65631241eb133a3fd4efa559f490732ecfbfad440e4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:5,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726334219127722524,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f486484-9641-4e23-8bc9-4dcae57b621a,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 5,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePer
iod: 30,},},&Container{Id:f4b6294601181df2221b3b2a9952e0864fbef7e69634d02dace316759c43e431,PodSandboxId:928ea8de33905030650eec466f93285921f446dda71bb2c17462bfcc260ac207,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1726334218203547585,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-fw757,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51a38d95-fd50-4c05-a75d-a3dfeae127bd,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:42972572
0c04d774b8dd66b69992ef334c86360f500219e860192266a0d355bd,PodSandboxId:4564189eeba3b81e291de82a9ba45090a53935ef617393b174ecc86513ac4f1f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726334218089697202,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-dpdz4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2a751c8d-890c-402e-846f-8f61e3fd1965,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b5ab23c2f12798f997dfd6f5b6ff3d84296f2731909b2b10adf2092755601fdd,PodSandboxId:238a7746658f1c6d05de966e4253d7cb775bb460fb2a75a60f61f847ce29cad0,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726334218088047377,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-66txm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: abf3ed52-ab5a-4415-a8a9-78e567d60348,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,
\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d7a8daefb0eaadaa969614a32c02514d6e1cc779d7c3c9e31540c61053fa965,PodSandboxId:6e984099257ad2ed58634d13a332c6b8e51aa221520d7f49c092a55ad2a59f3b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726334217989304764,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-929592,io.kuber
netes.pod.namespace: kube-system,io.kubernetes.pod.uid: 21e24f7df5d7099b0f0b2dba49446d51,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b941bc429a5fde67708b36dc7f2b22c492e47a8748c222c948b2d663c89d4559,PodSandboxId:0edfbfa01ecb59b2373e0bba14228824cbb764ac1eeb467afc47561af1907ec3,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726334218010369299,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-929592,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: d7c84dd075d4f7e4fd5febc189940f4e,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e96c2e442fde740472da39a62a3d82c91995eed86608662cb709d81b508a09e,PodSandboxId:df2a40e486b685a3c47cb8eb4aebce2f03d8bea33f9b5219903618fa40c5866b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726334217781276748,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-929592,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 95065ad67a4f1610
671e72fcaed57954,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a61a19ee550f00c2e9ec2f6c3c2858f016509e649725e9030ffe238270c99ca7,PodSandboxId:6d35c1613fd15e5d6eeebc06360e4dc3c0a083b150c4583027993addfe1753f9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726334217838311043,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-929592,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a3520d0a4b75398d9e9e72bfdcfc4f4f,},Ann
otations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:34c6ad67896f3d9a7c0b0b60a5733acdf8c55ff8eae5d674e973f29c8f92f81a,PodSandboxId:e605a9e0100e5d708dfcb136f97211271e8f47299b54252e343df013be857601,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1726333690210167885,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-49mwg,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9f3ed79c-66ac-429d-bbd6-4956eab3be98,},Annot
ations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9eb824a3acd106dd672eb9b88186825642d229cad673bfd46e35ff45d82c0e17,PodSandboxId:69d86428b72f0c107b00bf1918d30f35a1c7612dbe8c1cc199ffcf24b11fb6b5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726333546846726317,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-dpdz4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2a751c8d-890c-402e-846f-8f61e3fd1965,},Annotations:map[string]string{io.kube
rnetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:06ffbf30c8c13ffdba9a832583a4629b5a821662986c777e9cca57100ed3fd9f,PodSandboxId:9b615a9a43e5968869cd39a175e3298d58f96b3d25f4c6e821a1c05d0e960e2f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726333546840132895,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7c65d6cfc9-66txm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: abf3ed52-ab5a-4415-a8a9-78e567d60348,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd34a54170b251f984dd373097036615e33493d9c7c91970296beedc2d507931,PodSandboxId:fc9e9c48c04bef982ef404b9c77cb73c2aecd644824a80cd95ac86a749ae2de2,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,Runti
meHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1726333535088419023,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-fw757,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51a38d95-fd50-4c05-a75d-a3dfeae127bd,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c1571fb1d1d1f39fe29d5063d77b72dcce6a459a4bf16a25bbc24fd1a9c53849,PodSandboxId:de29821ef5ba3c9a10a8426ece9a602feabbc039c6332faaf6599d1ac2fd5ecd,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3a
d6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726333534777571877,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6zqmd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b7beddc8-ce6a-44ed-b3e8-423baf620bbb,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:972f797d7355465fa2cd15940a0b34684d83e7ad453b631aa9e06639881c09fb,PodSandboxId:dbb138fdd1472faa7f5001e471dfabc3b867c54c89ceeca12f2aa61eff79f487,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe
954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726333523910309461,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-929592,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 95065ad67a4f1610671e72fcaed57954,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac425bd016fb1dd04caf8514eda430f13287990b5a6398599221210bb254390a,PodSandboxId:282b521b3dea81924246e09b66f30e12327d4a1bccd61d21819ea1716b126a09,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTA
INER_EXITED,CreatedAt:1726333523925855261,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-929592,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d7c84dd075d4f7e4fd5febc189940f4e,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9fd77c8b-7247-4955-be7d-7299b7810b32 name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 17:21:35 ha-929592 crio[3714]: time="2024-09-14 17:21:35.980923659Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=35e6b9a2-112e-4688-810d-9ea3ff28dadc name=/runtime.v1.RuntimeService/Version
	Sep 14 17:21:35 ha-929592 crio[3714]: time="2024-09-14 17:21:35.981036590Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=35e6b9a2-112e-4688-810d-9ea3ff28dadc name=/runtime.v1.RuntimeService/Version
	Sep 14 17:21:35 ha-929592 crio[3714]: time="2024-09-14 17:21:35.983053888Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e953173d-caa0-49ba-8d52-c63f7d22b664 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 14 17:21:35 ha-929592 crio[3714]: time="2024-09-14 17:21:35.983780901Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726334495983738831,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e953173d-caa0-49ba-8d52-c63f7d22b664 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 14 17:21:35 ha-929592 crio[3714]: time="2024-09-14 17:21:35.984420751Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=636f5f02-70e2-4f6a-821d-b05d9fcef2c7 name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 17:21:35 ha-929592 crio[3714]: time="2024-09-14 17:21:35.984518226Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=636f5f02-70e2-4f6a-821d-b05d9fcef2c7 name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 17:21:35 ha-929592 crio[3714]: time="2024-09-14 17:21:35.985160107Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9ab018b1b4c91075cb8514a5f1d910885be91963378596e37f676ad4b19ee4a2,PodSandboxId:0cc8e7f5a7b7afb6fba3f65631241eb133a3fd4efa559f490732ecfbfad440e4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:6,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726334394081744344,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f486484-9641-4e23-8bc9-4dcae57b621a,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 6,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1e3dd648bccf3344a86805e6a12abe9113ff924a52d51dba22a7dd0a72c0df48,PodSandboxId:b273a10472206d3e61466d817dbc2082e74b8a07e53c8e46d8e08c47165c44a3,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726334251735423940,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-49mwg,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9f3ed79c-66ac-429d-bbd6-4956eab3be98,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/terminat
ion-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:876c20b82135dc9e3c36bbd198419de043cb1ef47c203583e228ea1289377803,PodSandboxId:6e984099257ad2ed58634d13a332c6b8e51aa221520d7f49c092a55ad2a59f3b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726334250429723078,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-929592,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 21e24f7df5d7099b0f0b2dba49446d51,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/te
rmination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:451a416ccbf4edb1f2ee529934698e4d7d06257670bfa83420a9afba6589ffda,PodSandboxId:6d35c1613fd15e5d6eeebc06360e4dc3c0a083b150c4583027993addfe1753f9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726334249545742486,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-929592,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a3520d0a4b75398d9e9e72bfdcfc4f4f,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:59ca6347386b44e1dec1fe951406a82aab23a83a85a9728f4d7a15c9fb99c528,PodSandboxId:bc3029e83ceac1792089510c06c95489b820d85fa0fa6902f88b5a61b0fe4dbd,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1726334232225179139,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-929592,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 517e581b944b0c79eed2314533ce0ca8,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePoli
cy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ab7a7b73a44c28a6353fc7334491855caa043bee9b4c0d4d190f7e0edc2cf7d1,PodSandboxId:b99429327f50f00b60175ace0289cb0d74aa0deada649b05989a232a2941b070,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726334219339779168,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6zqmd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b7beddc8-ce6a-44ed-b3e8-423baf620bbb,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.termin
ationGracePeriod: 30,},},&Container{Id:df6e98168d3f80c6fa7f00ec963667bbd59e85b054d057fc873161c0b811c690,PodSandboxId:0cc8e7f5a7b7afb6fba3f65631241eb133a3fd4efa559f490732ecfbfad440e4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:5,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726334219127722524,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f486484-9641-4e23-8bc9-4dcae57b621a,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 5,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePer
iod: 30,},},&Container{Id:f4b6294601181df2221b3b2a9952e0864fbef7e69634d02dace316759c43e431,PodSandboxId:928ea8de33905030650eec466f93285921f446dda71bb2c17462bfcc260ac207,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1726334218203547585,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-fw757,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51a38d95-fd50-4c05-a75d-a3dfeae127bd,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:42972572
0c04d774b8dd66b69992ef334c86360f500219e860192266a0d355bd,PodSandboxId:4564189eeba3b81e291de82a9ba45090a53935ef617393b174ecc86513ac4f1f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726334218089697202,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-dpdz4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2a751c8d-890c-402e-846f-8f61e3fd1965,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b5ab23c2f12798f997dfd6f5b6ff3d84296f2731909b2b10adf2092755601fdd,PodSandboxId:238a7746658f1c6d05de966e4253d7cb775bb460fb2a75a60f61f847ce29cad0,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726334218088047377,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-66txm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: abf3ed52-ab5a-4415-a8a9-78e567d60348,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,
\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d7a8daefb0eaadaa969614a32c02514d6e1cc779d7c3c9e31540c61053fa965,PodSandboxId:6e984099257ad2ed58634d13a332c6b8e51aa221520d7f49c092a55ad2a59f3b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726334217989304764,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-929592,io.kuber
netes.pod.namespace: kube-system,io.kubernetes.pod.uid: 21e24f7df5d7099b0f0b2dba49446d51,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b941bc429a5fde67708b36dc7f2b22c492e47a8748c222c948b2d663c89d4559,PodSandboxId:0edfbfa01ecb59b2373e0bba14228824cbb764ac1eeb467afc47561af1907ec3,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726334218010369299,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-929592,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: d7c84dd075d4f7e4fd5febc189940f4e,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e96c2e442fde740472da39a62a3d82c91995eed86608662cb709d81b508a09e,PodSandboxId:df2a40e486b685a3c47cb8eb4aebce2f03d8bea33f9b5219903618fa40c5866b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726334217781276748,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-929592,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 95065ad67a4f1610
671e72fcaed57954,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a61a19ee550f00c2e9ec2f6c3c2858f016509e649725e9030ffe238270c99ca7,PodSandboxId:6d35c1613fd15e5d6eeebc06360e4dc3c0a083b150c4583027993addfe1753f9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726334217838311043,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-929592,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a3520d0a4b75398d9e9e72bfdcfc4f4f,},Ann
otations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:34c6ad67896f3d9a7c0b0b60a5733acdf8c55ff8eae5d674e973f29c8f92f81a,PodSandboxId:e605a9e0100e5d708dfcb136f97211271e8f47299b54252e343df013be857601,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1726333690210167885,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-49mwg,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9f3ed79c-66ac-429d-bbd6-4956eab3be98,},Annot
ations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9eb824a3acd106dd672eb9b88186825642d229cad673bfd46e35ff45d82c0e17,PodSandboxId:69d86428b72f0c107b00bf1918d30f35a1c7612dbe8c1cc199ffcf24b11fb6b5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726333546846726317,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-dpdz4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2a751c8d-890c-402e-846f-8f61e3fd1965,},Annotations:map[string]string{io.kube
rnetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:06ffbf30c8c13ffdba9a832583a4629b5a821662986c777e9cca57100ed3fd9f,PodSandboxId:9b615a9a43e5968869cd39a175e3298d58f96b3d25f4c6e821a1c05d0e960e2f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726333546840132895,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7c65d6cfc9-66txm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: abf3ed52-ab5a-4415-a8a9-78e567d60348,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd34a54170b251f984dd373097036615e33493d9c7c91970296beedc2d507931,PodSandboxId:fc9e9c48c04bef982ef404b9c77cb73c2aecd644824a80cd95ac86a749ae2de2,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,Runti
meHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1726333535088419023,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-fw757,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51a38d95-fd50-4c05-a75d-a3dfeae127bd,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c1571fb1d1d1f39fe29d5063d77b72dcce6a459a4bf16a25bbc24fd1a9c53849,PodSandboxId:de29821ef5ba3c9a10a8426ece9a602feabbc039c6332faaf6599d1ac2fd5ecd,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3a
d6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726333534777571877,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6zqmd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b7beddc8-ce6a-44ed-b3e8-423baf620bbb,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:972f797d7355465fa2cd15940a0b34684d83e7ad453b631aa9e06639881c09fb,PodSandboxId:dbb138fdd1472faa7f5001e471dfabc3b867c54c89ceeca12f2aa61eff79f487,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe
954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726333523910309461,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-929592,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 95065ad67a4f1610671e72fcaed57954,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac425bd016fb1dd04caf8514eda430f13287990b5a6398599221210bb254390a,PodSandboxId:282b521b3dea81924246e09b66f30e12327d4a1bccd61d21819ea1716b126a09,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTA
INER_EXITED,CreatedAt:1726333523925855261,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-929592,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d7c84dd075d4f7e4fd5febc189940f4e,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=636f5f02-70e2-4f6a-821d-b05d9fcef2c7 name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 17:21:36 ha-929592 crio[3714]: time="2024-09-14 17:21:36.035425802Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=61bd0c47-e201-4c8f-9e32-c44bfa8f62dd name=/runtime.v1.RuntimeService/Version
	Sep 14 17:21:36 ha-929592 crio[3714]: time="2024-09-14 17:21:36.035516953Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=61bd0c47-e201-4c8f-9e32-c44bfa8f62dd name=/runtime.v1.RuntimeService/Version
	Sep 14 17:21:36 ha-929592 crio[3714]: time="2024-09-14 17:21:36.036740438Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=eddd81c9-a9f4-405e-b9a4-65359af19357 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 14 17:21:36 ha-929592 crio[3714]: time="2024-09-14 17:21:36.037161599Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726334496037136976,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=eddd81c9-a9f4-405e-b9a4-65359af19357 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 14 17:21:36 ha-929592 crio[3714]: time="2024-09-14 17:21:36.037667171Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=cc2a2079-b1f4-489f-bb49-290bae342452 name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 17:21:36 ha-929592 crio[3714]: time="2024-09-14 17:21:36.037744266Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=cc2a2079-b1f4-489f-bb49-290bae342452 name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 17:21:36 ha-929592 crio[3714]: time="2024-09-14 17:21:36.038136013Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9ab018b1b4c91075cb8514a5f1d910885be91963378596e37f676ad4b19ee4a2,PodSandboxId:0cc8e7f5a7b7afb6fba3f65631241eb133a3fd4efa559f490732ecfbfad440e4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:6,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726334394081744344,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f486484-9641-4e23-8bc9-4dcae57b621a,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 6,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1e3dd648bccf3344a86805e6a12abe9113ff924a52d51dba22a7dd0a72c0df48,PodSandboxId:b273a10472206d3e61466d817dbc2082e74b8a07e53c8e46d8e08c47165c44a3,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726334251735423940,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-49mwg,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9f3ed79c-66ac-429d-bbd6-4956eab3be98,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/terminat
ion-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:876c20b82135dc9e3c36bbd198419de043cb1ef47c203583e228ea1289377803,PodSandboxId:6e984099257ad2ed58634d13a332c6b8e51aa221520d7f49c092a55ad2a59f3b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726334250429723078,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-929592,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 21e24f7df5d7099b0f0b2dba49446d51,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/te
rmination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:451a416ccbf4edb1f2ee529934698e4d7d06257670bfa83420a9afba6589ffda,PodSandboxId:6d35c1613fd15e5d6eeebc06360e4dc3c0a083b150c4583027993addfe1753f9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726334249545742486,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-929592,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a3520d0a4b75398d9e9e72bfdcfc4f4f,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:59ca6347386b44e1dec1fe951406a82aab23a83a85a9728f4d7a15c9fb99c528,PodSandboxId:bc3029e83ceac1792089510c06c95489b820d85fa0fa6902f88b5a61b0fe4dbd,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1726334232225179139,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-929592,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 517e581b944b0c79eed2314533ce0ca8,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePoli
cy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ab7a7b73a44c28a6353fc7334491855caa043bee9b4c0d4d190f7e0edc2cf7d1,PodSandboxId:b99429327f50f00b60175ace0289cb0d74aa0deada649b05989a232a2941b070,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726334219339779168,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6zqmd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b7beddc8-ce6a-44ed-b3e8-423baf620bbb,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.termin
ationGracePeriod: 30,},},&Container{Id:df6e98168d3f80c6fa7f00ec963667bbd59e85b054d057fc873161c0b811c690,PodSandboxId:0cc8e7f5a7b7afb6fba3f65631241eb133a3fd4efa559f490732ecfbfad440e4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:5,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726334219127722524,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f486484-9641-4e23-8bc9-4dcae57b621a,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 5,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePer
iod: 30,},},&Container{Id:f4b6294601181df2221b3b2a9952e0864fbef7e69634d02dace316759c43e431,PodSandboxId:928ea8de33905030650eec466f93285921f446dda71bb2c17462bfcc260ac207,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1726334218203547585,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-fw757,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51a38d95-fd50-4c05-a75d-a3dfeae127bd,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:42972572
0c04d774b8dd66b69992ef334c86360f500219e860192266a0d355bd,PodSandboxId:4564189eeba3b81e291de82a9ba45090a53935ef617393b174ecc86513ac4f1f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726334218089697202,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-dpdz4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2a751c8d-890c-402e-846f-8f61e3fd1965,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b5ab23c2f12798f997dfd6f5b6ff3d84296f2731909b2b10adf2092755601fdd,PodSandboxId:238a7746658f1c6d05de966e4253d7cb775bb460fb2a75a60f61f847ce29cad0,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726334218088047377,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-66txm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: abf3ed52-ab5a-4415-a8a9-78e567d60348,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,
\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d7a8daefb0eaadaa969614a32c02514d6e1cc779d7c3c9e31540c61053fa965,PodSandboxId:6e984099257ad2ed58634d13a332c6b8e51aa221520d7f49c092a55ad2a59f3b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726334217989304764,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-929592,io.kuber
netes.pod.namespace: kube-system,io.kubernetes.pod.uid: 21e24f7df5d7099b0f0b2dba49446d51,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b941bc429a5fde67708b36dc7f2b22c492e47a8748c222c948b2d663c89d4559,PodSandboxId:0edfbfa01ecb59b2373e0bba14228824cbb764ac1eeb467afc47561af1907ec3,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726334218010369299,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-929592,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: d7c84dd075d4f7e4fd5febc189940f4e,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e96c2e442fde740472da39a62a3d82c91995eed86608662cb709d81b508a09e,PodSandboxId:df2a40e486b685a3c47cb8eb4aebce2f03d8bea33f9b5219903618fa40c5866b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726334217781276748,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-929592,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 95065ad67a4f1610
671e72fcaed57954,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a61a19ee550f00c2e9ec2f6c3c2858f016509e649725e9030ffe238270c99ca7,PodSandboxId:6d35c1613fd15e5d6eeebc06360e4dc3c0a083b150c4583027993addfe1753f9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726334217838311043,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-929592,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a3520d0a4b75398d9e9e72bfdcfc4f4f,},Ann
otations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:34c6ad67896f3d9a7c0b0b60a5733acdf8c55ff8eae5d674e973f29c8f92f81a,PodSandboxId:e605a9e0100e5d708dfcb136f97211271e8f47299b54252e343df013be857601,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1726333690210167885,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-49mwg,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9f3ed79c-66ac-429d-bbd6-4956eab3be98,},Annot
ations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9eb824a3acd106dd672eb9b88186825642d229cad673bfd46e35ff45d82c0e17,PodSandboxId:69d86428b72f0c107b00bf1918d30f35a1c7612dbe8c1cc199ffcf24b11fb6b5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726333546846726317,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-dpdz4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2a751c8d-890c-402e-846f-8f61e3fd1965,},Annotations:map[string]string{io.kube
rnetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:06ffbf30c8c13ffdba9a832583a4629b5a821662986c777e9cca57100ed3fd9f,PodSandboxId:9b615a9a43e5968869cd39a175e3298d58f96b3d25f4c6e821a1c05d0e960e2f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726333546840132895,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7c65d6cfc9-66txm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: abf3ed52-ab5a-4415-a8a9-78e567d60348,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd34a54170b251f984dd373097036615e33493d9c7c91970296beedc2d507931,PodSandboxId:fc9e9c48c04bef982ef404b9c77cb73c2aecd644824a80cd95ac86a749ae2de2,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,Runti
meHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1726333535088419023,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-fw757,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51a38d95-fd50-4c05-a75d-a3dfeae127bd,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c1571fb1d1d1f39fe29d5063d77b72dcce6a459a4bf16a25bbc24fd1a9c53849,PodSandboxId:de29821ef5ba3c9a10a8426ece9a602feabbc039c6332faaf6599d1ac2fd5ecd,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3a
d6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726333534777571877,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6zqmd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b7beddc8-ce6a-44ed-b3e8-423baf620bbb,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:972f797d7355465fa2cd15940a0b34684d83e7ad453b631aa9e06639881c09fb,PodSandboxId:dbb138fdd1472faa7f5001e471dfabc3b867c54c89ceeca12f2aa61eff79f487,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe
954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726333523910309461,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-929592,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 95065ad67a4f1610671e72fcaed57954,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac425bd016fb1dd04caf8514eda430f13287990b5a6398599221210bb254390a,PodSandboxId:282b521b3dea81924246e09b66f30e12327d4a1bccd61d21819ea1716b126a09,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTA
INER_EXITED,CreatedAt:1726333523925855261,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-929592,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d7c84dd075d4f7e4fd5febc189940f4e,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=cc2a2079-b1f4-489f-bb49-290bae342452 name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 17:21:36 ha-929592 crio[3714]: time="2024-09-14 17:21:36.063513343Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:&PodSandboxFilter{Id:,State:&PodSandboxStateValue{State:SANDBOX_READY,},LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0b9f5843-beaa-4567-a553-93003ca9c1df name=/runtime.v1.RuntimeService/ListPodSandbox
	Sep 14 17:21:36 ha-929592 crio[3714]: time="2024-09-14 17:21:36.063851722Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:b273a10472206d3e61466d817dbc2082e74b8a07e53c8e46d8e08c47165c44a3,Metadata:&PodSandboxMetadata{Name:busybox-7dff88458-49mwg,Uid:9f3ed79c-66ac-429d-bbd6-4956eab3be98,Namespace:default,Attempt:1,},State:SANDBOX_READY,CreatedAt:1726334251209478479,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-7dff88458-49mwg,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9f3ed79c-66ac-429d-bbd6-4956eab3be98,pod-template-hash: 7dff88458,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-14T17:08:06.494431652Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:bc3029e83ceac1792089510c06c95489b820d85fa0fa6902f88b5a61b0fe4dbd,Metadata:&PodSandboxMetadata{Name:kube-vip-ha-929592,Uid:517e581b944b0c79eed2314533ce0ca8,Namespace:kube-system,Attempt:0,},State:SANDBOX_RE
ADY,CreatedAt:1726334232106323877,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-vip-ha-929592,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 517e581b944b0c79eed2314533ce0ca8,},Annotations:map[string]string{kubernetes.io/config.hash: 517e581b944b0c79eed2314533ce0ca8,kubernetes.io/config.seen: 2024-09-14T17:16:50.808417264Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:238a7746658f1c6d05de966e4253d7cb775bb460fb2a75a60f61f847ce29cad0,Metadata:&PodSandboxMetadata{Name:coredns-7c65d6cfc9-66txm,Uid:abf3ed52-ab5a-4415-a8a9-78e567d60348,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1726334217569801244,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7c65d6cfc9-66txm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: abf3ed52-ab5a-4415-a8a9-78e567d60348,k8s-app: kube-dns,pod-template-hash: 7c65d6cfc9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09
-14T17:05:46.281501018Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:4564189eeba3b81e291de82a9ba45090a53935ef617393b174ecc86513ac4f1f,Metadata:&PodSandboxMetadata{Name:coredns-7c65d6cfc9-dpdz4,Uid:2a751c8d-890c-402e-846f-8f61e3fd1965,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1726334217566817117,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7c65d6cfc9-dpdz4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2a751c8d-890c-402e-846f-8f61e3fd1965,k8s-app: kube-dns,pod-template-hash: 7c65d6cfc9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-14T17:05:46.296104407Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:6d35c1613fd15e5d6eeebc06360e4dc3c0a083b150c4583027993addfe1753f9,Metadata:&PodSandboxMetadata{Name:kube-apiserver-ha-929592,Uid:a3520d0a4b75398d9e9e72bfdcfc4f4f,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1726334217554332779,Labels:map[string]strin
g{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-ha-929592,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a3520d0a4b75398d9e9e72bfdcfc4f4f,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.54:8443,kubernetes.io/config.hash: a3520d0a4b75398d9e9e72bfdcfc4f4f,kubernetes.io/config.seen: 2024-09-14T17:05:30.022819604Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:0edfbfa01ecb59b2373e0bba14228824cbb764ac1eeb467afc47561af1907ec3,Metadata:&PodSandboxMetadata{Name:etcd-ha-929592,Uid:d7c84dd075d4f7e4fd5febc189940f4e,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1726334217552129689,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-ha-929592,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d7c84dd075d4f7e4fd5febc189940f4e,tier: control-plane,},Annotations:map[string]st
ring{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.54:2379,kubernetes.io/config.hash: d7c84dd075d4f7e4fd5febc189940f4e,kubernetes.io/config.seen: 2024-09-14T17:05:30.022818238Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:6e984099257ad2ed58634d13a332c6b8e51aa221520d7f49c092a55ad2a59f3b,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-ha-929592,Uid:21e24f7df5d7099b0f0b2dba49446d51,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1726334217552050934,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-ha-929592,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 21e24f7df5d7099b0f0b2dba49446d51,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 21e24f7df5d7099b0f0b2dba49446d51,kubernetes.io/config.seen: 2024-09-14T17:05:30.022820563Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:b99429327f50f
00b60175ace0289cb0d74aa0deada649b05989a232a2941b070,Metadata:&PodSandboxMetadata{Name:kube-proxy-6zqmd,Uid:b7beddc8-ce6a-44ed-b3e8-423baf620bbb,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1726334217510305950,Labels:map[string]string{controller-revision-hash: 648b489c5b,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-6zqmd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b7beddc8-ce6a-44ed-b3e8-423baf620bbb,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-14T17:05:34.218501479Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:0cc8e7f5a7b7afb6fba3f65631241eb133a3fd4efa559f490732ecfbfad440e4,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:4f486484-9641-4e23-8bc9-4dcae57b621a,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1726334217506311982,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.
kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f486484-9641-4e23-8bc9-4dcae57b621a,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-09-14T17:05:46.291789176Z,kubernetes.io/config.source: api,},RuntimeHand
ler:,},&PodSandbox{Id:928ea8de33905030650eec466f93285921f446dda71bb2c17462bfcc260ac207,Metadata:&PodSandboxMetadata{Name:kindnet-fw757,Uid:51a38d95-fd50-4c05-a75d-a3dfeae127bd,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1726334217466306171,Labels:map[string]string{app: kindnet,controller-revision-hash: 65cbdfc95f,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kindnet-fw757,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51a38d95-fd50-4c05-a75d-a3dfeae127bd,k8s-app: kindnet,pod-template-generation: 1,tier: node,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-14T17:05:34.227338304Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:df2a40e486b685a3c47cb8eb4aebce2f03d8bea33f9b5219903618fa40c5866b,Metadata:&PodSandboxMetadata{Name:kube-scheduler-ha-929592,Uid:95065ad67a4f1610671e72fcaed57954,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1726334217464185126,Labels:map[string]string{component: kube-scheduler,io.kube
rnetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-ha-929592,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 95065ad67a4f1610671e72fcaed57954,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 95065ad67a4f1610671e72fcaed57954,kubernetes.io/config.seen: 2024-09-14T17:05:30.022812287Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=0b9f5843-beaa-4567-a553-93003ca9c1df name=/runtime.v1.RuntimeService/ListPodSandbox
	Sep 14 17:21:36 ha-929592 crio[3714]: time="2024-09-14 17:21:36.064949659Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:&ContainerStateValue{State:CONTAINER_RUNNING,},PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=870e996d-c30a-474d-bed6-bc732160a7e9 name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 17:21:36 ha-929592 crio[3714]: time="2024-09-14 17:21:36.065028526Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=870e996d-c30a-474d-bed6-bc732160a7e9 name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 17:21:36 ha-929592 crio[3714]: time="2024-09-14 17:21:36.065263922Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9ab018b1b4c91075cb8514a5f1d910885be91963378596e37f676ad4b19ee4a2,PodSandboxId:0cc8e7f5a7b7afb6fba3f65631241eb133a3fd4efa559f490732ecfbfad440e4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:6,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726334394081744344,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f486484-9641-4e23-8bc9-4dcae57b621a,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 6,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1e3dd648bccf3344a86805e6a12abe9113ff924a52d51dba22a7dd0a72c0df48,PodSandboxId:b273a10472206d3e61466d817dbc2082e74b8a07e53c8e46d8e08c47165c44a3,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726334251735423940,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-49mwg,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9f3ed79c-66ac-429d-bbd6-4956eab3be98,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/terminat
ion-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:876c20b82135dc9e3c36bbd198419de043cb1ef47c203583e228ea1289377803,PodSandboxId:6e984099257ad2ed58634d13a332c6b8e51aa221520d7f49c092a55ad2a59f3b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726334250429723078,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-929592,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 21e24f7df5d7099b0f0b2dba49446d51,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/te
rmination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:451a416ccbf4edb1f2ee529934698e4d7d06257670bfa83420a9afba6589ffda,PodSandboxId:6d35c1613fd15e5d6eeebc06360e4dc3c0a083b150c4583027993addfe1753f9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726334249545742486,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-929592,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a3520d0a4b75398d9e9e72bfdcfc4f4f,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:59ca6347386b44e1dec1fe951406a82aab23a83a85a9728f4d7a15c9fb99c528,PodSandboxId:bc3029e83ceac1792089510c06c95489b820d85fa0fa6902f88b5a61b0fe4dbd,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1726334232225179139,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-929592,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 517e581b944b0c79eed2314533ce0ca8,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePoli
cy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ab7a7b73a44c28a6353fc7334491855caa043bee9b4c0d4d190f7e0edc2cf7d1,PodSandboxId:b99429327f50f00b60175ace0289cb0d74aa0deada649b05989a232a2941b070,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726334219339779168,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6zqmd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b7beddc8-ce6a-44ed-b3e8-423baf620bbb,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.termin
ationGracePeriod: 30,},},&Container{Id:f4b6294601181df2221b3b2a9952e0864fbef7e69634d02dace316759c43e431,PodSandboxId:928ea8de33905030650eec466f93285921f446dda71bb2c17462bfcc260ac207,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1726334218203547585,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-fw757,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51a38d95-fd50-4c05-a75d-a3dfeae127bd,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Containe
r{Id:429725720c04d774b8dd66b69992ef334c86360f500219e860192266a0d355bd,PodSandboxId:4564189eeba3b81e291de82a9ba45090a53935ef617393b174ecc86513ac4f1f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726334218089697202,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-dpdz4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2a751c8d-890c-402e-846f-8f61e3fd1965,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.res
tartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b5ab23c2f12798f997dfd6f5b6ff3d84296f2731909b2b10adf2092755601fdd,PodSandboxId:238a7746658f1c6d05de966e4253d7cb775bb460fb2a75a60f61f847ce29cad0,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726334218088047377,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-66txm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: abf3ed52-ab5a-4415-a8a9-78e567d60348,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contai
nerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b941bc429a5fde67708b36dc7f2b22c492e47a8748c222c948b2d663c89d4559,PodSandboxId:0edfbfa01ecb59b2373e0bba14228824cbb764ac1eeb467afc47561af1907ec3,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726334218010369299,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-929592,io.kubernetes.pod.namespace: kube-system,io.kuberne
tes.pod.uid: d7c84dd075d4f7e4fd5febc189940f4e,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e96c2e442fde740472da39a62a3d82c91995eed86608662cb709d81b508a09e,PodSandboxId:df2a40e486b685a3c47cb8eb4aebce2f03d8bea33f9b5219903618fa40c5866b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726334217781276748,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-929592,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 95065ad6
7a4f1610671e72fcaed57954,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=870e996d-c30a-474d-bed6-bc732160a7e9 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	9ab018b1b4c91       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      About a minute ago   Running             storage-provisioner       6                   0cc8e7f5a7b7a       storage-provisioner
	1e3dd648bccf3       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      4 minutes ago        Running             busybox                   1                   b273a10472206       busybox-7dff88458-49mwg
	876c20b82135d       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      4 minutes ago        Running             kube-controller-manager   2                   6e984099257ad       kube-controller-manager-ha-929592
	451a416ccbf4e       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      4 minutes ago        Running             kube-apiserver            3                   6d35c1613fd15       kube-apiserver-ha-929592
	59ca6347386b4       38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12                                      4 minutes ago        Running             kube-vip                  0                   bc3029e83ceac       kube-vip-ha-929592
	ab7a7b73a44c2       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      4 minutes ago        Running             kube-proxy                1                   b99429327f50f       kube-proxy-6zqmd
	df6e98168d3f8       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      4 minutes ago        Exited              storage-provisioner       5                   0cc8e7f5a7b7a       storage-provisioner
	f4b6294601181       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      4 minutes ago        Running             kindnet-cni               1                   928ea8de33905       kindnet-fw757
	429725720c04d       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      4 minutes ago        Running             coredns                   1                   4564189eeba3b       coredns-7c65d6cfc9-dpdz4
	b5ab23c2f1279       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      4 minutes ago        Running             coredns                   1                   238a7746658f1       coredns-7c65d6cfc9-66txm
	b941bc429a5fd       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      4 minutes ago        Running             etcd                      1                   0edfbfa01ecb5       etcd-ha-929592
	7d7a8daefb0ea       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      4 minutes ago        Exited              kube-controller-manager   1                   6e984099257ad       kube-controller-manager-ha-929592
	a61a19ee550f0       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      4 minutes ago        Exited              kube-apiserver            2                   6d35c1613fd15       kube-apiserver-ha-929592
	8e96c2e442fde       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      4 minutes ago        Running             kube-scheduler            1                   df2a40e486b68       kube-scheduler-ha-929592
	34c6ad67896f3       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   13 minutes ago       Exited              busybox                   0                   e605a9e0100e5       busybox-7dff88458-49mwg
	9eb824a3acd10       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      15 minutes ago       Exited              coredns                   0                   69d86428b72f0       coredns-7c65d6cfc9-dpdz4
	06ffbf30c8c13       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      15 minutes ago       Exited              coredns                   0                   9b615a9a43e59       coredns-7c65d6cfc9-66txm
	fd34a54170b25       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      16 minutes ago       Exited              kindnet-cni               0                   fc9e9c48c04be       kindnet-fw757
	c1571fb1d1d1f       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      16 minutes ago       Exited              kube-proxy                0                   de29821ef5ba3       kube-proxy-6zqmd
	ac425bd016fb1       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      16 minutes ago       Exited              etcd                      0                   282b521b3dea8       etcd-ha-929592
	972f797d73554       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      16 minutes ago       Exited              kube-scheduler            0                   dbb138fdd1472       kube-scheduler-ha-929592
	
	
	==> coredns [06ffbf30c8c13ffdba9a832583a4629b5a821662986c777e9cca57100ed3fd9f] <==
	[INFO] 10.244.0.4:42742 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000196447s
	[INFO] 10.244.2.2:34834 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000264331s
	[INFO] 10.244.2.2:59462 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000156407s
	[INFO] 10.244.2.2:42619 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001326596s
	[INFO] 10.244.2.2:44804 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000179359s
	[INFO] 10.244.2.2:41911 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000132469s
	[INFO] 10.244.2.2:33102 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000102993s
	[INFO] 10.244.1.2:55754 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000139996s
	[INFO] 10.244.1.2:43056 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.00122452s
	[INFO] 10.244.1.2:48145 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000077043s
	[INFO] 10.244.0.4:52337 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000165468s
	[INFO] 10.244.0.4:42536 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000091889s
	[INFO] 10.244.0.4:44365 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000064388s
	[INFO] 10.244.2.2:55168 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000124822s
	[INFO] 10.244.0.4:38549 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000137185s
	[INFO] 10.244.0.4:50003 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000132872s
	[INFO] 10.244.2.2:52393 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000098256s
	[INFO] 10.244.2.2:57699 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000088711s
	[INFO] 10.244.1.2:46863 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.00018617s
	[INFO] 10.244.1.2:35487 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000119162s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: Get "https://10.96.0.1:443/api/v1/services?allowWatchBookmarks=true&resourceVersion=1956&timeout=6m37s&timeoutSeconds=397&watch=true": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?allowWatchBookmarks=true&resourceVersion=1967&timeout=7m14s&timeoutSeconds=434&watch=true": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?allowWatchBookmarks=true&resourceVersion=1977&timeout=8m31s&timeoutSeconds=511&watch=true": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [429725720c04d774b8dd66b69992ef334c86360f500219e860192266a0d355bd] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: Trace[737210940]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (14-Sep-2024 17:17:02.812) (total time: 10001ms):
	Trace[737210940]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (17:17:12.814)
	Trace[737210940]: [10.001974656s] [10.001974656s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: Trace[1835578781]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (14-Sep-2024 17:17:02.939) (total time: 10001ms):
	Trace[1835578781]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (17:17:12.941)
	Trace[1835578781]: [10.001397415s] [10.001397415s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.6:35722->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.6:35722->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.6:35704->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.6:35704->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:35710->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:35710->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [9eb824a3acd106dd672eb9b88186825642d229cad673bfd46e35ff45d82c0e17] <==
	[INFO] 10.244.0.4:59604 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00010094s
	[INFO] 10.244.2.2:44822 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000134857s
	[INFO] 10.244.2.2:33999 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.00156764s
	[INFO] 10.244.1.2:33236 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000120988s
	[INFO] 10.244.1.2:56330 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001720435s
	[INFO] 10.244.1.2:55436 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00009185s
	[INFO] 10.244.1.2:57342 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00009326s
	[INFO] 10.244.1.2:54076 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000109267s
	[INFO] 10.244.0.4:39214 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000088174s
	[INFO] 10.244.2.2:52535 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000132429s
	[INFO] 10.244.2.2:57308 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000131665s
	[INFO] 10.244.2.2:55789 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000060892s
	[INFO] 10.244.1.2:51494 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000124082s
	[INFO] 10.244.1.2:52382 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000214777s
	[INFO] 10.244.1.2:43073 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000088643s
	[INFO] 10.244.1.2:44985 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000084521s
	[INFO] 10.244.0.4:58067 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000132438s
	[INFO] 10.244.0.4:49916 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000488329s
	[INFO] 10.244.2.2:49651 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000189629s
	[INFO] 10.244.2.2:55778 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000106781s
	[INFO] 10.244.1.2:40770 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000160687s
	[INFO] 10.244.1.2:44082 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000162642s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?allowWatchBookmarks=true&resourceVersion=1967&timeout=7m54s&timeoutSeconds=474&watch=true": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [b5ab23c2f12798f997dfd6f5b6ff3d84296f2731909b2b10adf2092755601fdd] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: Trace[184978772]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (14-Sep-2024 17:17:00.208) (total time: 10001ms):
	Trace[184978772]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (17:17:10.209)
	Trace[184978772]: [10.001175025s] [10.001175025s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: Trace[1477600604]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (14-Sep-2024 17:17:02.599) (total time: 10001ms):
	Trace[1477600604]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (17:17:12.600)
	Trace[1477600604]: [10.001678587s] [10.001678587s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.5:36454->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.5:36454->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.5:36460->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.5:36460->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               ha-929592
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-929592
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fbeeb744274463b05401c917e5ab21bbaf5ef95a
	                    minikube.k8s.io/name=ha-929592
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_14T17_05_31_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 14 Sep 2024 17:05:27 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-929592
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 14 Sep 2024 17:21:29 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 14 Sep 2024 17:20:09 +0000   Sat, 14 Sep 2024 17:20:09 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 14 Sep 2024 17:20:09 +0000   Sat, 14 Sep 2024 17:20:09 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 14 Sep 2024 17:20:09 +0000   Sat, 14 Sep 2024 17:20:09 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 14 Sep 2024 17:20:09 +0000   Sat, 14 Sep 2024 17:20:09 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.54
	  Hostname:    ha-929592
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 ca5487ccf56549d9a2987da2958ebdfe
	  System UUID:                ca5487cc-f565-49d9-a298-7da2958ebdfe
	  Boot ID:                    b416a941-f6c5-4da6-ab3c-4ac7463bcedd
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-49mwg              0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 coredns-7c65d6cfc9-66txm             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     16m
	  kube-system                 coredns-7c65d6cfc9-dpdz4             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     16m
	  kube-system                 etcd-ha-929592                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         16m
	  kube-system                 kindnet-fw757                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      16m
	  kube-system                 kube-apiserver-ha-929592             250m (12%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-controller-manager-ha-929592    200m (10%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-proxy-6zqmd                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-scheduler-ha-929592             100m (5%)     0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-vip-ha-929592                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m6s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 16m                    kube-proxy       
	  Normal   Starting                 3m52s                  kube-proxy       
	  Normal   NodeAllocatableEnforced  16m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   Starting                 16m                    kubelet          Starting kubelet.
	  Normal   RegisteredNode           16m                    node-controller  Node ha-929592 event: Registered Node ha-929592 in Controller
	  Normal   RegisteredNode           15m                    node-controller  Node ha-929592 event: Registered Node ha-929592 in Controller
	  Normal   RegisteredNode           13m                    node-controller  Node ha-929592 event: Registered Node ha-929592 in Controller
	  Warning  ContainerGCFailed        5m6s (x2 over 6m6s)    kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   NodeNotReady             4m57s (x3 over 5m46s)  kubelet          Node ha-929592 status is now: NodeNotReady
	  Normal   RegisteredNode           4m3s                   node-controller  Node ha-929592 event: Registered Node ha-929592 in Controller
	  Normal   RegisteredNode           4m1s                   node-controller  Node ha-929592 event: Registered Node ha-929592 in Controller
	  Normal   RegisteredNode           3m18s                  node-controller  Node ha-929592 event: Registered Node ha-929592 in Controller
	  Normal   NodeNotReady             106s                   node-controller  Node ha-929592 status is now: NodeNotReady
	  Normal   NodeReady                87s (x2 over 15m)      kubelet          Node ha-929592 status is now: NodeReady
	  Normal   NodeHasSufficientPID     87s (x2 over 16m)      kubelet          Node ha-929592 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    87s (x2 over 16m)      kubelet          Node ha-929592 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientMemory  87s (x2 over 16m)      kubelet          Node ha-929592 status is now: NodeHasSufficientMemory
	
	
	Name:               ha-929592-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-929592-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fbeeb744274463b05401c917e5ab21bbaf5ef95a
	                    minikube.k8s.io/name=ha-929592
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_14T17_06_27_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 14 Sep 2024 17:06:24 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-929592-m02
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 14 Sep 2024 17:21:33 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 14 Sep 2024 17:19:57 +0000   Sat, 14 Sep 2024 17:19:57 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 14 Sep 2024 17:19:57 +0000   Sat, 14 Sep 2024 17:19:57 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 14 Sep 2024 17:19:57 +0000   Sat, 14 Sep 2024 17:19:57 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 14 Sep 2024 17:19:57 +0000   Sat, 14 Sep 2024 17:19:57 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.148
	  Hostname:    ha-929592-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 bba17c21a65b42848fb2de3d914ef47e
	  System UUID:                bba17c21-a65b-4284-8fb2-de3d914ef47e
	  Boot ID:                    0a772f2d-56c8-463a-a563-f23ec15ee87f
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-kvmx7                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 etcd-ha-929592-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         15m
	  kube-system                 kindnet-tnjsl                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      15m
	  kube-system                 kube-apiserver-ha-929592-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-controller-manager-ha-929592-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-proxy-bcfkb                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-scheduler-ha-929592-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-vip-ha-929592-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m53s                  kube-proxy       
	  Normal  Starting                 15m                    kube-proxy       
	  Normal  NodeAllocatableEnforced  15m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  CIDRAssignmentFailed     15m                    cidrAllocator    Node ha-929592-m02 status is now: CIDRAssignmentFailed
	  Normal  NodeHasSufficientMemory  15m (x8 over 15m)      kubelet          Node ha-929592-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    15m (x8 over 15m)      kubelet          Node ha-929592-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     15m (x7 over 15m)      kubelet          Node ha-929592-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           15m                    node-controller  Node ha-929592-m02 event: Registered Node ha-929592-m02 in Controller
	  Normal  RegisteredNode           15m                    node-controller  Node ha-929592-m02 event: Registered Node ha-929592-m02 in Controller
	  Normal  RegisteredNode           13m                    node-controller  Node ha-929592-m02 event: Registered Node ha-929592-m02 in Controller
	  Normal  NodeNotReady             11m                    node-controller  Node ha-929592-m02 status is now: NodeNotReady
	  Normal  Starting                 4m24s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m23s (x8 over 4m23s)  kubelet          Node ha-929592-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m23s (x8 over 4m23s)  kubelet          Node ha-929592-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m23s (x7 over 4m23s)  kubelet          Node ha-929592-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m23s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m3s                   node-controller  Node ha-929592-m02 event: Registered Node ha-929592-m02 in Controller
	  Normal  RegisteredNode           4m1s                   node-controller  Node ha-929592-m02 event: Registered Node ha-929592-m02 in Controller
	  Normal  RegisteredNode           3m18s                  node-controller  Node ha-929592-m02 event: Registered Node ha-929592-m02 in Controller
	  Normal  NodeNotReady             107s                   node-controller  Node ha-929592-m02 status is now: NodeNotReady
	
	
	Name:               ha-929592-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-929592-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fbeeb744274463b05401c917e5ab21bbaf5ef95a
	                    minikube.k8s.io/name=ha-929592
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_14T17_08_41_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 14 Sep 2024 17:08:41 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-929592-m04
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 14 Sep 2024 17:19:10 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Sat, 14 Sep 2024 17:18:49 +0000   Sat, 14 Sep 2024 17:19:50 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Sat, 14 Sep 2024 17:18:49 +0000   Sat, 14 Sep 2024 17:19:50 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Sat, 14 Sep 2024 17:18:49 +0000   Sat, 14 Sep 2024 17:19:50 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Sat, 14 Sep 2024 17:18:49 +0000   Sat, 14 Sep 2024 17:19:50 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.51
	  Hostname:    ha-929592-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 b38c12dc6ad945c88a69c031beae5593
	  System UUID:                b38c12dc-6ad9-45c8-8a69-c031beae5593
	  Boot ID:                    b95e0ff1-0fb1-43fb-8ad9-7ae34c9be1e5
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-wf9qz    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m37s
	  kube-system                 kindnet-x76g8              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      12m
	  kube-system                 kube-proxy-l7g8d           0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 2m43s                  kube-proxy       
	  Normal   Starting                 12m                    kube-proxy       
	  Normal   NodeAllocatableEnforced  12m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  12m (x2 over 12m)      kubelet          Node ha-929592-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    12m (x2 over 12m)      kubelet          Node ha-929592-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     12m (x2 over 12m)      kubelet          Node ha-929592-m04 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           12m                    node-controller  Node ha-929592-m04 event: Registered Node ha-929592-m04 in Controller
	  Normal   CIDRAssignmentFailed     12m                    cidrAllocator    Node ha-929592-m04 status is now: CIDRAssignmentFailed
	  Normal   RegisteredNode           12m                    node-controller  Node ha-929592-m04 event: Registered Node ha-929592-m04 in Controller
	  Normal   RegisteredNode           12m                    node-controller  Node ha-929592-m04 event: Registered Node ha-929592-m04 in Controller
	  Normal   NodeReady                12m                    kubelet          Node ha-929592-m04 status is now: NodeReady
	  Normal   RegisteredNode           4m3s                   node-controller  Node ha-929592-m04 event: Registered Node ha-929592-m04 in Controller
	  Normal   RegisteredNode           4m1s                   node-controller  Node ha-929592-m04 event: Registered Node ha-929592-m04 in Controller
	  Normal   NodeNotReady             3m23s                  node-controller  Node ha-929592-m04 status is now: NodeNotReady
	  Normal   RegisteredNode           3m18s                  node-controller  Node ha-929592-m04 event: Registered Node ha-929592-m04 in Controller
	  Normal   Starting                 2m47s                  kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  2m47s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  2m47s (x2 over 2m47s)  kubelet          Node ha-929592-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m47s (x2 over 2m47s)  kubelet          Node ha-929592-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m47s (x2 over 2m47s)  kubelet          Node ha-929592-m04 status is now: NodeHasSufficientPID
	  Warning  Rebooted                 2m47s                  kubelet          Node ha-929592-m04 has been rebooted, boot id: b95e0ff1-0fb1-43fb-8ad9-7ae34c9be1e5
	  Normal   NodeReady                2m47s                  kubelet          Node ha-929592-m04 status is now: NodeReady
	  Normal   NodeNotReady             106s                   node-controller  Node ha-929592-m04 status is now: NodeNotReady
	
	
	==> dmesg <==
	[  +0.055031] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.061916] systemd-fstab-generator[595]: Ignoring "noauto" option for root device
	[  +0.180150] systemd-fstab-generator[608]: Ignoring "noauto" option for root device
	[  +0.131339] systemd-fstab-generator[620]: Ignoring "noauto" option for root device
	[  +0.280240] systemd-fstab-generator[650]: Ignoring "noauto" option for root device
	[  +3.763196] systemd-fstab-generator[746]: Ignoring "noauto" option for root device
	[  +3.977772] systemd-fstab-generator[877]: Ignoring "noauto" option for root device
	[  +0.069092] kauditd_printk_skb: 158 callbacks suppressed
	[  +6.951305] systemd-fstab-generator[1297]: Ignoring "noauto" option for root device
	[  +0.081826] kauditd_printk_skb: 79 callbacks suppressed
	[  +5.069011] kauditd_printk_skb: 28 callbacks suppressed
	[ +11.752479] kauditd_printk_skb: 31 callbacks suppressed
	[Sep14 17:06] kauditd_printk_skb: 24 callbacks suppressed
	[Sep14 17:13] kauditd_printk_skb: 1 callbacks suppressed
	[Sep14 17:16] systemd-fstab-generator[3640]: Ignoring "noauto" option for root device
	[  +0.155122] systemd-fstab-generator[3652]: Ignoring "noauto" option for root device
	[  +0.181142] systemd-fstab-generator[3666]: Ignoring "noauto" option for root device
	[  +0.141257] systemd-fstab-generator[3678]: Ignoring "noauto" option for root device
	[  +0.294661] systemd-fstab-generator[3706]: Ignoring "noauto" option for root device
	[  +7.262565] systemd-fstab-generator[3799]: Ignoring "noauto" option for root device
	[  +0.086720] kauditd_printk_skb: 100 callbacks suppressed
	[  +6.527882] kauditd_printk_skb: 12 callbacks suppressed
	[Sep14 17:17] kauditd_printk_skb: 97 callbacks suppressed
	[ +10.057912] kauditd_printk_skb: 1 callbacks suppressed
	[ +23.971326] kauditd_printk_skb: 13 callbacks suppressed
	
	
	==> etcd [ac425bd016fb1dd04caf8514eda430f13287990b5a6398599221210bb254390a] <==
	{"level":"warn","ts":"2024-09-14T17:15:11.603518Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-14T17:15:10.780672Z","time spent":"822.836675ms","remote":"127.0.0.1:56378","response type":"/etcdserverpb.KV/Range","request count":0,"request size":57,"response count":0,"response size":0,"request content":"key:\"/registry/runtimeclasses/\" range_end:\"/registry/runtimeclasses0\" limit:10000 "}
	2024/09/14 17:15:11 WARNING: [core] [Server #6] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-09-14T17:15:11.638832Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.54:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-14T17:15:11.638957Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.54:2379: use of closed network connection"}
	{"level":"info","ts":"2024-09-14T17:15:11.639156Z","caller":"etcdserver/server.go:1512","msg":"skipped leadership transfer; local server is not leader","local-member-id":"731f5c40d4af6217","current-leader-member-id":"0"}
	{"level":"info","ts":"2024-09-14T17:15:11.639557Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"5fb5e21af24b18aa"}
	{"level":"info","ts":"2024-09-14T17:15:11.639778Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"5fb5e21af24b18aa"}
	{"level":"info","ts":"2024-09-14T17:15:11.639872Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"5fb5e21af24b18aa"}
	{"level":"info","ts":"2024-09-14T17:15:11.640046Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"731f5c40d4af6217","remote-peer-id":"5fb5e21af24b18aa"}
	{"level":"info","ts":"2024-09-14T17:15:11.640142Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"731f5c40d4af6217","remote-peer-id":"5fb5e21af24b18aa"}
	{"level":"info","ts":"2024-09-14T17:15:11.640201Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"731f5c40d4af6217","remote-peer-id":"5fb5e21af24b18aa"}
	{"level":"info","ts":"2024-09-14T17:15:11.640223Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"5fb5e21af24b18aa"}
	{"level":"info","ts":"2024-09-14T17:15:11.640231Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"f7b50c386fd91100"}
	{"level":"info","ts":"2024-09-14T17:15:11.640244Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"f7b50c386fd91100"}
	{"level":"info","ts":"2024-09-14T17:15:11.640273Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"f7b50c386fd91100"}
	{"level":"info","ts":"2024-09-14T17:15:11.640323Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"731f5c40d4af6217","remote-peer-id":"f7b50c386fd91100"}
	{"level":"info","ts":"2024-09-14T17:15:11.640366Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"731f5c40d4af6217","remote-peer-id":"f7b50c386fd91100"}
	{"level":"info","ts":"2024-09-14T17:15:11.640407Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"731f5c40d4af6217","remote-peer-id":"f7b50c386fd91100"}
	{"level":"info","ts":"2024-09-14T17:15:11.640431Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"f7b50c386fd91100"}
	{"level":"info","ts":"2024-09-14T17:15:11.643890Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.39.54:2380"}
	{"level":"warn","ts":"2024-09-14T17:15:11.643941Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"8.889179795s","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"","error":"etcdserver: server stopped"}
	{"level":"info","ts":"2024-09-14T17:15:11.643990Z","caller":"traceutil/trace.go:171","msg":"trace[1102897918] range","detail":"{range_begin:; range_end:; }","duration":"8.889248444s","start":"2024-09-14T17:15:02.754732Z","end":"2024-09-14T17:15:11.643981Z","steps":["trace[1102897918] 'agreement among raft nodes before linearized reading'  (duration: 8.889177249s)"],"step_count":1}
	{"level":"error","ts":"2024-09-14T17:15:11.644040Z","caller":"etcdhttp/health.go:367","msg":"Health check error","path":"/readyz","reason":"[+]data_corruption ok\n[+]serializable_read ok\n[-]linearizable_read failed: etcdserver: server stopped\n","status-code":503,"stacktrace":"go.etcd.io/etcd/server/v3/etcdserver/api/etcdhttp.(*CheckRegistry).installRootHttpEndpoint.newHealthHandler.func2\n\tgo.etcd.io/etcd/server/v3/etcdserver/api/etcdhttp/health.go:367\nnet/http.HandlerFunc.ServeHTTP\n\tnet/http/server.go:2141\nnet/http.(*ServeMux).ServeHTTP\n\tnet/http/server.go:2519\nnet/http.serverHandler.ServeHTTP\n\tnet/http/server.go:2943\nnet/http.(*conn).serve\n\tnet/http/server.go:2014"}
	{"level":"info","ts":"2024-09-14T17:15:11.644155Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.39.54:2380"}
	{"level":"info","ts":"2024-09-14T17:15:11.644663Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"ha-929592","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.54:2380"],"advertise-client-urls":["https://192.168.39.54:2379"]}
	
	
	==> etcd [b941bc429a5fde67708b36dc7f2b22c492e47a8748c222c948b2d663c89d4559] <==
	{"level":"info","ts":"2024-09-14T17:18:10.793723Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"731f5c40d4af6217","remote-peer-id":"f7b50c386fd91100"}
	{"level":"info","ts":"2024-09-14T17:18:10.805876Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"731f5c40d4af6217","remote-peer-id":"f7b50c386fd91100"}
	{"level":"info","ts":"2024-09-14T17:18:10.816265Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"731f5c40d4af6217","to":"f7b50c386fd91100","stream-type":"stream Message"}
	{"level":"info","ts":"2024-09-14T17:18:10.816333Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"731f5c40d4af6217","remote-peer-id":"f7b50c386fd91100"}
	{"level":"info","ts":"2024-09-14T17:18:10.816817Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"731f5c40d4af6217","to":"f7b50c386fd91100","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2024-09-14T17:18:10.816884Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"731f5c40d4af6217","remote-peer-id":"f7b50c386fd91100"}
	{"level":"info","ts":"2024-09-14T17:18:17.222959Z","caller":"traceutil/trace.go:171","msg":"trace[2049881545] transaction","detail":"{read_only:false; response_revision:2479; number_of_response:1; }","duration":"111.548732ms","start":"2024-09-14T17:18:17.111384Z","end":"2024-09-14T17:18:17.222933Z","steps":["trace[2049881545] 'process raft request'  (duration: 111.411244ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-14T17:18:25.880197Z","caller":"traceutil/trace.go:171","msg":"trace[629286354] transaction","detail":"{read_only:false; response_revision:2519; number_of_response:1; }","duration":"109.843563ms","start":"2024-09-14T17:18:25.770337Z","end":"2024-09-14T17:18:25.880181Z","steps":["trace[629286354] 'process raft request'  (duration: 109.746079ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-14T17:19:02.516517Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"731f5c40d4af6217 switched to configuration voters=(6896667009749817514 8295450472155669015)"}
	{"level":"info","ts":"2024-09-14T17:19:02.518945Z","caller":"membership/cluster.go:472","msg":"removed member","cluster-id":"ad335f297da439ca","local-member-id":"731f5c40d4af6217","removed-remote-peer-id":"f7b50c386fd91100","removed-remote-peer-urls":["https://192.168.39.39:2380"]}
	{"level":"info","ts":"2024-09-14T17:19:02.519011Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"f7b50c386fd91100"}
	{"level":"warn","ts":"2024-09-14T17:19:02.519194Z","caller":"rafthttp/stream.go:286","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"f7b50c386fd91100"}
	{"level":"info","ts":"2024-09-14T17:19:02.519229Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"f7b50c386fd91100"}
	{"level":"warn","ts":"2024-09-14T17:19:02.519304Z","caller":"rafthttp/stream.go:286","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"f7b50c386fd91100"}
	{"level":"info","ts":"2024-09-14T17:19:02.519317Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"f7b50c386fd91100"}
	{"level":"info","ts":"2024-09-14T17:19:02.519359Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"731f5c40d4af6217","remote-peer-id":"f7b50c386fd91100"}
	{"level":"warn","ts":"2024-09-14T17:19:02.519694Z","caller":"rafthttp/stream.go:421","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"731f5c40d4af6217","remote-peer-id":"f7b50c386fd91100","error":"context canceled"}
	{"level":"warn","ts":"2024-09-14T17:19:02.519764Z","caller":"rafthttp/peer_status.go:66","msg":"peer became inactive (message send to peer failed)","peer-id":"f7b50c386fd91100","error":"failed to read f7b50c386fd91100 on stream MsgApp v2 (context canceled)"}
	{"level":"info","ts":"2024-09-14T17:19:02.519801Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"731f5c40d4af6217","remote-peer-id":"f7b50c386fd91100"}
	{"level":"warn","ts":"2024-09-14T17:19:02.520033Z","caller":"rafthttp/stream.go:421","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"731f5c40d4af6217","remote-peer-id":"f7b50c386fd91100","error":"context canceled"}
	{"level":"info","ts":"2024-09-14T17:19:02.520079Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"731f5c40d4af6217","remote-peer-id":"f7b50c386fd91100"}
	{"level":"info","ts":"2024-09-14T17:19:02.520097Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"f7b50c386fd91100"}
	{"level":"info","ts":"2024-09-14T17:19:02.520118Z","caller":"rafthttp/transport.go:355","msg":"removed remote peer","local-member-id":"731f5c40d4af6217","removed-remote-peer-id":"f7b50c386fd91100"}
	{"level":"warn","ts":"2024-09-14T17:19:02.530912Z","caller":"rafthttp/http.go:394","msg":"rejected stream from remote peer because it was removed","local-member-id":"731f5c40d4af6217","remote-peer-id-stream-handler":"731f5c40d4af6217","remote-peer-id-from":"f7b50c386fd91100"}
	{"level":"warn","ts":"2024-09-14T17:19:02.542910Z","caller":"embed/config_logging.go:170","msg":"rejected connection on peer endpoint","remote-addr":"192.168.39.39:43152","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 17:21:36 up 16 min,  0 users,  load average: 0.63, 0.54, 0.37
	Linux ha-929592 5.10.207 #1 SMP Sat Sep 14 07:35:46 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [f4b6294601181df2221b3b2a9952e0864fbef7e69634d02dace316759c43e431] <==
	I0914 17:20:49.434169       1 main.go:322] Node ha-929592-m04 has CIDR [10.244.3.0/24] 
	I0914 17:20:59.430703       1 main.go:295] Handling node with IPs: map[192.168.39.54:{}]
	I0914 17:20:59.430824       1 main.go:299] handling current node
	I0914 17:20:59.430856       1 main.go:295] Handling node with IPs: map[192.168.39.148:{}]
	I0914 17:20:59.430874       1 main.go:322] Node ha-929592-m02 has CIDR [10.244.1.0/24] 
	I0914 17:20:59.431045       1 main.go:295] Handling node with IPs: map[192.168.39.51:{}]
	I0914 17:20:59.431070       1 main.go:322] Node ha-929592-m04 has CIDR [10.244.3.0/24] 
	I0914 17:21:09.438843       1 main.go:295] Handling node with IPs: map[192.168.39.51:{}]
	I0914 17:21:09.438957       1 main.go:322] Node ha-929592-m04 has CIDR [10.244.3.0/24] 
	I0914 17:21:09.439090       1 main.go:295] Handling node with IPs: map[192.168.39.54:{}]
	I0914 17:21:09.439121       1 main.go:299] handling current node
	I0914 17:21:09.439146       1 main.go:295] Handling node with IPs: map[192.168.39.148:{}]
	I0914 17:21:09.439164       1 main.go:322] Node ha-929592-m02 has CIDR [10.244.1.0/24] 
	I0914 17:21:19.430779       1 main.go:295] Handling node with IPs: map[192.168.39.54:{}]
	I0914 17:21:19.430848       1 main.go:299] handling current node
	I0914 17:21:19.430870       1 main.go:295] Handling node with IPs: map[192.168.39.148:{}]
	I0914 17:21:19.430878       1 main.go:322] Node ha-929592-m02 has CIDR [10.244.1.0/24] 
	I0914 17:21:19.431080       1 main.go:295] Handling node with IPs: map[192.168.39.51:{}]
	I0914 17:21:19.431109       1 main.go:322] Node ha-929592-m04 has CIDR [10.244.3.0/24] 
	I0914 17:21:29.433996       1 main.go:295] Handling node with IPs: map[192.168.39.51:{}]
	I0914 17:21:29.434038       1 main.go:322] Node ha-929592-m04 has CIDR [10.244.3.0/24] 
	I0914 17:21:29.434183       1 main.go:295] Handling node with IPs: map[192.168.39.54:{}]
	I0914 17:21:29.434200       1 main.go:299] handling current node
	I0914 17:21:29.434212       1 main.go:295] Handling node with IPs: map[192.168.39.148:{}]
	I0914 17:21:29.434217       1 main.go:322] Node ha-929592-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kindnet [fd34a54170b251f984dd373097036615e33493d9c7c91970296beedc2d507931] <==
	I0914 17:14:46.123684       1 main.go:295] Handling node with IPs: map[192.168.39.51:{}]
	I0914 17:14:46.123803       1 main.go:322] Node ha-929592-m04 has CIDR [10.244.3.0/24] 
	I0914 17:14:46.123960       1 main.go:295] Handling node with IPs: map[192.168.39.54:{}]
	I0914 17:14:46.123985       1 main.go:299] handling current node
	I0914 17:14:46.124006       1 main.go:295] Handling node with IPs: map[192.168.39.148:{}]
	I0914 17:14:46.124021       1 main.go:322] Node ha-929592-m02 has CIDR [10.244.1.0/24] 
	I0914 17:14:46.124103       1 main.go:295] Handling node with IPs: map[192.168.39.39:{}]
	I0914 17:14:46.124121       1 main.go:322] Node ha-929592-m03 has CIDR [10.244.2.0/24] 
	I0914 17:14:56.127429       1 main.go:295] Handling node with IPs: map[192.168.39.54:{}]
	I0914 17:14:56.127550       1 main.go:299] handling current node
	I0914 17:14:56.127632       1 main.go:295] Handling node with IPs: map[192.168.39.148:{}]
	I0914 17:14:56.127659       1 main.go:322] Node ha-929592-m02 has CIDR [10.244.1.0/24] 
	I0914 17:14:56.127866       1 main.go:295] Handling node with IPs: map[192.168.39.39:{}]
	I0914 17:14:56.127918       1 main.go:322] Node ha-929592-m03 has CIDR [10.244.2.0/24] 
	I0914 17:14:56.128035       1 main.go:295] Handling node with IPs: map[192.168.39.51:{}]
	I0914 17:14:56.128075       1 main.go:322] Node ha-929592-m04 has CIDR [10.244.3.0/24] 
	E0914 17:14:58.847174       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Node: Get "https://10.96.0.1:443/api/v1/nodes?allowWatchBookmarks=true&resourceVersion=2016&timeout=6m26s&timeoutSeconds=386&watch=true": dial tcp 10.96.0.1:443: connect: no route to host
	I0914 17:15:06.127815       1 main.go:295] Handling node with IPs: map[192.168.39.39:{}]
	I0914 17:15:06.127939       1 main.go:322] Node ha-929592-m03 has CIDR [10.244.2.0/24] 
	I0914 17:15:06.128135       1 main.go:295] Handling node with IPs: map[192.168.39.51:{}]
	I0914 17:15:06.128164       1 main.go:322] Node ha-929592-m04 has CIDR [10.244.3.0/24] 
	I0914 17:15:06.128226       1 main.go:295] Handling node with IPs: map[192.168.39.54:{}]
	I0914 17:15:06.128245       1 main.go:299] handling current node
	I0914 17:15:06.128296       1 main.go:295] Handling node with IPs: map[192.168.39.148:{}]
	I0914 17:15:06.128314       1 main.go:322] Node ha-929592-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kube-apiserver [451a416ccbf4edb1f2ee529934698e4d7d06257670bfa83420a9afba6589ffda] <==
	I0914 17:17:32.005761       1 controller.go:80] Starting OpenAPI V3 AggregationController
	I0914 17:17:32.151777       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0914 17:17:32.151819       1 policy_source.go:224] refreshing policies
	I0914 17:17:32.201690       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0914 17:17:32.201870       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0914 17:17:32.201897       1 shared_informer.go:320] Caches are synced for configmaps
	I0914 17:17:32.202396       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0914 17:17:32.202434       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0914 17:17:32.203324       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0914 17:17:32.205257       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0914 17:17:32.205402       1 aggregator.go:171] initial CRD sync complete...
	I0914 17:17:32.205487       1 autoregister_controller.go:144] Starting autoregister controller
	I0914 17:17:32.205518       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0914 17:17:32.205551       1 cache.go:39] Caches are synced for autoregister controller
	I0914 17:17:32.207057       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0914 17:17:32.207096       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	W0914 17:17:32.219141       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.148 192.168.39.39]
	I0914 17:17:32.222109       1 controller.go:615] quota admission added evaluator for: endpoints
	I0914 17:17:32.231642       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E0914 17:17:32.236955       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I0914 17:17:32.247881       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0914 17:17:32.262240       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0914 17:17:33.010023       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0914 17:17:33.355807       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.148 192.168.39.39 192.168.39.54]
	W0914 17:19:13.360989       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.148 192.168.39.54]
	
	
	==> kube-apiserver [a61a19ee550f00c2e9ec2f6c3c2858f016509e649725e9030ffe238270c99ca7] <==
	I0914 17:16:58.629003       1 options.go:228] external host was not specified, using 192.168.39.54
	I0914 17:16:58.643250       1 server.go:142] Version: v1.31.1
	I0914 17:16:58.643313       1 server.go:144] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0914 17:16:59.138915       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0914 17:16:59.171144       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0914 17:16:59.177893       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0914 17:16:59.178737       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0914 17:16:59.179040       1 instance.go:232] Using reconciler: lease
	W0914 17:17:19.132807       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W0914 17:17:19.133691       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	F0914 17:17:19.180685       1 instance.go:225] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-controller-manager [7d7a8daefb0eaadaa969614a32c02514d6e1cc779d7c3c9e31540c61053fa965] <==
	I0914 17:16:59.457436       1 serving.go:386] Generated self-signed cert in-memory
	I0914 17:16:59.975746       1 controllermanager.go:197] "Starting" version="v1.31.1"
	I0914 17:16:59.975838       1 controllermanager.go:199] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0914 17:16:59.977668       1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0914 17:16:59.977910       1 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0914 17:16:59.978418       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0914 17:16:59.978467       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E0914 17:17:20.188675       1 controllermanager.go:242] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.39.54:8443/healthz\": dial tcp 192.168.39.54:8443: connect: connection refused"
	
	
	==> kube-controller-manager [876c20b82135dc9e3c36bbd198419de043cb1ef47c203583e228ea1289377803] <==
	I0914 17:19:50.877333       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="23.328257ms"
	I0914 17:19:50.879015       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="223.162µs"
	I0914 17:19:50.918969       1 node_lifecycle_controller.go:1036] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I0914 17:19:50.952561       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="30.725988ms"
	I0914 17:19:50.952870       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="114.59µs"
	I0914 17:19:53.170820       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="51.758007ms"
	I0914 17:19:53.170950       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="68.451µs"
	I0914 17:19:54.212002       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-929592-m02"
	I0914 17:19:57.111533       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-929592-m02"
	I0914 17:19:57.126669       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-929592-m02"
	I0914 17:20:00.920765       1 node_lifecycle_controller.go:1055] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I0914 17:20:09.205523       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-929592"
	I0914 17:20:09.221176       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-929592"
	I0914 17:20:09.281225       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-929592-m04"
	I0914 17:20:10.237979       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="45.152789ms"
	I0914 17:20:10.238211       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="90.613µs"
	I0914 17:20:10.279158       1 endpointslice_controller.go:344] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="failed to update kube-dns-cfz7c EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io \"kube-dns-cfz7c\": the object has been modified; please apply your changes to the latest version and try again"
	I0914 17:20:10.279555       1 event.go:377] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"77c1f8c5-54e6-464a-975a-aa4d8c587d77", APIVersion:"v1", ResourceVersion:"254", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-cfz7c EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-cfz7c": the object has been modified; please apply your changes to the latest version and try again
	I0914 17:20:10.280911       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="18.851673ms"
	I0914 17:20:10.283951       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="104.718µs"
	I0914 17:20:10.324324       1 endpointslice_controller.go:344] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="failed to update kube-dns-cfz7c EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io \"kube-dns-cfz7c\": the object has been modified; please apply your changes to the latest version and try again"
	I0914 17:20:10.324435       1 event.go:377] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"77c1f8c5-54e6-464a-975a-aa4d8c587d77", APIVersion:"v1", ResourceVersion:"254", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-cfz7c EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-cfz7c": the object has been modified; please apply your changes to the latest version and try again
	I0914 17:20:10.334949       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="25.720006ms"
	I0914 17:20:10.335175       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="168.484µs"
	I0914 17:20:10.940016       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-929592-m04"
	
	
	==> kube-proxy [ab7a7b73a44c28a6353fc7334491855caa043bee9b4c0d4d190f7e0edc2cf7d1] <==
	E0914 17:17:02.880632       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-929592\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0914 17:17:05.951737       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-929592\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0914 17:17:09.024281       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-929592\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0914 17:17:15.168772       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-929592\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0914 17:17:24.383143       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-929592\": dial tcp 192.168.39.254:8443: connect: no route to host"
	I0914 17:17:43.491896       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.54"]
	E0914 17:17:43.492032       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0914 17:17:43.525047       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0914 17:17:43.525110       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0914 17:17:43.525142       1 server_linux.go:169] "Using iptables Proxier"
	I0914 17:17:43.527204       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0914 17:17:43.527527       1 server.go:483] "Version info" version="v1.31.1"
	I0914 17:17:43.527557       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0914 17:17:43.530062       1 config.go:199] "Starting service config controller"
	I0914 17:17:43.530107       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0914 17:17:43.530140       1 config.go:105] "Starting endpoint slice config controller"
	I0914 17:17:43.530156       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0914 17:17:43.533483       1 config.go:328] "Starting node config controller"
	I0914 17:17:43.533533       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0914 17:17:43.630278       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0914 17:17:43.630384       1 shared_informer.go:320] Caches are synced for service config
	I0914 17:17:43.634198       1 shared_informer.go:320] Caches are synced for node config
	W0914 17:19:58.389373       1 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.EndpointSlice ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	W0914 17:19:58.389501       1 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.Service ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	W0914 17:19:58.389529       1 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.Node ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	
	
	==> kube-proxy [c1571fb1d1d1f39fe29d5063d77b72dcce6a459a4bf16a25bbc24fd1a9c53849] <==
	E0914 17:13:58.559831       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1918\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0914 17:13:58.559891       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-929592&resourceVersion=2002": dial tcp 192.168.39.254:8443: connect: no route to host
	E0914 17:13:58.559932       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-929592&resourceVersion=2002\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0914 17:14:04.703968       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1945": dial tcp 192.168.39.254:8443: connect: no route to host
	E0914 17:14:04.704081       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1945\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0914 17:14:04.704454       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-929592&resourceVersion=2002": dial tcp 192.168.39.254:8443: connect: no route to host
	E0914 17:14:04.704650       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-929592&resourceVersion=2002\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0914 17:14:04.704753       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1918": dial tcp 192.168.39.254:8443: connect: no route to host
	E0914 17:14:04.704810       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1918\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0914 17:14:13.919996       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1945": dial tcp 192.168.39.254:8443: connect: no route to host
	E0914 17:14:13.920069       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1945\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0914 17:14:13.920099       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1918": dial tcp 192.168.39.254:8443: connect: no route to host
	E0914 17:14:13.920124       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1918\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0914 17:14:16.992435       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-929592&resourceVersion=2002": dial tcp 192.168.39.254:8443: connect: no route to host
	E0914 17:14:16.992926       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-929592&resourceVersion=2002\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0914 17:14:32.351451       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1918": dial tcp 192.168.39.254:8443: connect: no route to host
	E0914 17:14:32.351936       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1918\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0914 17:14:35.424275       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1945": dial tcp 192.168.39.254:8443: connect: no route to host
	E0914 17:14:35.424521       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1945\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0914 17:14:41.568495       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-929592&resourceVersion=2002": dial tcp 192.168.39.254:8443: connect: no route to host
	E0914 17:14:41.568688       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-929592&resourceVersion=2002\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0914 17:14:59.999214       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1918": dial tcp 192.168.39.254:8443: connect: no route to host
	E0914 17:14:59.999559       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1918\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0914 17:15:06.144453       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1945": dial tcp 192.168.39.254:8443: connect: no route to host
	E0914 17:15:06.144535       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1945\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	
	
	==> kube-scheduler [8e96c2e442fde740472da39a62a3d82c91995eed86608662cb709d81b508a09e] <==
	E0914 17:17:27.942250       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://192.168.39.54:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.39.54:8443: connect: connection refused" logger="UnhandledError"
	W0914 17:17:28.317534       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: Get "https://192.168.39.54:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.39.54:8443: connect: connection refused
	E0914 17:17:28.317639       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://192.168.39.54:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0\": dial tcp 192.168.39.54:8443: connect: connection refused" logger="UnhandledError"
	W0914 17:17:28.426963       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: Get "https://192.168.39.54:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.39.54:8443: connect: connection refused
	E0914 17:17:28.427017       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get \"https://192.168.39.54:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0\": dial tcp 192.168.39.54:8443: connect: connection refused" logger="UnhandledError"
	W0914 17:17:29.026659       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: Get "https://192.168.39.54:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.39.54:8443: connect: connection refused
	E0914 17:17:29.026711       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get \"https://192.168.39.54:8443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 192.168.39.54:8443: connect: connection refused" logger="UnhandledError"
	W0914 17:17:29.048530       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: Get "https://192.168.39.54:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.39.54:8443: connect: connection refused
	E0914 17:17:29.048712       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get \"https://192.168.39.54:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0\": dial tcp 192.168.39.54:8443: connect: connection refused" logger="UnhandledError"
	W0914 17:17:29.191539       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: Get "https://192.168.39.54:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.39.54:8443: connect: connection refused
	E0914 17:17:29.191716       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get \"https://192.168.39.54:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0\": dial tcp 192.168.39.54:8443: connect: connection refused" logger="UnhandledError"
	W0914 17:17:29.448797       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: Get "https://192.168.39.54:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.54:8443: connect: connection refused
	E0914 17:17:29.448916       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get \"https://192.168.39.54:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0\": dial tcp 192.168.39.54:8443: connect: connection refused" logger="UnhandledError"
	W0914 17:17:29.559423       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: Get "https://192.168.39.54:8443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.39.54:8443: connect: connection refused
	E0914 17:17:29.559503       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: Get \"https://192.168.39.54:8443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0\": dial tcp 192.168.39.54:8443: connect: connection refused" logger="UnhandledError"
	W0914 17:17:32.022266       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0914 17:17:32.022321       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0914 17:17:32.022433       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0914 17:17:32.022463       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0914 17:17:32.022515       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0914 17:17:32.022542       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	I0914 17:17:37.402904       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0914 17:18:59.227055       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-wf9qz\": pod busybox-7dff88458-wf9qz is already assigned to node \"ha-929592-m04\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-wf9qz" node="ha-929592-m04"
	E0914 17:18:59.228954       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-wf9qz\": pod busybox-7dff88458-wf9qz is already assigned to node \"ha-929592-m04\"" pod="default/busybox-7dff88458-wf9qz"
	I0914 17:18:59.229509       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-wf9qz" node="ha-929592-m04"
	
	
	==> kube-scheduler [972f797d7355465fa2cd15940a0b34684d83e7ad453b631aa9e06639881c09fb] <==
	E0914 17:08:42.973360       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-ll6r9\": pod kube-proxy-ll6r9 is already assigned to node \"ha-929592-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-ll6r9" node="ha-929592-m04"
	E0914 17:08:42.977406       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod ae77fbbd-0eba-4e1d-add0-d894e73795c1(kube-system/kube-proxy-ll6r9) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-ll6r9"
	E0914 17:08:42.977758       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-ll6r9\": pod kube-proxy-ll6r9 is already assigned to node \"ha-929592-m04\"" pod="kube-system/kube-proxy-ll6r9"
	I0914 17:08:42.977890       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-ll6r9" node="ha-929592-m04"
	E0914 17:08:44.830679       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-lrzhr\": pod kube-proxy-lrzhr is already assigned to node \"ha-929592-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-lrzhr" node="ha-929592-m04"
	E0914 17:08:44.830996       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-lrzhr\": pod kube-proxy-lrzhr is already assigned to node \"ha-929592-m04\"" pod="kube-system/kube-proxy-lrzhr"
	E0914 17:08:44.831750       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-thwhv\": pod kube-proxy-thwhv is already assigned to node \"ha-929592-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-thwhv" node="ha-929592-m04"
	E0914 17:08:44.837068       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 858b1075-344d-4b2d-baed-8eea46a2f708(kube-system/kube-proxy-thwhv) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-thwhv"
	E0914 17:08:44.837157       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-thwhv\": pod kube-proxy-thwhv is already assigned to node \"ha-929592-m04\"" pod="kube-system/kube-proxy-thwhv"
	I0914 17:08:44.837232       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-thwhv" node="ha-929592-m04"
	E0914 17:08:44.837022       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-l7g8d\": pod kube-proxy-l7g8d is already assigned to node \"ha-929592-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-l7g8d" node="ha-929592-m04"
	E0914 17:08:44.839305       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod bdb91643-a0e4-4162-aeb3-0d94749f04df(kube-system/kube-proxy-l7g8d) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-l7g8d"
	E0914 17:08:44.839486       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-l7g8d\": pod kube-proxy-l7g8d is already assigned to node \"ha-929592-m04\"" pod="kube-system/kube-proxy-l7g8d"
	I0914 17:08:44.839536       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-l7g8d" node="ha-929592-m04"
	E0914 17:15:04.055830       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: unknown (get nodes)" logger="UnhandledError"
	E0914 17:15:05.672559       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: unknown (get csistoragecapacities.storage.k8s.io)" logger="UnhandledError"
	E0914 17:15:06.570552       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: unknown (get poddisruptionbudgets.policy)" logger="UnhandledError"
	E0914 17:15:06.985854       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: unknown (get services)" logger="UnhandledError"
	E0914 17:15:07.641551       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: unknown (get storageclasses.storage.k8s.io)" logger="UnhandledError"
	E0914 17:15:08.241189       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: unknown (get replicationcontrollers)" logger="UnhandledError"
	E0914 17:15:08.759986       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: unknown (get csidrivers.storage.k8s.io)" logger="UnhandledError"
	E0914 17:15:09.957710       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: unknown (get persistentvolumeclaims)" logger="UnhandledError"
	E0914 17:15:10.100388       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: unknown (get pods)" logger="UnhandledError"
	E0914 17:15:10.526092       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: unknown (get statefulsets.apps)" logger="UnhandledError"
	E0914 17:15:11.550061       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Sep 14 17:20:20 ha-929592 kubelet[1305]: E0914 17:20:20.309744    1305 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726334420308735961,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 17:20:30 ha-929592 kubelet[1305]: E0914 17:20:30.081742    1305 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 14 17:20:30 ha-929592 kubelet[1305]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 14 17:20:30 ha-929592 kubelet[1305]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 14 17:20:30 ha-929592 kubelet[1305]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 14 17:20:30 ha-929592 kubelet[1305]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 14 17:20:30 ha-929592 kubelet[1305]: E0914 17:20:30.312020    1305 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726334430311759156,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 17:20:30 ha-929592 kubelet[1305]: E0914 17:20:30.312061    1305 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726334430311759156,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 17:20:40 ha-929592 kubelet[1305]: E0914 17:20:40.314694    1305 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726334440314302126,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 17:20:40 ha-929592 kubelet[1305]: E0914 17:20:40.314726    1305 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726334440314302126,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 17:20:50 ha-929592 kubelet[1305]: E0914 17:20:50.317443    1305 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726334450317056325,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 17:20:50 ha-929592 kubelet[1305]: E0914 17:20:50.317470    1305 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726334450317056325,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 17:21:00 ha-929592 kubelet[1305]: E0914 17:21:00.319877    1305 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726334460319494147,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 17:21:00 ha-929592 kubelet[1305]: E0914 17:21:00.319945    1305 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726334460319494147,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 17:21:10 ha-929592 kubelet[1305]: E0914 17:21:10.321418    1305 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726334470320979964,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 17:21:10 ha-929592 kubelet[1305]: E0914 17:21:10.321798    1305 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726334470320979964,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 17:21:20 ha-929592 kubelet[1305]: E0914 17:21:20.324118    1305 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726334480323311181,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 17:21:20 ha-929592 kubelet[1305]: E0914 17:21:20.324164    1305 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726334480323311181,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 17:21:30 ha-929592 kubelet[1305]: E0914 17:21:30.080555    1305 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 14 17:21:30 ha-929592 kubelet[1305]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 14 17:21:30 ha-929592 kubelet[1305]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 14 17:21:30 ha-929592 kubelet[1305]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 14 17:21:30 ha-929592 kubelet[1305]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 14 17:21:30 ha-929592 kubelet[1305]: E0914 17:21:30.325488    1305 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726334490325224702,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 17:21:30 ha-929592 kubelet[1305]: E0914 17:21:30.325516    1305 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726334490325224702,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0914 17:21:35.593251   36516 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19643-8806/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
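The "bufio.Scanner: token too long" failure in the stderr block above happens when a single line in lastStart.txt is longer than bufio.Scanner's default 64 KiB token limit, so the log collector gives up on printing the last start logs. Below is a minimal Go sketch of a reader that raises that limit; the file path and buffer sizes are illustrative assumptions, not minikube's actual code.

package main

import (
	"bufio"
	"fmt"
	"os"
)

// readLongLines scans a file whose lines may exceed bufio.Scanner's
// default 64 KiB token limit (the cause of the "token too long" error above).
// Illustrative only; not minikube's implementation.
func readLongLines(path string) ([]string, error) {
	f, err := os.Open(path)
	if err != nil {
		return nil, err
	}
	defer f.Close()

	sc := bufio.NewScanner(f)
	// Start with a 64 KiB buffer but allow tokens up to 10 MiB.
	sc.Buffer(make([]byte, 64*1024), 10*1024*1024)

	var lines []string
	for sc.Scan() {
		lines = append(lines, sc.Text())
	}
	return lines, sc.Err()
}

func main() {
	lines, err := readLongLines("lastStart.txt") // hypothetical local path
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Printf("read %d lines\n", len(lines))
}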
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-929592 -n ha-929592
helpers_test.go:261: (dbg) Run:  kubectl --context ha-929592 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/StopCluster FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/StopCluster (141.85s)
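The "Plugin Failed ... pod ... is already assigned to node" entries in the kube-scheduler log above are optimistic-concurrency conflicts: a scheduler instance tried to bind a pod that had already been bound (for example by the previous leader before the control-plane restart), the API server rejected the duplicate binding with a 409 Conflict, and the scheduler logged "Pod has been assigned to node. Abort adding it back to queue." and moved on. For ordinary conflicting writes, client-go ships a retry helper; the sketch below shows that generic pattern around an assumed stub write function, and is not the scheduler's binding code (the scheduler deliberately aborts rather than retrying).

package main

import (
	"fmt"

	"k8s.io/client-go/util/retry"
)

// bindOnce stands in for a single API write that may return a 409 Conflict,
// such as creating a pods/binding for a pod that is already bound.
// Hypothetical stub, used only to show the retry-on-conflict shape.
func bindOnce() error {
	return nil
}

func main() {
	// retry.RetryOnConflict re-runs the function only while it returns a
	// Conflict error, backing off between attempts (retry.DefaultRetry).
	if err := retry.RetryOnConflict(retry.DefaultRetry, bindOnce); err != nil {
		fmt.Println("bind failed:", err)
	}
}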

                                                
                                    
x
+
TestMultiNode/serial/RestartKeepsNodes (323.88s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-396884
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-396884
E0914 17:36:45.626066   16016 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/addons-996992/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:321: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p multinode-396884: exit status 82 (2m1.798193754s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-396884-m03"  ...
	* Stopping node "multinode-396884-m02"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:323: failed to run minikube stop. args "out/minikube-linux-amd64 node list -p multinode-396884" : exit status 82
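Exit status 82 (GUEST_STOP_TIMEOUT) above means the stop path gave up while the VM still reported state "Running". The sketch below shows the generic request-stop-then-poll-with-deadline pattern that produces this kind of failure when the guest never converges; the vm type and state strings are assumptions for illustration, not minikube's stop implementation.

package main

import (
	"errors"
	"fmt"
	"time"
)

// vm simulates a machine that never leaves "Running", mirroring the
// GUEST_STOP_TIMEOUT case above. Hypothetical stand-in type.
type vm struct{}

func (vm) Stop() error   { return nil }       // request a shutdown
func (vm) State() string { return "Running" } // state never converges here

// stopWithDeadline asks the VM to stop, then polls its state until it
// reports "Stopped" or the deadline passes.
func stopWithDeadline(m vm, timeout, interval time.Duration) error {
	if err := m.Stop(); err != nil {
		return err
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if m.State() == "Stopped" {
			return nil
		}
		time.Sleep(interval)
	}
	return errors.New("stop timed out: VM still " + m.State())
}

func main() {
	// Short timeout so the example terminates quickly.
	if err := stopWithDeadline(vm{}, 2*time.Second, 500*time.Millisecond); err != nil {
		fmt.Println(err) // analogous to GUEST_STOP_TIMEOUT / exit status 82
	}
}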
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-396884 --wait=true -v=8 --alsologtostderr
E0914 17:39:04.947027   16016 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/functional-764671/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-396884 --wait=true -v=8 --alsologtostderr: (3m19.832859979s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-396884
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-396884 -n multinode-396884
helpers_test.go:244: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/RestartKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-396884 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-396884 logs -n 25: (1.48683304s)
helpers_test.go:252: TestMultiNode/serial/RestartKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-396884 ssh -n                                                                 | multinode-396884 | jenkins | v1.34.0 | 14 Sep 24 17:35 UTC | 14 Sep 24 17:35 UTC |
	|         | multinode-396884-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-396884 cp multinode-396884-m02:/home/docker/cp-test.txt                       | multinode-396884 | jenkins | v1.34.0 | 14 Sep 24 17:35 UTC | 14 Sep 24 17:35 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile3813016810/001/cp-test_multinode-396884-m02.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-396884 ssh -n                                                                 | multinode-396884 | jenkins | v1.34.0 | 14 Sep 24 17:35 UTC | 14 Sep 24 17:35 UTC |
	|         | multinode-396884-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-396884 cp multinode-396884-m02:/home/docker/cp-test.txt                       | multinode-396884 | jenkins | v1.34.0 | 14 Sep 24 17:35 UTC | 14 Sep 24 17:35 UTC |
	|         | multinode-396884:/home/docker/cp-test_multinode-396884-m02_multinode-396884.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-396884 ssh -n                                                                 | multinode-396884 | jenkins | v1.34.0 | 14 Sep 24 17:35 UTC | 14 Sep 24 17:35 UTC |
	|         | multinode-396884-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-396884 ssh -n multinode-396884 sudo cat                                       | multinode-396884 | jenkins | v1.34.0 | 14 Sep 24 17:35 UTC | 14 Sep 24 17:35 UTC |
	|         | /home/docker/cp-test_multinode-396884-m02_multinode-396884.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-396884 cp multinode-396884-m02:/home/docker/cp-test.txt                       | multinode-396884 | jenkins | v1.34.0 | 14 Sep 24 17:35 UTC | 14 Sep 24 17:35 UTC |
	|         | multinode-396884-m03:/home/docker/cp-test_multinode-396884-m02_multinode-396884-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-396884 ssh -n                                                                 | multinode-396884 | jenkins | v1.34.0 | 14 Sep 24 17:35 UTC | 14 Sep 24 17:35 UTC |
	|         | multinode-396884-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-396884 ssh -n multinode-396884-m03 sudo cat                                   | multinode-396884 | jenkins | v1.34.0 | 14 Sep 24 17:35 UTC | 14 Sep 24 17:35 UTC |
	|         | /home/docker/cp-test_multinode-396884-m02_multinode-396884-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-396884 cp testdata/cp-test.txt                                                | multinode-396884 | jenkins | v1.34.0 | 14 Sep 24 17:35 UTC | 14 Sep 24 17:35 UTC |
	|         | multinode-396884-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-396884 ssh -n                                                                 | multinode-396884 | jenkins | v1.34.0 | 14 Sep 24 17:35 UTC | 14 Sep 24 17:35 UTC |
	|         | multinode-396884-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-396884 cp multinode-396884-m03:/home/docker/cp-test.txt                       | multinode-396884 | jenkins | v1.34.0 | 14 Sep 24 17:35 UTC | 14 Sep 24 17:35 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile3813016810/001/cp-test_multinode-396884-m03.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-396884 ssh -n                                                                 | multinode-396884 | jenkins | v1.34.0 | 14 Sep 24 17:35 UTC | 14 Sep 24 17:35 UTC |
	|         | multinode-396884-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-396884 cp multinode-396884-m03:/home/docker/cp-test.txt                       | multinode-396884 | jenkins | v1.34.0 | 14 Sep 24 17:35 UTC | 14 Sep 24 17:35 UTC |
	|         | multinode-396884:/home/docker/cp-test_multinode-396884-m03_multinode-396884.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-396884 ssh -n                                                                 | multinode-396884 | jenkins | v1.34.0 | 14 Sep 24 17:35 UTC | 14 Sep 24 17:35 UTC |
	|         | multinode-396884-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-396884 ssh -n multinode-396884 sudo cat                                       | multinode-396884 | jenkins | v1.34.0 | 14 Sep 24 17:35 UTC | 14 Sep 24 17:35 UTC |
	|         | /home/docker/cp-test_multinode-396884-m03_multinode-396884.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-396884 cp multinode-396884-m03:/home/docker/cp-test.txt                       | multinode-396884 | jenkins | v1.34.0 | 14 Sep 24 17:35 UTC | 14 Sep 24 17:35 UTC |
	|         | multinode-396884-m02:/home/docker/cp-test_multinode-396884-m03_multinode-396884-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-396884 ssh -n                                                                 | multinode-396884 | jenkins | v1.34.0 | 14 Sep 24 17:35 UTC | 14 Sep 24 17:35 UTC |
	|         | multinode-396884-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-396884 ssh -n multinode-396884-m02 sudo cat                                   | multinode-396884 | jenkins | v1.34.0 | 14 Sep 24 17:35 UTC | 14 Sep 24 17:35 UTC |
	|         | /home/docker/cp-test_multinode-396884-m03_multinode-396884-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-396884 node stop m03                                                          | multinode-396884 | jenkins | v1.34.0 | 14 Sep 24 17:35 UTC | 14 Sep 24 17:35 UTC |
	| node    | multinode-396884 node start                                                             | multinode-396884 | jenkins | v1.34.0 | 14 Sep 24 17:35 UTC | 14 Sep 24 17:36 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                  |         |         |                     |                     |
	| node    | list -p multinode-396884                                                                | multinode-396884 | jenkins | v1.34.0 | 14 Sep 24 17:36 UTC |                     |
	| stop    | -p multinode-396884                                                                     | multinode-396884 | jenkins | v1.34.0 | 14 Sep 24 17:36 UTC |                     |
	| start   | -p multinode-396884                                                                     | multinode-396884 | jenkins | v1.34.0 | 14 Sep 24 17:38 UTC | 14 Sep 24 17:41 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-396884                                                                | multinode-396884 | jenkins | v1.34.0 | 14 Sep 24 17:41 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/14 17:38:07
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0914 17:38:07.838462   45790 out.go:345] Setting OutFile to fd 1 ...
	I0914 17:38:07.838603   45790 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 17:38:07.838612   45790 out.go:358] Setting ErrFile to fd 2...
	I0914 17:38:07.838618   45790 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 17:38:07.838812   45790 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19643-8806/.minikube/bin
	I0914 17:38:07.839410   45790 out.go:352] Setting JSON to false
	I0914 17:38:07.840312   45790 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":4832,"bootTime":1726330656,"procs":185,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0914 17:38:07.840412   45790 start.go:139] virtualization: kvm guest
	I0914 17:38:07.842624   45790 out.go:177] * [multinode-396884] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0914 17:38:07.843996   45790 notify.go:220] Checking for updates...
	I0914 17:38:07.844005   45790 out.go:177]   - MINIKUBE_LOCATION=19643
	I0914 17:38:07.845513   45790 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0914 17:38:07.847624   45790 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19643-8806/kubeconfig
	I0914 17:38:07.849112   45790 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19643-8806/.minikube
	I0914 17:38:07.850621   45790 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0914 17:38:07.852360   45790 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0914 17:38:07.854027   45790 config.go:182] Loaded profile config "multinode-396884": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0914 17:38:07.854176   45790 driver.go:394] Setting default libvirt URI to qemu:///system
	I0914 17:38:07.854687   45790 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 17:38:07.854737   45790 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 17:38:07.870940   45790 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43337
	I0914 17:38:07.871492   45790 main.go:141] libmachine: () Calling .GetVersion
	I0914 17:38:07.872381   45790 main.go:141] libmachine: Using API Version  1
	I0914 17:38:07.872401   45790 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 17:38:07.872881   45790 main.go:141] libmachine: () Calling .GetMachineName
	I0914 17:38:07.873136   45790 main.go:141] libmachine: (multinode-396884) Calling .DriverName
	I0914 17:38:07.909711   45790 out.go:177] * Using the kvm2 driver based on existing profile
	I0914 17:38:07.911131   45790 start.go:297] selected driver: kvm2
	I0914 17:38:07.911148   45790 start.go:901] validating driver "kvm2" against &{Name:multinode-396884 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19643/minikube-v1.34.0-1726281733-19643-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726281268-19643@sha256:cce8be0c1ac4e3d852132008ef1cc1dcf5b79f708d025db83f146ae65db32e8e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubern
etesVersion:v1.31.1 ClusterName:multinode-396884 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.202 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.97 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.207 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingr
ess-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMi
rror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0914 17:38:07.911382   45790 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0914 17:38:07.911896   45790 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 17:38:07.912009   45790 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19643-8806/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0914 17:38:07.927317   45790 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0914 17:38:07.928050   45790 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0914 17:38:07.928097   45790 cni.go:84] Creating CNI manager for ""
	I0914 17:38:07.928155   45790 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0914 17:38:07.928233   45790 start.go:340] cluster config:
	{Name:multinode-396884 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19643/minikube-v1.34.0-1726281733-19643-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726281268-19643@sha256:cce8be0c1ac4e3d852132008ef1cc1dcf5b79f708d025db83f146ae65db32e8e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-396884 Namespace:default APIServerHA
VIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.202 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.97 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.207 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false k
ong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePa
th: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0914 17:38:07.928391   45790 iso.go:125] acquiring lock: {Name:mk538f7a4abc7956f82511183088cbfac1e66ebb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 17:38:07.930510   45790 out.go:177] * Starting "multinode-396884" primary control-plane node in "multinode-396884" cluster
	I0914 17:38:07.931777   45790 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0914 17:38:07.931826   45790 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19643-8806/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0914 17:38:07.931839   45790 cache.go:56] Caching tarball of preloaded images
	I0914 17:38:07.931927   45790 preload.go:172] Found /home/jenkins/minikube-integration/19643-8806/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0914 17:38:07.931940   45790 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0914 17:38:07.932070   45790 profile.go:143] Saving config to /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/multinode-396884/config.json ...
	I0914 17:38:07.932273   45790 start.go:360] acquireMachinesLock for multinode-396884: {Name:mk26748ac63472c3f8b6f3848c12e76160c2970c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0914 17:38:07.932335   45790 start.go:364] duration metric: took 34.328µs to acquireMachinesLock for "multinode-396884"
	I0914 17:38:07.932353   45790 start.go:96] Skipping create...Using existing machine configuration
	I0914 17:38:07.932362   45790 fix.go:54] fixHost starting: 
	I0914 17:38:07.932619   45790 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 17:38:07.932650   45790 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 17:38:07.947920   45790 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41987
	I0914 17:38:07.948374   45790 main.go:141] libmachine: () Calling .GetVersion
	I0914 17:38:07.948874   45790 main.go:141] libmachine: Using API Version  1
	I0914 17:38:07.948888   45790 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 17:38:07.949189   45790 main.go:141] libmachine: () Calling .GetMachineName
	I0914 17:38:07.949387   45790 main.go:141] libmachine: (multinode-396884) Calling .DriverName
	I0914 17:38:07.949619   45790 main.go:141] libmachine: (multinode-396884) Calling .GetState
	I0914 17:38:07.951620   45790 fix.go:112] recreateIfNeeded on multinode-396884: state=Running err=<nil>
	W0914 17:38:07.951638   45790 fix.go:138] unexpected machine state, will restart: <nil>
	I0914 17:38:07.953791   45790 out.go:177] * Updating the running kvm2 "multinode-396884" VM ...
	I0914 17:38:07.955308   45790 machine.go:93] provisionDockerMachine start ...
	I0914 17:38:07.955338   45790 main.go:141] libmachine: (multinode-396884) Calling .DriverName
	I0914 17:38:07.955640   45790 main.go:141] libmachine: (multinode-396884) Calling .GetSSHHostname
	I0914 17:38:07.958963   45790 main.go:141] libmachine: (multinode-396884) DBG | domain multinode-396884 has defined MAC address 52:54:00:6d:2b:08 in network mk-multinode-396884
	I0914 17:38:07.959588   45790 main.go:141] libmachine: (multinode-396884) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:2b:08", ip: ""} in network mk-multinode-396884: {Iface:virbr1 ExpiryTime:2024-09-14 18:32:44 +0000 UTC Type:0 Mac:52:54:00:6d:2b:08 Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:multinode-396884 Clientid:01:52:54:00:6d:2b:08}
	I0914 17:38:07.959615   45790 main.go:141] libmachine: (multinode-396884) DBG | domain multinode-396884 has defined IP address 192.168.39.202 and MAC address 52:54:00:6d:2b:08 in network mk-multinode-396884
	I0914 17:38:07.959818   45790 main.go:141] libmachine: (multinode-396884) Calling .GetSSHPort
	I0914 17:38:07.959991   45790 main.go:141] libmachine: (multinode-396884) Calling .GetSSHKeyPath
	I0914 17:38:07.960157   45790 main.go:141] libmachine: (multinode-396884) Calling .GetSSHKeyPath
	I0914 17:38:07.960292   45790 main.go:141] libmachine: (multinode-396884) Calling .GetSSHUsername
	I0914 17:38:07.960441   45790 main.go:141] libmachine: Using SSH client type: native
	I0914 17:38:07.960645   45790 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.202 22 <nil> <nil>}
	I0914 17:38:07.960655   45790 main.go:141] libmachine: About to run SSH command:
	hostname
	I0914 17:38:08.075396   45790 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-396884
	
	I0914 17:38:08.075496   45790 main.go:141] libmachine: (multinode-396884) Calling .GetMachineName
	I0914 17:38:08.075749   45790 buildroot.go:166] provisioning hostname "multinode-396884"
	I0914 17:38:08.075772   45790 main.go:141] libmachine: (multinode-396884) Calling .GetMachineName
	I0914 17:38:08.075986   45790 main.go:141] libmachine: (multinode-396884) Calling .GetSSHHostname
	I0914 17:38:08.078608   45790 main.go:141] libmachine: (multinode-396884) DBG | domain multinode-396884 has defined MAC address 52:54:00:6d:2b:08 in network mk-multinode-396884
	I0914 17:38:08.079064   45790 main.go:141] libmachine: (multinode-396884) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:2b:08", ip: ""} in network mk-multinode-396884: {Iface:virbr1 ExpiryTime:2024-09-14 18:32:44 +0000 UTC Type:0 Mac:52:54:00:6d:2b:08 Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:multinode-396884 Clientid:01:52:54:00:6d:2b:08}
	I0914 17:38:08.079082   45790 main.go:141] libmachine: (multinode-396884) DBG | domain multinode-396884 has defined IP address 192.168.39.202 and MAC address 52:54:00:6d:2b:08 in network mk-multinode-396884
	I0914 17:38:08.079272   45790 main.go:141] libmachine: (multinode-396884) Calling .GetSSHPort
	I0914 17:38:08.079431   45790 main.go:141] libmachine: (multinode-396884) Calling .GetSSHKeyPath
	I0914 17:38:08.079560   45790 main.go:141] libmachine: (multinode-396884) Calling .GetSSHKeyPath
	I0914 17:38:08.079692   45790 main.go:141] libmachine: (multinode-396884) Calling .GetSSHUsername
	I0914 17:38:08.079915   45790 main.go:141] libmachine: Using SSH client type: native
	I0914 17:38:08.080093   45790 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.202 22 <nil> <nil>}
	I0914 17:38:08.080106   45790 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-396884 && echo "multinode-396884" | sudo tee /etc/hostname
	I0914 17:38:08.212819   45790 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-396884
	
	I0914 17:38:08.212843   45790 main.go:141] libmachine: (multinode-396884) Calling .GetSSHHostname
	I0914 17:38:08.215872   45790 main.go:141] libmachine: (multinode-396884) DBG | domain multinode-396884 has defined MAC address 52:54:00:6d:2b:08 in network mk-multinode-396884
	I0914 17:38:08.216273   45790 main.go:141] libmachine: (multinode-396884) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:2b:08", ip: ""} in network mk-multinode-396884: {Iface:virbr1 ExpiryTime:2024-09-14 18:32:44 +0000 UTC Type:0 Mac:52:54:00:6d:2b:08 Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:multinode-396884 Clientid:01:52:54:00:6d:2b:08}
	I0914 17:38:08.216302   45790 main.go:141] libmachine: (multinode-396884) DBG | domain multinode-396884 has defined IP address 192.168.39.202 and MAC address 52:54:00:6d:2b:08 in network mk-multinode-396884
	I0914 17:38:08.216521   45790 main.go:141] libmachine: (multinode-396884) Calling .GetSSHPort
	I0914 17:38:08.216728   45790 main.go:141] libmachine: (multinode-396884) Calling .GetSSHKeyPath
	I0914 17:38:08.216916   45790 main.go:141] libmachine: (multinode-396884) Calling .GetSSHKeyPath
	I0914 17:38:08.217067   45790 main.go:141] libmachine: (multinode-396884) Calling .GetSSHUsername
	I0914 17:38:08.217284   45790 main.go:141] libmachine: Using SSH client type: native
	I0914 17:38:08.217454   45790 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.202 22 <nil> <nil>}
	I0914 17:38:08.217470   45790 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-396884' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-396884/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-396884' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0914 17:38:08.330993   45790 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0914 17:38:08.331027   45790 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19643-8806/.minikube CaCertPath:/home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19643-8806/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19643-8806/.minikube}
	I0914 17:38:08.331045   45790 buildroot.go:174] setting up certificates
	I0914 17:38:08.331053   45790 provision.go:84] configureAuth start
	I0914 17:38:08.331077   45790 main.go:141] libmachine: (multinode-396884) Calling .GetMachineName
	I0914 17:38:08.331366   45790 main.go:141] libmachine: (multinode-396884) Calling .GetIP
	I0914 17:38:08.334046   45790 main.go:141] libmachine: (multinode-396884) DBG | domain multinode-396884 has defined MAC address 52:54:00:6d:2b:08 in network mk-multinode-396884
	I0914 17:38:08.334543   45790 main.go:141] libmachine: (multinode-396884) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:2b:08", ip: ""} in network mk-multinode-396884: {Iface:virbr1 ExpiryTime:2024-09-14 18:32:44 +0000 UTC Type:0 Mac:52:54:00:6d:2b:08 Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:multinode-396884 Clientid:01:52:54:00:6d:2b:08}
	I0914 17:38:08.334573   45790 main.go:141] libmachine: (multinode-396884) DBG | domain multinode-396884 has defined IP address 192.168.39.202 and MAC address 52:54:00:6d:2b:08 in network mk-multinode-396884
	I0914 17:38:08.334744   45790 main.go:141] libmachine: (multinode-396884) Calling .GetSSHHostname
	I0914 17:38:08.337137   45790 main.go:141] libmachine: (multinode-396884) DBG | domain multinode-396884 has defined MAC address 52:54:00:6d:2b:08 in network mk-multinode-396884
	I0914 17:38:08.337493   45790 main.go:141] libmachine: (multinode-396884) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:2b:08", ip: ""} in network mk-multinode-396884: {Iface:virbr1 ExpiryTime:2024-09-14 18:32:44 +0000 UTC Type:0 Mac:52:54:00:6d:2b:08 Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:multinode-396884 Clientid:01:52:54:00:6d:2b:08}
	I0914 17:38:08.337525   45790 main.go:141] libmachine: (multinode-396884) DBG | domain multinode-396884 has defined IP address 192.168.39.202 and MAC address 52:54:00:6d:2b:08 in network mk-multinode-396884
	I0914 17:38:08.337680   45790 provision.go:143] copyHostCerts
	I0914 17:38:08.337703   45790 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19643-8806/.minikube/cert.pem
	I0914 17:38:08.337739   45790 exec_runner.go:144] found /home/jenkins/minikube-integration/19643-8806/.minikube/cert.pem, removing ...
	I0914 17:38:08.337749   45790 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19643-8806/.minikube/cert.pem
	I0914 17:38:08.337814   45790 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19643-8806/.minikube/cert.pem (1123 bytes)
	I0914 17:38:08.337899   45790 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19643-8806/.minikube/key.pem
	I0914 17:38:08.337917   45790 exec_runner.go:144] found /home/jenkins/minikube-integration/19643-8806/.minikube/key.pem, removing ...
	I0914 17:38:08.337921   45790 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19643-8806/.minikube/key.pem
	I0914 17:38:08.337952   45790 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19643-8806/.minikube/key.pem (1675 bytes)
	I0914 17:38:08.338005   45790 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19643-8806/.minikube/ca.pem
	I0914 17:38:08.338020   45790 exec_runner.go:144] found /home/jenkins/minikube-integration/19643-8806/.minikube/ca.pem, removing ...
	I0914 17:38:08.338025   45790 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19643-8806/.minikube/ca.pem
	I0914 17:38:08.338045   45790 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19643-8806/.minikube/ca.pem (1082 bytes)
	I0914 17:38:08.338103   45790 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19643-8806/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca-key.pem org=jenkins.multinode-396884 san=[127.0.0.1 192.168.39.202 localhost minikube multinode-396884]
	I0914 17:38:08.406730   45790 provision.go:177] copyRemoteCerts
	I0914 17:38:08.406788   45790 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0914 17:38:08.406809   45790 main.go:141] libmachine: (multinode-396884) Calling .GetSSHHostname
	I0914 17:38:08.409289   45790 main.go:141] libmachine: (multinode-396884) DBG | domain multinode-396884 has defined MAC address 52:54:00:6d:2b:08 in network mk-multinode-396884
	I0914 17:38:08.409633   45790 main.go:141] libmachine: (multinode-396884) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:2b:08", ip: ""} in network mk-multinode-396884: {Iface:virbr1 ExpiryTime:2024-09-14 18:32:44 +0000 UTC Type:0 Mac:52:54:00:6d:2b:08 Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:multinode-396884 Clientid:01:52:54:00:6d:2b:08}
	I0914 17:38:08.409666   45790 main.go:141] libmachine: (multinode-396884) DBG | domain multinode-396884 has defined IP address 192.168.39.202 and MAC address 52:54:00:6d:2b:08 in network mk-multinode-396884
	I0914 17:38:08.409817   45790 main.go:141] libmachine: (multinode-396884) Calling .GetSSHPort
	I0914 17:38:08.409974   45790 main.go:141] libmachine: (multinode-396884) Calling .GetSSHKeyPath
	I0914 17:38:08.410120   45790 main.go:141] libmachine: (multinode-396884) Calling .GetSSHUsername
	I0914 17:38:08.410248   45790 sshutil.go:53] new ssh client: &{IP:192.168.39.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/multinode-396884/id_rsa Username:docker}
	I0914 17:38:08.496346   45790 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0914 17:38:08.496409   45790 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0914 17:38:08.524163   45790 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19643-8806/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0914 17:38:08.524241   45790 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0914 17:38:08.547276   45790 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19643-8806/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0914 17:38:08.547362   45790 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0914 17:38:08.570239   45790 provision.go:87] duration metric: took 239.172443ms to configureAuth
	I0914 17:38:08.570272   45790 buildroot.go:189] setting minikube options for container-runtime
	I0914 17:38:08.570514   45790 config.go:182] Loaded profile config "multinode-396884": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0914 17:38:08.570601   45790 main.go:141] libmachine: (multinode-396884) Calling .GetSSHHostname
	I0914 17:38:08.572979   45790 main.go:141] libmachine: (multinode-396884) DBG | domain multinode-396884 has defined MAC address 52:54:00:6d:2b:08 in network mk-multinode-396884
	I0914 17:38:08.573288   45790 main.go:141] libmachine: (multinode-396884) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:2b:08", ip: ""} in network mk-multinode-396884: {Iface:virbr1 ExpiryTime:2024-09-14 18:32:44 +0000 UTC Type:0 Mac:52:54:00:6d:2b:08 Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:multinode-396884 Clientid:01:52:54:00:6d:2b:08}
	I0914 17:38:08.573310   45790 main.go:141] libmachine: (multinode-396884) DBG | domain multinode-396884 has defined IP address 192.168.39.202 and MAC address 52:54:00:6d:2b:08 in network mk-multinode-396884
	I0914 17:38:08.573555   45790 main.go:141] libmachine: (multinode-396884) Calling .GetSSHPort
	I0914 17:38:08.573728   45790 main.go:141] libmachine: (multinode-396884) Calling .GetSSHKeyPath
	I0914 17:38:08.573888   45790 main.go:141] libmachine: (multinode-396884) Calling .GetSSHKeyPath
	I0914 17:38:08.574014   45790 main.go:141] libmachine: (multinode-396884) Calling .GetSSHUsername
	I0914 17:38:08.574207   45790 main.go:141] libmachine: Using SSH client type: native
	I0914 17:38:08.574372   45790 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.202 22 <nil> <nil>}
	I0914 17:38:08.574386   45790 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0914 17:39:39.376907   45790 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0914 17:39:39.376935   45790 machine.go:96] duration metric: took 1m31.421606261s to provisionDockerMachine
	I0914 17:39:39.376951   45790 start.go:293] postStartSetup for "multinode-396884" (driver="kvm2")
	I0914 17:39:39.376964   45790 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0914 17:39:39.376978   45790 main.go:141] libmachine: (multinode-396884) Calling .DriverName
	I0914 17:39:39.377238   45790 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0914 17:39:39.377263   45790 main.go:141] libmachine: (multinode-396884) Calling .GetSSHHostname
	I0914 17:39:39.380978   45790 main.go:141] libmachine: (multinode-396884) DBG | domain multinode-396884 has defined MAC address 52:54:00:6d:2b:08 in network mk-multinode-396884
	I0914 17:39:39.381373   45790 main.go:141] libmachine: (multinode-396884) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:2b:08", ip: ""} in network mk-multinode-396884: {Iface:virbr1 ExpiryTime:2024-09-14 18:32:44 +0000 UTC Type:0 Mac:52:54:00:6d:2b:08 Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:multinode-396884 Clientid:01:52:54:00:6d:2b:08}
	I0914 17:39:39.381391   45790 main.go:141] libmachine: (multinode-396884) DBG | domain multinode-396884 has defined IP address 192.168.39.202 and MAC address 52:54:00:6d:2b:08 in network mk-multinode-396884
	I0914 17:39:39.381711   45790 main.go:141] libmachine: (multinode-396884) Calling .GetSSHPort
	I0914 17:39:39.381920   45790 main.go:141] libmachine: (multinode-396884) Calling .GetSSHKeyPath
	I0914 17:39:39.382090   45790 main.go:141] libmachine: (multinode-396884) Calling .GetSSHUsername
	I0914 17:39:39.382242   45790 sshutil.go:53] new ssh client: &{IP:192.168.39.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/multinode-396884/id_rsa Username:docker}
	I0914 17:39:39.469587   45790 ssh_runner.go:195] Run: cat /etc/os-release
	I0914 17:39:39.473918   45790 command_runner.go:130] > NAME=Buildroot
	I0914 17:39:39.473945   45790 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0914 17:39:39.473952   45790 command_runner.go:130] > ID=buildroot
	I0914 17:39:39.473960   45790 command_runner.go:130] > VERSION_ID=2023.02.9
	I0914 17:39:39.473968   45790 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0914 17:39:39.474000   45790 info.go:137] Remote host: Buildroot 2023.02.9
	I0914 17:39:39.474020   45790 filesync.go:126] Scanning /home/jenkins/minikube-integration/19643-8806/.minikube/addons for local assets ...
	I0914 17:39:39.474104   45790 filesync.go:126] Scanning /home/jenkins/minikube-integration/19643-8806/.minikube/files for local assets ...
	I0914 17:39:39.474223   45790 filesync.go:149] local asset: /home/jenkins/minikube-integration/19643-8806/.minikube/files/etc/ssl/certs/160162.pem -> 160162.pem in /etc/ssl/certs
	I0914 17:39:39.474234   45790 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19643-8806/.minikube/files/etc/ssl/certs/160162.pem -> /etc/ssl/certs/160162.pem
	I0914 17:39:39.474325   45790 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0914 17:39:39.483641   45790 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/files/etc/ssl/certs/160162.pem --> /etc/ssl/certs/160162.pem (1708 bytes)
	I0914 17:39:39.506857   45790 start.go:296] duration metric: took 129.893198ms for postStartSetup
	I0914 17:39:39.506901   45790 fix.go:56] duration metric: took 1m31.574538507s for fixHost
	I0914 17:39:39.506922   45790 main.go:141] libmachine: (multinode-396884) Calling .GetSSHHostname
	I0914 17:39:39.509726   45790 main.go:141] libmachine: (multinode-396884) DBG | domain multinode-396884 has defined MAC address 52:54:00:6d:2b:08 in network mk-multinode-396884
	I0914 17:39:39.510104   45790 main.go:141] libmachine: (multinode-396884) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:2b:08", ip: ""} in network mk-multinode-396884: {Iface:virbr1 ExpiryTime:2024-09-14 18:32:44 +0000 UTC Type:0 Mac:52:54:00:6d:2b:08 Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:multinode-396884 Clientid:01:52:54:00:6d:2b:08}
	I0914 17:39:39.510136   45790 main.go:141] libmachine: (multinode-396884) DBG | domain multinode-396884 has defined IP address 192.168.39.202 and MAC address 52:54:00:6d:2b:08 in network mk-multinode-396884
	I0914 17:39:39.510331   45790 main.go:141] libmachine: (multinode-396884) Calling .GetSSHPort
	I0914 17:39:39.510516   45790 main.go:141] libmachine: (multinode-396884) Calling .GetSSHKeyPath
	I0914 17:39:39.510647   45790 main.go:141] libmachine: (multinode-396884) Calling .GetSSHKeyPath
	I0914 17:39:39.510745   45790 main.go:141] libmachine: (multinode-396884) Calling .GetSSHUsername
	I0914 17:39:39.510873   45790 main.go:141] libmachine: Using SSH client type: native
	I0914 17:39:39.511027   45790 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.202 22 <nil> <nil>}
	I0914 17:39:39.511037   45790 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0914 17:39:39.622796   45790 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726335579.589914886
	
	I0914 17:39:39.622820   45790 fix.go:216] guest clock: 1726335579.589914886
	I0914 17:39:39.622835   45790 fix.go:229] Guest: 2024-09-14 17:39:39.589914886 +0000 UTC Remote: 2024-09-14 17:39:39.506905311 +0000 UTC m=+91.705736536 (delta=83.009575ms)
	I0914 17:39:39.622858   45790 fix.go:200] guest clock delta is within tolerance: 83.009575ms
	I0914 17:39:39.622863   45790 start.go:83] releasing machines lock for "multinode-396884", held for 1m31.690518254s
	I0914 17:39:39.622884   45790 main.go:141] libmachine: (multinode-396884) Calling .DriverName
	I0914 17:39:39.623103   45790 main.go:141] libmachine: (multinode-396884) Calling .GetIP
	I0914 17:39:39.625950   45790 main.go:141] libmachine: (multinode-396884) DBG | domain multinode-396884 has defined MAC address 52:54:00:6d:2b:08 in network mk-multinode-396884
	I0914 17:39:39.626329   45790 main.go:141] libmachine: (multinode-396884) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:2b:08", ip: ""} in network mk-multinode-396884: {Iface:virbr1 ExpiryTime:2024-09-14 18:32:44 +0000 UTC Type:0 Mac:52:54:00:6d:2b:08 Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:multinode-396884 Clientid:01:52:54:00:6d:2b:08}
	I0914 17:39:39.626354   45790 main.go:141] libmachine: (multinode-396884) DBG | domain multinode-396884 has defined IP address 192.168.39.202 and MAC address 52:54:00:6d:2b:08 in network mk-multinode-396884
	I0914 17:39:39.626543   45790 main.go:141] libmachine: (multinode-396884) Calling .DriverName
	I0914 17:39:39.626965   45790 main.go:141] libmachine: (multinode-396884) Calling .DriverName
	I0914 17:39:39.627134   45790 main.go:141] libmachine: (multinode-396884) Calling .DriverName
	I0914 17:39:39.627254   45790 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0914 17:39:39.627302   45790 ssh_runner.go:195] Run: cat /version.json
	I0914 17:39:39.627325   45790 main.go:141] libmachine: (multinode-396884) Calling .GetSSHHostname
	I0914 17:39:39.627306   45790 main.go:141] libmachine: (multinode-396884) Calling .GetSSHHostname
	I0914 17:39:39.630009   45790 main.go:141] libmachine: (multinode-396884) DBG | domain multinode-396884 has defined MAC address 52:54:00:6d:2b:08 in network mk-multinode-396884
	I0914 17:39:39.630136   45790 main.go:141] libmachine: (multinode-396884) DBG | domain multinode-396884 has defined MAC address 52:54:00:6d:2b:08 in network mk-multinode-396884
	I0914 17:39:39.630524   45790 main.go:141] libmachine: (multinode-396884) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:2b:08", ip: ""} in network mk-multinode-396884: {Iface:virbr1 ExpiryTime:2024-09-14 18:32:44 +0000 UTC Type:0 Mac:52:54:00:6d:2b:08 Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:multinode-396884 Clientid:01:52:54:00:6d:2b:08}
	I0914 17:39:39.630552   45790 main.go:141] libmachine: (multinode-396884) DBG | domain multinode-396884 has defined IP address 192.168.39.202 and MAC address 52:54:00:6d:2b:08 in network mk-multinode-396884
	I0914 17:39:39.630577   45790 main.go:141] libmachine: (multinode-396884) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:2b:08", ip: ""} in network mk-multinode-396884: {Iface:virbr1 ExpiryTime:2024-09-14 18:32:44 +0000 UTC Type:0 Mac:52:54:00:6d:2b:08 Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:multinode-396884 Clientid:01:52:54:00:6d:2b:08}
	I0914 17:39:39.630593   45790 main.go:141] libmachine: (multinode-396884) DBG | domain multinode-396884 has defined IP address 192.168.39.202 and MAC address 52:54:00:6d:2b:08 in network mk-multinode-396884
	I0914 17:39:39.630709   45790 main.go:141] libmachine: (multinode-396884) Calling .GetSSHPort
	I0914 17:39:39.630869   45790 main.go:141] libmachine: (multinode-396884) Calling .GetSSHKeyPath
	I0914 17:39:39.630888   45790 main.go:141] libmachine: (multinode-396884) Calling .GetSSHPort
	I0914 17:39:39.631041   45790 main.go:141] libmachine: (multinode-396884) Calling .GetSSHKeyPath
	I0914 17:39:39.631059   45790 main.go:141] libmachine: (multinode-396884) Calling .GetSSHUsername
	I0914 17:39:39.631187   45790 main.go:141] libmachine: (multinode-396884) Calling .GetSSHUsername
	I0914 17:39:39.631317   45790 sshutil.go:53] new ssh client: &{IP:192.168.39.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/multinode-396884/id_rsa Username:docker}
	I0914 17:39:39.631328   45790 sshutil.go:53] new ssh client: &{IP:192.168.39.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/multinode-396884/id_rsa Username:docker}
	I0914 17:39:39.719974   45790 command_runner.go:130] > {"iso_version": "v1.34.0-1726281733-19643", "kicbase_version": "v0.0.45-1726243947-19640", "minikube_version": "v1.34.0", "commit": "e811e8872a58983cadac51ebe65d77fb02f32a08"}
	I0914 17:39:39.752106   45790 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0914 17:39:39.752907   45790 ssh_runner.go:195] Run: systemctl --version
	I0914 17:39:39.758907   45790 command_runner.go:130] > systemd 252 (252)
	I0914 17:39:39.758940   45790 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0914 17:39:39.759037   45790 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0914 17:39:39.915840   45790 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0914 17:39:39.925133   45790 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0914 17:39:39.925169   45790 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0914 17:39:39.925224   45790 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0914 17:39:39.934706   45790 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0914 17:39:39.934728   45790 start.go:495] detecting cgroup driver to use...
	I0914 17:39:39.934797   45790 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0914 17:39:39.953064   45790 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0914 17:39:39.967741   45790 docker.go:217] disabling cri-docker service (if available) ...
	I0914 17:39:39.967798   45790 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0914 17:39:39.982169   45790 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0914 17:39:39.996132   45790 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0914 17:39:40.147051   45790 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0914 17:39:40.307874   45790 docker.go:233] disabling docker service ...
	I0914 17:39:40.307950   45790 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0914 17:39:40.327589   45790 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0914 17:39:40.341745   45790 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0914 17:39:40.493489   45790 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0914 17:39:40.647227   45790 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0914 17:39:40.662532   45790 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0914 17:39:40.681643   45790 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0914 17:39:40.681703   45790 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0914 17:39:40.681748   45790 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 17:39:40.692618   45790 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0914 17:39:40.692685   45790 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 17:39:40.703395   45790 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 17:39:40.713647   45790 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 17:39:40.725550   45790 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0914 17:39:40.738012   45790 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 17:39:40.748792   45790 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 17:39:40.759254   45790 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 17:39:40.769560   45790 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0914 17:39:40.779762   45790 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0914 17:39:40.779827   45790 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0914 17:39:40.789791   45790 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 17:39:40.932537   45790 ssh_runner.go:195] Run: sudo systemctl restart crio
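	[Editor's note — illustrative reconstruction, not captured in the log: the sed commands above (crio.go:59/70 and the following ssh_runner lines) leave /etc/crio/crio.conf.d/02-crio.conf with roughly these key values before crio is restarted. The file itself is never printed, and its TOML section headers and other keys are omitted here:
	
	    pause_image = "registry.k8s.io/pause:3.10"
	    cgroup_manager = "cgroupfs"
	    conmon_cgroup = "pod"
	    default_sysctls = [
	      "net.ipv4.ip_unprivileged_port_start=0",
	    ]
	
	The subsequent daemon-reload and "systemctl restart crio" apply this configuration to the running runtime.]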
	I0914 17:39:41.151605   45790 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0914 17:39:41.151685   45790 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0914 17:39:41.157394   45790 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0914 17:39:41.157435   45790 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0914 17:39:41.157445   45790 command_runner.go:130] > Device: 0,22	Inode: 1306        Links: 1
	I0914 17:39:41.157456   45790 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0914 17:39:41.157464   45790 command_runner.go:130] > Access: 2024-09-14 17:39:41.039445574 +0000
	I0914 17:39:41.157472   45790 command_runner.go:130] > Modify: 2024-09-14 17:39:40.990444441 +0000
	I0914 17:39:41.157480   45790 command_runner.go:130] > Change: 2024-09-14 17:39:40.990444441 +0000
	I0914 17:39:41.157487   45790 command_runner.go:130] >  Birth: -
	I0914 17:39:41.157523   45790 start.go:563] Will wait 60s for crictl version
	I0914 17:39:41.157583   45790 ssh_runner.go:195] Run: which crictl
	I0914 17:39:41.161496   45790 command_runner.go:130] > /usr/bin/crictl
	I0914 17:39:41.161557   45790 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0914 17:39:41.201332   45790 command_runner.go:130] > Version:  0.1.0
	I0914 17:39:41.201363   45790 command_runner.go:130] > RuntimeName:  cri-o
	I0914 17:39:41.201371   45790 command_runner.go:130] > RuntimeVersion:  1.29.1
	I0914 17:39:41.201461   45790 command_runner.go:130] > RuntimeApiVersion:  v1
	I0914 17:39:41.202795   45790 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0914 17:39:41.202873   45790 ssh_runner.go:195] Run: crio --version
	I0914 17:39:41.236356   45790 command_runner.go:130] > crio version 1.29.1
	I0914 17:39:41.236380   45790 command_runner.go:130] > Version:        1.29.1
	I0914 17:39:41.236394   45790 command_runner.go:130] > GitCommit:      unknown
	I0914 17:39:41.236403   45790 command_runner.go:130] > GitCommitDate:  unknown
	I0914 17:39:41.236409   45790 command_runner.go:130] > GitTreeState:   clean
	I0914 17:39:41.236429   45790 command_runner.go:130] > BuildDate:      2024-09-14T08:18:37Z
	I0914 17:39:41.236434   45790 command_runner.go:130] > GoVersion:      go1.21.6
	I0914 17:39:41.236438   45790 command_runner.go:130] > Compiler:       gc
	I0914 17:39:41.236448   45790 command_runner.go:130] > Platform:       linux/amd64
	I0914 17:39:41.236453   45790 command_runner.go:130] > Linkmode:       dynamic
	I0914 17:39:41.236467   45790 command_runner.go:130] > BuildTags:      
	I0914 17:39:41.236475   45790 command_runner.go:130] >   containers_image_ostree_stub
	I0914 17:39:41.236479   45790 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0914 17:39:41.236484   45790 command_runner.go:130] >   btrfs_noversion
	I0914 17:39:41.236488   45790 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0914 17:39:41.236493   45790 command_runner.go:130] >   libdm_no_deferred_remove
	I0914 17:39:41.236496   45790 command_runner.go:130] >   seccomp
	I0914 17:39:41.236503   45790 command_runner.go:130] > LDFlags:          unknown
	I0914 17:39:41.236507   45790 command_runner.go:130] > SeccompEnabled:   true
	I0914 17:39:41.236511   45790 command_runner.go:130] > AppArmorEnabled:  false
	I0914 17:39:41.236579   45790 ssh_runner.go:195] Run: crio --version
	I0914 17:39:41.263512   45790 command_runner.go:130] > crio version 1.29.1
	I0914 17:39:41.263533   45790 command_runner.go:130] > Version:        1.29.1
	I0914 17:39:41.263538   45790 command_runner.go:130] > GitCommit:      unknown
	I0914 17:39:41.263543   45790 command_runner.go:130] > GitCommitDate:  unknown
	I0914 17:39:41.263547   45790 command_runner.go:130] > GitTreeState:   clean
	I0914 17:39:41.263552   45790 command_runner.go:130] > BuildDate:      2024-09-14T08:18:37Z
	I0914 17:39:41.263556   45790 command_runner.go:130] > GoVersion:      go1.21.6
	I0914 17:39:41.263560   45790 command_runner.go:130] > Compiler:       gc
	I0914 17:39:41.263573   45790 command_runner.go:130] > Platform:       linux/amd64
	I0914 17:39:41.263577   45790 command_runner.go:130] > Linkmode:       dynamic
	I0914 17:39:41.263592   45790 command_runner.go:130] > BuildTags:      
	I0914 17:39:41.263596   45790 command_runner.go:130] >   containers_image_ostree_stub
	I0914 17:39:41.263601   45790 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0914 17:39:41.263606   45790 command_runner.go:130] >   btrfs_noversion
	I0914 17:39:41.263611   45790 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0914 17:39:41.263617   45790 command_runner.go:130] >   libdm_no_deferred_remove
	I0914 17:39:41.263621   45790 command_runner.go:130] >   seccomp
	I0914 17:39:41.263625   45790 command_runner.go:130] > LDFlags:          unknown
	I0914 17:39:41.263641   45790 command_runner.go:130] > SeccompEnabled:   true
	I0914 17:39:41.263648   45790 command_runner.go:130] > AppArmorEnabled:  false
	I0914 17:39:41.266908   45790 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0914 17:39:41.268257   45790 main.go:141] libmachine: (multinode-396884) Calling .GetIP
	I0914 17:39:41.270873   45790 main.go:141] libmachine: (multinode-396884) DBG | domain multinode-396884 has defined MAC address 52:54:00:6d:2b:08 in network mk-multinode-396884
	I0914 17:39:41.271243   45790 main.go:141] libmachine: (multinode-396884) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:2b:08", ip: ""} in network mk-multinode-396884: {Iface:virbr1 ExpiryTime:2024-09-14 18:32:44 +0000 UTC Type:0 Mac:52:54:00:6d:2b:08 Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:multinode-396884 Clientid:01:52:54:00:6d:2b:08}
	I0914 17:39:41.271268   45790 main.go:141] libmachine: (multinode-396884) DBG | domain multinode-396884 has defined IP address 192.168.39.202 and MAC address 52:54:00:6d:2b:08 in network mk-multinode-396884
	I0914 17:39:41.271565   45790 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0914 17:39:41.275800   45790 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I0914 17:39:41.275936   45790 kubeadm.go:883] updating cluster {Name:multinode-396884 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19643/minikube-v1.34.0-1726281733-19643-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726281268-19643@sha256:cce8be0c1ac4e3d852132008ef1cc1dcf5b79f708d025db83f146ae65db32e8e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.
31.1 ClusterName:multinode-396884 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.202 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.97 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.207 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:fal
se inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Disab
leOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0914 17:39:41.276082   45790 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0914 17:39:41.276126   45790 ssh_runner.go:195] Run: sudo crictl images --output json
	I0914 17:39:41.316246   45790 command_runner.go:130] > {
	I0914 17:39:41.316271   45790 command_runner.go:130] >   "images": [
	I0914 17:39:41.316277   45790 command_runner.go:130] >     {
	I0914 17:39:41.316307   45790 command_runner.go:130] >       "id": "12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f",
	I0914 17:39:41.316314   45790 command_runner.go:130] >       "repoTags": [
	I0914 17:39:41.316322   45790 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240813-c6f155d6"
	I0914 17:39:41.316328   45790 command_runner.go:130] >       ],
	I0914 17:39:41.316333   45790 command_runner.go:130] >       "repoDigests": [
	I0914 17:39:41.316344   45790 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b",
	I0914 17:39:41.316354   45790 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"
	I0914 17:39:41.316371   45790 command_runner.go:130] >       ],
	I0914 17:39:41.316378   45790 command_runner.go:130] >       "size": "87190579",
	I0914 17:39:41.316382   45790 command_runner.go:130] >       "uid": null,
	I0914 17:39:41.316388   45790 command_runner.go:130] >       "username": "",
	I0914 17:39:41.316393   45790 command_runner.go:130] >       "spec": null,
	I0914 17:39:41.316399   45790 command_runner.go:130] >       "pinned": false
	I0914 17:39:41.316403   45790 command_runner.go:130] >     },
	I0914 17:39:41.316408   45790 command_runner.go:130] >     {
	I0914 17:39:41.316413   45790 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0914 17:39:41.316420   45790 command_runner.go:130] >       "repoTags": [
	I0914 17:39:41.316433   45790 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0914 17:39:41.316442   45790 command_runner.go:130] >       ],
	I0914 17:39:41.316448   45790 command_runner.go:130] >       "repoDigests": [
	I0914 17:39:41.316463   45790 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0914 17:39:41.316475   45790 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0914 17:39:41.316482   45790 command_runner.go:130] >       ],
	I0914 17:39:41.316486   45790 command_runner.go:130] >       "size": "1363676",
	I0914 17:39:41.316492   45790 command_runner.go:130] >       "uid": null,
	I0914 17:39:41.316498   45790 command_runner.go:130] >       "username": "",
	I0914 17:39:41.316504   45790 command_runner.go:130] >       "spec": null,
	I0914 17:39:41.316508   45790 command_runner.go:130] >       "pinned": false
	I0914 17:39:41.316513   45790 command_runner.go:130] >     },
	I0914 17:39:41.316517   45790 command_runner.go:130] >     {
	I0914 17:39:41.316525   45790 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0914 17:39:41.316533   45790 command_runner.go:130] >       "repoTags": [
	I0914 17:39:41.316544   45790 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0914 17:39:41.316552   45790 command_runner.go:130] >       ],
	I0914 17:39:41.316562   45790 command_runner.go:130] >       "repoDigests": [
	I0914 17:39:41.316574   45790 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0914 17:39:41.316585   45790 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0914 17:39:41.316591   45790 command_runner.go:130] >       ],
	I0914 17:39:41.316596   45790 command_runner.go:130] >       "size": "31470524",
	I0914 17:39:41.316602   45790 command_runner.go:130] >       "uid": null,
	I0914 17:39:41.316606   45790 command_runner.go:130] >       "username": "",
	I0914 17:39:41.316611   45790 command_runner.go:130] >       "spec": null,
	I0914 17:39:41.316616   45790 command_runner.go:130] >       "pinned": false
	I0914 17:39:41.316623   45790 command_runner.go:130] >     },
	I0914 17:39:41.316632   45790 command_runner.go:130] >     {
	I0914 17:39:41.316645   45790 command_runner.go:130] >       "id": "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6",
	I0914 17:39:41.316655   45790 command_runner.go:130] >       "repoTags": [
	I0914 17:39:41.316666   45790 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.3"
	I0914 17:39:41.316675   45790 command_runner.go:130] >       ],
	I0914 17:39:41.316683   45790 command_runner.go:130] >       "repoDigests": [
	I0914 17:39:41.316696   45790 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e",
	I0914 17:39:41.316714   45790 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50"
	I0914 17:39:41.316722   45790 command_runner.go:130] >       ],
	I0914 17:39:41.316728   45790 command_runner.go:130] >       "size": "63273227",
	I0914 17:39:41.316736   45790 command_runner.go:130] >       "uid": null,
	I0914 17:39:41.316741   45790 command_runner.go:130] >       "username": "nonroot",
	I0914 17:39:41.316750   45790 command_runner.go:130] >       "spec": null,
	I0914 17:39:41.316756   45790 command_runner.go:130] >       "pinned": false
	I0914 17:39:41.316763   45790 command_runner.go:130] >     },
	I0914 17:39:41.316768   45790 command_runner.go:130] >     {
	I0914 17:39:41.316780   45790 command_runner.go:130] >       "id": "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4",
	I0914 17:39:41.316789   45790 command_runner.go:130] >       "repoTags": [
	I0914 17:39:41.316796   45790 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.15-0"
	I0914 17:39:41.316804   45790 command_runner.go:130] >       ],
	I0914 17:39:41.316810   45790 command_runner.go:130] >       "repoDigests": [
	I0914 17:39:41.316823   45790 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d",
	I0914 17:39:41.316836   45790 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"
	I0914 17:39:41.316842   45790 command_runner.go:130] >       ],
	I0914 17:39:41.316851   45790 command_runner.go:130] >       "size": "149009664",
	I0914 17:39:41.316857   45790 command_runner.go:130] >       "uid": {
	I0914 17:39:41.316866   45790 command_runner.go:130] >         "value": "0"
	I0914 17:39:41.316873   45790 command_runner.go:130] >       },
	I0914 17:39:41.316883   45790 command_runner.go:130] >       "username": "",
	I0914 17:39:41.316892   45790 command_runner.go:130] >       "spec": null,
	I0914 17:39:41.316901   45790 command_runner.go:130] >       "pinned": false
	I0914 17:39:41.316909   45790 command_runner.go:130] >     },
	I0914 17:39:41.316914   45790 command_runner.go:130] >     {
	I0914 17:39:41.316923   45790 command_runner.go:130] >       "id": "6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee",
	I0914 17:39:41.316927   45790 command_runner.go:130] >       "repoTags": [
	I0914 17:39:41.316932   45790 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.31.1"
	I0914 17:39:41.316937   45790 command_runner.go:130] >       ],
	I0914 17:39:41.316941   45790 command_runner.go:130] >       "repoDigests": [
	I0914 17:39:41.316951   45790 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:1f30d71692d2ab71ce2c1dd5fab86e0cb00ce888d21de18806f5482021d18771",
	I0914 17:39:41.316970   45790 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb"
	I0914 17:39:41.316979   45790 command_runner.go:130] >       ],
	I0914 17:39:41.316984   45790 command_runner.go:130] >       "size": "95237600",
	I0914 17:39:41.316990   45790 command_runner.go:130] >       "uid": {
	I0914 17:39:41.316999   45790 command_runner.go:130] >         "value": "0"
	I0914 17:39:41.317005   45790 command_runner.go:130] >       },
	I0914 17:39:41.317014   45790 command_runner.go:130] >       "username": "",
	I0914 17:39:41.317020   45790 command_runner.go:130] >       "spec": null,
	I0914 17:39:41.317026   45790 command_runner.go:130] >       "pinned": false
	I0914 17:39:41.317034   45790 command_runner.go:130] >     },
	I0914 17:39:41.317040   45790 command_runner.go:130] >     {
	I0914 17:39:41.317052   45790 command_runner.go:130] >       "id": "175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1",
	I0914 17:39:41.317065   45790 command_runner.go:130] >       "repoTags": [
	I0914 17:39:41.317074   45790 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.31.1"
	I0914 17:39:41.317078   45790 command_runner.go:130] >       ],
	I0914 17:39:41.317082   45790 command_runner.go:130] >       "repoDigests": [
	I0914 17:39:41.317092   45790 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1",
	I0914 17:39:41.317100   45790 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:e6c5253433f9032cff2bd9b1f41e29b9691a6d6ec97903896c0ca5f069a63748"
	I0914 17:39:41.317106   45790 command_runner.go:130] >       ],
	I0914 17:39:41.317110   45790 command_runner.go:130] >       "size": "89437508",
	I0914 17:39:41.317113   45790 command_runner.go:130] >       "uid": {
	I0914 17:39:41.317117   45790 command_runner.go:130] >         "value": "0"
	I0914 17:39:41.317121   45790 command_runner.go:130] >       },
	I0914 17:39:41.317125   45790 command_runner.go:130] >       "username": "",
	I0914 17:39:41.317129   45790 command_runner.go:130] >       "spec": null,
	I0914 17:39:41.317133   45790 command_runner.go:130] >       "pinned": false
	I0914 17:39:41.317136   45790 command_runner.go:130] >     },
	I0914 17:39:41.317139   45790 command_runner.go:130] >     {
	I0914 17:39:41.317145   45790 command_runner.go:130] >       "id": "60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561",
	I0914 17:39:41.317151   45790 command_runner.go:130] >       "repoTags": [
	I0914 17:39:41.317156   45790 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.31.1"
	I0914 17:39:41.317159   45790 command_runner.go:130] >       ],
	I0914 17:39:41.317163   45790 command_runner.go:130] >       "repoDigests": [
	I0914 17:39:41.317189   45790 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44",
	I0914 17:39:41.317199   45790 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:bb26bcf4490a4653ecb77ceb883c0fd8dd876f104f776aa0a6cbf9df68b16af2"
	I0914 17:39:41.317202   45790 command_runner.go:130] >       ],
	I0914 17:39:41.317207   45790 command_runner.go:130] >       "size": "92733849",
	I0914 17:39:41.317211   45790 command_runner.go:130] >       "uid": null,
	I0914 17:39:41.317214   45790 command_runner.go:130] >       "username": "",
	I0914 17:39:41.317218   45790 command_runner.go:130] >       "spec": null,
	I0914 17:39:41.317222   45790 command_runner.go:130] >       "pinned": false
	I0914 17:39:41.317225   45790 command_runner.go:130] >     },
	I0914 17:39:41.317227   45790 command_runner.go:130] >     {
	I0914 17:39:41.317233   45790 command_runner.go:130] >       "id": "9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b",
	I0914 17:39:41.317237   45790 command_runner.go:130] >       "repoTags": [
	I0914 17:39:41.317241   45790 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.31.1"
	I0914 17:39:41.317245   45790 command_runner.go:130] >       ],
	I0914 17:39:41.317248   45790 command_runner.go:130] >       "repoDigests": [
	I0914 17:39:41.317258   45790 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0",
	I0914 17:39:41.317268   45790 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:cb9d9404dddf0c6728b99a42d10d8ab1ece2a1c793ef1d7b03eddaeac26864d8"
	I0914 17:39:41.317274   45790 command_runner.go:130] >       ],
	I0914 17:39:41.317280   45790 command_runner.go:130] >       "size": "68420934",
	I0914 17:39:41.317291   45790 command_runner.go:130] >       "uid": {
	I0914 17:39:41.317295   45790 command_runner.go:130] >         "value": "0"
	I0914 17:39:41.317298   45790 command_runner.go:130] >       },
	I0914 17:39:41.317301   45790 command_runner.go:130] >       "username": "",
	I0914 17:39:41.317305   45790 command_runner.go:130] >       "spec": null,
	I0914 17:39:41.317308   45790 command_runner.go:130] >       "pinned": false
	I0914 17:39:41.317311   45790 command_runner.go:130] >     },
	I0914 17:39:41.317315   45790 command_runner.go:130] >     {
	I0914 17:39:41.317320   45790 command_runner.go:130] >       "id": "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I0914 17:39:41.317326   45790 command_runner.go:130] >       "repoTags": [
	I0914 17:39:41.317332   45790 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I0914 17:39:41.317337   45790 command_runner.go:130] >       ],
	I0914 17:39:41.317343   45790 command_runner.go:130] >       "repoDigests": [
	I0914 17:39:41.317353   45790 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a",
	I0914 17:39:41.317370   45790 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I0914 17:39:41.317378   45790 command_runner.go:130] >       ],
	I0914 17:39:41.317382   45790 command_runner.go:130] >       "size": "742080",
	I0914 17:39:41.317386   45790 command_runner.go:130] >       "uid": {
	I0914 17:39:41.317390   45790 command_runner.go:130] >         "value": "65535"
	I0914 17:39:41.317393   45790 command_runner.go:130] >       },
	I0914 17:39:41.317397   45790 command_runner.go:130] >       "username": "",
	I0914 17:39:41.317401   45790 command_runner.go:130] >       "spec": null,
	I0914 17:39:41.317404   45790 command_runner.go:130] >       "pinned": true
	I0914 17:39:41.317408   45790 command_runner.go:130] >     }
	I0914 17:39:41.317411   45790 command_runner.go:130] >   ]
	I0914 17:39:41.317414   45790 command_runner.go:130] > }
	I0914 17:39:41.317649   45790 crio.go:514] all images are preloaded for cri-o runtime.
	I0914 17:39:41.317667   45790 crio.go:433] Images already preloaded, skipping extraction
	I0914 17:39:41.317728   45790 ssh_runner.go:195] Run: sudo crictl images --output json
	I0914 17:39:41.355211   45790 command_runner.go:130] > {
	I0914 17:39:41.355232   45790 command_runner.go:130] >   "images": [
	I0914 17:39:41.355238   45790 command_runner.go:130] >     {
	I0914 17:39:41.355248   45790 command_runner.go:130] >       "id": "12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f",
	I0914 17:39:41.355255   45790 command_runner.go:130] >       "repoTags": [
	I0914 17:39:41.355263   45790 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240813-c6f155d6"
	I0914 17:39:41.355268   45790 command_runner.go:130] >       ],
	I0914 17:39:41.355273   45790 command_runner.go:130] >       "repoDigests": [
	I0914 17:39:41.355285   45790 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b",
	I0914 17:39:41.355296   45790 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"
	I0914 17:39:41.355307   45790 command_runner.go:130] >       ],
	I0914 17:39:41.355313   45790 command_runner.go:130] >       "size": "87190579",
	I0914 17:39:41.355319   45790 command_runner.go:130] >       "uid": null,
	I0914 17:39:41.355324   45790 command_runner.go:130] >       "username": "",
	I0914 17:39:41.355337   45790 command_runner.go:130] >       "spec": null,
	I0914 17:39:41.355348   45790 command_runner.go:130] >       "pinned": false
	I0914 17:39:41.355355   45790 command_runner.go:130] >     },
	I0914 17:39:41.355361   45790 command_runner.go:130] >     {
	I0914 17:39:41.355379   45790 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0914 17:39:41.355388   45790 command_runner.go:130] >       "repoTags": [
	I0914 17:39:41.355397   45790 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0914 17:39:41.355404   45790 command_runner.go:130] >       ],
	I0914 17:39:41.355411   45790 command_runner.go:130] >       "repoDigests": [
	I0914 17:39:41.355424   45790 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0914 17:39:41.355447   45790 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0914 17:39:41.355456   45790 command_runner.go:130] >       ],
	I0914 17:39:41.355463   45790 command_runner.go:130] >       "size": "1363676",
	I0914 17:39:41.355469   45790 command_runner.go:130] >       "uid": null,
	I0914 17:39:41.355484   45790 command_runner.go:130] >       "username": "",
	I0914 17:39:41.355493   45790 command_runner.go:130] >       "spec": null,
	I0914 17:39:41.355499   45790 command_runner.go:130] >       "pinned": false
	I0914 17:39:41.355506   45790 command_runner.go:130] >     },
	I0914 17:39:41.355512   45790 command_runner.go:130] >     {
	I0914 17:39:41.355523   45790 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0914 17:39:41.355531   45790 command_runner.go:130] >       "repoTags": [
	I0914 17:39:41.355540   45790 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0914 17:39:41.355549   45790 command_runner.go:130] >       ],
	I0914 17:39:41.355557   45790 command_runner.go:130] >       "repoDigests": [
	I0914 17:39:41.355582   45790 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0914 17:39:41.355593   45790 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0914 17:39:41.355599   45790 command_runner.go:130] >       ],
	I0914 17:39:41.355607   45790 command_runner.go:130] >       "size": "31470524",
	I0914 17:39:41.355617   45790 command_runner.go:130] >       "uid": null,
	I0914 17:39:41.355626   45790 command_runner.go:130] >       "username": "",
	I0914 17:39:41.355635   45790 command_runner.go:130] >       "spec": null,
	I0914 17:39:41.355643   45790 command_runner.go:130] >       "pinned": false
	I0914 17:39:41.355651   45790 command_runner.go:130] >     },
	I0914 17:39:41.355657   45790 command_runner.go:130] >     {
	I0914 17:39:41.355671   45790 command_runner.go:130] >       "id": "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6",
	I0914 17:39:41.355680   45790 command_runner.go:130] >       "repoTags": [
	I0914 17:39:41.355688   45790 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.3"
	I0914 17:39:41.355703   45790 command_runner.go:130] >       ],
	I0914 17:39:41.355713   45790 command_runner.go:130] >       "repoDigests": [
	I0914 17:39:41.355729   45790 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e",
	I0914 17:39:41.355755   45790 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50"
	I0914 17:39:41.355763   45790 command_runner.go:130] >       ],
	I0914 17:39:41.355771   45790 command_runner.go:130] >       "size": "63273227",
	I0914 17:39:41.355781   45790 command_runner.go:130] >       "uid": null,
	I0914 17:39:41.355794   45790 command_runner.go:130] >       "username": "nonroot",
	I0914 17:39:41.355803   45790 command_runner.go:130] >       "spec": null,
	I0914 17:39:41.355813   45790 command_runner.go:130] >       "pinned": false
	I0914 17:39:41.355821   45790 command_runner.go:130] >     },
	I0914 17:39:41.355827   45790 command_runner.go:130] >     {
	I0914 17:39:41.355839   45790 command_runner.go:130] >       "id": "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4",
	I0914 17:39:41.355848   45790 command_runner.go:130] >       "repoTags": [
	I0914 17:39:41.355855   45790 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.15-0"
	I0914 17:39:41.355864   45790 command_runner.go:130] >       ],
	I0914 17:39:41.355872   45790 command_runner.go:130] >       "repoDigests": [
	I0914 17:39:41.355893   45790 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d",
	I0914 17:39:41.355908   45790 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"
	I0914 17:39:41.355915   45790 command_runner.go:130] >       ],
	I0914 17:39:41.355924   45790 command_runner.go:130] >       "size": "149009664",
	I0914 17:39:41.355932   45790 command_runner.go:130] >       "uid": {
	I0914 17:39:41.355939   45790 command_runner.go:130] >         "value": "0"
	I0914 17:39:41.355946   45790 command_runner.go:130] >       },
	I0914 17:39:41.355954   45790 command_runner.go:130] >       "username": "",
	I0914 17:39:41.355962   45790 command_runner.go:130] >       "spec": null,
	I0914 17:39:41.355969   45790 command_runner.go:130] >       "pinned": false
	I0914 17:39:41.355978   45790 command_runner.go:130] >     },
	I0914 17:39:41.355983   45790 command_runner.go:130] >     {
	I0914 17:39:41.355996   45790 command_runner.go:130] >       "id": "6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee",
	I0914 17:39:41.356006   45790 command_runner.go:130] >       "repoTags": [
	I0914 17:39:41.356022   45790 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.31.1"
	I0914 17:39:41.356030   45790 command_runner.go:130] >       ],
	I0914 17:39:41.356043   45790 command_runner.go:130] >       "repoDigests": [
	I0914 17:39:41.356058   45790 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:1f30d71692d2ab71ce2c1dd5fab86e0cb00ce888d21de18806f5482021d18771",
	I0914 17:39:41.356073   45790 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb"
	I0914 17:39:41.356081   45790 command_runner.go:130] >       ],
	I0914 17:39:41.356088   45790 command_runner.go:130] >       "size": "95237600",
	I0914 17:39:41.356097   45790 command_runner.go:130] >       "uid": {
	I0914 17:39:41.356105   45790 command_runner.go:130] >         "value": "0"
	I0914 17:39:41.356113   45790 command_runner.go:130] >       },
	I0914 17:39:41.356120   45790 command_runner.go:130] >       "username": "",
	I0914 17:39:41.356129   45790 command_runner.go:130] >       "spec": null,
	I0914 17:39:41.356138   45790 command_runner.go:130] >       "pinned": false
	I0914 17:39:41.356145   45790 command_runner.go:130] >     },
	I0914 17:39:41.356152   45790 command_runner.go:130] >     {
	I0914 17:39:41.356164   45790 command_runner.go:130] >       "id": "175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1",
	I0914 17:39:41.356173   45790 command_runner.go:130] >       "repoTags": [
	I0914 17:39:41.356197   45790 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.31.1"
	I0914 17:39:41.356205   45790 command_runner.go:130] >       ],
	I0914 17:39:41.356213   45790 command_runner.go:130] >       "repoDigests": [
	I0914 17:39:41.356229   45790 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1",
	I0914 17:39:41.356244   45790 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:e6c5253433f9032cff2bd9b1f41e29b9691a6d6ec97903896c0ca5f069a63748"
	I0914 17:39:41.356255   45790 command_runner.go:130] >       ],
	I0914 17:39:41.356264   45790 command_runner.go:130] >       "size": "89437508",
	I0914 17:39:41.356272   45790 command_runner.go:130] >       "uid": {
	I0914 17:39:41.356279   45790 command_runner.go:130] >         "value": "0"
	I0914 17:39:41.356287   45790 command_runner.go:130] >       },
	I0914 17:39:41.356294   45790 command_runner.go:130] >       "username": "",
	I0914 17:39:41.356303   45790 command_runner.go:130] >       "spec": null,
	I0914 17:39:41.356310   45790 command_runner.go:130] >       "pinned": false
	I0914 17:39:41.356316   45790 command_runner.go:130] >     },
	I0914 17:39:41.356324   45790 command_runner.go:130] >     {
	I0914 17:39:41.356334   45790 command_runner.go:130] >       "id": "60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561",
	I0914 17:39:41.356343   45790 command_runner.go:130] >       "repoTags": [
	I0914 17:39:41.356352   45790 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.31.1"
	I0914 17:39:41.356366   45790 command_runner.go:130] >       ],
	I0914 17:39:41.356376   45790 command_runner.go:130] >       "repoDigests": [
	I0914 17:39:41.356405   45790 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44",
	I0914 17:39:41.356419   45790 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:bb26bcf4490a4653ecb77ceb883c0fd8dd876f104f776aa0a6cbf9df68b16af2"
	I0914 17:39:41.356425   45790 command_runner.go:130] >       ],
	I0914 17:39:41.356433   45790 command_runner.go:130] >       "size": "92733849",
	I0914 17:39:41.356443   45790 command_runner.go:130] >       "uid": null,
	I0914 17:39:41.356451   45790 command_runner.go:130] >       "username": "",
	I0914 17:39:41.356459   45790 command_runner.go:130] >       "spec": null,
	I0914 17:39:41.356469   45790 command_runner.go:130] >       "pinned": false
	I0914 17:39:41.356477   45790 command_runner.go:130] >     },
	I0914 17:39:41.356485   45790 command_runner.go:130] >     {
	I0914 17:39:41.356496   45790 command_runner.go:130] >       "id": "9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b",
	I0914 17:39:41.356504   45790 command_runner.go:130] >       "repoTags": [
	I0914 17:39:41.356514   45790 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.31.1"
	I0914 17:39:41.356523   45790 command_runner.go:130] >       ],
	I0914 17:39:41.356530   45790 command_runner.go:130] >       "repoDigests": [
	I0914 17:39:41.356545   45790 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0",
	I0914 17:39:41.356562   45790 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:cb9d9404dddf0c6728b99a42d10d8ab1ece2a1c793ef1d7b03eddaeac26864d8"
	I0914 17:39:41.356576   45790 command_runner.go:130] >       ],
	I0914 17:39:41.356583   45790 command_runner.go:130] >       "size": "68420934",
	I0914 17:39:41.356592   45790 command_runner.go:130] >       "uid": {
	I0914 17:39:41.356599   45790 command_runner.go:130] >         "value": "0"
	I0914 17:39:41.356606   45790 command_runner.go:130] >       },
	I0914 17:39:41.356614   45790 command_runner.go:130] >       "username": "",
	I0914 17:39:41.356622   45790 command_runner.go:130] >       "spec": null,
	I0914 17:39:41.356629   45790 command_runner.go:130] >       "pinned": false
	I0914 17:39:41.356635   45790 command_runner.go:130] >     },
	I0914 17:39:41.356643   45790 command_runner.go:130] >     {
	I0914 17:39:41.356657   45790 command_runner.go:130] >       "id": "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I0914 17:39:41.356666   45790 command_runner.go:130] >       "repoTags": [
	I0914 17:39:41.356674   45790 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I0914 17:39:41.356682   45790 command_runner.go:130] >       ],
	I0914 17:39:41.356698   45790 command_runner.go:130] >       "repoDigests": [
	I0914 17:39:41.356712   45790 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a",
	I0914 17:39:41.356730   45790 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I0914 17:39:41.356738   45790 command_runner.go:130] >       ],
	I0914 17:39:41.356745   45790 command_runner.go:130] >       "size": "742080",
	I0914 17:39:41.356753   45790 command_runner.go:130] >       "uid": {
	I0914 17:39:41.356761   45790 command_runner.go:130] >         "value": "65535"
	I0914 17:39:41.356769   45790 command_runner.go:130] >       },
	I0914 17:39:41.356777   45790 command_runner.go:130] >       "username": "",
	I0914 17:39:41.356786   45790 command_runner.go:130] >       "spec": null,
	I0914 17:39:41.356793   45790 command_runner.go:130] >       "pinned": true
	I0914 17:39:41.356801   45790 command_runner.go:130] >     }
	I0914 17:39:41.356807   45790 command_runner.go:130] >   ]
	I0914 17:39:41.356814   45790 command_runner.go:130] > }
	I0914 17:39:41.356953   45790 crio.go:514] all images are preloaded for cri-o runtime.
	I0914 17:39:41.356965   45790 cache_images.go:84] Images are preloaded, skipping loading
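	The JSON listing above is the CRI image inventory minikube reads back from the node (the same shape `crictl images -o json` produces) before concluding that the preload can be skipped. As a minimal, self-contained sketch (not minikube's actual cache_images.go code, and assuming the array above sits under a top-level "images" key), the relevant fields can be decoded like this in Go:

package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// imageList mirrors the fields visible in the log output above; the struct
// name and the input file name are illustrative assumptions, not minikube identifiers.
type imageList struct {
	Images []struct {
		ID          string   `json:"id"`
		RepoTags    []string `json:"repoTags"`
		RepoDigests []string `json:"repoDigests"`
		Size        string   `json:"size"` // reported as a quoted string, e.g. "149009664"
		Pinned      bool     `json:"pinned"`
	} `json:"images"`
}

func main() {
	data, err := os.ReadFile("images.json") // saved output of `crictl images -o json`
	if err != nil {
		panic(err)
	}
	var list imageList
	if err := json.Unmarshal(data, &list); err != nil {
		panic(err)
	}
	for _, img := range list.Images {
		if len(img.RepoTags) > 0 {
			fmt.Printf("%-55s size=%s pinned=%v\n", img.RepoTags[0], img.Size, img.Pinned)
		}
	}
}

	Verifying that every image required for the requested Kubernetes version appears in this list is what lets the log above report "all images are preloaded for cri-o runtime" and skip loading.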
	I0914 17:39:41.356973   45790 kubeadm.go:934] updating node { 192.168.39.202 8443 v1.31.1 crio true true} ...
	I0914 17:39:41.357103   45790 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-396884 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.202
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:multinode-396884 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
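	The kubelet unit fragment above (kubeadm.go:946) is the systemd override minikube generates for this node before regenerating the CRI-O configuration. As a rough illustration only, and not the template minikube actually uses, the same ExecStart line can be rendered from the node values shown in the log (v1.31.1, multinode-396884, 192.168.39.202) with text/template; the template text and struct fields below are assumptions:

package main

import (
	"os"
	"text/template"
)

// kubeletUnit is an illustrative override matching the fragment logged above.
const kubeletUnit = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

[Install]
`

func main() {
	tmpl := template.Must(template.New("kubelet").Parse(kubeletUnit))
	// Values taken from the log lines above; rendering prints the [Service]
	// override with the node-specific --hostname-override and --node-ip flags.
	_ = tmpl.Execute(os.Stdout, struct {
		KubernetesVersion, NodeName, NodeIP string
	}{"v1.31.1", "multinode-396884", "192.168.39.202"})
}
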
	I0914 17:39:41.357181   45790 ssh_runner.go:195] Run: crio config
	I0914 17:39:41.394664   45790 command_runner.go:130] ! time="2024-09-14 17:39:41.361792585Z" level=info msg="Starting CRI-O, version: 1.29.1, git: unknown(clean)"
	I0914 17:39:41.399863   45790 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0914 17:39:41.407712   45790 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0914 17:39:41.407735   45790 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0914 17:39:41.407744   45790 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0914 17:39:41.407749   45790 command_runner.go:130] > #
	I0914 17:39:41.407758   45790 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0914 17:39:41.407767   45790 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0914 17:39:41.407775   45790 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0914 17:39:41.407790   45790 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0914 17:39:41.407797   45790 command_runner.go:130] > # reload'.
	I0914 17:39:41.407810   45790 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0914 17:39:41.407821   45790 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0914 17:39:41.407833   45790 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0914 17:39:41.407846   45790 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0914 17:39:41.407856   45790 command_runner.go:130] > [crio]
	I0914 17:39:41.407867   45790 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0914 17:39:41.407876   45790 command_runner.go:130] > # containers images, in this directory.
	I0914 17:39:41.407884   45790 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0914 17:39:41.407898   45790 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0914 17:39:41.407907   45790 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0914 17:39:41.407920   45790 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I0914 17:39:41.407930   45790 command_runner.go:130] > # imagestore = ""
	I0914 17:39:41.407941   45790 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0914 17:39:41.407954   45790 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0914 17:39:41.407964   45790 command_runner.go:130] > storage_driver = "overlay"
	I0914 17:39:41.407974   45790 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0914 17:39:41.407986   45790 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0914 17:39:41.408001   45790 command_runner.go:130] > storage_option = [
	I0914 17:39:41.408012   45790 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0914 17:39:41.408018   45790 command_runner.go:130] > ]
	I0914 17:39:41.408029   45790 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0914 17:39:41.408042   45790 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0914 17:39:41.408052   45790 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0914 17:39:41.408064   45790 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0914 17:39:41.408076   45790 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0914 17:39:41.408084   45790 command_runner.go:130] > # always happen on a node reboot
	I0914 17:39:41.408095   45790 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0914 17:39:41.408116   45790 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0914 17:39:41.408128   45790 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0914 17:39:41.408140   45790 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0914 17:39:41.408151   45790 command_runner.go:130] > version_file_persist = "/var/lib/crio/version"
	I0914 17:39:41.408165   45790 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0914 17:39:41.408184   45790 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0914 17:39:41.408193   45790 command_runner.go:130] > # internal_wipe = true
	I0914 17:39:41.408207   45790 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I0914 17:39:41.408219   45790 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I0914 17:39:41.408229   45790 command_runner.go:130] > # internal_repair = false
	I0914 17:39:41.408240   45790 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0914 17:39:41.408253   45790 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0914 17:39:41.408264   45790 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0914 17:39:41.408275   45790 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0914 17:39:41.408289   45790 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0914 17:39:41.408297   45790 command_runner.go:130] > [crio.api]
	I0914 17:39:41.408307   45790 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0914 17:39:41.408316   45790 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0914 17:39:41.408323   45790 command_runner.go:130] > # IP address on which the stream server will listen.
	I0914 17:39:41.408329   45790 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0914 17:39:41.408339   45790 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0914 17:39:41.408349   45790 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0914 17:39:41.408358   45790 command_runner.go:130] > # stream_port = "0"
	I0914 17:39:41.408376   45790 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0914 17:39:41.408385   45790 command_runner.go:130] > # stream_enable_tls = false
	I0914 17:39:41.408395   45790 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0914 17:39:41.408405   45790 command_runner.go:130] > # stream_idle_timeout = ""
	I0914 17:39:41.408417   45790 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0914 17:39:41.408429   45790 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0914 17:39:41.408437   45790 command_runner.go:130] > # minutes.
	I0914 17:39:41.408446   45790 command_runner.go:130] > # stream_tls_cert = ""
	I0914 17:39:41.408458   45790 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0914 17:39:41.408470   45790 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0914 17:39:41.408480   45790 command_runner.go:130] > # stream_tls_key = ""
	I0914 17:39:41.408490   45790 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0914 17:39:41.408504   45790 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0914 17:39:41.408534   45790 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0914 17:39:41.408544   45790 command_runner.go:130] > # stream_tls_ca = ""
	I0914 17:39:41.408556   45790 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I0914 17:39:41.408575   45790 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0914 17:39:41.408590   45790 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I0914 17:39:41.408599   45790 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I0914 17:39:41.408609   45790 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0914 17:39:41.408622   45790 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0914 17:39:41.408631   45790 command_runner.go:130] > [crio.runtime]
	I0914 17:39:41.408643   45790 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0914 17:39:41.408655   45790 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0914 17:39:41.408665   45790 command_runner.go:130] > # "nofile=1024:2048"
	I0914 17:39:41.408683   45790 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0914 17:39:41.408693   45790 command_runner.go:130] > # default_ulimits = [
	I0914 17:39:41.408699   45790 command_runner.go:130] > # ]
	I0914 17:39:41.408712   45790 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0914 17:39:41.408721   45790 command_runner.go:130] > # no_pivot = false
	I0914 17:39:41.408738   45790 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0914 17:39:41.408752   45790 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0914 17:39:41.408762   45790 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0914 17:39:41.408780   45790 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0914 17:39:41.408790   45790 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0914 17:39:41.408802   45790 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0914 17:39:41.408813   45790 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0914 17:39:41.408821   45790 command_runner.go:130] > # Cgroup setting for conmon
	I0914 17:39:41.408835   45790 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0914 17:39:41.408844   45790 command_runner.go:130] > conmon_cgroup = "pod"
	I0914 17:39:41.408855   45790 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0914 17:39:41.408866   45790 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0914 17:39:41.408879   45790 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0914 17:39:41.408886   45790 command_runner.go:130] > conmon_env = [
	I0914 17:39:41.408899   45790 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0914 17:39:41.408906   45790 command_runner.go:130] > ]
	I0914 17:39:41.408915   45790 command_runner.go:130] > # Additional environment variables to set for all the
	I0914 17:39:41.408926   45790 command_runner.go:130] > # containers. These are overridden if set in the
	I0914 17:39:41.408937   45790 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0914 17:39:41.408946   45790 command_runner.go:130] > # default_env = [
	I0914 17:39:41.408952   45790 command_runner.go:130] > # ]
	I0914 17:39:41.408961   45790 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0914 17:39:41.408974   45790 command_runner.go:130] > # This option is deprecated, and be interpreted from whether SELinux is enabled on the host in the future.
	I0914 17:39:41.408983   45790 command_runner.go:130] > # selinux = false
	I0914 17:39:41.408994   45790 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0914 17:39:41.409007   45790 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0914 17:39:41.409019   45790 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0914 17:39:41.409028   45790 command_runner.go:130] > # seccomp_profile = ""
	I0914 17:39:41.409038   45790 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0914 17:39:41.409049   45790 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0914 17:39:41.409060   45790 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0914 17:39:41.409070   45790 command_runner.go:130] > # which might increase security.
	I0914 17:39:41.409079   45790 command_runner.go:130] > # This option is currently deprecated,
	I0914 17:39:41.409091   45790 command_runner.go:130] > # and will be replaced by the SeccompDefault FeatureGate in Kubernetes.
	I0914 17:39:41.409111   45790 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0914 17:39:41.409124   45790 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0914 17:39:41.409142   45790 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0914 17:39:41.409158   45790 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0914 17:39:41.409171   45790 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0914 17:39:41.409183   45790 command_runner.go:130] > # This option supports live configuration reload.
	I0914 17:39:41.409193   45790 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0914 17:39:41.409203   45790 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0914 17:39:41.409213   45790 command_runner.go:130] > # the cgroup blockio controller.
	I0914 17:39:41.409222   45790 command_runner.go:130] > # blockio_config_file = ""
	I0914 17:39:41.409233   45790 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I0914 17:39:41.409243   45790 command_runner.go:130] > # blockio parameters.
	I0914 17:39:41.409251   45790 command_runner.go:130] > # blockio_reload = false
	I0914 17:39:41.409264   45790 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0914 17:39:41.409274   45790 command_runner.go:130] > # irqbalance daemon.
	I0914 17:39:41.409284   45790 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0914 17:39:41.409296   45790 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I0914 17:39:41.409308   45790 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I0914 17:39:41.409323   45790 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I0914 17:39:41.409335   45790 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I0914 17:39:41.409349   45790 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0914 17:39:41.409360   45790 command_runner.go:130] > # This option supports live configuration reload.
	I0914 17:39:41.409368   45790 command_runner.go:130] > # rdt_config_file = ""
	I0914 17:39:41.409379   45790 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0914 17:39:41.409387   45790 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0914 17:39:41.409428   45790 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0914 17:39:41.409439   45790 command_runner.go:130] > # separate_pull_cgroup = ""
	I0914 17:39:41.409449   45790 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0914 17:39:41.409462   45790 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0914 17:39:41.409468   45790 command_runner.go:130] > # will be added.
	I0914 17:39:41.409479   45790 command_runner.go:130] > # default_capabilities = [
	I0914 17:39:41.409487   45790 command_runner.go:130] > # 	"CHOWN",
	I0914 17:39:41.409494   45790 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0914 17:39:41.409501   45790 command_runner.go:130] > # 	"FSETID",
	I0914 17:39:41.409510   45790 command_runner.go:130] > # 	"FOWNER",
	I0914 17:39:41.409528   45790 command_runner.go:130] > # 	"SETGID",
	I0914 17:39:41.409537   45790 command_runner.go:130] > # 	"SETUID",
	I0914 17:39:41.409543   45790 command_runner.go:130] > # 	"SETPCAP",
	I0914 17:39:41.409550   45790 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0914 17:39:41.409564   45790 command_runner.go:130] > # 	"KILL",
	I0914 17:39:41.409572   45790 command_runner.go:130] > # ]
	I0914 17:39:41.409586   45790 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0914 17:39:41.409598   45790 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0914 17:39:41.409612   45790 command_runner.go:130] > # add_inheritable_capabilities = false
	I0914 17:39:41.409626   45790 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0914 17:39:41.409638   45790 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0914 17:39:41.409648   45790 command_runner.go:130] > default_sysctls = [
	I0914 17:39:41.409657   45790 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I0914 17:39:41.409664   45790 command_runner.go:130] > ]
	I0914 17:39:41.409681   45790 command_runner.go:130] > # List of devices on the host that a
	I0914 17:39:41.409694   45790 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0914 17:39:41.409703   45790 command_runner.go:130] > # allowed_devices = [
	I0914 17:39:41.409710   45790 command_runner.go:130] > # 	"/dev/fuse",
	I0914 17:39:41.409718   45790 command_runner.go:130] > # ]
	I0914 17:39:41.409728   45790 command_runner.go:130] > # List of additional devices. specified as
	I0914 17:39:41.409742   45790 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0914 17:39:41.409754   45790 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0914 17:39:41.409767   45790 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0914 17:39:41.409777   45790 command_runner.go:130] > # additional_devices = [
	I0914 17:39:41.409784   45790 command_runner.go:130] > # ]
	I0914 17:39:41.409794   45790 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0914 17:39:41.409804   45790 command_runner.go:130] > # cdi_spec_dirs = [
	I0914 17:39:41.409812   45790 command_runner.go:130] > # 	"/etc/cdi",
	I0914 17:39:41.409821   45790 command_runner.go:130] > # 	"/var/run/cdi",
	I0914 17:39:41.409827   45790 command_runner.go:130] > # ]
	I0914 17:39:41.409840   45790 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0914 17:39:41.409852   45790 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0914 17:39:41.409862   45790 command_runner.go:130] > # Defaults to false.
	I0914 17:39:41.409877   45790 command_runner.go:130] > # device_ownership_from_security_context = false
	I0914 17:39:41.409891   45790 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0914 17:39:41.409903   45790 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0914 17:39:41.409911   45790 command_runner.go:130] > # hooks_dir = [
	I0914 17:39:41.409920   45790 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0914 17:39:41.409929   45790 command_runner.go:130] > # ]
	I0914 17:39:41.409939   45790 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0914 17:39:41.409952   45790 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0914 17:39:41.409961   45790 command_runner.go:130] > # its default mounts from the following two files:
	I0914 17:39:41.409969   45790 command_runner.go:130] > #
	I0914 17:39:41.409980   45790 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0914 17:39:41.409993   45790 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0914 17:39:41.410005   45790 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0914 17:39:41.410010   45790 command_runner.go:130] > #
	I0914 17:39:41.410020   45790 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0914 17:39:41.410034   45790 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0914 17:39:41.410048   45790 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0914 17:39:41.410058   45790 command_runner.go:130] > #      only add mounts it finds in this file.
	I0914 17:39:41.410065   45790 command_runner.go:130] > #
	I0914 17:39:41.410074   45790 command_runner.go:130] > # default_mounts_file = ""
	I0914 17:39:41.410086   45790 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0914 17:39:41.410100   45790 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0914 17:39:41.410109   45790 command_runner.go:130] > pids_limit = 1024
	I0914 17:39:41.410120   45790 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I0914 17:39:41.410132   45790 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0914 17:39:41.410146   45790 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0914 17:39:41.410171   45790 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0914 17:39:41.410181   45790 command_runner.go:130] > # log_size_max = -1
	I0914 17:39:41.410193   45790 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0914 17:39:41.410203   45790 command_runner.go:130] > # log_to_journald = false
	I0914 17:39:41.410215   45790 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0914 17:39:41.410226   45790 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0914 17:39:41.410235   45790 command_runner.go:130] > # Path to directory for container attach sockets.
	I0914 17:39:41.410252   45790 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0914 17:39:41.410264   45790 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0914 17:39:41.410274   45790 command_runner.go:130] > # bind_mount_prefix = ""
	I0914 17:39:41.410286   45790 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0914 17:39:41.410296   45790 command_runner.go:130] > # read_only = false
	I0914 17:39:41.410308   45790 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0914 17:39:41.410319   45790 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0914 17:39:41.410326   45790 command_runner.go:130] > # live configuration reload.
	I0914 17:39:41.410335   45790 command_runner.go:130] > # log_level = "info"
	I0914 17:39:41.410345   45790 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0914 17:39:41.410356   45790 command_runner.go:130] > # This option supports live configuration reload.
	I0914 17:39:41.410365   45790 command_runner.go:130] > # log_filter = ""
	I0914 17:39:41.410377   45790 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0914 17:39:41.410392   45790 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0914 17:39:41.410401   45790 command_runner.go:130] > # separated by comma.
	I0914 17:39:41.410415   45790 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0914 17:39:41.410424   45790 command_runner.go:130] > # uid_mappings = ""
	I0914 17:39:41.410433   45790 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0914 17:39:41.410454   45790 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0914 17:39:41.410464   45790 command_runner.go:130] > # separated by comma.
	I0914 17:39:41.410477   45790 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0914 17:39:41.410489   45790 command_runner.go:130] > # gid_mappings = ""
	I0914 17:39:41.410502   45790 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0914 17:39:41.410514   45790 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0914 17:39:41.410524   45790 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0914 17:39:41.410540   45790 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0914 17:39:41.410549   45790 command_runner.go:130] > # minimum_mappable_uid = -1
	I0914 17:39:41.410563   45790 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0914 17:39:41.410576   45790 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0914 17:39:41.410592   45790 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0914 17:39:41.410607   45790 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0914 17:39:41.410614   45790 command_runner.go:130] > # minimum_mappable_gid = -1
	I0914 17:39:41.410628   45790 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0914 17:39:41.410647   45790 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0914 17:39:41.410659   45790 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0914 17:39:41.410668   45790 command_runner.go:130] > # ctr_stop_timeout = 30
	I0914 17:39:41.410678   45790 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0914 17:39:41.410690   45790 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0914 17:39:41.410699   45790 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0914 17:39:41.410710   45790 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0914 17:39:41.410718   45790 command_runner.go:130] > drop_infra_ctr = false
	I0914 17:39:41.410729   45790 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0914 17:39:41.410740   45790 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0914 17:39:41.410755   45790 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0914 17:39:41.410765   45790 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0914 17:39:41.410778   45790 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I0914 17:39:41.410788   45790 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I0914 17:39:41.410801   45790 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I0914 17:39:41.410812   45790 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I0914 17:39:41.410822   45790 command_runner.go:130] > # shared_cpuset = ""
	I0914 17:39:41.410835   45790 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0914 17:39:41.410846   45790 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0914 17:39:41.410857   45790 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0914 17:39:41.410872   45790 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0914 17:39:41.410882   45790 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0914 17:39:41.410893   45790 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I0914 17:39:41.410909   45790 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I0914 17:39:41.410919   45790 command_runner.go:130] > # enable_criu_support = false
	I0914 17:39:41.410930   45790 command_runner.go:130] > # Enable/disable the generation of the container,
	I0914 17:39:41.410943   45790 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I0914 17:39:41.410953   45790 command_runner.go:130] > # enable_pod_events = false
	I0914 17:39:41.410966   45790 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0914 17:39:41.410980   45790 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0914 17:39:41.410991   45790 command_runner.go:130] > # The name is matched against the runtimes map below.
	I0914 17:39:41.411000   45790 command_runner.go:130] > # default_runtime = "runc"
	I0914 17:39:41.411010   45790 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0914 17:39:41.411035   45790 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I0914 17:39:41.411053   45790 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0914 17:39:41.411064   45790 command_runner.go:130] > # creation as a file is not desired either.
	I0914 17:39:41.411080   45790 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0914 17:39:41.411091   45790 command_runner.go:130] > # the hostname is being managed dynamically.
	I0914 17:39:41.411100   45790 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0914 17:39:41.411108   45790 command_runner.go:130] > # ]
	I0914 17:39:41.411118   45790 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0914 17:39:41.411131   45790 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0914 17:39:41.411144   45790 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I0914 17:39:41.411162   45790 command_runner.go:130] > # Each entry in the table should follow the format:
	I0914 17:39:41.411171   45790 command_runner.go:130] > #
	I0914 17:39:41.411181   45790 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I0914 17:39:41.411192   45790 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I0914 17:39:41.411249   45790 command_runner.go:130] > # runtime_type = "oci"
	I0914 17:39:41.411260   45790 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I0914 17:39:41.411269   45790 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I0914 17:39:41.411280   45790 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I0914 17:39:41.411287   45790 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I0914 17:39:41.411294   45790 command_runner.go:130] > # monitor_env = []
	I0914 17:39:41.411305   45790 command_runner.go:130] > # privileged_without_host_devices = false
	I0914 17:39:41.411312   45790 command_runner.go:130] > # allowed_annotations = []
	I0914 17:39:41.411321   45790 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I0914 17:39:41.411329   45790 command_runner.go:130] > # Where:
	I0914 17:39:41.411339   45790 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I0914 17:39:41.411352   45790 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I0914 17:39:41.411364   45790 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0914 17:39:41.411380   45790 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0914 17:39:41.411389   45790 command_runner.go:130] > #   in $PATH.
	I0914 17:39:41.411401   45790 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I0914 17:39:41.411412   45790 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0914 17:39:41.411425   45790 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I0914 17:39:41.411434   45790 command_runner.go:130] > #   state.
	I0914 17:39:41.411451   45790 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0914 17:39:41.411463   45790 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I0914 17:39:41.411473   45790 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0914 17:39:41.411485   45790 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0914 17:39:41.411498   45790 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0914 17:39:41.411511   45790 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0914 17:39:41.411522   45790 command_runner.go:130] > #   The currently recognized values are:
	I0914 17:39:41.411533   45790 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0914 17:39:41.411548   45790 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0914 17:39:41.411565   45790 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0914 17:39:41.411578   45790 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0914 17:39:41.411593   45790 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0914 17:39:41.411607   45790 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0914 17:39:41.411621   45790 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I0914 17:39:41.411635   45790 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I0914 17:39:41.411646   45790 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0914 17:39:41.411660   45790 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I0914 17:39:41.411671   45790 command_runner.go:130] > #   deprecated option "conmon".
	I0914 17:39:41.411683   45790 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I0914 17:39:41.411695   45790 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I0914 17:39:41.411709   45790 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I0914 17:39:41.411718   45790 command_runner.go:130] > #   should be moved to the container's cgroup
	I0914 17:39:41.411730   45790 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I0914 17:39:41.411741   45790 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I0914 17:39:41.411755   45790 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I0914 17:39:41.411766   45790 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I0914 17:39:41.411774   45790 command_runner.go:130] > #
	I0914 17:39:41.411782   45790 command_runner.go:130] > # Using the seccomp notifier feature:
	I0914 17:39:41.411794   45790 command_runner.go:130] > #
	I0914 17:39:41.411806   45790 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I0914 17:39:41.411818   45790 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I0914 17:39:41.411825   45790 command_runner.go:130] > #
	I0914 17:39:41.411836   45790 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I0914 17:39:41.411856   45790 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I0914 17:39:41.411864   45790 command_runner.go:130] > #
	I0914 17:39:41.411874   45790 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I0914 17:39:41.411882   45790 command_runner.go:130] > # feature.
	I0914 17:39:41.411888   45790 command_runner.go:130] > #
	I0914 17:39:41.411900   45790 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I0914 17:39:41.411913   45790 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I0914 17:39:41.411927   45790 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I0914 17:39:41.411940   45790 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I0914 17:39:41.411953   45790 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I0914 17:39:41.411961   45790 command_runner.go:130] > #
	I0914 17:39:41.411971   45790 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I0914 17:39:41.411984   45790 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I0914 17:39:41.411991   45790 command_runner.go:130] > #
	I0914 17:39:41.412002   45790 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I0914 17:39:41.412014   45790 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I0914 17:39:41.412021   45790 command_runner.go:130] > #
	I0914 17:39:41.412032   45790 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I0914 17:39:41.412044   45790 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I0914 17:39:41.412053   45790 command_runner.go:130] > # limitation.
	I0914 17:39:41.412062   45790 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0914 17:39:41.412073   45790 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0914 17:39:41.412082   45790 command_runner.go:130] > runtime_type = "oci"
	I0914 17:39:41.412090   45790 command_runner.go:130] > runtime_root = "/run/runc"
	I0914 17:39:41.412099   45790 command_runner.go:130] > runtime_config_path = ""
	I0914 17:39:41.412109   45790 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I0914 17:39:41.412119   45790 command_runner.go:130] > monitor_cgroup = "pod"
	I0914 17:39:41.412129   45790 command_runner.go:130] > monitor_exec_cgroup = ""
	I0914 17:39:41.412136   45790 command_runner.go:130] > monitor_env = [
	I0914 17:39:41.412149   45790 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0914 17:39:41.412157   45790 command_runner.go:130] > ]
	I0914 17:39:41.412166   45790 command_runner.go:130] > privileged_without_host_devices = false
	I0914 17:39:41.412179   45790 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0914 17:39:41.412197   45790 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0914 17:39:41.412211   45790 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0914 17:39:41.412226   45790 command_runner.go:130] > # Each workload, has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I0914 17:39:41.412244   45790 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0914 17:39:41.412256   45790 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0914 17:39:41.412274   45790 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0914 17:39:41.412289   45790 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0914 17:39:41.412302   45790 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0914 17:39:41.412315   45790 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0914 17:39:41.412322   45790 command_runner.go:130] > # Example:
	I0914 17:39:41.412333   45790 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0914 17:39:41.412342   45790 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0914 17:39:41.412353   45790 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0914 17:39:41.412363   45790 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0914 17:39:41.412370   45790 command_runner.go:130] > # cpuset = 0
	I0914 17:39:41.412378   45790 command_runner.go:130] > # cpushares = "0-1"
	I0914 17:39:41.412386   45790 command_runner.go:130] > # Where:
	I0914 17:39:41.412394   45790 command_runner.go:130] > # The workload name is workload-type.
	I0914 17:39:41.412408   45790 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0914 17:39:41.412418   45790 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0914 17:39:41.412430   45790 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0914 17:39:41.412446   45790 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0914 17:39:41.412459   45790 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0914 17:39:41.412470   45790 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I0914 17:39:41.412484   45790 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I0914 17:39:41.412493   45790 command_runner.go:130] > # Default value is set to true
	I0914 17:39:41.412501   45790 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I0914 17:39:41.412513   45790 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I0914 17:39:41.412524   45790 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I0914 17:39:41.412535   45790 command_runner.go:130] > # Default value is set to 'false'
	I0914 17:39:41.412546   45790 command_runner.go:130] > # disable_hostport_mapping = false
	I0914 17:39:41.412563   45790 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0914 17:39:41.412571   45790 command_runner.go:130] > #
	I0914 17:39:41.412585   45790 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0914 17:39:41.412593   45790 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0914 17:39:41.412608   45790 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0914 17:39:41.412616   45790 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0914 17:39:41.412628   45790 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0914 17:39:41.412635   45790 command_runner.go:130] > [crio.image]
	I0914 17:39:41.412649   45790 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0914 17:39:41.412657   45790 command_runner.go:130] > # default_transport = "docker://"
	I0914 17:39:41.412666   45790 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0914 17:39:41.412676   45790 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0914 17:39:41.412683   45790 command_runner.go:130] > # global_auth_file = ""
	I0914 17:39:41.412690   45790 command_runner.go:130] > # The image used to instantiate infra containers.
	I0914 17:39:41.412699   45790 command_runner.go:130] > # This option supports live configuration reload.
	I0914 17:39:41.412706   45790 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.10"
	I0914 17:39:41.412718   45790 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0914 17:39:41.412727   45790 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0914 17:39:41.412736   45790 command_runner.go:130] > # This option supports live configuration reload.
	I0914 17:39:41.412742   45790 command_runner.go:130] > # pause_image_auth_file = ""
	I0914 17:39:41.412751   45790 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0914 17:39:41.412760   45790 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I0914 17:39:41.412770   45790 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I0914 17:39:41.412779   45790 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0914 17:39:41.412786   45790 command_runner.go:130] > # pause_command = "/pause"
	I0914 17:39:41.412795   45790 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I0914 17:39:41.412805   45790 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I0914 17:39:41.412814   45790 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I0914 17:39:41.412832   45790 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I0914 17:39:41.412845   45790 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I0914 17:39:41.412858   45790 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I0914 17:39:41.412868   45790 command_runner.go:130] > # pinned_images = [
	I0914 17:39:41.412876   45790 command_runner.go:130] > # ]
	I0914 17:39:41.412887   45790 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0914 17:39:41.412898   45790 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0914 17:39:41.412919   45790 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0914 17:39:41.412932   45790 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0914 17:39:41.412944   45790 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0914 17:39:41.412953   45790 command_runner.go:130] > # signature_policy = ""
	I0914 17:39:41.412965   45790 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I0914 17:39:41.412977   45790 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I0914 17:39:41.412990   45790 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I0914 17:39:41.413006   45790 command_runner.go:130] > # or the concatenated path is non existent, then the signature_policy or system
	I0914 17:39:41.413018   45790 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I0914 17:39:41.413030   45790 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I0914 17:39:41.413042   45790 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0914 17:39:41.413055   45790 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0914 17:39:41.413062   45790 command_runner.go:130] > # changing them here.
	I0914 17:39:41.413071   45790 command_runner.go:130] > # insecure_registries = [
	I0914 17:39:41.413077   45790 command_runner.go:130] > # ]
	I0914 17:39:41.413088   45790 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0914 17:39:41.413099   45790 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0914 17:39:41.413109   45790 command_runner.go:130] > # image_volumes = "mkdir"
	I0914 17:39:41.413118   45790 command_runner.go:130] > # Temporary directory to use for storing big files
	I0914 17:39:41.413129   45790 command_runner.go:130] > # big_files_temporary_dir = ""
	I0914 17:39:41.413141   45790 command_runner.go:130] > # The crio.network table containers settings pertaining to the management of
	I0914 17:39:41.413148   45790 command_runner.go:130] > # CNI plugins.
	I0914 17:39:41.413158   45790 command_runner.go:130] > [crio.network]
	I0914 17:39:41.413169   45790 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0914 17:39:41.413181   45790 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0914 17:39:41.413192   45790 command_runner.go:130] > # cni_default_network = ""
	I0914 17:39:41.413205   45790 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0914 17:39:41.413215   45790 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0914 17:39:41.413226   45790 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0914 17:39:41.413234   45790 command_runner.go:130] > # plugin_dirs = [
	I0914 17:39:41.413242   45790 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0914 17:39:41.413248   45790 command_runner.go:130] > # ]
	I0914 17:39:41.413260   45790 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0914 17:39:41.413278   45790 command_runner.go:130] > [crio.metrics]
	I0914 17:39:41.413289   45790 command_runner.go:130] > # Globally enable or disable metrics support.
	I0914 17:39:41.413297   45790 command_runner.go:130] > enable_metrics = true
	I0914 17:39:41.413306   45790 command_runner.go:130] > # Specify enabled metrics collectors.
	I0914 17:39:41.413316   45790 command_runner.go:130] > # Per default all metrics are enabled.
	I0914 17:39:41.413326   45790 command_runner.go:130] > # It is possible, to prefix the metrics with "container_runtime_" and "crio_".
	I0914 17:39:41.413345   45790 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0914 17:39:41.413358   45790 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0914 17:39:41.413367   45790 command_runner.go:130] > # metrics_collectors = [
	I0914 17:39:41.413374   45790 command_runner.go:130] > # 	"operations",
	I0914 17:39:41.413384   45790 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0914 17:39:41.413392   45790 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0914 17:39:41.413402   45790 command_runner.go:130] > # 	"operations_errors",
	I0914 17:39:41.413412   45790 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0914 17:39:41.413421   45790 command_runner.go:130] > # 	"image_pulls_by_name",
	I0914 17:39:41.413429   45790 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0914 17:39:41.413441   45790 command_runner.go:130] > # 	"image_pulls_failures",
	I0914 17:39:41.413450   45790 command_runner.go:130] > # 	"image_pulls_successes",
	I0914 17:39:41.413458   45790 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0914 17:39:41.413466   45790 command_runner.go:130] > # 	"image_layer_reuse",
	I0914 17:39:41.413474   45790 command_runner.go:130] > # 	"containers_events_dropped_total",
	I0914 17:39:41.413484   45790 command_runner.go:130] > # 	"containers_oom_total",
	I0914 17:39:41.413492   45790 command_runner.go:130] > # 	"containers_oom",
	I0914 17:39:41.413501   45790 command_runner.go:130] > # 	"processes_defunct",
	I0914 17:39:41.413520   45790 command_runner.go:130] > # 	"operations_total",
	I0914 17:39:41.413531   45790 command_runner.go:130] > # 	"operations_latency_seconds",
	I0914 17:39:41.413541   45790 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0914 17:39:41.413549   45790 command_runner.go:130] > # 	"operations_errors_total",
	I0914 17:39:41.413563   45790 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0914 17:39:41.413573   45790 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0914 17:39:41.413581   45790 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0914 17:39:41.413591   45790 command_runner.go:130] > # 	"image_pulls_success_total",
	I0914 17:39:41.413601   45790 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0914 17:39:41.413616   45790 command_runner.go:130] > # 	"containers_oom_count_total",
	I0914 17:39:41.413627   45790 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I0914 17:39:41.413638   45790 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I0914 17:39:41.413644   45790 command_runner.go:130] > # ]
	I0914 17:39:41.413652   45790 command_runner.go:130] > # The port on which the metrics server will listen.
	I0914 17:39:41.413663   45790 command_runner.go:130] > # metrics_port = 9090
	I0914 17:39:41.413674   45790 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0914 17:39:41.413681   45790 command_runner.go:130] > # metrics_socket = ""
	I0914 17:39:41.413692   45790 command_runner.go:130] > # The certificate for the secure metrics server.
	I0914 17:39:41.413704   45790 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0914 17:39:41.413717   45790 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0914 17:39:41.413728   45790 command_runner.go:130] > # certificate on any modification event.
	I0914 17:39:41.413735   45790 command_runner.go:130] > # metrics_cert = ""
	I0914 17:39:41.413745   45790 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0914 17:39:41.413756   45790 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0914 17:39:41.413765   45790 command_runner.go:130] > # metrics_key = ""
	I0914 17:39:41.413777   45790 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0914 17:39:41.413785   45790 command_runner.go:130] > [crio.tracing]
	I0914 17:39:41.413794   45790 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0914 17:39:41.413803   45790 command_runner.go:130] > # enable_tracing = false
	I0914 17:39:41.413813   45790 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I0914 17:39:41.413824   45790 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0914 17:39:41.413837   45790 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I0914 17:39:41.413848   45790 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0914 17:39:41.413858   45790 command_runner.go:130] > # CRI-O NRI configuration.
	I0914 17:39:41.413865   45790 command_runner.go:130] > [crio.nri]
	I0914 17:39:41.413875   45790 command_runner.go:130] > # Globally enable or disable NRI.
	I0914 17:39:41.413883   45790 command_runner.go:130] > # enable_nri = false
	I0914 17:39:41.413895   45790 command_runner.go:130] > # NRI socket to listen on.
	I0914 17:39:41.413905   45790 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I0914 17:39:41.413915   45790 command_runner.go:130] > # NRI plugin directory to use.
	I0914 17:39:41.413924   45790 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I0914 17:39:41.413935   45790 command_runner.go:130] > # NRI plugin configuration directory to use.
	I0914 17:39:41.413954   45790 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I0914 17:39:41.413967   45790 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I0914 17:39:41.413977   45790 command_runner.go:130] > # nri_disable_connections = false
	I0914 17:39:41.413984   45790 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I0914 17:39:41.413996   45790 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I0914 17:39:41.414005   45790 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I0914 17:39:41.414015   45790 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I0914 17:39:41.414027   45790 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0914 17:39:41.414035   45790 command_runner.go:130] > [crio.stats]
	I0914 17:39:41.414045   45790 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0914 17:39:41.414056   45790 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0914 17:39:41.414066   45790 command_runner.go:130] > # stats_collection_period = 0
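The block above is CRI-O's own commented configuration as echoed back by the provisioner; across the crio.image, crio.network, crio.metrics, crio.tracing, crio.nri and crio.stats tables, only pause_image and enable_metrics deviate from the commented-out defaults. As an aside, a minimal sketch of reading those two keys back out of /etc/crio/crio.conf from Go, assuming the third-party github.com/BurntSushi/toml decoder (the path and struct layout are illustrative, not minikube's own code):

package main

import (
	"fmt"
	"log"

	"github.com/BurntSushi/toml" // assumed decoder, not part of minikube
)

// crioConf mirrors only the keys discussed above.
type crioConf struct {
	Crio struct {
		Image struct {
			PauseImage string `toml:"pause_image"` // e.g. "registry.k8s.io/pause:3.10"
		} `toml:"image"`
		Metrics struct {
			EnableMetrics bool `toml:"enable_metrics"`
			MetricsPort   int  `toml:"metrics_port"` // 0 when left at the commented default
		} `toml:"metrics"`
	} `toml:"crio"`
}

func main() {
	var c crioConf
	if _, err := toml.DecodeFile("/etc/crio/crio.conf", &c); err != nil {
		log.Fatal(err)
	}
	fmt.Println("pause image:", c.Crio.Image.PauseImage)
	fmt.Println("metrics on :", c.Crio.Metrics.EnableMetrics)
}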
	I0914 17:39:41.414202   45790 cni.go:84] Creating CNI manager for ""
	I0914 17:39:41.414220   45790 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0914 17:39:41.414236   45790 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0914 17:39:41.414256   45790 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.202 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-396884 NodeName:multinode-396884 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.202"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.202 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:
/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0914 17:39:41.414408   45790 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.202
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-396884"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.202
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.202"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
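	The generated kubeadm config above is one multi-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration separated by ---), later written to /var/tmp/minikube/kubeadm.yaml.new. A hedged sketch of walking such a stream and printing each document's kind, assuming the gopkg.in/yaml.v3 decoder (illustrative only; this is not how minikube itself consumes the file):

package main

import (
	"errors"
	"fmt"
	"io"
	"log"
	"os"

	"gopkg.in/yaml.v3" // assumed YAML decoder
)

func main() {
	f, err := os.Open("/var/tmp/minikube/kubeadm.yaml.new") // path taken from the log below
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	dec := yaml.NewDecoder(f)
	for {
		var doc map[string]interface{}
		if err := dec.Decode(&doc); err != nil {
			if errors.Is(err, io.EOF) {
				break // end of the multi-document stream
			}
			log.Fatal(err)
		}
		fmt.Printf("%v / %v\n", doc["apiVersion"], doc["kind"])
	}
}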
	
	I0914 17:39:41.414475   45790 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0914 17:39:41.424527   45790 command_runner.go:130] > kubeadm
	I0914 17:39:41.424547   45790 command_runner.go:130] > kubectl
	I0914 17:39:41.424555   45790 command_runner.go:130] > kubelet
	I0914 17:39:41.424598   45790 binaries.go:44] Found k8s binaries, skipping transfer
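Above, `ls /var/lib/minikube/binaries/v1.31.1` returns kubeadm, kubectl and kubelet, so the binary transfer is skipped. A small sketch of the same presence check in Go (directory and binary names copied from the log; the loop itself is illustrative):

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	dir := "/var/lib/minikube/binaries/v1.31.1"
	missing := 0
	for _, bin := range []string{"kubeadm", "kubectl", "kubelet"} {
		if _, err := os.Stat(filepath.Join(dir, bin)); err != nil {
			fmt.Println("missing:", bin, err)
			missing++
		}
	}
	if missing == 0 {
		fmt.Println("found k8s binaries, transfer can be skipped")
	}
}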
	I0914 17:39:41.424647   45790 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0914 17:39:41.433668   45790 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0914 17:39:41.450425   45790 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0914 17:39:41.466569   45790 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2160 bytes)
	I0914 17:39:41.483294   45790 ssh_runner.go:195] Run: grep 192.168.39.202	control-plane.minikube.internal$ /etc/hosts
	I0914 17:39:41.487170   45790 command_runner.go:130] > 192.168.39.202	control-plane.minikube.internal
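The grep above confirms that /etc/hosts already maps control-plane.minikube.internal to 192.168.39.202, so no edit is required before kubelet is restarted. A minimal Go equivalent of that check (illustrative, not minikube's implementation):

package main

import (
	"bufio"
	"fmt"
	"log"
	"os"
	"strings"
)

func main() {
	const host = "control-plane.minikube.internal"
	f, err := os.Open("/etc/hosts")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	sc := bufio.NewScanner(f)
	for sc.Scan() {
		// a matching entry looks like: 192.168.39.202	control-plane.minikube.internal
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[1] == host {
			fmt.Println("found:", sc.Text())
			return
		}
	}
	fmt.Println("no entry for", host, "- it would have to be appended")
}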
	I0914 17:39:41.487281   45790 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 17:39:41.631325   45790 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0914 17:39:41.645740   45790 certs.go:68] Setting up /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/multinode-396884 for IP: 192.168.39.202
	I0914 17:39:41.645759   45790 certs.go:194] generating shared ca certs ...
	I0914 17:39:41.645778   45790 certs.go:226] acquiring lock for ca certs: {Name:mkb663a3180967f5f94f0c355b2cd55067394331 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 17:39:41.645931   45790 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19643-8806/.minikube/ca.key
	I0914 17:39:41.645997   45790 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19643-8806/.minikube/proxy-client-ca.key
	I0914 17:39:41.646016   45790 certs.go:256] generating profile certs ...
	I0914 17:39:41.646115   45790 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/multinode-396884/client.key
	I0914 17:39:41.646199   45790 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/multinode-396884/apiserver.key.347dc4ff
	I0914 17:39:41.646259   45790 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/multinode-396884/proxy-client.key
	I0914 17:39:41.646273   45790 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19643-8806/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0914 17:39:41.646294   45790 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19643-8806/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0914 17:39:41.646333   45790 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19643-8806/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0914 17:39:41.646352   45790 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19643-8806/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0914 17:39:41.646367   45790 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/multinode-396884/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0914 17:39:41.646394   45790 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/multinode-396884/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0914 17:39:41.646413   45790 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/multinode-396884/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0914 17:39:41.646429   45790 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/multinode-396884/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0914 17:39:41.646497   45790 certs.go:484] found cert: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/16016.pem (1338 bytes)
	W0914 17:39:41.646536   45790 certs.go:480] ignoring /home/jenkins/minikube-integration/19643-8806/.minikube/certs/16016_empty.pem, impossibly tiny 0 bytes
	I0914 17:39:41.646549   45790 certs.go:484] found cert: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca-key.pem (1679 bytes)
	I0914 17:39:41.646594   45790 certs.go:484] found cert: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca.pem (1082 bytes)
	I0914 17:39:41.646627   45790 certs.go:484] found cert: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/cert.pem (1123 bytes)
	I0914 17:39:41.646662   45790 certs.go:484] found cert: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/key.pem (1675 bytes)
	I0914 17:39:41.646716   45790 certs.go:484] found cert: /home/jenkins/minikube-integration/19643-8806/.minikube/files/etc/ssl/certs/160162.pem (1708 bytes)
	I0914 17:39:41.646761   45790 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19643-8806/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0914 17:39:41.646780   45790 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/16016.pem -> /usr/share/ca-certificates/16016.pem
	I0914 17:39:41.646803   45790 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19643-8806/.minikube/files/etc/ssl/certs/160162.pem -> /usr/share/ca-certificates/160162.pem
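While collecting host certificates above, the 0-byte 16016_empty.pem is ignored with a warning, and the usable files are mapped to their destinations under /var/lib/minikube/certs and /usr/share/ca-certificates. A hedged sketch of that guard, filtering out files that are empty or do not decode as PEM (the helper name and error handling are mine):

package main

import (
	"encoding/pem"
	"fmt"
	"os"
)

// usablePEM reports whether path holds at least one decodable PEM block.
func usablePEM(path string) bool {
	data, err := os.ReadFile(path)
	if err != nil || len(data) == 0 {
		return false // unreadable or impossibly tiny, ignore it
	}
	block, _ := pem.Decode(data)
	return block != nil
}

func main() {
	for _, p := range []string{
		"/home/jenkins/minikube-integration/19643-8806/.minikube/certs/16016.pem",
		"/home/jenkins/minikube-integration/19643-8806/.minikube/certs/16016_empty.pem",
	} {
		fmt.Println(p, "usable:", usablePEM(p))
	}
}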
	I0914 17:39:41.648082   45790 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0914 17:39:41.673629   45790 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0914 17:39:41.696244   45790 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0914 17:39:41.719185   45790 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0914 17:39:41.741665   45790 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/multinode-396884/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0914 17:39:41.764322   45790 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/multinode-396884/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0914 17:39:41.786944   45790 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/multinode-396884/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0914 17:39:41.810016   45790 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/multinode-396884/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0914 17:39:41.833918   45790 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0914 17:39:41.856513   45790 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/certs/16016.pem --> /usr/share/ca-certificates/16016.pem (1338 bytes)
	I0914 17:39:41.879583   45790 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/files/etc/ssl/certs/160162.pem --> /usr/share/ca-certificates/160162.pem (1708 bytes)
	I0914 17:39:41.903897   45790 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0914 17:39:41.920358   45790 ssh_runner.go:195] Run: openssl version
	I0914 17:39:41.926145   45790 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0914 17:39:41.926266   45790 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/160162.pem && ln -fs /usr/share/ca-certificates/160162.pem /etc/ssl/certs/160162.pem"
	I0914 17:39:41.936553   45790 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/160162.pem
	I0914 17:39:41.940863   45790 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Sep 14 17:01 /usr/share/ca-certificates/160162.pem
	I0914 17:39:41.941034   45790 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 14 17:01 /usr/share/ca-certificates/160162.pem
	I0914 17:39:41.941090   45790 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/160162.pem
	I0914 17:39:41.946396   45790 command_runner.go:130] > 3ec20f2e
	I0914 17:39:41.946563   45790 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/160162.pem /etc/ssl/certs/3ec20f2e.0"
	I0914 17:39:41.955883   45790 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0914 17:39:41.967080   45790 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0914 17:39:41.972110   45790 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Sep 14 16:44 /usr/share/ca-certificates/minikubeCA.pem
	I0914 17:39:41.972204   45790 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 14 16:44 /usr/share/ca-certificates/minikubeCA.pem
	I0914 17:39:41.972254   45790 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0914 17:39:41.977901   45790 command_runner.go:130] > b5213941
	I0914 17:39:41.977987   45790 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0914 17:39:41.988167   45790 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16016.pem && ln -fs /usr/share/ca-certificates/16016.pem /etc/ssl/certs/16016.pem"
	I0914 17:39:41.999538   45790 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16016.pem
	I0914 17:39:42.004108   45790 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Sep 14 17:01 /usr/share/ca-certificates/16016.pem
	I0914 17:39:42.004345   45790 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 14 17:01 /usr/share/ca-certificates/16016.pem
	I0914 17:39:42.004406   45790 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16016.pem
	I0914 17:39:42.009803   45790 command_runner.go:130] > 51391683
	I0914 17:39:42.009979   45790 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16016.pem /etc/ssl/certs/51391683.0"
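Each CA bundle copied into /usr/share/ca-certificates is then exposed to OpenSSL by symlinking <subject-hash>.0 in /etc/ssl/certs, which is exactly what the `openssl x509 -hash -noout` runs and `ln -fs` commands above do (e.g. 160162.pem -> 3ec20f2e.0). A sketch of the same two steps from Go, shelling out to openssl for the hash (paths come from the log; the helper itself is illustrative, and in the report the commands run over SSH rather than locally):

package main

import (
	"fmt"
	"log"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkCACert computes OpenSSL's subject hash for pemPath and creates
// /etc/ssl/certs/<hash>.0 pointing at it, mirroring `ln -fs`.
func linkCACert(pemPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out)) // e.g. "3ec20f2e"
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	_ = os.Remove(link) // -f behaviour: replace an existing link if present
	return os.Symlink(pemPath, link)
}

func main() {
	if err := linkCACert("/usr/share/ca-certificates/160162.pem"); err != nil {
		log.Fatal(err)
	}
	fmt.Println("linked")
}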
	I0914 17:39:42.019163   45790 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0914 17:39:42.023367   45790 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0914 17:39:42.023393   45790 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0914 17:39:42.023401   45790 command_runner.go:130] > Device: 253,1	Inode: 6289960     Links: 1
	I0914 17:39:42.023411   45790 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0914 17:39:42.023423   45790 command_runner.go:130] > Access: 2024-09-14 17:33:01.245158305 +0000
	I0914 17:39:42.023432   45790 command_runner.go:130] > Modify: 2024-09-14 17:33:01.245158305 +0000
	I0914 17:39:42.023440   45790 command_runner.go:130] > Change: 2024-09-14 17:33:01.245158305 +0000
	I0914 17:39:42.023448   45790 command_runner.go:130] >  Birth: 2024-09-14 17:33:01.245158305 +0000
	I0914 17:39:42.023518   45790 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0914 17:39:42.028932   45790 command_runner.go:130] > Certificate will not expire
	I0914 17:39:42.029135   45790 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0914 17:39:42.034621   45790 command_runner.go:130] > Certificate will not expire
	I0914 17:39:42.034693   45790 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0914 17:39:42.039883   45790 command_runner.go:130] > Certificate will not expire
	I0914 17:39:42.040179   45790 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0914 17:39:42.045353   45790 command_runner.go:130] > Certificate will not expire
	I0914 17:39:42.045530   45790 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0914 17:39:42.051207   45790 command_runner.go:130] > Certificate will not expire
	I0914 17:39:42.051274   45790 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0914 17:39:42.056685   45790 command_runner.go:130] > Certificate will not expire
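The repeated `openssl x509 -checkend 86400` runs above verify that none of the control-plane certificates expires within the next 24 hours before the cluster is restarted. The same check can be done without shelling out, using only the Go standard library (the path comes from the log; the helper is illustrative):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"errors"
	"fmt"
	"log"
	"os"
	"time"
)

// expiresWithin reports whether the certificate at path expires before now+d,
// i.e. the case in which `openssl x509 -checkend` would fail.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, errors.New("no PEM block in " + path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return cert.NotAfter.Before(time.Now().Add(d)), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		log.Fatal(err)
	}
	if soon {
		fmt.Println("certificate will expire within 24h")
	} else {
		fmt.Println("certificate will not expire") // matches the log output above
	}
}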
	I0914 17:39:42.056847   45790 kubeadm.go:392] StartCluster: {Name:multinode-396884 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19643/minikube-v1.34.0-1726281733-19643-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726281268-19643@sha256:cce8be0c1ac4e3d852132008ef1cc1dcf5b79f708d025db83f146ae65db32e8e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.
1 ClusterName:multinode-396884 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.202 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.97 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.207 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false
inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableO
ptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0914 17:39:42.057000   45790 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0914 17:39:42.057055   45790 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0914 17:39:42.091160   45790 command_runner.go:130] > 7b20bcea57368d20449e198aad2f67b5a322fe3fd9193bb91ab19d01c689bc8c
	I0914 17:39:42.091191   45790 command_runner.go:130] > 7e78c4f8c735e3a5fe2823017862c3af278bec6d973c484caa0dbc504f5de21c
	I0914 17:39:42.091201   45790 command_runner.go:130] > e2a1dfc2e08a6e79deb13754c9100ee0b892c3bf6688372c74501fc674c759b3
	I0914 17:39:42.091210   45790 command_runner.go:130] > 7b44a546c6b2a4cb208db8928953e879c3fdf0af3e39bc8e4db8fde5b5d45382
	I0914 17:39:42.091219   45790 command_runner.go:130] > 5390064e87e606423a1b9d0f32f83c0a7715e0d58d537080aca871e78d0c814b
	I0914 17:39:42.091228   45790 command_runner.go:130] > b335b9702caa3c44ed4cefa8b2277dfa02ef9b4afbdbda180ddb2ebbfc9d76df
	I0914 17:39:42.091237   45790 command_runner.go:130] > 0bd11dfe3a3f4013197bc7de619061491c70c793a47432b085d927da7165d49f
	I0914 17:39:42.091250   45790 command_runner.go:130] > 6ea4b28b7bae4cd4bd3e7b2ca7e3c8310f9e12b908437db9f5c44b01371058b6
	I0914 17:39:42.091270   45790 cri.go:89] found id: "7b20bcea57368d20449e198aad2f67b5a322fe3fd9193bb91ab19d01c689bc8c"
	I0914 17:39:42.091277   45790 cri.go:89] found id: "7e78c4f8c735e3a5fe2823017862c3af278bec6d973c484caa0dbc504f5de21c"
	I0914 17:39:42.091280   45790 cri.go:89] found id: "e2a1dfc2e08a6e79deb13754c9100ee0b892c3bf6688372c74501fc674c759b3"
	I0914 17:39:42.091286   45790 cri.go:89] found id: "7b44a546c6b2a4cb208db8928953e879c3fdf0af3e39bc8e4db8fde5b5d45382"
	I0914 17:39:42.091291   45790 cri.go:89] found id: "5390064e87e606423a1b9d0f32f83c0a7715e0d58d537080aca871e78d0c814b"
	I0914 17:39:42.091294   45790 cri.go:89] found id: "b335b9702caa3c44ed4cefa8b2277dfa02ef9b4afbdbda180ddb2ebbfc9d76df"
	I0914 17:39:42.091297   45790 cri.go:89] found id: "0bd11dfe3a3f4013197bc7de619061491c70c793a47432b085d927da7165d49f"
	I0914 17:39:42.091300   45790 cri.go:89] found id: "6ea4b28b7bae4cd4bd3e7b2ca7e3c8310f9e12b908437db9f5c44b01371058b6"
	I0914 17:39:42.091329   45790 cri.go:89] found id: ""
	I0914 17:39:42.091383   45790 ssh_runner.go:195] Run: sudo runc list -f json
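The container IDs listed above come from running crictl with a label filter for the kube-system namespace and splitting the quiet output one ID per line. A hedged Go sketch of the same call (same crictl flags as in the log; the wrapper is illustrative, and in the report the command runs over SSH rather than locally):

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		log.Fatal(err)
	}
	var ids []string
	for _, line := range strings.Split(string(out), "\n") {
		if id := strings.TrimSpace(line); id != "" {
			ids = append(ids, id)
		}
	}
	fmt.Printf("found %d kube-system containers\n", len(ids))
	for _, id := range ids {
		fmt.Println("found id:", id)
	}
}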
	
	
	==> CRI-O <==
	Sep 14 17:41:28 multinode-396884 crio[2683]: time="2024-09-14 17:41:28.279531413Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726335688279500973,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=969c4da5-2ba8-470a-992c-1fefd9e5d113 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 14 17:41:28 multinode-396884 crio[2683]: time="2024-09-14 17:41:28.280027056Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f20fad18-9f00-4e9f-b6e2-84e4e6494e8f name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 17:41:28 multinode-396884 crio[2683]: time="2024-09-14 17:41:28.280081933Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f20fad18-9f00-4e9f-b6e2-84e4e6494e8f name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 17:41:28 multinode-396884 crio[2683]: time="2024-09-14 17:41:28.280471730Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:37156bea17af948c73f9db1576deb474807ffab67606772dd45a53bb46466f7b,PodSandboxId:1244304d2327d4ca96f9dab53897270221bd32419f7fea1de407009297b15eed,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726335616192564384,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-pzr7k,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d987f1f7-c417-47ff-bf9e-c8aeff216125,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fef6d936d83c959c7a8ea9056fd8ae85068791472148efab33b4c05016da159a,PodSandboxId:14603a5b9b9408fcfd78636f81733a13d313a772f550615ff0f6549f766439d9,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1726335588311575741,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-z4d6c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: effa9e73-ccda-4492-969d-fadbf8054d16,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c2fc74db4fa9be89515943bde72512233a0dd0bd6c64a5fcbbe758b9e1cf5a1b,PodSandboxId:c608d166e9cfc40559f4dd60682e535bef80339cc05f2d92bb8a8350ab8cf5dc,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726335588354802246,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-qtpcg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 13529408-14c2-4b62-8089-9c2842942ddd,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bdfae496458ecf210d5b42e38ae9b96165a86c5bc23d9c6847c96ae81abe7f30,PodSandboxId:ae272afecc7dc9234d085a47aaf7f28ea47cfd632ca9b39ce4636321d2dc2b3f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726335588344900527,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qmlbf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51c467d5-cdb4-4d97-81e9-a0cf6078cc3b,},Annotations:map[string]
string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:76fd47ab3d7c09b43503888ee6e717dc4029824bd3cf102a57b12f7da49cc824,PodSandboxId:44ff81b98a114b0cf7e0950f46bdaea4f3b4b72413c249aa4fa1b0334aeaab1a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726335588359084081,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 90e4e4a9-b67b-4f18-8c77-5caccac87a1a,},Annotations:map[string]string{io.ku
bernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ff3fe0a199c09e50cadb0723f1ebc76b3a4c8700b517a6e0b02304b5b0f92b15,PodSandboxId:a4cd5972443f168b3571e8b0550994530aba19bcbd02275afdb2a23c710107bf,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726335584419784393,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-396884,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b301887ecb32aa4527128a8e7072c3ec,},Annotations:map[string
]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5265cfc6ac2fca92414c21f818a2645bb645bbf7a17299e25e6b0276eab7b351,PodSandboxId:ab897e6a5ff609c2669de03faceb1ee55721d946de43422df637770b243798ee,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726335584436197078,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-396884,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aad7806168f922aedac7c9352d482fc7,},Annotations:map[string]string{io.kub
ernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:65a07f4361254a650dec89e9a12b0e50e904390a219fa291f722cf3e0bce0d18,PodSandboxId:5e42fc4c6013d338009371f6e10074ee267ff21cb005cdb5442c2cb447dc043a,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726335584403779987,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-396884,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 831f0a541da6b9f9926e0f36ffcd8217,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4e51c0a262ad82470037500c6a30af75482164a1c2b2eb61692fcbcf077a5307,PodSandboxId:6c45ca539be4df4195df7d0f62df9f16607f25ad66a6b30154bdf1b60615a57d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726335584426132500,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-396884,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 342268f2615b90c8e7af26c283cd51b1,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c8338e88fc1b00c174df10b94f7a54081a5c6acdbab875508b5434f77cb7ae14,PodSandboxId:6ed76286a869683a9ffd5f5f55a8adec237ace2318b40db692ef32c3776fae42,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1726335264253530601,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-pzr7k,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d987f1f7-c417-47ff-bf9e-c8aeff216125,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b20bcea57368d20449e198aad2f67b5a322fe3fd9193bb91ab19d01c689bc8c,PodSandboxId:cd1d8929e4d25040b30e14825a30ee8976a19180c03418bf616c73633a034b77,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726335207428200278,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-qtpcg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 13529408-14c2-4b62-8089-9c2842942ddd,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contain
erPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e78c4f8c735e3a5fe2823017862c3af278bec6d973c484caa0dbc504f5de21c,PodSandboxId:c4e259c738185ce125a2640f7c8f00a0d334e28fd116b1ff3fed6693c59bd27b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726335207096833878,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: 90e4e4a9-b67b-4f18-8c77-5caccac87a1a,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2a1dfc2e08a6e79deb13754c9100ee0b892c3bf6688372c74501fc674c759b3,PodSandboxId:d9d680aef76b132627444945b0b3b7a86c7925f6dc74bed56bedf11c10a108bb,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1726335195465082311,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-z4d6c,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: effa9e73-ccda-4492-969d-fadbf8054d16,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b44a546c6b2a4cb208db8928953e879c3fdf0af3e39bc8e4db8fde5b5d45382,PodSandboxId:4576692beffea39fc5e0a6e06be363320bfdb75335e63b181910ef4e7de71067,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726335195384963971,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qmlbf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51c467d5-cdb4-4d97-81e9
-a0cf6078cc3b,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5390064e87e606423a1b9d0f32f83c0a7715e0d58d537080aca871e78d0c814b,PodSandboxId:5175a3a2c4a6c507f605270b58d2309ee6fee67da64c6d2897ef82057b3c76ca,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726335184498002618,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-396884,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aad7806168f922aedac7c9352d482fc7,}
,Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b335b9702caa3c44ed4cefa8b2277dfa02ef9b4afbdbda180ddb2ebbfc9d76df,PodSandboxId:4a30deaeaaf32abd67a48479a463abd3ad638a8d294cf52b027f68841c4d9927,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726335184481155648,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-396884,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b301887ecb32aa4527128a
8e7072c3ec,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0bd11dfe3a3f4013197bc7de619061491c70c793a47432b085d927da7165d49f,PodSandboxId:9d3c73752580a9d069b6b778a3aa8d14a016a60e885e1334863acdef0818f1c7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726335184451145589,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-396884,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 342268f2615b90c8e7af26c283cd51b1,},An
notations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ea4b28b7bae4cd4bd3e7b2ca7e3c8310f9e12b908437db9f5c44b01371058b6,PodSandboxId:a383e42333e08fb468bcd50c8cb9b248f480b53fc88a8bdc1aa32e71fae0adba,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1726335184409475185,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-396884,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 831f0a541da6b9f9926e0f36ffcd8217,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f20fad18-9f00-4e9f-b6e2-84e4e6494e8f name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 17:41:28 multinode-396884 crio[2683]: time="2024-09-14 17:41:28.320518099Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=9d09bc86-601b-4058-858d-bb1c981ece41 name=/runtime.v1.RuntimeService/Version
	Sep 14 17:41:28 multinode-396884 crio[2683]: time="2024-09-14 17:41:28.320603919Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=9d09bc86-601b-4058-858d-bb1c981ece41 name=/runtime.v1.RuntimeService/Version
	Sep 14 17:41:28 multinode-396884 crio[2683]: time="2024-09-14 17:41:28.321948737Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=32fa2cfb-9d57-41e2-bd37-2326d1a7ee05 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 14 17:41:28 multinode-396884 crio[2683]: time="2024-09-14 17:41:28.322435279Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726335688322405319,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=32fa2cfb-9d57-41e2-bd37-2326d1a7ee05 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 14 17:41:28 multinode-396884 crio[2683]: time="2024-09-14 17:41:28.322910476Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=14702a1a-a2e5-414c-beef-e7e1463bf68b name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 17:41:28 multinode-396884 crio[2683]: time="2024-09-14 17:41:28.322965133Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=14702a1a-a2e5-414c-beef-e7e1463bf68b name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 17:41:28 multinode-396884 crio[2683]: time="2024-09-14 17:41:28.323431353Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:37156bea17af948c73f9db1576deb474807ffab67606772dd45a53bb46466f7b,PodSandboxId:1244304d2327d4ca96f9dab53897270221bd32419f7fea1de407009297b15eed,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726335616192564384,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-pzr7k,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d987f1f7-c417-47ff-bf9e-c8aeff216125,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fef6d936d83c959c7a8ea9056fd8ae85068791472148efab33b4c05016da159a,PodSandboxId:14603a5b9b9408fcfd78636f81733a13d313a772f550615ff0f6549f766439d9,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1726335588311575741,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-z4d6c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: effa9e73-ccda-4492-969d-fadbf8054d16,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c2fc74db4fa9be89515943bde72512233a0dd0bd6c64a5fcbbe758b9e1cf5a1b,PodSandboxId:c608d166e9cfc40559f4dd60682e535bef80339cc05f2d92bb8a8350ab8cf5dc,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726335588354802246,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-qtpcg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 13529408-14c2-4b62-8089-9c2842942ddd,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bdfae496458ecf210d5b42e38ae9b96165a86c5bc23d9c6847c96ae81abe7f30,PodSandboxId:ae272afecc7dc9234d085a47aaf7f28ea47cfd632ca9b39ce4636321d2dc2b3f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726335588344900527,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qmlbf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51c467d5-cdb4-4d97-81e9-a0cf6078cc3b,},Annotations:map[string]
string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:76fd47ab3d7c09b43503888ee6e717dc4029824bd3cf102a57b12f7da49cc824,PodSandboxId:44ff81b98a114b0cf7e0950f46bdaea4f3b4b72413c249aa4fa1b0334aeaab1a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726335588359084081,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 90e4e4a9-b67b-4f18-8c77-5caccac87a1a,},Annotations:map[string]string{io.ku
bernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ff3fe0a199c09e50cadb0723f1ebc76b3a4c8700b517a6e0b02304b5b0f92b15,PodSandboxId:a4cd5972443f168b3571e8b0550994530aba19bcbd02275afdb2a23c710107bf,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726335584419784393,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-396884,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b301887ecb32aa4527128a8e7072c3ec,},Annotations:map[string
]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5265cfc6ac2fca92414c21f818a2645bb645bbf7a17299e25e6b0276eab7b351,PodSandboxId:ab897e6a5ff609c2669de03faceb1ee55721d946de43422df637770b243798ee,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726335584436197078,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-396884,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aad7806168f922aedac7c9352d482fc7,},Annotations:map[string]string{io.kub
ernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:65a07f4361254a650dec89e9a12b0e50e904390a219fa291f722cf3e0bce0d18,PodSandboxId:5e42fc4c6013d338009371f6e10074ee267ff21cb005cdb5442c2cb447dc043a,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726335584403779987,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-396884,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 831f0a541da6b9f9926e0f36ffcd8217,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4e51c0a262ad82470037500c6a30af75482164a1c2b2eb61692fcbcf077a5307,PodSandboxId:6c45ca539be4df4195df7d0f62df9f16607f25ad66a6b30154bdf1b60615a57d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726335584426132500,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-396884,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 342268f2615b90c8e7af26c283cd51b1,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c8338e88fc1b00c174df10b94f7a54081a5c6acdbab875508b5434f77cb7ae14,PodSandboxId:6ed76286a869683a9ffd5f5f55a8adec237ace2318b40db692ef32c3776fae42,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1726335264253530601,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-pzr7k,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d987f1f7-c417-47ff-bf9e-c8aeff216125,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b20bcea57368d20449e198aad2f67b5a322fe3fd9193bb91ab19d01c689bc8c,PodSandboxId:cd1d8929e4d25040b30e14825a30ee8976a19180c03418bf616c73633a034b77,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726335207428200278,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-qtpcg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 13529408-14c2-4b62-8089-9c2842942ddd,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contain
erPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e78c4f8c735e3a5fe2823017862c3af278bec6d973c484caa0dbc504f5de21c,PodSandboxId:c4e259c738185ce125a2640f7c8f00a0d334e28fd116b1ff3fed6693c59bd27b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726335207096833878,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: 90e4e4a9-b67b-4f18-8c77-5caccac87a1a,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2a1dfc2e08a6e79deb13754c9100ee0b892c3bf6688372c74501fc674c759b3,PodSandboxId:d9d680aef76b132627444945b0b3b7a86c7925f6dc74bed56bedf11c10a108bb,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1726335195465082311,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-z4d6c,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: effa9e73-ccda-4492-969d-fadbf8054d16,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b44a546c6b2a4cb208db8928953e879c3fdf0af3e39bc8e4db8fde5b5d45382,PodSandboxId:4576692beffea39fc5e0a6e06be363320bfdb75335e63b181910ef4e7de71067,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726335195384963971,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qmlbf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51c467d5-cdb4-4d97-81e9
-a0cf6078cc3b,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5390064e87e606423a1b9d0f32f83c0a7715e0d58d537080aca871e78d0c814b,PodSandboxId:5175a3a2c4a6c507f605270b58d2309ee6fee67da64c6d2897ef82057b3c76ca,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726335184498002618,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-396884,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aad7806168f922aedac7c9352d482fc7,}
,Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b335b9702caa3c44ed4cefa8b2277dfa02ef9b4afbdbda180ddb2ebbfc9d76df,PodSandboxId:4a30deaeaaf32abd67a48479a463abd3ad638a8d294cf52b027f68841c4d9927,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726335184481155648,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-396884,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b301887ecb32aa4527128a
8e7072c3ec,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0bd11dfe3a3f4013197bc7de619061491c70c793a47432b085d927da7165d49f,PodSandboxId:9d3c73752580a9d069b6b778a3aa8d14a016a60e885e1334863acdef0818f1c7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726335184451145589,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-396884,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 342268f2615b90c8e7af26c283cd51b1,},An
notations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ea4b28b7bae4cd4bd3e7b2ca7e3c8310f9e12b908437db9f5c44b01371058b6,PodSandboxId:a383e42333e08fb468bcd50c8cb9b248f480b53fc88a8bdc1aa32e71fae0adba,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1726335184409475185,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-396884,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 831f0a541da6b9f9926e0f36ffcd8217,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=14702a1a-a2e5-414c-beef-e7e1463bf68b name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 17:41:28 multinode-396884 crio[2683]: time="2024-09-14 17:41:28.363570238Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=dd0d32ed-0d31-41fa-9a51-b44fdd09a897 name=/runtime.v1.RuntimeService/Version
	Sep 14 17:41:28 multinode-396884 crio[2683]: time="2024-09-14 17:41:28.363968180Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=dd0d32ed-0d31-41fa-9a51-b44fdd09a897 name=/runtime.v1.RuntimeService/Version
	Sep 14 17:41:28 multinode-396884 crio[2683]: time="2024-09-14 17:41:28.367566349Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=64fc8382-7148-4a47-beea-d154d2515f65 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 14 17:41:28 multinode-396884 crio[2683]: time="2024-09-14 17:41:28.367991514Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726335688367962168,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=64fc8382-7148-4a47-beea-d154d2515f65 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 14 17:41:28 multinode-396884 crio[2683]: time="2024-09-14 17:41:28.368713517Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5551d3e6-b966-48b2-945c-359ba57a9f61 name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 17:41:28 multinode-396884 crio[2683]: time="2024-09-14 17:41:28.368782846Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5551d3e6-b966-48b2-945c-359ba57a9f61 name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 17:41:28 multinode-396884 crio[2683]: time="2024-09-14 17:41:28.369125056Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:37156bea17af948c73f9db1576deb474807ffab67606772dd45a53bb46466f7b,PodSandboxId:1244304d2327d4ca96f9dab53897270221bd32419f7fea1de407009297b15eed,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726335616192564384,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-pzr7k,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d987f1f7-c417-47ff-bf9e-c8aeff216125,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fef6d936d83c959c7a8ea9056fd8ae85068791472148efab33b4c05016da159a,PodSandboxId:14603a5b9b9408fcfd78636f81733a13d313a772f550615ff0f6549f766439d9,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1726335588311575741,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-z4d6c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: effa9e73-ccda-4492-969d-fadbf8054d16,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c2fc74db4fa9be89515943bde72512233a0dd0bd6c64a5fcbbe758b9e1cf5a1b,PodSandboxId:c608d166e9cfc40559f4dd60682e535bef80339cc05f2d92bb8a8350ab8cf5dc,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726335588354802246,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-qtpcg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 13529408-14c2-4b62-8089-9c2842942ddd,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bdfae496458ecf210d5b42e38ae9b96165a86c5bc23d9c6847c96ae81abe7f30,PodSandboxId:ae272afecc7dc9234d085a47aaf7f28ea47cfd632ca9b39ce4636321d2dc2b3f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726335588344900527,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qmlbf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51c467d5-cdb4-4d97-81e9-a0cf6078cc3b,},Annotations:map[string]
string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:76fd47ab3d7c09b43503888ee6e717dc4029824bd3cf102a57b12f7da49cc824,PodSandboxId:44ff81b98a114b0cf7e0950f46bdaea4f3b4b72413c249aa4fa1b0334aeaab1a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726335588359084081,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 90e4e4a9-b67b-4f18-8c77-5caccac87a1a,},Annotations:map[string]string{io.ku
bernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ff3fe0a199c09e50cadb0723f1ebc76b3a4c8700b517a6e0b02304b5b0f92b15,PodSandboxId:a4cd5972443f168b3571e8b0550994530aba19bcbd02275afdb2a23c710107bf,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726335584419784393,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-396884,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b301887ecb32aa4527128a8e7072c3ec,},Annotations:map[string
]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5265cfc6ac2fca92414c21f818a2645bb645bbf7a17299e25e6b0276eab7b351,PodSandboxId:ab897e6a5ff609c2669de03faceb1ee55721d946de43422df637770b243798ee,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726335584436197078,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-396884,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aad7806168f922aedac7c9352d482fc7,},Annotations:map[string]string{io.kub
ernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:65a07f4361254a650dec89e9a12b0e50e904390a219fa291f722cf3e0bce0d18,PodSandboxId:5e42fc4c6013d338009371f6e10074ee267ff21cb005cdb5442c2cb447dc043a,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726335584403779987,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-396884,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 831f0a541da6b9f9926e0f36ffcd8217,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4e51c0a262ad82470037500c6a30af75482164a1c2b2eb61692fcbcf077a5307,PodSandboxId:6c45ca539be4df4195df7d0f62df9f16607f25ad66a6b30154bdf1b60615a57d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726335584426132500,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-396884,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 342268f2615b90c8e7af26c283cd51b1,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c8338e88fc1b00c174df10b94f7a54081a5c6acdbab875508b5434f77cb7ae14,PodSandboxId:6ed76286a869683a9ffd5f5f55a8adec237ace2318b40db692ef32c3776fae42,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1726335264253530601,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-pzr7k,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d987f1f7-c417-47ff-bf9e-c8aeff216125,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b20bcea57368d20449e198aad2f67b5a322fe3fd9193bb91ab19d01c689bc8c,PodSandboxId:cd1d8929e4d25040b30e14825a30ee8976a19180c03418bf616c73633a034b77,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726335207428200278,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-qtpcg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 13529408-14c2-4b62-8089-9c2842942ddd,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contain
erPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e78c4f8c735e3a5fe2823017862c3af278bec6d973c484caa0dbc504f5de21c,PodSandboxId:c4e259c738185ce125a2640f7c8f00a0d334e28fd116b1ff3fed6693c59bd27b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726335207096833878,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: 90e4e4a9-b67b-4f18-8c77-5caccac87a1a,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2a1dfc2e08a6e79deb13754c9100ee0b892c3bf6688372c74501fc674c759b3,PodSandboxId:d9d680aef76b132627444945b0b3b7a86c7925f6dc74bed56bedf11c10a108bb,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1726335195465082311,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-z4d6c,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: effa9e73-ccda-4492-969d-fadbf8054d16,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b44a546c6b2a4cb208db8928953e879c3fdf0af3e39bc8e4db8fde5b5d45382,PodSandboxId:4576692beffea39fc5e0a6e06be363320bfdb75335e63b181910ef4e7de71067,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726335195384963971,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qmlbf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51c467d5-cdb4-4d97-81e9
-a0cf6078cc3b,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5390064e87e606423a1b9d0f32f83c0a7715e0d58d537080aca871e78d0c814b,PodSandboxId:5175a3a2c4a6c507f605270b58d2309ee6fee67da64c6d2897ef82057b3c76ca,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726335184498002618,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-396884,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aad7806168f922aedac7c9352d482fc7,}
,Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b335b9702caa3c44ed4cefa8b2277dfa02ef9b4afbdbda180ddb2ebbfc9d76df,PodSandboxId:4a30deaeaaf32abd67a48479a463abd3ad638a8d294cf52b027f68841c4d9927,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726335184481155648,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-396884,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b301887ecb32aa4527128a
8e7072c3ec,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0bd11dfe3a3f4013197bc7de619061491c70c793a47432b085d927da7165d49f,PodSandboxId:9d3c73752580a9d069b6b778a3aa8d14a016a60e885e1334863acdef0818f1c7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726335184451145589,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-396884,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 342268f2615b90c8e7af26c283cd51b1,},An
notations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ea4b28b7bae4cd4bd3e7b2ca7e3c8310f9e12b908437db9f5c44b01371058b6,PodSandboxId:a383e42333e08fb468bcd50c8cb9b248f480b53fc88a8bdc1aa32e71fae0adba,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1726335184409475185,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-396884,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 831f0a541da6b9f9926e0f36ffcd8217,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5551d3e6-b966-48b2-945c-359ba57a9f61 name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 17:41:28 multinode-396884 crio[2683]: time="2024-09-14 17:41:28.413696499Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b67f9848-5d8b-4c42-a39d-58c86426e2f5 name=/runtime.v1.RuntimeService/Version
	Sep 14 17:41:28 multinode-396884 crio[2683]: time="2024-09-14 17:41:28.413803693Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b67f9848-5d8b-4c42-a39d-58c86426e2f5 name=/runtime.v1.RuntimeService/Version
	Sep 14 17:41:28 multinode-396884 crio[2683]: time="2024-09-14 17:41:28.426454580Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d5fc9554-be4a-44bc-826c-3fff92cbd332 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 14 17:41:28 multinode-396884 crio[2683]: time="2024-09-14 17:41:28.426900045Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726335688426875221,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d5fc9554-be4a-44bc-826c-3fff92cbd332 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 14 17:41:28 multinode-396884 crio[2683]: time="2024-09-14 17:41:28.427443008Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1b93ee0e-2fb1-4ba0-9bee-b679a07ca041 name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 17:41:28 multinode-396884 crio[2683]: time="2024-09-14 17:41:28.427504996Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1b93ee0e-2fb1-4ba0-9bee-b679a07ca041 name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 17:41:28 multinode-396884 crio[2683]: time="2024-09-14 17:41:28.427879643Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:37156bea17af948c73f9db1576deb474807ffab67606772dd45a53bb46466f7b,PodSandboxId:1244304d2327d4ca96f9dab53897270221bd32419f7fea1de407009297b15eed,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726335616192564384,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-pzr7k,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d987f1f7-c417-47ff-bf9e-c8aeff216125,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fef6d936d83c959c7a8ea9056fd8ae85068791472148efab33b4c05016da159a,PodSandboxId:14603a5b9b9408fcfd78636f81733a13d313a772f550615ff0f6549f766439d9,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1726335588311575741,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-z4d6c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: effa9e73-ccda-4492-969d-fadbf8054d16,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c2fc74db4fa9be89515943bde72512233a0dd0bd6c64a5fcbbe758b9e1cf5a1b,PodSandboxId:c608d166e9cfc40559f4dd60682e535bef80339cc05f2d92bb8a8350ab8cf5dc,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726335588354802246,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-qtpcg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 13529408-14c2-4b62-8089-9c2842942ddd,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bdfae496458ecf210d5b42e38ae9b96165a86c5bc23d9c6847c96ae81abe7f30,PodSandboxId:ae272afecc7dc9234d085a47aaf7f28ea47cfd632ca9b39ce4636321d2dc2b3f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726335588344900527,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qmlbf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51c467d5-cdb4-4d97-81e9-a0cf6078cc3b,},Annotations:map[string]
string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:76fd47ab3d7c09b43503888ee6e717dc4029824bd3cf102a57b12f7da49cc824,PodSandboxId:44ff81b98a114b0cf7e0950f46bdaea4f3b4b72413c249aa4fa1b0334aeaab1a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726335588359084081,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 90e4e4a9-b67b-4f18-8c77-5caccac87a1a,},Annotations:map[string]string{io.ku
bernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ff3fe0a199c09e50cadb0723f1ebc76b3a4c8700b517a6e0b02304b5b0f92b15,PodSandboxId:a4cd5972443f168b3571e8b0550994530aba19bcbd02275afdb2a23c710107bf,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726335584419784393,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-396884,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b301887ecb32aa4527128a8e7072c3ec,},Annotations:map[string
]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5265cfc6ac2fca92414c21f818a2645bb645bbf7a17299e25e6b0276eab7b351,PodSandboxId:ab897e6a5ff609c2669de03faceb1ee55721d946de43422df637770b243798ee,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726335584436197078,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-396884,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aad7806168f922aedac7c9352d482fc7,},Annotations:map[string]string{io.kub
ernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:65a07f4361254a650dec89e9a12b0e50e904390a219fa291f722cf3e0bce0d18,PodSandboxId:5e42fc4c6013d338009371f6e10074ee267ff21cb005cdb5442c2cb447dc043a,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726335584403779987,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-396884,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 831f0a541da6b9f9926e0f36ffcd8217,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4e51c0a262ad82470037500c6a30af75482164a1c2b2eb61692fcbcf077a5307,PodSandboxId:6c45ca539be4df4195df7d0f62df9f16607f25ad66a6b30154bdf1b60615a57d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726335584426132500,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-396884,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 342268f2615b90c8e7af26c283cd51b1,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c8338e88fc1b00c174df10b94f7a54081a5c6acdbab875508b5434f77cb7ae14,PodSandboxId:6ed76286a869683a9ffd5f5f55a8adec237ace2318b40db692ef32c3776fae42,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1726335264253530601,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-pzr7k,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d987f1f7-c417-47ff-bf9e-c8aeff216125,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b20bcea57368d20449e198aad2f67b5a322fe3fd9193bb91ab19d01c689bc8c,PodSandboxId:cd1d8929e4d25040b30e14825a30ee8976a19180c03418bf616c73633a034b77,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726335207428200278,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-qtpcg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 13529408-14c2-4b62-8089-9c2842942ddd,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contain
erPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e78c4f8c735e3a5fe2823017862c3af278bec6d973c484caa0dbc504f5de21c,PodSandboxId:c4e259c738185ce125a2640f7c8f00a0d334e28fd116b1ff3fed6693c59bd27b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726335207096833878,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: 90e4e4a9-b67b-4f18-8c77-5caccac87a1a,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2a1dfc2e08a6e79deb13754c9100ee0b892c3bf6688372c74501fc674c759b3,PodSandboxId:d9d680aef76b132627444945b0b3b7a86c7925f6dc74bed56bedf11c10a108bb,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1726335195465082311,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-z4d6c,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: effa9e73-ccda-4492-969d-fadbf8054d16,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b44a546c6b2a4cb208db8928953e879c3fdf0af3e39bc8e4db8fde5b5d45382,PodSandboxId:4576692beffea39fc5e0a6e06be363320bfdb75335e63b181910ef4e7de71067,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726335195384963971,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qmlbf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51c467d5-cdb4-4d97-81e9
-a0cf6078cc3b,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5390064e87e606423a1b9d0f32f83c0a7715e0d58d537080aca871e78d0c814b,PodSandboxId:5175a3a2c4a6c507f605270b58d2309ee6fee67da64c6d2897ef82057b3c76ca,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726335184498002618,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-396884,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aad7806168f922aedac7c9352d482fc7,}
,Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b335b9702caa3c44ed4cefa8b2277dfa02ef9b4afbdbda180ddb2ebbfc9d76df,PodSandboxId:4a30deaeaaf32abd67a48479a463abd3ad638a8d294cf52b027f68841c4d9927,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726335184481155648,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-396884,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b301887ecb32aa4527128a
8e7072c3ec,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0bd11dfe3a3f4013197bc7de619061491c70c793a47432b085d927da7165d49f,PodSandboxId:9d3c73752580a9d069b6b778a3aa8d14a016a60e885e1334863acdef0818f1c7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726335184451145589,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-396884,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 342268f2615b90c8e7af26c283cd51b1,},An
notations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ea4b28b7bae4cd4bd3e7b2ca7e3c8310f9e12b908437db9f5c44b01371058b6,PodSandboxId:a383e42333e08fb468bcd50c8cb9b248f480b53fc88a8bdc1aa32e71fae0adba,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1726335184409475185,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-396884,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 831f0a541da6b9f9926e0f36ffcd8217,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=1b93ee0e-2fb1-4ba0-9bee-b679a07ca041 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	37156bea17af9       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      About a minute ago   Running             busybox                   1                   1244304d2327d       busybox-7dff88458-pzr7k
	76fd47ab3d7c0       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      About a minute ago   Running             storage-provisioner       1                   44ff81b98a114       storage-provisioner
	c2fc74db4fa9b       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      About a minute ago   Running             coredns                   1                   c608d166e9cfc       coredns-7c65d6cfc9-qtpcg
	bdfae496458ec       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      About a minute ago   Running             kube-proxy                1                   ae272afecc7dc       kube-proxy-qmlbf
	fef6d936d83c9       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      About a minute ago   Running             kindnet-cni               1                   14603a5b9b940       kindnet-z4d6c
	5265cfc6ac2fc       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      About a minute ago   Running             kube-scheduler            1                   ab897e6a5ff60       kube-scheduler-multinode-396884
	4e51c0a262ad8       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      About a minute ago   Running             kube-apiserver            1                   6c45ca539be4d       kube-apiserver-multinode-396884
	ff3fe0a199c09       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      About a minute ago   Running             kube-controller-manager   1                   a4cd5972443f1       kube-controller-manager-multinode-396884
	65a07f4361254       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      About a minute ago   Running             etcd                      1                   5e42fc4c6013d       etcd-multinode-396884
	c8338e88fc1b0       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   7 minutes ago        Exited              busybox                   0                   6ed76286a8696       busybox-7dff88458-pzr7k
	7b20bcea57368       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      8 minutes ago        Exited              coredns                   0                   cd1d8929e4d25       coredns-7c65d6cfc9-qtpcg
	7e78c4f8c735e       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      8 minutes ago        Exited              storage-provisioner       0                   c4e259c738185       storage-provisioner
	e2a1dfc2e08a6       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      8 minutes ago        Exited              kindnet-cni               0                   d9d680aef76b1       kindnet-z4d6c
	7b44a546c6b2a       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      8 minutes ago        Exited              kube-proxy                0                   4576692beffea       kube-proxy-qmlbf
	5390064e87e60       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      8 minutes ago        Exited              kube-scheduler            0                   5175a3a2c4a6c       kube-scheduler-multinode-396884
	b335b9702caa3       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      8 minutes ago        Exited              kube-controller-manager   0                   4a30deaeaaf32       kube-controller-manager-multinode-396884
	0bd11dfe3a3f4       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      8 minutes ago        Exited              kube-apiserver            0                   9d3c73752580a       kube-apiserver-multinode-396884
	6ea4b28b7bae4       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      8 minutes ago        Exited              etcd                      0                   a383e42333e08       etcd-multinode-396884
	
	
	==> coredns [7b20bcea57368d20449e198aad2f67b5a322fe3fd9193bb91ab19d01c689bc8c] <==
	[INFO] 10.244.0.3:49362 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001722842s
	[INFO] 10.244.0.3:54240 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000091983s
	[INFO] 10.244.0.3:34820 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000194229s
	[INFO] 10.244.0.3:35562 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001093551s
	[INFO] 10.244.0.3:47979 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00005873s
	[INFO] 10.244.0.3:48379 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000055018s
	[INFO] 10.244.0.3:40851 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000060956s
	[INFO] 10.244.1.2:52098 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000137307s
	[INFO] 10.244.1.2:41742 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000129006s
	[INFO] 10.244.1.2:58999 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000094199s
	[INFO] 10.244.1.2:53402 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000092946s
	[INFO] 10.244.0.3:38808 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000100114s
	[INFO] 10.244.0.3:54241 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000059415s
	[INFO] 10.244.0.3:45999 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000042417s
	[INFO] 10.244.0.3:48989 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000041207s
	[INFO] 10.244.1.2:43578 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000161824s
	[INFO] 10.244.1.2:59633 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000545807s
	[INFO] 10.244.1.2:39252 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000144119s
	[INFO] 10.244.1.2:43373 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000122601s
	[INFO] 10.244.0.3:46966 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000121584s
	[INFO] 10.244.0.3:59127 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000094605s
	[INFO] 10.244.0.3:55764 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000077945s
	[INFO] 10.244.0.3:39350 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000065883s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [c2fc74db4fa9be89515943bde72512233a0dd0bd6c64a5fcbbe758b9e1cf5a1b] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:40195 - 37690 "HINFO IN 7330464082475971152.6795589113969959812. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.012338521s
	
	
	==> describe nodes <==
	Name:               multinode-396884
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-396884
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fbeeb744274463b05401c917e5ab21bbaf5ef95a
	                    minikube.k8s.io/name=multinode-396884
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_14T17_33_10_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 14 Sep 2024 17:33:07 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-396884
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 14 Sep 2024 17:41:19 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 14 Sep 2024 17:39:47 +0000   Sat, 14 Sep 2024 17:33:05 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 14 Sep 2024 17:39:47 +0000   Sat, 14 Sep 2024 17:33:05 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 14 Sep 2024 17:39:47 +0000   Sat, 14 Sep 2024 17:33:05 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 14 Sep 2024 17:39:47 +0000   Sat, 14 Sep 2024 17:33:26 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.202
	  Hostname:    multinode-396884
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 dbec24e7e0254179ac61d32d838545fa
	  System UUID:                dbec24e7-e025-4179-ac61-d32d838545fa
	  Boot ID:                    b3ec561f-d0ed-473c-918d-183c27fdcf35
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-pzr7k                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m8s
	  kube-system                 coredns-7c65d6cfc9-qtpcg                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     8m14s
	  kube-system                 etcd-multinode-396884                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         8m19s
	  kube-system                 kindnet-z4d6c                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      8m14s
	  kube-system                 kube-apiserver-multinode-396884             250m (12%)    0 (0%)      0 (0%)           0 (0%)         8m19s
	  kube-system                 kube-controller-manager-multinode-396884    200m (10%)    0 (0%)      0 (0%)           0 (0%)         8m19s
	  kube-system                 kube-proxy-qmlbf                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m14s
	  kube-system                 kube-scheduler-multinode-396884             100m (5%)     0 (0%)      0 (0%)           0 (0%)         8m19s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m13s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 8m13s                  kube-proxy       
	  Normal  Starting                 99s                    kube-proxy       
	  Normal  NodeHasSufficientMemory  8m25s (x8 over 8m25s)  kubelet          Node multinode-396884 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m25s (x8 over 8m25s)  kubelet          Node multinode-396884 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m25s (x7 over 8m25s)  kubelet          Node multinode-396884 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  8m25s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 8m19s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  8m19s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  8m18s                  kubelet          Node multinode-396884 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m18s                  kubelet          Node multinode-396884 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m18s                  kubelet          Node multinode-396884 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           8m15s                  node-controller  Node multinode-396884 event: Registered Node multinode-396884 in Controller
	  Normal  NodeReady                8m2s                   kubelet          Node multinode-396884 status is now: NodeReady
	  Normal  Starting                 105s                   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  104s (x8 over 104s)    kubelet          Node multinode-396884 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    104s (x8 over 104s)    kubelet          Node multinode-396884 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     104s (x7 over 104s)    kubelet          Node multinode-396884 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  104s                   kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           98s                    node-controller  Node multinode-396884 event: Registered Node multinode-396884 in Controller
	
	
	Name:               multinode-396884-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-396884-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fbeeb744274463b05401c917e5ab21bbaf5ef95a
	                    minikube.k8s.io/name=multinode-396884
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_14T17_40_28_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 14 Sep 2024 17:40:27 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-396884-m02
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 14 Sep 2024 17:41:28 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 14 Sep 2024 17:40:58 +0000   Sat, 14 Sep 2024 17:40:27 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 14 Sep 2024 17:40:58 +0000   Sat, 14 Sep 2024 17:40:27 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 14 Sep 2024 17:40:58 +0000   Sat, 14 Sep 2024 17:40:27 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 14 Sep 2024 17:40:58 +0000   Sat, 14 Sep 2024 17:40:47 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.97
	  Hostname:    multinode-396884-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 7181e09ea84e4f039062f92a925a5288
	  System UUID:                7181e09e-a84e-4f03-9062-f92a925a5288
	  Boot ID:                    993e31f9-559d-46fe-89d9-daad60598e95
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-xptfw    0 (0%)        0 (0%)      0 (0%)           0 (0%)         66s
	  kube-system                 kindnet-gtn5l              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      7m31s
	  kube-system                 kube-proxy-gs2rm           0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m31s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  Starting                 7m25s                  kube-proxy  
	  Normal  Starting                 56s                    kube-proxy  
	  Normal  NodeHasSufficientMemory  7m31s (x2 over 7m31s)  kubelet     Node multinode-396884-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m31s (x2 over 7m31s)  kubelet     Node multinode-396884-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m31s (x2 over 7m31s)  kubelet     Node multinode-396884-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  7m31s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                7m11s                  kubelet     Node multinode-396884-m02 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  61s (x2 over 61s)      kubelet     Node multinode-396884-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    61s (x2 over 61s)      kubelet     Node multinode-396884-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     61s (x2 over 61s)      kubelet     Node multinode-396884-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  61s                    kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                41s                    kubelet     Node multinode-396884-m02 status is now: NodeReady
	
	
	Name:               multinode-396884-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-396884-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fbeeb744274463b05401c917e5ab21bbaf5ef95a
	                    minikube.k8s.io/name=multinode-396884
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_14T17_41_07_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 14 Sep 2024 17:41:06 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-396884-m03
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 14 Sep 2024 17:41:26 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 14 Sep 2024 17:41:25 +0000   Sat, 14 Sep 2024 17:41:06 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 14 Sep 2024 17:41:25 +0000   Sat, 14 Sep 2024 17:41:06 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 14 Sep 2024 17:41:25 +0000   Sat, 14 Sep 2024 17:41:06 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 14 Sep 2024 17:41:25 +0000   Sat, 14 Sep 2024 17:41:25 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.207
	  Hostname:    multinode-396884-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 a447eaf560754c06a13d88e42c6551ab
	  System UUID:                a447eaf5-6075-4c06-a13d-88e42c6551ab
	  Boot ID:                    afbf3a3d-44d3-49f6-97f7-d62a72992985
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-d8c78       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m34s
	  kube-system                 kube-proxy-mhld5    0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m34s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m40s                  kube-proxy       
	  Normal  Starting                 6m28s                  kube-proxy       
	  Normal  Starting                 17s                    kube-proxy       
	  Normal  NodeHasSufficientMemory  6m34s (x2 over 6m34s)  kubelet          Node multinode-396884-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m34s (x2 over 6m34s)  kubelet          Node multinode-396884-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m34s (x2 over 6m34s)  kubelet          Node multinode-396884-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m34s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                6m14s                  kubelet          Node multinode-396884-m03 status is now: NodeReady
	  Normal  NodeHasNoDiskPressure    5m45s (x2 over 5m45s)  kubelet          Node multinode-396884-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m45s (x2 over 5m45s)  kubelet          Node multinode-396884-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m45s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  5m45s (x2 over 5m45s)  kubelet          Node multinode-396884-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeReady                5m25s                  kubelet          Node multinode-396884-m03 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  22s (x2 over 22s)      kubelet          Node multinode-396884-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    22s (x2 over 22s)      kubelet          Node multinode-396884-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     22s (x2 over 22s)      kubelet          Node multinode-396884-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  22s                    kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           18s                    node-controller  Node multinode-396884-m03 event: Registered Node multinode-396884-m03 in Controller
	  Normal  NodeReady                3s                     kubelet          Node multinode-396884-m03 status is now: NodeReady
	
	
	==> dmesg <==
	[  +9.273986] systemd-fstab-generator[581]: Ignoring "noauto" option for root device
	[  +0.124948] systemd-fstab-generator[593]: Ignoring "noauto" option for root device
	[  +0.193996] systemd-fstab-generator[607]: Ignoring "noauto" option for root device
	[  +0.132634] systemd-fstab-generator[620]: Ignoring "noauto" option for root device
	[  +0.280269] systemd-fstab-generator[649]: Ignoring "noauto" option for root device
	[  +3.887713] systemd-fstab-generator[742]: Ignoring "noauto" option for root device
	[Sep14 17:33] systemd-fstab-generator[873]: Ignoring "noauto" option for root device
	[  +0.063629] kauditd_printk_skb: 158 callbacks suppressed
	[  +6.504517] systemd-fstab-generator[1203]: Ignoring "noauto" option for root device
	[  +0.082092] kauditd_printk_skb: 69 callbacks suppressed
	[  +4.638361] systemd-fstab-generator[1303]: Ignoring "noauto" option for root device
	[  +0.930725] kauditd_printk_skb: 49 callbacks suppressed
	[ +11.769652] kauditd_printk_skb: 38 callbacks suppressed
	[Sep14 17:34] kauditd_printk_skb: 14 callbacks suppressed
	[Sep14 17:39] systemd-fstab-generator[2605]: Ignoring "noauto" option for root device
	[  +0.166264] systemd-fstab-generator[2620]: Ignoring "noauto" option for root device
	[  +0.193889] systemd-fstab-generator[2635]: Ignoring "noauto" option for root device
	[  +0.151167] systemd-fstab-generator[2647]: Ignoring "noauto" option for root device
	[  +0.286690] systemd-fstab-generator[2675]: Ignoring "noauto" option for root device
	[  +0.697946] systemd-fstab-generator[2768]: Ignoring "noauto" option for root device
	[  +2.237243] systemd-fstab-generator[3176]: Ignoring "noauto" option for root device
	[  +4.713162] kauditd_printk_skb: 204 callbacks suppressed
	[  +7.956382] kauditd_printk_skb: 14 callbacks suppressed
	[Sep14 17:40] systemd-fstab-generator[3747]: Ignoring "noauto" option for root device
	[ +13.522034] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> etcd [65a07f4361254a650dec89e9a12b0e50e904390a219fa291f722cf3e0bce0d18] <==
	{"level":"info","ts":"2024-09-14T17:39:44.842937Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"e4e52c0b9ecc5e15","local-member-id":"f9de38f1a7e06692","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-14T17:39:44.844294Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-14T17:39:44.849070Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-14T17:39:44.855885Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-09-14T17:39:44.855941Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.39.202:2380"}
	{"level":"info","ts":"2024-09-14T17:39:44.858447Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.39.202:2380"}
	{"level":"info","ts":"2024-09-14T17:39:44.859656Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"f9de38f1a7e06692","initial-advertise-peer-urls":["https://192.168.39.202:2380"],"listen-peer-urls":["https://192.168.39.202:2380"],"advertise-client-urls":["https://192.168.39.202:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.202:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-09-14T17:39:44.859784Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-09-14T17:39:45.896733Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f9de38f1a7e06692 is starting a new election at term 2"}
	{"level":"info","ts":"2024-09-14T17:39:45.896817Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f9de38f1a7e06692 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-09-14T17:39:45.896835Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f9de38f1a7e06692 received MsgPreVoteResp from f9de38f1a7e06692 at term 2"}
	{"level":"info","ts":"2024-09-14T17:39:45.896846Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f9de38f1a7e06692 became candidate at term 3"}
	{"level":"info","ts":"2024-09-14T17:39:45.896853Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f9de38f1a7e06692 received MsgVoteResp from f9de38f1a7e06692 at term 3"}
	{"level":"info","ts":"2024-09-14T17:39:45.896862Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f9de38f1a7e06692 became leader at term 3"}
	{"level":"info","ts":"2024-09-14T17:39:45.896870Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f9de38f1a7e06692 elected leader f9de38f1a7e06692 at term 3"}
	{"level":"info","ts":"2024-09-14T17:39:45.902266Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"f9de38f1a7e06692","local-member-attributes":"{Name:multinode-396884 ClientURLs:[https://192.168.39.202:2379]}","request-path":"/0/members/f9de38f1a7e06692/attributes","cluster-id":"e4e52c0b9ecc5e15","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-14T17:39:45.902318Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-14T17:39:45.902665Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-14T17:39:45.902779Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-14T17:39:45.902778Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-14T17:39:45.903447Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-14T17:39:45.903623Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-14T17:39:45.904202Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-14T17:39:45.904453Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.202:2379"}
	{"level":"info","ts":"2024-09-14T17:41:10.712937Z","caller":"traceutil/trace.go:171","msg":"trace[1755647145] transaction","detail":"{read_only:false; response_revision:1167; number_of_response:1; }","duration":"102.451879ms","start":"2024-09-14T17:41:10.610456Z","end":"2024-09-14T17:41:10.712907Z","steps":["trace[1755647145] 'process raft request'  (duration: 102.326938ms)"],"step_count":1}
	
	
	==> etcd [6ea4b28b7bae4cd4bd3e7b2ca7e3c8310f9e12b908437db9f5c44b01371058b6] <==
	{"level":"info","ts":"2024-09-14T17:33:05.387740Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-14T17:33:05.388339Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-14T17:33:05.389034Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.202:2379"}
	{"level":"info","ts":"2024-09-14T17:33:05.401099Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-14T17:33:05.415931Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"e4e52c0b9ecc5e15","local-member-id":"f9de38f1a7e06692","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-14T17:33:05.416111Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-14T17:33:05.416164Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-14T17:33:05.418322Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"warn","ts":"2024-09-14T17:33:57.375112Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"153.52673ms","expected-duration":"100ms","prefix":"","request":"header:<ID:7391130405298998201 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/default/multinode-396884-m02.17f52cc06c75418b\" mod_revision:0 > success:<request_put:<key:\"/registry/events/default/multinode-396884-m02.17f52cc06c75418b\" value_size:642 lease:7391130405298997200 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2024-09-14T17:33:57.375344Z","caller":"traceutil/trace.go:171","msg":"trace[1229652052] transaction","detail":"{read_only:false; response_revision:475; number_of_response:1; }","duration":"227.022198ms","start":"2024-09-14T17:33:57.148286Z","end":"2024-09-14T17:33:57.375308Z","steps":["trace[1229652052] 'process raft request'  (duration: 72.812832ms)","trace[1229652052] 'compare'  (duration: 153.375677ms)"],"step_count":2}
	{"level":"info","ts":"2024-09-14T17:34:01.157943Z","caller":"traceutil/trace.go:171","msg":"trace[1840670995] transaction","detail":"{read_only:false; response_revision:507; number_of_response:1; }","duration":"101.528829ms","start":"2024-09-14T17:34:01.056138Z","end":"2024-09-14T17:34:01.157667Z","steps":["trace[1840670995] 'process raft request'  (duration: 101.394036ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-14T17:34:54.789491Z","caller":"traceutil/trace.go:171","msg":"trace[1358092839] transaction","detail":"{read_only:false; response_revision:610; number_of_response:1; }","duration":"223.163326ms","start":"2024-09-14T17:34:54.566294Z","end":"2024-09-14T17:34:54.789457Z","steps":["trace[1358092839] 'process raft request'  (duration: 222.74671ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-14T17:34:57.843843Z","caller":"traceutil/trace.go:171","msg":"trace[1278725833] transaction","detail":"{read_only:false; response_revision:639; number_of_response:1; }","duration":"141.276781ms","start":"2024-09-14T17:34:57.702526Z","end":"2024-09-14T17:34:57.843803Z","steps":["trace[1278725833] 'process raft request'  (duration: 141.160755ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-14T17:34:58.029754Z","caller":"traceutil/trace.go:171","msg":"trace[557321825] transaction","detail":"{read_only:false; response_revision:640; number_of_response:1; }","duration":"180.692568ms","start":"2024-09-14T17:34:57.849043Z","end":"2024-09-14T17:34:58.029735Z","steps":["trace[557321825] 'process raft request'  (duration: 115.31876ms)","trace[557321825] 'compare'  (duration: 65.249533ms)"],"step_count":2}
	{"level":"info","ts":"2024-09-14T17:34:58.382886Z","caller":"traceutil/trace.go:171","msg":"trace[1617984724] transaction","detail":"{read_only:false; response_revision:641; number_of_response:1; }","duration":"125.496502ms","start":"2024-09-14T17:34:58.257373Z","end":"2024-09-14T17:34:58.382869Z","steps":["trace[1617984724] 'process raft request'  (duration: 124.192601ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-14T17:38:08.701842Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-09-14T17:38:08.701974Z","caller":"embed/etcd.go:377","msg":"closing etcd server","name":"multinode-396884","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.202:2380"],"advertise-client-urls":["https://192.168.39.202:2379"]}
	{"level":"warn","ts":"2024-09-14T17:38:08.704635Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-14T17:38:08.704750Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-14T17:38:08.781743Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.202:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-14T17:38:08.781893Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.202:2379: use of closed network connection"}
	{"level":"info","ts":"2024-09-14T17:38:08.783583Z","caller":"etcdserver/server.go:1521","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"f9de38f1a7e06692","current-leader-member-id":"f9de38f1a7e06692"}
	{"level":"info","ts":"2024-09-14T17:38:08.786487Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.39.202:2380"}
	{"level":"info","ts":"2024-09-14T17:38:08.786605Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.39.202:2380"}
	{"level":"info","ts":"2024-09-14T17:38:08.786628Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"multinode-396884","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.202:2380"],"advertise-client-urls":["https://192.168.39.202:2379"]}
	
	
	==> kernel <==
	 17:41:28 up 8 min,  0 users,  load average: 0.46, 0.25, 0.12
	Linux multinode-396884 5.10.207 #1 SMP Sat Sep 14 07:35:46 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [e2a1dfc2e08a6e79deb13754c9100ee0b892c3bf6688372c74501fc674c759b3] <==
	I0914 17:37:26.538291       1 main.go:322] Node multinode-396884-m03 has CIDR [10.244.3.0/24] 
	I0914 17:37:36.537415       1 main.go:295] Handling node with IPs: map[192.168.39.97:{}]
	I0914 17:37:36.537464       1 main.go:322] Node multinode-396884-m02 has CIDR [10.244.1.0/24] 
	I0914 17:37:36.537659       1 main.go:295] Handling node with IPs: map[192.168.39.207:{}]
	I0914 17:37:36.537680       1 main.go:322] Node multinode-396884-m03 has CIDR [10.244.3.0/24] 
	I0914 17:37:36.537738       1 main.go:295] Handling node with IPs: map[192.168.39.202:{}]
	I0914 17:37:36.537755       1 main.go:299] handling current node
	I0914 17:37:46.539750       1 main.go:295] Handling node with IPs: map[192.168.39.207:{}]
	I0914 17:37:46.539857       1 main.go:322] Node multinode-396884-m03 has CIDR [10.244.3.0/24] 
	I0914 17:37:46.539996       1 main.go:295] Handling node with IPs: map[192.168.39.202:{}]
	I0914 17:37:46.540019       1 main.go:299] handling current node
	I0914 17:37:46.540048       1 main.go:295] Handling node with IPs: map[192.168.39.97:{}]
	I0914 17:37:46.540065       1 main.go:322] Node multinode-396884-m02 has CIDR [10.244.1.0/24] 
	I0914 17:37:56.534364       1 main.go:295] Handling node with IPs: map[192.168.39.202:{}]
	I0914 17:37:56.534485       1 main.go:299] handling current node
	I0914 17:37:56.534516       1 main.go:295] Handling node with IPs: map[192.168.39.97:{}]
	I0914 17:37:56.534540       1 main.go:322] Node multinode-396884-m02 has CIDR [10.244.1.0/24] 
	I0914 17:37:56.534683       1 main.go:295] Handling node with IPs: map[192.168.39.207:{}]
	I0914 17:37:56.534710       1 main.go:322] Node multinode-396884-m03 has CIDR [10.244.3.0/24] 
	I0914 17:38:06.536399       1 main.go:295] Handling node with IPs: map[192.168.39.202:{}]
	I0914 17:38:06.536551       1 main.go:299] handling current node
	I0914 17:38:06.536583       1 main.go:295] Handling node with IPs: map[192.168.39.97:{}]
	I0914 17:38:06.536602       1 main.go:322] Node multinode-396884-m02 has CIDR [10.244.1.0/24] 
	I0914 17:38:06.536741       1 main.go:295] Handling node with IPs: map[192.168.39.207:{}]
	I0914 17:38:06.536777       1 main.go:322] Node multinode-396884-m03 has CIDR [10.244.3.0/24] 
	
	
	==> kindnet [fef6d936d83c959c7a8ea9056fd8ae85068791472148efab33b4c05016da159a] <==
	I0914 17:40:39.247591       1 main.go:322] Node multinode-396884-m03 has CIDR [10.244.3.0/24] 
	I0914 17:40:49.247316       1 main.go:295] Handling node with IPs: map[192.168.39.202:{}]
	I0914 17:40:49.247392       1 main.go:299] handling current node
	I0914 17:40:49.247414       1 main.go:295] Handling node with IPs: map[192.168.39.97:{}]
	I0914 17:40:49.247419       1 main.go:322] Node multinode-396884-m02 has CIDR [10.244.1.0/24] 
	I0914 17:40:49.247570       1 main.go:295] Handling node with IPs: map[192.168.39.207:{}]
	I0914 17:40:49.247591       1 main.go:322] Node multinode-396884-m03 has CIDR [10.244.3.0/24] 
	I0914 17:40:59.246695       1 main.go:295] Handling node with IPs: map[192.168.39.202:{}]
	I0914 17:40:59.246909       1 main.go:299] handling current node
	I0914 17:40:59.246949       1 main.go:295] Handling node with IPs: map[192.168.39.97:{}]
	I0914 17:40:59.247013       1 main.go:322] Node multinode-396884-m02 has CIDR [10.244.1.0/24] 
	I0914 17:40:59.247418       1 main.go:295] Handling node with IPs: map[192.168.39.207:{}]
	I0914 17:40:59.247563       1 main.go:322] Node multinode-396884-m03 has CIDR [10.244.3.0/24] 
	I0914 17:41:09.248472       1 main.go:295] Handling node with IPs: map[192.168.39.202:{}]
	I0914 17:41:09.248595       1 main.go:299] handling current node
	I0914 17:41:09.248627       1 main.go:295] Handling node with IPs: map[192.168.39.97:{}]
	I0914 17:41:09.248645       1 main.go:322] Node multinode-396884-m02 has CIDR [10.244.1.0/24] 
	I0914 17:41:09.248812       1 main.go:295] Handling node with IPs: map[192.168.39.207:{}]
	I0914 17:41:09.248862       1 main.go:322] Node multinode-396884-m03 has CIDR [10.244.2.0/24] 
	I0914 17:41:19.246616       1 main.go:295] Handling node with IPs: map[192.168.39.202:{}]
	I0914 17:41:19.246771       1 main.go:299] handling current node
	I0914 17:41:19.246818       1 main.go:295] Handling node with IPs: map[192.168.39.97:{}]
	I0914 17:41:19.246882       1 main.go:322] Node multinode-396884-m02 has CIDR [10.244.1.0/24] 
	I0914 17:41:19.247284       1 main.go:295] Handling node with IPs: map[192.168.39.207:{}]
	I0914 17:41:19.247381       1 main.go:322] Node multinode-396884-m03 has CIDR [10.244.2.0/24] 
	
	
	==> kube-apiserver [0bd11dfe3a3f4013197bc7de619061491c70c793a47432b085d927da7165d49f] <==
	W0914 17:38:08.732799       1 logging.go:55] [core] [Channel #115 SubChannel #116]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0914 17:38:08.732859       1 logging.go:55] [core] [Channel #103 SubChannel #104]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0914 17:38:08.732923       1 logging.go:55] [core] [Channel #70 SubChannel #71]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0914 17:38:08.733001       1 logging.go:55] [core] [Channel #121 SubChannel #122]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0914 17:38:08.733049       1 logging.go:55] [core] [Channel #58 SubChannel #59]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0914 17:38:08.733092       1 logging.go:55] [core] [Channel #49 SubChannel #50]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0914 17:38:08.733122       1 logging.go:55] [core] [Channel #169 SubChannel #170]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0914 17:38:08.733173       1 logging.go:55] [core] [Channel #67 SubChannel #68]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0914 17:38:08.733279       1 logging.go:55] [core] [Channel #151 SubChannel #152]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0914 17:38:08.733333       1 logging.go:55] [core] [Channel #142 SubChannel #143]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0914 17:38:08.733383       1 logging.go:55] [core] [Channel #73 SubChannel #74]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0914 17:38:08.733339       1 logging.go:55] [core] [Channel #97 SubChannel #98]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0914 17:38:08.733454       1 logging.go:55] [core] [Channel #100 SubChannel #101]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0914 17:38:08.733509       1 logging.go:55] [core] [Channel #109 SubChannel #110]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0914 17:38:08.733577       1 logging.go:55] [core] [Channel #163 SubChannel #164]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0914 17:38:08.733583       1 logging.go:55] [core] [Channel #106 SubChannel #107]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0914 17:38:08.733646       1 logging.go:55] [core] [Channel #40 SubChannel #41]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0914 17:38:08.733181       1 logging.go:55] [core] [Channel #34 SubChannel #35]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0914 17:38:08.733285       1 logging.go:55] [core] [Channel #82 SubChannel #83]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0914 17:38:08.732927       1 logging.go:55] [core] [Channel #17 SubChannel #18]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0914 17:38:08.733768       1 logging.go:55] [core] [Channel #61 SubChannel #62]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0914 17:38:08.733434       1 logging.go:55] [core] [Channel #55 SubChannel #56]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0914 17:38:08.733826       1 logging.go:55] [core] [Channel #124 SubChannel #125]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0914 17:38:08.733627       1 logging.go:55] [core] [Channel #130 SubChannel #131]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0914 17:38:08.733883       1 logging.go:55] [core] [Channel #178 SubChannel #179]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [4e51c0a262ad82470037500c6a30af75482164a1c2b2eb61692fcbcf077a5307] <==
	I0914 17:39:47.209106       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0914 17:39:47.209296       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0914 17:39:47.209333       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0914 17:39:47.209350       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0914 17:39:47.209560       1 shared_informer.go:320] Caches are synced for configmaps
	I0914 17:39:47.210014       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0914 17:39:47.210615       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0914 17:39:47.214521       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0914 17:39:47.214591       1 policy_source.go:224] refreshing policies
	I0914 17:39:47.214813       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0914 17:39:47.215385       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0914 17:39:47.215472       1 aggregator.go:171] initial CRD sync complete...
	I0914 17:39:47.215489       1 autoregister_controller.go:144] Starting autoregister controller
	I0914 17:39:47.215494       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0914 17:39:47.215499       1 cache.go:39] Caches are synced for autoregister controller
	I0914 17:39:47.228887       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	E0914 17:39:47.250429       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0914 17:39:48.120012       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0914 17:39:49.529002       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0914 17:39:49.657732       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0914 17:39:49.669954       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0914 17:39:49.765086       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0914 17:39:49.773373       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0914 17:39:50.596396       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0914 17:39:50.843866       1 controller.go:615] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [b335b9702caa3c44ed4cefa8b2277dfa02ef9b4afbdbda180ddb2ebbfc9d76df] <==
	I0914 17:35:43.943119       1 actual_state_of_world.go:540] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-396884-m03\" does not exist"
	I0914 17:35:43.968071       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-396884-m03" podCIDRs=["10.244.3.0/24"]
	I0914 17:35:43.968276       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-396884-m03"
	E0914 17:35:43.979469       1 range_allocator.go:427] "Failed to update node PodCIDR after multiple attempts" err="failed to patch node CIDR: Node \"multinode-396884-m03\" is invalid: [spec.podCIDRs: Invalid value: []string{\"10.244.4.0/24\", \"10.244.3.0/24\"}: may specify no more than one CIDR for each IP family, spec.podCIDRs: Forbidden: node updates may not change podCIDR except from \"\" to valid]" logger="node-ipam-controller" node="multinode-396884-m03" podCIDRs=["10.244.4.0/24"]
	E0914 17:35:43.979601       1 range_allocator.go:433] "CIDR assignment for node failed. Releasing allocated CIDR" err="failed to patch node CIDR: Node \"multinode-396884-m03\" is invalid: [spec.podCIDRs: Invalid value: []string{\"10.244.4.0/24\", \"10.244.3.0/24\"}: may specify no more than one CIDR for each IP family, spec.podCIDRs: Forbidden: node updates may not change podCIDR except from \"\" to valid]" logger="node-ipam-controller" node="multinode-396884-m03"
	E0914 17:35:43.979673       1 range_allocator.go:246] "Unhandled Error" err="error syncing 'multinode-396884-m03': failed to patch node CIDR: Node \"multinode-396884-m03\" is invalid: [spec.podCIDRs: Invalid value: []string{\"10.244.4.0/24\", \"10.244.3.0/24\"}: may specify no more than one CIDR for each IP family, spec.podCIDRs: Forbidden: node updates may not change podCIDR except from \"\" to valid], requeuing" logger="UnhandledError"
	I0914 17:35:43.979740       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-396884-m03"
	I0914 17:35:43.985502       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-396884-m03"
	I0914 17:35:44.009277       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-396884-m03"
	I0914 17:35:44.367030       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-396884-m03"
	I0914 17:35:48.911723       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-396884-m03"
	I0914 17:35:54.190199       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-396884-m03"
	I0914 17:36:03.377390       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-396884-m03"
	I0914 17:36:03.377831       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-396884-m02"
	I0914 17:36:03.389731       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-396884-m03"
	I0914 17:36:03.863925       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-396884-m03"
	I0914 17:36:43.889026       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-396884-m02"
	I0914 17:36:43.889367       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-396884-m03"
	I0914 17:36:43.912439       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-396884-m02"
	I0914 17:36:43.948150       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="11.102895ms"
	I0914 17:36:43.948500       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="95.785µs"
	I0914 17:36:48.950717       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-396884-m03"
	I0914 17:36:48.980395       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-396884-m03"
	I0914 17:36:48.992072       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-396884-m02"
	I0914 17:36:59.069854       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-396884-m03"
	
	
	==> kube-controller-manager [ff3fe0a199c09e50cadb0723f1ebc76b3a4c8700b517a6e0b02304b5b0f92b15] <==
	I0914 17:40:47.238770       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-396884-m02"
	I0914 17:40:47.249632       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-396884-m02"
	I0914 17:40:47.263579       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="126.784µs"
	I0914 17:40:47.277440       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="41.539µs"
	I0914 17:40:50.603563       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-396884-m02"
	I0914 17:40:51.414196       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="8.783178ms"
	I0914 17:40:51.414319       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="46.433µs"
	I0914 17:40:58.027060       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-396884-m02"
	I0914 17:41:04.969059       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-396884-m03"
	I0914 17:41:04.991574       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-396884-m03"
	I0914 17:41:05.225898       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-396884-m02"
	I0914 17:41:05.226023       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-396884-m03"
	I0914 17:41:06.403353       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-396884-m02"
	I0914 17:41:06.403403       1 actual_state_of_world.go:540] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-396884-m03\" does not exist"
	I0914 17:41:06.427296       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-396884-m03" podCIDRs=["10.244.2.0/24"]
	I0914 17:41:06.427338       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-396884-m03"
	I0914 17:41:06.427360       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-396884-m03"
	I0914 17:41:06.758802       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-396884-m03"
	I0914 17:41:07.101514       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-396884-m03"
	I0914 17:41:10.723502       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-396884-m03"
	I0914 17:41:16.476641       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-396884-m03"
	I0914 17:41:25.542369       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-396884-m02"
	I0914 17:41:25.542538       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-396884-m03"
	I0914 17:41:25.556105       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-396884-m03"
	I0914 17:41:25.624739       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-396884-m03"
	
	
	==> kube-proxy [7b44a546c6b2a4cb208db8928953e879c3fdf0af3e39bc8e4db8fde5b5d45382] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0914 17:33:15.609534       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0914 17:33:15.619500       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.202"]
	E0914 17:33:15.619672       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0914 17:33:15.660609       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0914 17:33:15.660650       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0914 17:33:15.660679       1 server_linux.go:169] "Using iptables Proxier"
	I0914 17:33:15.664325       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0914 17:33:15.664673       1 server.go:483] "Version info" version="v1.31.1"
	I0914 17:33:15.664729       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0914 17:33:15.666142       1 config.go:199] "Starting service config controller"
	I0914 17:33:15.666205       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0914 17:33:15.666310       1 config.go:105] "Starting endpoint slice config controller"
	I0914 17:33:15.666328       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0914 17:33:15.667010       1 config.go:328] "Starting node config controller"
	I0914 17:33:15.667079       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0914 17:33:15.767270       1 shared_informer.go:320] Caches are synced for node config
	I0914 17:33:15.767301       1 shared_informer.go:320] Caches are synced for service config
	I0914 17:33:15.767329       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-proxy [bdfae496458ecf210d5b42e38ae9b96165a86c5bc23d9c6847c96ae81abe7f30] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0914 17:39:48.725629       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0914 17:39:48.736753       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.202"]
	E0914 17:39:48.737039       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0914 17:39:48.772674       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0914 17:39:48.772716       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0914 17:39:48.772775       1 server_linux.go:169] "Using iptables Proxier"
	I0914 17:39:48.775318       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0914 17:39:48.775775       1 server.go:483] "Version info" version="v1.31.1"
	I0914 17:39:48.775830       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0914 17:39:48.777511       1 config.go:199] "Starting service config controller"
	I0914 17:39:48.777602       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0914 17:39:48.777655       1 config.go:105] "Starting endpoint slice config controller"
	I0914 17:39:48.777675       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0914 17:39:48.780063       1 config.go:328] "Starting node config controller"
	I0914 17:39:48.780120       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0914 17:39:48.877805       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0914 17:39:48.877861       1 shared_informer.go:320] Caches are synced for service config
	I0914 17:39:48.880395       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [5265cfc6ac2fca92414c21f818a2645bb645bbf7a17299e25e6b0276eab7b351] <==
	I0914 17:39:45.534456       1 serving.go:386] Generated self-signed cert in-memory
	W0914 17:39:47.164331       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0914 17:39:47.164409       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0914 17:39:47.164419       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0914 17:39:47.164431       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0914 17:39:47.233922       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.1"
	I0914 17:39:47.233969       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0914 17:39:47.238974       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0914 17:39:47.239480       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0914 17:39:47.239596       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0914 17:39:47.239687       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0914 17:39:47.340116       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [5390064e87e606423a1b9d0f32f83c0a7715e0d58d537080aca871e78d0c814b] <==
	E0914 17:33:07.091612       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0914 17:33:07.923367       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0914 17:33:07.923401       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0914 17:33:07.926897       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0914 17:33:07.926987       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0914 17:33:07.933237       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0914 17:33:07.933362       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0914 17:33:07.959530       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0914 17:33:07.959673       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0914 17:33:08.025378       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0914 17:33:08.025499       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0914 17:33:08.072543       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0914 17:33:08.072736       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0914 17:33:08.076436       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0914 17:33:08.076557       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0914 17:33:08.117714       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0914 17:33:08.117876       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0914 17:33:08.281413       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0914 17:33:08.281504       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0914 17:33:08.357339       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0914 17:33:08.357437       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0914 17:33:08.395337       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0914 17:33:08.395478       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0914 17:33:11.387316       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0914 17:38:08.701651       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Sep 14 17:39:54 multinode-396884 kubelet[3183]: E0914 17:39:54.045271    3183 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726335594044545728,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 17:39:56 multinode-396884 kubelet[3183]: I0914 17:39:56.376752    3183 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Sep 14 17:40:04 multinode-396884 kubelet[3183]: E0914 17:40:04.047357    3183 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726335604046844437,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 17:40:04 multinode-396884 kubelet[3183]: E0914 17:40:04.047437    3183 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726335604046844437,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 17:40:14 multinode-396884 kubelet[3183]: E0914 17:40:14.048965    3183 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726335614048541963,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 17:40:14 multinode-396884 kubelet[3183]: E0914 17:40:14.049006    3183 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726335614048541963,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 17:40:24 multinode-396884 kubelet[3183]: E0914 17:40:24.051637    3183 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726335624051170148,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 17:40:24 multinode-396884 kubelet[3183]: E0914 17:40:24.052189    3183 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726335624051170148,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 17:40:34 multinode-396884 kubelet[3183]: E0914 17:40:34.054670    3183 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726335634054263817,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 17:40:34 multinode-396884 kubelet[3183]: E0914 17:40:34.055321    3183 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726335634054263817,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 17:40:44 multinode-396884 kubelet[3183]: E0914 17:40:44.028534    3183 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 14 17:40:44 multinode-396884 kubelet[3183]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 14 17:40:44 multinode-396884 kubelet[3183]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 14 17:40:44 multinode-396884 kubelet[3183]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 14 17:40:44 multinode-396884 kubelet[3183]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 14 17:40:44 multinode-396884 kubelet[3183]: E0914 17:40:44.057647    3183 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726335644057096602,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 17:40:44 multinode-396884 kubelet[3183]: E0914 17:40:44.057691    3183 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726335644057096602,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 17:40:54 multinode-396884 kubelet[3183]: E0914 17:40:54.059762    3183 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726335654059307946,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 17:40:54 multinode-396884 kubelet[3183]: E0914 17:40:54.060096    3183 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726335654059307946,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 17:41:04 multinode-396884 kubelet[3183]: E0914 17:41:04.064638    3183 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726335664062748307,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 17:41:04 multinode-396884 kubelet[3183]: E0914 17:41:04.064697    3183 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726335664062748307,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 17:41:14 multinode-396884 kubelet[3183]: E0914 17:41:14.068710    3183 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726335674068261954,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 17:41:14 multinode-396884 kubelet[3183]: E0914 17:41:14.068737    3183 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726335674068261954,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 17:41:24 multinode-396884 kubelet[3183]: E0914 17:41:24.071519    3183 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726335684070870254,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 17:41:24 multinode-396884 kubelet[3183]: E0914 17:41:24.071589    3183 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726335684070870254,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0914 17:41:28.008302   46861 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19643-8806/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-396884 -n multinode-396884
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-396884 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (323.88s)

                                                
                                    
x
+
TestMultiNode/serial/StopMultiNode (141.29s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-396884 stop
E0914 17:41:45.625312   16016 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/addons-996992/client.crt: no such file or directory" logger="UnhandledError"
E0914 17:42:08.013323   16016 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/functional-764671/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:345: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-396884 stop: exit status 82 (2m0.458840175s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-396884-m02"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:347: failed to stop cluster. args "out/minikube-linux-amd64 -p multinode-396884 stop": exit status 82
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-396884 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-396884 status: exit status 3 (18.803589979s)

                                                
                                                
-- stdout --
	multinode-396884
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-396884-m02
	type: Worker
	host: Error
	kubelet: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0914 17:43:51.330512   47524 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.97:22: connect: no route to host
	E0914 17:43:51.330586   47524 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.97:22: connect: no route to host

                                                
                                                
** /stderr **
multinode_test.go:354: failed to run minikube status. args "out/minikube-linux-amd64 -p multinode-396884 status" : exit status 3
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-396884 -n multinode-396884
helpers_test.go:244: <<< TestMultiNode/serial/StopMultiNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/StopMultiNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-396884 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-396884 logs -n 25: (1.383243679s)
helpers_test.go:252: TestMultiNode/serial/StopMultiNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-396884 ssh -n                                                                 | multinode-396884 | jenkins | v1.34.0 | 14 Sep 24 17:35 UTC | 14 Sep 24 17:35 UTC |
	|         | multinode-396884-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-396884 cp multinode-396884-m02:/home/docker/cp-test.txt                       | multinode-396884 | jenkins | v1.34.0 | 14 Sep 24 17:35 UTC | 14 Sep 24 17:35 UTC |
	|         | multinode-396884:/home/docker/cp-test_multinode-396884-m02_multinode-396884.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-396884 ssh -n                                                                 | multinode-396884 | jenkins | v1.34.0 | 14 Sep 24 17:35 UTC | 14 Sep 24 17:35 UTC |
	|         | multinode-396884-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-396884 ssh -n multinode-396884 sudo cat                                       | multinode-396884 | jenkins | v1.34.0 | 14 Sep 24 17:35 UTC | 14 Sep 24 17:35 UTC |
	|         | /home/docker/cp-test_multinode-396884-m02_multinode-396884.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-396884 cp multinode-396884-m02:/home/docker/cp-test.txt                       | multinode-396884 | jenkins | v1.34.0 | 14 Sep 24 17:35 UTC | 14 Sep 24 17:35 UTC |
	|         | multinode-396884-m03:/home/docker/cp-test_multinode-396884-m02_multinode-396884-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-396884 ssh -n                                                                 | multinode-396884 | jenkins | v1.34.0 | 14 Sep 24 17:35 UTC | 14 Sep 24 17:35 UTC |
	|         | multinode-396884-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-396884 ssh -n multinode-396884-m03 sudo cat                                   | multinode-396884 | jenkins | v1.34.0 | 14 Sep 24 17:35 UTC | 14 Sep 24 17:35 UTC |
	|         | /home/docker/cp-test_multinode-396884-m02_multinode-396884-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-396884 cp testdata/cp-test.txt                                                | multinode-396884 | jenkins | v1.34.0 | 14 Sep 24 17:35 UTC | 14 Sep 24 17:35 UTC |
	|         | multinode-396884-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-396884 ssh -n                                                                 | multinode-396884 | jenkins | v1.34.0 | 14 Sep 24 17:35 UTC | 14 Sep 24 17:35 UTC |
	|         | multinode-396884-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-396884 cp multinode-396884-m03:/home/docker/cp-test.txt                       | multinode-396884 | jenkins | v1.34.0 | 14 Sep 24 17:35 UTC | 14 Sep 24 17:35 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile3813016810/001/cp-test_multinode-396884-m03.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-396884 ssh -n                                                                 | multinode-396884 | jenkins | v1.34.0 | 14 Sep 24 17:35 UTC | 14 Sep 24 17:35 UTC |
	|         | multinode-396884-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-396884 cp multinode-396884-m03:/home/docker/cp-test.txt                       | multinode-396884 | jenkins | v1.34.0 | 14 Sep 24 17:35 UTC | 14 Sep 24 17:35 UTC |
	|         | multinode-396884:/home/docker/cp-test_multinode-396884-m03_multinode-396884.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-396884 ssh -n                                                                 | multinode-396884 | jenkins | v1.34.0 | 14 Sep 24 17:35 UTC | 14 Sep 24 17:35 UTC |
	|         | multinode-396884-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-396884 ssh -n multinode-396884 sudo cat                                       | multinode-396884 | jenkins | v1.34.0 | 14 Sep 24 17:35 UTC | 14 Sep 24 17:35 UTC |
	|         | /home/docker/cp-test_multinode-396884-m03_multinode-396884.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-396884 cp multinode-396884-m03:/home/docker/cp-test.txt                       | multinode-396884 | jenkins | v1.34.0 | 14 Sep 24 17:35 UTC | 14 Sep 24 17:35 UTC |
	|         | multinode-396884-m02:/home/docker/cp-test_multinode-396884-m03_multinode-396884-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-396884 ssh -n                                                                 | multinode-396884 | jenkins | v1.34.0 | 14 Sep 24 17:35 UTC | 14 Sep 24 17:35 UTC |
	|         | multinode-396884-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-396884 ssh -n multinode-396884-m02 sudo cat                                   | multinode-396884 | jenkins | v1.34.0 | 14 Sep 24 17:35 UTC | 14 Sep 24 17:35 UTC |
	|         | /home/docker/cp-test_multinode-396884-m03_multinode-396884-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-396884 node stop m03                                                          | multinode-396884 | jenkins | v1.34.0 | 14 Sep 24 17:35 UTC | 14 Sep 24 17:35 UTC |
	| node    | multinode-396884 node start                                                             | multinode-396884 | jenkins | v1.34.0 | 14 Sep 24 17:35 UTC | 14 Sep 24 17:36 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                  |         |         |                     |                     |
	| node    | list -p multinode-396884                                                                | multinode-396884 | jenkins | v1.34.0 | 14 Sep 24 17:36 UTC |                     |
	| stop    | -p multinode-396884                                                                     | multinode-396884 | jenkins | v1.34.0 | 14 Sep 24 17:36 UTC |                     |
	| start   | -p multinode-396884                                                                     | multinode-396884 | jenkins | v1.34.0 | 14 Sep 24 17:38 UTC | 14 Sep 24 17:41 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-396884                                                                | multinode-396884 | jenkins | v1.34.0 | 14 Sep 24 17:41 UTC |                     |
	| node    | multinode-396884 node delete                                                            | multinode-396884 | jenkins | v1.34.0 | 14 Sep 24 17:41 UTC | 14 Sep 24 17:41 UTC |
	|         | m03                                                                                     |                  |         |         |                     |                     |
	| stop    | multinode-396884 stop                                                                   | multinode-396884 | jenkins | v1.34.0 | 14 Sep 24 17:41 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/14 17:38:07
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0914 17:38:07.838462   45790 out.go:345] Setting OutFile to fd 1 ...
	I0914 17:38:07.838603   45790 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 17:38:07.838612   45790 out.go:358] Setting ErrFile to fd 2...
	I0914 17:38:07.838618   45790 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 17:38:07.838812   45790 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19643-8806/.minikube/bin
	I0914 17:38:07.839410   45790 out.go:352] Setting JSON to false
	I0914 17:38:07.840312   45790 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":4832,"bootTime":1726330656,"procs":185,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0914 17:38:07.840412   45790 start.go:139] virtualization: kvm guest
	I0914 17:38:07.842624   45790 out.go:177] * [multinode-396884] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0914 17:38:07.843996   45790 notify.go:220] Checking for updates...
	I0914 17:38:07.844005   45790 out.go:177]   - MINIKUBE_LOCATION=19643
	I0914 17:38:07.845513   45790 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0914 17:38:07.847624   45790 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19643-8806/kubeconfig
	I0914 17:38:07.849112   45790 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19643-8806/.minikube
	I0914 17:38:07.850621   45790 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0914 17:38:07.852360   45790 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0914 17:38:07.854027   45790 config.go:182] Loaded profile config "multinode-396884": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0914 17:38:07.854176   45790 driver.go:394] Setting default libvirt URI to qemu:///system
	I0914 17:38:07.854687   45790 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 17:38:07.854737   45790 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 17:38:07.870940   45790 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43337
	I0914 17:38:07.871492   45790 main.go:141] libmachine: () Calling .GetVersion
	I0914 17:38:07.872381   45790 main.go:141] libmachine: Using API Version  1
	I0914 17:38:07.872401   45790 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 17:38:07.872881   45790 main.go:141] libmachine: () Calling .GetMachineName
	I0914 17:38:07.873136   45790 main.go:141] libmachine: (multinode-396884) Calling .DriverName
	I0914 17:38:07.909711   45790 out.go:177] * Using the kvm2 driver based on existing profile
	I0914 17:38:07.911131   45790 start.go:297] selected driver: kvm2
	I0914 17:38:07.911148   45790 start.go:901] validating driver "kvm2" against &{Name:multinode-396884 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19643/minikube-v1.34.0-1726281733-19643-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726281268-19643@sha256:cce8be0c1ac4e3d852132008ef1cc1dcf5b79f708d025db83f146ae65db32e8e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-396884 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.202 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.97 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.207 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0914 17:38:07.911382   45790 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0914 17:38:07.911896   45790 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 17:38:07.912009   45790 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19643-8806/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0914 17:38:07.927317   45790 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0914 17:38:07.928050   45790 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0914 17:38:07.928097   45790 cni.go:84] Creating CNI manager for ""
	I0914 17:38:07.928155   45790 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0914 17:38:07.928233   45790 start.go:340] cluster config:
	{Name:multinode-396884 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19643/minikube-v1.34.0-1726281733-19643-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726281268-19643@sha256:cce8be0c1ac4e3d852132008ef1cc1dcf5b79f708d025db83f146ae65db32e8e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-396884 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.202 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.97 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.207 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0914 17:38:07.928391   45790 iso.go:125] acquiring lock: {Name:mk538f7a4abc7956f82511183088cbfac1e66ebb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 17:38:07.930510   45790 out.go:177] * Starting "multinode-396884" primary control-plane node in "multinode-396884" cluster
	I0914 17:38:07.931777   45790 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0914 17:38:07.931826   45790 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19643-8806/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0914 17:38:07.931839   45790 cache.go:56] Caching tarball of preloaded images
	I0914 17:38:07.931927   45790 preload.go:172] Found /home/jenkins/minikube-integration/19643-8806/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0914 17:38:07.931940   45790 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0914 17:38:07.932070   45790 profile.go:143] Saving config to /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/multinode-396884/config.json ...
	I0914 17:38:07.932273   45790 start.go:360] acquireMachinesLock for multinode-396884: {Name:mk26748ac63472c3f8b6f3848c12e76160c2970c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0914 17:38:07.932335   45790 start.go:364] duration metric: took 34.328µs to acquireMachinesLock for "multinode-396884"
	I0914 17:38:07.932353   45790 start.go:96] Skipping create...Using existing machine configuration
	I0914 17:38:07.932362   45790 fix.go:54] fixHost starting: 
	I0914 17:38:07.932619   45790 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 17:38:07.932650   45790 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 17:38:07.947920   45790 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41987
	I0914 17:38:07.948374   45790 main.go:141] libmachine: () Calling .GetVersion
	I0914 17:38:07.948874   45790 main.go:141] libmachine: Using API Version  1
	I0914 17:38:07.948888   45790 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 17:38:07.949189   45790 main.go:141] libmachine: () Calling .GetMachineName
	I0914 17:38:07.949387   45790 main.go:141] libmachine: (multinode-396884) Calling .DriverName
	I0914 17:38:07.949619   45790 main.go:141] libmachine: (multinode-396884) Calling .GetState
	I0914 17:38:07.951620   45790 fix.go:112] recreateIfNeeded on multinode-396884: state=Running err=<nil>
	W0914 17:38:07.951638   45790 fix.go:138] unexpected machine state, will restart: <nil>
	I0914 17:38:07.953791   45790 out.go:177] * Updating the running kvm2 "multinode-396884" VM ...
	I0914 17:38:07.955308   45790 machine.go:93] provisionDockerMachine start ...
	I0914 17:38:07.955338   45790 main.go:141] libmachine: (multinode-396884) Calling .DriverName
	I0914 17:38:07.955640   45790 main.go:141] libmachine: (multinode-396884) Calling .GetSSHHostname
	I0914 17:38:07.958963   45790 main.go:141] libmachine: (multinode-396884) DBG | domain multinode-396884 has defined MAC address 52:54:00:6d:2b:08 in network mk-multinode-396884
	I0914 17:38:07.959588   45790 main.go:141] libmachine: (multinode-396884) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:2b:08", ip: ""} in network mk-multinode-396884: {Iface:virbr1 ExpiryTime:2024-09-14 18:32:44 +0000 UTC Type:0 Mac:52:54:00:6d:2b:08 Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:multinode-396884 Clientid:01:52:54:00:6d:2b:08}
	I0914 17:38:07.959615   45790 main.go:141] libmachine: (multinode-396884) DBG | domain multinode-396884 has defined IP address 192.168.39.202 and MAC address 52:54:00:6d:2b:08 in network mk-multinode-396884
	I0914 17:38:07.959818   45790 main.go:141] libmachine: (multinode-396884) Calling .GetSSHPort
	I0914 17:38:07.959991   45790 main.go:141] libmachine: (multinode-396884) Calling .GetSSHKeyPath
	I0914 17:38:07.960157   45790 main.go:141] libmachine: (multinode-396884) Calling .GetSSHKeyPath
	I0914 17:38:07.960292   45790 main.go:141] libmachine: (multinode-396884) Calling .GetSSHUsername
	I0914 17:38:07.960441   45790 main.go:141] libmachine: Using SSH client type: native
	I0914 17:38:07.960645   45790 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.202 22 <nil> <nil>}
	I0914 17:38:07.960655   45790 main.go:141] libmachine: About to run SSH command:
	hostname
	I0914 17:38:08.075396   45790 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-396884
	
	I0914 17:38:08.075496   45790 main.go:141] libmachine: (multinode-396884) Calling .GetMachineName
	I0914 17:38:08.075749   45790 buildroot.go:166] provisioning hostname "multinode-396884"
	I0914 17:38:08.075772   45790 main.go:141] libmachine: (multinode-396884) Calling .GetMachineName
	I0914 17:38:08.075986   45790 main.go:141] libmachine: (multinode-396884) Calling .GetSSHHostname
	I0914 17:38:08.078608   45790 main.go:141] libmachine: (multinode-396884) DBG | domain multinode-396884 has defined MAC address 52:54:00:6d:2b:08 in network mk-multinode-396884
	I0914 17:38:08.079064   45790 main.go:141] libmachine: (multinode-396884) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:2b:08", ip: ""} in network mk-multinode-396884: {Iface:virbr1 ExpiryTime:2024-09-14 18:32:44 +0000 UTC Type:0 Mac:52:54:00:6d:2b:08 Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:multinode-396884 Clientid:01:52:54:00:6d:2b:08}
	I0914 17:38:08.079082   45790 main.go:141] libmachine: (multinode-396884) DBG | domain multinode-396884 has defined IP address 192.168.39.202 and MAC address 52:54:00:6d:2b:08 in network mk-multinode-396884
	I0914 17:38:08.079272   45790 main.go:141] libmachine: (multinode-396884) Calling .GetSSHPort
	I0914 17:38:08.079431   45790 main.go:141] libmachine: (multinode-396884) Calling .GetSSHKeyPath
	I0914 17:38:08.079560   45790 main.go:141] libmachine: (multinode-396884) Calling .GetSSHKeyPath
	I0914 17:38:08.079692   45790 main.go:141] libmachine: (multinode-396884) Calling .GetSSHUsername
	I0914 17:38:08.079915   45790 main.go:141] libmachine: Using SSH client type: native
	I0914 17:38:08.080093   45790 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.202 22 <nil> <nil>}
	I0914 17:38:08.080106   45790 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-396884 && echo "multinode-396884" | sudo tee /etc/hostname
	I0914 17:38:08.212819   45790 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-396884
	
	I0914 17:38:08.212843   45790 main.go:141] libmachine: (multinode-396884) Calling .GetSSHHostname
	I0914 17:38:08.215872   45790 main.go:141] libmachine: (multinode-396884) DBG | domain multinode-396884 has defined MAC address 52:54:00:6d:2b:08 in network mk-multinode-396884
	I0914 17:38:08.216273   45790 main.go:141] libmachine: (multinode-396884) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:2b:08", ip: ""} in network mk-multinode-396884: {Iface:virbr1 ExpiryTime:2024-09-14 18:32:44 +0000 UTC Type:0 Mac:52:54:00:6d:2b:08 Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:multinode-396884 Clientid:01:52:54:00:6d:2b:08}
	I0914 17:38:08.216302   45790 main.go:141] libmachine: (multinode-396884) DBG | domain multinode-396884 has defined IP address 192.168.39.202 and MAC address 52:54:00:6d:2b:08 in network mk-multinode-396884
	I0914 17:38:08.216521   45790 main.go:141] libmachine: (multinode-396884) Calling .GetSSHPort
	I0914 17:38:08.216728   45790 main.go:141] libmachine: (multinode-396884) Calling .GetSSHKeyPath
	I0914 17:38:08.216916   45790 main.go:141] libmachine: (multinode-396884) Calling .GetSSHKeyPath
	I0914 17:38:08.217067   45790 main.go:141] libmachine: (multinode-396884) Calling .GetSSHUsername
	I0914 17:38:08.217284   45790 main.go:141] libmachine: Using SSH client type: native
	I0914 17:38:08.217454   45790 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.202 22 <nil> <nil>}
	I0914 17:38:08.217470   45790 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-396884' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-396884/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-396884' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0914 17:38:08.330993   45790 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0914 17:38:08.331027   45790 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19643-8806/.minikube CaCertPath:/home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19643-8806/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19643-8806/.minikube}
	I0914 17:38:08.331045   45790 buildroot.go:174] setting up certificates
	I0914 17:38:08.331053   45790 provision.go:84] configureAuth start
	I0914 17:38:08.331077   45790 main.go:141] libmachine: (multinode-396884) Calling .GetMachineName
	I0914 17:38:08.331366   45790 main.go:141] libmachine: (multinode-396884) Calling .GetIP
	I0914 17:38:08.334046   45790 main.go:141] libmachine: (multinode-396884) DBG | domain multinode-396884 has defined MAC address 52:54:00:6d:2b:08 in network mk-multinode-396884
	I0914 17:38:08.334543   45790 main.go:141] libmachine: (multinode-396884) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:2b:08", ip: ""} in network mk-multinode-396884: {Iface:virbr1 ExpiryTime:2024-09-14 18:32:44 +0000 UTC Type:0 Mac:52:54:00:6d:2b:08 Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:multinode-396884 Clientid:01:52:54:00:6d:2b:08}
	I0914 17:38:08.334573   45790 main.go:141] libmachine: (multinode-396884) DBG | domain multinode-396884 has defined IP address 192.168.39.202 and MAC address 52:54:00:6d:2b:08 in network mk-multinode-396884
	I0914 17:38:08.334744   45790 main.go:141] libmachine: (multinode-396884) Calling .GetSSHHostname
	I0914 17:38:08.337137   45790 main.go:141] libmachine: (multinode-396884) DBG | domain multinode-396884 has defined MAC address 52:54:00:6d:2b:08 in network mk-multinode-396884
	I0914 17:38:08.337493   45790 main.go:141] libmachine: (multinode-396884) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:2b:08", ip: ""} in network mk-multinode-396884: {Iface:virbr1 ExpiryTime:2024-09-14 18:32:44 +0000 UTC Type:0 Mac:52:54:00:6d:2b:08 Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:multinode-396884 Clientid:01:52:54:00:6d:2b:08}
	I0914 17:38:08.337525   45790 main.go:141] libmachine: (multinode-396884) DBG | domain multinode-396884 has defined IP address 192.168.39.202 and MAC address 52:54:00:6d:2b:08 in network mk-multinode-396884
	I0914 17:38:08.337680   45790 provision.go:143] copyHostCerts
	I0914 17:38:08.337703   45790 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19643-8806/.minikube/cert.pem
	I0914 17:38:08.337739   45790 exec_runner.go:144] found /home/jenkins/minikube-integration/19643-8806/.minikube/cert.pem, removing ...
	I0914 17:38:08.337749   45790 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19643-8806/.minikube/cert.pem
	I0914 17:38:08.337814   45790 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19643-8806/.minikube/cert.pem (1123 bytes)
	I0914 17:38:08.337899   45790 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19643-8806/.minikube/key.pem
	I0914 17:38:08.337917   45790 exec_runner.go:144] found /home/jenkins/minikube-integration/19643-8806/.minikube/key.pem, removing ...
	I0914 17:38:08.337921   45790 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19643-8806/.minikube/key.pem
	I0914 17:38:08.337952   45790 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19643-8806/.minikube/key.pem (1675 bytes)
	I0914 17:38:08.338005   45790 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19643-8806/.minikube/ca.pem
	I0914 17:38:08.338020   45790 exec_runner.go:144] found /home/jenkins/minikube-integration/19643-8806/.minikube/ca.pem, removing ...
	I0914 17:38:08.338025   45790 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19643-8806/.minikube/ca.pem
	I0914 17:38:08.338045   45790 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19643-8806/.minikube/ca.pem (1082 bytes)
	I0914 17:38:08.338103   45790 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19643-8806/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca-key.pem org=jenkins.multinode-396884 san=[127.0.0.1 192.168.39.202 localhost minikube multinode-396884]
	I0914 17:38:08.406730   45790 provision.go:177] copyRemoteCerts
	I0914 17:38:08.406788   45790 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0914 17:38:08.406809   45790 main.go:141] libmachine: (multinode-396884) Calling .GetSSHHostname
	I0914 17:38:08.409289   45790 main.go:141] libmachine: (multinode-396884) DBG | domain multinode-396884 has defined MAC address 52:54:00:6d:2b:08 in network mk-multinode-396884
	I0914 17:38:08.409633   45790 main.go:141] libmachine: (multinode-396884) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:2b:08", ip: ""} in network mk-multinode-396884: {Iface:virbr1 ExpiryTime:2024-09-14 18:32:44 +0000 UTC Type:0 Mac:52:54:00:6d:2b:08 Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:multinode-396884 Clientid:01:52:54:00:6d:2b:08}
	I0914 17:38:08.409666   45790 main.go:141] libmachine: (multinode-396884) DBG | domain multinode-396884 has defined IP address 192.168.39.202 and MAC address 52:54:00:6d:2b:08 in network mk-multinode-396884
	I0914 17:38:08.409817   45790 main.go:141] libmachine: (multinode-396884) Calling .GetSSHPort
	I0914 17:38:08.409974   45790 main.go:141] libmachine: (multinode-396884) Calling .GetSSHKeyPath
	I0914 17:38:08.410120   45790 main.go:141] libmachine: (multinode-396884) Calling .GetSSHUsername
	I0914 17:38:08.410248   45790 sshutil.go:53] new ssh client: &{IP:192.168.39.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/multinode-396884/id_rsa Username:docker}
	I0914 17:38:08.496346   45790 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0914 17:38:08.496409   45790 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0914 17:38:08.524163   45790 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19643-8806/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0914 17:38:08.524241   45790 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0914 17:38:08.547276   45790 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19643-8806/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0914 17:38:08.547362   45790 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0914 17:38:08.570239   45790 provision.go:87] duration metric: took 239.172443ms to configureAuth
	I0914 17:38:08.570272   45790 buildroot.go:189] setting minikube options for container-runtime
	I0914 17:38:08.570514   45790 config.go:182] Loaded profile config "multinode-396884": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0914 17:38:08.570601   45790 main.go:141] libmachine: (multinode-396884) Calling .GetSSHHostname
	I0914 17:38:08.572979   45790 main.go:141] libmachine: (multinode-396884) DBG | domain multinode-396884 has defined MAC address 52:54:00:6d:2b:08 in network mk-multinode-396884
	I0914 17:38:08.573288   45790 main.go:141] libmachine: (multinode-396884) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:2b:08", ip: ""} in network mk-multinode-396884: {Iface:virbr1 ExpiryTime:2024-09-14 18:32:44 +0000 UTC Type:0 Mac:52:54:00:6d:2b:08 Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:multinode-396884 Clientid:01:52:54:00:6d:2b:08}
	I0914 17:38:08.573310   45790 main.go:141] libmachine: (multinode-396884) DBG | domain multinode-396884 has defined IP address 192.168.39.202 and MAC address 52:54:00:6d:2b:08 in network mk-multinode-396884
	I0914 17:38:08.573555   45790 main.go:141] libmachine: (multinode-396884) Calling .GetSSHPort
	I0914 17:38:08.573728   45790 main.go:141] libmachine: (multinode-396884) Calling .GetSSHKeyPath
	I0914 17:38:08.573888   45790 main.go:141] libmachine: (multinode-396884) Calling .GetSSHKeyPath
	I0914 17:38:08.574014   45790 main.go:141] libmachine: (multinode-396884) Calling .GetSSHUsername
	I0914 17:38:08.574207   45790 main.go:141] libmachine: Using SSH client type: native
	I0914 17:38:08.574372   45790 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.202 22 <nil> <nil>}
	I0914 17:38:08.574386   45790 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0914 17:39:39.376907   45790 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0914 17:39:39.376935   45790 machine.go:96] duration metric: took 1m31.421606261s to provisionDockerMachine
	I0914 17:39:39.376951   45790 start.go:293] postStartSetup for "multinode-396884" (driver="kvm2")
	I0914 17:39:39.376964   45790 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0914 17:39:39.376978   45790 main.go:141] libmachine: (multinode-396884) Calling .DriverName
	I0914 17:39:39.377238   45790 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0914 17:39:39.377263   45790 main.go:141] libmachine: (multinode-396884) Calling .GetSSHHostname
	I0914 17:39:39.380978   45790 main.go:141] libmachine: (multinode-396884) DBG | domain multinode-396884 has defined MAC address 52:54:00:6d:2b:08 in network mk-multinode-396884
	I0914 17:39:39.381373   45790 main.go:141] libmachine: (multinode-396884) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:2b:08", ip: ""} in network mk-multinode-396884: {Iface:virbr1 ExpiryTime:2024-09-14 18:32:44 +0000 UTC Type:0 Mac:52:54:00:6d:2b:08 Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:multinode-396884 Clientid:01:52:54:00:6d:2b:08}
	I0914 17:39:39.381391   45790 main.go:141] libmachine: (multinode-396884) DBG | domain multinode-396884 has defined IP address 192.168.39.202 and MAC address 52:54:00:6d:2b:08 in network mk-multinode-396884
	I0914 17:39:39.381711   45790 main.go:141] libmachine: (multinode-396884) Calling .GetSSHPort
	I0914 17:39:39.381920   45790 main.go:141] libmachine: (multinode-396884) Calling .GetSSHKeyPath
	I0914 17:39:39.382090   45790 main.go:141] libmachine: (multinode-396884) Calling .GetSSHUsername
	I0914 17:39:39.382242   45790 sshutil.go:53] new ssh client: &{IP:192.168.39.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/multinode-396884/id_rsa Username:docker}
	I0914 17:39:39.469587   45790 ssh_runner.go:195] Run: cat /etc/os-release
	I0914 17:39:39.473918   45790 command_runner.go:130] > NAME=Buildroot
	I0914 17:39:39.473945   45790 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0914 17:39:39.473952   45790 command_runner.go:130] > ID=buildroot
	I0914 17:39:39.473960   45790 command_runner.go:130] > VERSION_ID=2023.02.9
	I0914 17:39:39.473968   45790 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0914 17:39:39.474000   45790 info.go:137] Remote host: Buildroot 2023.02.9
	I0914 17:39:39.474020   45790 filesync.go:126] Scanning /home/jenkins/minikube-integration/19643-8806/.minikube/addons for local assets ...
	I0914 17:39:39.474104   45790 filesync.go:126] Scanning /home/jenkins/minikube-integration/19643-8806/.minikube/files for local assets ...
	I0914 17:39:39.474223   45790 filesync.go:149] local asset: /home/jenkins/minikube-integration/19643-8806/.minikube/files/etc/ssl/certs/160162.pem -> 160162.pem in /etc/ssl/certs
	I0914 17:39:39.474234   45790 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19643-8806/.minikube/files/etc/ssl/certs/160162.pem -> /etc/ssl/certs/160162.pem
	I0914 17:39:39.474325   45790 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0914 17:39:39.483641   45790 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/files/etc/ssl/certs/160162.pem --> /etc/ssl/certs/160162.pem (1708 bytes)
	I0914 17:39:39.506857   45790 start.go:296] duration metric: took 129.893198ms for postStartSetup
	I0914 17:39:39.506901   45790 fix.go:56] duration metric: took 1m31.574538507s for fixHost
	I0914 17:39:39.506922   45790 main.go:141] libmachine: (multinode-396884) Calling .GetSSHHostname
	I0914 17:39:39.509726   45790 main.go:141] libmachine: (multinode-396884) DBG | domain multinode-396884 has defined MAC address 52:54:00:6d:2b:08 in network mk-multinode-396884
	I0914 17:39:39.510104   45790 main.go:141] libmachine: (multinode-396884) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:2b:08", ip: ""} in network mk-multinode-396884: {Iface:virbr1 ExpiryTime:2024-09-14 18:32:44 +0000 UTC Type:0 Mac:52:54:00:6d:2b:08 Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:multinode-396884 Clientid:01:52:54:00:6d:2b:08}
	I0914 17:39:39.510136   45790 main.go:141] libmachine: (multinode-396884) DBG | domain multinode-396884 has defined IP address 192.168.39.202 and MAC address 52:54:00:6d:2b:08 in network mk-multinode-396884
	I0914 17:39:39.510331   45790 main.go:141] libmachine: (multinode-396884) Calling .GetSSHPort
	I0914 17:39:39.510516   45790 main.go:141] libmachine: (multinode-396884) Calling .GetSSHKeyPath
	I0914 17:39:39.510647   45790 main.go:141] libmachine: (multinode-396884) Calling .GetSSHKeyPath
	I0914 17:39:39.510745   45790 main.go:141] libmachine: (multinode-396884) Calling .GetSSHUsername
	I0914 17:39:39.510873   45790 main.go:141] libmachine: Using SSH client type: native
	I0914 17:39:39.511027   45790 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.202 22 <nil> <nil>}
	I0914 17:39:39.511037   45790 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0914 17:39:39.622796   45790 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726335579.589914886
	
	I0914 17:39:39.622820   45790 fix.go:216] guest clock: 1726335579.589914886
	I0914 17:39:39.622835   45790 fix.go:229] Guest: 2024-09-14 17:39:39.589914886 +0000 UTC Remote: 2024-09-14 17:39:39.506905311 +0000 UTC m=+91.705736536 (delta=83.009575ms)
	I0914 17:39:39.622858   45790 fix.go:200] guest clock delta is within tolerance: 83.009575ms
	I0914 17:39:39.622863   45790 start.go:83] releasing machines lock for "multinode-396884", held for 1m31.690518254s
	I0914 17:39:39.622884   45790 main.go:141] libmachine: (multinode-396884) Calling .DriverName
	I0914 17:39:39.623103   45790 main.go:141] libmachine: (multinode-396884) Calling .GetIP
	I0914 17:39:39.625950   45790 main.go:141] libmachine: (multinode-396884) DBG | domain multinode-396884 has defined MAC address 52:54:00:6d:2b:08 in network mk-multinode-396884
	I0914 17:39:39.626329   45790 main.go:141] libmachine: (multinode-396884) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:2b:08", ip: ""} in network mk-multinode-396884: {Iface:virbr1 ExpiryTime:2024-09-14 18:32:44 +0000 UTC Type:0 Mac:52:54:00:6d:2b:08 Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:multinode-396884 Clientid:01:52:54:00:6d:2b:08}
	I0914 17:39:39.626354   45790 main.go:141] libmachine: (multinode-396884) DBG | domain multinode-396884 has defined IP address 192.168.39.202 and MAC address 52:54:00:6d:2b:08 in network mk-multinode-396884
	I0914 17:39:39.626543   45790 main.go:141] libmachine: (multinode-396884) Calling .DriverName
	I0914 17:39:39.626965   45790 main.go:141] libmachine: (multinode-396884) Calling .DriverName
	I0914 17:39:39.627134   45790 main.go:141] libmachine: (multinode-396884) Calling .DriverName
	I0914 17:39:39.627254   45790 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0914 17:39:39.627302   45790 ssh_runner.go:195] Run: cat /version.json
	I0914 17:39:39.627325   45790 main.go:141] libmachine: (multinode-396884) Calling .GetSSHHostname
	I0914 17:39:39.627306   45790 main.go:141] libmachine: (multinode-396884) Calling .GetSSHHostname
	I0914 17:39:39.630009   45790 main.go:141] libmachine: (multinode-396884) DBG | domain multinode-396884 has defined MAC address 52:54:00:6d:2b:08 in network mk-multinode-396884
	I0914 17:39:39.630136   45790 main.go:141] libmachine: (multinode-396884) DBG | domain multinode-396884 has defined MAC address 52:54:00:6d:2b:08 in network mk-multinode-396884
	I0914 17:39:39.630524   45790 main.go:141] libmachine: (multinode-396884) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:2b:08", ip: ""} in network mk-multinode-396884: {Iface:virbr1 ExpiryTime:2024-09-14 18:32:44 +0000 UTC Type:0 Mac:52:54:00:6d:2b:08 Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:multinode-396884 Clientid:01:52:54:00:6d:2b:08}
	I0914 17:39:39.630552   45790 main.go:141] libmachine: (multinode-396884) DBG | domain multinode-396884 has defined IP address 192.168.39.202 and MAC address 52:54:00:6d:2b:08 in network mk-multinode-396884
	I0914 17:39:39.630577   45790 main.go:141] libmachine: (multinode-396884) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:2b:08", ip: ""} in network mk-multinode-396884: {Iface:virbr1 ExpiryTime:2024-09-14 18:32:44 +0000 UTC Type:0 Mac:52:54:00:6d:2b:08 Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:multinode-396884 Clientid:01:52:54:00:6d:2b:08}
	I0914 17:39:39.630593   45790 main.go:141] libmachine: (multinode-396884) DBG | domain multinode-396884 has defined IP address 192.168.39.202 and MAC address 52:54:00:6d:2b:08 in network mk-multinode-396884
	I0914 17:39:39.630709   45790 main.go:141] libmachine: (multinode-396884) Calling .GetSSHPort
	I0914 17:39:39.630869   45790 main.go:141] libmachine: (multinode-396884) Calling .GetSSHKeyPath
	I0914 17:39:39.630888   45790 main.go:141] libmachine: (multinode-396884) Calling .GetSSHPort
	I0914 17:39:39.631041   45790 main.go:141] libmachine: (multinode-396884) Calling .GetSSHKeyPath
	I0914 17:39:39.631059   45790 main.go:141] libmachine: (multinode-396884) Calling .GetSSHUsername
	I0914 17:39:39.631187   45790 main.go:141] libmachine: (multinode-396884) Calling .GetSSHUsername
	I0914 17:39:39.631317   45790 sshutil.go:53] new ssh client: &{IP:192.168.39.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/multinode-396884/id_rsa Username:docker}
	I0914 17:39:39.631328   45790 sshutil.go:53] new ssh client: &{IP:192.168.39.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/multinode-396884/id_rsa Username:docker}
	I0914 17:39:39.719974   45790 command_runner.go:130] > {"iso_version": "v1.34.0-1726281733-19643", "kicbase_version": "v0.0.45-1726243947-19640", "minikube_version": "v1.34.0", "commit": "e811e8872a58983cadac51ebe65d77fb02f32a08"}
	I0914 17:39:39.752106   45790 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0914 17:39:39.752907   45790 ssh_runner.go:195] Run: systemctl --version
	I0914 17:39:39.758907   45790 command_runner.go:130] > systemd 252 (252)
	I0914 17:39:39.758940   45790 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0914 17:39:39.759037   45790 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0914 17:39:39.915840   45790 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0914 17:39:39.925133   45790 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0914 17:39:39.925169   45790 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0914 17:39:39.925224   45790 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0914 17:39:39.934706   45790 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0914 17:39:39.934728   45790 start.go:495] detecting cgroup driver to use...
	I0914 17:39:39.934797   45790 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0914 17:39:39.953064   45790 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0914 17:39:39.967741   45790 docker.go:217] disabling cri-docker service (if available) ...
	I0914 17:39:39.967798   45790 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0914 17:39:39.982169   45790 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0914 17:39:39.996132   45790 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0914 17:39:40.147051   45790 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0914 17:39:40.307874   45790 docker.go:233] disabling docker service ...
	I0914 17:39:40.307950   45790 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0914 17:39:40.327589   45790 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0914 17:39:40.341745   45790 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0914 17:39:40.493489   45790 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0914 17:39:40.647227   45790 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0914 17:39:40.662532   45790 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0914 17:39:40.681643   45790 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0914 17:39:40.681703   45790 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0914 17:39:40.681748   45790 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 17:39:40.692618   45790 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0914 17:39:40.692685   45790 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 17:39:40.703395   45790 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 17:39:40.713647   45790 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 17:39:40.725550   45790 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0914 17:39:40.738012   45790 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 17:39:40.748792   45790 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 17:39:40.759254   45790 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 17:39:40.769560   45790 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0914 17:39:40.779762   45790 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0914 17:39:40.779827   45790 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0914 17:39:40.789791   45790 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 17:39:40.932537   45790 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0914 17:39:41.151605   45790 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0914 17:39:41.151685   45790 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0914 17:39:41.157394   45790 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0914 17:39:41.157435   45790 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0914 17:39:41.157445   45790 command_runner.go:130] > Device: 0,22	Inode: 1306        Links: 1
	I0914 17:39:41.157456   45790 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0914 17:39:41.157464   45790 command_runner.go:130] > Access: 2024-09-14 17:39:41.039445574 +0000
	I0914 17:39:41.157472   45790 command_runner.go:130] > Modify: 2024-09-14 17:39:40.990444441 +0000
	I0914 17:39:41.157480   45790 command_runner.go:130] > Change: 2024-09-14 17:39:40.990444441 +0000
	I0914 17:39:41.157487   45790 command_runner.go:130] >  Birth: -
	I0914 17:39:41.157523   45790 start.go:563] Will wait 60s for crictl version
	I0914 17:39:41.157583   45790 ssh_runner.go:195] Run: which crictl
	I0914 17:39:41.161496   45790 command_runner.go:130] > /usr/bin/crictl
	I0914 17:39:41.161557   45790 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0914 17:39:41.201332   45790 command_runner.go:130] > Version:  0.1.0
	I0914 17:39:41.201363   45790 command_runner.go:130] > RuntimeName:  cri-o
	I0914 17:39:41.201371   45790 command_runner.go:130] > RuntimeVersion:  1.29.1
	I0914 17:39:41.201461   45790 command_runner.go:130] > RuntimeApiVersion:  v1
	I0914 17:39:41.202795   45790 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0914 17:39:41.202873   45790 ssh_runner.go:195] Run: crio --version
	I0914 17:39:41.236356   45790 command_runner.go:130] > crio version 1.29.1
	I0914 17:39:41.236380   45790 command_runner.go:130] > Version:        1.29.1
	I0914 17:39:41.236394   45790 command_runner.go:130] > GitCommit:      unknown
	I0914 17:39:41.236403   45790 command_runner.go:130] > GitCommitDate:  unknown
	I0914 17:39:41.236409   45790 command_runner.go:130] > GitTreeState:   clean
	I0914 17:39:41.236429   45790 command_runner.go:130] > BuildDate:      2024-09-14T08:18:37Z
	I0914 17:39:41.236434   45790 command_runner.go:130] > GoVersion:      go1.21.6
	I0914 17:39:41.236438   45790 command_runner.go:130] > Compiler:       gc
	I0914 17:39:41.236448   45790 command_runner.go:130] > Platform:       linux/amd64
	I0914 17:39:41.236453   45790 command_runner.go:130] > Linkmode:       dynamic
	I0914 17:39:41.236467   45790 command_runner.go:130] > BuildTags:      
	I0914 17:39:41.236475   45790 command_runner.go:130] >   containers_image_ostree_stub
	I0914 17:39:41.236479   45790 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0914 17:39:41.236484   45790 command_runner.go:130] >   btrfs_noversion
	I0914 17:39:41.236488   45790 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0914 17:39:41.236493   45790 command_runner.go:130] >   libdm_no_deferred_remove
	I0914 17:39:41.236496   45790 command_runner.go:130] >   seccomp
	I0914 17:39:41.236503   45790 command_runner.go:130] > LDFlags:          unknown
	I0914 17:39:41.236507   45790 command_runner.go:130] > SeccompEnabled:   true
	I0914 17:39:41.236511   45790 command_runner.go:130] > AppArmorEnabled:  false
	I0914 17:39:41.236579   45790 ssh_runner.go:195] Run: crio --version
	I0914 17:39:41.263512   45790 command_runner.go:130] > crio version 1.29.1
	I0914 17:39:41.263533   45790 command_runner.go:130] > Version:        1.29.1
	I0914 17:39:41.263538   45790 command_runner.go:130] > GitCommit:      unknown
	I0914 17:39:41.263543   45790 command_runner.go:130] > GitCommitDate:  unknown
	I0914 17:39:41.263547   45790 command_runner.go:130] > GitTreeState:   clean
	I0914 17:39:41.263552   45790 command_runner.go:130] > BuildDate:      2024-09-14T08:18:37Z
	I0914 17:39:41.263556   45790 command_runner.go:130] > GoVersion:      go1.21.6
	I0914 17:39:41.263560   45790 command_runner.go:130] > Compiler:       gc
	I0914 17:39:41.263573   45790 command_runner.go:130] > Platform:       linux/amd64
	I0914 17:39:41.263577   45790 command_runner.go:130] > Linkmode:       dynamic
	I0914 17:39:41.263592   45790 command_runner.go:130] > BuildTags:      
	I0914 17:39:41.263596   45790 command_runner.go:130] >   containers_image_ostree_stub
	I0914 17:39:41.263601   45790 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0914 17:39:41.263606   45790 command_runner.go:130] >   btrfs_noversion
	I0914 17:39:41.263611   45790 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0914 17:39:41.263617   45790 command_runner.go:130] >   libdm_no_deferred_remove
	I0914 17:39:41.263621   45790 command_runner.go:130] >   seccomp
	I0914 17:39:41.263625   45790 command_runner.go:130] > LDFlags:          unknown
	I0914 17:39:41.263641   45790 command_runner.go:130] > SeccompEnabled:   true
	I0914 17:39:41.263648   45790 command_runner.go:130] > AppArmorEnabled:  false
	I0914 17:39:41.266908   45790 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0914 17:39:41.268257   45790 main.go:141] libmachine: (multinode-396884) Calling .GetIP
	I0914 17:39:41.270873   45790 main.go:141] libmachine: (multinode-396884) DBG | domain multinode-396884 has defined MAC address 52:54:00:6d:2b:08 in network mk-multinode-396884
	I0914 17:39:41.271243   45790 main.go:141] libmachine: (multinode-396884) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:2b:08", ip: ""} in network mk-multinode-396884: {Iface:virbr1 ExpiryTime:2024-09-14 18:32:44 +0000 UTC Type:0 Mac:52:54:00:6d:2b:08 Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:multinode-396884 Clientid:01:52:54:00:6d:2b:08}
	I0914 17:39:41.271268   45790 main.go:141] libmachine: (multinode-396884) DBG | domain multinode-396884 has defined IP address 192.168.39.202 and MAC address 52:54:00:6d:2b:08 in network mk-multinode-396884
	I0914 17:39:41.271565   45790 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0914 17:39:41.275800   45790 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I0914 17:39:41.275936   45790 kubeadm.go:883] updating cluster {Name:multinode-396884 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19643/minikube-v1.34.0-1726281733-19643-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726281268-19643@sha256:cce8be0c1ac4e3d852132008ef1cc1dcf5b79f708d025db83f146ae65db32e8e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-396884 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.202 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.97 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.207 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0914 17:39:41.276082   45790 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0914 17:39:41.276126   45790 ssh_runner.go:195] Run: sudo crictl images --output json
	I0914 17:39:41.316246   45790 command_runner.go:130] > {
	I0914 17:39:41.316271   45790 command_runner.go:130] >   "images": [
	I0914 17:39:41.316277   45790 command_runner.go:130] >     {
	I0914 17:39:41.316307   45790 command_runner.go:130] >       "id": "12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f",
	I0914 17:39:41.316314   45790 command_runner.go:130] >       "repoTags": [
	I0914 17:39:41.316322   45790 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240813-c6f155d6"
	I0914 17:39:41.316328   45790 command_runner.go:130] >       ],
	I0914 17:39:41.316333   45790 command_runner.go:130] >       "repoDigests": [
	I0914 17:39:41.316344   45790 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b",
	I0914 17:39:41.316354   45790 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"
	I0914 17:39:41.316371   45790 command_runner.go:130] >       ],
	I0914 17:39:41.316378   45790 command_runner.go:130] >       "size": "87190579",
	I0914 17:39:41.316382   45790 command_runner.go:130] >       "uid": null,
	I0914 17:39:41.316388   45790 command_runner.go:130] >       "username": "",
	I0914 17:39:41.316393   45790 command_runner.go:130] >       "spec": null,
	I0914 17:39:41.316399   45790 command_runner.go:130] >       "pinned": false
	I0914 17:39:41.316403   45790 command_runner.go:130] >     },
	I0914 17:39:41.316408   45790 command_runner.go:130] >     {
	I0914 17:39:41.316413   45790 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0914 17:39:41.316420   45790 command_runner.go:130] >       "repoTags": [
	I0914 17:39:41.316433   45790 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0914 17:39:41.316442   45790 command_runner.go:130] >       ],
	I0914 17:39:41.316448   45790 command_runner.go:130] >       "repoDigests": [
	I0914 17:39:41.316463   45790 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0914 17:39:41.316475   45790 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0914 17:39:41.316482   45790 command_runner.go:130] >       ],
	I0914 17:39:41.316486   45790 command_runner.go:130] >       "size": "1363676",
	I0914 17:39:41.316492   45790 command_runner.go:130] >       "uid": null,
	I0914 17:39:41.316498   45790 command_runner.go:130] >       "username": "",
	I0914 17:39:41.316504   45790 command_runner.go:130] >       "spec": null,
	I0914 17:39:41.316508   45790 command_runner.go:130] >       "pinned": false
	I0914 17:39:41.316513   45790 command_runner.go:130] >     },
	I0914 17:39:41.316517   45790 command_runner.go:130] >     {
	I0914 17:39:41.316525   45790 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0914 17:39:41.316533   45790 command_runner.go:130] >       "repoTags": [
	I0914 17:39:41.316544   45790 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0914 17:39:41.316552   45790 command_runner.go:130] >       ],
	I0914 17:39:41.316562   45790 command_runner.go:130] >       "repoDigests": [
	I0914 17:39:41.316574   45790 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0914 17:39:41.316585   45790 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0914 17:39:41.316591   45790 command_runner.go:130] >       ],
	I0914 17:39:41.316596   45790 command_runner.go:130] >       "size": "31470524",
	I0914 17:39:41.316602   45790 command_runner.go:130] >       "uid": null,
	I0914 17:39:41.316606   45790 command_runner.go:130] >       "username": "",
	I0914 17:39:41.316611   45790 command_runner.go:130] >       "spec": null,
	I0914 17:39:41.316616   45790 command_runner.go:130] >       "pinned": false
	I0914 17:39:41.316623   45790 command_runner.go:130] >     },
	I0914 17:39:41.316632   45790 command_runner.go:130] >     {
	I0914 17:39:41.316645   45790 command_runner.go:130] >       "id": "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6",
	I0914 17:39:41.316655   45790 command_runner.go:130] >       "repoTags": [
	I0914 17:39:41.316666   45790 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.3"
	I0914 17:39:41.316675   45790 command_runner.go:130] >       ],
	I0914 17:39:41.316683   45790 command_runner.go:130] >       "repoDigests": [
	I0914 17:39:41.316696   45790 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e",
	I0914 17:39:41.316714   45790 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50"
	I0914 17:39:41.316722   45790 command_runner.go:130] >       ],
	I0914 17:39:41.316728   45790 command_runner.go:130] >       "size": "63273227",
	I0914 17:39:41.316736   45790 command_runner.go:130] >       "uid": null,
	I0914 17:39:41.316741   45790 command_runner.go:130] >       "username": "nonroot",
	I0914 17:39:41.316750   45790 command_runner.go:130] >       "spec": null,
	I0914 17:39:41.316756   45790 command_runner.go:130] >       "pinned": false
	I0914 17:39:41.316763   45790 command_runner.go:130] >     },
	I0914 17:39:41.316768   45790 command_runner.go:130] >     {
	I0914 17:39:41.316780   45790 command_runner.go:130] >       "id": "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4",
	I0914 17:39:41.316789   45790 command_runner.go:130] >       "repoTags": [
	I0914 17:39:41.316796   45790 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.15-0"
	I0914 17:39:41.316804   45790 command_runner.go:130] >       ],
	I0914 17:39:41.316810   45790 command_runner.go:130] >       "repoDigests": [
	I0914 17:39:41.316823   45790 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d",
	I0914 17:39:41.316836   45790 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"
	I0914 17:39:41.316842   45790 command_runner.go:130] >       ],
	I0914 17:39:41.316851   45790 command_runner.go:130] >       "size": "149009664",
	I0914 17:39:41.316857   45790 command_runner.go:130] >       "uid": {
	I0914 17:39:41.316866   45790 command_runner.go:130] >         "value": "0"
	I0914 17:39:41.316873   45790 command_runner.go:130] >       },
	I0914 17:39:41.316883   45790 command_runner.go:130] >       "username": "",
	I0914 17:39:41.316892   45790 command_runner.go:130] >       "spec": null,
	I0914 17:39:41.316901   45790 command_runner.go:130] >       "pinned": false
	I0914 17:39:41.316909   45790 command_runner.go:130] >     },
	I0914 17:39:41.316914   45790 command_runner.go:130] >     {
	I0914 17:39:41.316923   45790 command_runner.go:130] >       "id": "6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee",
	I0914 17:39:41.316927   45790 command_runner.go:130] >       "repoTags": [
	I0914 17:39:41.316932   45790 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.31.1"
	I0914 17:39:41.316937   45790 command_runner.go:130] >       ],
	I0914 17:39:41.316941   45790 command_runner.go:130] >       "repoDigests": [
	I0914 17:39:41.316951   45790 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:1f30d71692d2ab71ce2c1dd5fab86e0cb00ce888d21de18806f5482021d18771",
	I0914 17:39:41.316970   45790 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb"
	I0914 17:39:41.316979   45790 command_runner.go:130] >       ],
	I0914 17:39:41.316984   45790 command_runner.go:130] >       "size": "95237600",
	I0914 17:39:41.316990   45790 command_runner.go:130] >       "uid": {
	I0914 17:39:41.316999   45790 command_runner.go:130] >         "value": "0"
	I0914 17:39:41.317005   45790 command_runner.go:130] >       },
	I0914 17:39:41.317014   45790 command_runner.go:130] >       "username": "",
	I0914 17:39:41.317020   45790 command_runner.go:130] >       "spec": null,
	I0914 17:39:41.317026   45790 command_runner.go:130] >       "pinned": false
	I0914 17:39:41.317034   45790 command_runner.go:130] >     },
	I0914 17:39:41.317040   45790 command_runner.go:130] >     {
	I0914 17:39:41.317052   45790 command_runner.go:130] >       "id": "175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1",
	I0914 17:39:41.317065   45790 command_runner.go:130] >       "repoTags": [
	I0914 17:39:41.317074   45790 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.31.1"
	I0914 17:39:41.317078   45790 command_runner.go:130] >       ],
	I0914 17:39:41.317082   45790 command_runner.go:130] >       "repoDigests": [
	I0914 17:39:41.317092   45790 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1",
	I0914 17:39:41.317100   45790 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:e6c5253433f9032cff2bd9b1f41e29b9691a6d6ec97903896c0ca5f069a63748"
	I0914 17:39:41.317106   45790 command_runner.go:130] >       ],
	I0914 17:39:41.317110   45790 command_runner.go:130] >       "size": "89437508",
	I0914 17:39:41.317113   45790 command_runner.go:130] >       "uid": {
	I0914 17:39:41.317117   45790 command_runner.go:130] >         "value": "0"
	I0914 17:39:41.317121   45790 command_runner.go:130] >       },
	I0914 17:39:41.317125   45790 command_runner.go:130] >       "username": "",
	I0914 17:39:41.317129   45790 command_runner.go:130] >       "spec": null,
	I0914 17:39:41.317133   45790 command_runner.go:130] >       "pinned": false
	I0914 17:39:41.317136   45790 command_runner.go:130] >     },
	I0914 17:39:41.317139   45790 command_runner.go:130] >     {
	I0914 17:39:41.317145   45790 command_runner.go:130] >       "id": "60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561",
	I0914 17:39:41.317151   45790 command_runner.go:130] >       "repoTags": [
	I0914 17:39:41.317156   45790 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.31.1"
	I0914 17:39:41.317159   45790 command_runner.go:130] >       ],
	I0914 17:39:41.317163   45790 command_runner.go:130] >       "repoDigests": [
	I0914 17:39:41.317189   45790 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44",
	I0914 17:39:41.317199   45790 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:bb26bcf4490a4653ecb77ceb883c0fd8dd876f104f776aa0a6cbf9df68b16af2"
	I0914 17:39:41.317202   45790 command_runner.go:130] >       ],
	I0914 17:39:41.317207   45790 command_runner.go:130] >       "size": "92733849",
	I0914 17:39:41.317211   45790 command_runner.go:130] >       "uid": null,
	I0914 17:39:41.317214   45790 command_runner.go:130] >       "username": "",
	I0914 17:39:41.317218   45790 command_runner.go:130] >       "spec": null,
	I0914 17:39:41.317222   45790 command_runner.go:130] >       "pinned": false
	I0914 17:39:41.317225   45790 command_runner.go:130] >     },
	I0914 17:39:41.317227   45790 command_runner.go:130] >     {
	I0914 17:39:41.317233   45790 command_runner.go:130] >       "id": "9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b",
	I0914 17:39:41.317237   45790 command_runner.go:130] >       "repoTags": [
	I0914 17:39:41.317241   45790 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.31.1"
	I0914 17:39:41.317245   45790 command_runner.go:130] >       ],
	I0914 17:39:41.317248   45790 command_runner.go:130] >       "repoDigests": [
	I0914 17:39:41.317258   45790 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0",
	I0914 17:39:41.317268   45790 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:cb9d9404dddf0c6728b99a42d10d8ab1ece2a1c793ef1d7b03eddaeac26864d8"
	I0914 17:39:41.317274   45790 command_runner.go:130] >       ],
	I0914 17:39:41.317280   45790 command_runner.go:130] >       "size": "68420934",
	I0914 17:39:41.317291   45790 command_runner.go:130] >       "uid": {
	I0914 17:39:41.317295   45790 command_runner.go:130] >         "value": "0"
	I0914 17:39:41.317298   45790 command_runner.go:130] >       },
	I0914 17:39:41.317301   45790 command_runner.go:130] >       "username": "",
	I0914 17:39:41.317305   45790 command_runner.go:130] >       "spec": null,
	I0914 17:39:41.317308   45790 command_runner.go:130] >       "pinned": false
	I0914 17:39:41.317311   45790 command_runner.go:130] >     },
	I0914 17:39:41.317315   45790 command_runner.go:130] >     {
	I0914 17:39:41.317320   45790 command_runner.go:130] >       "id": "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I0914 17:39:41.317326   45790 command_runner.go:130] >       "repoTags": [
	I0914 17:39:41.317332   45790 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I0914 17:39:41.317337   45790 command_runner.go:130] >       ],
	I0914 17:39:41.317343   45790 command_runner.go:130] >       "repoDigests": [
	I0914 17:39:41.317353   45790 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a",
	I0914 17:39:41.317370   45790 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I0914 17:39:41.317378   45790 command_runner.go:130] >       ],
	I0914 17:39:41.317382   45790 command_runner.go:130] >       "size": "742080",
	I0914 17:39:41.317386   45790 command_runner.go:130] >       "uid": {
	I0914 17:39:41.317390   45790 command_runner.go:130] >         "value": "65535"
	I0914 17:39:41.317393   45790 command_runner.go:130] >       },
	I0914 17:39:41.317397   45790 command_runner.go:130] >       "username": "",
	I0914 17:39:41.317401   45790 command_runner.go:130] >       "spec": null,
	I0914 17:39:41.317404   45790 command_runner.go:130] >       "pinned": true
	I0914 17:39:41.317408   45790 command_runner.go:130] >     }
	I0914 17:39:41.317411   45790 command_runner.go:130] >   ]
	I0914 17:39:41.317414   45790 command_runner.go:130] > }
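	The image inventory above is the payload of "sudo crictl images --output json" as captured by command_runner. For readers reproducing this preload check by hand, here is a minimal Go sketch (not minikube's own code) that decodes that payload shape; it assumes crictl is on PATH inside the node and reachable with sudo.

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// imageList mirrors the shape of the crictl images --output json payload
	// captured in the log above; every field below appears verbatim in that output.
	type imageList struct {
		Images []struct {
			ID          string   `json:"id"`
			RepoTags    []string `json:"repoTags"`
			RepoDigests []string `json:"repoDigests"`
			Size        string   `json:"size"` // reported as a decimal string, e.g. "149009664"
			UID         *struct {
				Value string `json:"value"`
			} `json:"uid"` // null for images that do not pin a UID
			Username string `json:"username"`
			Pinned   bool   `json:"pinned"`
		} `json:"images"`
	}

	func main() {
		// Same command the test log shows minikube running over SSH.
		out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
		if err != nil {
			panic(err)
		}
		var list imageList
		if err := json.Unmarshal(out, &list); err != nil {
			panic(err)
		}
		for _, img := range list.Images {
			fmt.Printf("pinned=%-5v size=%-10s %v\n", img.Pinned, img.Size, img.RepoTags)
		}
	}

	Counting list.Images against the expected preload set is, in effect, what the crio.go:514 "all images are preloaded" line below is reporting.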
	I0914 17:39:41.317649   45790 crio.go:514] all images are preloaded for cri-o runtime.
	I0914 17:39:41.317667   45790 crio.go:433] Images already preloaded, skipping extraction
	I0914 17:39:41.317728   45790 ssh_runner.go:195] Run: sudo crictl images --output json
	I0914 17:39:41.355211   45790 command_runner.go:130] > {
	I0914 17:39:41.355232   45790 command_runner.go:130] >   "images": [
	I0914 17:39:41.355238   45790 command_runner.go:130] >     {
	I0914 17:39:41.355248   45790 command_runner.go:130] >       "id": "12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f",
	I0914 17:39:41.355255   45790 command_runner.go:130] >       "repoTags": [
	I0914 17:39:41.355263   45790 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240813-c6f155d6"
	I0914 17:39:41.355268   45790 command_runner.go:130] >       ],
	I0914 17:39:41.355273   45790 command_runner.go:130] >       "repoDigests": [
	I0914 17:39:41.355285   45790 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b",
	I0914 17:39:41.355296   45790 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"
	I0914 17:39:41.355307   45790 command_runner.go:130] >       ],
	I0914 17:39:41.355313   45790 command_runner.go:130] >       "size": "87190579",
	I0914 17:39:41.355319   45790 command_runner.go:130] >       "uid": null,
	I0914 17:39:41.355324   45790 command_runner.go:130] >       "username": "",
	I0914 17:39:41.355337   45790 command_runner.go:130] >       "spec": null,
	I0914 17:39:41.355348   45790 command_runner.go:130] >       "pinned": false
	I0914 17:39:41.355355   45790 command_runner.go:130] >     },
	I0914 17:39:41.355361   45790 command_runner.go:130] >     {
	I0914 17:39:41.355379   45790 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0914 17:39:41.355388   45790 command_runner.go:130] >       "repoTags": [
	I0914 17:39:41.355397   45790 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0914 17:39:41.355404   45790 command_runner.go:130] >       ],
	I0914 17:39:41.355411   45790 command_runner.go:130] >       "repoDigests": [
	I0914 17:39:41.355424   45790 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0914 17:39:41.355447   45790 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0914 17:39:41.355456   45790 command_runner.go:130] >       ],
	I0914 17:39:41.355463   45790 command_runner.go:130] >       "size": "1363676",
	I0914 17:39:41.355469   45790 command_runner.go:130] >       "uid": null,
	I0914 17:39:41.355484   45790 command_runner.go:130] >       "username": "",
	I0914 17:39:41.355493   45790 command_runner.go:130] >       "spec": null,
	I0914 17:39:41.355499   45790 command_runner.go:130] >       "pinned": false
	I0914 17:39:41.355506   45790 command_runner.go:130] >     },
	I0914 17:39:41.355512   45790 command_runner.go:130] >     {
	I0914 17:39:41.355523   45790 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0914 17:39:41.355531   45790 command_runner.go:130] >       "repoTags": [
	I0914 17:39:41.355540   45790 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0914 17:39:41.355549   45790 command_runner.go:130] >       ],
	I0914 17:39:41.355557   45790 command_runner.go:130] >       "repoDigests": [
	I0914 17:39:41.355582   45790 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0914 17:39:41.355593   45790 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0914 17:39:41.355599   45790 command_runner.go:130] >       ],
	I0914 17:39:41.355607   45790 command_runner.go:130] >       "size": "31470524",
	I0914 17:39:41.355617   45790 command_runner.go:130] >       "uid": null,
	I0914 17:39:41.355626   45790 command_runner.go:130] >       "username": "",
	I0914 17:39:41.355635   45790 command_runner.go:130] >       "spec": null,
	I0914 17:39:41.355643   45790 command_runner.go:130] >       "pinned": false
	I0914 17:39:41.355651   45790 command_runner.go:130] >     },
	I0914 17:39:41.355657   45790 command_runner.go:130] >     {
	I0914 17:39:41.355671   45790 command_runner.go:130] >       "id": "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6",
	I0914 17:39:41.355680   45790 command_runner.go:130] >       "repoTags": [
	I0914 17:39:41.355688   45790 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.3"
	I0914 17:39:41.355703   45790 command_runner.go:130] >       ],
	I0914 17:39:41.355713   45790 command_runner.go:130] >       "repoDigests": [
	I0914 17:39:41.355729   45790 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e",
	I0914 17:39:41.355755   45790 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50"
	I0914 17:39:41.355763   45790 command_runner.go:130] >       ],
	I0914 17:39:41.355771   45790 command_runner.go:130] >       "size": "63273227",
	I0914 17:39:41.355781   45790 command_runner.go:130] >       "uid": null,
	I0914 17:39:41.355794   45790 command_runner.go:130] >       "username": "nonroot",
	I0914 17:39:41.355803   45790 command_runner.go:130] >       "spec": null,
	I0914 17:39:41.355813   45790 command_runner.go:130] >       "pinned": false
	I0914 17:39:41.355821   45790 command_runner.go:130] >     },
	I0914 17:39:41.355827   45790 command_runner.go:130] >     {
	I0914 17:39:41.355839   45790 command_runner.go:130] >       "id": "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4",
	I0914 17:39:41.355848   45790 command_runner.go:130] >       "repoTags": [
	I0914 17:39:41.355855   45790 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.15-0"
	I0914 17:39:41.355864   45790 command_runner.go:130] >       ],
	I0914 17:39:41.355872   45790 command_runner.go:130] >       "repoDigests": [
	I0914 17:39:41.355893   45790 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d",
	I0914 17:39:41.355908   45790 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"
	I0914 17:39:41.355915   45790 command_runner.go:130] >       ],
	I0914 17:39:41.355924   45790 command_runner.go:130] >       "size": "149009664",
	I0914 17:39:41.355932   45790 command_runner.go:130] >       "uid": {
	I0914 17:39:41.355939   45790 command_runner.go:130] >         "value": "0"
	I0914 17:39:41.355946   45790 command_runner.go:130] >       },
	I0914 17:39:41.355954   45790 command_runner.go:130] >       "username": "",
	I0914 17:39:41.355962   45790 command_runner.go:130] >       "spec": null,
	I0914 17:39:41.355969   45790 command_runner.go:130] >       "pinned": false
	I0914 17:39:41.355978   45790 command_runner.go:130] >     },
	I0914 17:39:41.355983   45790 command_runner.go:130] >     {
	I0914 17:39:41.355996   45790 command_runner.go:130] >       "id": "6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee",
	I0914 17:39:41.356006   45790 command_runner.go:130] >       "repoTags": [
	I0914 17:39:41.356022   45790 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.31.1"
	I0914 17:39:41.356030   45790 command_runner.go:130] >       ],
	I0914 17:39:41.356043   45790 command_runner.go:130] >       "repoDigests": [
	I0914 17:39:41.356058   45790 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:1f30d71692d2ab71ce2c1dd5fab86e0cb00ce888d21de18806f5482021d18771",
	I0914 17:39:41.356073   45790 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb"
	I0914 17:39:41.356081   45790 command_runner.go:130] >       ],
	I0914 17:39:41.356088   45790 command_runner.go:130] >       "size": "95237600",
	I0914 17:39:41.356097   45790 command_runner.go:130] >       "uid": {
	I0914 17:39:41.356105   45790 command_runner.go:130] >         "value": "0"
	I0914 17:39:41.356113   45790 command_runner.go:130] >       },
	I0914 17:39:41.356120   45790 command_runner.go:130] >       "username": "",
	I0914 17:39:41.356129   45790 command_runner.go:130] >       "spec": null,
	I0914 17:39:41.356138   45790 command_runner.go:130] >       "pinned": false
	I0914 17:39:41.356145   45790 command_runner.go:130] >     },
	I0914 17:39:41.356152   45790 command_runner.go:130] >     {
	I0914 17:39:41.356164   45790 command_runner.go:130] >       "id": "175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1",
	I0914 17:39:41.356173   45790 command_runner.go:130] >       "repoTags": [
	I0914 17:39:41.356197   45790 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.31.1"
	I0914 17:39:41.356205   45790 command_runner.go:130] >       ],
	I0914 17:39:41.356213   45790 command_runner.go:130] >       "repoDigests": [
	I0914 17:39:41.356229   45790 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1",
	I0914 17:39:41.356244   45790 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:e6c5253433f9032cff2bd9b1f41e29b9691a6d6ec97903896c0ca5f069a63748"
	I0914 17:39:41.356255   45790 command_runner.go:130] >       ],
	I0914 17:39:41.356264   45790 command_runner.go:130] >       "size": "89437508",
	I0914 17:39:41.356272   45790 command_runner.go:130] >       "uid": {
	I0914 17:39:41.356279   45790 command_runner.go:130] >         "value": "0"
	I0914 17:39:41.356287   45790 command_runner.go:130] >       },
	I0914 17:39:41.356294   45790 command_runner.go:130] >       "username": "",
	I0914 17:39:41.356303   45790 command_runner.go:130] >       "spec": null,
	I0914 17:39:41.356310   45790 command_runner.go:130] >       "pinned": false
	I0914 17:39:41.356316   45790 command_runner.go:130] >     },
	I0914 17:39:41.356324   45790 command_runner.go:130] >     {
	I0914 17:39:41.356334   45790 command_runner.go:130] >       "id": "60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561",
	I0914 17:39:41.356343   45790 command_runner.go:130] >       "repoTags": [
	I0914 17:39:41.356352   45790 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.31.1"
	I0914 17:39:41.356366   45790 command_runner.go:130] >       ],
	I0914 17:39:41.356376   45790 command_runner.go:130] >       "repoDigests": [
	I0914 17:39:41.356405   45790 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44",
	I0914 17:39:41.356419   45790 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:bb26bcf4490a4653ecb77ceb883c0fd8dd876f104f776aa0a6cbf9df68b16af2"
	I0914 17:39:41.356425   45790 command_runner.go:130] >       ],
	I0914 17:39:41.356433   45790 command_runner.go:130] >       "size": "92733849",
	I0914 17:39:41.356443   45790 command_runner.go:130] >       "uid": null,
	I0914 17:39:41.356451   45790 command_runner.go:130] >       "username": "",
	I0914 17:39:41.356459   45790 command_runner.go:130] >       "spec": null,
	I0914 17:39:41.356469   45790 command_runner.go:130] >       "pinned": false
	I0914 17:39:41.356477   45790 command_runner.go:130] >     },
	I0914 17:39:41.356485   45790 command_runner.go:130] >     {
	I0914 17:39:41.356496   45790 command_runner.go:130] >       "id": "9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b",
	I0914 17:39:41.356504   45790 command_runner.go:130] >       "repoTags": [
	I0914 17:39:41.356514   45790 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.31.1"
	I0914 17:39:41.356523   45790 command_runner.go:130] >       ],
	I0914 17:39:41.356530   45790 command_runner.go:130] >       "repoDigests": [
	I0914 17:39:41.356545   45790 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0",
	I0914 17:39:41.356562   45790 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:cb9d9404dddf0c6728b99a42d10d8ab1ece2a1c793ef1d7b03eddaeac26864d8"
	I0914 17:39:41.356576   45790 command_runner.go:130] >       ],
	I0914 17:39:41.356583   45790 command_runner.go:130] >       "size": "68420934",
	I0914 17:39:41.356592   45790 command_runner.go:130] >       "uid": {
	I0914 17:39:41.356599   45790 command_runner.go:130] >         "value": "0"
	I0914 17:39:41.356606   45790 command_runner.go:130] >       },
	I0914 17:39:41.356614   45790 command_runner.go:130] >       "username": "",
	I0914 17:39:41.356622   45790 command_runner.go:130] >       "spec": null,
	I0914 17:39:41.356629   45790 command_runner.go:130] >       "pinned": false
	I0914 17:39:41.356635   45790 command_runner.go:130] >     },
	I0914 17:39:41.356643   45790 command_runner.go:130] >     {
	I0914 17:39:41.356657   45790 command_runner.go:130] >       "id": "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I0914 17:39:41.356666   45790 command_runner.go:130] >       "repoTags": [
	I0914 17:39:41.356674   45790 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I0914 17:39:41.356682   45790 command_runner.go:130] >       ],
	I0914 17:39:41.356698   45790 command_runner.go:130] >       "repoDigests": [
	I0914 17:39:41.356712   45790 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a",
	I0914 17:39:41.356730   45790 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I0914 17:39:41.356738   45790 command_runner.go:130] >       ],
	I0914 17:39:41.356745   45790 command_runner.go:130] >       "size": "742080",
	I0914 17:39:41.356753   45790 command_runner.go:130] >       "uid": {
	I0914 17:39:41.356761   45790 command_runner.go:130] >         "value": "65535"
	I0914 17:39:41.356769   45790 command_runner.go:130] >       },
	I0914 17:39:41.356777   45790 command_runner.go:130] >       "username": "",
	I0914 17:39:41.356786   45790 command_runner.go:130] >       "spec": null,
	I0914 17:39:41.356793   45790 command_runner.go:130] >       "pinned": true
	I0914 17:39:41.356801   45790 command_runner.go:130] >     }
	I0914 17:39:41.356807   45790 command_runner.go:130] >   ]
	I0914 17:39:41.356814   45790 command_runner.go:130] > }
	I0914 17:39:41.356953   45790 crio.go:514] all images are preloaded for cri-o runtime.
	I0914 17:39:41.356965   45790 cache_images.go:84] Images are preloaded, skipping loading
	I0914 17:39:41.356973   45790 kubeadm.go:934] updating node { 192.168.39.202 8443 v1.31.1 crio true true} ...
	I0914 17:39:41.357103   45790 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-396884 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.202
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:multinode-396884 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
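	After assembling the kubelet flags shown above, the log runs "crio config" to render the runtime's effective configuration (dumped below). As a hedged sketch only, assuming crio is installed on the node and callable with sudo, the following Go snippet re-runs that command and surfaces a few of the keys visible in the dump (cgroup_manager, conmon, pids_limit, storage_driver):

	package main

	import (
		"bufio"
		"bytes"
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// Re-run the same crio config call the log records; warnings go to stderr,
		// so Output() captures only the rendered TOML.
		out, err := exec.Command("sudo", "crio", "config").Output()
		if err != nil {
			panic(err)
		}
		keys := []string{"cgroup_manager", "conmon", "pids_limit", "storage_driver"}
		sc := bufio.NewScanner(bytes.NewReader(out))
		for sc.Scan() {
			line := strings.TrimSpace(sc.Text())
			for _, k := range keys {
				if strings.HasPrefix(line, k+" =") {
					fmt.Println(line)
				}
			}
		}
		if err := sc.Err(); err != nil {
			panic(err)
		}
	}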
	I0914 17:39:41.357181   45790 ssh_runner.go:195] Run: crio config
	I0914 17:39:41.394664   45790 command_runner.go:130] ! time="2024-09-14 17:39:41.361792585Z" level=info msg="Starting CRI-O, version: 1.29.1, git: unknown(clean)"
	I0914 17:39:41.399863   45790 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0914 17:39:41.407712   45790 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0914 17:39:41.407735   45790 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0914 17:39:41.407744   45790 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0914 17:39:41.407749   45790 command_runner.go:130] > #
	I0914 17:39:41.407758   45790 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0914 17:39:41.407767   45790 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0914 17:39:41.407775   45790 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0914 17:39:41.407790   45790 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0914 17:39:41.407797   45790 command_runner.go:130] > # reload'.
	I0914 17:39:41.407810   45790 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0914 17:39:41.407821   45790 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0914 17:39:41.407833   45790 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0914 17:39:41.407846   45790 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0914 17:39:41.407856   45790 command_runner.go:130] > [crio]
	I0914 17:39:41.407867   45790 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0914 17:39:41.407876   45790 command_runner.go:130] > # containers images, in this directory.
	I0914 17:39:41.407884   45790 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0914 17:39:41.407898   45790 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0914 17:39:41.407907   45790 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0914 17:39:41.407920   45790 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I0914 17:39:41.407930   45790 command_runner.go:130] > # imagestore = ""
	I0914 17:39:41.407941   45790 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0914 17:39:41.407954   45790 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0914 17:39:41.407964   45790 command_runner.go:130] > storage_driver = "overlay"
	I0914 17:39:41.407974   45790 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0914 17:39:41.407986   45790 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0914 17:39:41.408001   45790 command_runner.go:130] > storage_option = [
	I0914 17:39:41.408012   45790 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0914 17:39:41.408018   45790 command_runner.go:130] > ]
	I0914 17:39:41.408029   45790 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0914 17:39:41.408042   45790 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0914 17:39:41.408052   45790 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0914 17:39:41.408064   45790 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0914 17:39:41.408076   45790 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0914 17:39:41.408084   45790 command_runner.go:130] > # always happen on a node reboot
	I0914 17:39:41.408095   45790 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0914 17:39:41.408116   45790 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0914 17:39:41.408128   45790 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0914 17:39:41.408140   45790 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0914 17:39:41.408151   45790 command_runner.go:130] > version_file_persist = "/var/lib/crio/version"
	I0914 17:39:41.408165   45790 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0914 17:39:41.408184   45790 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0914 17:39:41.408193   45790 command_runner.go:130] > # internal_wipe = true
	I0914 17:39:41.408207   45790 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I0914 17:39:41.408219   45790 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I0914 17:39:41.408229   45790 command_runner.go:130] > # internal_repair = false
	I0914 17:39:41.408240   45790 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0914 17:39:41.408253   45790 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0914 17:39:41.408264   45790 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0914 17:39:41.408275   45790 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0914 17:39:41.408289   45790 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0914 17:39:41.408297   45790 command_runner.go:130] > [crio.api]
	I0914 17:39:41.408307   45790 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0914 17:39:41.408316   45790 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0914 17:39:41.408323   45790 command_runner.go:130] > # IP address on which the stream server will listen.
	I0914 17:39:41.408329   45790 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0914 17:39:41.408339   45790 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0914 17:39:41.408349   45790 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0914 17:39:41.408358   45790 command_runner.go:130] > # stream_port = "0"
	I0914 17:39:41.408376   45790 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0914 17:39:41.408385   45790 command_runner.go:130] > # stream_enable_tls = false
	I0914 17:39:41.408395   45790 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0914 17:39:41.408405   45790 command_runner.go:130] > # stream_idle_timeout = ""
	I0914 17:39:41.408417   45790 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0914 17:39:41.408429   45790 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0914 17:39:41.408437   45790 command_runner.go:130] > # minutes.
	I0914 17:39:41.408446   45790 command_runner.go:130] > # stream_tls_cert = ""
	I0914 17:39:41.408458   45790 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0914 17:39:41.408470   45790 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0914 17:39:41.408480   45790 command_runner.go:130] > # stream_tls_key = ""
	I0914 17:39:41.408490   45790 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0914 17:39:41.408504   45790 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0914 17:39:41.408534   45790 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0914 17:39:41.408544   45790 command_runner.go:130] > # stream_tls_ca = ""
	I0914 17:39:41.408556   45790 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I0914 17:39:41.408575   45790 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0914 17:39:41.408590   45790 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I0914 17:39:41.408599   45790 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I0914 17:39:41.408609   45790 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0914 17:39:41.408622   45790 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0914 17:39:41.408631   45790 command_runner.go:130] > [crio.runtime]
	I0914 17:39:41.408643   45790 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0914 17:39:41.408655   45790 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0914 17:39:41.408665   45790 command_runner.go:130] > # "nofile=1024:2048"
	I0914 17:39:41.408683   45790 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0914 17:39:41.408693   45790 command_runner.go:130] > # default_ulimits = [
	I0914 17:39:41.408699   45790 command_runner.go:130] > # ]
	I0914 17:39:41.408712   45790 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0914 17:39:41.408721   45790 command_runner.go:130] > # no_pivot = false
	I0914 17:39:41.408738   45790 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0914 17:39:41.408752   45790 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0914 17:39:41.408762   45790 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0914 17:39:41.408780   45790 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0914 17:39:41.408790   45790 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0914 17:39:41.408802   45790 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0914 17:39:41.408813   45790 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0914 17:39:41.408821   45790 command_runner.go:130] > # Cgroup setting for conmon
	I0914 17:39:41.408835   45790 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0914 17:39:41.408844   45790 command_runner.go:130] > conmon_cgroup = "pod"
	I0914 17:39:41.408855   45790 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0914 17:39:41.408866   45790 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0914 17:39:41.408879   45790 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0914 17:39:41.408886   45790 command_runner.go:130] > conmon_env = [
	I0914 17:39:41.408899   45790 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0914 17:39:41.408906   45790 command_runner.go:130] > ]
	I0914 17:39:41.408915   45790 command_runner.go:130] > # Additional environment variables to set for all the
	I0914 17:39:41.408926   45790 command_runner.go:130] > # containers. These are overridden if set in the
	I0914 17:39:41.408937   45790 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0914 17:39:41.408946   45790 command_runner.go:130] > # default_env = [
	I0914 17:39:41.408952   45790 command_runner.go:130] > # ]
	I0914 17:39:41.408961   45790 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0914 17:39:41.408974   45790 command_runner.go:130] > # This option is deprecated, and be interpreted from whether SELinux is enabled on the host in the future.
	I0914 17:39:41.408983   45790 command_runner.go:130] > # selinux = false
	I0914 17:39:41.408994   45790 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0914 17:39:41.409007   45790 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0914 17:39:41.409019   45790 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0914 17:39:41.409028   45790 command_runner.go:130] > # seccomp_profile = ""
	I0914 17:39:41.409038   45790 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0914 17:39:41.409049   45790 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0914 17:39:41.409060   45790 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0914 17:39:41.409070   45790 command_runner.go:130] > # which might increase security.
	I0914 17:39:41.409079   45790 command_runner.go:130] > # This option is currently deprecated,
	I0914 17:39:41.409091   45790 command_runner.go:130] > # and will be replaced by the SeccompDefault FeatureGate in Kubernetes.
	I0914 17:39:41.409111   45790 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0914 17:39:41.409124   45790 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0914 17:39:41.409142   45790 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0914 17:39:41.409158   45790 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0914 17:39:41.409171   45790 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0914 17:39:41.409183   45790 command_runner.go:130] > # This option supports live configuration reload.
	I0914 17:39:41.409193   45790 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0914 17:39:41.409203   45790 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0914 17:39:41.409213   45790 command_runner.go:130] > # the cgroup blockio controller.
	I0914 17:39:41.409222   45790 command_runner.go:130] > # blockio_config_file = ""
	I0914 17:39:41.409233   45790 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I0914 17:39:41.409243   45790 command_runner.go:130] > # blockio parameters.
	I0914 17:39:41.409251   45790 command_runner.go:130] > # blockio_reload = false
	I0914 17:39:41.409264   45790 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0914 17:39:41.409274   45790 command_runner.go:130] > # irqbalance daemon.
	I0914 17:39:41.409284   45790 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0914 17:39:41.409296   45790 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I0914 17:39:41.409308   45790 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I0914 17:39:41.409323   45790 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I0914 17:39:41.409335   45790 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I0914 17:39:41.409349   45790 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0914 17:39:41.409360   45790 command_runner.go:130] > # This option supports live configuration reload.
	I0914 17:39:41.409368   45790 command_runner.go:130] > # rdt_config_file = ""
	I0914 17:39:41.409379   45790 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0914 17:39:41.409387   45790 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0914 17:39:41.409428   45790 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0914 17:39:41.409439   45790 command_runner.go:130] > # separate_pull_cgroup = ""
	I0914 17:39:41.409449   45790 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0914 17:39:41.409462   45790 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0914 17:39:41.409468   45790 command_runner.go:130] > # will be added.
	I0914 17:39:41.409479   45790 command_runner.go:130] > # default_capabilities = [
	I0914 17:39:41.409487   45790 command_runner.go:130] > # 	"CHOWN",
	I0914 17:39:41.409494   45790 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0914 17:39:41.409501   45790 command_runner.go:130] > # 	"FSETID",
	I0914 17:39:41.409510   45790 command_runner.go:130] > # 	"FOWNER",
	I0914 17:39:41.409528   45790 command_runner.go:130] > # 	"SETGID",
	I0914 17:39:41.409537   45790 command_runner.go:130] > # 	"SETUID",
	I0914 17:39:41.409543   45790 command_runner.go:130] > # 	"SETPCAP",
	I0914 17:39:41.409550   45790 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0914 17:39:41.409564   45790 command_runner.go:130] > # 	"KILL",
	I0914 17:39:41.409572   45790 command_runner.go:130] > # ]
	I0914 17:39:41.409586   45790 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0914 17:39:41.409598   45790 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0914 17:39:41.409612   45790 command_runner.go:130] > # add_inheritable_capabilities = false
	I0914 17:39:41.409626   45790 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0914 17:39:41.409638   45790 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0914 17:39:41.409648   45790 command_runner.go:130] > default_sysctls = [
	I0914 17:39:41.409657   45790 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I0914 17:39:41.409664   45790 command_runner.go:130] > ]
	I0914 17:39:41.409681   45790 command_runner.go:130] > # List of devices on the host that a
	I0914 17:39:41.409694   45790 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0914 17:39:41.409703   45790 command_runner.go:130] > # allowed_devices = [
	I0914 17:39:41.409710   45790 command_runner.go:130] > # 	"/dev/fuse",
	I0914 17:39:41.409718   45790 command_runner.go:130] > # ]
	I0914 17:39:41.409728   45790 command_runner.go:130] > # List of additional devices. specified as
	I0914 17:39:41.409742   45790 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0914 17:39:41.409754   45790 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0914 17:39:41.409767   45790 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0914 17:39:41.409777   45790 command_runner.go:130] > # additional_devices = [
	I0914 17:39:41.409784   45790 command_runner.go:130] > # ]
	I0914 17:39:41.409794   45790 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0914 17:39:41.409804   45790 command_runner.go:130] > # cdi_spec_dirs = [
	I0914 17:39:41.409812   45790 command_runner.go:130] > # 	"/etc/cdi",
	I0914 17:39:41.409821   45790 command_runner.go:130] > # 	"/var/run/cdi",
	I0914 17:39:41.409827   45790 command_runner.go:130] > # ]
	I0914 17:39:41.409840   45790 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0914 17:39:41.409852   45790 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0914 17:39:41.409862   45790 command_runner.go:130] > # Defaults to false.
	I0914 17:39:41.409877   45790 command_runner.go:130] > # device_ownership_from_security_context = false
	I0914 17:39:41.409891   45790 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0914 17:39:41.409903   45790 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0914 17:39:41.409911   45790 command_runner.go:130] > # hooks_dir = [
	I0914 17:39:41.409920   45790 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0914 17:39:41.409929   45790 command_runner.go:130] > # ]
	I0914 17:39:41.409939   45790 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0914 17:39:41.409952   45790 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0914 17:39:41.409961   45790 command_runner.go:130] > # its default mounts from the following two files:
	I0914 17:39:41.409969   45790 command_runner.go:130] > #
	I0914 17:39:41.409980   45790 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0914 17:39:41.409993   45790 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0914 17:39:41.410005   45790 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0914 17:39:41.410010   45790 command_runner.go:130] > #
	I0914 17:39:41.410020   45790 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0914 17:39:41.410034   45790 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0914 17:39:41.410048   45790 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0914 17:39:41.410058   45790 command_runner.go:130] > #      only add mounts it finds in this file.
	I0914 17:39:41.410065   45790 command_runner.go:130] > #
	I0914 17:39:41.410074   45790 command_runner.go:130] > # default_mounts_file = ""
	I0914 17:39:41.410086   45790 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0914 17:39:41.410100   45790 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0914 17:39:41.410109   45790 command_runner.go:130] > pids_limit = 1024
	I0914 17:39:41.410120   45790 command_runner.go:130] > # Maximum sized allowed for the container log file. Negative numbers indicate
	I0914 17:39:41.410132   45790 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0914 17:39:41.410146   45790 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0914 17:39:41.410171   45790 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0914 17:39:41.410181   45790 command_runner.go:130] > # log_size_max = -1
	I0914 17:39:41.410193   45790 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0914 17:39:41.410203   45790 command_runner.go:130] > # log_to_journald = false
	I0914 17:39:41.410215   45790 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0914 17:39:41.410226   45790 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0914 17:39:41.410235   45790 command_runner.go:130] > # Path to directory for container attach sockets.
	I0914 17:39:41.410252   45790 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0914 17:39:41.410264   45790 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0914 17:39:41.410274   45790 command_runner.go:130] > # bind_mount_prefix = ""
	I0914 17:39:41.410286   45790 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0914 17:39:41.410296   45790 command_runner.go:130] > # read_only = false
	I0914 17:39:41.410308   45790 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0914 17:39:41.410319   45790 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0914 17:39:41.410326   45790 command_runner.go:130] > # live configuration reload.
	I0914 17:39:41.410335   45790 command_runner.go:130] > # log_level = "info"
	I0914 17:39:41.410345   45790 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0914 17:39:41.410356   45790 command_runner.go:130] > # This option supports live configuration reload.
	I0914 17:39:41.410365   45790 command_runner.go:130] > # log_filter = ""
	I0914 17:39:41.410377   45790 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0914 17:39:41.410392   45790 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0914 17:39:41.410401   45790 command_runner.go:130] > # separated by comma.
	I0914 17:39:41.410415   45790 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0914 17:39:41.410424   45790 command_runner.go:130] > # uid_mappings = ""
	I0914 17:39:41.410433   45790 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0914 17:39:41.410454   45790 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0914 17:39:41.410464   45790 command_runner.go:130] > # separated by comma.
	I0914 17:39:41.410477   45790 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0914 17:39:41.410489   45790 command_runner.go:130] > # gid_mappings = ""
	I0914 17:39:41.410502   45790 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0914 17:39:41.410514   45790 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0914 17:39:41.410524   45790 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0914 17:39:41.410540   45790 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0914 17:39:41.410549   45790 command_runner.go:130] > # minimum_mappable_uid = -1
	I0914 17:39:41.410563   45790 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0914 17:39:41.410576   45790 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0914 17:39:41.410592   45790 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0914 17:39:41.410607   45790 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0914 17:39:41.410614   45790 command_runner.go:130] > # minimum_mappable_gid = -1
	I0914 17:39:41.410628   45790 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0914 17:39:41.410647   45790 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0914 17:39:41.410659   45790 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0914 17:39:41.410668   45790 command_runner.go:130] > # ctr_stop_timeout = 30
	I0914 17:39:41.410678   45790 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0914 17:39:41.410690   45790 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0914 17:39:41.410699   45790 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0914 17:39:41.410710   45790 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0914 17:39:41.410718   45790 command_runner.go:130] > drop_infra_ctr = false
	I0914 17:39:41.410729   45790 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0914 17:39:41.410740   45790 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0914 17:39:41.410755   45790 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0914 17:39:41.410765   45790 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0914 17:39:41.410778   45790 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I0914 17:39:41.410788   45790 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I0914 17:39:41.410801   45790 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I0914 17:39:41.410812   45790 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I0914 17:39:41.410822   45790 command_runner.go:130] > # shared_cpuset = ""
	I0914 17:39:41.410835   45790 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0914 17:39:41.410846   45790 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0914 17:39:41.410857   45790 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0914 17:39:41.410872   45790 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0914 17:39:41.410882   45790 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0914 17:39:41.410893   45790 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I0914 17:39:41.410909   45790 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I0914 17:39:41.410919   45790 command_runner.go:130] > # enable_criu_support = false
	I0914 17:39:41.410930   45790 command_runner.go:130] > # Enable/disable the generation of the container,
	I0914 17:39:41.410943   45790 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I0914 17:39:41.410953   45790 command_runner.go:130] > # enable_pod_events = false
	I0914 17:39:41.410966   45790 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0914 17:39:41.410980   45790 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0914 17:39:41.410991   45790 command_runner.go:130] > # The name is matched against the runtimes map below.
	I0914 17:39:41.411000   45790 command_runner.go:130] > # default_runtime = "runc"
	I0914 17:39:41.411010   45790 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0914 17:39:41.411035   45790 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I0914 17:39:41.411053   45790 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0914 17:39:41.411064   45790 command_runner.go:130] > # creation as a file is not desired either.
	I0914 17:39:41.411080   45790 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0914 17:39:41.411091   45790 command_runner.go:130] > # the hostname is being managed dynamically.
	I0914 17:39:41.411100   45790 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0914 17:39:41.411108   45790 command_runner.go:130] > # ]
	I0914 17:39:41.411118   45790 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0914 17:39:41.411131   45790 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0914 17:39:41.411144   45790 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I0914 17:39:41.411162   45790 command_runner.go:130] > # Each entry in the table should follow the format:
	I0914 17:39:41.411171   45790 command_runner.go:130] > #
	I0914 17:39:41.411181   45790 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I0914 17:39:41.411192   45790 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I0914 17:39:41.411249   45790 command_runner.go:130] > # runtime_type = "oci"
	I0914 17:39:41.411260   45790 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I0914 17:39:41.411269   45790 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I0914 17:39:41.411280   45790 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I0914 17:39:41.411287   45790 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I0914 17:39:41.411294   45790 command_runner.go:130] > # monitor_env = []
	I0914 17:39:41.411305   45790 command_runner.go:130] > # privileged_without_host_devices = false
	I0914 17:39:41.411312   45790 command_runner.go:130] > # allowed_annotations = []
	I0914 17:39:41.411321   45790 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I0914 17:39:41.411329   45790 command_runner.go:130] > # Where:
	I0914 17:39:41.411339   45790 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I0914 17:39:41.411352   45790 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I0914 17:39:41.411364   45790 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0914 17:39:41.411380   45790 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0914 17:39:41.411389   45790 command_runner.go:130] > #   in $PATH.
	I0914 17:39:41.411401   45790 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I0914 17:39:41.411412   45790 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0914 17:39:41.411425   45790 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I0914 17:39:41.411434   45790 command_runner.go:130] > #   state.
	I0914 17:39:41.411451   45790 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0914 17:39:41.411463   45790 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I0914 17:39:41.411473   45790 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0914 17:39:41.411485   45790 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0914 17:39:41.411498   45790 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0914 17:39:41.411511   45790 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0914 17:39:41.411522   45790 command_runner.go:130] > #   The currently recognized values are:
	I0914 17:39:41.411533   45790 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0914 17:39:41.411548   45790 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0914 17:39:41.411565   45790 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0914 17:39:41.411578   45790 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0914 17:39:41.411593   45790 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0914 17:39:41.411607   45790 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0914 17:39:41.411621   45790 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I0914 17:39:41.411635   45790 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I0914 17:39:41.411646   45790 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0914 17:39:41.411660   45790 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I0914 17:39:41.411671   45790 command_runner.go:130] > #   deprecated option "conmon".
	I0914 17:39:41.411683   45790 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I0914 17:39:41.411695   45790 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I0914 17:39:41.411709   45790 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I0914 17:39:41.411718   45790 command_runner.go:130] > #   should be moved to the container's cgroup
	I0914 17:39:41.411730   45790 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I0914 17:39:41.411741   45790 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I0914 17:39:41.411755   45790 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I0914 17:39:41.411766   45790 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I0914 17:39:41.411774   45790 command_runner.go:130] > #
	I0914 17:39:41.411782   45790 command_runner.go:130] > # Using the seccomp notifier feature:
	I0914 17:39:41.411794   45790 command_runner.go:130] > #
	I0914 17:39:41.411806   45790 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I0914 17:39:41.411818   45790 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I0914 17:39:41.411825   45790 command_runner.go:130] > #
	I0914 17:39:41.411836   45790 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I0914 17:39:41.411856   45790 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I0914 17:39:41.411864   45790 command_runner.go:130] > #
	I0914 17:39:41.411874   45790 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I0914 17:39:41.411882   45790 command_runner.go:130] > # feature.
	I0914 17:39:41.411888   45790 command_runner.go:130] > #
	I0914 17:39:41.411900   45790 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I0914 17:39:41.411913   45790 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I0914 17:39:41.411927   45790 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I0914 17:39:41.411940   45790 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I0914 17:39:41.411953   45790 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I0914 17:39:41.411961   45790 command_runner.go:130] > #
	I0914 17:39:41.411971   45790 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I0914 17:39:41.411984   45790 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I0914 17:39:41.411991   45790 command_runner.go:130] > #
	I0914 17:39:41.412002   45790 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I0914 17:39:41.412014   45790 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I0914 17:39:41.412021   45790 command_runner.go:130] > #
	I0914 17:39:41.412032   45790 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I0914 17:39:41.412044   45790 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I0914 17:39:41.412053   45790 command_runner.go:130] > # limitation.
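	As an illustration of the seccomp notifier opt-in described in the config comments above, the sketch below builds a Pod that carries the "io.kubernetes.cri-o.seccompNotifierAction" annotation and sets restartPolicy to Never (required, as noted, so the kubelet does not restart the container). It assumes a runtime handler that allows this annotation; the pod name, image and command are hypothetical and this is only a sketch, not part of the minikube test itself.

	package main

	import (
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"sigs.k8s.io/yaml"
	)

	func main() {
		// Hypothetical pod that opts into the seccomp notifier feature.
		pod := corev1.Pod{
			TypeMeta: metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
			ObjectMeta: metav1.ObjectMeta{
				Name: "seccomp-notifier-demo", // hypothetical name
				Annotations: map[string]string{
					// Terminate the workload after the 5s timeout once a blocked syscall is seen.
					"io.kubernetes.cri-o.seccompNotifierAction": "stop",
				},
			},
			Spec: corev1.PodSpec{
				// Required, otherwise the kubelet restarts the container immediately.
				RestartPolicy: corev1.RestartPolicyNever,
				Containers: []corev1.Container{{
					Name:    "app",
					Image:   "gcr.io/k8s-minikube/busybox", // image already used elsewhere in this run
					Command: []string{"sleep", "3600"},
				}},
			},
		}
		out, err := yaml.Marshal(pod)
		if err != nil {
			panic(err)
		}
		fmt.Println(string(out))
	}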
	I0914 17:39:41.412062   45790 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0914 17:39:41.412073   45790 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0914 17:39:41.412082   45790 command_runner.go:130] > runtime_type = "oci"
	I0914 17:39:41.412090   45790 command_runner.go:130] > runtime_root = "/run/runc"
	I0914 17:39:41.412099   45790 command_runner.go:130] > runtime_config_path = ""
	I0914 17:39:41.412109   45790 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I0914 17:39:41.412119   45790 command_runner.go:130] > monitor_cgroup = "pod"
	I0914 17:39:41.412129   45790 command_runner.go:130] > monitor_exec_cgroup = ""
	I0914 17:39:41.412136   45790 command_runner.go:130] > monitor_env = [
	I0914 17:39:41.412149   45790 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0914 17:39:41.412157   45790 command_runner.go:130] > ]
	I0914 17:39:41.412166   45790 command_runner.go:130] > privileged_without_host_devices = false
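	The runc entry above is plain TOML in /etc/crio/crio.conf. As a minimal sketch (not CRI-O's actual config loader), the snippet below decodes a table of that shape into Go structs with the github.com/BurntSushi/toml package, keeping only the handler fields that appear in this log.

	package main

	import (
		"fmt"

		"github.com/BurntSushi/toml"
	)

	// Minimal view of a runtime handler table; not the full CRI-O schema.
	type runtimeHandler struct {
		RuntimePath   string   `toml:"runtime_path"`
		RuntimeType   string   `toml:"runtime_type"`
		RuntimeRoot   string   `toml:"runtime_root"`
		MonitorPath   string   `toml:"monitor_path"`
		MonitorCgroup string   `toml:"monitor_cgroup"`
		MonitorEnv    []string `toml:"monitor_env"`
	}

	type crioConfig struct {
		Crio struct {
			Runtime struct {
				Runtimes map[string]runtimeHandler `toml:"runtimes"`
			} `toml:"runtime"`
		} `toml:"crio"`
	}

	func main() {
		// Same shape as the logged [crio.runtime.runtimes.runc] section.
		data := `
	[crio.runtime.runtimes.runc]
	runtime_path = "/usr/bin/runc"
	runtime_type = "oci"
	runtime_root = "/run/runc"
	monitor_path = "/usr/libexec/crio/conmon"
	monitor_cgroup = "pod"
	monitor_env = ["PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"]
	`
		var cfg crioConfig
		if _, err := toml.Decode(data, &cfg); err != nil {
			panic(err)
		}
		fmt.Printf("runc handler: %+v\n", cfg.Crio.Runtime.Runtimes["runc"])
	}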
	I0914 17:39:41.412179   45790 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0914 17:39:41.412197   45790 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0914 17:39:41.412211   45790 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0914 17:39:41.412226   45790 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and a set of resources it supports mutating.
	I0914 17:39:41.412244   45790 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0914 17:39:41.412256   45790 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0914 17:39:41.412274   45790 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0914 17:39:41.412289   45790 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0914 17:39:41.412302   45790 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0914 17:39:41.412315   45790 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0914 17:39:41.412322   45790 command_runner.go:130] > # Example:
	I0914 17:39:41.412333   45790 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0914 17:39:41.412342   45790 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0914 17:39:41.412353   45790 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0914 17:39:41.412363   45790 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0914 17:39:41.412370   45790 command_runner.go:130] > # cpuset = 0
	I0914 17:39:41.412378   45790 command_runner.go:130] > # cpushares = "0-1"
	I0914 17:39:41.412386   45790 command_runner.go:130] > # Where:
	I0914 17:39:41.412394   45790 command_runner.go:130] > # The workload name is workload-type.
	I0914 17:39:41.412408   45790 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0914 17:39:41.412418   45790 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0914 17:39:41.412430   45790 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0914 17:39:41.412446   45790 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0914 17:39:41.412459   45790 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0914 17:39:41.412470   45790 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I0914 17:39:41.412484   45790 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I0914 17:39:41.412493   45790 command_runner.go:130] > # Default value is set to true
	I0914 17:39:41.412501   45790 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I0914 17:39:41.412513   45790 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I0914 17:39:41.412524   45790 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I0914 17:39:41.412535   45790 command_runner.go:130] > # Default value is set to 'false'
	I0914 17:39:41.412546   45790 command_runner.go:130] > # disable_hostport_mapping = false
	I0914 17:39:41.412563   45790 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0914 17:39:41.412571   45790 command_runner.go:130] > #
	I0914 17:39:41.412585   45790 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0914 17:39:41.412593   45790 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0914 17:39:41.412608   45790 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0914 17:39:41.412616   45790 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0914 17:39:41.412628   45790 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0914 17:39:41.412635   45790 command_runner.go:130] > [crio.image]
	I0914 17:39:41.412649   45790 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0914 17:39:41.412657   45790 command_runner.go:130] > # default_transport = "docker://"
	I0914 17:39:41.412666   45790 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0914 17:39:41.412676   45790 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0914 17:39:41.412683   45790 command_runner.go:130] > # global_auth_file = ""
	I0914 17:39:41.412690   45790 command_runner.go:130] > # The image used to instantiate infra containers.
	I0914 17:39:41.412699   45790 command_runner.go:130] > # This option supports live configuration reload.
	I0914 17:39:41.412706   45790 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.10"
	I0914 17:39:41.412718   45790 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0914 17:39:41.412727   45790 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0914 17:39:41.412736   45790 command_runner.go:130] > # This option supports live configuration reload.
	I0914 17:39:41.412742   45790 command_runner.go:130] > # pause_image_auth_file = ""
	I0914 17:39:41.412751   45790 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0914 17:39:41.412760   45790 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I0914 17:39:41.412770   45790 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I0914 17:39:41.412779   45790 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0914 17:39:41.412786   45790 command_runner.go:130] > # pause_command = "/pause"
	I0914 17:39:41.412795   45790 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I0914 17:39:41.412805   45790 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I0914 17:39:41.412814   45790 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I0914 17:39:41.412832   45790 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I0914 17:39:41.412845   45790 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I0914 17:39:41.412858   45790 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I0914 17:39:41.412868   45790 command_runner.go:130] > # pinned_images = [
	I0914 17:39:41.412876   45790 command_runner.go:130] > # ]
	I0914 17:39:41.412887   45790 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0914 17:39:41.412898   45790 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0914 17:39:41.412919   45790 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0914 17:39:41.412932   45790 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0914 17:39:41.412944   45790 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0914 17:39:41.412953   45790 command_runner.go:130] > # signature_policy = ""
	I0914 17:39:41.412965   45790 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I0914 17:39:41.412977   45790 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I0914 17:39:41.412990   45790 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I0914 17:39:41.413006   45790 command_runner.go:130] > # or the concatenated path is non existent, then the signature_policy or system
	I0914 17:39:41.413018   45790 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I0914 17:39:41.413030   45790 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I0914 17:39:41.413042   45790 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0914 17:39:41.413055   45790 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0914 17:39:41.413062   45790 command_runner.go:130] > # changing them here.
	I0914 17:39:41.413071   45790 command_runner.go:130] > # insecure_registries = [
	I0914 17:39:41.413077   45790 command_runner.go:130] > # ]
	I0914 17:39:41.413088   45790 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0914 17:39:41.413099   45790 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0914 17:39:41.413109   45790 command_runner.go:130] > # image_volumes = "mkdir"
	I0914 17:39:41.413118   45790 command_runner.go:130] > # Temporary directory to use for storing big files
	I0914 17:39:41.413129   45790 command_runner.go:130] > # big_files_temporary_dir = ""
	I0914 17:39:41.413141   45790 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0914 17:39:41.413148   45790 command_runner.go:130] > # CNI plugins.
	I0914 17:39:41.413158   45790 command_runner.go:130] > [crio.network]
	I0914 17:39:41.413169   45790 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0914 17:39:41.413181   45790 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0914 17:39:41.413192   45790 command_runner.go:130] > # cni_default_network = ""
	I0914 17:39:41.413205   45790 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0914 17:39:41.413215   45790 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0914 17:39:41.413226   45790 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0914 17:39:41.413234   45790 command_runner.go:130] > # plugin_dirs = [
	I0914 17:39:41.413242   45790 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0914 17:39:41.413248   45790 command_runner.go:130] > # ]
	I0914 17:39:41.413260   45790 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0914 17:39:41.413278   45790 command_runner.go:130] > [crio.metrics]
	I0914 17:39:41.413289   45790 command_runner.go:130] > # Globally enable or disable metrics support.
	I0914 17:39:41.413297   45790 command_runner.go:130] > enable_metrics = true
	I0914 17:39:41.413306   45790 command_runner.go:130] > # Specify enabled metrics collectors.
	I0914 17:39:41.413316   45790 command_runner.go:130] > # Per default all metrics are enabled.
	I0914 17:39:41.413326   45790 command_runner.go:130] > # It is possible, to prefix the metrics with "container_runtime_" and "crio_".
	I0914 17:39:41.413345   45790 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0914 17:39:41.413358   45790 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0914 17:39:41.413367   45790 command_runner.go:130] > # metrics_collectors = [
	I0914 17:39:41.413374   45790 command_runner.go:130] > # 	"operations",
	I0914 17:39:41.413384   45790 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0914 17:39:41.413392   45790 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0914 17:39:41.413402   45790 command_runner.go:130] > # 	"operations_errors",
	I0914 17:39:41.413412   45790 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0914 17:39:41.413421   45790 command_runner.go:130] > # 	"image_pulls_by_name",
	I0914 17:39:41.413429   45790 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0914 17:39:41.413441   45790 command_runner.go:130] > # 	"image_pulls_failures",
	I0914 17:39:41.413450   45790 command_runner.go:130] > # 	"image_pulls_successes",
	I0914 17:39:41.413458   45790 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0914 17:39:41.413466   45790 command_runner.go:130] > # 	"image_layer_reuse",
	I0914 17:39:41.413474   45790 command_runner.go:130] > # 	"containers_events_dropped_total",
	I0914 17:39:41.413484   45790 command_runner.go:130] > # 	"containers_oom_total",
	I0914 17:39:41.413492   45790 command_runner.go:130] > # 	"containers_oom",
	I0914 17:39:41.413501   45790 command_runner.go:130] > # 	"processes_defunct",
	I0914 17:39:41.413520   45790 command_runner.go:130] > # 	"operations_total",
	I0914 17:39:41.413531   45790 command_runner.go:130] > # 	"operations_latency_seconds",
	I0914 17:39:41.413541   45790 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0914 17:39:41.413549   45790 command_runner.go:130] > # 	"operations_errors_total",
	I0914 17:39:41.413563   45790 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0914 17:39:41.413573   45790 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0914 17:39:41.413581   45790 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0914 17:39:41.413591   45790 command_runner.go:130] > # 	"image_pulls_success_total",
	I0914 17:39:41.413601   45790 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0914 17:39:41.413616   45790 command_runner.go:130] > # 	"containers_oom_count_total",
	I0914 17:39:41.413627   45790 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I0914 17:39:41.413638   45790 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I0914 17:39:41.413644   45790 command_runner.go:130] > # ]
	I0914 17:39:41.413652   45790 command_runner.go:130] > # The port on which the metrics server will listen.
	I0914 17:39:41.413663   45790 command_runner.go:130] > # metrics_port = 9090
	I0914 17:39:41.413674   45790 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0914 17:39:41.413681   45790 command_runner.go:130] > # metrics_socket = ""
	I0914 17:39:41.413692   45790 command_runner.go:130] > # The certificate for the secure metrics server.
	I0914 17:39:41.413704   45790 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0914 17:39:41.413717   45790 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0914 17:39:41.413728   45790 command_runner.go:130] > # certificate on any modification event.
	I0914 17:39:41.413735   45790 command_runner.go:130] > # metrics_cert = ""
	I0914 17:39:41.413745   45790 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0914 17:39:41.413756   45790 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0914 17:39:41.413765   45790 command_runner.go:130] > # metrics_key = ""
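	Since enable_metrics is true in this config, the node exposes Prometheus metrics. The sketch below assumes the default metrics_port of 9090 and access from the node itself; it fetches the endpoint and prints the samples carrying the "crio_" or "container_runtime_" prefixes mentioned in the collector comments above.

	package main

	import (
		"bufio"
		"fmt"
		"net/http"
		"strings"
	)

	func main() {
		// Assumes CRI-O metrics are enabled and served on the default port 9090.
		resp, err := http.Get("http://127.0.0.1:9090/metrics")
		if err != nil {
			panic(err)
		}
		defer resp.Body.Close()

		sc := bufio.NewScanner(resp.Body)
		for sc.Scan() {
			line := sc.Text()
			// Only print CRI-O's own samples.
			if strings.HasPrefix(line, "crio_") || strings.HasPrefix(line, "container_runtime_") {
				fmt.Println(line)
			}
		}
		if err := sc.Err(); err != nil {
			panic(err)
		}
	}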
	I0914 17:39:41.413777   45790 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0914 17:39:41.413785   45790 command_runner.go:130] > [crio.tracing]
	I0914 17:39:41.413794   45790 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0914 17:39:41.413803   45790 command_runner.go:130] > # enable_tracing = false
	I0914 17:39:41.413813   45790 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I0914 17:39:41.413824   45790 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0914 17:39:41.413837   45790 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I0914 17:39:41.413848   45790 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0914 17:39:41.413858   45790 command_runner.go:130] > # CRI-O NRI configuration.
	I0914 17:39:41.413865   45790 command_runner.go:130] > [crio.nri]
	I0914 17:39:41.413875   45790 command_runner.go:130] > # Globally enable or disable NRI.
	I0914 17:39:41.413883   45790 command_runner.go:130] > # enable_nri = false
	I0914 17:39:41.413895   45790 command_runner.go:130] > # NRI socket to listen on.
	I0914 17:39:41.413905   45790 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I0914 17:39:41.413915   45790 command_runner.go:130] > # NRI plugin directory to use.
	I0914 17:39:41.413924   45790 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I0914 17:39:41.413935   45790 command_runner.go:130] > # NRI plugin configuration directory to use.
	I0914 17:39:41.413954   45790 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I0914 17:39:41.413967   45790 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I0914 17:39:41.413977   45790 command_runner.go:130] > # nri_disable_connections = false
	I0914 17:39:41.413984   45790 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I0914 17:39:41.413996   45790 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I0914 17:39:41.414005   45790 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I0914 17:39:41.414015   45790 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I0914 17:39:41.414027   45790 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0914 17:39:41.414035   45790 command_runner.go:130] > [crio.stats]
	I0914 17:39:41.414045   45790 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0914 17:39:41.414056   45790 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0914 17:39:41.414066   45790 command_runner.go:130] > # stats_collection_period = 0
	I0914 17:39:41.414202   45790 cni.go:84] Creating CNI manager for ""
	I0914 17:39:41.414220   45790 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0914 17:39:41.414236   45790 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0914 17:39:41.414256   45790 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.202 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-396884 NodeName:multinode-396884 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.202"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.202 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:
/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0914 17:39:41.414408   45790 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.202
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-396884"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.202
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.202"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
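	The kubeadm config above is rendered by minikube from the options logged at kubeadm.go:181 (advertise address, API server port, CRI socket, node name). As a rough sketch only, under the assumption that a text/template with these fields captures the idea (the real templates live in minikube's bootstrapper), the snippet below re-renders just the InitConfiguration fragment from the same values.

	package main

	import (
		"os"
		"text/template"
	)

	// Toy fragment mirroring the start of the logged kubeadm config.
	const fragment = `apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: {{.AdvertiseAddress}}
	  bindPort: {{.APIServerPort}}
	nodeRegistration:
	  criSocket: {{.CRISocket}}
	  name: "{{.NodeName}}"
	  kubeletExtraArgs:
	    node-ip: {{.NodeIP}}
	`

	type params struct {
		AdvertiseAddress string
		APIServerPort    int
		CRISocket        string
		NodeName         string
		NodeIP           string
	}

	func main() {
		p := params{
			AdvertiseAddress: "192.168.39.202",
			APIServerPort:    8443,
			CRISocket:        "unix:///var/run/crio/crio.sock",
			NodeName:         "multinode-396884",
			NodeIP:           "192.168.39.202",
		}
		tmpl := template.Must(template.New("kubeadm").Parse(fragment))
		if err := tmpl.Execute(os.Stdout, p); err != nil {
			panic(err)
		}
	}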
	
	I0914 17:39:41.414475   45790 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0914 17:39:41.424527   45790 command_runner.go:130] > kubeadm
	I0914 17:39:41.424547   45790 command_runner.go:130] > kubectl
	I0914 17:39:41.424555   45790 command_runner.go:130] > kubelet
	I0914 17:39:41.424598   45790 binaries.go:44] Found k8s binaries, skipping transfer
	I0914 17:39:41.424647   45790 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0914 17:39:41.433668   45790 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0914 17:39:41.450425   45790 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0914 17:39:41.466569   45790 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2160 bytes)
	I0914 17:39:41.483294   45790 ssh_runner.go:195] Run: grep 192.168.39.202	control-plane.minikube.internal$ /etc/hosts
	I0914 17:39:41.487170   45790 command_runner.go:130] > 192.168.39.202	control-plane.minikube.internal
	I0914 17:39:41.487281   45790 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 17:39:41.631325   45790 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0914 17:39:41.645740   45790 certs.go:68] Setting up /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/multinode-396884 for IP: 192.168.39.202
	I0914 17:39:41.645759   45790 certs.go:194] generating shared ca certs ...
	I0914 17:39:41.645778   45790 certs.go:226] acquiring lock for ca certs: {Name:mkb663a3180967f5f94f0c355b2cd55067394331 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 17:39:41.645931   45790 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19643-8806/.minikube/ca.key
	I0914 17:39:41.645997   45790 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19643-8806/.minikube/proxy-client-ca.key
	I0914 17:39:41.646016   45790 certs.go:256] generating profile certs ...
	I0914 17:39:41.646115   45790 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/multinode-396884/client.key
	I0914 17:39:41.646199   45790 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/multinode-396884/apiserver.key.347dc4ff
	I0914 17:39:41.646259   45790 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/multinode-396884/proxy-client.key
	I0914 17:39:41.646273   45790 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19643-8806/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0914 17:39:41.646294   45790 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19643-8806/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0914 17:39:41.646333   45790 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19643-8806/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0914 17:39:41.646352   45790 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19643-8806/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0914 17:39:41.646367   45790 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/multinode-396884/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0914 17:39:41.646394   45790 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/multinode-396884/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0914 17:39:41.646413   45790 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/multinode-396884/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0914 17:39:41.646429   45790 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/multinode-396884/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0914 17:39:41.646497   45790 certs.go:484] found cert: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/16016.pem (1338 bytes)
	W0914 17:39:41.646536   45790 certs.go:480] ignoring /home/jenkins/minikube-integration/19643-8806/.minikube/certs/16016_empty.pem, impossibly tiny 0 bytes
	I0914 17:39:41.646549   45790 certs.go:484] found cert: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca-key.pem (1679 bytes)
	I0914 17:39:41.646594   45790 certs.go:484] found cert: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca.pem (1082 bytes)
	I0914 17:39:41.646627   45790 certs.go:484] found cert: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/cert.pem (1123 bytes)
	I0914 17:39:41.646662   45790 certs.go:484] found cert: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/key.pem (1675 bytes)
	I0914 17:39:41.646716   45790 certs.go:484] found cert: /home/jenkins/minikube-integration/19643-8806/.minikube/files/etc/ssl/certs/160162.pem (1708 bytes)
	I0914 17:39:41.646761   45790 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19643-8806/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0914 17:39:41.646780   45790 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/16016.pem -> /usr/share/ca-certificates/16016.pem
	I0914 17:39:41.646803   45790 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19643-8806/.minikube/files/etc/ssl/certs/160162.pem -> /usr/share/ca-certificates/160162.pem
	I0914 17:39:41.648082   45790 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0914 17:39:41.673629   45790 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0914 17:39:41.696244   45790 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0914 17:39:41.719185   45790 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0914 17:39:41.741665   45790 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/multinode-396884/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0914 17:39:41.764322   45790 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/multinode-396884/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0914 17:39:41.786944   45790 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/multinode-396884/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0914 17:39:41.810016   45790 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/multinode-396884/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0914 17:39:41.833918   45790 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0914 17:39:41.856513   45790 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/certs/16016.pem --> /usr/share/ca-certificates/16016.pem (1338 bytes)
	I0914 17:39:41.879583   45790 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/files/etc/ssl/certs/160162.pem --> /usr/share/ca-certificates/160162.pem (1708 bytes)
	I0914 17:39:41.903897   45790 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0914 17:39:41.920358   45790 ssh_runner.go:195] Run: openssl version
	I0914 17:39:41.926145   45790 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0914 17:39:41.926266   45790 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/160162.pem && ln -fs /usr/share/ca-certificates/160162.pem /etc/ssl/certs/160162.pem"
	I0914 17:39:41.936553   45790 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/160162.pem
	I0914 17:39:41.940863   45790 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Sep 14 17:01 /usr/share/ca-certificates/160162.pem
	I0914 17:39:41.941034   45790 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 14 17:01 /usr/share/ca-certificates/160162.pem
	I0914 17:39:41.941090   45790 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/160162.pem
	I0914 17:39:41.946396   45790 command_runner.go:130] > 3ec20f2e
	I0914 17:39:41.946563   45790 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/160162.pem /etc/ssl/certs/3ec20f2e.0"
	I0914 17:39:41.955883   45790 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0914 17:39:41.967080   45790 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0914 17:39:41.972110   45790 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Sep 14 16:44 /usr/share/ca-certificates/minikubeCA.pem
	I0914 17:39:41.972204   45790 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 14 16:44 /usr/share/ca-certificates/minikubeCA.pem
	I0914 17:39:41.972254   45790 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0914 17:39:41.977901   45790 command_runner.go:130] > b5213941
	I0914 17:39:41.977987   45790 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0914 17:39:41.988167   45790 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16016.pem && ln -fs /usr/share/ca-certificates/16016.pem /etc/ssl/certs/16016.pem"
	I0914 17:39:41.999538   45790 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16016.pem
	I0914 17:39:42.004108   45790 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Sep 14 17:01 /usr/share/ca-certificates/16016.pem
	I0914 17:39:42.004345   45790 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 14 17:01 /usr/share/ca-certificates/16016.pem
	I0914 17:39:42.004406   45790 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16016.pem
	I0914 17:39:42.009803   45790 command_runner.go:130] > 51391683
	I0914 17:39:42.009979   45790 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16016.pem /etc/ssl/certs/51391683.0"
	I0914 17:39:42.019163   45790 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0914 17:39:42.023367   45790 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0914 17:39:42.023393   45790 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0914 17:39:42.023401   45790 command_runner.go:130] > Device: 253,1	Inode: 6289960     Links: 1
	I0914 17:39:42.023411   45790 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0914 17:39:42.023423   45790 command_runner.go:130] > Access: 2024-09-14 17:33:01.245158305 +0000
	I0914 17:39:42.023432   45790 command_runner.go:130] > Modify: 2024-09-14 17:33:01.245158305 +0000
	I0914 17:39:42.023440   45790 command_runner.go:130] > Change: 2024-09-14 17:33:01.245158305 +0000
	I0914 17:39:42.023448   45790 command_runner.go:130] >  Birth: 2024-09-14 17:33:01.245158305 +0000
	I0914 17:39:42.023518   45790 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0914 17:39:42.028932   45790 command_runner.go:130] > Certificate will not expire
	I0914 17:39:42.029135   45790 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0914 17:39:42.034621   45790 command_runner.go:130] > Certificate will not expire
	I0914 17:39:42.034693   45790 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0914 17:39:42.039883   45790 command_runner.go:130] > Certificate will not expire
	I0914 17:39:42.040179   45790 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0914 17:39:42.045353   45790 command_runner.go:130] > Certificate will not expire
	I0914 17:39:42.045530   45790 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0914 17:39:42.051207   45790 command_runner.go:130] > Certificate will not expire
	I0914 17:39:42.051274   45790 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0914 17:39:42.056685   45790 command_runner.go:130] > Certificate will not expire
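	The `openssl x509 -checkend 86400` runs above ask whether each certificate expires within the next 24 hours. A rough Go equivalent of that check, using one of the certificate paths from the log purely as an example, could look like this.

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	// expiresWithin reports whether the PEM certificate at path expires within d.
	func expiresWithin(path string, d time.Duration) (bool, error) {
		raw, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(raw)
		if block == nil {
			return false, fmt.Errorf("no PEM data in %s", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(d).After(cert.NotAfter), nil
	}

	func main() {
		soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
		if err != nil {
			panic(err)
		}
		if soon {
			fmt.Println("Certificate will expire")
		} else {
			fmt.Println("Certificate will not expire")
		}
	}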
	I0914 17:39:42.056847   45790 kubeadm.go:392] StartCluster: {Name:multinode-396884 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19643/minikube-v1.34.0-1726281733-19643-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726281268-19643@sha256:cce8be0c1ac4e3d852132008ef1cc1dcf5b79f708d025db83f146ae65db32e8e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.
1 ClusterName:multinode-396884 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.202 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.97 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.207 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false
inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableO
ptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0914 17:39:42.057000   45790 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0914 17:39:42.057055   45790 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0914 17:39:42.091160   45790 command_runner.go:130] > 7b20bcea57368d20449e198aad2f67b5a322fe3fd9193bb91ab19d01c689bc8c
	I0914 17:39:42.091191   45790 command_runner.go:130] > 7e78c4f8c735e3a5fe2823017862c3af278bec6d973c484caa0dbc504f5de21c
	I0914 17:39:42.091201   45790 command_runner.go:130] > e2a1dfc2e08a6e79deb13754c9100ee0b892c3bf6688372c74501fc674c759b3
	I0914 17:39:42.091210   45790 command_runner.go:130] > 7b44a546c6b2a4cb208db8928953e879c3fdf0af3e39bc8e4db8fde5b5d45382
	I0914 17:39:42.091219   45790 command_runner.go:130] > 5390064e87e606423a1b9d0f32f83c0a7715e0d58d537080aca871e78d0c814b
	I0914 17:39:42.091228   45790 command_runner.go:130] > b335b9702caa3c44ed4cefa8b2277dfa02ef9b4afbdbda180ddb2ebbfc9d76df
	I0914 17:39:42.091237   45790 command_runner.go:130] > 0bd11dfe3a3f4013197bc7de619061491c70c793a47432b085d927da7165d49f
	I0914 17:39:42.091250   45790 command_runner.go:130] > 6ea4b28b7bae4cd4bd3e7b2ca7e3c8310f9e12b908437db9f5c44b01371058b6
	I0914 17:39:42.091270   45790 cri.go:89] found id: "7b20bcea57368d20449e198aad2f67b5a322fe3fd9193bb91ab19d01c689bc8c"
	I0914 17:39:42.091277   45790 cri.go:89] found id: "7e78c4f8c735e3a5fe2823017862c3af278bec6d973c484caa0dbc504f5de21c"
	I0914 17:39:42.091280   45790 cri.go:89] found id: "e2a1dfc2e08a6e79deb13754c9100ee0b892c3bf6688372c74501fc674c759b3"
	I0914 17:39:42.091286   45790 cri.go:89] found id: "7b44a546c6b2a4cb208db8928953e879c3fdf0af3e39bc8e4db8fde5b5d45382"
	I0914 17:39:42.091291   45790 cri.go:89] found id: "5390064e87e606423a1b9d0f32f83c0a7715e0d58d537080aca871e78d0c814b"
	I0914 17:39:42.091294   45790 cri.go:89] found id: "b335b9702caa3c44ed4cefa8b2277dfa02ef9b4afbdbda180ddb2ebbfc9d76df"
	I0914 17:39:42.091297   45790 cri.go:89] found id: "0bd11dfe3a3f4013197bc7de619061491c70c793a47432b085d927da7165d49f"
	I0914 17:39:42.091300   45790 cri.go:89] found id: "6ea4b28b7bae4cd4bd3e7b2ca7e3c8310f9e12b908437db9f5c44b01371058b6"
	I0914 17:39:42.091329   45790 cri.go:89] found id: ""
	I0914 17:39:42.091383   45790 ssh_runner.go:195] Run: sudo runc list -f json
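	The "listing CRI containers" step above shells out to crictl and collects container IDs for the kube-system namespace. A minimal sketch of the same call, run on the node with crictl in PATH, is:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// Same flags as the logged command: all containers, IDs only,
		// filtered to the kube-system pod namespace label.
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
			"--label", "io.kubernetes.pod.namespace=kube-system").Output()
		if err != nil {
			panic(err)
		}
		for _, id := range strings.Fields(string(out)) {
			fmt.Println("found id:", id)
		}
	}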
	
	
	==> CRI-O <==
	Sep 14 17:43:51 multinode-396884 crio[2683]: time="2024-09-14 17:43:51.960616695Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726335831960590327,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9c6dcbec-7e9e-40c6-a152-b8d1d324f005 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 14 17:43:51 multinode-396884 crio[2683]: time="2024-09-14 17:43:51.961093436Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2daf9566-000d-41bd-9067-8d36b377eb8a name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 17:43:51 multinode-396884 crio[2683]: time="2024-09-14 17:43:51.961160456Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2daf9566-000d-41bd-9067-8d36b377eb8a name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 17:43:51 multinode-396884 crio[2683]: time="2024-09-14 17:43:51.961535427Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:37156bea17af948c73f9db1576deb474807ffab67606772dd45a53bb46466f7b,PodSandboxId:1244304d2327d4ca96f9dab53897270221bd32419f7fea1de407009297b15eed,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726335616192564384,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-pzr7k,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d987f1f7-c417-47ff-bf9e-c8aeff216125,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fef6d936d83c959c7a8ea9056fd8ae85068791472148efab33b4c05016da159a,PodSandboxId:14603a5b9b9408fcfd78636f81733a13d313a772f550615ff0f6549f766439d9,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1726335588311575741,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-z4d6c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: effa9e73-ccda-4492-969d-fadbf8054d16,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c2fc74db4fa9be89515943bde72512233a0dd0bd6c64a5fcbbe758b9e1cf5a1b,PodSandboxId:c608d166e9cfc40559f4dd60682e535bef80339cc05f2d92bb8a8350ab8cf5dc,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726335588354802246,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-qtpcg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 13529408-14c2-4b62-8089-9c2842942ddd,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bdfae496458ecf210d5b42e38ae9b96165a86c5bc23d9c6847c96ae81abe7f30,PodSandboxId:ae272afecc7dc9234d085a47aaf7f28ea47cfd632ca9b39ce4636321d2dc2b3f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726335588344900527,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qmlbf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51c467d5-cdb4-4d97-81e9-a0cf6078cc3b,},Annotations:map[string]
string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:76fd47ab3d7c09b43503888ee6e717dc4029824bd3cf102a57b12f7da49cc824,PodSandboxId:44ff81b98a114b0cf7e0950f46bdaea4f3b4b72413c249aa4fa1b0334aeaab1a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726335588359084081,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 90e4e4a9-b67b-4f18-8c77-5caccac87a1a,},Annotations:map[string]string{io.ku
bernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ff3fe0a199c09e50cadb0723f1ebc76b3a4c8700b517a6e0b02304b5b0f92b15,PodSandboxId:a4cd5972443f168b3571e8b0550994530aba19bcbd02275afdb2a23c710107bf,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726335584419784393,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-396884,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b301887ecb32aa4527128a8e7072c3ec,},Annotations:map[string
]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5265cfc6ac2fca92414c21f818a2645bb645bbf7a17299e25e6b0276eab7b351,PodSandboxId:ab897e6a5ff609c2669de03faceb1ee55721d946de43422df637770b243798ee,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726335584436197078,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-396884,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aad7806168f922aedac7c9352d482fc7,},Annotations:map[string]string{io.kub
ernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:65a07f4361254a650dec89e9a12b0e50e904390a219fa291f722cf3e0bce0d18,PodSandboxId:5e42fc4c6013d338009371f6e10074ee267ff21cb005cdb5442c2cb447dc043a,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726335584403779987,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-396884,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 831f0a541da6b9f9926e0f36ffcd8217,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4e51c0a262ad82470037500c6a30af75482164a1c2b2eb61692fcbcf077a5307,PodSandboxId:6c45ca539be4df4195df7d0f62df9f16607f25ad66a6b30154bdf1b60615a57d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726335584426132500,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-396884,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 342268f2615b90c8e7af26c283cd51b1,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c8338e88fc1b00c174df10b94f7a54081a5c6acdbab875508b5434f77cb7ae14,PodSandboxId:6ed76286a869683a9ffd5f5f55a8adec237ace2318b40db692ef32c3776fae42,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1726335264253530601,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-pzr7k,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d987f1f7-c417-47ff-bf9e-c8aeff216125,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b20bcea57368d20449e198aad2f67b5a322fe3fd9193bb91ab19d01c689bc8c,PodSandboxId:cd1d8929e4d25040b30e14825a30ee8976a19180c03418bf616c73633a034b77,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726335207428200278,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-qtpcg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 13529408-14c2-4b62-8089-9c2842942ddd,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contain
erPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e78c4f8c735e3a5fe2823017862c3af278bec6d973c484caa0dbc504f5de21c,PodSandboxId:c4e259c738185ce125a2640f7c8f00a0d334e28fd116b1ff3fed6693c59bd27b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726335207096833878,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: 90e4e4a9-b67b-4f18-8c77-5caccac87a1a,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2a1dfc2e08a6e79deb13754c9100ee0b892c3bf6688372c74501fc674c759b3,PodSandboxId:d9d680aef76b132627444945b0b3b7a86c7925f6dc74bed56bedf11c10a108bb,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1726335195465082311,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-z4d6c,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: effa9e73-ccda-4492-969d-fadbf8054d16,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b44a546c6b2a4cb208db8928953e879c3fdf0af3e39bc8e4db8fde5b5d45382,PodSandboxId:4576692beffea39fc5e0a6e06be363320bfdb75335e63b181910ef4e7de71067,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726335195384963971,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qmlbf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51c467d5-cdb4-4d97-81e9
-a0cf6078cc3b,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5390064e87e606423a1b9d0f32f83c0a7715e0d58d537080aca871e78d0c814b,PodSandboxId:5175a3a2c4a6c507f605270b58d2309ee6fee67da64c6d2897ef82057b3c76ca,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726335184498002618,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-396884,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aad7806168f922aedac7c9352d482fc7,}
,Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b335b9702caa3c44ed4cefa8b2277dfa02ef9b4afbdbda180ddb2ebbfc9d76df,PodSandboxId:4a30deaeaaf32abd67a48479a463abd3ad638a8d294cf52b027f68841c4d9927,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726335184481155648,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-396884,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b301887ecb32aa4527128a
8e7072c3ec,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0bd11dfe3a3f4013197bc7de619061491c70c793a47432b085d927da7165d49f,PodSandboxId:9d3c73752580a9d069b6b778a3aa8d14a016a60e885e1334863acdef0818f1c7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726335184451145589,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-396884,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 342268f2615b90c8e7af26c283cd51b1,},An
notations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ea4b28b7bae4cd4bd3e7b2ca7e3c8310f9e12b908437db9f5c44b01371058b6,PodSandboxId:a383e42333e08fb468bcd50c8cb9b248f480b53fc88a8bdc1aa32e71fae0adba,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1726335184409475185,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-396884,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 831f0a541da6b9f9926e0f36ffcd8217,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=2daf9566-000d-41bd-9067-8d36b377eb8a name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 17:43:51 multinode-396884 crio[2683]: time="2024-09-14 17:43:51.979457520Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:&PodSandboxFilter{Id:,State:&PodSandboxStateValue{State:SANDBOX_READY,},LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=34a1a621-685f-4af3-9c26-ff6d317453a9 name=/runtime.v1.RuntimeService/ListPodSandbox
	Sep 14 17:43:51 multinode-396884 crio[2683]: time="2024-09-14 17:43:51.979719354Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:1244304d2327d4ca96f9dab53897270221bd32419f7fea1de407009297b15eed,Metadata:&PodSandboxMetadata{Name:busybox-7dff88458-pzr7k,Uid:d987f1f7-c417-47ff-bf9e-c8aeff216125,Namespace:default,Attempt:1,},State:SANDBOX_READY,CreatedAt:1726335616054979802,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-7dff88458-pzr7k,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d987f1f7-c417-47ff-bf9e-c8aeff216125,pod-template-hash: 7dff88458,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-14T17:39:47.961779417Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:6c45ca539be4df4195df7d0f62df9f16607f25ad66a6b30154bdf1b60615a57d,Metadata:&PodSandboxMetadata{Name:kube-apiserver-multinode-396884,Uid:342268f2615b90c8e7af26c283cd51b1,Namespace:kube-system,Attempt:
1,},State:SANDBOX_READY,CreatedAt:1726335582335280216,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-multinode-396884,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 342268f2615b90c8e7af26c283cd51b1,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.202:8443,kubernetes.io/config.hash: 342268f2615b90c8e7af26c283cd51b1,kubernetes.io/config.seen: 2024-09-14T17:33:09.855277536Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:c608d166e9cfc40559f4dd60682e535bef80339cc05f2d92bb8a8350ab8cf5dc,Metadata:&PodSandboxMetadata{Name:coredns-7c65d6cfc9-qtpcg,Uid:13529408-14c2-4b62-8089-9c2842942ddd,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1726335582324304047,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7c65d6cfc9-qtpcg,io.kubernetes.pod.namespace: kube-system,io.kuberne
tes.pod.uid: 13529408-14c2-4b62-8089-9c2842942ddd,k8s-app: kube-dns,pod-template-hash: 7c65d6cfc9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-14T17:33:26.702670031Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:a4cd5972443f168b3571e8b0550994530aba19bcbd02275afdb2a23c710107bf,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-multinode-396884,Uid:b301887ecb32aa4527128a8e7072c3ec,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1726335582316453535,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-multinode-396884,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b301887ecb32aa4527128a8e7072c3ec,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: b301887ecb32aa4527128a8e7072c3ec,kubernetes.io/config.seen: 2024-09-14T17:33:09.855270339Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:14603a5b9b94
08fcfd78636f81733a13d313a772f550615ff0f6549f766439d9,Metadata:&PodSandboxMetadata{Name:kindnet-z4d6c,Uid:effa9e73-ccda-4492-969d-fadbf8054d16,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1726335582310583234,Labels:map[string]string{app: kindnet,controller-revision-hash: 65cbdfc95f,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kindnet-z4d6c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: effa9e73-ccda-4492-969d-fadbf8054d16,k8s-app: kindnet,pod-template-generation: 1,tier: node,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-14T17:33:14.779716249Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:5e42fc4c6013d338009371f6e10074ee267ff21cb005cdb5442c2cb447dc043a,Metadata:&PodSandboxMetadata{Name:etcd-multinode-396884,Uid:831f0a541da6b9f9926e0f36ffcd8217,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1726335582298475524,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.na
me: etcd-multinode-396884,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 831f0a541da6b9f9926e0f36ffcd8217,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.202:2379,kubernetes.io/config.hash: 831f0a541da6b9f9926e0f36ffcd8217,kubernetes.io/config.seen: 2024-09-14T17:33:09.855276467Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:ab897e6a5ff609c2669de03faceb1ee55721d946de43422df637770b243798ee,Metadata:&PodSandboxMetadata{Name:kube-scheduler-multinode-396884,Uid:aad7806168f922aedac7c9352d482fc7,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1726335582297428550,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-multinode-396884,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aad7806168f922aedac7c9352d482fc7,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: aad7806168f922a
edac7c9352d482fc7,kubernetes.io/config.seen: 2024-09-14T17:33:09.855275203Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:ae272afecc7dc9234d085a47aaf7f28ea47cfd632ca9b39ce4636321d2dc2b3f,Metadata:&PodSandboxMetadata{Name:kube-proxy-qmlbf,Uid:51c467d5-cdb4-4d97-81e9-a0cf6078cc3b,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1726335582253082773,Labels:map[string]string{controller-revision-hash: 648b489c5b,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-qmlbf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51c467d5-cdb4-4d97-81e9-a0cf6078cc3b,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-14T17:33:14.779783043Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:44ff81b98a114b0cf7e0950f46bdaea4f3b4b72413c249aa4fa1b0334aeaab1a,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:90e4e4a9-b67b-4f18-8c77-5caccac87a1a,Namespace:kube-system,Attempt:1,},S
tate:SANDBOX_READY,CreatedAt:1726335582246400154,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 90e4e4a9-b67b-4f18-8c77-5caccac87a1a,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"
/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-09-14T17:33:26.693889693Z,kubernetes.io/config.source: api,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=34a1a621-685f-4af3-9c26-ff6d317453a9 name=/runtime.v1.RuntimeService/ListPodSandbox
	Sep 14 17:43:51 multinode-396884 crio[2683]: time="2024-09-14 17:43:51.980424538Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:&ContainerStateValue{State:CONTAINER_RUNNING,},PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1f48e53d-0134-4389-ab8e-3d5263c34448 name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 17:43:51 multinode-396884 crio[2683]: time="2024-09-14 17:43:51.980481136Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1f48e53d-0134-4389-ab8e-3d5263c34448 name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 17:43:51 multinode-396884 crio[2683]: time="2024-09-14 17:43:51.980671738Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:37156bea17af948c73f9db1576deb474807ffab67606772dd45a53bb46466f7b,PodSandboxId:1244304d2327d4ca96f9dab53897270221bd32419f7fea1de407009297b15eed,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726335616192564384,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-pzr7k,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d987f1f7-c417-47ff-bf9e-c8aeff216125,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fef6d936d83c959c7a8ea9056fd8ae85068791472148efab33b4c05016da159a,PodSandboxId:14603a5b9b9408fcfd78636f81733a13d313a772f550615ff0f6549f766439d9,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1726335588311575741,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-z4d6c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: effa9e73-ccda-4492-969d-fadbf8054d16,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c2fc74db4fa9be89515943bde72512233a0dd0bd6c64a5fcbbe758b9e1cf5a1b,PodSandboxId:c608d166e9cfc40559f4dd60682e535bef80339cc05f2d92bb8a8350ab8cf5dc,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726335588354802246,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-qtpcg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 13529408-14c2-4b62-8089-9c2842942ddd,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bdfae496458ecf210d5b42e38ae9b96165a86c5bc23d9c6847c96ae81abe7f30,PodSandboxId:ae272afecc7dc9234d085a47aaf7f28ea47cfd632ca9b39ce4636321d2dc2b3f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726335588344900527,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qmlbf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51c467d5-cdb4-4d97-81e9-a0cf6078cc3b,},Annotations:map[string]
string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:76fd47ab3d7c09b43503888ee6e717dc4029824bd3cf102a57b12f7da49cc824,PodSandboxId:44ff81b98a114b0cf7e0950f46bdaea4f3b4b72413c249aa4fa1b0334aeaab1a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726335588359084081,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 90e4e4a9-b67b-4f18-8c77-5caccac87a1a,},Annotations:map[string]string{io.ku
bernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ff3fe0a199c09e50cadb0723f1ebc76b3a4c8700b517a6e0b02304b5b0f92b15,PodSandboxId:a4cd5972443f168b3571e8b0550994530aba19bcbd02275afdb2a23c710107bf,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726335584419784393,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-396884,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b301887ecb32aa4527128a8e7072c3ec,},Annotations:map[string
]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5265cfc6ac2fca92414c21f818a2645bb645bbf7a17299e25e6b0276eab7b351,PodSandboxId:ab897e6a5ff609c2669de03faceb1ee55721d946de43422df637770b243798ee,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726335584436197078,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-396884,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aad7806168f922aedac7c9352d482fc7,},Annotations:map[string]string{io.kub
ernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:65a07f4361254a650dec89e9a12b0e50e904390a219fa291f722cf3e0bce0d18,PodSandboxId:5e42fc4c6013d338009371f6e10074ee267ff21cb005cdb5442c2cb447dc043a,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726335584403779987,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-396884,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 831f0a541da6b9f9926e0f36ffcd8217,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4e51c0a262ad82470037500c6a30af75482164a1c2b2eb61692fcbcf077a5307,PodSandboxId:6c45ca539be4df4195df7d0f62df9f16607f25ad66a6b30154bdf1b60615a57d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726335584426132500,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-396884,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 342268f2615b90c8e7af26c283cd51b1,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=1f48e53d-0134-4389-ab8e-3d5263c34448 name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 17:43:52 multinode-396884 crio[2683]: time="2024-09-14 17:43:52.000689409Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=85edb9fa-1b15-40b7-8f97-a801ed2f5d89 name=/runtime.v1.RuntimeService/Version
	Sep 14 17:43:52 multinode-396884 crio[2683]: time="2024-09-14 17:43:52.000783463Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=85edb9fa-1b15-40b7-8f97-a801ed2f5d89 name=/runtime.v1.RuntimeService/Version
	Sep 14 17:43:52 multinode-396884 crio[2683]: time="2024-09-14 17:43:52.002404499Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=13dac14e-4b30-42aa-aab9-1af7b82068d7 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 14 17:43:52 multinode-396884 crio[2683]: time="2024-09-14 17:43:52.003342876Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726335832003314110,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=13dac14e-4b30-42aa-aab9-1af7b82068d7 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 14 17:43:52 multinode-396884 crio[2683]: time="2024-09-14 17:43:52.005180979Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3c2cf393-3dae-44ac-9cb4-ad10c8f9f819 name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 17:43:52 multinode-396884 crio[2683]: time="2024-09-14 17:43:52.005292224Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3c2cf393-3dae-44ac-9cb4-ad10c8f9f819 name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 17:43:52 multinode-396884 crio[2683]: time="2024-09-14 17:43:52.005717311Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:37156bea17af948c73f9db1576deb474807ffab67606772dd45a53bb46466f7b,PodSandboxId:1244304d2327d4ca96f9dab53897270221bd32419f7fea1de407009297b15eed,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726335616192564384,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-pzr7k,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d987f1f7-c417-47ff-bf9e-c8aeff216125,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fef6d936d83c959c7a8ea9056fd8ae85068791472148efab33b4c05016da159a,PodSandboxId:14603a5b9b9408fcfd78636f81733a13d313a772f550615ff0f6549f766439d9,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1726335588311575741,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-z4d6c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: effa9e73-ccda-4492-969d-fadbf8054d16,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c2fc74db4fa9be89515943bde72512233a0dd0bd6c64a5fcbbe758b9e1cf5a1b,PodSandboxId:c608d166e9cfc40559f4dd60682e535bef80339cc05f2d92bb8a8350ab8cf5dc,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726335588354802246,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-qtpcg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 13529408-14c2-4b62-8089-9c2842942ddd,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bdfae496458ecf210d5b42e38ae9b96165a86c5bc23d9c6847c96ae81abe7f30,PodSandboxId:ae272afecc7dc9234d085a47aaf7f28ea47cfd632ca9b39ce4636321d2dc2b3f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726335588344900527,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qmlbf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51c467d5-cdb4-4d97-81e9-a0cf6078cc3b,},Annotations:map[string]
string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:76fd47ab3d7c09b43503888ee6e717dc4029824bd3cf102a57b12f7da49cc824,PodSandboxId:44ff81b98a114b0cf7e0950f46bdaea4f3b4b72413c249aa4fa1b0334aeaab1a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726335588359084081,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 90e4e4a9-b67b-4f18-8c77-5caccac87a1a,},Annotations:map[string]string{io.ku
bernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ff3fe0a199c09e50cadb0723f1ebc76b3a4c8700b517a6e0b02304b5b0f92b15,PodSandboxId:a4cd5972443f168b3571e8b0550994530aba19bcbd02275afdb2a23c710107bf,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726335584419784393,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-396884,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b301887ecb32aa4527128a8e7072c3ec,},Annotations:map[string
]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5265cfc6ac2fca92414c21f818a2645bb645bbf7a17299e25e6b0276eab7b351,PodSandboxId:ab897e6a5ff609c2669de03faceb1ee55721d946de43422df637770b243798ee,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726335584436197078,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-396884,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aad7806168f922aedac7c9352d482fc7,},Annotations:map[string]string{io.kub
ernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:65a07f4361254a650dec89e9a12b0e50e904390a219fa291f722cf3e0bce0d18,PodSandboxId:5e42fc4c6013d338009371f6e10074ee267ff21cb005cdb5442c2cb447dc043a,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726335584403779987,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-396884,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 831f0a541da6b9f9926e0f36ffcd8217,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4e51c0a262ad82470037500c6a30af75482164a1c2b2eb61692fcbcf077a5307,PodSandboxId:6c45ca539be4df4195df7d0f62df9f16607f25ad66a6b30154bdf1b60615a57d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726335584426132500,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-396884,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 342268f2615b90c8e7af26c283cd51b1,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c8338e88fc1b00c174df10b94f7a54081a5c6acdbab875508b5434f77cb7ae14,PodSandboxId:6ed76286a869683a9ffd5f5f55a8adec237ace2318b40db692ef32c3776fae42,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1726335264253530601,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-pzr7k,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d987f1f7-c417-47ff-bf9e-c8aeff216125,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b20bcea57368d20449e198aad2f67b5a322fe3fd9193bb91ab19d01c689bc8c,PodSandboxId:cd1d8929e4d25040b30e14825a30ee8976a19180c03418bf616c73633a034b77,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726335207428200278,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-qtpcg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 13529408-14c2-4b62-8089-9c2842942ddd,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contain
erPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e78c4f8c735e3a5fe2823017862c3af278bec6d973c484caa0dbc504f5de21c,PodSandboxId:c4e259c738185ce125a2640f7c8f00a0d334e28fd116b1ff3fed6693c59bd27b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726335207096833878,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: 90e4e4a9-b67b-4f18-8c77-5caccac87a1a,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2a1dfc2e08a6e79deb13754c9100ee0b892c3bf6688372c74501fc674c759b3,PodSandboxId:d9d680aef76b132627444945b0b3b7a86c7925f6dc74bed56bedf11c10a108bb,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1726335195465082311,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-z4d6c,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: effa9e73-ccda-4492-969d-fadbf8054d16,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b44a546c6b2a4cb208db8928953e879c3fdf0af3e39bc8e4db8fde5b5d45382,PodSandboxId:4576692beffea39fc5e0a6e06be363320bfdb75335e63b181910ef4e7de71067,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726335195384963971,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qmlbf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51c467d5-cdb4-4d97-81e9
-a0cf6078cc3b,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5390064e87e606423a1b9d0f32f83c0a7715e0d58d537080aca871e78d0c814b,PodSandboxId:5175a3a2c4a6c507f605270b58d2309ee6fee67da64c6d2897ef82057b3c76ca,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726335184498002618,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-396884,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aad7806168f922aedac7c9352d482fc7,}
,Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b335b9702caa3c44ed4cefa8b2277dfa02ef9b4afbdbda180ddb2ebbfc9d76df,PodSandboxId:4a30deaeaaf32abd67a48479a463abd3ad638a8d294cf52b027f68841c4d9927,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726335184481155648,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-396884,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b301887ecb32aa4527128a
8e7072c3ec,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0bd11dfe3a3f4013197bc7de619061491c70c793a47432b085d927da7165d49f,PodSandboxId:9d3c73752580a9d069b6b778a3aa8d14a016a60e885e1334863acdef0818f1c7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726335184451145589,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-396884,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 342268f2615b90c8e7af26c283cd51b1,},An
notations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ea4b28b7bae4cd4bd3e7b2ca7e3c8310f9e12b908437db9f5c44b01371058b6,PodSandboxId:a383e42333e08fb468bcd50c8cb9b248f480b53fc88a8bdc1aa32e71fae0adba,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1726335184409475185,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-396884,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 831f0a541da6b9f9926e0f36ffcd8217,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=3c2cf393-3dae-44ac-9cb4-ad10c8f9f819 name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 17:43:52 multinode-396884 crio[2683]: time="2024-09-14 17:43:52.043896045Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=cb50ce81-124a-4823-8bf3-552890b977f8 name=/runtime.v1.RuntimeService/Version
	Sep 14 17:43:52 multinode-396884 crio[2683]: time="2024-09-14 17:43:52.044014716Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=cb50ce81-124a-4823-8bf3-552890b977f8 name=/runtime.v1.RuntimeService/Version
	Sep 14 17:43:52 multinode-396884 crio[2683]: time="2024-09-14 17:43:52.047045348Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=888d80a3-8e6e-4423-8170-c69ef6dbc338 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 14 17:43:52 multinode-396884 crio[2683]: time="2024-09-14 17:43:52.047737968Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726335832047681867,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=888d80a3-8e6e-4423-8170-c69ef6dbc338 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 14 17:43:52 multinode-396884 crio[2683]: time="2024-09-14 17:43:52.048525128Z" level=debug msg="Request: &VersionRequest{Version:0.1.0,}" file="otel-collector/interceptors.go:62" id=59d6af0c-c51b-4e0d-afe1-c6bd78533f23 name=/runtime.v1.RuntimeService/Version
	Sep 14 17:43:52 multinode-396884 crio[2683]: time="2024-09-14 17:43:52.048597460Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=59d6af0c-c51b-4e0d-afe1-c6bd78533f23 name=/runtime.v1.RuntimeService/Version
	Sep 14 17:43:52 multinode-396884 crio[2683]: time="2024-09-14 17:43:52.049056559Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=856acd3d-253b-42b6-8a2b-217417ffc559 name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 17:43:52 multinode-396884 crio[2683]: time="2024-09-14 17:43:52.049137112Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=856acd3d-253b-42b6-8a2b-217417ffc559 name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 17:43:52 multinode-396884 crio[2683]: time="2024-09-14 17:43:52.049487680Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:37156bea17af948c73f9db1576deb474807ffab67606772dd45a53bb46466f7b,PodSandboxId:1244304d2327d4ca96f9dab53897270221bd32419f7fea1de407009297b15eed,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726335616192564384,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-pzr7k,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d987f1f7-c417-47ff-bf9e-c8aeff216125,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fef6d936d83c959c7a8ea9056fd8ae85068791472148efab33b4c05016da159a,PodSandboxId:14603a5b9b9408fcfd78636f81733a13d313a772f550615ff0f6549f766439d9,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1726335588311575741,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-z4d6c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: effa9e73-ccda-4492-969d-fadbf8054d16,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c2fc74db4fa9be89515943bde72512233a0dd0bd6c64a5fcbbe758b9e1cf5a1b,PodSandboxId:c608d166e9cfc40559f4dd60682e535bef80339cc05f2d92bb8a8350ab8cf5dc,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726335588354802246,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-qtpcg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 13529408-14c2-4b62-8089-9c2842942ddd,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bdfae496458ecf210d5b42e38ae9b96165a86c5bc23d9c6847c96ae81abe7f30,PodSandboxId:ae272afecc7dc9234d085a47aaf7f28ea47cfd632ca9b39ce4636321d2dc2b3f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726335588344900527,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qmlbf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51c467d5-cdb4-4d97-81e9-a0cf6078cc3b,},Annotations:map[string]
string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:76fd47ab3d7c09b43503888ee6e717dc4029824bd3cf102a57b12f7da49cc824,PodSandboxId:44ff81b98a114b0cf7e0950f46bdaea4f3b4b72413c249aa4fa1b0334aeaab1a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726335588359084081,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 90e4e4a9-b67b-4f18-8c77-5caccac87a1a,},Annotations:map[string]string{io.ku
bernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ff3fe0a199c09e50cadb0723f1ebc76b3a4c8700b517a6e0b02304b5b0f92b15,PodSandboxId:a4cd5972443f168b3571e8b0550994530aba19bcbd02275afdb2a23c710107bf,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726335584419784393,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-396884,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b301887ecb32aa4527128a8e7072c3ec,},Annotations:map[string
]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5265cfc6ac2fca92414c21f818a2645bb645bbf7a17299e25e6b0276eab7b351,PodSandboxId:ab897e6a5ff609c2669de03faceb1ee55721d946de43422df637770b243798ee,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726335584436197078,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-396884,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aad7806168f922aedac7c9352d482fc7,},Annotations:map[string]string{io.kub
ernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:65a07f4361254a650dec89e9a12b0e50e904390a219fa291f722cf3e0bce0d18,PodSandboxId:5e42fc4c6013d338009371f6e10074ee267ff21cb005cdb5442c2cb447dc043a,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726335584403779987,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-396884,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 831f0a541da6b9f9926e0f36ffcd8217,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4e51c0a262ad82470037500c6a30af75482164a1c2b2eb61692fcbcf077a5307,PodSandboxId:6c45ca539be4df4195df7d0f62df9f16607f25ad66a6b30154bdf1b60615a57d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726335584426132500,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-396884,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 342268f2615b90c8e7af26c283cd51b1,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c8338e88fc1b00c174df10b94f7a54081a5c6acdbab875508b5434f77cb7ae14,PodSandboxId:6ed76286a869683a9ffd5f5f55a8adec237ace2318b40db692ef32c3776fae42,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1726335264253530601,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-pzr7k,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d987f1f7-c417-47ff-bf9e-c8aeff216125,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b20bcea57368d20449e198aad2f67b5a322fe3fd9193bb91ab19d01c689bc8c,PodSandboxId:cd1d8929e4d25040b30e14825a30ee8976a19180c03418bf616c73633a034b77,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726335207428200278,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-qtpcg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 13529408-14c2-4b62-8089-9c2842942ddd,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contain
erPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e78c4f8c735e3a5fe2823017862c3af278bec6d973c484caa0dbc504f5de21c,PodSandboxId:c4e259c738185ce125a2640f7c8f00a0d334e28fd116b1ff3fed6693c59bd27b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726335207096833878,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: 90e4e4a9-b67b-4f18-8c77-5caccac87a1a,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2a1dfc2e08a6e79deb13754c9100ee0b892c3bf6688372c74501fc674c759b3,PodSandboxId:d9d680aef76b132627444945b0b3b7a86c7925f6dc74bed56bedf11c10a108bb,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1726335195465082311,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-z4d6c,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: effa9e73-ccda-4492-969d-fadbf8054d16,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b44a546c6b2a4cb208db8928953e879c3fdf0af3e39bc8e4db8fde5b5d45382,PodSandboxId:4576692beffea39fc5e0a6e06be363320bfdb75335e63b181910ef4e7de71067,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726335195384963971,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qmlbf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51c467d5-cdb4-4d97-81e9
-a0cf6078cc3b,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5390064e87e606423a1b9d0f32f83c0a7715e0d58d537080aca871e78d0c814b,PodSandboxId:5175a3a2c4a6c507f605270b58d2309ee6fee67da64c6d2897ef82057b3c76ca,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726335184498002618,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-396884,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aad7806168f922aedac7c9352d482fc7,}
,Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b335b9702caa3c44ed4cefa8b2277dfa02ef9b4afbdbda180ddb2ebbfc9d76df,PodSandboxId:4a30deaeaaf32abd67a48479a463abd3ad638a8d294cf52b027f68841c4d9927,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726335184481155648,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-396884,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b301887ecb32aa4527128a
8e7072c3ec,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0bd11dfe3a3f4013197bc7de619061491c70c793a47432b085d927da7165d49f,PodSandboxId:9d3c73752580a9d069b6b778a3aa8d14a016a60e885e1334863acdef0818f1c7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726335184451145589,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-396884,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 342268f2615b90c8e7af26c283cd51b1,},An
notations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ea4b28b7bae4cd4bd3e7b2ca7e3c8310f9e12b908437db9f5c44b01371058b6,PodSandboxId:a383e42333e08fb468bcd50c8cb9b248f480b53fc88a8bdc1aa32e71fae0adba,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1726335184409475185,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-396884,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 831f0a541da6b9f9926e0f36ffcd8217,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=856acd3d-253b-42b6-8a2b-217417ffc559 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	37156bea17af9       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      3 minutes ago       Running             busybox                   1                   1244304d2327d       busybox-7dff88458-pzr7k
	76fd47ab3d7c0       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      4 minutes ago       Running             storage-provisioner       1                   44ff81b98a114       storage-provisioner
	c2fc74db4fa9b       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      4 minutes ago       Running             coredns                   1                   c608d166e9cfc       coredns-7c65d6cfc9-qtpcg
	bdfae496458ec       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      4 minutes ago       Running             kube-proxy                1                   ae272afecc7dc       kube-proxy-qmlbf
	fef6d936d83c9       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      4 minutes ago       Running             kindnet-cni               1                   14603a5b9b940       kindnet-z4d6c
	5265cfc6ac2fc       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      4 minutes ago       Running             kube-scheduler            1                   ab897e6a5ff60       kube-scheduler-multinode-396884
	4e51c0a262ad8       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      4 minutes ago       Running             kube-apiserver            1                   6c45ca539be4d       kube-apiserver-multinode-396884
	ff3fe0a199c09       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      4 minutes ago       Running             kube-controller-manager   1                   a4cd5972443f1       kube-controller-manager-multinode-396884
	65a07f4361254       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      4 minutes ago       Running             etcd                      1                   5e42fc4c6013d       etcd-multinode-396884
	c8338e88fc1b0       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   9 minutes ago       Exited              busybox                   0                   6ed76286a8696       busybox-7dff88458-pzr7k
	7b20bcea57368       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      10 minutes ago      Exited              coredns                   0                   cd1d8929e4d25       coredns-7c65d6cfc9-qtpcg
	7e78c4f8c735e       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      10 minutes ago      Exited              storage-provisioner       0                   c4e259c738185       storage-provisioner
	e2a1dfc2e08a6       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      10 minutes ago      Exited              kindnet-cni               0                   d9d680aef76b1       kindnet-z4d6c
	7b44a546c6b2a       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      10 minutes ago      Exited              kube-proxy                0                   4576692beffea       kube-proxy-qmlbf
	5390064e87e60       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      10 minutes ago      Exited              kube-scheduler            0                   5175a3a2c4a6c       kube-scheduler-multinode-396884
	b335b9702caa3       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      10 minutes ago      Exited              kube-controller-manager   0                   4a30deaeaaf32       kube-controller-manager-multinode-396884
	0bd11dfe3a3f4       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      10 minutes ago      Exited              kube-apiserver            0                   9d3c73752580a       kube-apiserver-multinode-396884
	6ea4b28b7bae4       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      10 minutes ago      Exited              etcd                      0                   a383e42333e08       etcd-multinode-396884
	
	
	==> coredns [7b20bcea57368d20449e198aad2f67b5a322fe3fd9193bb91ab19d01c689bc8c] <==
	[INFO] 10.244.0.3:49362 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001722842s
	[INFO] 10.244.0.3:54240 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000091983s
	[INFO] 10.244.0.3:34820 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000194229s
	[INFO] 10.244.0.3:35562 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001093551s
	[INFO] 10.244.0.3:47979 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00005873s
	[INFO] 10.244.0.3:48379 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000055018s
	[INFO] 10.244.0.3:40851 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000060956s
	[INFO] 10.244.1.2:52098 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000137307s
	[INFO] 10.244.1.2:41742 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000129006s
	[INFO] 10.244.1.2:58999 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000094199s
	[INFO] 10.244.1.2:53402 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000092946s
	[INFO] 10.244.0.3:38808 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000100114s
	[INFO] 10.244.0.3:54241 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000059415s
	[INFO] 10.244.0.3:45999 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000042417s
	[INFO] 10.244.0.3:48989 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000041207s
	[INFO] 10.244.1.2:43578 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000161824s
	[INFO] 10.244.1.2:59633 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000545807s
	[INFO] 10.244.1.2:39252 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000144119s
	[INFO] 10.244.1.2:43373 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000122601s
	[INFO] 10.244.0.3:46966 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000121584s
	[INFO] 10.244.0.3:59127 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000094605s
	[INFO] 10.244.0.3:55764 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000077945s
	[INFO] 10.244.0.3:39350 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000065883s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [c2fc74db4fa9be89515943bde72512233a0dd0bd6c64a5fcbbe758b9e1cf5a1b] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:40195 - 37690 "HINFO IN 7330464082475971152.6795589113969959812. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.012338521s
	
	
	==> describe nodes <==
	Name:               multinode-396884
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-396884
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fbeeb744274463b05401c917e5ab21bbaf5ef95a
	                    minikube.k8s.io/name=multinode-396884
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_14T17_33_10_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 14 Sep 2024 17:33:07 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-396884
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 14 Sep 2024 17:43:42 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 14 Sep 2024 17:39:47 +0000   Sat, 14 Sep 2024 17:33:05 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 14 Sep 2024 17:39:47 +0000   Sat, 14 Sep 2024 17:33:05 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 14 Sep 2024 17:39:47 +0000   Sat, 14 Sep 2024 17:33:05 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 14 Sep 2024 17:39:47 +0000   Sat, 14 Sep 2024 17:33:26 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.202
	  Hostname:    multinode-396884
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 dbec24e7e0254179ac61d32d838545fa
	  System UUID:                dbec24e7-e025-4179-ac61-d32d838545fa
	  Boot ID:                    b3ec561f-d0ed-473c-918d-183c27fdcf35
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-pzr7k                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m32s
	  kube-system                 coredns-7c65d6cfc9-qtpcg                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     10m
	  kube-system                 etcd-multinode-396884                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         10m
	  kube-system                 kindnet-z4d6c                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      10m
	  kube-system                 kube-apiserver-multinode-396884             250m (12%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-multinode-396884    200m (10%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-proxy-qmlbf                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-scheduler-multinode-396884             100m (5%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 10m                  kube-proxy       
	  Normal  Starting                 4m3s                 kube-proxy       
	  Normal  NodeHasSufficientMemory  10m (x8 over 10m)    kubelet          Node multinode-396884 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m (x8 over 10m)    kubelet          Node multinode-396884 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10m (x7 over 10m)    kubelet          Node multinode-396884 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  10m                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 10m                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  10m                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  10m                  kubelet          Node multinode-396884 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m                  kubelet          Node multinode-396884 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10m                  kubelet          Node multinode-396884 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           10m                  node-controller  Node multinode-396884 event: Registered Node multinode-396884 in Controller
	  Normal  NodeReady                10m                  kubelet          Node multinode-396884 status is now: NodeReady
	  Normal  Starting                 4m9s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m8s (x8 over 4m8s)  kubelet          Node multinode-396884 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m8s (x8 over 4m8s)  kubelet          Node multinode-396884 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m8s (x7 over 4m8s)  kubelet          Node multinode-396884 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m8s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m2s                 node-controller  Node multinode-396884 event: Registered Node multinode-396884 in Controller
	
	
	Name:               multinode-396884-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-396884-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fbeeb744274463b05401c917e5ab21bbaf5ef95a
	                    minikube.k8s.io/name=multinode-396884
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_14T17_40_28_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 14 Sep 2024 17:40:27 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-396884-m02
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 14 Sep 2024 17:41:28 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Sat, 14 Sep 2024 17:40:58 +0000   Sat, 14 Sep 2024 17:42:10 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Sat, 14 Sep 2024 17:40:58 +0000   Sat, 14 Sep 2024 17:42:10 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Sat, 14 Sep 2024 17:40:58 +0000   Sat, 14 Sep 2024 17:42:10 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Sat, 14 Sep 2024 17:40:58 +0000   Sat, 14 Sep 2024 17:42:10 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.97
	  Hostname:    multinode-396884-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 7181e09ea84e4f039062f92a925a5288
	  System UUID:                7181e09e-a84e-4f03-9062-f92a925a5288
	  Boot ID:                    993e31f9-559d-46fe-89d9-daad60598e95
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-xptfw    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m30s
	  kube-system                 kindnet-gtn5l              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      9m55s
	  kube-system                 kube-proxy-gs2rm           0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m55s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m19s                  kube-proxy       
	  Normal  Starting                 9m49s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  9m55s (x2 over 9m55s)  kubelet          Node multinode-396884-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m55s (x2 over 9m55s)  kubelet          Node multinode-396884-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m55s (x2 over 9m55s)  kubelet          Node multinode-396884-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  9m55s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                9m35s                  kubelet          Node multinode-396884-m02 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  3m25s (x2 over 3m25s)  kubelet          Node multinode-396884-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m25s (x2 over 3m25s)  kubelet          Node multinode-396884-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m25s (x2 over 3m25s)  kubelet          Node multinode-396884-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m25s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                3m5s                   kubelet          Node multinode-396884-m02 status is now: NodeReady
	  Normal  NodeNotReady             102s                   node-controller  Node multinode-396884-m02 status is now: NodeNotReady
	
	
	==> dmesg <==
	[  +9.273986] systemd-fstab-generator[581]: Ignoring "noauto" option for root device
	[  +0.124948] systemd-fstab-generator[593]: Ignoring "noauto" option for root device
	[  +0.193996] systemd-fstab-generator[607]: Ignoring "noauto" option for root device
	[  +0.132634] systemd-fstab-generator[620]: Ignoring "noauto" option for root device
	[  +0.280269] systemd-fstab-generator[649]: Ignoring "noauto" option for root device
	[  +3.887713] systemd-fstab-generator[742]: Ignoring "noauto" option for root device
	[Sep14 17:33] systemd-fstab-generator[873]: Ignoring "noauto" option for root device
	[  +0.063629] kauditd_printk_skb: 158 callbacks suppressed
	[  +6.504517] systemd-fstab-generator[1203]: Ignoring "noauto" option for root device
	[  +0.082092] kauditd_printk_skb: 69 callbacks suppressed
	[  +4.638361] systemd-fstab-generator[1303]: Ignoring "noauto" option for root device
	[  +0.930725] kauditd_printk_skb: 49 callbacks suppressed
	[ +11.769652] kauditd_printk_skb: 38 callbacks suppressed
	[Sep14 17:34] kauditd_printk_skb: 14 callbacks suppressed
	[Sep14 17:39] systemd-fstab-generator[2605]: Ignoring "noauto" option for root device
	[  +0.166264] systemd-fstab-generator[2620]: Ignoring "noauto" option for root device
	[  +0.193889] systemd-fstab-generator[2635]: Ignoring "noauto" option for root device
	[  +0.151167] systemd-fstab-generator[2647]: Ignoring "noauto" option for root device
	[  +0.286690] systemd-fstab-generator[2675]: Ignoring "noauto" option for root device
	[  +0.697946] systemd-fstab-generator[2768]: Ignoring "noauto" option for root device
	[  +2.237243] systemd-fstab-generator[3176]: Ignoring "noauto" option for root device
	[  +4.713162] kauditd_printk_skb: 204 callbacks suppressed
	[  +7.956382] kauditd_printk_skb: 14 callbacks suppressed
	[Sep14 17:40] systemd-fstab-generator[3747]: Ignoring "noauto" option for root device
	[ +13.522034] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> etcd [65a07f4361254a650dec89e9a12b0e50e904390a219fa291f722cf3e0bce0d18] <==
	{"level":"info","ts":"2024-09-14T17:39:44.842937Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"e4e52c0b9ecc5e15","local-member-id":"f9de38f1a7e06692","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-14T17:39:44.844294Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-14T17:39:44.849070Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-14T17:39:44.855885Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-09-14T17:39:44.855941Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.39.202:2380"}
	{"level":"info","ts":"2024-09-14T17:39:44.858447Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.39.202:2380"}
	{"level":"info","ts":"2024-09-14T17:39:44.859656Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"f9de38f1a7e06692","initial-advertise-peer-urls":["https://192.168.39.202:2380"],"listen-peer-urls":["https://192.168.39.202:2380"],"advertise-client-urls":["https://192.168.39.202:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.202:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-09-14T17:39:44.859784Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-09-14T17:39:45.896733Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f9de38f1a7e06692 is starting a new election at term 2"}
	{"level":"info","ts":"2024-09-14T17:39:45.896817Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f9de38f1a7e06692 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-09-14T17:39:45.896835Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f9de38f1a7e06692 received MsgPreVoteResp from f9de38f1a7e06692 at term 2"}
	{"level":"info","ts":"2024-09-14T17:39:45.896846Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f9de38f1a7e06692 became candidate at term 3"}
	{"level":"info","ts":"2024-09-14T17:39:45.896853Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f9de38f1a7e06692 received MsgVoteResp from f9de38f1a7e06692 at term 3"}
	{"level":"info","ts":"2024-09-14T17:39:45.896862Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f9de38f1a7e06692 became leader at term 3"}
	{"level":"info","ts":"2024-09-14T17:39:45.896870Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f9de38f1a7e06692 elected leader f9de38f1a7e06692 at term 3"}
	{"level":"info","ts":"2024-09-14T17:39:45.902266Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"f9de38f1a7e06692","local-member-attributes":"{Name:multinode-396884 ClientURLs:[https://192.168.39.202:2379]}","request-path":"/0/members/f9de38f1a7e06692/attributes","cluster-id":"e4e52c0b9ecc5e15","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-14T17:39:45.902318Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-14T17:39:45.902665Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-14T17:39:45.902779Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-14T17:39:45.902778Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-14T17:39:45.903447Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-14T17:39:45.903623Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-14T17:39:45.904202Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-14T17:39:45.904453Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.202:2379"}
	{"level":"info","ts":"2024-09-14T17:41:10.712937Z","caller":"traceutil/trace.go:171","msg":"trace[1755647145] transaction","detail":"{read_only:false; response_revision:1167; number_of_response:1; }","duration":"102.451879ms","start":"2024-09-14T17:41:10.610456Z","end":"2024-09-14T17:41:10.712907Z","steps":["trace[1755647145] 'process raft request'  (duration: 102.326938ms)"],"step_count":1}
	
	
	==> etcd [6ea4b28b7bae4cd4bd3e7b2ca7e3c8310f9e12b908437db9f5c44b01371058b6] <==
	{"level":"info","ts":"2024-09-14T17:33:05.387740Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-14T17:33:05.388339Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-14T17:33:05.389034Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.202:2379"}
	{"level":"info","ts":"2024-09-14T17:33:05.401099Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-14T17:33:05.415931Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"e4e52c0b9ecc5e15","local-member-id":"f9de38f1a7e06692","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-14T17:33:05.416111Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-14T17:33:05.416164Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-14T17:33:05.418322Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"warn","ts":"2024-09-14T17:33:57.375112Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"153.52673ms","expected-duration":"100ms","prefix":"","request":"header:<ID:7391130405298998201 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/default/multinode-396884-m02.17f52cc06c75418b\" mod_revision:0 > success:<request_put:<key:\"/registry/events/default/multinode-396884-m02.17f52cc06c75418b\" value_size:642 lease:7391130405298997200 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2024-09-14T17:33:57.375344Z","caller":"traceutil/trace.go:171","msg":"trace[1229652052] transaction","detail":"{read_only:false; response_revision:475; number_of_response:1; }","duration":"227.022198ms","start":"2024-09-14T17:33:57.148286Z","end":"2024-09-14T17:33:57.375308Z","steps":["trace[1229652052] 'process raft request'  (duration: 72.812832ms)","trace[1229652052] 'compare'  (duration: 153.375677ms)"],"step_count":2}
	{"level":"info","ts":"2024-09-14T17:34:01.157943Z","caller":"traceutil/trace.go:171","msg":"trace[1840670995] transaction","detail":"{read_only:false; response_revision:507; number_of_response:1; }","duration":"101.528829ms","start":"2024-09-14T17:34:01.056138Z","end":"2024-09-14T17:34:01.157667Z","steps":["trace[1840670995] 'process raft request'  (duration: 101.394036ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-14T17:34:54.789491Z","caller":"traceutil/trace.go:171","msg":"trace[1358092839] transaction","detail":"{read_only:false; response_revision:610; number_of_response:1; }","duration":"223.163326ms","start":"2024-09-14T17:34:54.566294Z","end":"2024-09-14T17:34:54.789457Z","steps":["trace[1358092839] 'process raft request'  (duration: 222.74671ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-14T17:34:57.843843Z","caller":"traceutil/trace.go:171","msg":"trace[1278725833] transaction","detail":"{read_only:false; response_revision:639; number_of_response:1; }","duration":"141.276781ms","start":"2024-09-14T17:34:57.702526Z","end":"2024-09-14T17:34:57.843803Z","steps":["trace[1278725833] 'process raft request'  (duration: 141.160755ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-14T17:34:58.029754Z","caller":"traceutil/trace.go:171","msg":"trace[557321825] transaction","detail":"{read_only:false; response_revision:640; number_of_response:1; }","duration":"180.692568ms","start":"2024-09-14T17:34:57.849043Z","end":"2024-09-14T17:34:58.029735Z","steps":["trace[557321825] 'process raft request'  (duration: 115.31876ms)","trace[557321825] 'compare'  (duration: 65.249533ms)"],"step_count":2}
	{"level":"info","ts":"2024-09-14T17:34:58.382886Z","caller":"traceutil/trace.go:171","msg":"trace[1617984724] transaction","detail":"{read_only:false; response_revision:641; number_of_response:1; }","duration":"125.496502ms","start":"2024-09-14T17:34:58.257373Z","end":"2024-09-14T17:34:58.382869Z","steps":["trace[1617984724] 'process raft request'  (duration: 124.192601ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-14T17:38:08.701842Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-09-14T17:38:08.701974Z","caller":"embed/etcd.go:377","msg":"closing etcd server","name":"multinode-396884","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.202:2380"],"advertise-client-urls":["https://192.168.39.202:2379"]}
	{"level":"warn","ts":"2024-09-14T17:38:08.704635Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-14T17:38:08.704750Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-14T17:38:08.781743Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.202:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-14T17:38:08.781893Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.202:2379: use of closed network connection"}
	{"level":"info","ts":"2024-09-14T17:38:08.783583Z","caller":"etcdserver/server.go:1521","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"f9de38f1a7e06692","current-leader-member-id":"f9de38f1a7e06692"}
	{"level":"info","ts":"2024-09-14T17:38:08.786487Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.39.202:2380"}
	{"level":"info","ts":"2024-09-14T17:38:08.786605Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.39.202:2380"}
	{"level":"info","ts":"2024-09-14T17:38:08.786628Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"multinode-396884","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.202:2380"],"advertise-client-urls":["https://192.168.39.202:2379"]}
	
	
	==> kernel <==
	 17:43:52 up 11 min,  0 users,  load average: 0.15, 0.21, 0.12
	Linux multinode-396884 5.10.207 #1 SMP Sat Sep 14 07:35:46 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [e2a1dfc2e08a6e79deb13754c9100ee0b892c3bf6688372c74501fc674c759b3] <==
	I0914 17:37:26.538291       1 main.go:322] Node multinode-396884-m03 has CIDR [10.244.3.0/24] 
	I0914 17:37:36.537415       1 main.go:295] Handling node with IPs: map[192.168.39.97:{}]
	I0914 17:37:36.537464       1 main.go:322] Node multinode-396884-m02 has CIDR [10.244.1.0/24] 
	I0914 17:37:36.537659       1 main.go:295] Handling node with IPs: map[192.168.39.207:{}]
	I0914 17:37:36.537680       1 main.go:322] Node multinode-396884-m03 has CIDR [10.244.3.0/24] 
	I0914 17:37:36.537738       1 main.go:295] Handling node with IPs: map[192.168.39.202:{}]
	I0914 17:37:36.537755       1 main.go:299] handling current node
	I0914 17:37:46.539750       1 main.go:295] Handling node with IPs: map[192.168.39.207:{}]
	I0914 17:37:46.539857       1 main.go:322] Node multinode-396884-m03 has CIDR [10.244.3.0/24] 
	I0914 17:37:46.539996       1 main.go:295] Handling node with IPs: map[192.168.39.202:{}]
	I0914 17:37:46.540019       1 main.go:299] handling current node
	I0914 17:37:46.540048       1 main.go:295] Handling node with IPs: map[192.168.39.97:{}]
	I0914 17:37:46.540065       1 main.go:322] Node multinode-396884-m02 has CIDR [10.244.1.0/24] 
	I0914 17:37:56.534364       1 main.go:295] Handling node with IPs: map[192.168.39.202:{}]
	I0914 17:37:56.534485       1 main.go:299] handling current node
	I0914 17:37:56.534516       1 main.go:295] Handling node with IPs: map[192.168.39.97:{}]
	I0914 17:37:56.534540       1 main.go:322] Node multinode-396884-m02 has CIDR [10.244.1.0/24] 
	I0914 17:37:56.534683       1 main.go:295] Handling node with IPs: map[192.168.39.207:{}]
	I0914 17:37:56.534710       1 main.go:322] Node multinode-396884-m03 has CIDR [10.244.3.0/24] 
	I0914 17:38:06.536399       1 main.go:295] Handling node with IPs: map[192.168.39.202:{}]
	I0914 17:38:06.536551       1 main.go:299] handling current node
	I0914 17:38:06.536583       1 main.go:295] Handling node with IPs: map[192.168.39.97:{}]
	I0914 17:38:06.536602       1 main.go:322] Node multinode-396884-m02 has CIDR [10.244.1.0/24] 
	I0914 17:38:06.536741       1 main.go:295] Handling node with IPs: map[192.168.39.207:{}]
	I0914 17:38:06.536777       1 main.go:322] Node multinode-396884-m03 has CIDR [10.244.3.0/24] 
	
	
	==> kindnet [fef6d936d83c959c7a8ea9056fd8ae85068791472148efab33b4c05016da159a] <==
	I0914 17:42:49.247519       1 main.go:322] Node multinode-396884-m02 has CIDR [10.244.1.0/24] 
	I0914 17:42:59.246712       1 main.go:295] Handling node with IPs: map[192.168.39.202:{}]
	I0914 17:42:59.246882       1 main.go:299] handling current node
	I0914 17:42:59.246912       1 main.go:295] Handling node with IPs: map[192.168.39.97:{}]
	I0914 17:42:59.246918       1 main.go:322] Node multinode-396884-m02 has CIDR [10.244.1.0/24] 
	I0914 17:43:09.255299       1 main.go:295] Handling node with IPs: map[192.168.39.202:{}]
	I0914 17:43:09.255412       1 main.go:299] handling current node
	I0914 17:43:09.255445       1 main.go:295] Handling node with IPs: map[192.168.39.97:{}]
	I0914 17:43:09.255463       1 main.go:322] Node multinode-396884-m02 has CIDR [10.244.1.0/24] 
	I0914 17:43:19.250631       1 main.go:295] Handling node with IPs: map[192.168.39.202:{}]
	I0914 17:43:19.250676       1 main.go:299] handling current node
	I0914 17:43:19.250692       1 main.go:295] Handling node with IPs: map[192.168.39.97:{}]
	I0914 17:43:19.250697       1 main.go:322] Node multinode-396884-m02 has CIDR [10.244.1.0/24] 
	I0914 17:43:29.255362       1 main.go:295] Handling node with IPs: map[192.168.39.202:{}]
	I0914 17:43:29.255449       1 main.go:299] handling current node
	I0914 17:43:29.255481       1 main.go:295] Handling node with IPs: map[192.168.39.97:{}]
	I0914 17:43:29.255489       1 main.go:322] Node multinode-396884-m02 has CIDR [10.244.1.0/24] 
	I0914 17:43:39.252386       1 main.go:295] Handling node with IPs: map[192.168.39.202:{}]
	I0914 17:43:39.252499       1 main.go:299] handling current node
	I0914 17:43:39.252528       1 main.go:295] Handling node with IPs: map[192.168.39.97:{}]
	I0914 17:43:39.252546       1 main.go:322] Node multinode-396884-m02 has CIDR [10.244.1.0/24] 
	I0914 17:43:49.247152       1 main.go:295] Handling node with IPs: map[192.168.39.97:{}]
	I0914 17:43:49.247307       1 main.go:322] Node multinode-396884-m02 has CIDR [10.244.1.0/24] 
	I0914 17:43:49.247433       1 main.go:295] Handling node with IPs: map[192.168.39.202:{}]
	I0914 17:43:49.247457       1 main.go:299] handling current node
	
	
	==> kube-apiserver [0bd11dfe3a3f4013197bc7de619061491c70c793a47432b085d927da7165d49f] <==
	W0914 17:38:08.732799       1 logging.go:55] [core] [Channel #115 SubChannel #116]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0914 17:38:08.732859       1 logging.go:55] [core] [Channel #103 SubChannel #104]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0914 17:38:08.732923       1 logging.go:55] [core] [Channel #70 SubChannel #71]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0914 17:38:08.733001       1 logging.go:55] [core] [Channel #121 SubChannel #122]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0914 17:38:08.733049       1 logging.go:55] [core] [Channel #58 SubChannel #59]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0914 17:38:08.733092       1 logging.go:55] [core] [Channel #49 SubChannel #50]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0914 17:38:08.733122       1 logging.go:55] [core] [Channel #169 SubChannel #170]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0914 17:38:08.733173       1 logging.go:55] [core] [Channel #67 SubChannel #68]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0914 17:38:08.733279       1 logging.go:55] [core] [Channel #151 SubChannel #152]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0914 17:38:08.733333       1 logging.go:55] [core] [Channel #142 SubChannel #143]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0914 17:38:08.733383       1 logging.go:55] [core] [Channel #73 SubChannel #74]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0914 17:38:08.733339       1 logging.go:55] [core] [Channel #97 SubChannel #98]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0914 17:38:08.733454       1 logging.go:55] [core] [Channel #100 SubChannel #101]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0914 17:38:08.733509       1 logging.go:55] [core] [Channel #109 SubChannel #110]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0914 17:38:08.733577       1 logging.go:55] [core] [Channel #163 SubChannel #164]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0914 17:38:08.733583       1 logging.go:55] [core] [Channel #106 SubChannel #107]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0914 17:38:08.733646       1 logging.go:55] [core] [Channel #40 SubChannel #41]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0914 17:38:08.733181       1 logging.go:55] [core] [Channel #34 SubChannel #35]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0914 17:38:08.733285       1 logging.go:55] [core] [Channel #82 SubChannel #83]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0914 17:38:08.732927       1 logging.go:55] [core] [Channel #17 SubChannel #18]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0914 17:38:08.733768       1 logging.go:55] [core] [Channel #61 SubChannel #62]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0914 17:38:08.733434       1 logging.go:55] [core] [Channel #55 SubChannel #56]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0914 17:38:08.733826       1 logging.go:55] [core] [Channel #124 SubChannel #125]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0914 17:38:08.733627       1 logging.go:55] [core] [Channel #130 SubChannel #131]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0914 17:38:08.733883       1 logging.go:55] [core] [Channel #178 SubChannel #179]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [4e51c0a262ad82470037500c6a30af75482164a1c2b2eb61692fcbcf077a5307] <==
	I0914 17:39:47.209106       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0914 17:39:47.209296       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0914 17:39:47.209333       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0914 17:39:47.209350       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0914 17:39:47.209560       1 shared_informer.go:320] Caches are synced for configmaps
	I0914 17:39:47.210014       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0914 17:39:47.210615       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0914 17:39:47.214521       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0914 17:39:47.214591       1 policy_source.go:224] refreshing policies
	I0914 17:39:47.214813       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0914 17:39:47.215385       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0914 17:39:47.215472       1 aggregator.go:171] initial CRD sync complete...
	I0914 17:39:47.215489       1 autoregister_controller.go:144] Starting autoregister controller
	I0914 17:39:47.215494       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0914 17:39:47.215499       1 cache.go:39] Caches are synced for autoregister controller
	I0914 17:39:47.228887       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	E0914 17:39:47.250429       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0914 17:39:48.120012       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0914 17:39:49.529002       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0914 17:39:49.657732       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0914 17:39:49.669954       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0914 17:39:49.765086       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0914 17:39:49.773373       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0914 17:39:50.596396       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0914 17:39:50.843866       1 controller.go:615] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [b335b9702caa3c44ed4cefa8b2277dfa02ef9b4afbdbda180ddb2ebbfc9d76df] <==
	I0914 17:35:43.943119       1 actual_state_of_world.go:540] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-396884-m03\" does not exist"
	I0914 17:35:43.968071       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-396884-m03" podCIDRs=["10.244.3.0/24"]
	I0914 17:35:43.968276       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-396884-m03"
	E0914 17:35:43.979469       1 range_allocator.go:427] "Failed to update node PodCIDR after multiple attempts" err="failed to patch node CIDR: Node \"multinode-396884-m03\" is invalid: [spec.podCIDRs: Invalid value: []string{\"10.244.4.0/24\", \"10.244.3.0/24\"}: may specify no more than one CIDR for each IP family, spec.podCIDRs: Forbidden: node updates may not change podCIDR except from \"\" to valid]" logger="node-ipam-controller" node="multinode-396884-m03" podCIDRs=["10.244.4.0/24"]
	E0914 17:35:43.979601       1 range_allocator.go:433] "CIDR assignment for node failed. Releasing allocated CIDR" err="failed to patch node CIDR: Node \"multinode-396884-m03\" is invalid: [spec.podCIDRs: Invalid value: []string{\"10.244.4.0/24\", \"10.244.3.0/24\"}: may specify no more than one CIDR for each IP family, spec.podCIDRs: Forbidden: node updates may not change podCIDR except from \"\" to valid]" logger="node-ipam-controller" node="multinode-396884-m03"
	E0914 17:35:43.979673       1 range_allocator.go:246] "Unhandled Error" err="error syncing 'multinode-396884-m03': failed to patch node CIDR: Node \"multinode-396884-m03\" is invalid: [spec.podCIDRs: Invalid value: []string{\"10.244.4.0/24\", \"10.244.3.0/24\"}: may specify no more than one CIDR for each IP family, spec.podCIDRs: Forbidden: node updates may not change podCIDR except from \"\" to valid], requeuing" logger="UnhandledError"
	I0914 17:35:43.979740       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-396884-m03"
	I0914 17:35:43.985502       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-396884-m03"
	I0914 17:35:44.009277       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-396884-m03"
	I0914 17:35:44.367030       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-396884-m03"
	I0914 17:35:48.911723       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-396884-m03"
	I0914 17:35:54.190199       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-396884-m03"
	I0914 17:36:03.377390       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-396884-m03"
	I0914 17:36:03.377831       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-396884-m02"
	I0914 17:36:03.389731       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-396884-m03"
	I0914 17:36:03.863925       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-396884-m03"
	I0914 17:36:43.889026       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-396884-m02"
	I0914 17:36:43.889367       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-396884-m03"
	I0914 17:36:43.912439       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-396884-m02"
	I0914 17:36:43.948150       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="11.102895ms"
	I0914 17:36:43.948500       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="95.785µs"
	I0914 17:36:48.950717       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-396884-m03"
	I0914 17:36:48.980395       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-396884-m03"
	I0914 17:36:48.992072       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-396884-m02"
	I0914 17:36:59.069854       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-396884-m03"
	
	
	==> kube-controller-manager [ff3fe0a199c09e50cadb0723f1ebc76b3a4c8700b517a6e0b02304b5b0f92b15] <==
	I0914 17:41:06.403403       1 actual_state_of_world.go:540] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-396884-m03\" does not exist"
	I0914 17:41:06.427296       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-396884-m03" podCIDRs=["10.244.2.0/24"]
	I0914 17:41:06.427338       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-396884-m03"
	I0914 17:41:06.427360       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-396884-m03"
	I0914 17:41:06.758802       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-396884-m03"
	I0914 17:41:07.101514       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-396884-m03"
	I0914 17:41:10.723502       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-396884-m03"
	I0914 17:41:16.476641       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-396884-m03"
	I0914 17:41:25.542369       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-396884-m02"
	I0914 17:41:25.542538       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-396884-m03"
	I0914 17:41:25.556105       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-396884-m03"
	I0914 17:41:25.624739       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-396884-m03"
	I0914 17:41:30.284109       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-396884-m03"
	I0914 17:41:30.305818       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-396884-m03"
	I0914 17:41:30.733956       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-396884-m02"
	I0914 17:41:30.734332       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-396884-m03"
	I0914 17:42:10.640597       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-396884-m02"
	I0914 17:42:10.662422       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-396884-m02"
	I0914 17:42:10.686485       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="29.777333ms"
	I0914 17:42:10.686612       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="31.933µs"
	I0914 17:42:15.751831       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-396884-m02"
	I0914 17:42:30.572089       1 gc_controller.go:342] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-mhld5"
	I0914 17:42:30.592967       1 gc_controller.go:258] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-mhld5"
	I0914 17:42:30.593064       1 gc_controller.go:342] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-d8c78"
	I0914 17:42:30.616799       1 gc_controller.go:258] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-d8c78"
	
	
	==> kube-proxy [7b44a546c6b2a4cb208db8928953e879c3fdf0af3e39bc8e4db8fde5b5d45382] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0914 17:33:15.609534       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0914 17:33:15.619500       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.202"]
	E0914 17:33:15.619672       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0914 17:33:15.660609       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0914 17:33:15.660650       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0914 17:33:15.660679       1 server_linux.go:169] "Using iptables Proxier"
	I0914 17:33:15.664325       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0914 17:33:15.664673       1 server.go:483] "Version info" version="v1.31.1"
	I0914 17:33:15.664729       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0914 17:33:15.666142       1 config.go:199] "Starting service config controller"
	I0914 17:33:15.666205       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0914 17:33:15.666310       1 config.go:105] "Starting endpoint slice config controller"
	I0914 17:33:15.666328       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0914 17:33:15.667010       1 config.go:328] "Starting node config controller"
	I0914 17:33:15.667079       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0914 17:33:15.767270       1 shared_informer.go:320] Caches are synced for node config
	I0914 17:33:15.767301       1 shared_informer.go:320] Caches are synced for service config
	I0914 17:33:15.767329       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-proxy [bdfae496458ecf210d5b42e38ae9b96165a86c5bc23d9c6847c96ae81abe7f30] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0914 17:39:48.725629       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0914 17:39:48.736753       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.202"]
	E0914 17:39:48.737039       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0914 17:39:48.772674       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0914 17:39:48.772716       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0914 17:39:48.772775       1 server_linux.go:169] "Using iptables Proxier"
	I0914 17:39:48.775318       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0914 17:39:48.775775       1 server.go:483] "Version info" version="v1.31.1"
	I0914 17:39:48.775830       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0914 17:39:48.777511       1 config.go:199] "Starting service config controller"
	I0914 17:39:48.777602       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0914 17:39:48.777655       1 config.go:105] "Starting endpoint slice config controller"
	I0914 17:39:48.777675       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0914 17:39:48.780063       1 config.go:328] "Starting node config controller"
	I0914 17:39:48.780120       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0914 17:39:48.877805       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0914 17:39:48.877861       1 shared_informer.go:320] Caches are synced for service config
	I0914 17:39:48.880395       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [5265cfc6ac2fca92414c21f818a2645bb645bbf7a17299e25e6b0276eab7b351] <==
	I0914 17:39:45.534456       1 serving.go:386] Generated self-signed cert in-memory
	W0914 17:39:47.164331       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0914 17:39:47.164409       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0914 17:39:47.164419       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0914 17:39:47.164431       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0914 17:39:47.233922       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.1"
	I0914 17:39:47.233969       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0914 17:39:47.238974       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0914 17:39:47.239480       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0914 17:39:47.239596       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0914 17:39:47.239687       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0914 17:39:47.340116       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [5390064e87e606423a1b9d0f32f83c0a7715e0d58d537080aca871e78d0c814b] <==
	E0914 17:33:07.091612       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0914 17:33:07.923367       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0914 17:33:07.923401       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0914 17:33:07.926897       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0914 17:33:07.926987       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0914 17:33:07.933237       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0914 17:33:07.933362       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0914 17:33:07.959530       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0914 17:33:07.959673       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0914 17:33:08.025378       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0914 17:33:08.025499       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0914 17:33:08.072543       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0914 17:33:08.072736       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0914 17:33:08.076436       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0914 17:33:08.076557       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0914 17:33:08.117714       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0914 17:33:08.117876       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0914 17:33:08.281413       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0914 17:33:08.281504       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0914 17:33:08.357339       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0914 17:33:08.357437       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0914 17:33:08.395337       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0914 17:33:08.395478       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0914 17:33:11.387316       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0914 17:38:08.701651       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Sep 14 17:42:34 multinode-396884 kubelet[3183]: E0914 17:42:34.098921    3183 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726335754098271350,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 17:42:44 multinode-396884 kubelet[3183]: E0914 17:42:44.027981    3183 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 14 17:42:44 multinode-396884 kubelet[3183]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 14 17:42:44 multinode-396884 kubelet[3183]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 14 17:42:44 multinode-396884 kubelet[3183]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 14 17:42:44 multinode-396884 kubelet[3183]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 14 17:42:44 multinode-396884 kubelet[3183]: E0914 17:42:44.101011    3183 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726335764100581694,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 17:42:44 multinode-396884 kubelet[3183]: E0914 17:42:44.101036    3183 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726335764100581694,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 17:42:54 multinode-396884 kubelet[3183]: E0914 17:42:54.103518    3183 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726335774102036529,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 17:42:54 multinode-396884 kubelet[3183]: E0914 17:42:54.103559    3183 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726335774102036529,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 17:43:04 multinode-396884 kubelet[3183]: E0914 17:43:04.105476    3183 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726335784104975132,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 17:43:04 multinode-396884 kubelet[3183]: E0914 17:43:04.105755    3183 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726335784104975132,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 17:43:14 multinode-396884 kubelet[3183]: E0914 17:43:14.108049    3183 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726335794107662665,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 17:43:14 multinode-396884 kubelet[3183]: E0914 17:43:14.108456    3183 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726335794107662665,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 17:43:24 multinode-396884 kubelet[3183]: E0914 17:43:24.110366    3183 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726335804109994498,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 17:43:24 multinode-396884 kubelet[3183]: E0914 17:43:24.110397    3183 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726335804109994498,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 17:43:34 multinode-396884 kubelet[3183]: E0914 17:43:34.112815    3183 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726335814112317090,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 17:43:34 multinode-396884 kubelet[3183]: E0914 17:43:34.112864    3183 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726335814112317090,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 17:43:44 multinode-396884 kubelet[3183]: E0914 17:43:44.027783    3183 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 14 17:43:44 multinode-396884 kubelet[3183]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 14 17:43:44 multinode-396884 kubelet[3183]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 14 17:43:44 multinode-396884 kubelet[3183]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 14 17:43:44 multinode-396884 kubelet[3183]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 14 17:43:44 multinode-396884 kubelet[3183]: E0914 17:43:44.115980    3183 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726335824114547310,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 17:43:44 multinode-396884 kubelet[3183]: E0914 17:43:44.116060    3183 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726335824114547310,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0914 17:43:51.657430   47666 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19643-8806/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-396884 -n multinode-396884
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-396884 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/StopMultiNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/StopMultiNode (141.29s)

                                                
                                    
TestPreload (211.44s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-829285 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4
E0914 17:49:04.947931   16016 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/functional-764671/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-829285 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4: (2m9.775146265s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-829285 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-829285 image pull gcr.io/k8s-minikube/busybox: (4.065388274s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-829285
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-829285: (6.586896567s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-829285 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-829285 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio: (1m7.995802513s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-829285 image list
preload_test.go:76: Expected to find gcr.io/k8s-minikube/busybox in image list output, instead got 
-- stdout --
	registry.k8s.io/pause:3.7
	registry.k8s.io/kube-scheduler:v1.24.4
	registry.k8s.io/kube-proxy:v1.24.4
	registry.k8s.io/kube-controller-manager:v1.24.4
	registry.k8s.io/kube-apiserver:v1.24.4
	registry.k8s.io/etcd:3.5.3-0
	registry.k8s.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/kube-scheduler:v1.24.4
	k8s.gcr.io/kube-proxy:v1.24.4
	k8s.gcr.io/kube-controller-manager:v1.24.4
	k8s.gcr.io/kube-apiserver:v1.24.4
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	docker.io/kindest/kindnetd:v20220726-ed811e41

                                                
                                                
-- /stdout --
panic.go:629: *** TestPreload FAILED at 2024-09-14 17:51:26.987691307 +0000 UTC m=+4061.509425293
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-829285 -n test-preload-829285
helpers_test.go:244: <<< TestPreload FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPreload]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-829285 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p test-preload-829285 logs -n 25: (1.015027133s)
helpers_test.go:252: TestPreload logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-396884 ssh -n                                                                 | multinode-396884     | jenkins | v1.34.0 | 14 Sep 24 17:35 UTC | 14 Sep 24 17:35 UTC |
	|         | multinode-396884-m03 sudo cat                                                           |                      |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                      |         |         |                     |                     |
	| ssh     | multinode-396884 ssh -n multinode-396884 sudo cat                                       | multinode-396884     | jenkins | v1.34.0 | 14 Sep 24 17:35 UTC | 14 Sep 24 17:35 UTC |
	|         | /home/docker/cp-test_multinode-396884-m03_multinode-396884.txt                          |                      |         |         |                     |                     |
	| cp      | multinode-396884 cp multinode-396884-m03:/home/docker/cp-test.txt                       | multinode-396884     | jenkins | v1.34.0 | 14 Sep 24 17:35 UTC | 14 Sep 24 17:35 UTC |
	|         | multinode-396884-m02:/home/docker/cp-test_multinode-396884-m03_multinode-396884-m02.txt |                      |         |         |                     |                     |
	| ssh     | multinode-396884 ssh -n                                                                 | multinode-396884     | jenkins | v1.34.0 | 14 Sep 24 17:35 UTC | 14 Sep 24 17:35 UTC |
	|         | multinode-396884-m03 sudo cat                                                           |                      |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                      |         |         |                     |                     |
	| ssh     | multinode-396884 ssh -n multinode-396884-m02 sudo cat                                   | multinode-396884     | jenkins | v1.34.0 | 14 Sep 24 17:35 UTC | 14 Sep 24 17:35 UTC |
	|         | /home/docker/cp-test_multinode-396884-m03_multinode-396884-m02.txt                      |                      |         |         |                     |                     |
	| node    | multinode-396884 node stop m03                                                          | multinode-396884     | jenkins | v1.34.0 | 14 Sep 24 17:35 UTC | 14 Sep 24 17:35 UTC |
	| node    | multinode-396884 node start                                                             | multinode-396884     | jenkins | v1.34.0 | 14 Sep 24 17:35 UTC | 14 Sep 24 17:36 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                      |         |         |                     |                     |
	| node    | list -p multinode-396884                                                                | multinode-396884     | jenkins | v1.34.0 | 14 Sep 24 17:36 UTC |                     |
	| stop    | -p multinode-396884                                                                     | multinode-396884     | jenkins | v1.34.0 | 14 Sep 24 17:36 UTC |                     |
	| start   | -p multinode-396884                                                                     | multinode-396884     | jenkins | v1.34.0 | 14 Sep 24 17:38 UTC | 14 Sep 24 17:41 UTC |
	|         | --wait=true -v=8                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                      |         |         |                     |                     |
	| node    | list -p multinode-396884                                                                | multinode-396884     | jenkins | v1.34.0 | 14 Sep 24 17:41 UTC |                     |
	| node    | multinode-396884 node delete                                                            | multinode-396884     | jenkins | v1.34.0 | 14 Sep 24 17:41 UTC | 14 Sep 24 17:41 UTC |
	|         | m03                                                                                     |                      |         |         |                     |                     |
	| stop    | multinode-396884 stop                                                                   | multinode-396884     | jenkins | v1.34.0 | 14 Sep 24 17:41 UTC |                     |
	| start   | -p multinode-396884                                                                     | multinode-396884     | jenkins | v1.34.0 | 14 Sep 24 17:43 UTC | 14 Sep 24 17:47 UTC |
	|         | --wait=true -v=8                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| node    | list -p multinode-396884                                                                | multinode-396884     | jenkins | v1.34.0 | 14 Sep 24 17:47 UTC |                     |
	| start   | -p multinode-396884-m02                                                                 | multinode-396884-m02 | jenkins | v1.34.0 | 14 Sep 24 17:47 UTC |                     |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| start   | -p multinode-396884-m03                                                                 | multinode-396884-m03 | jenkins | v1.34.0 | 14 Sep 24 17:47 UTC | 14 Sep 24 17:47 UTC |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| node    | add -p multinode-396884                                                                 | multinode-396884     | jenkins | v1.34.0 | 14 Sep 24 17:47 UTC |                     |
	| delete  | -p multinode-396884-m03                                                                 | multinode-396884-m03 | jenkins | v1.34.0 | 14 Sep 24 17:47 UTC | 14 Sep 24 17:47 UTC |
	| delete  | -p multinode-396884                                                                     | multinode-396884     | jenkins | v1.34.0 | 14 Sep 24 17:47 UTC | 14 Sep 24 17:47 UTC |
	| start   | -p test-preload-829285                                                                  | test-preload-829285  | jenkins | v1.34.0 | 14 Sep 24 17:47 UTC | 14 Sep 24 17:50 UTC |
	|         | --memory=2200                                                                           |                      |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                                                           |                      |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                                                           |                      |         |         |                     |                     |
	|         |  --container-runtime=crio                                                               |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.4                                                            |                      |         |         |                     |                     |
	| image   | test-preload-829285 image pull                                                          | test-preload-829285  | jenkins | v1.34.0 | 14 Sep 24 17:50 UTC | 14 Sep 24 17:50 UTC |
	|         | gcr.io/k8s-minikube/busybox                                                             |                      |         |         |                     |                     |
	| stop    | -p test-preload-829285                                                                  | test-preload-829285  | jenkins | v1.34.0 | 14 Sep 24 17:50 UTC | 14 Sep 24 17:50 UTC |
	| start   | -p test-preload-829285                                                                  | test-preload-829285  | jenkins | v1.34.0 | 14 Sep 24 17:50 UTC | 14 Sep 24 17:51 UTC |
	|         | --memory=2200                                                                           |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                  |                      |         |         |                     |                     |
	|         | --wait=true --driver=kvm2                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| image   | test-preload-829285 image list                                                          | test-preload-829285  | jenkins | v1.34.0 | 14 Sep 24 17:51 UTC | 14 Sep 24 17:51 UTC |
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/14 17:50:18
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0914 17:50:18.820040   50262 out.go:345] Setting OutFile to fd 1 ...
	I0914 17:50:18.820267   50262 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 17:50:18.820275   50262 out.go:358] Setting ErrFile to fd 2...
	I0914 17:50:18.820280   50262 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 17:50:18.820467   50262 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19643-8806/.minikube/bin
	I0914 17:50:18.820961   50262 out.go:352] Setting JSON to false
	I0914 17:50:18.821833   50262 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":5563,"bootTime":1726330656,"procs":178,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0914 17:50:18.821924   50262 start.go:139] virtualization: kvm guest
	I0914 17:50:18.824166   50262 out.go:177] * [test-preload-829285] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0914 17:50:18.825648   50262 notify.go:220] Checking for updates...
	I0914 17:50:18.825671   50262 out.go:177]   - MINIKUBE_LOCATION=19643
	I0914 17:50:18.827249   50262 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0914 17:50:18.828742   50262 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19643-8806/kubeconfig
	I0914 17:50:18.830072   50262 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19643-8806/.minikube
	I0914 17:50:18.831650   50262 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0914 17:50:18.832991   50262 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0914 17:50:18.834751   50262 config.go:182] Loaded profile config "test-preload-829285": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I0914 17:50:18.835122   50262 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 17:50:18.835164   50262 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 17:50:18.849565   50262 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37163
	I0914 17:50:18.849950   50262 main.go:141] libmachine: () Calling .GetVersion
	I0914 17:50:18.850583   50262 main.go:141] libmachine: Using API Version  1
	I0914 17:50:18.850605   50262 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 17:50:18.850959   50262 main.go:141] libmachine: () Calling .GetMachineName
	I0914 17:50:18.851177   50262 main.go:141] libmachine: (test-preload-829285) Calling .DriverName
	I0914 17:50:18.852921   50262 out.go:177] * Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	I0914 17:50:18.854018   50262 driver.go:394] Setting default libvirt URI to qemu:///system
	I0914 17:50:18.854335   50262 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 17:50:18.854377   50262 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 17:50:18.869012   50262 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40939
	I0914 17:50:18.869612   50262 main.go:141] libmachine: () Calling .GetVersion
	I0914 17:50:18.870072   50262 main.go:141] libmachine: Using API Version  1
	I0914 17:50:18.870095   50262 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 17:50:18.870459   50262 main.go:141] libmachine: () Calling .GetMachineName
	I0914 17:50:18.870631   50262 main.go:141] libmachine: (test-preload-829285) Calling .DriverName
	I0914 17:50:18.905534   50262 out.go:177] * Using the kvm2 driver based on existing profile
	I0914 17:50:18.906731   50262 start.go:297] selected driver: kvm2
	I0914 17:50:18.906750   50262 start.go:901] validating driver "kvm2" against &{Name:test-preload-829285 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19643/minikube-v1.34.0-1726281733-19643-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726281268-19643@sha256:cce8be0c1ac4e3d852132008ef1cc1dcf5b79f708d025db83f146ae65db32e8e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-829285 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.71 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0914 17:50:18.906860   50262 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0914 17:50:18.907612   50262 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 17:50:18.907690   50262 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19643-8806/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0914 17:50:18.922307   50262 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0914 17:50:18.922663   50262 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0914 17:50:18.922700   50262 cni.go:84] Creating CNI manager for ""
	I0914 17:50:18.922745   50262 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0914 17:50:18.922802   50262 start.go:340] cluster config:
	{Name:test-preload-829285 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19643/minikube-v1.34.0-1726281733-19643-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726281268-19643@sha256:cce8be0c1ac4e3d852132008ef1cc1dcf5b79f708d025db83f146ae65db32e8e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-829285 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.71 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0914 17:50:18.922923   50262 iso.go:125] acquiring lock: {Name:mk538f7a4abc7956f82511183088cbfac1e66ebb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 17:50:18.924642   50262 out.go:177] * Starting "test-preload-829285" primary control-plane node in "test-preload-829285" cluster
	I0914 17:50:18.925691   50262 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I0914 17:50:19.026753   50262 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.4/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4
	I0914 17:50:19.026787   50262 cache.go:56] Caching tarball of preloaded images
	I0914 17:50:19.026947   50262 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I0914 17:50:19.028929   50262 out.go:177] * Downloading Kubernetes v1.24.4 preload ...
	I0914 17:50:19.030429   50262 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I0914 17:50:19.131940   50262 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.4/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4?checksum=md5:b2ee0ab83ed99f9e7ff71cb0cf27e8f9 -> /home/jenkins/minikube-integration/19643-8806/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4
	I0914 17:50:30.903450   50262 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I0914 17:50:30.903555   50262 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/19643-8806/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I0914 17:50:31.746523   50262 cache.go:59] Finished verifying existence of preloaded tar for v1.24.4 on crio
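	(As an aside, the preload checksum can be re-verified by hand from the cache path and the ?checksum=md5:... query string in the download URL logged above; a minimal sketch:)
	
	  cd /home/jenkins/minikube-integration/19643-8806/.minikube/cache/preloaded-tarball
	  md5sum preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4
	  # expected digest: b2ee0ab83ed99f9e7ff71cb0cf27e8f9 (taken from the download URL above)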
	I0914 17:50:31.746674   50262 profile.go:143] Saving config to /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/test-preload-829285/config.json ...
	I0914 17:50:31.746922   50262 start.go:360] acquireMachinesLock for test-preload-829285: {Name:mk26748ac63472c3f8b6f3848c12e76160c2970c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0914 17:50:31.746997   50262 start.go:364] duration metric: took 51.812µs to acquireMachinesLock for "test-preload-829285"
	I0914 17:50:31.747017   50262 start.go:96] Skipping create...Using existing machine configuration
	I0914 17:50:31.747025   50262 fix.go:54] fixHost starting: 
	I0914 17:50:31.747317   50262 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 17:50:31.747360   50262 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 17:50:31.762011   50262 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44693
	I0914 17:50:31.762554   50262 main.go:141] libmachine: () Calling .GetVersion
	I0914 17:50:31.762996   50262 main.go:141] libmachine: Using API Version  1
	I0914 17:50:31.763018   50262 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 17:50:31.763354   50262 main.go:141] libmachine: () Calling .GetMachineName
	I0914 17:50:31.763495   50262 main.go:141] libmachine: (test-preload-829285) Calling .DriverName
	I0914 17:50:31.763634   50262 main.go:141] libmachine: (test-preload-829285) Calling .GetState
	I0914 17:50:31.765230   50262 fix.go:112] recreateIfNeeded on test-preload-829285: state=Stopped err=<nil>
	I0914 17:50:31.765254   50262 main.go:141] libmachine: (test-preload-829285) Calling .DriverName
	W0914 17:50:31.765397   50262 fix.go:138] unexpected machine state, will restart: <nil>
	I0914 17:50:31.767854   50262 out.go:177] * Restarting existing kvm2 VM for "test-preload-829285" ...
	I0914 17:50:31.769219   50262 main.go:141] libmachine: (test-preload-829285) Calling .Start
	I0914 17:50:31.769405   50262 main.go:141] libmachine: (test-preload-829285) Ensuring networks are active...
	I0914 17:50:31.770225   50262 main.go:141] libmachine: (test-preload-829285) Ensuring network default is active
	I0914 17:50:31.770639   50262 main.go:141] libmachine: (test-preload-829285) Ensuring network mk-test-preload-829285 is active
	I0914 17:50:31.770994   50262 main.go:141] libmachine: (test-preload-829285) Getting domain xml...
	I0914 17:50:31.771982   50262 main.go:141] libmachine: (test-preload-829285) Creating domain...
	I0914 17:50:32.986506   50262 main.go:141] libmachine: (test-preload-829285) Waiting to get IP...
	I0914 17:50:32.987276   50262 main.go:141] libmachine: (test-preload-829285) DBG | domain test-preload-829285 has defined MAC address 52:54:00:68:24:73 in network mk-test-preload-829285
	I0914 17:50:32.987762   50262 main.go:141] libmachine: (test-preload-829285) DBG | unable to find current IP address of domain test-preload-829285 in network mk-test-preload-829285
	I0914 17:50:32.987827   50262 main.go:141] libmachine: (test-preload-829285) DBG | I0914 17:50:32.987721   50346 retry.go:31] will retry after 297.920954ms: waiting for machine to come up
	I0914 17:50:33.287354   50262 main.go:141] libmachine: (test-preload-829285) DBG | domain test-preload-829285 has defined MAC address 52:54:00:68:24:73 in network mk-test-preload-829285
	I0914 17:50:33.287974   50262 main.go:141] libmachine: (test-preload-829285) DBG | unable to find current IP address of domain test-preload-829285 in network mk-test-preload-829285
	I0914 17:50:33.288000   50262 main.go:141] libmachine: (test-preload-829285) DBG | I0914 17:50:33.287900   50346 retry.go:31] will retry after 344.221901ms: waiting for machine to come up
	I0914 17:50:33.634295   50262 main.go:141] libmachine: (test-preload-829285) DBG | domain test-preload-829285 has defined MAC address 52:54:00:68:24:73 in network mk-test-preload-829285
	I0914 17:50:33.634714   50262 main.go:141] libmachine: (test-preload-829285) DBG | unable to find current IP address of domain test-preload-829285 in network mk-test-preload-829285
	I0914 17:50:33.634739   50262 main.go:141] libmachine: (test-preload-829285) DBG | I0914 17:50:33.634679   50346 retry.go:31] will retry after 299.449964ms: waiting for machine to come up
	I0914 17:50:33.936143   50262 main.go:141] libmachine: (test-preload-829285) DBG | domain test-preload-829285 has defined MAC address 52:54:00:68:24:73 in network mk-test-preload-829285
	I0914 17:50:33.936487   50262 main.go:141] libmachine: (test-preload-829285) DBG | unable to find current IP address of domain test-preload-829285 in network mk-test-preload-829285
	I0914 17:50:33.936511   50262 main.go:141] libmachine: (test-preload-829285) DBG | I0914 17:50:33.936455   50346 retry.go:31] will retry after 367.767676ms: waiting for machine to come up
	I0914 17:50:34.306064   50262 main.go:141] libmachine: (test-preload-829285) DBG | domain test-preload-829285 has defined MAC address 52:54:00:68:24:73 in network mk-test-preload-829285
	I0914 17:50:34.306536   50262 main.go:141] libmachine: (test-preload-829285) DBG | unable to find current IP address of domain test-preload-829285 in network mk-test-preload-829285
	I0914 17:50:34.306565   50262 main.go:141] libmachine: (test-preload-829285) DBG | I0914 17:50:34.306483   50346 retry.go:31] will retry after 721.844787ms: waiting for machine to come up
	I0914 17:50:35.030630   50262 main.go:141] libmachine: (test-preload-829285) DBG | domain test-preload-829285 has defined MAC address 52:54:00:68:24:73 in network mk-test-preload-829285
	I0914 17:50:35.031001   50262 main.go:141] libmachine: (test-preload-829285) DBG | unable to find current IP address of domain test-preload-829285 in network mk-test-preload-829285
	I0914 17:50:35.031075   50262 main.go:141] libmachine: (test-preload-829285) DBG | I0914 17:50:35.030938   50346 retry.go:31] will retry after 773.626722ms: waiting for machine to come up
	I0914 17:50:35.806088   50262 main.go:141] libmachine: (test-preload-829285) DBG | domain test-preload-829285 has defined MAC address 52:54:00:68:24:73 in network mk-test-preload-829285
	I0914 17:50:35.806570   50262 main.go:141] libmachine: (test-preload-829285) DBG | unable to find current IP address of domain test-preload-829285 in network mk-test-preload-829285
	I0914 17:50:35.806612   50262 main.go:141] libmachine: (test-preload-829285) DBG | I0914 17:50:35.806475   50346 retry.go:31] will retry after 1.01122214s: waiting for machine to come up
	I0914 17:50:36.819334   50262 main.go:141] libmachine: (test-preload-829285) DBG | domain test-preload-829285 has defined MAC address 52:54:00:68:24:73 in network mk-test-preload-829285
	I0914 17:50:36.819728   50262 main.go:141] libmachine: (test-preload-829285) DBG | unable to find current IP address of domain test-preload-829285 in network mk-test-preload-829285
	I0914 17:50:36.819786   50262 main.go:141] libmachine: (test-preload-829285) DBG | I0914 17:50:36.819701   50346 retry.go:31] will retry after 1.285236777s: waiting for machine to come up
	I0914 17:50:38.107445   50262 main.go:141] libmachine: (test-preload-829285) DBG | domain test-preload-829285 has defined MAC address 52:54:00:68:24:73 in network mk-test-preload-829285
	I0914 17:50:38.107955   50262 main.go:141] libmachine: (test-preload-829285) DBG | unable to find current IP address of domain test-preload-829285 in network mk-test-preload-829285
	I0914 17:50:38.107983   50262 main.go:141] libmachine: (test-preload-829285) DBG | I0914 17:50:38.107904   50346 retry.go:31] will retry after 1.143031239s: waiting for machine to come up
	I0914 17:50:39.253631   50262 main.go:141] libmachine: (test-preload-829285) DBG | domain test-preload-829285 has defined MAC address 52:54:00:68:24:73 in network mk-test-preload-829285
	I0914 17:50:39.254106   50262 main.go:141] libmachine: (test-preload-829285) DBG | unable to find current IP address of domain test-preload-829285 in network mk-test-preload-829285
	I0914 17:50:39.254130   50262 main.go:141] libmachine: (test-preload-829285) DBG | I0914 17:50:39.254078   50346 retry.go:31] will retry after 2.17821994s: waiting for machine to come up
	I0914 17:50:41.435702   50262 main.go:141] libmachine: (test-preload-829285) DBG | domain test-preload-829285 has defined MAC address 52:54:00:68:24:73 in network mk-test-preload-829285
	I0914 17:50:41.436125   50262 main.go:141] libmachine: (test-preload-829285) DBG | unable to find current IP address of domain test-preload-829285 in network mk-test-preload-829285
	I0914 17:50:41.436169   50262 main.go:141] libmachine: (test-preload-829285) DBG | I0914 17:50:41.436096   50346 retry.go:31] will retry after 1.965627026s: waiting for machine to come up
	I0914 17:50:43.403846   50262 main.go:141] libmachine: (test-preload-829285) DBG | domain test-preload-829285 has defined MAC address 52:54:00:68:24:73 in network mk-test-preload-829285
	I0914 17:50:43.404299   50262 main.go:141] libmachine: (test-preload-829285) DBG | unable to find current IP address of domain test-preload-829285 in network mk-test-preload-829285
	I0914 17:50:43.404320   50262 main.go:141] libmachine: (test-preload-829285) DBG | I0914 17:50:43.404257   50346 retry.go:31] will retry after 2.728236722s: waiting for machine to come up
	I0914 17:50:46.136029   50262 main.go:141] libmachine: (test-preload-829285) DBG | domain test-preload-829285 has defined MAC address 52:54:00:68:24:73 in network mk-test-preload-829285
	I0914 17:50:46.136348   50262 main.go:141] libmachine: (test-preload-829285) DBG | unable to find current IP address of domain test-preload-829285 in network mk-test-preload-829285
	I0914 17:50:46.136378   50262 main.go:141] libmachine: (test-preload-829285) DBG | I0914 17:50:46.136297   50346 retry.go:31] will retry after 4.4738774s: waiting for machine to come up
	I0914 17:50:50.614340   50262 main.go:141] libmachine: (test-preload-829285) DBG | domain test-preload-829285 has defined MAC address 52:54:00:68:24:73 in network mk-test-preload-829285
	I0914 17:50:50.614822   50262 main.go:141] libmachine: (test-preload-829285) Found IP for machine: 192.168.39.71
	I0914 17:50:50.614854   50262 main.go:141] libmachine: (test-preload-829285) DBG | domain test-preload-829285 has current primary IP address 192.168.39.71 and MAC address 52:54:00:68:24:73 in network mk-test-preload-829285
	I0914 17:50:50.614864   50262 main.go:141] libmachine: (test-preload-829285) Reserving static IP address...
	I0914 17:50:50.615274   50262 main.go:141] libmachine: (test-preload-829285) Reserved static IP address: 192.168.39.71
	I0914 17:50:50.615294   50262 main.go:141] libmachine: (test-preload-829285) Waiting for SSH to be available...
	I0914 17:50:50.615317   50262 main.go:141] libmachine: (test-preload-829285) DBG | found host DHCP lease matching {name: "test-preload-829285", mac: "52:54:00:68:24:73", ip: "192.168.39.71"} in network mk-test-preload-829285: {Iface:virbr1 ExpiryTime:2024-09-14 18:50:42 +0000 UTC Type:0 Mac:52:54:00:68:24:73 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:test-preload-829285 Clientid:01:52:54:00:68:24:73}
	I0914 17:50:50.615341   50262 main.go:141] libmachine: (test-preload-829285) DBG | skip adding static IP to network mk-test-preload-829285 - found existing host DHCP lease matching {name: "test-preload-829285", mac: "52:54:00:68:24:73", ip: "192.168.39.71"}
	I0914 17:50:50.615354   50262 main.go:141] libmachine: (test-preload-829285) DBG | Getting to WaitForSSH function...
	I0914 17:50:50.617780   50262 main.go:141] libmachine: (test-preload-829285) DBG | domain test-preload-829285 has defined MAC address 52:54:00:68:24:73 in network mk-test-preload-829285
	I0914 17:50:50.618088   50262 main.go:141] libmachine: (test-preload-829285) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:24:73", ip: ""} in network mk-test-preload-829285: {Iface:virbr1 ExpiryTime:2024-09-14 18:50:42 +0000 UTC Type:0 Mac:52:54:00:68:24:73 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:test-preload-829285 Clientid:01:52:54:00:68:24:73}
	I0914 17:50:50.618110   50262 main.go:141] libmachine: (test-preload-829285) DBG | domain test-preload-829285 has defined IP address 192.168.39.71 and MAC address 52:54:00:68:24:73 in network mk-test-preload-829285
	I0914 17:50:50.618275   50262 main.go:141] libmachine: (test-preload-829285) DBG | Using SSH client type: external
	I0914 17:50:50.618293   50262 main.go:141] libmachine: (test-preload-829285) DBG | Using SSH private key: /home/jenkins/minikube-integration/19643-8806/.minikube/machines/test-preload-829285/id_rsa (-rw-------)
	I0914 17:50:50.618331   50262 main.go:141] libmachine: (test-preload-829285) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.71 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19643-8806/.minikube/machines/test-preload-829285/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0914 17:50:50.618346   50262 main.go:141] libmachine: (test-preload-829285) DBG | About to run SSH command:
	I0914 17:50:50.618360   50262 main.go:141] libmachine: (test-preload-829285) DBG | exit 0
	I0914 17:50:50.737998   50262 main.go:141] libmachine: (test-preload-829285) DBG | SSH cmd err, output: <nil>: 
	I0914 17:50:50.738417   50262 main.go:141] libmachine: (test-preload-829285) Calling .GetConfigRaw
	I0914 17:50:50.738980   50262 main.go:141] libmachine: (test-preload-829285) Calling .GetIP
	I0914 17:50:50.741663   50262 main.go:141] libmachine: (test-preload-829285) DBG | domain test-preload-829285 has defined MAC address 52:54:00:68:24:73 in network mk-test-preload-829285
	I0914 17:50:50.741941   50262 main.go:141] libmachine: (test-preload-829285) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:24:73", ip: ""} in network mk-test-preload-829285: {Iface:virbr1 ExpiryTime:2024-09-14 18:50:42 +0000 UTC Type:0 Mac:52:54:00:68:24:73 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:test-preload-829285 Clientid:01:52:54:00:68:24:73}
	I0914 17:50:50.741964   50262 main.go:141] libmachine: (test-preload-829285) DBG | domain test-preload-829285 has defined IP address 192.168.39.71 and MAC address 52:54:00:68:24:73 in network mk-test-preload-829285
	I0914 17:50:50.742207   50262 profile.go:143] Saving config to /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/test-preload-829285/config.json ...
	I0914 17:50:50.742405   50262 machine.go:93] provisionDockerMachine start ...
	I0914 17:50:50.742424   50262 main.go:141] libmachine: (test-preload-829285) Calling .DriverName
	I0914 17:50:50.742612   50262 main.go:141] libmachine: (test-preload-829285) Calling .GetSSHHostname
	I0914 17:50:50.744725   50262 main.go:141] libmachine: (test-preload-829285) DBG | domain test-preload-829285 has defined MAC address 52:54:00:68:24:73 in network mk-test-preload-829285
	I0914 17:50:50.745036   50262 main.go:141] libmachine: (test-preload-829285) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:24:73", ip: ""} in network mk-test-preload-829285: {Iface:virbr1 ExpiryTime:2024-09-14 18:50:42 +0000 UTC Type:0 Mac:52:54:00:68:24:73 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:test-preload-829285 Clientid:01:52:54:00:68:24:73}
	I0914 17:50:50.745057   50262 main.go:141] libmachine: (test-preload-829285) DBG | domain test-preload-829285 has defined IP address 192.168.39.71 and MAC address 52:54:00:68:24:73 in network mk-test-preload-829285
	I0914 17:50:50.745300   50262 main.go:141] libmachine: (test-preload-829285) Calling .GetSSHPort
	I0914 17:50:50.745476   50262 main.go:141] libmachine: (test-preload-829285) Calling .GetSSHKeyPath
	I0914 17:50:50.745683   50262 main.go:141] libmachine: (test-preload-829285) Calling .GetSSHKeyPath
	I0914 17:50:50.745836   50262 main.go:141] libmachine: (test-preload-829285) Calling .GetSSHUsername
	I0914 17:50:50.746087   50262 main.go:141] libmachine: Using SSH client type: native
	I0914 17:50:50.746329   50262 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.71 22 <nil> <nil>}
	I0914 17:50:50.746349   50262 main.go:141] libmachine: About to run SSH command:
	hostname
	I0914 17:50:50.842245   50262 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0914 17:50:50.842276   50262 main.go:141] libmachine: (test-preload-829285) Calling .GetMachineName
	I0914 17:50:50.842572   50262 buildroot.go:166] provisioning hostname "test-preload-829285"
	I0914 17:50:50.842592   50262 main.go:141] libmachine: (test-preload-829285) Calling .GetMachineName
	I0914 17:50:50.842858   50262 main.go:141] libmachine: (test-preload-829285) Calling .GetSSHHostname
	I0914 17:50:50.845454   50262 main.go:141] libmachine: (test-preload-829285) DBG | domain test-preload-829285 has defined MAC address 52:54:00:68:24:73 in network mk-test-preload-829285
	I0914 17:50:50.845752   50262 main.go:141] libmachine: (test-preload-829285) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:24:73", ip: ""} in network mk-test-preload-829285: {Iface:virbr1 ExpiryTime:2024-09-14 18:50:42 +0000 UTC Type:0 Mac:52:54:00:68:24:73 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:test-preload-829285 Clientid:01:52:54:00:68:24:73}
	I0914 17:50:50.845785   50262 main.go:141] libmachine: (test-preload-829285) DBG | domain test-preload-829285 has defined IP address 192.168.39.71 and MAC address 52:54:00:68:24:73 in network mk-test-preload-829285
	I0914 17:50:50.845907   50262 main.go:141] libmachine: (test-preload-829285) Calling .GetSSHPort
	I0914 17:50:50.846061   50262 main.go:141] libmachine: (test-preload-829285) Calling .GetSSHKeyPath
	I0914 17:50:50.846190   50262 main.go:141] libmachine: (test-preload-829285) Calling .GetSSHKeyPath
	I0914 17:50:50.846296   50262 main.go:141] libmachine: (test-preload-829285) Calling .GetSSHUsername
	I0914 17:50:50.846430   50262 main.go:141] libmachine: Using SSH client type: native
	I0914 17:50:50.846717   50262 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.71 22 <nil> <nil>}
	I0914 17:50:50.846736   50262 main.go:141] libmachine: About to run SSH command:
	sudo hostname test-preload-829285 && echo "test-preload-829285" | sudo tee /etc/hostname
	I0914 17:50:50.955580   50262 main.go:141] libmachine: SSH cmd err, output: <nil>: test-preload-829285
	
	I0914 17:50:50.955609   50262 main.go:141] libmachine: (test-preload-829285) Calling .GetSSHHostname
	I0914 17:50:50.958644   50262 main.go:141] libmachine: (test-preload-829285) DBG | domain test-preload-829285 has defined MAC address 52:54:00:68:24:73 in network mk-test-preload-829285
	I0914 17:50:50.959068   50262 main.go:141] libmachine: (test-preload-829285) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:24:73", ip: ""} in network mk-test-preload-829285: {Iface:virbr1 ExpiryTime:2024-09-14 18:50:42 +0000 UTC Type:0 Mac:52:54:00:68:24:73 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:test-preload-829285 Clientid:01:52:54:00:68:24:73}
	I0914 17:50:50.959099   50262 main.go:141] libmachine: (test-preload-829285) DBG | domain test-preload-829285 has defined IP address 192.168.39.71 and MAC address 52:54:00:68:24:73 in network mk-test-preload-829285
	I0914 17:50:50.959303   50262 main.go:141] libmachine: (test-preload-829285) Calling .GetSSHPort
	I0914 17:50:50.959501   50262 main.go:141] libmachine: (test-preload-829285) Calling .GetSSHKeyPath
	I0914 17:50:50.959661   50262 main.go:141] libmachine: (test-preload-829285) Calling .GetSSHKeyPath
	I0914 17:50:50.959799   50262 main.go:141] libmachine: (test-preload-829285) Calling .GetSSHUsername
	I0914 17:50:50.959943   50262 main.go:141] libmachine: Using SSH client type: native
	I0914 17:50:50.960112   50262 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.71 22 <nil> <nil>}
	I0914 17:50:50.960127   50262 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\stest-preload-829285' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 test-preload-829285/g' /etc/hosts;
				else 
					echo '127.0.1.1 test-preload-829285' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0914 17:50:51.062705   50262 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0914 17:50:51.062740   50262 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19643-8806/.minikube CaCertPath:/home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19643-8806/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19643-8806/.minikube}
	I0914 17:50:51.062790   50262 buildroot.go:174] setting up certificates
	I0914 17:50:51.062802   50262 provision.go:84] configureAuth start
	I0914 17:50:51.062811   50262 main.go:141] libmachine: (test-preload-829285) Calling .GetMachineName
	I0914 17:50:51.063143   50262 main.go:141] libmachine: (test-preload-829285) Calling .GetIP
	I0914 17:50:51.066352   50262 main.go:141] libmachine: (test-preload-829285) DBG | domain test-preload-829285 has defined MAC address 52:54:00:68:24:73 in network mk-test-preload-829285
	I0914 17:50:51.066666   50262 main.go:141] libmachine: (test-preload-829285) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:24:73", ip: ""} in network mk-test-preload-829285: {Iface:virbr1 ExpiryTime:2024-09-14 18:50:42 +0000 UTC Type:0 Mac:52:54:00:68:24:73 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:test-preload-829285 Clientid:01:52:54:00:68:24:73}
	I0914 17:50:51.066690   50262 main.go:141] libmachine: (test-preload-829285) DBG | domain test-preload-829285 has defined IP address 192.168.39.71 and MAC address 52:54:00:68:24:73 in network mk-test-preload-829285
	I0914 17:50:51.066866   50262 main.go:141] libmachine: (test-preload-829285) Calling .GetSSHHostname
	I0914 17:50:51.069173   50262 main.go:141] libmachine: (test-preload-829285) DBG | domain test-preload-829285 has defined MAC address 52:54:00:68:24:73 in network mk-test-preload-829285
	I0914 17:50:51.069624   50262 main.go:141] libmachine: (test-preload-829285) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:24:73", ip: ""} in network mk-test-preload-829285: {Iface:virbr1 ExpiryTime:2024-09-14 18:50:42 +0000 UTC Type:0 Mac:52:54:00:68:24:73 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:test-preload-829285 Clientid:01:52:54:00:68:24:73}
	I0914 17:50:51.069656   50262 main.go:141] libmachine: (test-preload-829285) DBG | domain test-preload-829285 has defined IP address 192.168.39.71 and MAC address 52:54:00:68:24:73 in network mk-test-preload-829285
	I0914 17:50:51.069721   50262 provision.go:143] copyHostCerts
	I0914 17:50:51.069782   50262 exec_runner.go:144] found /home/jenkins/minikube-integration/19643-8806/.minikube/ca.pem, removing ...
	I0914 17:50:51.069792   50262 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19643-8806/.minikube/ca.pem
	I0914 17:50:51.069869   50262 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19643-8806/.minikube/ca.pem (1082 bytes)
	I0914 17:50:51.069990   50262 exec_runner.go:144] found /home/jenkins/minikube-integration/19643-8806/.minikube/cert.pem, removing ...
	I0914 17:50:51.070001   50262 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19643-8806/.minikube/cert.pem
	I0914 17:50:51.070030   50262 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19643-8806/.minikube/cert.pem (1123 bytes)
	I0914 17:50:51.070092   50262 exec_runner.go:144] found /home/jenkins/minikube-integration/19643-8806/.minikube/key.pem, removing ...
	I0914 17:50:51.070100   50262 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19643-8806/.minikube/key.pem
	I0914 17:50:51.070126   50262 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19643-8806/.minikube/key.pem (1675 bytes)
	I0914 17:50:51.070215   50262 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19643-8806/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca-key.pem org=jenkins.test-preload-829285 san=[127.0.0.1 192.168.39.71 localhost minikube test-preload-829285]
	I0914 17:50:51.160804   50262 provision.go:177] copyRemoteCerts
	I0914 17:50:51.160872   50262 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0914 17:50:51.160897   50262 main.go:141] libmachine: (test-preload-829285) Calling .GetSSHHostname
	I0914 17:50:51.163648   50262 main.go:141] libmachine: (test-preload-829285) DBG | domain test-preload-829285 has defined MAC address 52:54:00:68:24:73 in network mk-test-preload-829285
	I0914 17:50:51.164085   50262 main.go:141] libmachine: (test-preload-829285) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:24:73", ip: ""} in network mk-test-preload-829285: {Iface:virbr1 ExpiryTime:2024-09-14 18:50:42 +0000 UTC Type:0 Mac:52:54:00:68:24:73 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:test-preload-829285 Clientid:01:52:54:00:68:24:73}
	I0914 17:50:51.164110   50262 main.go:141] libmachine: (test-preload-829285) DBG | domain test-preload-829285 has defined IP address 192.168.39.71 and MAC address 52:54:00:68:24:73 in network mk-test-preload-829285
	I0914 17:50:51.164353   50262 main.go:141] libmachine: (test-preload-829285) Calling .GetSSHPort
	I0914 17:50:51.164509   50262 main.go:141] libmachine: (test-preload-829285) Calling .GetSSHKeyPath
	I0914 17:50:51.164666   50262 main.go:141] libmachine: (test-preload-829285) Calling .GetSSHUsername
	I0914 17:50:51.164781   50262 sshutil.go:53] new ssh client: &{IP:192.168.39.71 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/test-preload-829285/id_rsa Username:docker}
	I0914 17:50:51.243472   50262 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0914 17:50:51.265709   50262 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0914 17:50:51.288594   50262 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0914 17:50:51.310729   50262 provision.go:87] duration metric: took 247.916289ms to configureAuth
	I0914 17:50:51.310756   50262 buildroot.go:189] setting minikube options for container-runtime
	I0914 17:50:51.310948   50262 config.go:182] Loaded profile config "test-preload-829285": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I0914 17:50:51.311038   50262 main.go:141] libmachine: (test-preload-829285) Calling .GetSSHHostname
	I0914 17:50:51.313704   50262 main.go:141] libmachine: (test-preload-829285) DBG | domain test-preload-829285 has defined MAC address 52:54:00:68:24:73 in network mk-test-preload-829285
	I0914 17:50:51.314023   50262 main.go:141] libmachine: (test-preload-829285) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:24:73", ip: ""} in network mk-test-preload-829285: {Iface:virbr1 ExpiryTime:2024-09-14 18:50:42 +0000 UTC Type:0 Mac:52:54:00:68:24:73 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:test-preload-829285 Clientid:01:52:54:00:68:24:73}
	I0914 17:50:51.314067   50262 main.go:141] libmachine: (test-preload-829285) DBG | domain test-preload-829285 has defined IP address 192.168.39.71 and MAC address 52:54:00:68:24:73 in network mk-test-preload-829285
	I0914 17:50:51.314240   50262 main.go:141] libmachine: (test-preload-829285) Calling .GetSSHPort
	I0914 17:50:51.314387   50262 main.go:141] libmachine: (test-preload-829285) Calling .GetSSHKeyPath
	I0914 17:50:51.314526   50262 main.go:141] libmachine: (test-preload-829285) Calling .GetSSHKeyPath
	I0914 17:50:51.314688   50262 main.go:141] libmachine: (test-preload-829285) Calling .GetSSHUsername
	I0914 17:50:51.314842   50262 main.go:141] libmachine: Using SSH client type: native
	I0914 17:50:51.315001   50262 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.71 22 <nil> <nil>}
	I0914 17:50:51.315016   50262 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0914 17:50:51.520905   50262 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0914 17:50:51.520926   50262 machine.go:96] duration metric: took 778.508516ms to provisionDockerMachine
	I0914 17:50:51.520938   50262 start.go:293] postStartSetup for "test-preload-829285" (driver="kvm2")
	I0914 17:50:51.520947   50262 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0914 17:50:51.520961   50262 main.go:141] libmachine: (test-preload-829285) Calling .DriverName
	I0914 17:50:51.521253   50262 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0914 17:50:51.521277   50262 main.go:141] libmachine: (test-preload-829285) Calling .GetSSHHostname
	I0914 17:50:51.523857   50262 main.go:141] libmachine: (test-preload-829285) DBG | domain test-preload-829285 has defined MAC address 52:54:00:68:24:73 in network mk-test-preload-829285
	I0914 17:50:51.524206   50262 main.go:141] libmachine: (test-preload-829285) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:24:73", ip: ""} in network mk-test-preload-829285: {Iface:virbr1 ExpiryTime:2024-09-14 18:50:42 +0000 UTC Type:0 Mac:52:54:00:68:24:73 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:test-preload-829285 Clientid:01:52:54:00:68:24:73}
	I0914 17:50:51.524226   50262 main.go:141] libmachine: (test-preload-829285) DBG | domain test-preload-829285 has defined IP address 192.168.39.71 and MAC address 52:54:00:68:24:73 in network mk-test-preload-829285
	I0914 17:50:51.524347   50262 main.go:141] libmachine: (test-preload-829285) Calling .GetSSHPort
	I0914 17:50:51.524525   50262 main.go:141] libmachine: (test-preload-829285) Calling .GetSSHKeyPath
	I0914 17:50:51.524714   50262 main.go:141] libmachine: (test-preload-829285) Calling .GetSSHUsername
	I0914 17:50:51.524878   50262 sshutil.go:53] new ssh client: &{IP:192.168.39.71 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/test-preload-829285/id_rsa Username:docker}
	I0914 17:50:51.604713   50262 ssh_runner.go:195] Run: cat /etc/os-release
	I0914 17:50:51.608588   50262 info.go:137] Remote host: Buildroot 2023.02.9
	I0914 17:50:51.608612   50262 filesync.go:126] Scanning /home/jenkins/minikube-integration/19643-8806/.minikube/addons for local assets ...
	I0914 17:50:51.608683   50262 filesync.go:126] Scanning /home/jenkins/minikube-integration/19643-8806/.minikube/files for local assets ...
	I0914 17:50:51.608790   50262 filesync.go:149] local asset: /home/jenkins/minikube-integration/19643-8806/.minikube/files/etc/ssl/certs/160162.pem -> 160162.pem in /etc/ssl/certs
	I0914 17:50:51.608905   50262 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0914 17:50:51.617571   50262 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/files/etc/ssl/certs/160162.pem --> /etc/ssl/certs/160162.pem (1708 bytes)
	I0914 17:50:51.640484   50262 start.go:296] duration metric: took 119.530832ms for postStartSetup
	I0914 17:50:51.640525   50262 fix.go:56] duration metric: took 19.893499208s for fixHost
	I0914 17:50:51.640546   50262 main.go:141] libmachine: (test-preload-829285) Calling .GetSSHHostname
	I0914 17:50:51.643408   50262 main.go:141] libmachine: (test-preload-829285) DBG | domain test-preload-829285 has defined MAC address 52:54:00:68:24:73 in network mk-test-preload-829285
	I0914 17:50:51.643728   50262 main.go:141] libmachine: (test-preload-829285) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:24:73", ip: ""} in network mk-test-preload-829285: {Iface:virbr1 ExpiryTime:2024-09-14 18:50:42 +0000 UTC Type:0 Mac:52:54:00:68:24:73 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:test-preload-829285 Clientid:01:52:54:00:68:24:73}
	I0914 17:50:51.643753   50262 main.go:141] libmachine: (test-preload-829285) DBG | domain test-preload-829285 has defined IP address 192.168.39.71 and MAC address 52:54:00:68:24:73 in network mk-test-preload-829285
	I0914 17:50:51.643971   50262 main.go:141] libmachine: (test-preload-829285) Calling .GetSSHPort
	I0914 17:50:51.644207   50262 main.go:141] libmachine: (test-preload-829285) Calling .GetSSHKeyPath
	I0914 17:50:51.644404   50262 main.go:141] libmachine: (test-preload-829285) Calling .GetSSHKeyPath
	I0914 17:50:51.644511   50262 main.go:141] libmachine: (test-preload-829285) Calling .GetSSHUsername
	I0914 17:50:51.644663   50262 main.go:141] libmachine: Using SSH client type: native
	I0914 17:50:51.644856   50262 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.71 22 <nil> <nil>}
	I0914 17:50:51.644868   50262 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0914 17:50:51.742750   50262 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726336251.718124732
	
	I0914 17:50:51.742771   50262 fix.go:216] guest clock: 1726336251.718124732
	I0914 17:50:51.742779   50262 fix.go:229] Guest: 2024-09-14 17:50:51.718124732 +0000 UTC Remote: 2024-09-14 17:50:51.64052915 +0000 UTC m=+32.853372953 (delta=77.595582ms)
	I0914 17:50:51.742815   50262 fix.go:200] guest clock delta is within tolerance: 77.595582ms
	I0914 17:50:51.742825   50262 start.go:83] releasing machines lock for "test-preload-829285", held for 19.995809621s
	I0914 17:50:51.742847   50262 main.go:141] libmachine: (test-preload-829285) Calling .DriverName
	I0914 17:50:51.743119   50262 main.go:141] libmachine: (test-preload-829285) Calling .GetIP
	I0914 17:50:51.745561   50262 main.go:141] libmachine: (test-preload-829285) DBG | domain test-preload-829285 has defined MAC address 52:54:00:68:24:73 in network mk-test-preload-829285
	I0914 17:50:51.745922   50262 main.go:141] libmachine: (test-preload-829285) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:24:73", ip: ""} in network mk-test-preload-829285: {Iface:virbr1 ExpiryTime:2024-09-14 18:50:42 +0000 UTC Type:0 Mac:52:54:00:68:24:73 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:test-preload-829285 Clientid:01:52:54:00:68:24:73}
	I0914 17:50:51.745940   50262 main.go:141] libmachine: (test-preload-829285) DBG | domain test-preload-829285 has defined IP address 192.168.39.71 and MAC address 52:54:00:68:24:73 in network mk-test-preload-829285
	I0914 17:50:51.746105   50262 main.go:141] libmachine: (test-preload-829285) Calling .DriverName
	I0914 17:50:51.746534   50262 main.go:141] libmachine: (test-preload-829285) Calling .DriverName
	I0914 17:50:51.746709   50262 main.go:141] libmachine: (test-preload-829285) Calling .DriverName
	I0914 17:50:51.746799   50262 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0914 17:50:51.746843   50262 main.go:141] libmachine: (test-preload-829285) Calling .GetSSHHostname
	I0914 17:50:51.746893   50262 ssh_runner.go:195] Run: cat /version.json
	I0914 17:50:51.746916   50262 main.go:141] libmachine: (test-preload-829285) Calling .GetSSHHostname
	I0914 17:50:51.749476   50262 main.go:141] libmachine: (test-preload-829285) DBG | domain test-preload-829285 has defined MAC address 52:54:00:68:24:73 in network mk-test-preload-829285
	I0914 17:50:51.749759   50262 main.go:141] libmachine: (test-preload-829285) DBG | domain test-preload-829285 has defined MAC address 52:54:00:68:24:73 in network mk-test-preload-829285
	I0914 17:50:51.749852   50262 main.go:141] libmachine: (test-preload-829285) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:24:73", ip: ""} in network mk-test-preload-829285: {Iface:virbr1 ExpiryTime:2024-09-14 18:50:42 +0000 UTC Type:0 Mac:52:54:00:68:24:73 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:test-preload-829285 Clientid:01:52:54:00:68:24:73}
	I0914 17:50:51.749874   50262 main.go:141] libmachine: (test-preload-829285) DBG | domain test-preload-829285 has defined IP address 192.168.39.71 and MAC address 52:54:00:68:24:73 in network mk-test-preload-829285
	I0914 17:50:51.749999   50262 main.go:141] libmachine: (test-preload-829285) Calling .GetSSHPort
	I0914 17:50:51.750129   50262 main.go:141] libmachine: (test-preload-829285) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:24:73", ip: ""} in network mk-test-preload-829285: {Iface:virbr1 ExpiryTime:2024-09-14 18:50:42 +0000 UTC Type:0 Mac:52:54:00:68:24:73 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:test-preload-829285 Clientid:01:52:54:00:68:24:73}
	I0914 17:50:51.750146   50262 main.go:141] libmachine: (test-preload-829285) Calling .GetSSHKeyPath
	I0914 17:50:51.750176   50262 main.go:141] libmachine: (test-preload-829285) DBG | domain test-preload-829285 has defined IP address 192.168.39.71 and MAC address 52:54:00:68:24:73 in network mk-test-preload-829285
	I0914 17:50:51.750371   50262 main.go:141] libmachine: (test-preload-829285) Calling .GetSSHUsername
	I0914 17:50:51.750379   50262 main.go:141] libmachine: (test-preload-829285) Calling .GetSSHPort
	I0914 17:50:51.750523   50262 main.go:141] libmachine: (test-preload-829285) Calling .GetSSHKeyPath
	I0914 17:50:51.750531   50262 sshutil.go:53] new ssh client: &{IP:192.168.39.71 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/test-preload-829285/id_rsa Username:docker}
	I0914 17:50:51.750638   50262 main.go:141] libmachine: (test-preload-829285) Calling .GetSSHUsername
	I0914 17:50:51.750764   50262 sshutil.go:53] new ssh client: &{IP:192.168.39.71 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/test-preload-829285/id_rsa Username:docker}
	I0914 17:50:51.852441   50262 ssh_runner.go:195] Run: systemctl --version
	I0914 17:50:51.857890   50262 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0914 17:50:51.999421   50262 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0914 17:50:52.004608   50262 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0914 17:50:52.004679   50262 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0914 17:50:52.019595   50262 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0914 17:50:52.019618   50262 start.go:495] detecting cgroup driver to use...
	I0914 17:50:52.019685   50262 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0914 17:50:52.035856   50262 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0914 17:50:52.049068   50262 docker.go:217] disabling cri-docker service (if available) ...
	I0914 17:50:52.049140   50262 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0914 17:50:52.062456   50262 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0914 17:50:52.075449   50262 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0914 17:50:52.195006   50262 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0914 17:50:52.375307   50262 docker.go:233] disabling docker service ...
	I0914 17:50:52.375389   50262 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0914 17:50:52.389030   50262 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0914 17:50:52.402205   50262 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0914 17:50:52.508688   50262 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0914 17:50:52.610971   50262 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0914 17:50:52.624855   50262 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0914 17:50:52.642735   50262 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.7" pause image...
	I0914 17:50:52.642794   50262 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.7"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 17:50:52.653144   50262 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0914 17:50:52.653218   50262 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 17:50:52.663545   50262 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 17:50:52.673793   50262 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 17:50:52.684047   50262 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0914 17:50:52.694249   50262 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 17:50:52.703963   50262 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 17:50:52.719932   50262 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
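	(Taken together, the sed edits above should leave /etc/crio/crio.conf.d/02-crio.conf with the values sketched below. The file itself is not captured in this log, so the expected lines are inferred from the commands, not observed; one could confirm them over SSH into the node:)
	
	  sudo grep -nE 'pause_image|cgroup_manager|conmon_cgroup|default_sysctls|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf
	  # the relevant settings should read roughly:
	  #   pause_image = "registry.k8s.io/pause:3.7"
	  #   cgroup_manager = "cgroupfs"
	  #   conmon_cgroup = "pod"
	  #   default_sysctls = [
	  #     "net.ipv4.ip_unprivileged_port_start=0",
	  #   ]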
	I0914 17:50:52.729869   50262 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0914 17:50:52.745679   50262 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0914 17:50:52.745754   50262 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0914 17:50:52.757434   50262 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0914 17:50:52.766662   50262 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 17:50:52.868919   50262 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0914 17:50:52.954759   50262 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0914 17:50:52.954839   50262 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0914 17:50:52.959483   50262 start.go:563] Will wait 60s for crictl version
	I0914 17:50:52.959541   50262 ssh_runner.go:195] Run: which crictl
	I0914 17:50:52.963188   50262 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0914 17:50:52.998183   50262 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0914 17:50:52.998266   50262 ssh_runner.go:195] Run: crio --version
	I0914 17:50:53.025009   50262 ssh_runner.go:195] Run: crio --version
	I0914 17:50:53.056985   50262 out.go:177] * Preparing Kubernetes v1.24.4 on CRI-O 1.29.1 ...
	I0914 17:50:53.058173   50262 main.go:141] libmachine: (test-preload-829285) Calling .GetIP
	I0914 17:50:53.061184   50262 main.go:141] libmachine: (test-preload-829285) DBG | domain test-preload-829285 has defined MAC address 52:54:00:68:24:73 in network mk-test-preload-829285
	I0914 17:50:53.061681   50262 main.go:141] libmachine: (test-preload-829285) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:24:73", ip: ""} in network mk-test-preload-829285: {Iface:virbr1 ExpiryTime:2024-09-14 18:50:42 +0000 UTC Type:0 Mac:52:54:00:68:24:73 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:test-preload-829285 Clientid:01:52:54:00:68:24:73}
	I0914 17:50:53.061717   50262 main.go:141] libmachine: (test-preload-829285) DBG | domain test-preload-829285 has defined IP address 192.168.39.71 and MAC address 52:54:00:68:24:73 in network mk-test-preload-829285
	I0914 17:50:53.061963   50262 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0914 17:50:53.066072   50262 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0914 17:50:53.079755   50262 kubeadm.go:883] updating cluster {Name:test-preload-829285 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19643/minikube-v1.34.0-1726281733-19643-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726281268-19643@sha256:cce8be0c1ac4e3d852132008ef1cc1dcf5b79f708d025db83f146ae65db32e8e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:
v1.24.4 ClusterName:test-preload-829285 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.71 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker
MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0914 17:50:53.079857   50262 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I0914 17:50:53.079897   50262 ssh_runner.go:195] Run: sudo crictl images --output json
	I0914 17:50:53.121914   50262 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.24.4". assuming images are not preloaded.
	I0914 17:50:53.121978   50262 ssh_runner.go:195] Run: which lz4
	I0914 17:50:53.125661   50262 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0914 17:50:53.129489   50262 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0914 17:50:53.129522   50262 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (459355427 bytes)
	I0914 17:50:54.537371   50262 crio.go:462] duration metric: took 1.411732917s to copy over tarball
	I0914 17:50:54.537438   50262 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0914 17:50:56.916075   50262 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.378605967s)
	I0914 17:50:56.916109   50262 crio.go:469] duration metric: took 2.378712224s to extract the tarball
	I0914 17:50:56.916116   50262 ssh_runner.go:146] rm: /preloaded.tar.lz4
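Because no v1.24.4 images were preloaded, minikube copies the ~459 MB preload tarball onto the guest and unpacks it into /var. The commands below are the same stat/tar/cleanup invocations from the log, shown as a standalone sketch (the on-guest path /preloaded.tar.lz4 is the one used above):
	stat -c "%s %y" /preloaded.tar.lz4
	sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	sudo rm /preloaded.tar.lz4
	# re-check what the runtime now sees
	sudo crictl images --output json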
	I0914 17:50:56.957791   50262 ssh_runner.go:195] Run: sudo crictl images --output json
	I0914 17:50:56.998055   50262 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.24.4". assuming images are not preloaded.
	I0914 17:50:56.998077   50262 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.4 registry.k8s.io/kube-controller-manager:v1.24.4 registry.k8s.io/kube-scheduler:v1.24.4 registry.k8s.io/kube-proxy:v1.24.4 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0914 17:50:56.998186   50262 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.4
	I0914 17:50:56.998212   50262 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I0914 17:50:56.998222   50262 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0914 17:50:56.998245   50262 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0914 17:50:56.998240   50262 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.4
	I0914 17:50:56.998267   50262 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0914 17:50:56.998243   50262 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.4
	I0914 17:50:56.998143   50262 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 17:50:56.999746   50262 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.4
	I0914 17:50:56.999753   50262 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.4
	I0914 17:50:56.999767   50262 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.4
	I0914 17:50:56.999750   50262 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 17:50:56.999749   50262 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0914 17:50:56.999748   50262 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0914 17:50:56.999797   50262 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0914 17:50:56.999757   50262 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0914 17:50:57.219113   50262 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.4
	I0914 17:50:57.228946   50262 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0914 17:50:57.234257   50262 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0914 17:50:57.239007   50262 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.4
	I0914 17:50:57.241649   50262 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.4
	I0914 17:50:57.254089   50262 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0914 17:50:57.286982   50262 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.4" needs transfer: "registry.k8s.io/kube-proxy:v1.24.4" does not exist at hash "7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7" in container runtime
	I0914 17:50:57.287028   50262 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.24.4
	I0914 17:50:57.287098   50262 ssh_runner.go:195] Run: which crictl
	I0914 17:50:57.294090   50262 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.4
	I0914 17:50:57.363096   50262 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "221177c6082a88ea4f6240ab2450d540955ac6f4d5454f0e15751b653ebda165" in container runtime
	I0914 17:50:57.363141   50262 cri.go:218] Removing image: registry.k8s.io/pause:3.7
	I0914 17:50:57.363250   50262 ssh_runner.go:195] Run: which crictl
	I0914 17:50:57.375929   50262 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03" in container runtime
	I0914 17:50:57.375977   50262 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0914 17:50:57.376024   50262 ssh_runner.go:195] Run: which crictl
	I0914 17:50:57.377043   50262 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.4" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.4" does not exist at hash "6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d" in container runtime
	I0914 17:50:57.377082   50262 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.24.4
	I0914 17:50:57.377104   50262 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.4" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.4" does not exist at hash "03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9" in container runtime
	I0914 17:50:57.377141   50262 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.24.4
	I0914 17:50:57.377188   50262 ssh_runner.go:195] Run: which crictl
	I0914 17:50:57.377116   50262 ssh_runner.go:195] Run: which crictl
	I0914 17:50:57.399083   50262 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b" in container runtime
	I0914 17:50:57.399125   50262 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0914 17:50:57.399181   50262 ssh_runner.go:195] Run: which crictl
	I0914 17:50:57.399184   50262 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.4
	I0914 17:50:57.413599   50262 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.4" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.4" does not exist at hash "1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48" in container runtime
	I0914 17:50:57.413650   50262 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0914 17:50:57.413703   50262 ssh_runner.go:195] Run: which crictl
	I0914 17:50:57.413728   50262 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7
	I0914 17:50:57.413782   50262 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0914 17:50:57.413809   50262 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.4
	I0914 17:50:57.413849   50262 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.4
	I0914 17:50:57.413873   50262 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0
	I0914 17:50:57.474632   50262 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.4
	I0914 17:50:57.549395   50262 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0
	I0914 17:50:57.549418   50262 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.4
	I0914 17:50:57.549438   50262 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.4
	I0914 17:50:57.549532   50262 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7
	I0914 17:50:57.549638   50262 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.4
	I0914 17:50:57.549706   50262 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0914 17:50:57.563446   50262 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.4
	I0914 17:50:57.703391   50262 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0
	I0914 17:50:57.703403   50262 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.4
	I0914 17:50:57.722515   50262 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.4
	I0914 17:50:57.722590   50262 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7
	I0914 17:50:57.722655   50262 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.4
	I0914 17:50:57.722731   50262 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0914 17:50:57.722767   50262 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19643-8806/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.24.4
	I0914 17:50:57.722856   50262 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.24.4
	I0914 17:50:57.792956   50262 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19643-8806/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.24.4
	I0914 17:50:57.793060   50262 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19643-8806/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.3-0
	I0914 17:50:57.793073   50262 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.24.4
	I0914 17:50:57.793145   50262 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I0914 17:50:57.834380   50262 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.4
	I0914 17:50:57.834443   50262 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19643-8806/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.24.4
	I0914 17:50:57.834525   50262 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.24.4
	I0914 17:50:57.838327   50262 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19643-8806/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.8.6
	I0914 17:50:57.838356   50262 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.24.4 (exists)
	I0914 17:50:57.838366   50262 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.24.4
	I0914 17:50:57.838409   50262 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.24.4
	I0914 17:50:57.838418   50262 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.24.4 (exists)
	I0914 17:50:57.838426   50262 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0914 17:50:57.838463   50262 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.3-0 (exists)
	I0914 17:50:57.838576   50262 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19643-8806/.minikube/cache/images/amd64/registry.k8s.io/pause_3.7
	I0914 17:50:57.838657   50262 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0914 17:50:57.885792   50262 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19643-8806/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.24.4
	I0914 17:50:57.885833   50262 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.24.4 (exists)
	I0914 17:50:57.885884   50262 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.8.6 (exists)
	I0914 17:50:57.885941   50262 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I0914 17:50:58.208489   50262 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 17:51:00.512272   50262 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: (2.673587413s)
	I0914 17:51:00.512318   50262 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/pause_3.7 (exists)
	I0914 17:51:00.512321   50262 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.24.4: (2.673885678s)
	I0914 17:51:00.512334   50262 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.24.4: (2.626371666s)
	I0914 17:51:00.512341   50262 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19643-8806/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.24.4 from cache
	I0914 17:51:00.512356   50262 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.24.4 (exists)
	I0914 17:51:00.512359   50262 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (2.30384325s)
	I0914 17:51:00.512363   50262 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I0914 17:51:00.512416   50262 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I0914 17:51:01.257242   50262 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19643-8806/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.24.4 from cache
	I0914 17:51:01.257280   50262 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.24.4
	I0914 17:51:01.257326   50262 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.24.4
	I0914 17:51:02.007723   50262 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19643-8806/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.24.4 from cache
	I0914 17:51:02.007756   50262 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I0914 17:51:02.007808   50262 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.3-0
	I0914 17:51:04.154468   50262 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.3-0: (2.146635633s)
	I0914 17:51:04.154515   50262 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19643-8806/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.3-0 from cache
	I0914 17:51:04.154534   50262 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.24.4
	I0914 17:51:04.154587   50262 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.24.4
	I0914 17:51:04.597543   50262 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19643-8806/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.24.4 from cache
	I0914 17:51:04.597571   50262 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0914 17:51:04.597669   50262 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.8.6
	I0914 17:51:04.938654   50262 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19643-8806/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0914 17:51:04.938687   50262 crio.go:275] Loading image: /var/lib/minikube/images/pause_3.7
	I0914 17:51:04.938745   50262 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.7
	I0914 17:51:05.077694   50262 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19643-8806/.minikube/cache/images/amd64/registry.k8s.io/pause_3.7 from cache
	I0914 17:51:05.077744   50262 cache_images.go:123] Successfully loaded all cached images
	I0914 17:51:05.077752   50262 cache_images.go:92] duration metric: took 8.079655234s to LoadCachedImages
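Since the extracted tarball still did not contain the expected v1.24.4 images, each image was loaded the slow way: inspect, remove the stale tag, then podman-load the cached archive. A condensed sketch of that per-image cycle, using kube-proxy as the example (image name and cache path are the ones from the log):
	img=registry.k8s.io/kube-proxy:v1.24.4
	# a hash mismatch or missing image means it "needs transfer"
	sudo podman image inspect --format '{{.Id}}' "$img" || true
	sudo /usr/bin/crictl rmi "$img" || true
	sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.24.4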
	I0914 17:51:05.077765   50262 kubeadm.go:934] updating node { 192.168.39.71 8443 v1.24.4 crio true true} ...
	I0914 17:51:05.077881   50262 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=test-preload-829285 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.71
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.4 ClusterName:test-preload-829285 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0914 17:51:05.077981   50262 ssh_runner.go:195] Run: crio config
	I0914 17:51:05.127574   50262 cni.go:84] Creating CNI manager for ""
	I0914 17:51:05.127595   50262 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0914 17:51:05.127606   50262 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0914 17:51:05.127625   50262 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.71 APIServerPort:8443 KubernetesVersion:v1.24.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:test-preload-829285 NodeName:test-preload-829285 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.71"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.71 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPa
th:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0914 17:51:05.127812   50262 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.71
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "test-preload-829285"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.71
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.71"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0914 17:51:05.127882   50262 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.4
	I0914 17:51:05.138690   50262 binaries.go:44] Found k8s binaries, skipping transfer
	I0914 17:51:05.138763   50262 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0914 17:51:05.148811   50262 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I0914 17:51:05.165381   50262 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0914 17:51:05.181411   50262 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2103 bytes)
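The kubeadm config printed above is written to /var/tmp/minikube/kubeadm.yaml.new here and copied into place a few lines below. As a sketch only, one way to sanity-check such a file before the init phases run is kubeadm's standard dry-run mode; applying it to this exact file is an assumption, not something the log does:
	sudo /var/lib/minikube/binaries/v1.24.4/kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run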
	I0914 17:51:05.197793   50262 ssh_runner.go:195] Run: grep 192.168.39.71	control-plane.minikube.internal$ /etc/hosts
	I0914 17:51:05.201828   50262 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.71	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0914 17:51:05.214055   50262 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 17:51:05.330720   50262 ssh_runner.go:195] Run: sudo systemctl start kubelet
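After the drop-in and unit file are in place, kubelet is reloaded and started. A small sketch for confirming it came up on the guest (standard systemd/journalctl commands, not taken from the log):
	systemctl is-active kubelet
	sudo journalctl -u kubelet -n 20 --no-pager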
	I0914 17:51:05.347700   50262 certs.go:68] Setting up /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/test-preload-829285 for IP: 192.168.39.71
	I0914 17:51:05.347729   50262 certs.go:194] generating shared ca certs ...
	I0914 17:51:05.347755   50262 certs.go:226] acquiring lock for ca certs: {Name:mkb663a3180967f5f94f0c355b2cd55067394331 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 17:51:05.347941   50262 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19643-8806/.minikube/ca.key
	I0914 17:51:05.347998   50262 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19643-8806/.minikube/proxy-client-ca.key
	I0914 17:51:05.348012   50262 certs.go:256] generating profile certs ...
	I0914 17:51:05.348114   50262 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/test-preload-829285/client.key
	I0914 17:51:05.348173   50262 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/test-preload-829285/apiserver.key.7aad5c44
	I0914 17:51:05.348220   50262 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/test-preload-829285/proxy-client.key
	I0914 17:51:05.348333   50262 certs.go:484] found cert: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/16016.pem (1338 bytes)
	W0914 17:51:05.348375   50262 certs.go:480] ignoring /home/jenkins/minikube-integration/19643-8806/.minikube/certs/16016_empty.pem, impossibly tiny 0 bytes
	I0914 17:51:05.348386   50262 certs.go:484] found cert: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca-key.pem (1679 bytes)
	I0914 17:51:05.348408   50262 certs.go:484] found cert: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca.pem (1082 bytes)
	I0914 17:51:05.348434   50262 certs.go:484] found cert: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/cert.pem (1123 bytes)
	I0914 17:51:05.348456   50262 certs.go:484] found cert: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/key.pem (1675 bytes)
	I0914 17:51:05.348504   50262 certs.go:484] found cert: /home/jenkins/minikube-integration/19643-8806/.minikube/files/etc/ssl/certs/160162.pem (1708 bytes)
	I0914 17:51:05.349238   50262 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0914 17:51:05.380500   50262 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0914 17:51:05.427009   50262 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0914 17:51:05.458620   50262 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0914 17:51:05.483606   50262 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/test-preload-829285/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0914 17:51:05.508822   50262 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/test-preload-829285/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0914 17:51:05.539248   50262 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/test-preload-829285/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0914 17:51:05.569233   50262 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/test-preload-829285/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0914 17:51:05.592358   50262 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/certs/16016.pem --> /usr/share/ca-certificates/16016.pem (1338 bytes)
	I0914 17:51:05.614735   50262 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/files/etc/ssl/certs/160162.pem --> /usr/share/ca-certificates/160162.pem (1708 bytes)
	I0914 17:51:05.638210   50262 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0914 17:51:05.662508   50262 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0914 17:51:05.679172   50262 ssh_runner.go:195] Run: openssl version
	I0914 17:51:05.684722   50262 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16016.pem && ln -fs /usr/share/ca-certificates/16016.pem /etc/ssl/certs/16016.pem"
	I0914 17:51:05.694773   50262 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16016.pem
	I0914 17:51:05.698959   50262 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 14 17:01 /usr/share/ca-certificates/16016.pem
	I0914 17:51:05.699010   50262 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16016.pem
	I0914 17:51:05.704467   50262 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16016.pem /etc/ssl/certs/51391683.0"
	I0914 17:51:05.714284   50262 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/160162.pem && ln -fs /usr/share/ca-certificates/160162.pem /etc/ssl/certs/160162.pem"
	I0914 17:51:05.724352   50262 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/160162.pem
	I0914 17:51:05.728492   50262 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 14 17:01 /usr/share/ca-certificates/160162.pem
	I0914 17:51:05.728560   50262 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/160162.pem
	I0914 17:51:05.734077   50262 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/160162.pem /etc/ssl/certs/3ec20f2e.0"
	I0914 17:51:05.744231   50262 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0914 17:51:05.754447   50262 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0914 17:51:05.759251   50262 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 14 16:44 /usr/share/ca-certificates/minikubeCA.pem
	I0914 17:51:05.759323   50262 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0914 17:51:05.765098   50262 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
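The openssl/ln pairs above implement the usual OpenSSL hashed-directory layout: the subject hash of each CA (51391683, 3ec20f2e, b5213941 in this run) becomes the <hash>.0 symlink name under /etc/ssl/certs. A sketch of that idiom for one certificate, using the minikubeCA path from the log:
	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"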
	I0914 17:51:05.776537   50262 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0914 17:51:05.781325   50262 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0914 17:51:05.787272   50262 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0914 17:51:05.793134   50262 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0914 17:51:05.799107   50262 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0914 17:51:05.804934   50262 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0914 17:51:05.810974   50262 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
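The -checkend 86400 probes above are the standard openssl way of asking whether a certificate expires within the next day: exit status 0 means it is still valid past that window. A sketch using the apiserver cert copied earlier:
	if openssl x509 -noout -in /var/lib/minikube/certs/apiserver.crt -checkend 86400; then
	    echo "apiserver cert valid for at least another 24h"
	else
	    echo "apiserver cert expires within 24h (or is unreadable)"
	fi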
	I0914 17:51:05.816768   50262 kubeadm.go:392] StartCluster: {Name:test-preload-829285 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19643/minikube-v1.34.0-1726281733-19643-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726281268-19643@sha256:cce8be0c1ac4e3d852132008ef1cc1dcf5b79f708d025db83f146ae65db32e8e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.
24.4 ClusterName:test-preload-829285 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.71 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker Mou
ntIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0914 17:51:05.816849   50262 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0914 17:51:05.816909   50262 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0914 17:51:05.853437   50262 cri.go:89] found id: ""
	I0914 17:51:05.853504   50262 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0914 17:51:05.863553   50262 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0914 17:51:05.863579   50262 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0914 17:51:05.863621   50262 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0914 17:51:05.873440   50262 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0914 17:51:05.873888   50262 kubeconfig.go:47] verify endpoint returned: get endpoint: "test-preload-829285" does not appear in /home/jenkins/minikube-integration/19643-8806/kubeconfig
	I0914 17:51:05.874012   50262 kubeconfig.go:62] /home/jenkins/minikube-integration/19643-8806/kubeconfig needs updating (will repair): [kubeconfig missing "test-preload-829285" cluster setting kubeconfig missing "test-preload-829285" context setting]
	I0914 17:51:05.874344   50262 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19643-8806/kubeconfig: {Name:mk850f3e2f3c2ef1e81c795ae72e22e459e27140 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 17:51:05.874929   50262 kapi.go:59] client config for test-preload-829285: &rest.Config{Host:"https://192.168.39.71:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19643-8806/.minikube/profiles/test-preload-829285/client.crt", KeyFile:"/home/jenkins/minikube-integration/19643-8806/.minikube/profiles/test-preload-829285/client.key", CAFile:"/home/jenkins/minikube-integration/19643-8806/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil)
, NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f6fca0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0914 17:51:05.875589   50262 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0914 17:51:05.885099   50262 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.71
	I0914 17:51:05.885149   50262 kubeadm.go:1160] stopping kube-system containers ...
	I0914 17:51:05.885183   50262 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0914 17:51:05.885260   50262 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0914 17:51:05.919758   50262 cri.go:89] found id: ""
	I0914 17:51:05.919818   50262 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0914 17:51:05.935683   50262 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0914 17:51:05.946256   50262 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0914 17:51:05.946277   50262 kubeadm.go:157] found existing configuration files:
	
	I0914 17:51:05.946322   50262 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0914 17:51:05.957044   50262 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0914 17:51:05.957117   50262 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0914 17:51:05.967362   50262 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0914 17:51:05.976913   50262 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0914 17:51:05.976983   50262 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0914 17:51:05.986261   50262 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0914 17:51:05.995132   50262 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0914 17:51:05.995215   50262 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0914 17:51:06.004718   50262 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0914 17:51:06.013600   50262 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0914 17:51:06.013660   50262 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
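The grep/rm cycles above all apply the same rule: keep an existing kubeconfig only if it already points at control-plane.minikube.internal:8443, otherwise delete it so the kubeadm phases below regenerate it. A compact sketch of that per-file rule (file list taken from the log):
	for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	    sudo grep -q 'https://control-plane.minikube.internal:8443' "/etc/kubernetes/$f" \
	        || sudo rm -f "/etc/kubernetes/$f"
	done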
	I0914 17:51:06.022500   50262 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0914 17:51:06.031733   50262 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 17:51:06.131457   50262 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 17:51:06.742548   50262 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0914 17:51:07.002804   50262 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 17:51:07.077141   50262 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0914 17:51:07.168560   50262 api_server.go:52] waiting for apiserver process to appear ...
	I0914 17:51:07.168674   50262 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 17:51:07.668993   50262 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 17:51:08.169063   50262 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 17:51:08.187968   50262 api_server.go:72] duration metric: took 1.019409233s to wait for apiserver process to appear ...
	I0914 17:51:08.187994   50262 api_server.go:88] waiting for apiserver healthz status ...
	I0914 17:51:08.188017   50262 api_server.go:253] Checking apiserver healthz at https://192.168.39.71:8443/healthz ...
	I0914 17:51:08.188488   50262 api_server.go:269] stopped: https://192.168.39.71:8443/healthz: Get "https://192.168.39.71:8443/healthz": dial tcp 192.168.39.71:8443: connect: connection refused
	I0914 17:51:08.688118   50262 api_server.go:253] Checking apiserver healthz at https://192.168.39.71:8443/healthz ...
	I0914 17:51:12.481740   50262 api_server.go:279] https://192.168.39.71:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0914 17:51:12.481771   50262 api_server.go:103] status: https://192.168.39.71:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0914 17:51:12.481783   50262 api_server.go:253] Checking apiserver healthz at https://192.168.39.71:8443/healthz ...
	I0914 17:51:12.490868   50262 api_server.go:279] https://192.168.39.71:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0914 17:51:12.490899   50262 api_server.go:103] status: https://192.168.39.71:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0914 17:51:12.688173   50262 api_server.go:253] Checking apiserver healthz at https://192.168.39.71:8443/healthz ...
	I0914 17:51:12.693491   50262 api_server.go:279] https://192.168.39.71:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0914 17:51:12.693518   50262 api_server.go:103] status: https://192.168.39.71:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0914 17:51:13.189151   50262 api_server.go:253] Checking apiserver healthz at https://192.168.39.71:8443/healthz ...
	I0914 17:51:13.196694   50262 api_server.go:279] https://192.168.39.71:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0914 17:51:13.196720   50262 api_server.go:103] status: https://192.168.39.71:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0914 17:51:13.688243   50262 api_server.go:253] Checking apiserver healthz at https://192.168.39.71:8443/healthz ...
	I0914 17:51:13.695308   50262 api_server.go:279] https://192.168.39.71:8443/healthz returned 200:
	ok
	I0914 17:51:13.708447   50262 api_server.go:141] control plane version: v1.24.4
	I0914 17:51:13.708484   50262 api_server.go:131] duration metric: took 5.520482506s to wait for apiserver health ...
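The earlier 403 responses were expected: an unauthenticated ("system:anonymous") request may not read /healthz until RBAC is bootstrapped, and the 500s only reflect post-start hooks (rbac/bootstrap-roles, scheduling priority classes) that had not finished. A hedged sketch of the same probe with curl, authenticating with the profile's client cert (cert and key paths are the ones shown in the kapi client config above; running this from the build host is an assumption):
	# anonymous: expect 403 until RBAC allows it
	curl -sk https://192.168.39.71:8443/healthz
	# authenticated: should print "ok" once the post-start hooks complete
	curl -s --cacert /home/jenkins/minikube-integration/19643-8806/.minikube/ca.crt \
	     --cert /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/test-preload-829285/client.crt \
	     --key  /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/test-preload-829285/client.key \
	     https://192.168.39.71:8443/healthz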
	I0914 17:51:13.708495   50262 cni.go:84] Creating CNI manager for ""
	I0914 17:51:13.708503   50262 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0914 17:51:13.710444   50262 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0914 17:51:13.711514   50262 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0914 17:51:13.721170   50262 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0914 17:51:13.753463   50262 system_pods.go:43] waiting for kube-system pods to appear ...
	I0914 17:51:13.753555   50262 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0914 17:51:13.753578   50262 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0914 17:51:13.770055   50262 system_pods.go:59] 7 kube-system pods found
	I0914 17:51:13.770092   50262 system_pods.go:61] "coredns-6d4b75cb6d-jh47k" [efccf469-107d-4db2-8fef-2d64fdaafe35] Running
	I0914 17:51:13.770098   50262 system_pods.go:61] "etcd-test-preload-829285" [d6eec8fa-f272-4ed1-b845-d4e8085c7f7e] Running
	I0914 17:51:13.770105   50262 system_pods.go:61] "kube-apiserver-test-preload-829285" [31e346b4-3246-414e-a424-6f1da20e9e30] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0914 17:51:13.770111   50262 system_pods.go:61] "kube-controller-manager-test-preload-829285" [46b7e24a-265a-4bd4-9e3a-60a77d1fdc07] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0914 17:51:13.770120   50262 system_pods.go:61] "kube-proxy-szrwb" [59bb34a9-f9c7-4dd3-a490-4d8454f7d34a] Running
	I0914 17:51:13.770128   50262 system_pods.go:61] "kube-scheduler-test-preload-829285" [2737a10a-22ad-4aed-847a-ac7b09059431] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0914 17:51:13.770133   50262 system_pods.go:61] "storage-provisioner" [5a7bf799-59b1-47c9-87d9-ec1407b80dd6] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0914 17:51:13.770141   50262 system_pods.go:74] duration metric: took 16.651929ms to wait for pod list to return data ...
	I0914 17:51:13.770152   50262 node_conditions.go:102] verifying NodePressure condition ...
	I0914 17:51:13.777504   50262 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0914 17:51:13.777538   50262 node_conditions.go:123] node cpu capacity is 2
	I0914 17:51:13.777552   50262 node_conditions.go:105] duration metric: took 7.395645ms to run NodePressure ...
	I0914 17:51:13.777586   50262 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 17:51:14.022418   50262 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0914 17:51:14.026902   50262 kubeadm.go:739] kubelet initialised
	I0914 17:51:14.026928   50262 kubeadm.go:740] duration metric: took 4.478182ms waiting for restarted kubelet to initialise ...
	I0914 17:51:14.026935   50262 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 17:51:14.031599   50262 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6d4b75cb6d-jh47k" in "kube-system" namespace to be "Ready" ...
	I0914 17:51:14.036125   50262 pod_ready.go:98] node "test-preload-829285" hosting pod "coredns-6d4b75cb6d-jh47k" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-829285" has status "Ready":"False"
	I0914 17:51:14.036148   50262 pod_ready.go:82] duration metric: took 4.526778ms for pod "coredns-6d4b75cb6d-jh47k" in "kube-system" namespace to be "Ready" ...
	E0914 17:51:14.036156   50262 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-829285" hosting pod "coredns-6d4b75cb6d-jh47k" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-829285" has status "Ready":"False"
	I0914 17:51:14.036163   50262 pod_ready.go:79] waiting up to 4m0s for pod "etcd-test-preload-829285" in "kube-system" namespace to be "Ready" ...
	I0914 17:51:14.040087   50262 pod_ready.go:98] node "test-preload-829285" hosting pod "etcd-test-preload-829285" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-829285" has status "Ready":"False"
	I0914 17:51:14.040110   50262 pod_ready.go:82] duration metric: took 3.939821ms for pod "etcd-test-preload-829285" in "kube-system" namespace to be "Ready" ...
	E0914 17:51:14.040119   50262 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-829285" hosting pod "etcd-test-preload-829285" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-829285" has status "Ready":"False"
	I0914 17:51:14.040125   50262 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-test-preload-829285" in "kube-system" namespace to be "Ready" ...
	I0914 17:51:14.044114   50262 pod_ready.go:98] node "test-preload-829285" hosting pod "kube-apiserver-test-preload-829285" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-829285" has status "Ready":"False"
	I0914 17:51:14.044137   50262 pod_ready.go:82] duration metric: took 4.002975ms for pod "kube-apiserver-test-preload-829285" in "kube-system" namespace to be "Ready" ...
	E0914 17:51:14.044145   50262 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-829285" hosting pod "kube-apiserver-test-preload-829285" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-829285" has status "Ready":"False"
	I0914 17:51:14.044151   50262 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-test-preload-829285" in "kube-system" namespace to be "Ready" ...
	I0914 17:51:14.157075   50262 pod_ready.go:98] node "test-preload-829285" hosting pod "kube-controller-manager-test-preload-829285" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-829285" has status "Ready":"False"
	I0914 17:51:14.157104   50262 pod_ready.go:82] duration metric: took 112.94476ms for pod "kube-controller-manager-test-preload-829285" in "kube-system" namespace to be "Ready" ...
	E0914 17:51:14.157115   50262 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-829285" hosting pod "kube-controller-manager-test-preload-829285" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-829285" has status "Ready":"False"
	I0914 17:51:14.157121   50262 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-szrwb" in "kube-system" namespace to be "Ready" ...
	I0914 17:51:14.557292   50262 pod_ready.go:98] node "test-preload-829285" hosting pod "kube-proxy-szrwb" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-829285" has status "Ready":"False"
	I0914 17:51:14.557333   50262 pod_ready.go:82] duration metric: took 400.203382ms for pod "kube-proxy-szrwb" in "kube-system" namespace to be "Ready" ...
	E0914 17:51:14.557343   50262 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-829285" hosting pod "kube-proxy-szrwb" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-829285" has status "Ready":"False"
	I0914 17:51:14.557350   50262 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-test-preload-829285" in "kube-system" namespace to be "Ready" ...
	I0914 17:51:14.958577   50262 pod_ready.go:98] node "test-preload-829285" hosting pod "kube-scheduler-test-preload-829285" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-829285" has status "Ready":"False"
	I0914 17:51:14.958605   50262 pod_ready.go:82] duration metric: took 401.24929ms for pod "kube-scheduler-test-preload-829285" in "kube-system" namespace to be "Ready" ...
	E0914 17:51:14.958615   50262 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-829285" hosting pod "kube-scheduler-test-preload-829285" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-829285" has status "Ready":"False"
	I0914 17:51:14.958621   50262 pod_ready.go:39] duration metric: took 931.678606ms for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
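
The readiness gates logged above (node Ready condition first, then each system-critical pod) can be checked by hand against the same cluster. A minimal sketch, assuming kubectl is pointed at the test-preload-829285 profile's kubeconfig; not part of the test run itself:

    # Node Ready condition, the gate that made each per-pod wait report "(skipping!)"
    kubectl get node test-preload-829285 \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
    # Ready condition of one of the system-critical pods being waited on
    kubectl -n kube-system get pod coredns-6d4b75cb6d-jh47k \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
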
	I0914 17:51:14.958637   50262 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0914 17:51:14.970934   50262 ops.go:34] apiserver oom_adj: -16
	I0914 17:51:14.970963   50262 kubeadm.go:597] duration metric: took 9.107377046s to restartPrimaryControlPlane
	I0914 17:51:14.970978   50262 kubeadm.go:394] duration metric: took 9.154217796s to StartCluster
	I0914 17:51:14.970998   50262 settings.go:142] acquiring lock: {Name:mkee31cd57c3a91eff581c48ba7961460fb3e0b1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 17:51:14.971096   50262 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19643-8806/kubeconfig
	I0914 17:51:14.971736   50262 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19643-8806/kubeconfig: {Name:mk850f3e2f3c2ef1e81c795ae72e22e459e27140 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 17:51:14.972020   50262 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.71 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0914 17:51:14.972156   50262 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0914 17:51:14.972235   50262 config.go:182] Loaded profile config "test-preload-829285": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I0914 17:51:14.972262   50262 addons.go:69] Setting storage-provisioner=true in profile "test-preload-829285"
	I0914 17:51:14.972284   50262 addons.go:234] Setting addon storage-provisioner=true in "test-preload-829285"
	I0914 17:51:14.972287   50262 addons.go:69] Setting default-storageclass=true in profile "test-preload-829285"
	W0914 17:51:14.972297   50262 addons.go:243] addon storage-provisioner should already be in state true
	I0914 17:51:14.972306   50262 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "test-preload-829285"
	I0914 17:51:14.972344   50262 host.go:66] Checking if "test-preload-829285" exists ...
	I0914 17:51:14.972629   50262 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 17:51:14.972675   50262 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 17:51:14.972708   50262 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 17:51:14.972760   50262 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 17:51:14.973744   50262 out.go:177] * Verifying Kubernetes components...
	I0914 17:51:14.975018   50262 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 17:51:14.988639   50262 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41101
	I0914 17:51:14.988677   50262 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36175
	I0914 17:51:14.989132   50262 main.go:141] libmachine: () Calling .GetVersion
	I0914 17:51:14.989140   50262 main.go:141] libmachine: () Calling .GetVersion
	I0914 17:51:14.989636   50262 main.go:141] libmachine: Using API Version  1
	I0914 17:51:14.989649   50262 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 17:51:14.989689   50262 main.go:141] libmachine: Using API Version  1
	I0914 17:51:14.989715   50262 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 17:51:14.990010   50262 main.go:141] libmachine: () Calling .GetMachineName
	I0914 17:51:14.990065   50262 main.go:141] libmachine: () Calling .GetMachineName
	I0914 17:51:14.990321   50262 main.go:141] libmachine: (test-preload-829285) Calling .GetState
	I0914 17:51:14.990536   50262 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 17:51:14.990571   50262 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 17:51:14.992815   50262 kapi.go:59] client config for test-preload-829285: &rest.Config{Host:"https://192.168.39.71:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19643-8806/.minikube/profiles/test-preload-829285/client.crt", KeyFile:"/home/jenkins/minikube-integration/19643-8806/.minikube/profiles/test-preload-829285/client.key", CAFile:"/home/jenkins/minikube-integration/19643-8806/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil)
, NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f6fca0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
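
The rest.Config above authenticates with the profile's client certificate and key against the cluster CA. The same TLS client setup can be exercised directly with curl; a sketch using the certificate paths and endpoint taken from the config above:

    curl --cacert /home/jenkins/minikube-integration/19643-8806/.minikube/ca.crt \
         --cert /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/test-preload-829285/client.crt \
         --key /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/test-preload-829285/client.key \
         https://192.168.39.71:8443/version
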
	I0914 17:51:14.993154   50262 addons.go:234] Setting addon default-storageclass=true in "test-preload-829285"
	W0914 17:51:14.993175   50262 addons.go:243] addon default-storageclass should already be in state true
	I0914 17:51:14.993221   50262 host.go:66] Checking if "test-preload-829285" exists ...
	I0914 17:51:14.993629   50262 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 17:51:14.993683   50262 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 17:51:15.006517   50262 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41445
	I0914 17:51:15.007134   50262 main.go:141] libmachine: () Calling .GetVersion
	I0914 17:51:15.007723   50262 main.go:141] libmachine: Using API Version  1
	I0914 17:51:15.007751   50262 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 17:51:15.008141   50262 main.go:141] libmachine: () Calling .GetMachineName
	I0914 17:51:15.008358   50262 main.go:141] libmachine: (test-preload-829285) Calling .GetState
	I0914 17:51:15.010507   50262 main.go:141] libmachine: (test-preload-829285) Calling .DriverName
	I0914 17:51:15.012541   50262 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 17:51:15.013468   50262 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37191
	I0914 17:51:15.013919   50262 main.go:141] libmachine: () Calling .GetVersion
	I0914 17:51:15.014132   50262 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0914 17:51:15.014146   50262 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0914 17:51:15.014179   50262 main.go:141] libmachine: (test-preload-829285) Calling .GetSSHHostname
	I0914 17:51:15.014445   50262 main.go:141] libmachine: Using API Version  1
	I0914 17:51:15.014466   50262 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 17:51:15.014927   50262 main.go:141] libmachine: () Calling .GetMachineName
	I0914 17:51:15.015529   50262 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 17:51:15.015578   50262 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 17:51:15.017377   50262 main.go:141] libmachine: (test-preload-829285) DBG | domain test-preload-829285 has defined MAC address 52:54:00:68:24:73 in network mk-test-preload-829285
	I0914 17:51:15.017925   50262 main.go:141] libmachine: (test-preload-829285) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:24:73", ip: ""} in network mk-test-preload-829285: {Iface:virbr1 ExpiryTime:2024-09-14 18:50:42 +0000 UTC Type:0 Mac:52:54:00:68:24:73 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:test-preload-829285 Clientid:01:52:54:00:68:24:73}
	I0914 17:51:15.017949   50262 main.go:141] libmachine: (test-preload-829285) DBG | domain test-preload-829285 has defined IP address 192.168.39.71 and MAC address 52:54:00:68:24:73 in network mk-test-preload-829285
	I0914 17:51:15.018100   50262 main.go:141] libmachine: (test-preload-829285) Calling .GetSSHPort
	I0914 17:51:15.018293   50262 main.go:141] libmachine: (test-preload-829285) Calling .GetSSHKeyPath
	I0914 17:51:15.018447   50262 main.go:141] libmachine: (test-preload-829285) Calling .GetSSHUsername
	I0914 17:51:15.018646   50262 sshutil.go:53] new ssh client: &{IP:192.168.39.71 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/test-preload-829285/id_rsa Username:docker}
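
The ssh client created above is what sshutil uses to copy the addon manifests onto the node. An equivalent manual session, illustrative only and using the key path and address from the log, would be:

    ssh -i /home/jenkins/minikube-integration/19643-8806/.minikube/machines/test-preload-829285/id_rsa \
        docker@192.168.39.71
    # or let minikube resolve the profile's key and address itself:
    minikube -p test-preload-829285 ssh
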
	I0914 17:51:15.061414   50262 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36287
	I0914 17:51:15.061874   50262 main.go:141] libmachine: () Calling .GetVersion
	I0914 17:51:15.062466   50262 main.go:141] libmachine: Using API Version  1
	I0914 17:51:15.062492   50262 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 17:51:15.062859   50262 main.go:141] libmachine: () Calling .GetMachineName
	I0914 17:51:15.063059   50262 main.go:141] libmachine: (test-preload-829285) Calling .GetState
	I0914 17:51:15.065048   50262 main.go:141] libmachine: (test-preload-829285) Calling .DriverName
	I0914 17:51:15.065326   50262 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0914 17:51:15.065343   50262 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0914 17:51:15.065371   50262 main.go:141] libmachine: (test-preload-829285) Calling .GetSSHHostname
	I0914 17:51:15.068832   50262 main.go:141] libmachine: (test-preload-829285) DBG | domain test-preload-829285 has defined MAC address 52:54:00:68:24:73 in network mk-test-preload-829285
	I0914 17:51:15.069308   50262 main.go:141] libmachine: (test-preload-829285) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:24:73", ip: ""} in network mk-test-preload-829285: {Iface:virbr1 ExpiryTime:2024-09-14 18:50:42 +0000 UTC Type:0 Mac:52:54:00:68:24:73 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:test-preload-829285 Clientid:01:52:54:00:68:24:73}
	I0914 17:51:15.069331   50262 main.go:141] libmachine: (test-preload-829285) DBG | domain test-preload-829285 has defined IP address 192.168.39.71 and MAC address 52:54:00:68:24:73 in network mk-test-preload-829285
	I0914 17:51:15.069632   50262 main.go:141] libmachine: (test-preload-829285) Calling .GetSSHPort
	I0914 17:51:15.069841   50262 main.go:141] libmachine: (test-preload-829285) Calling .GetSSHKeyPath
	I0914 17:51:15.070065   50262 main.go:141] libmachine: (test-preload-829285) Calling .GetSSHUsername
	I0914 17:51:15.070265   50262 sshutil.go:53] new ssh client: &{IP:192.168.39.71 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/test-preload-829285/id_rsa Username:docker}
	I0914 17:51:15.172993   50262 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0914 17:51:15.195265   50262 node_ready.go:35] waiting up to 6m0s for node "test-preload-829285" to be "Ready" ...
	I0914 17:51:15.315462   50262 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0914 17:51:15.321757   50262 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0914 17:51:16.424858   50262 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.103055191s)
	I0914 17:51:16.424915   50262 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.109415063s)
	I0914 17:51:16.424939   50262 main.go:141] libmachine: Making call to close driver server
	I0914 17:51:16.424953   50262 main.go:141] libmachine: (test-preload-829285) Calling .Close
	I0914 17:51:16.424957   50262 main.go:141] libmachine: Making call to close driver server
	I0914 17:51:16.424970   50262 main.go:141] libmachine: (test-preload-829285) Calling .Close
	I0914 17:51:16.425238   50262 main.go:141] libmachine: Successfully made call to close driver server
	I0914 17:51:16.425254   50262 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 17:51:16.425262   50262 main.go:141] libmachine: Making call to close driver server
	I0914 17:51:16.425270   50262 main.go:141] libmachine: (test-preload-829285) Calling .Close
	I0914 17:51:16.425310   50262 main.go:141] libmachine: (test-preload-829285) DBG | Closing plugin on server side
	I0914 17:51:16.425326   50262 main.go:141] libmachine: Successfully made call to close driver server
	I0914 17:51:16.425336   50262 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 17:51:16.425344   50262 main.go:141] libmachine: Making call to close driver server
	I0914 17:51:16.425351   50262 main.go:141] libmachine: (test-preload-829285) Calling .Close
	I0914 17:51:16.425475   50262 main.go:141] libmachine: Successfully made call to close driver server
	I0914 17:51:16.425488   50262 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 17:51:16.426287   50262 main.go:141] libmachine: (test-preload-829285) DBG | Closing plugin on server side
	I0914 17:51:16.426284   50262 main.go:141] libmachine: Successfully made call to close driver server
	I0914 17:51:16.426307   50262 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 17:51:16.435085   50262 main.go:141] libmachine: Making call to close driver server
	I0914 17:51:16.435122   50262 main.go:141] libmachine: (test-preload-829285) Calling .Close
	I0914 17:51:16.435421   50262 main.go:141] libmachine: Successfully made call to close driver server
	I0914 17:51:16.435439   50262 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 17:51:16.435461   50262 main.go:141] libmachine: (test-preload-829285) DBG | Closing plugin on server side
	I0914 17:51:16.437391   50262 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0914 17:51:16.438495   50262 addons.go:510] duration metric: took 1.46635215s for enable addons: enabled=[storage-provisioner default-storageclass]
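
Whether the two enabled addons actually landed can be verified from the objects they create. A sketch, assuming the same kubeconfig as above:

    kubectl get storageclass                               # default-storageclass
    kubectl -n kube-system get pod storage-provisioner     # storage-provisioner
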
	I0914 17:51:17.202726   50262 node_ready.go:53] node "test-preload-829285" has status "Ready":"False"
	I0914 17:51:19.699041   50262 node_ready.go:53] node "test-preload-829285" has status "Ready":"False"
	I0914 17:51:21.699075   50262 node_ready.go:53] node "test-preload-829285" has status "Ready":"False"
	I0914 17:51:22.698683   50262 node_ready.go:49] node "test-preload-829285" has status "Ready":"True"
	I0914 17:51:22.698705   50262 node_ready.go:38] duration metric: took 7.503400714s for node "test-preload-829285" to be "Ready" ...
	I0914 17:51:22.698714   50262 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 17:51:22.703988   50262 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6d4b75cb6d-jh47k" in "kube-system" namespace to be "Ready" ...
	I0914 17:51:22.709629   50262 pod_ready.go:93] pod "coredns-6d4b75cb6d-jh47k" in "kube-system" namespace has status "Ready":"True"
	I0914 17:51:22.709650   50262 pod_ready.go:82] duration metric: took 5.635478ms for pod "coredns-6d4b75cb6d-jh47k" in "kube-system" namespace to be "Ready" ...
	I0914 17:51:22.709659   50262 pod_ready.go:79] waiting up to 6m0s for pod "etcd-test-preload-829285" in "kube-system" namespace to be "Ready" ...
	I0914 17:51:24.716852   50262 pod_ready.go:103] pod "etcd-test-preload-829285" in "kube-system" namespace has status "Ready":"False"
	I0914 17:51:25.716927   50262 pod_ready.go:93] pod "etcd-test-preload-829285" in "kube-system" namespace has status "Ready":"True"
	I0914 17:51:25.716949   50262 pod_ready.go:82] duration metric: took 3.007283789s for pod "etcd-test-preload-829285" in "kube-system" namespace to be "Ready" ...
	I0914 17:51:25.716958   50262 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-test-preload-829285" in "kube-system" namespace to be "Ready" ...
	I0914 17:51:25.722636   50262 pod_ready.go:93] pod "kube-apiserver-test-preload-829285" in "kube-system" namespace has status "Ready":"True"
	I0914 17:51:25.722658   50262 pod_ready.go:82] duration metric: took 5.693954ms for pod "kube-apiserver-test-preload-829285" in "kube-system" namespace to be "Ready" ...
	I0914 17:51:25.722668   50262 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-test-preload-829285" in "kube-system" namespace to be "Ready" ...
	I0914 17:51:25.728069   50262 pod_ready.go:93] pod "kube-controller-manager-test-preload-829285" in "kube-system" namespace has status "Ready":"True"
	I0914 17:51:25.728089   50262 pod_ready.go:82] duration metric: took 5.415588ms for pod "kube-controller-manager-test-preload-829285" in "kube-system" namespace to be "Ready" ...
	I0914 17:51:25.728097   50262 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-szrwb" in "kube-system" namespace to be "Ready" ...
	I0914 17:51:25.732295   50262 pod_ready.go:93] pod "kube-proxy-szrwb" in "kube-system" namespace has status "Ready":"True"
	I0914 17:51:25.732316   50262 pod_ready.go:82] duration metric: took 4.21309ms for pod "kube-proxy-szrwb" in "kube-system" namespace to be "Ready" ...
	I0914 17:51:25.732324   50262 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-test-preload-829285" in "kube-system" namespace to be "Ready" ...
	I0914 17:51:25.899790   50262 pod_ready.go:93] pod "kube-scheduler-test-preload-829285" in "kube-system" namespace has status "Ready":"True"
	I0914 17:51:25.899816   50262 pod_ready.go:82] duration metric: took 167.485875ms for pod "kube-scheduler-test-preload-829285" in "kube-system" namespace to be "Ready" ...
	I0914 17:51:25.899827   50262 pod_ready.go:39] duration metric: took 3.20110378s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 17:51:25.899839   50262 api_server.go:52] waiting for apiserver process to appear ...
	I0914 17:51:25.899897   50262 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 17:51:25.916386   50262 api_server.go:72] duration metric: took 10.944322715s to wait for apiserver process to appear ...
	I0914 17:51:25.916420   50262 api_server.go:88] waiting for apiserver healthz status ...
	I0914 17:51:25.916454   50262 api_server.go:253] Checking apiserver healthz at https://192.168.39.71:8443/healthz ...
	I0914 17:51:25.926394   50262 api_server.go:279] https://192.168.39.71:8443/healthz returned 200:
	ok
	I0914 17:51:25.927421   50262 api_server.go:141] control plane version: v1.24.4
	I0914 17:51:25.927441   50262 api_server.go:131] duration metric: took 11.015328ms to wait for apiserver health ...
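
The healthz probe logged above is a plain HTTPS GET against the control-plane endpoint. A sketch of the same probe (-k skips certificate verification for brevity):

    curl -k https://192.168.39.71:8443/healthz
    # body on success: ok
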
	I0914 17:51:25.927449   50262 system_pods.go:43] waiting for kube-system pods to appear ...
	I0914 17:51:26.101321   50262 system_pods.go:59] 7 kube-system pods found
	I0914 17:51:26.101351   50262 system_pods.go:61] "coredns-6d4b75cb6d-jh47k" [efccf469-107d-4db2-8fef-2d64fdaafe35] Running
	I0914 17:51:26.101360   50262 system_pods.go:61] "etcd-test-preload-829285" [d6eec8fa-f272-4ed1-b845-d4e8085c7f7e] Running
	I0914 17:51:26.101364   50262 system_pods.go:61] "kube-apiserver-test-preload-829285" [31e346b4-3246-414e-a424-6f1da20e9e30] Running
	I0914 17:51:26.101368   50262 system_pods.go:61] "kube-controller-manager-test-preload-829285" [46b7e24a-265a-4bd4-9e3a-60a77d1fdc07] Running
	I0914 17:51:26.101370   50262 system_pods.go:61] "kube-proxy-szrwb" [59bb34a9-f9c7-4dd3-a490-4d8454f7d34a] Running
	I0914 17:51:26.101373   50262 system_pods.go:61] "kube-scheduler-test-preload-829285" [2737a10a-22ad-4aed-847a-ac7b09059431] Running
	I0914 17:51:26.101376   50262 system_pods.go:61] "storage-provisioner" [5a7bf799-59b1-47c9-87d9-ec1407b80dd6] Running
	I0914 17:51:26.101382   50262 system_pods.go:74] duration metric: took 173.928304ms to wait for pod list to return data ...
	I0914 17:51:26.101388   50262 default_sa.go:34] waiting for default service account to be created ...
	I0914 17:51:26.298470   50262 default_sa.go:45] found service account: "default"
	I0914 17:51:26.298495   50262 default_sa.go:55] duration metric: took 197.101879ms for default service account to be created ...
	I0914 17:51:26.298503   50262 system_pods.go:116] waiting for k8s-apps to be running ...
	I0914 17:51:26.501372   50262 system_pods.go:86] 7 kube-system pods found
	I0914 17:51:26.501407   50262 system_pods.go:89] "coredns-6d4b75cb6d-jh47k" [efccf469-107d-4db2-8fef-2d64fdaafe35] Running
	I0914 17:51:26.501414   50262 system_pods.go:89] "etcd-test-preload-829285" [d6eec8fa-f272-4ed1-b845-d4e8085c7f7e] Running
	I0914 17:51:26.501418   50262 system_pods.go:89] "kube-apiserver-test-preload-829285" [31e346b4-3246-414e-a424-6f1da20e9e30] Running
	I0914 17:51:26.501422   50262 system_pods.go:89] "kube-controller-manager-test-preload-829285" [46b7e24a-265a-4bd4-9e3a-60a77d1fdc07] Running
	I0914 17:51:26.501425   50262 system_pods.go:89] "kube-proxy-szrwb" [59bb34a9-f9c7-4dd3-a490-4d8454f7d34a] Running
	I0914 17:51:26.501429   50262 system_pods.go:89] "kube-scheduler-test-preload-829285" [2737a10a-22ad-4aed-847a-ac7b09059431] Running
	I0914 17:51:26.501432   50262 system_pods.go:89] "storage-provisioner" [5a7bf799-59b1-47c9-87d9-ec1407b80dd6] Running
	I0914 17:51:26.501440   50262 system_pods.go:126] duration metric: took 202.93105ms to wait for k8s-apps to be running ...
	I0914 17:51:26.501447   50262 system_svc.go:44] waiting for kubelet service to be running ....
	I0914 17:51:26.501493   50262 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 17:51:26.516372   50262 system_svc.go:56] duration metric: took 14.917482ms WaitForService to wait for kubelet
	I0914 17:51:26.516403   50262 kubeadm.go:582] duration metric: took 11.544345997s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0914 17:51:26.516424   50262 node_conditions.go:102] verifying NodePressure condition ...
	I0914 17:51:26.698764   50262 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0914 17:51:26.698787   50262 node_conditions.go:123] node cpu capacity is 2
	I0914 17:51:26.698797   50262 node_conditions.go:105] duration metric: took 182.36868ms to run NodePressure ...
	I0914 17:51:26.698807   50262 start.go:241] waiting for startup goroutines ...
	I0914 17:51:26.698814   50262 start.go:246] waiting for cluster config update ...
	I0914 17:51:26.698825   50262 start.go:255] writing updated cluster config ...
	I0914 17:51:26.699082   50262 ssh_runner.go:195] Run: rm -f paused
	I0914 17:51:26.747090   50262 start.go:600] kubectl: 1.31.0, cluster: 1.24.4 (minor skew: 7)
	I0914 17:51:26.749057   50262 out.go:201] 
	W0914 17:51:26.750274   50262 out.go:270] ! /usr/local/bin/kubectl is version 1.31.0, which may have incompatibilities with Kubernetes 1.24.4.
	I0914 17:51:26.751473   50262 out.go:177]   - Want kubectl v1.24.4? Try 'minikube kubectl -- get pods -A'
	I0914 17:51:26.752656   50262 out.go:177] * Done! kubectl is now configured to use "test-preload-829285" cluster and "default" namespace by default
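
The skew warning above points at minikube's bundled, version-matched kubectl. Invoking it for this profile, per the log's own suggestion, would look like:

    minikube -p test-preload-829285 kubectl -- get pods -A
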
	
	
	==> CRI-O <==
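
The entries below are CRI-O's debug-level gRPC traces for the CRI Version, ImageFsInfo and ListContainers calls made by the kubelet and by log collection. The same endpoints can be queried on the node with crictl; an illustrative sketch, run inside the VM:

    sudo crictl version
    sudo crictl imagefsinfo
    sudo crictl ps -a
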
	Sep 14 17:51:27 test-preload-829285 crio[656]: time="2024-09-14 17:51:27.605817303Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726336287605785228,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=123aaa3e-38be-4a83-916d-4a0c1c3d6f25 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 14 17:51:27 test-preload-829285 crio[656]: time="2024-09-14 17:51:27.606438224Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5df3b8b5-9d50-4fda-b070-072d25761ebe name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 17:51:27 test-preload-829285 crio[656]: time="2024-09-14 17:51:27.606517595Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5df3b8b5-9d50-4fda-b070-072d25761ebe name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 17:51:27 test-preload-829285 crio[656]: time="2024-09-14 17:51:27.606707098Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:73b742fec68b8755b3197502241e879f4bdf9b1d63307451c9a50ccb2111c3a0,PodSandboxId:33607644a451e49637f8e6fd72f32b54e5581b53b5c7c2beaf82ef0c18cc0d16,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1726336281390598280,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-jh47k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: efccf469-107d-4db2-8fef-2d64fdaafe35,},Annotations:map[string]string{io.kubernetes.container.hash: 1a7bb2ef,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2d42948be398098d506eba0cf3484c010b5568248e8bb0b2bb3dc5baa79acde,PodSandboxId:be2272743b0db1640c7c5d071416801ca80b6779491dffd2a48b7f2ad8c77d9c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1726336274407598858,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-szrwb,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 59bb34a9-f9c7-4dd3-a490-4d8454f7d34a,},Annotations:map[string]string{io.kubernetes.container.hash: abd03822,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2155a56432d0911f48e9e3dfaeea167e955679f08de5309c5aa39bb82b9955c1,PodSandboxId:d7588c9c3f912eb252c8c37fdf133b175c04ea89d8c24e133838ce5efd7e8239,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726336273860916390,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5a
7bf799-59b1-47c9-87d9-ec1407b80dd6,},Annotations:map[string]string{io.kubernetes.container.hash: 3df2bb38,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2edcbb61d31108495b1808e520151bd558cb0b27d4d646af2b61c173de9b1038,PodSandboxId:79035ae565e4ef6f74986e0e51c5dbfbfa6e8a014cd64b13aa6b425dd65cd42a,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1726336267906662931,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-829285,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ad0b1ac06274b040b4f27426eaa19f38,},Anno
tations:map[string]string{io.kubernetes.container.hash: 30767ed9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eee06ab114a0889ea26475af1eec46e3af166b07e04034e7f1af9703525911e1,PodSandboxId:68ce9569e7d3a5542b26369680a52384a7b5c98f3f2dd36d801a684f930d0e29,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1726336267849534000,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-829285,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 583b84abc36e3a743c6d70f7687c95d6,},Annotations:map
[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:861c9dd26443d7d6e3067370141e5e4489ee448b2fe818a98bc768c6c463a781,PodSandboxId:4702899df19dd1fbfefcaa6b65401964a89d35ca664c419b99dc37ee5d548ab1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1726336267838191212,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-829285,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec8d03aa6e29ba35d4a0fa8aee04260a,},Annotations:map[string]str
ing{io.kubernetes.container.hash: e78960ef,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b4211d98c72001001977e8b0be50a7a799bf9f02d2f19c3279f83d889b45dbfa,PodSandboxId:d6cfc5124e7aaa9fd998fa0789c95d996a2f1793ed34fb008bbd084e5c1a09e4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1726336267823533073,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-829285,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5e39a5ff0b86f18c239768d804a3a3ec,},Annotation
s:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5df3b8b5-9d50-4fda-b070-072d25761ebe name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 17:51:27 test-preload-829285 crio[656]: time="2024-09-14 17:51:27.643740245Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ee8e97f7-28c7-48fe-89de-ebf35c9f83e0 name=/runtime.v1.RuntimeService/Version
	Sep 14 17:51:27 test-preload-829285 crio[656]: time="2024-09-14 17:51:27.643811452Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ee8e97f7-28c7-48fe-89de-ebf35c9f83e0 name=/runtime.v1.RuntimeService/Version
	Sep 14 17:51:27 test-preload-829285 crio[656]: time="2024-09-14 17:51:27.645161449Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=6e444631-a335-420e-8490-c6969f5ab104 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 14 17:51:27 test-preload-829285 crio[656]: time="2024-09-14 17:51:27.645717184Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726336287645693057,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6e444631-a335-420e-8490-c6969f5ab104 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 14 17:51:27 test-preload-829285 crio[656]: time="2024-09-14 17:51:27.646348254Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6db1cbf2-d5db-4173-8d26-84acd18abda1 name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 17:51:27 test-preload-829285 crio[656]: time="2024-09-14 17:51:27.646401844Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6db1cbf2-d5db-4173-8d26-84acd18abda1 name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 17:51:27 test-preload-829285 crio[656]: time="2024-09-14 17:51:27.646589438Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:73b742fec68b8755b3197502241e879f4bdf9b1d63307451c9a50ccb2111c3a0,PodSandboxId:33607644a451e49637f8e6fd72f32b54e5581b53b5c7c2beaf82ef0c18cc0d16,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1726336281390598280,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-jh47k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: efccf469-107d-4db2-8fef-2d64fdaafe35,},Annotations:map[string]string{io.kubernetes.container.hash: 1a7bb2ef,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2d42948be398098d506eba0cf3484c010b5568248e8bb0b2bb3dc5baa79acde,PodSandboxId:be2272743b0db1640c7c5d071416801ca80b6779491dffd2a48b7f2ad8c77d9c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1726336274407598858,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-szrwb,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 59bb34a9-f9c7-4dd3-a490-4d8454f7d34a,},Annotations:map[string]string{io.kubernetes.container.hash: abd03822,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2155a56432d0911f48e9e3dfaeea167e955679f08de5309c5aa39bb82b9955c1,PodSandboxId:d7588c9c3f912eb252c8c37fdf133b175c04ea89d8c24e133838ce5efd7e8239,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726336273860916390,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5a
7bf799-59b1-47c9-87d9-ec1407b80dd6,},Annotations:map[string]string{io.kubernetes.container.hash: 3df2bb38,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2edcbb61d31108495b1808e520151bd558cb0b27d4d646af2b61c173de9b1038,PodSandboxId:79035ae565e4ef6f74986e0e51c5dbfbfa6e8a014cd64b13aa6b425dd65cd42a,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1726336267906662931,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-829285,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ad0b1ac06274b040b4f27426eaa19f38,},Anno
tations:map[string]string{io.kubernetes.container.hash: 30767ed9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eee06ab114a0889ea26475af1eec46e3af166b07e04034e7f1af9703525911e1,PodSandboxId:68ce9569e7d3a5542b26369680a52384a7b5c98f3f2dd36d801a684f930d0e29,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1726336267849534000,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-829285,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 583b84abc36e3a743c6d70f7687c95d6,},Annotations:map
[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:861c9dd26443d7d6e3067370141e5e4489ee448b2fe818a98bc768c6c463a781,PodSandboxId:4702899df19dd1fbfefcaa6b65401964a89d35ca664c419b99dc37ee5d548ab1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1726336267838191212,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-829285,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec8d03aa6e29ba35d4a0fa8aee04260a,},Annotations:map[string]str
ing{io.kubernetes.container.hash: e78960ef,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b4211d98c72001001977e8b0be50a7a799bf9f02d2f19c3279f83d889b45dbfa,PodSandboxId:d6cfc5124e7aaa9fd998fa0789c95d996a2f1793ed34fb008bbd084e5c1a09e4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1726336267823533073,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-829285,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5e39a5ff0b86f18c239768d804a3a3ec,},Annotation
s:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=6db1cbf2-d5db-4173-8d26-84acd18abda1 name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 17:51:27 test-preload-829285 crio[656]: time="2024-09-14 17:51:27.688846336Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e9363066-f5cb-4a57-a8aa-af006dca7fa6 name=/runtime.v1.RuntimeService/Version
	Sep 14 17:51:27 test-preload-829285 crio[656]: time="2024-09-14 17:51:27.688915850Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e9363066-f5cb-4a57-a8aa-af006dca7fa6 name=/runtime.v1.RuntimeService/Version
	Sep 14 17:51:27 test-preload-829285 crio[656]: time="2024-09-14 17:51:27.690452628Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=8b318c8c-c94a-410e-8765-44676edf551a name=/runtime.v1.ImageService/ImageFsInfo
	Sep 14 17:51:27 test-preload-829285 crio[656]: time="2024-09-14 17:51:27.691015957Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726336287690856605,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8b318c8c-c94a-410e-8765-44676edf551a name=/runtime.v1.ImageService/ImageFsInfo
	Sep 14 17:51:27 test-preload-829285 crio[656]: time="2024-09-14 17:51:27.691571885Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=966510f8-7041-483b-8f09-bde0291ce0aa name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 17:51:27 test-preload-829285 crio[656]: time="2024-09-14 17:51:27.691636046Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=966510f8-7041-483b-8f09-bde0291ce0aa name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 17:51:27 test-preload-829285 crio[656]: time="2024-09-14 17:51:27.691799917Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:73b742fec68b8755b3197502241e879f4bdf9b1d63307451c9a50ccb2111c3a0,PodSandboxId:33607644a451e49637f8e6fd72f32b54e5581b53b5c7c2beaf82ef0c18cc0d16,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1726336281390598280,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-jh47k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: efccf469-107d-4db2-8fef-2d64fdaafe35,},Annotations:map[string]string{io.kubernetes.container.hash: 1a7bb2ef,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2d42948be398098d506eba0cf3484c010b5568248e8bb0b2bb3dc5baa79acde,PodSandboxId:be2272743b0db1640c7c5d071416801ca80b6779491dffd2a48b7f2ad8c77d9c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1726336274407598858,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-szrwb,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 59bb34a9-f9c7-4dd3-a490-4d8454f7d34a,},Annotations:map[string]string{io.kubernetes.container.hash: abd03822,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2155a56432d0911f48e9e3dfaeea167e955679f08de5309c5aa39bb82b9955c1,PodSandboxId:d7588c9c3f912eb252c8c37fdf133b175c04ea89d8c24e133838ce5efd7e8239,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726336273860916390,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5a
7bf799-59b1-47c9-87d9-ec1407b80dd6,},Annotations:map[string]string{io.kubernetes.container.hash: 3df2bb38,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2edcbb61d31108495b1808e520151bd558cb0b27d4d646af2b61c173de9b1038,PodSandboxId:79035ae565e4ef6f74986e0e51c5dbfbfa6e8a014cd64b13aa6b425dd65cd42a,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1726336267906662931,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-829285,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ad0b1ac06274b040b4f27426eaa19f38,},Anno
tations:map[string]string{io.kubernetes.container.hash: 30767ed9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eee06ab114a0889ea26475af1eec46e3af166b07e04034e7f1af9703525911e1,PodSandboxId:68ce9569e7d3a5542b26369680a52384a7b5c98f3f2dd36d801a684f930d0e29,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1726336267849534000,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-829285,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 583b84abc36e3a743c6d70f7687c95d6,},Annotations:map
[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:861c9dd26443d7d6e3067370141e5e4489ee448b2fe818a98bc768c6c463a781,PodSandboxId:4702899df19dd1fbfefcaa6b65401964a89d35ca664c419b99dc37ee5d548ab1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1726336267838191212,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-829285,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec8d03aa6e29ba35d4a0fa8aee04260a,},Annotations:map[string]str
ing{io.kubernetes.container.hash: e78960ef,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b4211d98c72001001977e8b0be50a7a799bf9f02d2f19c3279f83d889b45dbfa,PodSandboxId:d6cfc5124e7aaa9fd998fa0789c95d996a2f1793ed34fb008bbd084e5c1a09e4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1726336267823533073,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-829285,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5e39a5ff0b86f18c239768d804a3a3ec,},Annotation
s:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=966510f8-7041-483b-8f09-bde0291ce0aa name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 17:51:27 test-preload-829285 crio[656]: time="2024-09-14 17:51:27.723399598Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=3fc10396-acb8-484a-b5f2-8e704dbd300d name=/runtime.v1.RuntimeService/Version
	Sep 14 17:51:27 test-preload-829285 crio[656]: time="2024-09-14 17:51:27.723472659Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=3fc10396-acb8-484a-b5f2-8e704dbd300d name=/runtime.v1.RuntimeService/Version
	Sep 14 17:51:27 test-preload-829285 crio[656]: time="2024-09-14 17:51:27.724642542Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=03097a13-fcf2-4cd5-8109-1b08809576a7 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 14 17:51:27 test-preload-829285 crio[656]: time="2024-09-14 17:51:27.725078961Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726336287725056969,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=03097a13-fcf2-4cd5-8109-1b08809576a7 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 14 17:51:27 test-preload-829285 crio[656]: time="2024-09-14 17:51:27.725642405Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=bf45d397-9d7c-42ff-a1ac-69531c9dd639 name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 17:51:27 test-preload-829285 crio[656]: time="2024-09-14 17:51:27.725791447Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=bf45d397-9d7c-42ff-a1ac-69531c9dd639 name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 17:51:27 test-preload-829285 crio[656]: time="2024-09-14 17:51:27.725962068Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:73b742fec68b8755b3197502241e879f4bdf9b1d63307451c9a50ccb2111c3a0,PodSandboxId:33607644a451e49637f8e6fd72f32b54e5581b53b5c7c2beaf82ef0c18cc0d16,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1726336281390598280,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-jh47k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: efccf469-107d-4db2-8fef-2d64fdaafe35,},Annotations:map[string]string{io.kubernetes.container.hash: 1a7bb2ef,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2d42948be398098d506eba0cf3484c010b5568248e8bb0b2bb3dc5baa79acde,PodSandboxId:be2272743b0db1640c7c5d071416801ca80b6779491dffd2a48b7f2ad8c77d9c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1726336274407598858,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-szrwb,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 59bb34a9-f9c7-4dd3-a490-4d8454f7d34a,},Annotations:map[string]string{io.kubernetes.container.hash: abd03822,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2155a56432d0911f48e9e3dfaeea167e955679f08de5309c5aa39bb82b9955c1,PodSandboxId:d7588c9c3f912eb252c8c37fdf133b175c04ea89d8c24e133838ce5efd7e8239,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726336273860916390,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5a
7bf799-59b1-47c9-87d9-ec1407b80dd6,},Annotations:map[string]string{io.kubernetes.container.hash: 3df2bb38,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2edcbb61d31108495b1808e520151bd558cb0b27d4d646af2b61c173de9b1038,PodSandboxId:79035ae565e4ef6f74986e0e51c5dbfbfa6e8a014cd64b13aa6b425dd65cd42a,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1726336267906662931,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-829285,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ad0b1ac06274b040b4f27426eaa19f38,},Anno
tations:map[string]string{io.kubernetes.container.hash: 30767ed9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eee06ab114a0889ea26475af1eec46e3af166b07e04034e7f1af9703525911e1,PodSandboxId:68ce9569e7d3a5542b26369680a52384a7b5c98f3f2dd36d801a684f930d0e29,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1726336267849534000,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-829285,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 583b84abc36e3a743c6d70f7687c95d6,},Annotations:map
[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:861c9dd26443d7d6e3067370141e5e4489ee448b2fe818a98bc768c6c463a781,PodSandboxId:4702899df19dd1fbfefcaa6b65401964a89d35ca664c419b99dc37ee5d548ab1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1726336267838191212,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-829285,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec8d03aa6e29ba35d4a0fa8aee04260a,},Annotations:map[string]str
ing{io.kubernetes.container.hash: e78960ef,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b4211d98c72001001977e8b0be50a7a799bf9f02d2f19c3279f83d889b45dbfa,PodSandboxId:d6cfc5124e7aaa9fd998fa0789c95d996a2f1793ed34fb008bbd084e5c1a09e4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1726336267823533073,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-829285,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5e39a5ff0b86f18c239768d804a3a3ec,},Annotation
s:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=bf45d397-9d7c-42ff-a1ac-69531c9dd639 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	73b742fec68b8       a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03   6 seconds ago       Running             coredns                   1                   33607644a451e       coredns-6d4b75cb6d-jh47k
	d2d42948be398       7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7   13 seconds ago      Running             kube-proxy                1                   be2272743b0db       kube-proxy-szrwb
	2155a56432d09       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   13 seconds ago      Running             storage-provisioner       2                   d7588c9c3f912       storage-provisioner
	2edcbb61d3110       aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b   19 seconds ago      Running             etcd                      1                   79035ae565e4e       etcd-test-preload-829285
	eee06ab114a08       03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9   19 seconds ago      Running             kube-scheduler            1                   68ce9569e7d3a       kube-scheduler-test-preload-829285
	861c9dd26443d       6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d   19 seconds ago      Running             kube-apiserver            1                   4702899df19dd       kube-apiserver-test-preload-829285
	b4211d98c7200       1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48   19 seconds ago      Running             kube-controller-manager   1                   d6cfc5124e7aa       kube-controller-manager-test-preload-829285
	
	
	==> coredns [73b742fec68b8755b3197502241e879f4bdf9b1d63307451c9a50ccb2111c3a0] <==
	.:53
	[INFO] plugin/reload: Running configuration MD5 = bbeeddb09682f41960fef01b05cb3a3d
	CoreDNS-1.8.6
	linux/amd64, go1.17.1, 13a9191
	[INFO] 127.0.0.1:35197 - 28858 "HINFO IN 1391635003718511756.7535648399823771463. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.012879804s
	
	
	==> describe nodes <==
	Name:               test-preload-829285
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=test-preload-829285
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fbeeb744274463b05401c917e5ab21bbaf5ef95a
	                    minikube.k8s.io/name=test-preload-829285
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_14T17_49_12_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 14 Sep 2024 17:49:09 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  test-preload-829285
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 14 Sep 2024 17:51:22 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 14 Sep 2024 17:51:22 +0000   Sat, 14 Sep 2024 17:49:05 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 14 Sep 2024 17:51:22 +0000   Sat, 14 Sep 2024 17:49:05 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 14 Sep 2024 17:51:22 +0000   Sat, 14 Sep 2024 17:49:05 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 14 Sep 2024 17:51:22 +0000   Sat, 14 Sep 2024 17:51:22 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.71
	  Hostname:    test-preload-829285
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 b1ed8dc8aa0a47e9980d6d2b467bb25d
	  System UUID:                b1ed8dc8-aa0a-47e9-980d-6d2b467bb25d
	  Boot ID:                    0e5bd2fc-7bef-43e8-8761-42d2573b5d77
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.24.4
	  Kube-Proxy Version:         v1.24.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                           CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                           ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6d4b75cb6d-jh47k                       100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     2m1s
	  kube-system                 etcd-test-preload-829285                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         2m15s
	  kube-system                 kube-apiserver-test-preload-829285             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m15s
	  kube-system                 kube-controller-manager-test-preload-829285    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m17s
	  kube-system                 kube-proxy-szrwb                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m2s
	  kube-system                 kube-scheduler-test-preload-829285             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m15s
	  kube-system                 storage-provisioner                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 2m                     kube-proxy       
	  Normal  Starting                 13s                    kube-proxy       
	  Normal  NodeAllocatableEnforced  2m23s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 2m23s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m23s (x3 over 2m23s)  kubelet          Node test-preload-829285 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     2m23s (x3 over 2m23s)  kubelet          Node test-preload-829285 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    2m23s (x3 over 2m23s)  kubelet          Node test-preload-829285 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m15s                  kubelet          Node test-preload-829285 status is now: NodeHasSufficientPID
	  Normal  Starting                 2m15s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  2m15s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  2m15s                  kubelet          Node test-preload-829285 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m15s                  kubelet          Node test-preload-829285 status is now: NodeHasNoDiskPressure
	  Normal  NodeReady                2m5s                   kubelet          Node test-preload-829285 status is now: NodeReady
	  Normal  RegisteredNode           2m2s                   node-controller  Node test-preload-829285 event: Registered Node test-preload-829285 in Controller
	  Normal  Starting                 20s                    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  20s (x8 over 20s)      kubelet          Node test-preload-829285 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    20s (x8 over 20s)      kubelet          Node test-preload-829285 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     20s (x7 over 20s)      kubelet          Node test-preload-829285 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  20s                    kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           2s                     node-controller  Node test-preload-829285 event: Registered Node test-preload-829285 in Controller
	
	
	==> dmesg <==
	[Sep14 17:50] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.052366] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.038106] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.796963] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.925435] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +2.412366] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.833289] systemd-fstab-generator[576]: Ignoring "noauto" option for root device
	[  +0.124512] systemd-fstab-generator[588]: Ignoring "noauto" option for root device
	[  +0.189796] systemd-fstab-generator[602]: Ignoring "noauto" option for root device
	[  +0.103306] systemd-fstab-generator[614]: Ignoring "noauto" option for root device
	[  +0.254260] systemd-fstab-generator[645]: Ignoring "noauto" option for root device
	[Sep14 17:51] systemd-fstab-generator[977]: Ignoring "noauto" option for root device
	[  +0.065175] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.602087] systemd-fstab-generator[1108]: Ignoring "noauto" option for root device
	[  +6.501105] kauditd_printk_skb: 105 callbacks suppressed
	[  +1.617111] systemd-fstab-generator[1738]: Ignoring "noauto" option for root device
	[  +6.123337] kauditd_printk_skb: 53 callbacks suppressed
	
	
	==> etcd [2edcbb61d31108495b1808e520151bd558cb0b27d4d646af2b61c173de9b1038] <==
	{"level":"info","ts":"2024-09-14T17:51:08.221Z","caller":"etcdserver/server.go:851","msg":"starting etcd server","local-member-id":"226d7ac4e2309206","local-server-version":"3.5.3","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2024-09-14T17:51:08.234Z","caller":"etcdserver/server.go:752","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2024-09-14T17:51:08.234Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"226d7ac4e2309206 switched to configuration voters=(2480773955778023942)"}
	{"level":"info","ts":"2024-09-14T17:51:08.234Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"98fbf1e9ed6d9a6e","local-member-id":"226d7ac4e2309206","added-peer-id":"226d7ac4e2309206","added-peer-peer-urls":["https://192.168.39.71:2380"]}
	{"level":"info","ts":"2024-09-14T17:51:08.235Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"98fbf1e9ed6d9a6e","local-member-id":"226d7ac4e2309206","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-14T17:51:08.239Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-14T17:51:08.252Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-09-14T17:51:08.252Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"226d7ac4e2309206","initial-advertise-peer-urls":["https://192.168.39.71:2380"],"listen-peer-urls":["https://192.168.39.71:2380"],"advertise-client-urls":["https://192.168.39.71:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.71:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-09-14T17:51:08.252Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-09-14T17:51:08.252Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"192.168.39.71:2380"}
	{"level":"info","ts":"2024-09-14T17:51:08.252Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"192.168.39.71:2380"}
	{"level":"info","ts":"2024-09-14T17:51:10.097Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"226d7ac4e2309206 is starting a new election at term 2"}
	{"level":"info","ts":"2024-09-14T17:51:10.097Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"226d7ac4e2309206 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-09-14T17:51:10.097Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"226d7ac4e2309206 received MsgPreVoteResp from 226d7ac4e2309206 at term 2"}
	{"level":"info","ts":"2024-09-14T17:51:10.097Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"226d7ac4e2309206 became candidate at term 3"}
	{"level":"info","ts":"2024-09-14T17:51:10.097Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"226d7ac4e2309206 received MsgVoteResp from 226d7ac4e2309206 at term 3"}
	{"level":"info","ts":"2024-09-14T17:51:10.097Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"226d7ac4e2309206 became leader at term 3"}
	{"level":"info","ts":"2024-09-14T17:51:10.098Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 226d7ac4e2309206 elected leader 226d7ac4e2309206 at term 3"}
	{"level":"info","ts":"2024-09-14T17:51:10.098Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"226d7ac4e2309206","local-member-attributes":"{Name:test-preload-829285 ClientURLs:[https://192.168.39.71:2379]}","request-path":"/0/members/226d7ac4e2309206/attributes","cluster-id":"98fbf1e9ed6d9a6e","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-14T17:51:10.098Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-14T17:51:10.100Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-14T17:51:10.100Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-14T17:51:10.102Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.39.71:2379"}
	{"level":"info","ts":"2024-09-14T17:51:10.102Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-14T17:51:10.102Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 17:51:28 up 0 min,  0 users,  load average: 1.07, 0.29, 0.10
	Linux test-preload-829285 5.10.207 #1 SMP Sat Sep 14 07:35:46 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [861c9dd26443d7d6e3067370141e5e4489ee448b2fe818a98bc768c6c463a781] <==
	I0914 17:51:12.442374       1 establishing_controller.go:76] Starting EstablishingController
	I0914 17:51:12.442468       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I0914 17:51:12.442538       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0914 17:51:12.442580       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0914 17:51:12.442728       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I0914 17:51:12.442746       1 shared_informer.go:255] Waiting for caches to sync for crd-autoregister
	E0914 17:51:12.541345       1 controller.go:169] Error removing old endpoints from kubernetes service: no master IPs were listed in storage, refusing to erase all endpoints for the kubernetes service
	I0914 17:51:12.560375       1 shared_informer.go:262] Caches are synced for crd-autoregister
	I0914 17:51:12.585486       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0914 17:51:12.587869       1 shared_informer.go:262] Caches are synced for node_authorizer
	I0914 17:51:12.616898       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0914 17:51:12.616929       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I0914 17:51:12.622785       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0914 17:51:12.627587       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I0914 17:51:12.628676       1 cache.go:39] Caches are synced for autoregister controller
	I0914 17:51:13.073380       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0914 17:51:13.432366       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0914 17:51:13.906805       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0914 17:51:13.920478       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0914 17:51:13.976453       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0914 17:51:13.997575       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0914 17:51:14.006224       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0914 17:51:14.733275       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	I0914 17:51:25.626765       1 controller.go:611] quota admission added evaluator for: endpoints
	I0914 17:51:25.844865       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [b4211d98c72001001977e8b0be50a7a799bf9f02d2f19c3279f83d889b45dbfa] <==
	I0914 17:51:25.783492       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-kubelet-serving
	I0914 17:51:25.786005       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-legacy-unknown
	I0914 17:51:25.786094       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-kubelet-client
	I0914 17:51:25.786135       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0914 17:51:25.793510       1 shared_informer.go:262] Caches are synced for ClusterRoleAggregator
	W0914 17:51:25.809541       1 actual_state_of_world.go:541] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="test-preload-829285" does not exist
	I0914 17:51:25.833961       1 shared_informer.go:262] Caches are synced for daemon sets
	I0914 17:51:25.834405       1 shared_informer.go:262] Caches are synced for endpoint_slice
	I0914 17:51:25.836584       1 shared_informer.go:262] Caches are synced for GC
	I0914 17:51:25.842438       1 shared_informer.go:262] Caches are synced for TTL
	I0914 17:51:25.847281       1 shared_informer.go:262] Caches are synced for attach detach
	I0914 17:51:25.850458       1 shared_informer.go:262] Caches are synced for node
	I0914 17:51:25.850489       1 range_allocator.go:173] Starting range CIDR allocator
	I0914 17:51:25.850495       1 shared_informer.go:255] Waiting for caches to sync for cidrallocator
	I0914 17:51:25.850503       1 shared_informer.go:262] Caches are synced for cidrallocator
	I0914 17:51:25.884134       1 shared_informer.go:262] Caches are synced for persistent volume
	I0914 17:51:25.905107       1 shared_informer.go:262] Caches are synced for taint
	I0914 17:51:25.905349       1 node_lifecycle_controller.go:1399] Initializing eviction metric for zone: 
	W0914 17:51:25.905472       1 node_lifecycle_controller.go:1014] Missing timestamp for Node test-preload-829285. Assuming now as a timestamp.
	I0914 17:51:25.905532       1 node_lifecycle_controller.go:1215] Controller detected that zone  is now in state Normal.
	I0914 17:51:25.905476       1 taint_manager.go:187] "Starting NoExecuteTaintManager"
	I0914 17:51:25.905894       1 event.go:294] "Event occurred" object="test-preload-829285" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node test-preload-829285 event: Registered Node test-preload-829285 in Controller"
	I0914 17:51:26.300428       1 shared_informer.go:262] Caches are synced for garbage collector
	I0914 17:51:26.323823       1 shared_informer.go:262] Caches are synced for garbage collector
	I0914 17:51:26.323884       1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	
	
	==> kube-proxy [d2d42948be398098d506eba0cf3484c010b5568248e8bb0b2bb3dc5baa79acde] <==
	I0914 17:51:14.613509       1 node.go:163] Successfully retrieved node IP: 192.168.39.71
	I0914 17:51:14.613581       1 server_others.go:138] "Detected node IP" address="192.168.39.71"
	I0914 17:51:14.613633       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0914 17:51:14.710496       1 server_others.go:199] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
	I0914 17:51:14.713819       1 server_others.go:206] "Using iptables Proxier"
	I0914 17:51:14.714481       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0914 17:51:14.715231       1 server.go:661] "Version info" version="v1.24.4"
	I0914 17:51:14.715353       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0914 17:51:14.717922       1 config.go:317] "Starting service config controller"
	I0914 17:51:14.718586       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0914 17:51:14.718735       1 config.go:444] "Starting node config controller"
	I0914 17:51:14.721939       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0914 17:51:14.719498       1 config.go:226] "Starting endpoint slice config controller"
	I0914 17:51:14.722263       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0914 17:51:14.819153       1 shared_informer.go:262] Caches are synced for service config
	I0914 17:51:14.822650       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I0914 17:51:14.822736       1 shared_informer.go:262] Caches are synced for node config
	
	
	==> kube-scheduler [eee06ab114a0889ea26475af1eec46e3af166b07e04034e7f1af9703525911e1] <==
	I0914 17:51:08.644755       1 serving.go:348] Generated self-signed cert in-memory
	W0914 17:51:12.474391       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0914 17:51:12.475055       1 authentication.go:346] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0914 17:51:12.475129       1 authentication.go:347] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0914 17:51:12.475155       1 authentication.go:348] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0914 17:51:12.531229       1 server.go:147] "Starting Kubernetes Scheduler" version="v1.24.4"
	I0914 17:51:12.531359       1 server.go:149] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0914 17:51:12.538167       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0914 17:51:12.541834       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0914 17:51:12.541872       1 shared_informer.go:255] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0914 17:51:12.541924       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0914 17:51:12.642192       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 14 17:51:12 test-preload-829285 kubelet[1115]: I0914 17:51:12.588849    1115 kubelet_node_status.go:73] "Successfully registered node" node="test-preload-829285"
	Sep 14 17:51:12 test-preload-829285 kubelet[1115]: I0914 17:51:12.592005    1115 setters.go:532] "Node became not ready" node="test-preload-829285" condition={Type:Ready Status:False LastHeartbeatTime:2024-09-14 17:51:12.59189699 +0000 UTC m=+5.596720063 LastTransitionTime:2024-09-14 17:51:12.59189699 +0000 UTC m=+5.596720063 Reason:KubeletNotReady Message:container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?}
	Sep 14 17:51:13 test-preload-829285 kubelet[1115]: I0914 17:51:13.108139    1115 apiserver.go:52] "Watching apiserver"
	Sep 14 17:51:13 test-preload-829285 kubelet[1115]: I0914 17:51:13.113815    1115 topology_manager.go:200] "Topology Admit Handler"
	Sep 14 17:51:13 test-preload-829285 kubelet[1115]: I0914 17:51:13.114098    1115 topology_manager.go:200] "Topology Admit Handler"
	Sep 14 17:51:13 test-preload-829285 kubelet[1115]: I0914 17:51:13.114198    1115 topology_manager.go:200] "Topology Admit Handler"
	Sep 14 17:51:13 test-preload-829285 kubelet[1115]: E0914 17:51:13.116399    1115 pod_workers.go:951] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-6d4b75cb6d-jh47k" podUID=efccf469-107d-4db2-8fef-2d64fdaafe35
	Sep 14 17:51:13 test-preload-829285 kubelet[1115]: I0914 17:51:13.183837    1115 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gkqt4\" (UniqueName: \"kubernetes.io/projected/efccf469-107d-4db2-8fef-2d64fdaafe35-kube-api-access-gkqt4\") pod \"coredns-6d4b75cb6d-jh47k\" (UID: \"efccf469-107d-4db2-8fef-2d64fdaafe35\") " pod="kube-system/coredns-6d4b75cb6d-jh47k"
	Sep 14 17:51:13 test-preload-829285 kubelet[1115]: I0914 17:51:13.183892    1115 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/5a7bf799-59b1-47c9-87d9-ec1407b80dd6-tmp\") pod \"storage-provisioner\" (UID: \"5a7bf799-59b1-47c9-87d9-ec1407b80dd6\") " pod="kube-system/storage-provisioner"
	Sep 14 17:51:13 test-preload-829285 kubelet[1115]: I0914 17:51:13.183931    1115 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/59bb34a9-f9c7-4dd3-a490-4d8454f7d34a-kube-proxy\") pod \"kube-proxy-szrwb\" (UID: \"59bb34a9-f9c7-4dd3-a490-4d8454f7d34a\") " pod="kube-system/kube-proxy-szrwb"
	Sep 14 17:51:13 test-preload-829285 kubelet[1115]: I0914 17:51:13.183954    1115 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wx2n9\" (UniqueName: \"kubernetes.io/projected/59bb34a9-f9c7-4dd3-a490-4d8454f7d34a-kube-api-access-wx2n9\") pod \"kube-proxy-szrwb\" (UID: \"59bb34a9-f9c7-4dd3-a490-4d8454f7d34a\") " pod="kube-system/kube-proxy-szrwb"
	Sep 14 17:51:13 test-preload-829285 kubelet[1115]: I0914 17:51:13.183973    1115 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7krx4\" (UniqueName: \"kubernetes.io/projected/5a7bf799-59b1-47c9-87d9-ec1407b80dd6-kube-api-access-7krx4\") pod \"storage-provisioner\" (UID: \"5a7bf799-59b1-47c9-87d9-ec1407b80dd6\") " pod="kube-system/storage-provisioner"
	Sep 14 17:51:13 test-preload-829285 kubelet[1115]: I0914 17:51:13.183990    1115 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/efccf469-107d-4db2-8fef-2d64fdaafe35-config-volume\") pod \"coredns-6d4b75cb6d-jh47k\" (UID: \"efccf469-107d-4db2-8fef-2d64fdaafe35\") " pod="kube-system/coredns-6d4b75cb6d-jh47k"
	Sep 14 17:51:13 test-preload-829285 kubelet[1115]: I0914 17:51:13.184007    1115 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/59bb34a9-f9c7-4dd3-a490-4d8454f7d34a-xtables-lock\") pod \"kube-proxy-szrwb\" (UID: \"59bb34a9-f9c7-4dd3-a490-4d8454f7d34a\") " pod="kube-system/kube-proxy-szrwb"
	Sep 14 17:51:13 test-preload-829285 kubelet[1115]: I0914 17:51:13.184024    1115 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/59bb34a9-f9c7-4dd3-a490-4d8454f7d34a-lib-modules\") pod \"kube-proxy-szrwb\" (UID: \"59bb34a9-f9c7-4dd3-a490-4d8454f7d34a\") " pod="kube-system/kube-proxy-szrwb"
	Sep 14 17:51:13 test-preload-829285 kubelet[1115]: I0914 17:51:13.184038    1115 reconciler.go:159] "Reconciler: start to sync state"
	Sep 14 17:51:13 test-preload-829285 kubelet[1115]: E0914 17:51:13.286836    1115 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Sep 14 17:51:13 test-preload-829285 kubelet[1115]: E0914 17:51:13.286930    1115 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/efccf469-107d-4db2-8fef-2d64fdaafe35-config-volume podName:efccf469-107d-4db2-8fef-2d64fdaafe35 nodeName:}" failed. No retries permitted until 2024-09-14 17:51:13.786894724 +0000 UTC m=+6.791717808 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/efccf469-107d-4db2-8fef-2d64fdaafe35-config-volume") pod "coredns-6d4b75cb6d-jh47k" (UID: "efccf469-107d-4db2-8fef-2d64fdaafe35") : object "kube-system"/"coredns" not registered
	Sep 14 17:51:13 test-preload-829285 kubelet[1115]: E0914 17:51:13.792554    1115 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Sep 14 17:51:13 test-preload-829285 kubelet[1115]: E0914 17:51:13.792635    1115 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/efccf469-107d-4db2-8fef-2d64fdaafe35-config-volume podName:efccf469-107d-4db2-8fef-2d64fdaafe35 nodeName:}" failed. No retries permitted until 2024-09-14 17:51:14.792619773 +0000 UTC m=+7.797442856 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/efccf469-107d-4db2-8fef-2d64fdaafe35-config-volume") pod "coredns-6d4b75cb6d-jh47k" (UID: "efccf469-107d-4db2-8fef-2d64fdaafe35") : object "kube-system"/"coredns" not registered
	Sep 14 17:51:14 test-preload-829285 kubelet[1115]: E0914 17:51:14.801024    1115 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Sep 14 17:51:14 test-preload-829285 kubelet[1115]: E0914 17:51:14.801155    1115 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/efccf469-107d-4db2-8fef-2d64fdaafe35-config-volume podName:efccf469-107d-4db2-8fef-2d64fdaafe35 nodeName:}" failed. No retries permitted until 2024-09-14 17:51:16.801125059 +0000 UTC m=+9.805948143 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/efccf469-107d-4db2-8fef-2d64fdaafe35-config-volume") pod "coredns-6d4b75cb6d-jh47k" (UID: "efccf469-107d-4db2-8fef-2d64fdaafe35") : object "kube-system"/"coredns" not registered
	Sep 14 17:51:15 test-preload-829285 kubelet[1115]: E0914 17:51:15.213517    1115 pod_workers.go:951] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-6d4b75cb6d-jh47k" podUID=efccf469-107d-4db2-8fef-2d64fdaafe35
	Sep 14 17:51:16 test-preload-829285 kubelet[1115]: E0914 17:51:16.816709    1115 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Sep 14 17:51:16 test-preload-829285 kubelet[1115]: E0914 17:51:16.817230    1115 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/efccf469-107d-4db2-8fef-2d64fdaafe35-config-volume podName:efccf469-107d-4db2-8fef-2d64fdaafe35 nodeName:}" failed. No retries permitted until 2024-09-14 17:51:20.817206811 +0000 UTC m=+13.822029884 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/efccf469-107d-4db2-8fef-2d64fdaafe35-config-volume") pod "coredns-6d4b75cb6d-jh47k" (UID: "efccf469-107d-4db2-8fef-2d64fdaafe35") : object "kube-system"/"coredns" not registered
	
	
	==> storage-provisioner [2155a56432d0911f48e9e3dfaeea167e955679f08de5309c5aa39bb82b9955c1] <==
	I0914 17:51:14.005860       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p test-preload-829285 -n test-preload-829285
helpers_test.go:261: (dbg) Run:  kubectl --context test-preload-829285 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPreload FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "test-preload-829285" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-829285
E0914 17:51:28.694968   16016 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/addons-996992/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-829285: (1.154118071s)
--- FAIL: TestPreload (211.44s)

                                                
                                    
TestKubernetesUpgrade (359.64s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-470019 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-470019 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109 (5m0.082436401s)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-470019] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19643
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19643-8806/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19643-8806/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "kubernetes-upgrade-470019" primary control-plane node in "kubernetes-upgrade-470019" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0914 17:57:02.357151   55203 out.go:345] Setting OutFile to fd 1 ...
	I0914 17:57:02.357265   55203 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 17:57:02.357275   55203 out.go:358] Setting ErrFile to fd 2...
	I0914 17:57:02.357286   55203 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 17:57:02.357492   55203 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19643-8806/.minikube/bin
	I0914 17:57:02.358059   55203 out.go:352] Setting JSON to false
	I0914 17:57:02.359093   55203 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":5966,"bootTime":1726330656,"procs":220,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0914 17:57:02.359191   55203 start.go:139] virtualization: kvm guest
	I0914 17:57:02.361475   55203 out.go:177] * [kubernetes-upgrade-470019] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0914 17:57:02.362691   55203 out.go:177]   - MINIKUBE_LOCATION=19643
	I0914 17:57:02.362690   55203 notify.go:220] Checking for updates...
	I0914 17:57:02.364207   55203 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0914 17:57:02.365460   55203 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19643-8806/kubeconfig
	I0914 17:57:02.366617   55203 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19643-8806/.minikube
	I0914 17:57:02.367782   55203 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0914 17:57:02.368971   55203 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0914 17:57:02.370630   55203 config.go:182] Loaded profile config "NoKubernetes-710005": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v0.0.0
	I0914 17:57:02.370714   55203 config.go:182] Loaded profile config "cert-expiration-724454": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0914 17:57:02.370797   55203 config.go:182] Loaded profile config "force-systemd-flag-213182": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0914 17:57:02.370881   55203 driver.go:394] Setting default libvirt URI to qemu:///system
	I0914 17:57:02.408591   55203 out.go:177] * Using the kvm2 driver based on user configuration
	I0914 17:57:02.409810   55203 start.go:297] selected driver: kvm2
	I0914 17:57:02.409822   55203 start.go:901] validating driver "kvm2" against <nil>
	I0914 17:57:02.409833   55203 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0914 17:57:02.410547   55203 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 17:57:02.410627   55203 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19643-8806/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0914 17:57:02.427068   55203 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0914 17:57:02.427124   55203 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0914 17:57:02.427409   55203 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0914 17:57:02.427439   55203 cni.go:84] Creating CNI manager for ""
	I0914 17:57:02.427492   55203 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0914 17:57:02.427503   55203 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0914 17:57:02.427574   55203 start.go:340] cluster config:
	{Name:kubernetes-upgrade-470019 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726281268-19643@sha256:cce8be0c1ac4e3d852132008ef1cc1dcf5b79f708d025db83f146ae65db32e8e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-470019 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.
local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0914 17:57:02.427694   55203 iso.go:125] acquiring lock: {Name:mk538f7a4abc7956f82511183088cbfac1e66ebb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 17:57:02.429376   55203 out.go:177] * Starting "kubernetes-upgrade-470019" primary control-plane node in "kubernetes-upgrade-470019" cluster
	I0914 17:57:02.430501   55203 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0914 17:57:02.430552   55203 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19643-8806/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0914 17:57:02.430578   55203 cache.go:56] Caching tarball of preloaded images
	I0914 17:57:02.430657   55203 preload.go:172] Found /home/jenkins/minikube-integration/19643-8806/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0914 17:57:02.430670   55203 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0914 17:57:02.430765   55203 profile.go:143] Saving config to /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/kubernetes-upgrade-470019/config.json ...
	I0914 17:57:02.430793   55203 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/kubernetes-upgrade-470019/config.json: {Name:mkbaa2256821afad6c88efbd5319133d7cb4a726 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 17:57:02.430941   55203 start.go:360] acquireMachinesLock for kubernetes-upgrade-470019: {Name:mk26748ac63472c3f8b6f3848c12e76160c2970c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0914 17:57:34.082802   55203 start.go:364] duration metric: took 31.651828496s to acquireMachinesLock for "kubernetes-upgrade-470019"
	I0914 17:57:34.082868   55203 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-470019 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19643/minikube-v1.34.0-1726281733-19643-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726281268-19643@sha256:cce8be0c1ac4e3d852132008ef1cc1dcf5b79f708d025db83f146ae65db32e8e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesC
onfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-470019 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableO
ptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0914 17:57:34.082995   55203 start.go:125] createHost starting for "" (driver="kvm2")
	I0914 17:57:34.085508   55203 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0914 17:57:34.085724   55203 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 17:57:34.085773   55203 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 17:57:34.103665   55203 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44781
	I0914 17:57:34.104132   55203 main.go:141] libmachine: () Calling .GetVersion
	I0914 17:57:34.104759   55203 main.go:141] libmachine: Using API Version  1
	I0914 17:57:34.104787   55203 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 17:57:34.105230   55203 main.go:141] libmachine: () Calling .GetMachineName
	I0914 17:57:34.105414   55203 main.go:141] libmachine: (kubernetes-upgrade-470019) Calling .GetMachineName
	I0914 17:57:34.105527   55203 main.go:141] libmachine: (kubernetes-upgrade-470019) Calling .DriverName
	I0914 17:57:34.105741   55203 start.go:159] libmachine.API.Create for "kubernetes-upgrade-470019" (driver="kvm2")
	I0914 17:57:34.105776   55203 client.go:168] LocalClient.Create starting
	I0914 17:57:34.105813   55203 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca.pem
	I0914 17:57:34.105859   55203 main.go:141] libmachine: Decoding PEM data...
	I0914 17:57:34.105878   55203 main.go:141] libmachine: Parsing certificate...
	I0914 17:57:34.105953   55203 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19643-8806/.minikube/certs/cert.pem
	I0914 17:57:34.105983   55203 main.go:141] libmachine: Decoding PEM data...
	I0914 17:57:34.106001   55203 main.go:141] libmachine: Parsing certificate...
	I0914 17:57:34.106034   55203 main.go:141] libmachine: Running pre-create checks...
	I0914 17:57:34.106048   55203 main.go:141] libmachine: (kubernetes-upgrade-470019) Calling .PreCreateCheck
	I0914 17:57:34.106528   55203 main.go:141] libmachine: (kubernetes-upgrade-470019) Calling .GetConfigRaw
	I0914 17:57:34.106997   55203 main.go:141] libmachine: Creating machine...
	I0914 17:57:34.107015   55203 main.go:141] libmachine: (kubernetes-upgrade-470019) Calling .Create
	I0914 17:57:34.107152   55203 main.go:141] libmachine: (kubernetes-upgrade-470019) Creating KVM machine...
	I0914 17:57:34.108685   55203 main.go:141] libmachine: (kubernetes-upgrade-470019) DBG | found existing default KVM network
	I0914 17:57:34.110041   55203 main.go:141] libmachine: (kubernetes-upgrade-470019) DBG | I0914 17:57:34.109862   55482 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr3 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:e9:36:2e} reservation:<nil>}
	I0914 17:57:34.111042   55203 main.go:141] libmachine: (kubernetes-upgrade-470019) DBG | I0914 17:57:34.110944   55482 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:51:2b:d7} reservation:<nil>}
	I0914 17:57:34.112067   55203 main.go:141] libmachine: (kubernetes-upgrade-470019) DBG | I0914 17:57:34.111974   55482 network.go:211] skipping subnet 192.168.61.0/24 that is taken: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.61.1 IfaceMTU:1500 IfaceMAC:52:54:00:77:85:7b} reservation:<nil>}
	I0914 17:57:34.113165   55203 main.go:141] libmachine: (kubernetes-upgrade-470019) DBG | I0914 17:57:34.113089   55482 network.go:206] using free private subnet 192.168.72.0/24: &{IP:192.168.72.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.72.0/24 Gateway:192.168.72.1 ClientMin:192.168.72.2 ClientMax:192.168.72.254 Broadcast:192.168.72.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002a5940}
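The subnet scan above walks candidate 192.168.X.0/24 ranges and skips any that an existing libvirt network already occupies, settling on 192.168.72.0/24. A minimal standalone Go sketch of that selection step follows; it is not minikube's actual network.go, and the "taken" set is hypothetical, mirroring the skipped subnets in the log:

    package main

    import "fmt"

    // pickFreeSubnet returns the first 192.168.X.0/24 candidate whose third
    // octet is not already claimed by another host network.
    func pickFreeSubnet(taken map[int]bool) (string, error) {
    	// Candidate third octets, roughly mirroring the ranges seen in the log.
    	for _, octet := range []int{39, 50, 61, 72, 83, 94} {
    		if taken[octet] {
    			continue // subnet already used by an existing libvirt network
    		}
    		return fmt.Sprintf("192.168.%d.0/24", octet), nil
    	}
    	return "", fmt.Errorf("no free private /24 found")
    }

    func main() {
    	// Hypothetical "taken" set matching the skipped subnets above.
    	taken := map[int]bool{39: true, 50: true, 61: true}
    	subnet, err := pickFreeSubnet(taken)
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println("using free private subnet", subnet) // 192.168.72.0/24
    }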
	I0914 17:57:34.113235   55203 main.go:141] libmachine: (kubernetes-upgrade-470019) DBG | created network xml: 
	I0914 17:57:34.113257   55203 main.go:141] libmachine: (kubernetes-upgrade-470019) DBG | <network>
	I0914 17:57:34.113270   55203 main.go:141] libmachine: (kubernetes-upgrade-470019) DBG |   <name>mk-kubernetes-upgrade-470019</name>
	I0914 17:57:34.113286   55203 main.go:141] libmachine: (kubernetes-upgrade-470019) DBG |   <dns enable='no'/>
	I0914 17:57:34.113310   55203 main.go:141] libmachine: (kubernetes-upgrade-470019) DBG |   
	I0914 17:57:34.113322   55203 main.go:141] libmachine: (kubernetes-upgrade-470019) DBG |   <ip address='192.168.72.1' netmask='255.255.255.0'>
	I0914 17:57:34.113335   55203 main.go:141] libmachine: (kubernetes-upgrade-470019) DBG |     <dhcp>
	I0914 17:57:34.113348   55203 main.go:141] libmachine: (kubernetes-upgrade-470019) DBG |       <range start='192.168.72.2' end='192.168.72.253'/>
	I0914 17:57:34.113374   55203 main.go:141] libmachine: (kubernetes-upgrade-470019) DBG |     </dhcp>
	I0914 17:57:34.113383   55203 main.go:141] libmachine: (kubernetes-upgrade-470019) DBG |   </ip>
	I0914 17:57:34.113392   55203 main.go:141] libmachine: (kubernetes-upgrade-470019) DBG |   
	I0914 17:57:34.113399   55203 main.go:141] libmachine: (kubernetes-upgrade-470019) DBG | </network>
	I0914 17:57:34.113405   55203 main.go:141] libmachine: (kubernetes-upgrade-470019) DBG | 
	I0914 17:57:34.118495   55203 main.go:141] libmachine: (kubernetes-upgrade-470019) DBG | trying to create private KVM network mk-kubernetes-upgrade-470019 192.168.72.0/24...
	I0914 17:57:34.193142   55203 main.go:141] libmachine: (kubernetes-upgrade-470019) DBG | private KVM network mk-kubernetes-upgrade-470019 192.168.72.0/24 created
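The kvm2 driver creates this private network through the libvirt API. As a rough, hedged equivalent only, the sketch below shells out to virsh from Go, assuming the XML printed above has been saved to a hypothetical net.xml; this is not how the driver does it internally:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // defineAndStartNet registers a persistent libvirt network from an XML
    // definition file and brings it up. Sketch only; the kvm2 driver uses the
    // libvirt bindings rather than the virsh CLI.
    func defineAndStartNet(xmlPath, name string) error {
    	for _, args := range [][]string{
    		{"net-define", xmlPath}, // register the network from net.xml
    		{"net-start", name},     // activate it (creates the bridge)
    		{"net-autostart", name}, // start it automatically with libvirtd
    	} {
    		out, err := exec.Command("virsh", args...).CombinedOutput()
    		if err != nil {
    			return fmt.Errorf("virsh %v: %v: %s", args, err, out)
    		}
    	}
    	return nil
    }

    func main() {
    	// Hypothetical file name; the network name matches the log above.
    	if err := defineAndStartNet("net.xml", "mk-kubernetes-upgrade-470019"); err != nil {
    		panic(err)
    	}
    }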
	I0914 17:57:34.193189   55203 main.go:141] libmachine: (kubernetes-upgrade-470019) DBG | I0914 17:57:34.193127   55482 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19643-8806/.minikube
	I0914 17:57:34.193211   55203 main.go:141] libmachine: (kubernetes-upgrade-470019) Setting up store path in /home/jenkins/minikube-integration/19643-8806/.minikube/machines/kubernetes-upgrade-470019 ...
	I0914 17:57:34.193236   55203 main.go:141] libmachine: (kubernetes-upgrade-470019) Building disk image from file:///home/jenkins/minikube-integration/19643-8806/.minikube/cache/iso/amd64/minikube-v1.34.0-1726281733-19643-amd64.iso
	I0914 17:57:34.193329   55203 main.go:141] libmachine: (kubernetes-upgrade-470019) Downloading /home/jenkins/minikube-integration/19643-8806/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19643-8806/.minikube/cache/iso/amd64/minikube-v1.34.0-1726281733-19643-amd64.iso...
	I0914 17:57:34.455288   55203 main.go:141] libmachine: (kubernetes-upgrade-470019) DBG | I0914 17:57:34.455152   55482 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19643-8806/.minikube/machines/kubernetes-upgrade-470019/id_rsa...
	I0914 17:57:34.532495   55203 main.go:141] libmachine: (kubernetes-upgrade-470019) DBG | I0914 17:57:34.532368   55482 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19643-8806/.minikube/machines/kubernetes-upgrade-470019/kubernetes-upgrade-470019.rawdisk...
	I0914 17:57:34.532526   55203 main.go:141] libmachine: (kubernetes-upgrade-470019) DBG | Writing magic tar header
	I0914 17:57:34.532544   55203 main.go:141] libmachine: (kubernetes-upgrade-470019) DBG | Writing SSH key tar header
	I0914 17:57:34.532558   55203 main.go:141] libmachine: (kubernetes-upgrade-470019) DBG | I0914 17:57:34.532480   55482 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19643-8806/.minikube/machines/kubernetes-upgrade-470019 ...
	I0914 17:57:34.532605   55203 main.go:141] libmachine: (kubernetes-upgrade-470019) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19643-8806/.minikube/machines/kubernetes-upgrade-470019
	I0914 17:57:34.532619   55203 main.go:141] libmachine: (kubernetes-upgrade-470019) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19643-8806/.minikube/machines
	I0914 17:57:34.532641   55203 main.go:141] libmachine: (kubernetes-upgrade-470019) Setting executable bit set on /home/jenkins/minikube-integration/19643-8806/.minikube/machines/kubernetes-upgrade-470019 (perms=drwx------)
	I0914 17:57:34.532654   55203 main.go:141] libmachine: (kubernetes-upgrade-470019) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19643-8806/.minikube
	I0914 17:57:34.532668   55203 main.go:141] libmachine: (kubernetes-upgrade-470019) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19643-8806
	I0914 17:57:34.532676   55203 main.go:141] libmachine: (kubernetes-upgrade-470019) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0914 17:57:34.532687   55203 main.go:141] libmachine: (kubernetes-upgrade-470019) DBG | Checking permissions on dir: /home/jenkins
	I0914 17:57:34.532701   55203 main.go:141] libmachine: (kubernetes-upgrade-470019) Setting executable bit set on /home/jenkins/minikube-integration/19643-8806/.minikube/machines (perms=drwxr-xr-x)
	I0914 17:57:34.532712   55203 main.go:141] libmachine: (kubernetes-upgrade-470019) DBG | Checking permissions on dir: /home
	I0914 17:57:34.532725   55203 main.go:141] libmachine: (kubernetes-upgrade-470019) Setting executable bit set on /home/jenkins/minikube-integration/19643-8806/.minikube (perms=drwxr-xr-x)
	I0914 17:57:34.532740   55203 main.go:141] libmachine: (kubernetes-upgrade-470019) Setting executable bit set on /home/jenkins/minikube-integration/19643-8806 (perms=drwxrwxr-x)
	I0914 17:57:34.532752   55203 main.go:141] libmachine: (kubernetes-upgrade-470019) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0914 17:57:34.532762   55203 main.go:141] libmachine: (kubernetes-upgrade-470019) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0914 17:57:34.532772   55203 main.go:141] libmachine: (kubernetes-upgrade-470019) Creating domain...
	I0914 17:57:34.532781   55203 main.go:141] libmachine: (kubernetes-upgrade-470019) DBG | Skipping /home - not owner
	I0914 17:57:34.533747   55203 main.go:141] libmachine: (kubernetes-upgrade-470019) define libvirt domain using xml: 
	I0914 17:57:34.533772   55203 main.go:141] libmachine: (kubernetes-upgrade-470019) <domain type='kvm'>
	I0914 17:57:34.533801   55203 main.go:141] libmachine: (kubernetes-upgrade-470019)   <name>kubernetes-upgrade-470019</name>
	I0914 17:57:34.533814   55203 main.go:141] libmachine: (kubernetes-upgrade-470019)   <memory unit='MiB'>2200</memory>
	I0914 17:57:34.533822   55203 main.go:141] libmachine: (kubernetes-upgrade-470019)   <vcpu>2</vcpu>
	I0914 17:57:34.533833   55203 main.go:141] libmachine: (kubernetes-upgrade-470019)   <features>
	I0914 17:57:34.533862   55203 main.go:141] libmachine: (kubernetes-upgrade-470019)     <acpi/>
	I0914 17:57:34.533870   55203 main.go:141] libmachine: (kubernetes-upgrade-470019)     <apic/>
	I0914 17:57:34.533875   55203 main.go:141] libmachine: (kubernetes-upgrade-470019)     <pae/>
	I0914 17:57:34.533882   55203 main.go:141] libmachine: (kubernetes-upgrade-470019)     
	I0914 17:57:34.533888   55203 main.go:141] libmachine: (kubernetes-upgrade-470019)   </features>
	I0914 17:57:34.533894   55203 main.go:141] libmachine: (kubernetes-upgrade-470019)   <cpu mode='host-passthrough'>
	I0914 17:57:34.533899   55203 main.go:141] libmachine: (kubernetes-upgrade-470019)   
	I0914 17:57:34.533906   55203 main.go:141] libmachine: (kubernetes-upgrade-470019)   </cpu>
	I0914 17:57:34.533934   55203 main.go:141] libmachine: (kubernetes-upgrade-470019)   <os>
	I0914 17:57:34.533961   55203 main.go:141] libmachine: (kubernetes-upgrade-470019)     <type>hvm</type>
	I0914 17:57:34.533976   55203 main.go:141] libmachine: (kubernetes-upgrade-470019)     <boot dev='cdrom'/>
	I0914 17:57:34.533986   55203 main.go:141] libmachine: (kubernetes-upgrade-470019)     <boot dev='hd'/>
	I0914 17:57:34.534010   55203 main.go:141] libmachine: (kubernetes-upgrade-470019)     <bootmenu enable='no'/>
	I0914 17:57:34.534034   55203 main.go:141] libmachine: (kubernetes-upgrade-470019)   </os>
	I0914 17:57:34.534044   55203 main.go:141] libmachine: (kubernetes-upgrade-470019)   <devices>
	I0914 17:57:34.534052   55203 main.go:141] libmachine: (kubernetes-upgrade-470019)     <disk type='file' device='cdrom'>
	I0914 17:57:34.534063   55203 main.go:141] libmachine: (kubernetes-upgrade-470019)       <source file='/home/jenkins/minikube-integration/19643-8806/.minikube/machines/kubernetes-upgrade-470019/boot2docker.iso'/>
	I0914 17:57:34.534068   55203 main.go:141] libmachine: (kubernetes-upgrade-470019)       <target dev='hdc' bus='scsi'/>
	I0914 17:57:34.534073   55203 main.go:141] libmachine: (kubernetes-upgrade-470019)       <readonly/>
	I0914 17:57:34.534077   55203 main.go:141] libmachine: (kubernetes-upgrade-470019)     </disk>
	I0914 17:57:34.534083   55203 main.go:141] libmachine: (kubernetes-upgrade-470019)     <disk type='file' device='disk'>
	I0914 17:57:34.534092   55203 main.go:141] libmachine: (kubernetes-upgrade-470019)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0914 17:57:34.534109   55203 main.go:141] libmachine: (kubernetes-upgrade-470019)       <source file='/home/jenkins/minikube-integration/19643-8806/.minikube/machines/kubernetes-upgrade-470019/kubernetes-upgrade-470019.rawdisk'/>
	I0914 17:57:34.534121   55203 main.go:141] libmachine: (kubernetes-upgrade-470019)       <target dev='hda' bus='virtio'/>
	I0914 17:57:34.534129   55203 main.go:141] libmachine: (kubernetes-upgrade-470019)     </disk>
	I0914 17:57:34.534137   55203 main.go:141] libmachine: (kubernetes-upgrade-470019)     <interface type='network'>
	I0914 17:57:34.534146   55203 main.go:141] libmachine: (kubernetes-upgrade-470019)       <source network='mk-kubernetes-upgrade-470019'/>
	I0914 17:57:34.534153   55203 main.go:141] libmachine: (kubernetes-upgrade-470019)       <model type='virtio'/>
	I0914 17:57:34.534183   55203 main.go:141] libmachine: (kubernetes-upgrade-470019)     </interface>
	I0914 17:57:34.534194   55203 main.go:141] libmachine: (kubernetes-upgrade-470019)     <interface type='network'>
	I0914 17:57:34.534204   55203 main.go:141] libmachine: (kubernetes-upgrade-470019)       <source network='default'/>
	I0914 17:57:34.534211   55203 main.go:141] libmachine: (kubernetes-upgrade-470019)       <model type='virtio'/>
	I0914 17:57:34.534219   55203 main.go:141] libmachine: (kubernetes-upgrade-470019)     </interface>
	I0914 17:57:34.534226   55203 main.go:141] libmachine: (kubernetes-upgrade-470019)     <serial type='pty'>
	I0914 17:57:34.534235   55203 main.go:141] libmachine: (kubernetes-upgrade-470019)       <target port='0'/>
	I0914 17:57:34.534244   55203 main.go:141] libmachine: (kubernetes-upgrade-470019)     </serial>
	I0914 17:57:34.534255   55203 main.go:141] libmachine: (kubernetes-upgrade-470019)     <console type='pty'>
	I0914 17:57:34.534263   55203 main.go:141] libmachine: (kubernetes-upgrade-470019)       <target type='serial' port='0'/>
	I0914 17:57:34.534280   55203 main.go:141] libmachine: (kubernetes-upgrade-470019)     </console>
	I0914 17:57:34.534307   55203 main.go:141] libmachine: (kubernetes-upgrade-470019)     <rng model='virtio'>
	I0914 17:57:34.534321   55203 main.go:141] libmachine: (kubernetes-upgrade-470019)       <backend model='random'>/dev/random</backend>
	I0914 17:57:34.534335   55203 main.go:141] libmachine: (kubernetes-upgrade-470019)     </rng>
	I0914 17:57:34.534346   55203 main.go:141] libmachine: (kubernetes-upgrade-470019)     
	I0914 17:57:34.534356   55203 main.go:141] libmachine: (kubernetes-upgrade-470019)     
	I0914 17:57:34.534376   55203 main.go:141] libmachine: (kubernetes-upgrade-470019)   </devices>
	I0914 17:57:34.534385   55203 main.go:141] libmachine: (kubernetes-upgrade-470019) </domain>
	I0914 17:57:34.534396   55203 main.go:141] libmachine: (kubernetes-upgrade-470019) 
	I0914 17:57:34.542785   55203 main.go:141] libmachine: (kubernetes-upgrade-470019) DBG | domain kubernetes-upgrade-470019 has defined MAC address 52:54:00:df:6a:b6 in network default
	I0914 17:57:34.543483   55203 main.go:141] libmachine: (kubernetes-upgrade-470019) Ensuring networks are active...
	I0914 17:57:34.543508   55203 main.go:141] libmachine: (kubernetes-upgrade-470019) DBG | domain kubernetes-upgrade-470019 has defined MAC address 52:54:00:5c:27:09 in network mk-kubernetes-upgrade-470019
	I0914 17:57:34.544417   55203 main.go:141] libmachine: (kubernetes-upgrade-470019) Ensuring network default is active
	I0914 17:57:34.544854   55203 main.go:141] libmachine: (kubernetes-upgrade-470019) Ensuring network mk-kubernetes-upgrade-470019 is active
	I0914 17:57:34.545555   55203 main.go:141] libmachine: (kubernetes-upgrade-470019) Getting domain xml...
	I0914 17:57:34.546476   55203 main.go:141] libmachine: (kubernetes-upgrade-470019) Creating domain...
	I0914 17:57:36.476354   55203 main.go:141] libmachine: (kubernetes-upgrade-470019) Waiting to get IP...
	I0914 17:57:36.477254   55203 main.go:141] libmachine: (kubernetes-upgrade-470019) DBG | domain kubernetes-upgrade-470019 has defined MAC address 52:54:00:5c:27:09 in network mk-kubernetes-upgrade-470019
	I0914 17:57:36.477707   55203 main.go:141] libmachine: (kubernetes-upgrade-470019) DBG | unable to find current IP address of domain kubernetes-upgrade-470019 in network mk-kubernetes-upgrade-470019
	I0914 17:57:36.477761   55203 main.go:141] libmachine: (kubernetes-upgrade-470019) DBG | I0914 17:57:36.477695   55482 retry.go:31] will retry after 257.538442ms: waiting for machine to come up
	I0914 17:57:36.737322   55203 main.go:141] libmachine: (kubernetes-upgrade-470019) DBG | domain kubernetes-upgrade-470019 has defined MAC address 52:54:00:5c:27:09 in network mk-kubernetes-upgrade-470019
	I0914 17:57:36.780044   55203 main.go:141] libmachine: (kubernetes-upgrade-470019) DBG | unable to find current IP address of domain kubernetes-upgrade-470019 in network mk-kubernetes-upgrade-470019
	I0914 17:57:36.780071   55203 main.go:141] libmachine: (kubernetes-upgrade-470019) DBG | I0914 17:57:36.779939   55482 retry.go:31] will retry after 352.670011ms: waiting for machine to come up
	I0914 17:57:37.380596   55203 main.go:141] libmachine: (kubernetes-upgrade-470019) DBG | domain kubernetes-upgrade-470019 has defined MAC address 52:54:00:5c:27:09 in network mk-kubernetes-upgrade-470019
	I0914 17:57:37.381670   55203 main.go:141] libmachine: (kubernetes-upgrade-470019) DBG | unable to find current IP address of domain kubernetes-upgrade-470019 in network mk-kubernetes-upgrade-470019
	I0914 17:57:37.381708   55203 main.go:141] libmachine: (kubernetes-upgrade-470019) DBG | I0914 17:57:37.381633   55482 retry.go:31] will retry after 341.213029ms: waiting for machine to come up
	I0914 17:57:37.724177   55203 main.go:141] libmachine: (kubernetes-upgrade-470019) DBG | domain kubernetes-upgrade-470019 has defined MAC address 52:54:00:5c:27:09 in network mk-kubernetes-upgrade-470019
	I0914 17:57:37.724679   55203 main.go:141] libmachine: (kubernetes-upgrade-470019) DBG | unable to find current IP address of domain kubernetes-upgrade-470019 in network mk-kubernetes-upgrade-470019
	I0914 17:57:37.724705   55203 main.go:141] libmachine: (kubernetes-upgrade-470019) DBG | I0914 17:57:37.724648   55482 retry.go:31] will retry after 544.395684ms: waiting for machine to come up
	I0914 17:57:38.270447   55203 main.go:141] libmachine: (kubernetes-upgrade-470019) DBG | domain kubernetes-upgrade-470019 has defined MAC address 52:54:00:5c:27:09 in network mk-kubernetes-upgrade-470019
	I0914 17:57:38.270962   55203 main.go:141] libmachine: (kubernetes-upgrade-470019) DBG | unable to find current IP address of domain kubernetes-upgrade-470019 in network mk-kubernetes-upgrade-470019
	I0914 17:57:38.270987   55203 main.go:141] libmachine: (kubernetes-upgrade-470019) DBG | I0914 17:57:38.270913   55482 retry.go:31] will retry after 585.864169ms: waiting for machine to come up
	I0914 17:57:38.858603   55203 main.go:141] libmachine: (kubernetes-upgrade-470019) DBG | domain kubernetes-upgrade-470019 has defined MAC address 52:54:00:5c:27:09 in network mk-kubernetes-upgrade-470019
	I0914 17:57:38.859094   55203 main.go:141] libmachine: (kubernetes-upgrade-470019) DBG | unable to find current IP address of domain kubernetes-upgrade-470019 in network mk-kubernetes-upgrade-470019
	I0914 17:57:38.859127   55203 main.go:141] libmachine: (kubernetes-upgrade-470019) DBG | I0914 17:57:38.859036   55482 retry.go:31] will retry after 669.82973ms: waiting for machine to come up
	I0914 17:57:39.530457   55203 main.go:141] libmachine: (kubernetes-upgrade-470019) DBG | domain kubernetes-upgrade-470019 has defined MAC address 52:54:00:5c:27:09 in network mk-kubernetes-upgrade-470019
	I0914 17:57:39.530890   55203 main.go:141] libmachine: (kubernetes-upgrade-470019) DBG | unable to find current IP address of domain kubernetes-upgrade-470019 in network mk-kubernetes-upgrade-470019
	I0914 17:57:39.530920   55203 main.go:141] libmachine: (kubernetes-upgrade-470019) DBG | I0914 17:57:39.530838   55482 retry.go:31] will retry after 746.255656ms: waiting for machine to come up
	I0914 17:57:40.278873   55203 main.go:141] libmachine: (kubernetes-upgrade-470019) DBG | domain kubernetes-upgrade-470019 has defined MAC address 52:54:00:5c:27:09 in network mk-kubernetes-upgrade-470019
	I0914 17:57:40.279417   55203 main.go:141] libmachine: (kubernetes-upgrade-470019) DBG | unable to find current IP address of domain kubernetes-upgrade-470019 in network mk-kubernetes-upgrade-470019
	I0914 17:57:40.279445   55203 main.go:141] libmachine: (kubernetes-upgrade-470019) DBG | I0914 17:57:40.279375   55482 retry.go:31] will retry after 1.23218056s: waiting for machine to come up
	I0914 17:57:41.513610   55203 main.go:141] libmachine: (kubernetes-upgrade-470019) DBG | domain kubernetes-upgrade-470019 has defined MAC address 52:54:00:5c:27:09 in network mk-kubernetes-upgrade-470019
	I0914 17:57:41.514046   55203 main.go:141] libmachine: (kubernetes-upgrade-470019) DBG | unable to find current IP address of domain kubernetes-upgrade-470019 in network mk-kubernetes-upgrade-470019
	I0914 17:57:41.514070   55203 main.go:141] libmachine: (kubernetes-upgrade-470019) DBG | I0914 17:57:41.514016   55482 retry.go:31] will retry after 1.720700035s: waiting for machine to come up
	I0914 17:57:43.235804   55203 main.go:141] libmachine: (kubernetes-upgrade-470019) DBG | domain kubernetes-upgrade-470019 has defined MAC address 52:54:00:5c:27:09 in network mk-kubernetes-upgrade-470019
	I0914 17:57:43.236265   55203 main.go:141] libmachine: (kubernetes-upgrade-470019) DBG | unable to find current IP address of domain kubernetes-upgrade-470019 in network mk-kubernetes-upgrade-470019
	I0914 17:57:43.236292   55203 main.go:141] libmachine: (kubernetes-upgrade-470019) DBG | I0914 17:57:43.236193   55482 retry.go:31] will retry after 1.740459523s: waiting for machine to come up
	I0914 17:57:44.978643   55203 main.go:141] libmachine: (kubernetes-upgrade-470019) DBG | domain kubernetes-upgrade-470019 has defined MAC address 52:54:00:5c:27:09 in network mk-kubernetes-upgrade-470019
	I0914 17:57:44.979199   55203 main.go:141] libmachine: (kubernetes-upgrade-470019) DBG | unable to find current IP address of domain kubernetes-upgrade-470019 in network mk-kubernetes-upgrade-470019
	I0914 17:57:44.979229   55203 main.go:141] libmachine: (kubernetes-upgrade-470019) DBG | I0914 17:57:44.979166   55482 retry.go:31] will retry after 2.711448528s: waiting for machine to come up
	I0914 17:57:47.692201   55203 main.go:141] libmachine: (kubernetes-upgrade-470019) DBG | domain kubernetes-upgrade-470019 has defined MAC address 52:54:00:5c:27:09 in network mk-kubernetes-upgrade-470019
	I0914 17:57:47.692842   55203 main.go:141] libmachine: (kubernetes-upgrade-470019) DBG | unable to find current IP address of domain kubernetes-upgrade-470019 in network mk-kubernetes-upgrade-470019
	I0914 17:57:47.692863   55203 main.go:141] libmachine: (kubernetes-upgrade-470019) DBG | I0914 17:57:47.692783   55482 retry.go:31] will retry after 3.399434933s: waiting for machine to come up
	I0914 17:57:51.094696   55203 main.go:141] libmachine: (kubernetes-upgrade-470019) DBG | domain kubernetes-upgrade-470019 has defined MAC address 52:54:00:5c:27:09 in network mk-kubernetes-upgrade-470019
	I0914 17:57:51.095096   55203 main.go:141] libmachine: (kubernetes-upgrade-470019) DBG | unable to find current IP address of domain kubernetes-upgrade-470019 in network mk-kubernetes-upgrade-470019
	I0914 17:57:51.095116   55203 main.go:141] libmachine: (kubernetes-upgrade-470019) DBG | I0914 17:57:51.095063   55482 retry.go:31] will retry after 4.204084773s: waiting for machine to come up
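Each attempt above waits a growing, jittered interval before re-querying the DHCP leases for the new domain. A self-contained Go sketch of that wait-for-IP pattern follows; lookupIP is a hypothetical stand-in for the libvirt lease query, and the backoff constants are illustrative, not minikube's retry.go values:

    package main

    import (
    	"errors"
    	"fmt"
    	"math/rand"
    	"time"
    )

    var errNoIP = errors.New("no lease yet")

    // lookupIP is a hypothetical stand-in for querying the libvirt DHCP leases.
    func lookupIP() (string, error) { return "", errNoIP }

    // waitForIP retries lookupIP with a growing, jittered backoff until it
    // succeeds or the deadline passes, mirroring the retry lines in the log.
    func waitForIP(timeout time.Duration) (string, error) {
    	deadline := time.Now().Add(timeout)
    	backoff := 250 * time.Millisecond
    	for time.Now().Before(deadline) {
    		if ip, err := lookupIP(); err == nil {
    			return ip, nil
    		}
    		sleep := backoff + time.Duration(rand.Int63n(int64(backoff)/2))
    		fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
    		time.Sleep(sleep)
    		if backoff < 4*time.Second {
    			backoff *= 2 // grow the base interval, as in the log
    		}
    	}
    	return "", fmt.Errorf("machine did not get an IP within %v", timeout)
    }

    func main() {
    	if _, err := waitForIP(3 * time.Second); err != nil {
    		fmt.Println(err)
    	}
    }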
	I0914 17:57:55.300065   55203 main.go:141] libmachine: (kubernetes-upgrade-470019) DBG | domain kubernetes-upgrade-470019 has defined MAC address 52:54:00:5c:27:09 in network mk-kubernetes-upgrade-470019
	I0914 17:57:55.300534   55203 main.go:141] libmachine: (kubernetes-upgrade-470019) DBG | domain kubernetes-upgrade-470019 has current primary IP address 192.168.72.202 and MAC address 52:54:00:5c:27:09 in network mk-kubernetes-upgrade-470019
	I0914 17:57:55.300550   55203 main.go:141] libmachine: (kubernetes-upgrade-470019) Found IP for machine: 192.168.72.202
	I0914 17:57:55.300563   55203 main.go:141] libmachine: (kubernetes-upgrade-470019) Reserving static IP address...
	I0914 17:57:55.300881   55203 main.go:141] libmachine: (kubernetes-upgrade-470019) DBG | unable to find host DHCP lease matching {name: "kubernetes-upgrade-470019", mac: "52:54:00:5c:27:09", ip: "192.168.72.202"} in network mk-kubernetes-upgrade-470019
	I0914 17:57:55.382222   55203 main.go:141] libmachine: (kubernetes-upgrade-470019) DBG | Getting to WaitForSSH function...
	I0914 17:57:55.382259   55203 main.go:141] libmachine: (kubernetes-upgrade-470019) Reserved static IP address: 192.168.72.202
	I0914 17:57:55.382272   55203 main.go:141] libmachine: (kubernetes-upgrade-470019) Waiting for SSH to be available...
	I0914 17:57:55.384852   55203 main.go:141] libmachine: (kubernetes-upgrade-470019) DBG | domain kubernetes-upgrade-470019 has defined MAC address 52:54:00:5c:27:09 in network mk-kubernetes-upgrade-470019
	I0914 17:57:55.385222   55203 main.go:141] libmachine: (kubernetes-upgrade-470019) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:27:09", ip: ""} in network mk-kubernetes-upgrade-470019: {Iface:virbr4 ExpiryTime:2024-09-14 18:57:49 +0000 UTC Type:0 Mac:52:54:00:5c:27:09 Iaid: IPaddr:192.168.72.202 Prefix:24 Hostname:minikube Clientid:01:52:54:00:5c:27:09}
	I0914 17:57:55.385248   55203 main.go:141] libmachine: (kubernetes-upgrade-470019) DBG | domain kubernetes-upgrade-470019 has defined IP address 192.168.72.202 and MAC address 52:54:00:5c:27:09 in network mk-kubernetes-upgrade-470019
	I0914 17:57:55.385359   55203 main.go:141] libmachine: (kubernetes-upgrade-470019) DBG | Using SSH client type: external
	I0914 17:57:55.385383   55203 main.go:141] libmachine: (kubernetes-upgrade-470019) DBG | Using SSH private key: /home/jenkins/minikube-integration/19643-8806/.minikube/machines/kubernetes-upgrade-470019/id_rsa (-rw-------)
	I0914 17:57:55.385411   55203 main.go:141] libmachine: (kubernetes-upgrade-470019) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.202 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19643-8806/.minikube/machines/kubernetes-upgrade-470019/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0914 17:57:55.385425   55203 main.go:141] libmachine: (kubernetes-upgrade-470019) DBG | About to run SSH command:
	I0914 17:57:55.385439   55203 main.go:141] libmachine: (kubernetes-upgrade-470019) DBG | exit 0
	I0914 17:57:55.514134   55203 main.go:141] libmachine: (kubernetes-upgrade-470019) DBG | SSH cmd err, output: <nil>: 
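The "external" SSH client logged above simply execs the system ssh binary with a non-interactive option set and the machine's freshly generated private key, using "exit 0" as a liveness probe. A hedged Go sketch of assembling such an invocation with os/exec (the option list is abbreviated and the key path is hypothetical, not the driver's exact argument vector):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // runOverSSH runs a single command on the guest via the system ssh binary,
    // roughly mirroring the external-client invocation in the log.
    func runOverSSH(ip, keyPath, command string) (string, error) {
    	args := []string{
    		"-o", "StrictHostKeyChecking=no",     // guest host key is freshly generated
    		"-o", "UserKnownHostsFile=/dev/null", // don't pollute the local known_hosts
    		"-o", "ConnectTimeout=10",
    		"-o", "IdentitiesOnly=yes",
    		"-i", keyPath, // machine-specific private key
    		"docker@" + ip,
    		command,
    	}
    	out, err := exec.Command("ssh", args...).CombinedOutput()
    	return string(out), err
    }

    func main() {
    	// Hypothetical key path; IP and probe command match the log above.
    	out, err := runOverSSH("192.168.72.202", "/tmp/id_rsa", "exit 0")
    	fmt.Printf("err=%v output=%q\n", err, out)
    }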
	I0914 17:57:55.514435   55203 main.go:141] libmachine: (kubernetes-upgrade-470019) KVM machine creation complete!
	I0914 17:57:55.514776   55203 main.go:141] libmachine: (kubernetes-upgrade-470019) Calling .GetConfigRaw
	I0914 17:57:55.515350   55203 main.go:141] libmachine: (kubernetes-upgrade-470019) Calling .DriverName
	I0914 17:57:55.515560   55203 main.go:141] libmachine: (kubernetes-upgrade-470019) Calling .DriverName
	I0914 17:57:55.515726   55203 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0914 17:57:55.515740   55203 main.go:141] libmachine: (kubernetes-upgrade-470019) Calling .GetState
	I0914 17:57:55.517136   55203 main.go:141] libmachine: Detecting operating system of created instance...
	I0914 17:57:55.517149   55203 main.go:141] libmachine: Waiting for SSH to be available...
	I0914 17:57:55.517153   55203 main.go:141] libmachine: Getting to WaitForSSH function...
	I0914 17:57:55.517159   55203 main.go:141] libmachine: (kubernetes-upgrade-470019) Calling .GetSSHHostname
	I0914 17:57:55.519485   55203 main.go:141] libmachine: (kubernetes-upgrade-470019) DBG | domain kubernetes-upgrade-470019 has defined MAC address 52:54:00:5c:27:09 in network mk-kubernetes-upgrade-470019
	I0914 17:57:55.519889   55203 main.go:141] libmachine: (kubernetes-upgrade-470019) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:27:09", ip: ""} in network mk-kubernetes-upgrade-470019: {Iface:virbr4 ExpiryTime:2024-09-14 18:57:49 +0000 UTC Type:0 Mac:52:54:00:5c:27:09 Iaid: IPaddr:192.168.72.202 Prefix:24 Hostname:kubernetes-upgrade-470019 Clientid:01:52:54:00:5c:27:09}
	I0914 17:57:55.519919   55203 main.go:141] libmachine: (kubernetes-upgrade-470019) DBG | domain kubernetes-upgrade-470019 has defined IP address 192.168.72.202 and MAC address 52:54:00:5c:27:09 in network mk-kubernetes-upgrade-470019
	I0914 17:57:55.520061   55203 main.go:141] libmachine: (kubernetes-upgrade-470019) Calling .GetSSHPort
	I0914 17:57:55.520232   55203 main.go:141] libmachine: (kubernetes-upgrade-470019) Calling .GetSSHKeyPath
	I0914 17:57:55.520385   55203 main.go:141] libmachine: (kubernetes-upgrade-470019) Calling .GetSSHKeyPath
	I0914 17:57:55.520486   55203 main.go:141] libmachine: (kubernetes-upgrade-470019) Calling .GetSSHUsername
	I0914 17:57:55.520652   55203 main.go:141] libmachine: Using SSH client type: native
	I0914 17:57:55.520897   55203 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.72.202 22 <nil> <nil>}
	I0914 17:57:55.520912   55203 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0914 17:57:55.625752   55203 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0914 17:57:55.625782   55203 main.go:141] libmachine: Detecting the provisioner...
	I0914 17:57:55.625794   55203 main.go:141] libmachine: (kubernetes-upgrade-470019) Calling .GetSSHHostname
	I0914 17:57:55.628477   55203 main.go:141] libmachine: (kubernetes-upgrade-470019) DBG | domain kubernetes-upgrade-470019 has defined MAC address 52:54:00:5c:27:09 in network mk-kubernetes-upgrade-470019
	I0914 17:57:55.628882   55203 main.go:141] libmachine: (kubernetes-upgrade-470019) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:27:09", ip: ""} in network mk-kubernetes-upgrade-470019: {Iface:virbr4 ExpiryTime:2024-09-14 18:57:49 +0000 UTC Type:0 Mac:52:54:00:5c:27:09 Iaid: IPaddr:192.168.72.202 Prefix:24 Hostname:kubernetes-upgrade-470019 Clientid:01:52:54:00:5c:27:09}
	I0914 17:57:55.628916   55203 main.go:141] libmachine: (kubernetes-upgrade-470019) DBG | domain kubernetes-upgrade-470019 has defined IP address 192.168.72.202 and MAC address 52:54:00:5c:27:09 in network mk-kubernetes-upgrade-470019
	I0914 17:57:55.629034   55203 main.go:141] libmachine: (kubernetes-upgrade-470019) Calling .GetSSHPort
	I0914 17:57:55.629224   55203 main.go:141] libmachine: (kubernetes-upgrade-470019) Calling .GetSSHKeyPath
	I0914 17:57:55.629374   55203 main.go:141] libmachine: (kubernetes-upgrade-470019) Calling .GetSSHKeyPath
	I0914 17:57:55.629514   55203 main.go:141] libmachine: (kubernetes-upgrade-470019) Calling .GetSSHUsername
	I0914 17:57:55.629705   55203 main.go:141] libmachine: Using SSH client type: native
	I0914 17:57:55.629918   55203 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.72.202 22 <nil> <nil>}
	I0914 17:57:55.629938   55203 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0914 17:57:55.734994   55203 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0914 17:57:55.735047   55203 main.go:141] libmachine: found compatible host: buildroot
	I0914 17:57:55.735060   55203 main.go:141] libmachine: Provisioning with buildroot...
	I0914 17:57:55.735071   55203 main.go:141] libmachine: (kubernetes-upgrade-470019) Calling .GetMachineName
	I0914 17:57:55.735373   55203 buildroot.go:166] provisioning hostname "kubernetes-upgrade-470019"
	I0914 17:57:55.735408   55203 main.go:141] libmachine: (kubernetes-upgrade-470019) Calling .GetMachineName
	I0914 17:57:55.735683   55203 main.go:141] libmachine: (kubernetes-upgrade-470019) Calling .GetSSHHostname
	I0914 17:57:55.738684   55203 main.go:141] libmachine: (kubernetes-upgrade-470019) DBG | domain kubernetes-upgrade-470019 has defined MAC address 52:54:00:5c:27:09 in network mk-kubernetes-upgrade-470019
	I0914 17:57:55.739163   55203 main.go:141] libmachine: (kubernetes-upgrade-470019) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:27:09", ip: ""} in network mk-kubernetes-upgrade-470019: {Iface:virbr4 ExpiryTime:2024-09-14 18:57:49 +0000 UTC Type:0 Mac:52:54:00:5c:27:09 Iaid: IPaddr:192.168.72.202 Prefix:24 Hostname:kubernetes-upgrade-470019 Clientid:01:52:54:00:5c:27:09}
	I0914 17:57:55.739213   55203 main.go:141] libmachine: (kubernetes-upgrade-470019) DBG | domain kubernetes-upgrade-470019 has defined IP address 192.168.72.202 and MAC address 52:54:00:5c:27:09 in network mk-kubernetes-upgrade-470019
	I0914 17:57:55.739412   55203 main.go:141] libmachine: (kubernetes-upgrade-470019) Calling .GetSSHPort
	I0914 17:57:55.739648   55203 main.go:141] libmachine: (kubernetes-upgrade-470019) Calling .GetSSHKeyPath
	I0914 17:57:55.739816   55203 main.go:141] libmachine: (kubernetes-upgrade-470019) Calling .GetSSHKeyPath
	I0914 17:57:55.739990   55203 main.go:141] libmachine: (kubernetes-upgrade-470019) Calling .GetSSHUsername
	I0914 17:57:55.740121   55203 main.go:141] libmachine: Using SSH client type: native
	I0914 17:57:55.740298   55203 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.72.202 22 <nil> <nil>}
	I0914 17:57:55.740310   55203 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-470019 && echo "kubernetes-upgrade-470019" | sudo tee /etc/hostname
	I0914 17:57:55.857212   55203 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-470019
	
	I0914 17:57:55.857249   55203 main.go:141] libmachine: (kubernetes-upgrade-470019) Calling .GetSSHHostname
	I0914 17:57:55.860580   55203 main.go:141] libmachine: (kubernetes-upgrade-470019) DBG | domain kubernetes-upgrade-470019 has defined MAC address 52:54:00:5c:27:09 in network mk-kubernetes-upgrade-470019
	I0914 17:57:55.860947   55203 main.go:141] libmachine: (kubernetes-upgrade-470019) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:27:09", ip: ""} in network mk-kubernetes-upgrade-470019: {Iface:virbr4 ExpiryTime:2024-09-14 18:57:49 +0000 UTC Type:0 Mac:52:54:00:5c:27:09 Iaid: IPaddr:192.168.72.202 Prefix:24 Hostname:kubernetes-upgrade-470019 Clientid:01:52:54:00:5c:27:09}
	I0914 17:57:55.860987   55203 main.go:141] libmachine: (kubernetes-upgrade-470019) DBG | domain kubernetes-upgrade-470019 has defined IP address 192.168.72.202 and MAC address 52:54:00:5c:27:09 in network mk-kubernetes-upgrade-470019
	I0914 17:57:55.861244   55203 main.go:141] libmachine: (kubernetes-upgrade-470019) Calling .GetSSHPort
	I0914 17:57:55.861443   55203 main.go:141] libmachine: (kubernetes-upgrade-470019) Calling .GetSSHKeyPath
	I0914 17:57:55.861585   55203 main.go:141] libmachine: (kubernetes-upgrade-470019) Calling .GetSSHKeyPath
	I0914 17:57:55.861724   55203 main.go:141] libmachine: (kubernetes-upgrade-470019) Calling .GetSSHUsername
	I0914 17:57:55.861886   55203 main.go:141] libmachine: Using SSH client type: native
	I0914 17:57:55.862105   55203 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.72.202 22 <nil> <nil>}
	I0914 17:57:55.862130   55203 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-470019' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-470019/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-470019' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0914 17:57:55.975163   55203 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0914 17:57:55.975194   55203 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19643-8806/.minikube CaCertPath:/home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19643-8806/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19643-8806/.minikube}
	I0914 17:57:55.975215   55203 buildroot.go:174] setting up certificates
	I0914 17:57:55.975225   55203 provision.go:84] configureAuth start
	I0914 17:57:55.975234   55203 main.go:141] libmachine: (kubernetes-upgrade-470019) Calling .GetMachineName
	I0914 17:57:55.975501   55203 main.go:141] libmachine: (kubernetes-upgrade-470019) Calling .GetIP
	I0914 17:57:55.978098   55203 main.go:141] libmachine: (kubernetes-upgrade-470019) DBG | domain kubernetes-upgrade-470019 has defined MAC address 52:54:00:5c:27:09 in network mk-kubernetes-upgrade-470019
	I0914 17:57:55.978448   55203 main.go:141] libmachine: (kubernetes-upgrade-470019) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:27:09", ip: ""} in network mk-kubernetes-upgrade-470019: {Iface:virbr4 ExpiryTime:2024-09-14 18:57:49 +0000 UTC Type:0 Mac:52:54:00:5c:27:09 Iaid: IPaddr:192.168.72.202 Prefix:24 Hostname:kubernetes-upgrade-470019 Clientid:01:52:54:00:5c:27:09}
	I0914 17:57:55.978478   55203 main.go:141] libmachine: (kubernetes-upgrade-470019) DBG | domain kubernetes-upgrade-470019 has defined IP address 192.168.72.202 and MAC address 52:54:00:5c:27:09 in network mk-kubernetes-upgrade-470019
	I0914 17:57:55.978626   55203 main.go:141] libmachine: (kubernetes-upgrade-470019) Calling .GetSSHHostname
	I0914 17:57:55.980696   55203 main.go:141] libmachine: (kubernetes-upgrade-470019) DBG | domain kubernetes-upgrade-470019 has defined MAC address 52:54:00:5c:27:09 in network mk-kubernetes-upgrade-470019
	I0914 17:57:55.981039   55203 main.go:141] libmachine: (kubernetes-upgrade-470019) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:27:09", ip: ""} in network mk-kubernetes-upgrade-470019: {Iface:virbr4 ExpiryTime:2024-09-14 18:57:49 +0000 UTC Type:0 Mac:52:54:00:5c:27:09 Iaid: IPaddr:192.168.72.202 Prefix:24 Hostname:kubernetes-upgrade-470019 Clientid:01:52:54:00:5c:27:09}
	I0914 17:57:55.981066   55203 main.go:141] libmachine: (kubernetes-upgrade-470019) DBG | domain kubernetes-upgrade-470019 has defined IP address 192.168.72.202 and MAC address 52:54:00:5c:27:09 in network mk-kubernetes-upgrade-470019
	I0914 17:57:55.981206   55203 provision.go:143] copyHostCerts
	I0914 17:57:55.981257   55203 exec_runner.go:144] found /home/jenkins/minikube-integration/19643-8806/.minikube/ca.pem, removing ...
	I0914 17:57:55.981268   55203 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19643-8806/.minikube/ca.pem
	I0914 17:57:55.981325   55203 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19643-8806/.minikube/ca.pem (1082 bytes)
	I0914 17:57:55.981408   55203 exec_runner.go:144] found /home/jenkins/minikube-integration/19643-8806/.minikube/cert.pem, removing ...
	I0914 17:57:55.981416   55203 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19643-8806/.minikube/cert.pem
	I0914 17:57:55.981436   55203 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19643-8806/.minikube/cert.pem (1123 bytes)
	I0914 17:57:55.981492   55203 exec_runner.go:144] found /home/jenkins/minikube-integration/19643-8806/.minikube/key.pem, removing ...
	I0914 17:57:55.981499   55203 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19643-8806/.minikube/key.pem
	I0914 17:57:55.981518   55203 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19643-8806/.minikube/key.pem (1675 bytes)
	I0914 17:57:55.981565   55203 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19643-8806/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-470019 san=[127.0.0.1 192.168.72.202 kubernetes-upgrade-470019 localhost minikube]
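provision.go issues a server certificate whose SANs cover the loopback address, the new VM's IP, and the machine's hostnames, signed by the minikube CA. A minimal Go sketch of issuing such a certificate with crypto/x509 follows; it is not minikube's actual certificate code, the CA is generated in memory for self-containment, and key sizes and validity periods are assumptions:

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	// Throwaway CA key/cert; in practice these come from certs/ca-key.pem
    	// and certs/ca.pem. Errors are elided for brevity.
    	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
    	caTmpl := &x509.Certificate{
    		SerialNumber:          big.NewInt(1),
    		Subject:               pkix.Name{CommonName: "minikubeCA"},
    		NotBefore:             time.Now(),
    		NotAfter:              time.Now().Add(3 * 365 * 24 * time.Hour),
    		IsCA:                  true,
    		KeyUsage:              x509.KeyUsageCertSign,
    		BasicConstraintsValid: true,
    	}
    	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
    	caCert, _ := x509.ParseCertificate(caDER)

    	// Server key and certificate with the SANs listed in the log line above.
    	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
    	srvTmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(2),
    		Subject:      pkix.Name{Organization: []string{"jenkins.kubernetes-upgrade-470019"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		DNSNames:     []string{"kubernetes-upgrade-470019", "localhost", "minikube"},
    		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.72.202")},
    	}
    	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)

    	// Emit server.pem; the matching server-key.pem would be written similarly.
    	_ = pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
    }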
	I0914 17:57:56.082546   55203 provision.go:177] copyRemoteCerts
	I0914 17:57:56.082618   55203 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0914 17:57:56.082644   55203 main.go:141] libmachine: (kubernetes-upgrade-470019) Calling .GetSSHHostname
	I0914 17:57:56.085331   55203 main.go:141] libmachine: (kubernetes-upgrade-470019) DBG | domain kubernetes-upgrade-470019 has defined MAC address 52:54:00:5c:27:09 in network mk-kubernetes-upgrade-470019
	I0914 17:57:56.085746   55203 main.go:141] libmachine: (kubernetes-upgrade-470019) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:27:09", ip: ""} in network mk-kubernetes-upgrade-470019: {Iface:virbr4 ExpiryTime:2024-09-14 18:57:49 +0000 UTC Type:0 Mac:52:54:00:5c:27:09 Iaid: IPaddr:192.168.72.202 Prefix:24 Hostname:kubernetes-upgrade-470019 Clientid:01:52:54:00:5c:27:09}
	I0914 17:57:56.085773   55203 main.go:141] libmachine: (kubernetes-upgrade-470019) DBG | domain kubernetes-upgrade-470019 has defined IP address 192.168.72.202 and MAC address 52:54:00:5c:27:09 in network mk-kubernetes-upgrade-470019
	I0914 17:57:56.085982   55203 main.go:141] libmachine: (kubernetes-upgrade-470019) Calling .GetSSHPort
	I0914 17:57:56.086121   55203 main.go:141] libmachine: (kubernetes-upgrade-470019) Calling .GetSSHKeyPath
	I0914 17:57:56.086295   55203 main.go:141] libmachine: (kubernetes-upgrade-470019) Calling .GetSSHUsername
	I0914 17:57:56.086445   55203 sshutil.go:53] new ssh client: &{IP:192.168.72.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/kubernetes-upgrade-470019/id_rsa Username:docker}
	I0914 17:57:56.168598   55203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0914 17:57:56.195975   55203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0914 17:57:56.219436   55203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0914 17:57:56.242215   55203 provision.go:87] duration metric: took 266.977525ms to configureAuth
	I0914 17:57:56.242256   55203 buildroot.go:189] setting minikube options for container-runtime
	I0914 17:57:56.242441   55203 config.go:182] Loaded profile config "kubernetes-upgrade-470019": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0914 17:57:56.242521   55203 main.go:141] libmachine: (kubernetes-upgrade-470019) Calling .GetSSHHostname
	I0914 17:57:56.244967   55203 main.go:141] libmachine: (kubernetes-upgrade-470019) DBG | domain kubernetes-upgrade-470019 has defined MAC address 52:54:00:5c:27:09 in network mk-kubernetes-upgrade-470019
	I0914 17:57:56.245326   55203 main.go:141] libmachine: (kubernetes-upgrade-470019) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:27:09", ip: ""} in network mk-kubernetes-upgrade-470019: {Iface:virbr4 ExpiryTime:2024-09-14 18:57:49 +0000 UTC Type:0 Mac:52:54:00:5c:27:09 Iaid: IPaddr:192.168.72.202 Prefix:24 Hostname:kubernetes-upgrade-470019 Clientid:01:52:54:00:5c:27:09}
	I0914 17:57:56.245353   55203 main.go:141] libmachine: (kubernetes-upgrade-470019) DBG | domain kubernetes-upgrade-470019 has defined IP address 192.168.72.202 and MAC address 52:54:00:5c:27:09 in network mk-kubernetes-upgrade-470019
	I0914 17:57:56.245531   55203 main.go:141] libmachine: (kubernetes-upgrade-470019) Calling .GetSSHPort
	I0914 17:57:56.245680   55203 main.go:141] libmachine: (kubernetes-upgrade-470019) Calling .GetSSHKeyPath
	I0914 17:57:56.245844   55203 main.go:141] libmachine: (kubernetes-upgrade-470019) Calling .GetSSHKeyPath
	I0914 17:57:56.245995   55203 main.go:141] libmachine: (kubernetes-upgrade-470019) Calling .GetSSHUsername
	I0914 17:57:56.246181   55203 main.go:141] libmachine: Using SSH client type: native
	I0914 17:57:56.246363   55203 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.72.202 22 <nil> <nil>}
	I0914 17:57:56.246381   55203 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0914 17:57:56.470289   55203 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0914 17:57:56.470320   55203 main.go:141] libmachine: Checking connection to Docker...
	I0914 17:57:56.470334   55203 main.go:141] libmachine: (kubernetes-upgrade-470019) Calling .GetURL
	I0914 17:57:56.471737   55203 main.go:141] libmachine: (kubernetes-upgrade-470019) DBG | Using libvirt version 6000000
	I0914 17:57:56.474436   55203 main.go:141] libmachine: (kubernetes-upgrade-470019) DBG | domain kubernetes-upgrade-470019 has defined MAC address 52:54:00:5c:27:09 in network mk-kubernetes-upgrade-470019
	I0914 17:57:56.474836   55203 main.go:141] libmachine: (kubernetes-upgrade-470019) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:27:09", ip: ""} in network mk-kubernetes-upgrade-470019: {Iface:virbr4 ExpiryTime:2024-09-14 18:57:49 +0000 UTC Type:0 Mac:52:54:00:5c:27:09 Iaid: IPaddr:192.168.72.202 Prefix:24 Hostname:kubernetes-upgrade-470019 Clientid:01:52:54:00:5c:27:09}
	I0914 17:57:56.474871   55203 main.go:141] libmachine: (kubernetes-upgrade-470019) DBG | domain kubernetes-upgrade-470019 has defined IP address 192.168.72.202 and MAC address 52:54:00:5c:27:09 in network mk-kubernetes-upgrade-470019
	I0914 17:57:56.474993   55203 main.go:141] libmachine: Docker is up and running!
	I0914 17:57:56.475040   55203 main.go:141] libmachine: Reticulating splines...
	I0914 17:57:56.475048   55203 client.go:171] duration metric: took 22.369261803s to LocalClient.Create
	I0914 17:57:56.475077   55203 start.go:167] duration metric: took 22.369338884s to libmachine.API.Create "kubernetes-upgrade-470019"
	I0914 17:57:56.475090   55203 start.go:293] postStartSetup for "kubernetes-upgrade-470019" (driver="kvm2")
	I0914 17:57:56.475102   55203 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0914 17:57:56.475122   55203 main.go:141] libmachine: (kubernetes-upgrade-470019) Calling .DriverName
	I0914 17:57:56.475457   55203 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0914 17:57:56.475483   55203 main.go:141] libmachine: (kubernetes-upgrade-470019) Calling .GetSSHHostname
	I0914 17:57:56.477974   55203 main.go:141] libmachine: (kubernetes-upgrade-470019) DBG | domain kubernetes-upgrade-470019 has defined MAC address 52:54:00:5c:27:09 in network mk-kubernetes-upgrade-470019
	I0914 17:57:56.478365   55203 main.go:141] libmachine: (kubernetes-upgrade-470019) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:27:09", ip: ""} in network mk-kubernetes-upgrade-470019: {Iface:virbr4 ExpiryTime:2024-09-14 18:57:49 +0000 UTC Type:0 Mac:52:54:00:5c:27:09 Iaid: IPaddr:192.168.72.202 Prefix:24 Hostname:kubernetes-upgrade-470019 Clientid:01:52:54:00:5c:27:09}
	I0914 17:57:56.478390   55203 main.go:141] libmachine: (kubernetes-upgrade-470019) DBG | domain kubernetes-upgrade-470019 has defined IP address 192.168.72.202 and MAC address 52:54:00:5c:27:09 in network mk-kubernetes-upgrade-470019
	I0914 17:57:56.478534   55203 main.go:141] libmachine: (kubernetes-upgrade-470019) Calling .GetSSHPort
	I0914 17:57:56.478711   55203 main.go:141] libmachine: (kubernetes-upgrade-470019) Calling .GetSSHKeyPath
	I0914 17:57:56.478872   55203 main.go:141] libmachine: (kubernetes-upgrade-470019) Calling .GetSSHUsername
	I0914 17:57:56.478986   55203 sshutil.go:53] new ssh client: &{IP:192.168.72.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/kubernetes-upgrade-470019/id_rsa Username:docker}
	I0914 17:57:56.560039   55203 ssh_runner.go:195] Run: cat /etc/os-release
	I0914 17:57:56.564053   55203 info.go:137] Remote host: Buildroot 2023.02.9
	I0914 17:57:56.564083   55203 filesync.go:126] Scanning /home/jenkins/minikube-integration/19643-8806/.minikube/addons for local assets ...
	I0914 17:57:56.564173   55203 filesync.go:126] Scanning /home/jenkins/minikube-integration/19643-8806/.minikube/files for local assets ...
	I0914 17:57:56.564311   55203 filesync.go:149] local asset: /home/jenkins/minikube-integration/19643-8806/.minikube/files/etc/ssl/certs/160162.pem -> 160162.pem in /etc/ssl/certs
	I0914 17:57:56.564445   55203 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0914 17:57:56.573461   55203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/files/etc/ssl/certs/160162.pem --> /etc/ssl/certs/160162.pem (1708 bytes)
	I0914 17:57:56.596743   55203 start.go:296] duration metric: took 121.635834ms for postStartSetup
	I0914 17:57:56.596797   55203 main.go:141] libmachine: (kubernetes-upgrade-470019) Calling .GetConfigRaw
	I0914 17:57:56.597408   55203 main.go:141] libmachine: (kubernetes-upgrade-470019) Calling .GetIP
	I0914 17:57:56.600067   55203 main.go:141] libmachine: (kubernetes-upgrade-470019) DBG | domain kubernetes-upgrade-470019 has defined MAC address 52:54:00:5c:27:09 in network mk-kubernetes-upgrade-470019
	I0914 17:57:56.600418   55203 main.go:141] libmachine: (kubernetes-upgrade-470019) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:27:09", ip: ""} in network mk-kubernetes-upgrade-470019: {Iface:virbr4 ExpiryTime:2024-09-14 18:57:49 +0000 UTC Type:0 Mac:52:54:00:5c:27:09 Iaid: IPaddr:192.168.72.202 Prefix:24 Hostname:kubernetes-upgrade-470019 Clientid:01:52:54:00:5c:27:09}
	I0914 17:57:56.600459   55203 main.go:141] libmachine: (kubernetes-upgrade-470019) DBG | domain kubernetes-upgrade-470019 has defined IP address 192.168.72.202 and MAC address 52:54:00:5c:27:09 in network mk-kubernetes-upgrade-470019
	I0914 17:57:56.600696   55203 profile.go:143] Saving config to /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/kubernetes-upgrade-470019/config.json ...
	I0914 17:57:56.600893   55203 start.go:128] duration metric: took 22.517883446s to createHost
	I0914 17:57:56.600933   55203 main.go:141] libmachine: (kubernetes-upgrade-470019) Calling .GetSSHHostname
	I0914 17:57:56.603314   55203 main.go:141] libmachine: (kubernetes-upgrade-470019) DBG | domain kubernetes-upgrade-470019 has defined MAC address 52:54:00:5c:27:09 in network mk-kubernetes-upgrade-470019
	I0914 17:57:56.603646   55203 main.go:141] libmachine: (kubernetes-upgrade-470019) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:27:09", ip: ""} in network mk-kubernetes-upgrade-470019: {Iface:virbr4 ExpiryTime:2024-09-14 18:57:49 +0000 UTC Type:0 Mac:52:54:00:5c:27:09 Iaid: IPaddr:192.168.72.202 Prefix:24 Hostname:kubernetes-upgrade-470019 Clientid:01:52:54:00:5c:27:09}
	I0914 17:57:56.603676   55203 main.go:141] libmachine: (kubernetes-upgrade-470019) DBG | domain kubernetes-upgrade-470019 has defined IP address 192.168.72.202 and MAC address 52:54:00:5c:27:09 in network mk-kubernetes-upgrade-470019
	I0914 17:57:56.603803   55203 main.go:141] libmachine: (kubernetes-upgrade-470019) Calling .GetSSHPort
	I0914 17:57:56.603990   55203 main.go:141] libmachine: (kubernetes-upgrade-470019) Calling .GetSSHKeyPath
	I0914 17:57:56.604152   55203 main.go:141] libmachine: (kubernetes-upgrade-470019) Calling .GetSSHKeyPath
	I0914 17:57:56.604310   55203 main.go:141] libmachine: (kubernetes-upgrade-470019) Calling .GetSSHUsername
	I0914 17:57:56.604466   55203 main.go:141] libmachine: Using SSH client type: native
	I0914 17:57:56.604653   55203 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.72.202 22 <nil> <nil>}
	I0914 17:57:56.604670   55203 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0914 17:57:56.706722   55203 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726336676.679015508
	
	I0914 17:57:56.706754   55203 fix.go:216] guest clock: 1726336676.679015508
	I0914 17:57:56.706761   55203 fix.go:229] Guest: 2024-09-14 17:57:56.679015508 +0000 UTC Remote: 2024-09-14 17:57:56.600909785 +0000 UTC m=+54.278757469 (delta=78.105723ms)
	I0914 17:57:56.706796   55203 fix.go:200] guest clock delta is within tolerance: 78.105723ms
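fix.go compares the guest clock (read over SSH with date +%s.%N) against the host clock and only resynchronizes when the delta exceeds a tolerance. A tiny Go sketch of that check; the 2s tolerance is an assumption, not minikube's configured value:

    package main

    import (
    	"fmt"
    	"time"
    )

    // clockDriftOK reports whether the guest clock is within tol of the host clock.
    func clockDriftOK(guest, host time.Time, tol time.Duration) (time.Duration, bool) {
    	delta := guest.Sub(host)
    	if delta < 0 {
    		delta = -delta
    	}
    	return delta, delta <= tol
    }

    func main() {
    	host := time.Now()
    	guest := host.Add(78 * time.Millisecond) // delta similar to the one logged above
    	if delta, ok := clockDriftOK(guest, host, 2*time.Second); ok {
    		fmt.Printf("guest clock delta is within tolerance: %v\n", delta)
    	} else {
    		fmt.Printf("guest clock delta %v exceeds tolerance, would resync\n", delta)
    	}
    }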
	I0914 17:57:56.706803   55203 start.go:83] releasing machines lock for "kubernetes-upgrade-470019", held for 22.623965899s
	I0914 17:57:56.706837   55203 main.go:141] libmachine: (kubernetes-upgrade-470019) Calling .DriverName
	I0914 17:57:56.707102   55203 main.go:141] libmachine: (kubernetes-upgrade-470019) Calling .GetIP
	I0914 17:57:56.710208   55203 main.go:141] libmachine: (kubernetes-upgrade-470019) DBG | domain kubernetes-upgrade-470019 has defined MAC address 52:54:00:5c:27:09 in network mk-kubernetes-upgrade-470019
	I0914 17:57:56.710625   55203 main.go:141] libmachine: (kubernetes-upgrade-470019) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:27:09", ip: ""} in network mk-kubernetes-upgrade-470019: {Iface:virbr4 ExpiryTime:2024-09-14 18:57:49 +0000 UTC Type:0 Mac:52:54:00:5c:27:09 Iaid: IPaddr:192.168.72.202 Prefix:24 Hostname:kubernetes-upgrade-470019 Clientid:01:52:54:00:5c:27:09}
	I0914 17:57:56.710659   55203 main.go:141] libmachine: (kubernetes-upgrade-470019) DBG | domain kubernetes-upgrade-470019 has defined IP address 192.168.72.202 and MAC address 52:54:00:5c:27:09 in network mk-kubernetes-upgrade-470019
	I0914 17:57:56.710765   55203 main.go:141] libmachine: (kubernetes-upgrade-470019) Calling .DriverName
	I0914 17:57:56.711338   55203 main.go:141] libmachine: (kubernetes-upgrade-470019) Calling .DriverName
	I0914 17:57:56.711532   55203 main.go:141] libmachine: (kubernetes-upgrade-470019) Calling .DriverName
	I0914 17:57:56.711648   55203 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0914 17:57:56.711747   55203 ssh_runner.go:195] Run: cat /version.json
	I0914 17:57:56.711779   55203 main.go:141] libmachine: (kubernetes-upgrade-470019) Calling .GetSSHHostname
	I0914 17:57:56.711752   55203 main.go:141] libmachine: (kubernetes-upgrade-470019) Calling .GetSSHHostname
	I0914 17:57:56.714549   55203 main.go:141] libmachine: (kubernetes-upgrade-470019) DBG | domain kubernetes-upgrade-470019 has defined MAC address 52:54:00:5c:27:09 in network mk-kubernetes-upgrade-470019
	I0914 17:57:56.714784   55203 main.go:141] libmachine: (kubernetes-upgrade-470019) DBG | domain kubernetes-upgrade-470019 has defined MAC address 52:54:00:5c:27:09 in network mk-kubernetes-upgrade-470019
	I0914 17:57:56.714926   55203 main.go:141] libmachine: (kubernetes-upgrade-470019) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:27:09", ip: ""} in network mk-kubernetes-upgrade-470019: {Iface:virbr4 ExpiryTime:2024-09-14 18:57:49 +0000 UTC Type:0 Mac:52:54:00:5c:27:09 Iaid: IPaddr:192.168.72.202 Prefix:24 Hostname:kubernetes-upgrade-470019 Clientid:01:52:54:00:5c:27:09}
	I0914 17:57:56.714952   55203 main.go:141] libmachine: (kubernetes-upgrade-470019) DBG | domain kubernetes-upgrade-470019 has defined IP address 192.168.72.202 and MAC address 52:54:00:5c:27:09 in network mk-kubernetes-upgrade-470019
	I0914 17:57:56.715082   55203 main.go:141] libmachine: (kubernetes-upgrade-470019) Calling .GetSSHPort
	I0914 17:57:56.715173   55203 main.go:141] libmachine: (kubernetes-upgrade-470019) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:27:09", ip: ""} in network mk-kubernetes-upgrade-470019: {Iface:virbr4 ExpiryTime:2024-09-14 18:57:49 +0000 UTC Type:0 Mac:52:54:00:5c:27:09 Iaid: IPaddr:192.168.72.202 Prefix:24 Hostname:kubernetes-upgrade-470019 Clientid:01:52:54:00:5c:27:09}
	I0914 17:57:56.715208   55203 main.go:141] libmachine: (kubernetes-upgrade-470019) DBG | domain kubernetes-upgrade-470019 has defined IP address 192.168.72.202 and MAC address 52:54:00:5c:27:09 in network mk-kubernetes-upgrade-470019
	I0914 17:57:56.715258   55203 main.go:141] libmachine: (kubernetes-upgrade-470019) Calling .GetSSHKeyPath
	I0914 17:57:56.715369   55203 main.go:141] libmachine: (kubernetes-upgrade-470019) Calling .GetSSHPort
	I0914 17:57:56.715451   55203 main.go:141] libmachine: (kubernetes-upgrade-470019) Calling .GetSSHUsername
	I0914 17:57:56.715560   55203 sshutil.go:53] new ssh client: &{IP:192.168.72.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/kubernetes-upgrade-470019/id_rsa Username:docker}
	I0914 17:57:56.715576   55203 main.go:141] libmachine: (kubernetes-upgrade-470019) Calling .GetSSHKeyPath
	I0914 17:57:56.715696   55203 main.go:141] libmachine: (kubernetes-upgrade-470019) Calling .GetSSHUsername
	I0914 17:57:56.715822   55203 sshutil.go:53] new ssh client: &{IP:192.168.72.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/kubernetes-upgrade-470019/id_rsa Username:docker}
	I0914 17:57:56.831046   55203 ssh_runner.go:195] Run: systemctl --version
	I0914 17:57:56.837034   55203 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0914 17:57:57.001871   55203 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0914 17:57:57.008172   55203 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0914 17:57:57.008253   55203 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0914 17:57:57.024242   55203 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
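For readability: the find invocation logged just above has its shell quoting stripped by the logger. A properly quoted equivalent (a sketch of the same command, assuming GNU find) would be:

    sudo find /etc/cni/net.d -maxdepth 1 -type f \
      \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) \
      -printf '%p, ' -exec sh -c 'sudo mv {} {}.mk_disabled' \;

This is what renames /etc/cni/net.d/87-podman-bridge.conflist to *.mk_disabled, giving the "disabled [...] bridge cni config(s)" result above.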
	I0914 17:57:57.024266   55203 start.go:495] detecting cgroup driver to use...
	I0914 17:57:57.024325   55203 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0914 17:57:57.042642   55203 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0914 17:57:57.056912   55203 docker.go:217] disabling cri-docker service (if available) ...
	I0914 17:57:57.056970   55203 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0914 17:57:57.077043   55203 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0914 17:57:57.095414   55203 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0914 17:57:57.205823   55203 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0914 17:57:57.368498   55203 docker.go:233] disabling docker service ...
	I0914 17:57:57.368573   55203 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0914 17:57:57.382695   55203 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0914 17:57:57.395929   55203 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0914 17:57:57.544611   55203 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0914 17:57:57.696787   55203 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0914 17:57:57.710412   55203 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0914 17:57:57.728701   55203 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0914 17:57:57.728774   55203 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 17:57:57.738900   55203 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0914 17:57:57.738963   55203 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 17:57:57.750456   55203 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 17:57:57.762894   55203 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 17:57:57.773764   55203 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0914 17:57:57.784630   55203 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0914 17:57:57.794524   55203 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0914 17:57:57.794603   55203 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0914 17:57:57.807916   55203 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0914 17:57:57.817534   55203 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 17:57:57.949994   55203 ssh_runner.go:195] Run: sudo systemctl restart crio
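The cri-o reconfiguration performed in the steps above can be reproduced by hand with roughly the following commands (a sketch assembled from the tee/sed invocations in this log; the paths, pause image tag, and cgroupfs value are taken from the log, the rest is assumption):

    printf 'runtime-endpoint: unix:///var/run/crio/crio.sock\n' | sudo tee /etc/crictl.yaml
    sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf
    sudo modprobe br_netfilter          # bridge-nf-call-iptables was missing above
    sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'
    sudo systemctl daemon-reload && sudo systemctl restart crio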
	I0914 17:57:58.048027   55203 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0914 17:57:58.048105   55203 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0914 17:57:58.052913   55203 start.go:563] Will wait 60s for crictl version
	I0914 17:57:58.052987   55203 ssh_runner.go:195] Run: which crictl
	I0914 17:57:58.057087   55203 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0914 17:57:58.098309   55203 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0914 17:57:58.098398   55203 ssh_runner.go:195] Run: crio --version
	I0914 17:57:58.127419   55203 ssh_runner.go:195] Run: crio --version
	I0914 17:57:58.156639   55203 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0914 17:57:58.157943   55203 main.go:141] libmachine: (kubernetes-upgrade-470019) Calling .GetIP
	I0914 17:57:58.161034   55203 main.go:141] libmachine: (kubernetes-upgrade-470019) DBG | domain kubernetes-upgrade-470019 has defined MAC address 52:54:00:5c:27:09 in network mk-kubernetes-upgrade-470019
	I0914 17:57:58.161455   55203 main.go:141] libmachine: (kubernetes-upgrade-470019) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:27:09", ip: ""} in network mk-kubernetes-upgrade-470019: {Iface:virbr4 ExpiryTime:2024-09-14 18:57:49 +0000 UTC Type:0 Mac:52:54:00:5c:27:09 Iaid: IPaddr:192.168.72.202 Prefix:24 Hostname:kubernetes-upgrade-470019 Clientid:01:52:54:00:5c:27:09}
	I0914 17:57:58.161487   55203 main.go:141] libmachine: (kubernetes-upgrade-470019) DBG | domain kubernetes-upgrade-470019 has defined IP address 192.168.72.202 and MAC address 52:54:00:5c:27:09 in network mk-kubernetes-upgrade-470019
	I0914 17:57:58.161746   55203 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0914 17:57:58.165801   55203 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0914 17:57:58.178022   55203 kubeadm.go:883] updating cluster {Name:kubernetes-upgrade-470019 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19643/minikube-v1.34.0-1726281733-19643-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726281268-19643@sha256:cce8be0c1ac4e3d852132008ef1cc1dcf5b79f708d025db83f146ae65db32e8e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVe
rsion:v1.20.0 ClusterName:kubernetes-upgrade-470019 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.202 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimi
zations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0914 17:57:58.178135   55203 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0914 17:57:58.178212   55203 ssh_runner.go:195] Run: sudo crictl images --output json
	I0914 17:57:58.211978   55203 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0914 17:57:58.212058   55203 ssh_runner.go:195] Run: which lz4
	I0914 17:57:58.215893   55203 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0914 17:57:58.220147   55203 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0914 17:57:58.220181   55203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0914 17:57:59.780584   55203 crio.go:462] duration metric: took 1.56472769s to copy over tarball
	I0914 17:57:59.780672   55203 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0914 17:58:02.376766   55203 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.596058762s)
	I0914 17:58:02.376799   55203 crio.go:469] duration metric: took 2.596183088s to extract the tarball
	I0914 17:58:02.376809   55203 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0914 17:58:02.420373   55203 ssh_runner.go:195] Run: sudo crictl images --output json
	I0914 17:58:02.464923   55203 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0914 17:58:02.464953   55203 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0914 17:58:02.465012   55203 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 17:58:02.465062   55203 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0914 17:58:02.465087   55203 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0914 17:58:02.465097   55203 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0914 17:58:02.465062   55203 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0914 17:58:02.465118   55203 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0914 17:58:02.465095   55203 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0914 17:58:02.465059   55203 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0914 17:58:02.466514   55203 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0914 17:58:02.466625   55203 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0914 17:58:02.466636   55203 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0914 17:58:02.466643   55203 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0914 17:58:02.466515   55203 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0914 17:58:02.466660   55203 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0914 17:58:02.466660   55203 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0914 17:58:02.466701   55203 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 17:58:02.678336   55203 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0914 17:58:02.692914   55203 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0914 17:58:02.698968   55203 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0914 17:58:02.711757   55203 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0914 17:58:02.718083   55203 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0914 17:58:02.718130   55203 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0914 17:58:02.718198   55203 ssh_runner.go:195] Run: which crictl
	I0914 17:58:02.732916   55203 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0914 17:58:02.733908   55203 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0914 17:58:02.742526   55203 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0914 17:58:02.797008   55203 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0914 17:58:02.797059   55203 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0914 17:58:02.797115   55203 ssh_runner.go:195] Run: which crictl
	I0914 17:58:02.816752   55203 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0914 17:58:02.816796   55203 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0914 17:58:02.816841   55203 ssh_runner.go:195] Run: which crictl
	I0914 17:58:02.816854   55203 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0914 17:58:02.816891   55203 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0914 17:58:02.816917   55203 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0914 17:58:02.816929   55203 ssh_runner.go:195] Run: which crictl
	I0914 17:58:02.865988   55203 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0914 17:58:02.866045   55203 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0914 17:58:02.866065   55203 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0914 17:58:02.866077   55203 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0914 17:58:02.866101   55203 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0914 17:58:02.866121   55203 ssh_runner.go:195] Run: which crictl
	I0914 17:58:02.866124   55203 ssh_runner.go:195] Run: which crictl
	I0914 17:58:02.866129   55203 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0914 17:58:02.866171   55203 ssh_runner.go:195] Run: which crictl
	I0914 17:58:02.866222   55203 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0914 17:58:02.866276   55203 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0914 17:58:02.900904   55203 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0914 17:58:02.900971   55203 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0914 17:58:02.901043   55203 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0914 17:58:02.901080   55203 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0914 17:58:02.961381   55203 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0914 17:58:02.961468   55203 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0914 17:58:02.961503   55203 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0914 17:58:03.011901   55203 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0914 17:58:03.039719   55203 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0914 17:58:03.039771   55203 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0914 17:58:03.039822   55203 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0914 17:58:03.128440   55203 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0914 17:58:03.128440   55203 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0914 17:58:03.143668   55203 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0914 17:58:03.182127   55203 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0914 17:58:03.182135   55203 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19643-8806/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0914 17:58:03.182244   55203 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0914 17:58:03.182254   55203 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0914 17:58:03.263392   55203 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0914 17:58:03.263394   55203 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19643-8806/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0914 17:58:03.278779   55203 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19643-8806/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0914 17:58:03.332488   55203 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19643-8806/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0914 17:58:03.332507   55203 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19643-8806/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0914 17:58:03.332580   55203 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19643-8806/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0914 17:58:03.332609   55203 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19643-8806/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0914 17:58:03.754324   55203 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 17:58:03.898343   55203 cache_images.go:92] duration metric: took 1.433369805s to LoadCachedImages
	W0914 17:58:03.898471   55203 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19643-8806/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19643-8806/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2: no such file or directory
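Since neither the preload tarball nor the local image cache had the v1.20.0 images, kubeadm has to pull them during init. If a rerun should avoid that, one possible way to pre-populate the cache beforehand (assuming a standard minikube CLI; not something this test does) is:

    minikube cache add registry.k8s.io/pause:3.2
    minikube cache add registry.k8s.io/kube-apiserver:v1.20.0
    # ...and likewise for the other images listed in LoadCachedImages above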
	I0914 17:58:03.898488   55203 kubeadm.go:934] updating node { 192.168.72.202 8443 v1.20.0 crio true true} ...
	I0914 17:58:03.898609   55203 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=kubernetes-upgrade-470019 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.202
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-470019 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0914 17:58:03.898689   55203 ssh_runner.go:195] Run: crio config
	I0914 17:58:03.944876   55203 cni.go:84] Creating CNI manager for ""
	I0914 17:58:03.944903   55203 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0914 17:58:03.944915   55203 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0914 17:58:03.944940   55203 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.202 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-470019 NodeName:kubernetes-upgrade-470019 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.202"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.202 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.
crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0914 17:58:03.945128   55203 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.202
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "kubernetes-upgrade-470019"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.202
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.202"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
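The block above is the kubeadm v1beta2 configuration minikube renders for Kubernetes v1.20.0 with the CRI-O socket and the cgroupfs driver. If needed, it can be exercised without changing node state via kubeadm's dry-run mode (a hypothetical manual check, not part of this test run):

    sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" \
      kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run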
	
	I0914 17:58:03.945238   55203 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0914 17:58:03.955189   55203 binaries.go:44] Found k8s binaries, skipping transfer
	I0914 17:58:03.955269   55203 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0914 17:58:03.965261   55203 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (433 bytes)
	I0914 17:58:03.983394   55203 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0914 17:58:03.999986   55203 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2126 bytes)
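To confirm what the three scp steps above actually wrote to the node, the rendered files can be read back directly (a sketch; the paths are the ones shown in this log):

    systemctl cat kubelet                 # kubelet.service plus the 10-kubeadm.conf drop-in
    sudo cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
    sudo cat /var/tmp/minikube/kubeadm.yaml.new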
	I0914 17:58:04.016723   55203 ssh_runner.go:195] Run: grep 192.168.72.202	control-plane.minikube.internal$ /etc/hosts
	I0914 17:58:04.020850   55203 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.202	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0914 17:58:04.033111   55203 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 17:58:04.159846   55203 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0914 17:58:04.180720   55203 certs.go:68] Setting up /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/kubernetes-upgrade-470019 for IP: 192.168.72.202
	I0914 17:58:04.180748   55203 certs.go:194] generating shared ca certs ...
	I0914 17:58:04.180769   55203 certs.go:226] acquiring lock for ca certs: {Name:mkb663a3180967f5f94f0c355b2cd55067394331 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 17:58:04.180970   55203 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19643-8806/.minikube/ca.key
	I0914 17:58:04.181032   55203 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19643-8806/.minikube/proxy-client-ca.key
	I0914 17:58:04.181045   55203 certs.go:256] generating profile certs ...
	I0914 17:58:04.181112   55203 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/kubernetes-upgrade-470019/client.key
	I0914 17:58:04.181135   55203 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/kubernetes-upgrade-470019/client.crt with IP's: []
	I0914 17:58:04.324407   55203 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/kubernetes-upgrade-470019/client.crt ...
	I0914 17:58:04.324440   55203 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/kubernetes-upgrade-470019/client.crt: {Name:mk3f9ac575c56445eb7c64bf1e2feea730b87367 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 17:58:04.324656   55203 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/kubernetes-upgrade-470019/client.key ...
	I0914 17:58:04.324677   55203 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/kubernetes-upgrade-470019/client.key: {Name:mka167eed59133b912c013775382d0d158714256 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 17:58:04.324784   55203 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/kubernetes-upgrade-470019/apiserver.key.f0b2fee7
	I0914 17:58:04.324805   55203 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/kubernetes-upgrade-470019/apiserver.crt.f0b2fee7 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.72.202]
	I0914 17:58:04.624769   55203 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/kubernetes-upgrade-470019/apiserver.crt.f0b2fee7 ...
	I0914 17:58:04.624804   55203 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/kubernetes-upgrade-470019/apiserver.crt.f0b2fee7: {Name:mk63c0548847f4a0202a9cbc226de70e1dca1f5d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 17:58:04.624967   55203 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/kubernetes-upgrade-470019/apiserver.key.f0b2fee7 ...
	I0914 17:58:04.624984   55203 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/kubernetes-upgrade-470019/apiserver.key.f0b2fee7: {Name:mk4473c249284211fd5f2ad1c2d9859cbff1c5da Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 17:58:04.625055   55203 certs.go:381] copying /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/kubernetes-upgrade-470019/apiserver.crt.f0b2fee7 -> /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/kubernetes-upgrade-470019/apiserver.crt
	I0914 17:58:04.625125   55203 certs.go:385] copying /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/kubernetes-upgrade-470019/apiserver.key.f0b2fee7 -> /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/kubernetes-upgrade-470019/apiserver.key
	I0914 17:58:04.625175   55203 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/kubernetes-upgrade-470019/proxy-client.key
	I0914 17:58:04.625194   55203 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/kubernetes-upgrade-470019/proxy-client.crt with IP's: []
	I0914 17:58:04.786753   55203 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/kubernetes-upgrade-470019/proxy-client.crt ...
	I0914 17:58:04.786781   55203 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/kubernetes-upgrade-470019/proxy-client.crt: {Name:mk194e06e8b331fdb6c38ca2be2b82fac94117f9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 17:58:04.786949   55203 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/kubernetes-upgrade-470019/proxy-client.key ...
	I0914 17:58:04.786965   55203 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/kubernetes-upgrade-470019/proxy-client.key: {Name:mk4dc8a0a87ecfa14e761f622822bbe4f0fc7e56 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 17:58:04.787126   55203 certs.go:484] found cert: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/16016.pem (1338 bytes)
	W0914 17:58:04.787161   55203 certs.go:480] ignoring /home/jenkins/minikube-integration/19643-8806/.minikube/certs/16016_empty.pem, impossibly tiny 0 bytes
	I0914 17:58:04.787172   55203 certs.go:484] found cert: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca-key.pem (1679 bytes)
	I0914 17:58:04.787195   55203 certs.go:484] found cert: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca.pem (1082 bytes)
	I0914 17:58:04.787218   55203 certs.go:484] found cert: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/cert.pem (1123 bytes)
	I0914 17:58:04.787239   55203 certs.go:484] found cert: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/key.pem (1675 bytes)
	I0914 17:58:04.787281   55203 certs.go:484] found cert: /home/jenkins/minikube-integration/19643-8806/.minikube/files/etc/ssl/certs/160162.pem (1708 bytes)
	I0914 17:58:04.787814   55203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0914 17:58:04.815528   55203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0914 17:58:04.842009   55203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0914 17:58:04.866131   55203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0914 17:58:04.891153   55203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/kubernetes-upgrade-470019/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0914 17:58:04.915584   55203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/kubernetes-upgrade-470019/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0914 17:58:04.940580   55203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/kubernetes-upgrade-470019/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0914 17:58:04.969872   55203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/kubernetes-upgrade-470019/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0914 17:58:05.011899   55203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0914 17:58:05.039059   55203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/certs/16016.pem --> /usr/share/ca-certificates/16016.pem (1338 bytes)
	I0914 17:58:05.072869   55203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/files/etc/ssl/certs/160162.pem --> /usr/share/ca-certificates/160162.pem (1708 bytes)
	I0914 17:58:05.101309   55203 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0914 17:58:05.119417   55203 ssh_runner.go:195] Run: openssl version
	I0914 17:58:05.125930   55203 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/160162.pem && ln -fs /usr/share/ca-certificates/160162.pem /etc/ssl/certs/160162.pem"
	I0914 17:58:05.138123   55203 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/160162.pem
	I0914 17:58:05.143292   55203 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 14 17:01 /usr/share/ca-certificates/160162.pem
	I0914 17:58:05.143381   55203 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/160162.pem
	I0914 17:58:05.149535   55203 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/160162.pem /etc/ssl/certs/3ec20f2e.0"
	I0914 17:58:05.161196   55203 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0914 17:58:05.172670   55203 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0914 17:58:05.177658   55203 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 14 16:44 /usr/share/ca-certificates/minikubeCA.pem
	I0914 17:58:05.177722   55203 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0914 17:58:05.183825   55203 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0914 17:58:05.195189   55203 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16016.pem && ln -fs /usr/share/ca-certificates/16016.pem /etc/ssl/certs/16016.pem"
	I0914 17:58:05.206281   55203 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16016.pem
	I0914 17:58:05.211065   55203 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 14 17:01 /usr/share/ca-certificates/16016.pem
	I0914 17:58:05.211124   55203 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16016.pem
	I0914 17:58:05.217041   55203 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16016.pem /etc/ssl/certs/51391683.0"
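The three certificate installs above all follow the same pattern: copy the PEM into /usr/share/ca-certificates and symlink it into /etc/ssl/certs under its OpenSSL subject-hash name (for example b5213941.0 for minikubeCA.pem). A generic form of that step (a sketch; the cert path is just the example from this log) is:

    cert=/usr/share/ca-certificates/minikubeCA.pem
    hash=$(openssl x509 -hash -noout -in "$cert")
    sudo ln -fs "$cert" "/etc/ssl/certs/${hash}.0"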
	I0914 17:58:05.230081   55203 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0914 17:58:05.234736   55203 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0914 17:58:05.234804   55203 kubeadm.go:392] StartCluster: {Name:kubernetes-upgrade-470019 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19643/minikube-v1.34.0-1726281733-19643-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726281268-19643@sha256:cce8be0c1ac4e3d852132008ef1cc1dcf5b79f708d025db83f146ae65db32e8e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersi
on:v1.20.0 ClusterName:kubernetes-upgrade-470019 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.202 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizat
ions:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0914 17:58:05.234886   55203 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0914 17:58:05.234958   55203 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0914 17:58:05.271662   55203 cri.go:89] found id: ""
	I0914 17:58:05.271730   55203 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0914 17:58:05.281932   55203 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0914 17:58:05.293652   55203 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0914 17:58:05.304433   55203 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0914 17:58:05.304458   55203 kubeadm.go:157] found existing configuration files:
	
	I0914 17:58:05.304519   55203 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0914 17:58:05.314505   55203 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0914 17:58:05.314587   55203 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0914 17:58:05.325140   55203 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0914 17:58:05.336728   55203 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0914 17:58:05.336798   55203 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0914 17:58:05.347993   55203 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0914 17:58:05.359488   55203 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0914 17:58:05.359563   55203 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0914 17:58:05.370975   55203 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0914 17:58:05.381820   55203 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0914 17:58:05.381970   55203 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0914 17:58:05.393859   55203 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0914 17:58:05.514986   55203 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0914 17:58:05.515057   55203 kubeadm.go:310] [preflight] Running pre-flight checks
	I0914 17:58:05.678431   55203 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0914 17:58:05.678571   55203 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0914 17:58:05.678692   55203 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0914 17:58:05.900956   55203 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0914 17:58:06.030587   55203 out.go:235]   - Generating certificates and keys ...
	I0914 17:58:06.030728   55203 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0914 17:58:06.030821   55203 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0914 17:58:06.030920   55203 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0914 17:58:06.525884   55203 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0914 17:58:06.750963   55203 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0914 17:58:06.902059   55203 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0914 17:58:07.172409   55203 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0914 17:58:07.172624   55203 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-470019 localhost] and IPs [192.168.72.202 127.0.0.1 ::1]
	I0914 17:58:07.285450   55203 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0914 17:58:07.285702   55203 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-470019 localhost] and IPs [192.168.72.202 127.0.0.1 ::1]
	I0914 17:58:07.456788   55203 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0914 17:58:08.012054   55203 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0914 17:58:08.326914   55203 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0914 17:58:08.327002   55203 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0914 17:58:08.624780   55203 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0914 17:58:08.976008   55203 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0914 17:58:09.057577   55203 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0914 17:58:09.311680   55203 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0914 17:58:09.327775   55203 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0914 17:58:09.327863   55203 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0914 17:58:09.327969   55203 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0914 17:58:09.464697   55203 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0914 17:58:09.466412   55203 out.go:235]   - Booting up control plane ...
	I0914 17:58:09.466547   55203 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0914 17:58:09.473292   55203 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0914 17:58:09.474124   55203 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0914 17:58:09.476329   55203 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0914 17:58:09.478979   55203 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0914 17:58:49.472362   55203 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0914 17:58:49.472456   55203 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0914 17:58:49.472741   55203 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0914 17:58:54.473405   55203 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0914 17:58:54.473730   55203 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0914 17:59:04.472535   55203 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0914 17:59:04.472810   55203 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0914 17:59:24.472309   55203 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0914 17:59:24.472549   55203 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0914 18:00:04.474101   55203 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0914 18:00:04.474406   55203 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0914 18:00:04.474434   55203 kubeadm.go:310] 
	I0914 18:00:04.474494   55203 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0914 18:00:04.474578   55203 kubeadm.go:310] 		timed out waiting for the condition
	I0914 18:00:04.474598   55203 kubeadm.go:310] 
	I0914 18:00:04.474644   55203 kubeadm.go:310] 	This error is likely caused by:
	I0914 18:00:04.474683   55203 kubeadm.go:310] 		- The kubelet is not running
	I0914 18:00:04.474842   55203 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0914 18:00:04.474863   55203 kubeadm.go:310] 
	I0914 18:00:04.475006   55203 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0914 18:00:04.475066   55203 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0914 18:00:04.475136   55203 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0914 18:00:04.475154   55203 kubeadm.go:310] 
	I0914 18:00:04.475347   55203 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0914 18:00:04.475459   55203 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0914 18:00:04.475471   55203 kubeadm.go:310] 
	I0914 18:00:04.475594   55203 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0914 18:00:04.475715   55203 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0914 18:00:04.475807   55203 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0914 18:00:04.475892   55203 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0914 18:00:04.475907   55203 kubeadm.go:310] 
	I0914 18:00:04.476264   55203 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0914 18:00:04.476392   55203 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0914 18:00:04.476450   55203 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
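kubeadm's output above already lists the relevant diagnostics; collected in one place, the commands to run on the kubernetes-upgrade-470019 guest are (copied from the log text above):

    systemctl status kubelet
    journalctl -xeu kubelet
    crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
    crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID   # CONTAINERID from the previous command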
	W0914 18:00:04.476606   55203 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-470019 localhost] and IPs [192.168.72.202 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-470019 localhost] and IPs [192.168.72.202 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-470019 localhost] and IPs [192.168.72.202 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-470019 localhost] and IPs [192.168.72.202 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0914 18:00:04.476662   55203 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0914 18:00:05.295370   55203 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 18:00:05.314120   55203 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0914 18:00:05.325124   55203 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0914 18:00:05.325148   55203 kubeadm.go:157] found existing configuration files:
	
	I0914 18:00:05.325207   55203 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0914 18:00:05.335402   55203 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0914 18:00:05.335477   55203 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0914 18:00:05.345199   55203 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0914 18:00:05.357233   55203 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0914 18:00:05.357293   55203 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0914 18:00:05.367889   55203 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0914 18:00:05.378333   55203 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0914 18:00:05.378408   55203 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0914 18:00:05.388257   55203 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0914 18:00:05.398332   55203 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0914 18:00:05.398394   55203 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0914 18:00:05.410985   55203 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0914 18:00:05.508566   55203 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0914 18:00:05.509087   55203 kubeadm.go:310] [preflight] Running pre-flight checks
	I0914 18:00:05.674398   55203 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0914 18:00:05.674575   55203 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0914 18:00:05.674718   55203 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0914 18:00:05.866227   55203 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0914 18:00:05.868014   55203 out.go:235]   - Generating certificates and keys ...
	I0914 18:00:05.868119   55203 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0914 18:00:05.868204   55203 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0914 18:00:05.868316   55203 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0914 18:00:05.868396   55203 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0914 18:00:05.868510   55203 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0914 18:00:05.868618   55203 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0914 18:00:05.868715   55203 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0914 18:00:05.868795   55203 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0914 18:00:05.868882   55203 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0914 18:00:05.868989   55203 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0914 18:00:05.869074   55203 kubeadm.go:310] [certs] Using the existing "sa" key
	I0914 18:00:05.869173   55203 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0914 18:00:05.967162   55203 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0914 18:00:06.339715   55203 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0914 18:00:06.481848   55203 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0914 18:00:06.621970   55203 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0914 18:00:06.644135   55203 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0914 18:00:06.646351   55203 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0914 18:00:06.646414   55203 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0914 18:00:06.786761   55203 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0914 18:00:06.789326   55203 out.go:235]   - Booting up control plane ...
	I0914 18:00:06.789547   55203 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0914 18:00:06.794640   55203 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0914 18:00:06.795660   55203 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0914 18:00:06.796478   55203 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0914 18:00:06.799298   55203 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0914 18:00:46.801762   55203 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0914 18:00:46.801997   55203 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0914 18:00:46.802414   55203 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0914 18:00:51.803028   55203 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0914 18:00:51.803315   55203 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0914 18:01:01.804111   55203 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0914 18:01:01.804293   55203 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0914 18:01:21.803359   55203 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0914 18:01:21.803621   55203 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0914 18:02:01.802850   55203 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0914 18:02:01.803087   55203 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0914 18:02:01.803102   55203 kubeadm.go:310] 
	I0914 18:02:01.803149   55203 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0914 18:02:01.803295   55203 kubeadm.go:310] 		timed out waiting for the condition
	I0914 18:02:01.803327   55203 kubeadm.go:310] 
	I0914 18:02:01.803379   55203 kubeadm.go:310] 	This error is likely caused by:
	I0914 18:02:01.803427   55203 kubeadm.go:310] 		- The kubelet is not running
	I0914 18:02:01.803567   55203 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0914 18:02:01.803580   55203 kubeadm.go:310] 
	I0914 18:02:01.803718   55203 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0914 18:02:01.803779   55203 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0914 18:02:01.803827   55203 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0914 18:02:01.803836   55203 kubeadm.go:310] 
	I0914 18:02:01.803950   55203 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0914 18:02:01.804072   55203 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0914 18:02:01.804083   55203 kubeadm.go:310] 
	I0914 18:02:01.804224   55203 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0914 18:02:01.804340   55203 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0914 18:02:01.804438   55203 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0914 18:02:01.804557   55203 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0914 18:02:01.804572   55203 kubeadm.go:310] 
	I0914 18:02:01.806087   55203 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0914 18:02:01.806221   55203 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0914 18:02:01.806323   55203 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0914 18:02:01.806389   55203 kubeadm.go:394] duration metric: took 3m56.571588435s to StartCluster
	I0914 18:02:01.806434   55203 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:02:01.806496   55203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:02:01.852933   55203 cri.go:89] found id: ""
	I0914 18:02:01.852950   55203 logs.go:276] 0 containers: []
	W0914 18:02:01.852958   55203 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:02:01.852965   55203 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:02:01.853030   55203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:02:01.886262   55203 cri.go:89] found id: ""
	I0914 18:02:01.886284   55203 logs.go:276] 0 containers: []
	W0914 18:02:01.886297   55203 logs.go:278] No container was found matching "etcd"
	I0914 18:02:01.886309   55203 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:02:01.886371   55203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:02:01.919915   55203 cri.go:89] found id: ""
	I0914 18:02:01.919942   55203 logs.go:276] 0 containers: []
	W0914 18:02:01.919951   55203 logs.go:278] No container was found matching "coredns"
	I0914 18:02:01.919959   55203 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:02:01.920021   55203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:02:01.953088   55203 cri.go:89] found id: ""
	I0914 18:02:01.953113   55203 logs.go:276] 0 containers: []
	W0914 18:02:01.953122   55203 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:02:01.953127   55203 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:02:01.953180   55203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:02:02.004555   55203 cri.go:89] found id: ""
	I0914 18:02:02.004582   55203 logs.go:276] 0 containers: []
	W0914 18:02:02.004592   55203 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:02:02.004598   55203 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:02:02.004657   55203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:02:02.035841   55203 cri.go:89] found id: ""
	I0914 18:02:02.035875   55203 logs.go:276] 0 containers: []
	W0914 18:02:02.035887   55203 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:02:02.035893   55203 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:02:02.035942   55203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:02:02.068522   55203 cri.go:89] found id: ""
	I0914 18:02:02.068562   55203 logs.go:276] 0 containers: []
	W0914 18:02:02.068574   55203 logs.go:278] No container was found matching "kindnet"
	I0914 18:02:02.068586   55203 logs.go:123] Gathering logs for kubelet ...
	I0914 18:02:02.068603   55203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:02:02.120331   55203 logs.go:123] Gathering logs for dmesg ...
	I0914 18:02:02.120371   55203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:02:02.133118   55203 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:02:02.133150   55203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:02:02.250339   55203 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:02:02.250363   55203 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:02:02.250378   55203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:02:02.353463   55203 logs.go:123] Gathering logs for container status ...
	I0914 18:02:02.353496   55203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0914 18:02:02.390226   55203 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0914 18:02:02.390289   55203 out.go:270] * 
	W0914 18:02:02.390342   55203 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0914 18:02:02.390356   55203 out.go:270] * 
	W0914 18:02:02.391282   55203 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0914 18:02:02.393941   55203 out.go:201] 
	W0914 18:02:02.394825   55203 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0914 18:02:02.394879   55203 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0914 18:02:02.394904   55203 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0914 18:02:02.396166   55203 out.go:201] 

                                                
                                                
** /stderr **
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-linux-amd64 start -p kubernetes-upgrade-470019 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-470019
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-470019: (6.300851304s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-470019 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-470019 status --format={{.Host}}: exit status 7 (64.177296ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-470019 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-470019 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (36.696938448s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-470019 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-470019 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-470019 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio: exit status 106 (77.238774ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-470019] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19643
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19643-8806/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19643-8806/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.1 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-470019
	    minikube start -p kubernetes-upgrade-470019 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-4700192 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.1, by running:
	    
	    minikube start -p kubernetes-upgrade-470019 --kubernetes-version=v1.31.1
	    

                                                
                                                
** /stderr **
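Exit status 106 here is the K8S_DOWNGRADE_UNSUPPORTED refusal the test expects at this step. Before the retry at the supported version, the cluster can be confirmed to still be running v1.31.1 with the same check the test issued earlier, a minimal sketch reusing this run's profile context:

	kubectl --context kubernetes-upgrade-470019 version --output=json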
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-470019 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-470019 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (13.210812738s)
version_upgrade_test.go:279: *** TestKubernetesUpgrade FAILED at 2024-09-14 18:02:58.857787731 +0000 UTC m=+4753.379521697
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p kubernetes-upgrade-470019 -n kubernetes-upgrade-470019
helpers_test.go:244: <<< TestKubernetesUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestKubernetesUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-470019 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p kubernetes-upgrade-470019 logs -n 25: (1.31050992s)
helpers_test.go:252: TestKubernetesUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|-------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                          |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|-------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p cilium-691590 sudo                                 | cilium-691590             | jenkins | v1.34.0 | 14 Sep 24 17:57 UTC |                     |
	|         | systemctl status crio --all                           |                           |         |         |                     |                     |
	|         | --full --no-pager                                     |                           |         |         |                     |                     |
	| ssh     | -p cilium-691590 sudo                                 | cilium-691590             | jenkins | v1.34.0 | 14 Sep 24 17:57 UTC |                     |
	|         | systemctl cat crio --no-pager                         |                           |         |         |                     |                     |
	| ssh     | -p cilium-691590 sudo find                            | cilium-691590             | jenkins | v1.34.0 | 14 Sep 24 17:57 UTC |                     |
	|         | /etc/crio -type f -exec sh -c                         |                           |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                  |                           |         |         |                     |                     |
	| ssh     | -p cilium-691590 sudo crio                            | cilium-691590             | jenkins | v1.34.0 | 14 Sep 24 17:57 UTC |                     |
	|         | config                                                |                           |         |         |                     |                     |
	| delete  | -p cilium-691590                                      | cilium-691590             | jenkins | v1.34.0 | 14 Sep 24 17:57 UTC | 14 Sep 24 17:57 UTC |
	| start   | -p stopped-upgrade-319416                             | minikube                  | jenkins | v1.26.0 | 14 Sep 24 17:57 UTC | 14 Sep 24 17:59 UTC |
	|         | --memory=2200 --vm-driver=kvm2                        |                           |         |         |                     |                     |
	|         |  --container-runtime=crio                             |                           |         |         |                     |                     |
	| ssh     | cert-options-476980 ssh                               | cert-options-476980       | jenkins | v1.34.0 | 14 Sep 24 17:58 UTC | 14 Sep 24 17:58 UTC |
	|         | openssl x509 -text -noout -in                         |                           |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt                 |                           |         |         |                     |                     |
	| ssh     | -p cert-options-476980 -- sudo                        | cert-options-476980       | jenkins | v1.34.0 | 14 Sep 24 17:58 UTC | 14 Sep 24 17:58 UTC |
	|         | cat /etc/kubernetes/admin.conf                        |                           |         |         |                     |                     |
	| delete  | -p cert-options-476980                                | cert-options-476980       | jenkins | v1.34.0 | 14 Sep 24 17:58 UTC | 14 Sep 24 17:58 UTC |
	| start   | -p old-k8s-version-556121                             | old-k8s-version-556121    | jenkins | v1.34.0 | 14 Sep 24 17:58 UTC |                     |
	|         | --memory=2200                                         |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                         |                           |         |         |                     |                     |
	|         | --kvm-network=default                                 |                           |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                         |                           |         |         |                     |                     |
	|         | --disable-driver-mounts                               |                           |         |         |                     |                     |
	|         | --keep-context=false                                  |                           |         |         |                     |                     |
	|         | --driver=kvm2                                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio                              |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                          |                           |         |         |                     |                     |
	| start   | -p cert-expiration-724454                             | cert-expiration-724454    | jenkins | v1.34.0 | 14 Sep 24 17:59 UTC | 14 Sep 24 17:59 UTC |
	|         | --memory=2048                                         |                           |         |         |                     |                     |
	|         | --cert-expiration=8760h                               |                           |         |         |                     |                     |
	|         | --driver=kvm2                                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio                              |                           |         |         |                     |                     |
	| stop    | stopped-upgrade-319416 stop                           | minikube                  | jenkins | v1.26.0 | 14 Sep 24 17:59 UTC | 14 Sep 24 17:59 UTC |
	| start   | -p stopped-upgrade-319416                             | stopped-upgrade-319416    | jenkins | v1.34.0 | 14 Sep 24 17:59 UTC | 14 Sep 24 17:59 UTC |
	|         | --memory=2200                                         |                           |         |         |                     |                     |
	|         | --alsologtostderr                                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio                              |                           |         |         |                     |                     |
	| delete  | -p cert-expiration-724454                             | cert-expiration-724454    | jenkins | v1.34.0 | 14 Sep 24 17:59 UTC | 14 Sep 24 17:59 UTC |
	| start   | -p no-preload-168587                                  | no-preload-168587         | jenkins | v1.34.0 | 14 Sep 24 17:59 UTC | 14 Sep 24 18:00 UTC |
	|         | --memory=2200                                         |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                         |                           |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                         |                           |         |         |                     |                     |
	|         |  --container-runtime=crio                             |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                          |                           |         |         |                     |                     |
	| delete  | -p stopped-upgrade-319416                             | stopped-upgrade-319416    | jenkins | v1.34.0 | 14 Sep 24 18:00 UTC | 14 Sep 24 18:00 UTC |
	| start   | -p embed-certs-044534                                 | embed-certs-044534        | jenkins | v1.34.0 | 14 Sep 24 18:00 UTC | 14 Sep 24 18:01 UTC |
	|         | --memory=2200                                         |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                         |                           |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                           |                           |         |         |                     |                     |
	|         |  --container-runtime=crio                             |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                          |                           |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-168587            | no-preload-168587         | jenkins | v1.34.0 | 14 Sep 24 18:00 UTC | 14 Sep 24 18:00 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4 |                           |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                |                           |         |         |                     |                     |
	| stop    | -p no-preload-168587                                  | no-preload-168587         | jenkins | v1.34.0 | 14 Sep 24 18:00 UTC |                     |
	|         | --alsologtostderr -v=3                                |                           |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-044534           | embed-certs-044534        | jenkins | v1.34.0 | 14 Sep 24 18:01 UTC | 14 Sep 24 18:01 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4 |                           |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                |                           |         |         |                     |                     |
	| stop    | -p embed-certs-044534                                 | embed-certs-044534        | jenkins | v1.34.0 | 14 Sep 24 18:01 UTC |                     |
	|         | --alsologtostderr -v=3                                |                           |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-470019                          | kubernetes-upgrade-470019 | jenkins | v1.34.0 | 14 Sep 24 18:02 UTC | 14 Sep 24 18:02 UTC |
	| start   | -p kubernetes-upgrade-470019                          | kubernetes-upgrade-470019 | jenkins | v1.34.0 | 14 Sep 24 18:02 UTC | 14 Sep 24 18:02 UTC |
	|         | --memory=2200                                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                          |                           |         |         |                     |                     |
	|         | --alsologtostderr                                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio                              |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-470019                          | kubernetes-upgrade-470019 | jenkins | v1.34.0 | 14 Sep 24 18:02 UTC |                     |
	|         | --memory=2200                                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                          |                           |         |         |                     |                     |
	|         | --driver=kvm2                                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio                              |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-470019                          | kubernetes-upgrade-470019 | jenkins | v1.34.0 | 14 Sep 24 18:02 UTC | 14 Sep 24 18:02 UTC |
	|         | --memory=2200                                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                          |                           |         |         |                     |                     |
	|         | --alsologtostderr                                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio                              |                           |         |         |                     |                     |
	|---------|-------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/14 18:02:45
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0914 18:02:45.688083   61323 out.go:345] Setting OutFile to fd 1 ...
	I0914 18:02:45.688196   61323 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 18:02:45.688205   61323 out.go:358] Setting ErrFile to fd 2...
	I0914 18:02:45.688209   61323 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 18:02:45.688393   61323 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19643-8806/.minikube/bin
	I0914 18:02:45.688915   61323 out.go:352] Setting JSON to false
	I0914 18:02:45.689920   61323 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":6310,"bootTime":1726330656,"procs":213,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0914 18:02:45.690018   61323 start.go:139] virtualization: kvm guest
	I0914 18:02:45.692157   61323 out.go:177] * [kubernetes-upgrade-470019] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0914 18:02:45.694126   61323 notify.go:220] Checking for updates...
	I0914 18:02:45.694146   61323 out.go:177]   - MINIKUBE_LOCATION=19643
	I0914 18:02:45.695545   61323 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0914 18:02:45.696792   61323 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19643-8806/kubeconfig
	I0914 18:02:45.697945   61323 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19643-8806/.minikube
	I0914 18:02:45.699466   61323 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0914 18:02:45.700926   61323 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0914 18:02:45.702554   61323 config.go:182] Loaded profile config "kubernetes-upgrade-470019": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0914 18:02:45.702955   61323 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 18:02:45.703017   61323 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 18:02:45.718506   61323 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43463
	I0914 18:02:45.719052   61323 main.go:141] libmachine: () Calling .GetVersion
	I0914 18:02:45.719698   61323 main.go:141] libmachine: Using API Version  1
	I0914 18:02:45.719721   61323 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 18:02:45.720118   61323 main.go:141] libmachine: () Calling .GetMachineName
	I0914 18:02:45.720343   61323 main.go:141] libmachine: (kubernetes-upgrade-470019) Calling .DriverName
	I0914 18:02:45.720594   61323 driver.go:394] Setting default libvirt URI to qemu:///system
	I0914 18:02:45.720954   61323 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 18:02:45.720997   61323 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 18:02:45.736307   61323 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36021
	I0914 18:02:45.736856   61323 main.go:141] libmachine: () Calling .GetVersion
	I0914 18:02:45.737352   61323 main.go:141] libmachine: Using API Version  1
	I0914 18:02:45.737379   61323 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 18:02:45.737689   61323 main.go:141] libmachine: () Calling .GetMachineName
	I0914 18:02:45.737885   61323 main.go:141] libmachine: (kubernetes-upgrade-470019) Calling .DriverName
	I0914 18:02:45.774380   61323 out.go:177] * Using the kvm2 driver based on existing profile
	I0914 18:02:45.775589   61323 start.go:297] selected driver: kvm2
	I0914 18:02:45.775603   61323 start.go:901] validating driver "kvm2" against &{Name:kubernetes-upgrade-470019 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19643/minikube-v1.34.0-1726281733-19643-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726281268-19643@sha256:cce8be0c1ac4e3d852132008ef1cc1dcf5b79f708d025db83f146ae65db32e8e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfi
g:{KubernetesVersion:v1.31.1 ClusterName:kubernetes-upgrade-470019 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.202 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Mou
ntPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0914 18:02:45.775719   61323 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0914 18:02:45.776403   61323 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 18:02:45.776482   61323 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19643-8806/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0914 18:02:45.792238   61323 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0914 18:02:45.792659   61323 cni.go:84] Creating CNI manager for ""
	I0914 18:02:45.792705   61323 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0914 18:02:45.792749   61323 start.go:340] cluster config:
	{Name:kubernetes-upgrade-470019 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19643/minikube-v1.34.0-1726281733-19643-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726281268-19643@sha256:cce8be0c1ac4e3d852132008ef1cc1dcf5b79f708d025db83f146ae65db32e8e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:kubernetes-upgrade-470019 Namespace:d
efault APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.202 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false
DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0914 18:02:45.792854   61323 iso.go:125] acquiring lock: {Name:mk538f7a4abc7956f82511183088cbfac1e66ebb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 18:02:45.794800   61323 out.go:177] * Starting "kubernetes-upgrade-470019" primary control-plane node in "kubernetes-upgrade-470019" cluster
	I0914 18:02:45.796083   61323 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0914 18:02:45.796130   61323 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19643-8806/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0914 18:02:45.796144   61323 cache.go:56] Caching tarball of preloaded images
	I0914 18:02:45.796242   61323 preload.go:172] Found /home/jenkins/minikube-integration/19643-8806/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0914 18:02:45.796256   61323 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0914 18:02:45.796390   61323 profile.go:143] Saving config to /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/kubernetes-upgrade-470019/config.json ...
	I0914 18:02:45.796612   61323 start.go:360] acquireMachinesLock for kubernetes-upgrade-470019: {Name:mk26748ac63472c3f8b6f3848c12e76160c2970c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0914 18:02:45.796660   61323 start.go:364] duration metric: took 27.149µs to acquireMachinesLock for "kubernetes-upgrade-470019"
	I0914 18:02:45.796681   61323 start.go:96] Skipping create...Using existing machine configuration
	I0914 18:02:45.796692   61323 fix.go:54] fixHost starting: 
	I0914 18:02:45.797015   61323 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 18:02:45.797052   61323 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 18:02:45.812092   61323 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42759
	I0914 18:02:45.812591   61323 main.go:141] libmachine: () Calling .GetVersion
	I0914 18:02:45.813102   61323 main.go:141] libmachine: Using API Version  1
	I0914 18:02:45.813123   61323 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 18:02:45.813444   61323 main.go:141] libmachine: () Calling .GetMachineName
	I0914 18:02:45.813642   61323 main.go:141] libmachine: (kubernetes-upgrade-470019) Calling .DriverName
	I0914 18:02:45.813803   61323 main.go:141] libmachine: (kubernetes-upgrade-470019) Calling .GetState
	I0914 18:02:45.815710   61323 fix.go:112] recreateIfNeeded on kubernetes-upgrade-470019: state=Running err=<nil>
	W0914 18:02:45.815735   61323 fix.go:138] unexpected machine state, will restart: <nil>
	I0914 18:02:45.817985   61323 out.go:177] * Updating the running kvm2 "kubernetes-upgrade-470019" VM ...
	I0914 18:02:45.819854   61323 machine.go:93] provisionDockerMachine start ...
	I0914 18:02:45.819887   61323 main.go:141] libmachine: (kubernetes-upgrade-470019) Calling .DriverName
	I0914 18:02:45.820180   61323 main.go:141] libmachine: (kubernetes-upgrade-470019) Calling .GetSSHHostname
	I0914 18:02:45.822918   61323 main.go:141] libmachine: (kubernetes-upgrade-470019) DBG | domain kubernetes-upgrade-470019 has defined MAC address 52:54:00:5c:27:09 in network mk-kubernetes-upgrade-470019
	I0914 18:02:45.823395   61323 main.go:141] libmachine: (kubernetes-upgrade-470019) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:27:09", ip: ""} in network mk-kubernetes-upgrade-470019: {Iface:virbr4 ExpiryTime:2024-09-14 19:02:19 +0000 UTC Type:0 Mac:52:54:00:5c:27:09 Iaid: IPaddr:192.168.72.202 Prefix:24 Hostname:kubernetes-upgrade-470019 Clientid:01:52:54:00:5c:27:09}
	I0914 18:02:45.823418   61323 main.go:141] libmachine: (kubernetes-upgrade-470019) DBG | domain kubernetes-upgrade-470019 has defined IP address 192.168.72.202 and MAC address 52:54:00:5c:27:09 in network mk-kubernetes-upgrade-470019
	I0914 18:02:45.823537   61323 main.go:141] libmachine: (kubernetes-upgrade-470019) Calling .GetSSHPort
	I0914 18:02:45.823756   61323 main.go:141] libmachine: (kubernetes-upgrade-470019) Calling .GetSSHKeyPath
	I0914 18:02:45.823966   61323 main.go:141] libmachine: (kubernetes-upgrade-470019) Calling .GetSSHKeyPath
	I0914 18:02:45.824101   61323 main.go:141] libmachine: (kubernetes-upgrade-470019) Calling .GetSSHUsername
	I0914 18:02:45.824252   61323 main.go:141] libmachine: Using SSH client type: native
	I0914 18:02:45.824445   61323 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.72.202 22 <nil> <nil>}
	I0914 18:02:45.824463   61323 main.go:141] libmachine: About to run SSH command:
	hostname
	I0914 18:02:45.934449   61323 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-470019
	
	I0914 18:02:45.934486   61323 main.go:141] libmachine: (kubernetes-upgrade-470019) Calling .GetMachineName
	I0914 18:02:45.934715   61323 buildroot.go:166] provisioning hostname "kubernetes-upgrade-470019"
	I0914 18:02:45.934738   61323 main.go:141] libmachine: (kubernetes-upgrade-470019) Calling .GetMachineName
	I0914 18:02:45.934907   61323 main.go:141] libmachine: (kubernetes-upgrade-470019) Calling .GetSSHHostname
	I0914 18:02:45.937461   61323 main.go:141] libmachine: (kubernetes-upgrade-470019) DBG | domain kubernetes-upgrade-470019 has defined MAC address 52:54:00:5c:27:09 in network mk-kubernetes-upgrade-470019
	I0914 18:02:45.937863   61323 main.go:141] libmachine: (kubernetes-upgrade-470019) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:27:09", ip: ""} in network mk-kubernetes-upgrade-470019: {Iface:virbr4 ExpiryTime:2024-09-14 19:02:19 +0000 UTC Type:0 Mac:52:54:00:5c:27:09 Iaid: IPaddr:192.168.72.202 Prefix:24 Hostname:kubernetes-upgrade-470019 Clientid:01:52:54:00:5c:27:09}
	I0914 18:02:45.937903   61323 main.go:141] libmachine: (kubernetes-upgrade-470019) DBG | domain kubernetes-upgrade-470019 has defined IP address 192.168.72.202 and MAC address 52:54:00:5c:27:09 in network mk-kubernetes-upgrade-470019
	I0914 18:02:45.937983   61323 main.go:141] libmachine: (kubernetes-upgrade-470019) Calling .GetSSHPort
	I0914 18:02:45.938135   61323 main.go:141] libmachine: (kubernetes-upgrade-470019) Calling .GetSSHKeyPath
	I0914 18:02:45.938301   61323 main.go:141] libmachine: (kubernetes-upgrade-470019) Calling .GetSSHKeyPath
	I0914 18:02:45.938438   61323 main.go:141] libmachine: (kubernetes-upgrade-470019) Calling .GetSSHUsername
	I0914 18:02:45.938602   61323 main.go:141] libmachine: Using SSH client type: native
	I0914 18:02:45.938818   61323 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.72.202 22 <nil> <nil>}
	I0914 18:02:45.938833   61323 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-470019 && echo "kubernetes-upgrade-470019" | sudo tee /etc/hostname
	I0914 18:02:46.069526   61323 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-470019
	
	I0914 18:02:46.069552   61323 main.go:141] libmachine: (kubernetes-upgrade-470019) Calling .GetSSHHostname
	I0914 18:02:46.072253   61323 main.go:141] libmachine: (kubernetes-upgrade-470019) DBG | domain kubernetes-upgrade-470019 has defined MAC address 52:54:00:5c:27:09 in network mk-kubernetes-upgrade-470019
	I0914 18:02:46.072573   61323 main.go:141] libmachine: (kubernetes-upgrade-470019) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:27:09", ip: ""} in network mk-kubernetes-upgrade-470019: {Iface:virbr4 ExpiryTime:2024-09-14 19:02:19 +0000 UTC Type:0 Mac:52:54:00:5c:27:09 Iaid: IPaddr:192.168.72.202 Prefix:24 Hostname:kubernetes-upgrade-470019 Clientid:01:52:54:00:5c:27:09}
	I0914 18:02:46.072599   61323 main.go:141] libmachine: (kubernetes-upgrade-470019) DBG | domain kubernetes-upgrade-470019 has defined IP address 192.168.72.202 and MAC address 52:54:00:5c:27:09 in network mk-kubernetes-upgrade-470019
	I0914 18:02:46.072763   61323 main.go:141] libmachine: (kubernetes-upgrade-470019) Calling .GetSSHPort
	I0914 18:02:46.072980   61323 main.go:141] libmachine: (kubernetes-upgrade-470019) Calling .GetSSHKeyPath
	I0914 18:02:46.073146   61323 main.go:141] libmachine: (kubernetes-upgrade-470019) Calling .GetSSHKeyPath
	I0914 18:02:46.073320   61323 main.go:141] libmachine: (kubernetes-upgrade-470019) Calling .GetSSHUsername
	I0914 18:02:46.073475   61323 main.go:141] libmachine: Using SSH client type: native
	I0914 18:02:46.073647   61323 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.72.202 22 <nil> <nil>}
	I0914 18:02:46.073663   61323 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-470019' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-470019/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-470019' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0914 18:02:46.182689   61323 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0914 18:02:46.182717   61323 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19643-8806/.minikube CaCertPath:/home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19643-8806/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19643-8806/.minikube}
	I0914 18:02:46.182736   61323 buildroot.go:174] setting up certificates
	I0914 18:02:46.182744   61323 provision.go:84] configureAuth start
	I0914 18:02:46.182753   61323 main.go:141] libmachine: (kubernetes-upgrade-470019) Calling .GetMachineName
	I0914 18:02:46.183028   61323 main.go:141] libmachine: (kubernetes-upgrade-470019) Calling .GetIP
	I0914 18:02:46.186025   61323 main.go:141] libmachine: (kubernetes-upgrade-470019) DBG | domain kubernetes-upgrade-470019 has defined MAC address 52:54:00:5c:27:09 in network mk-kubernetes-upgrade-470019
	I0914 18:02:46.186452   61323 main.go:141] libmachine: (kubernetes-upgrade-470019) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:27:09", ip: ""} in network mk-kubernetes-upgrade-470019: {Iface:virbr4 ExpiryTime:2024-09-14 19:02:19 +0000 UTC Type:0 Mac:52:54:00:5c:27:09 Iaid: IPaddr:192.168.72.202 Prefix:24 Hostname:kubernetes-upgrade-470019 Clientid:01:52:54:00:5c:27:09}
	I0914 18:02:46.186480   61323 main.go:141] libmachine: (kubernetes-upgrade-470019) DBG | domain kubernetes-upgrade-470019 has defined IP address 192.168.72.202 and MAC address 52:54:00:5c:27:09 in network mk-kubernetes-upgrade-470019
	I0914 18:02:46.186730   61323 main.go:141] libmachine: (kubernetes-upgrade-470019) Calling .GetSSHHostname
	I0914 18:02:46.189500   61323 main.go:141] libmachine: (kubernetes-upgrade-470019) DBG | domain kubernetes-upgrade-470019 has defined MAC address 52:54:00:5c:27:09 in network mk-kubernetes-upgrade-470019
	I0914 18:02:46.189995   61323 main.go:141] libmachine: (kubernetes-upgrade-470019) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:27:09", ip: ""} in network mk-kubernetes-upgrade-470019: {Iface:virbr4 ExpiryTime:2024-09-14 19:02:19 +0000 UTC Type:0 Mac:52:54:00:5c:27:09 Iaid: IPaddr:192.168.72.202 Prefix:24 Hostname:kubernetes-upgrade-470019 Clientid:01:52:54:00:5c:27:09}
	I0914 18:02:46.190027   61323 main.go:141] libmachine: (kubernetes-upgrade-470019) DBG | domain kubernetes-upgrade-470019 has defined IP address 192.168.72.202 and MAC address 52:54:00:5c:27:09 in network mk-kubernetes-upgrade-470019
	I0914 18:02:46.190200   61323 provision.go:143] copyHostCerts
	I0914 18:02:46.190251   61323 exec_runner.go:144] found /home/jenkins/minikube-integration/19643-8806/.minikube/ca.pem, removing ...
	I0914 18:02:46.190260   61323 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19643-8806/.minikube/ca.pem
	I0914 18:02:46.190315   61323 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19643-8806/.minikube/ca.pem (1082 bytes)
	I0914 18:02:46.190418   61323 exec_runner.go:144] found /home/jenkins/minikube-integration/19643-8806/.minikube/cert.pem, removing ...
	I0914 18:02:46.190425   61323 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19643-8806/.minikube/cert.pem
	I0914 18:02:46.190446   61323 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19643-8806/.minikube/cert.pem (1123 bytes)
	I0914 18:02:46.190511   61323 exec_runner.go:144] found /home/jenkins/minikube-integration/19643-8806/.minikube/key.pem, removing ...
	I0914 18:02:46.190517   61323 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19643-8806/.minikube/key.pem
	I0914 18:02:46.190536   61323 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19643-8806/.minikube/key.pem (1675 bytes)
	I0914 18:02:46.190594   61323 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19643-8806/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-470019 san=[127.0.0.1 192.168.72.202 kubernetes-upgrade-470019 localhost minikube]
	I0914 18:02:46.238582   61323 provision.go:177] copyRemoteCerts
	I0914 18:02:46.238638   61323 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0914 18:02:46.238662   61323 main.go:141] libmachine: (kubernetes-upgrade-470019) Calling .GetSSHHostname
	I0914 18:02:46.241590   61323 main.go:141] libmachine: (kubernetes-upgrade-470019) DBG | domain kubernetes-upgrade-470019 has defined MAC address 52:54:00:5c:27:09 in network mk-kubernetes-upgrade-470019
	I0914 18:02:46.241937   61323 main.go:141] libmachine: (kubernetes-upgrade-470019) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:27:09", ip: ""} in network mk-kubernetes-upgrade-470019: {Iface:virbr4 ExpiryTime:2024-09-14 19:02:19 +0000 UTC Type:0 Mac:52:54:00:5c:27:09 Iaid: IPaddr:192.168.72.202 Prefix:24 Hostname:kubernetes-upgrade-470019 Clientid:01:52:54:00:5c:27:09}
	I0914 18:02:46.241958   61323 main.go:141] libmachine: (kubernetes-upgrade-470019) DBG | domain kubernetes-upgrade-470019 has defined IP address 192.168.72.202 and MAC address 52:54:00:5c:27:09 in network mk-kubernetes-upgrade-470019
	I0914 18:02:46.242181   61323 main.go:141] libmachine: (kubernetes-upgrade-470019) Calling .GetSSHPort
	I0914 18:02:46.242378   61323 main.go:141] libmachine: (kubernetes-upgrade-470019) Calling .GetSSHKeyPath
	I0914 18:02:46.242525   61323 main.go:141] libmachine: (kubernetes-upgrade-470019) Calling .GetSSHUsername
	I0914 18:02:46.242688   61323 sshutil.go:53] new ssh client: &{IP:192.168.72.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/kubernetes-upgrade-470019/id_rsa Username:docker}
	I0914 18:02:46.328595   61323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0914 18:02:46.357024   61323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0914 18:02:46.385632   61323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0914 18:02:46.420827   61323 provision.go:87] duration metric: took 238.070061ms to configureAuth
	I0914 18:02:46.420854   61323 buildroot.go:189] setting minikube options for container-runtime
	I0914 18:02:46.421128   61323 config.go:182] Loaded profile config "kubernetes-upgrade-470019": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0914 18:02:46.421256   61323 main.go:141] libmachine: (kubernetes-upgrade-470019) Calling .GetSSHHostname
	I0914 18:02:46.424203   61323 main.go:141] libmachine: (kubernetes-upgrade-470019) DBG | domain kubernetes-upgrade-470019 has defined MAC address 52:54:00:5c:27:09 in network mk-kubernetes-upgrade-470019
	I0914 18:02:46.424511   61323 main.go:141] libmachine: (kubernetes-upgrade-470019) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:27:09", ip: ""} in network mk-kubernetes-upgrade-470019: {Iface:virbr4 ExpiryTime:2024-09-14 19:02:19 +0000 UTC Type:0 Mac:52:54:00:5c:27:09 Iaid: IPaddr:192.168.72.202 Prefix:24 Hostname:kubernetes-upgrade-470019 Clientid:01:52:54:00:5c:27:09}
	I0914 18:02:46.424552   61323 main.go:141] libmachine: (kubernetes-upgrade-470019) DBG | domain kubernetes-upgrade-470019 has defined IP address 192.168.72.202 and MAC address 52:54:00:5c:27:09 in network mk-kubernetes-upgrade-470019
	I0914 18:02:46.424779   61323 main.go:141] libmachine: (kubernetes-upgrade-470019) Calling .GetSSHPort
	I0914 18:02:46.424984   61323 main.go:141] libmachine: (kubernetes-upgrade-470019) Calling .GetSSHKeyPath
	I0914 18:02:46.425148   61323 main.go:141] libmachine: (kubernetes-upgrade-470019) Calling .GetSSHKeyPath
	I0914 18:02:46.425295   61323 main.go:141] libmachine: (kubernetes-upgrade-470019) Calling .GetSSHUsername
	I0914 18:02:46.425450   61323 main.go:141] libmachine: Using SSH client type: native
	I0914 18:02:46.425643   61323 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.72.202 22 <nil> <nil>}
	I0914 18:02:46.425659   61323 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0914 18:02:47.264208   61323 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0914 18:02:47.264232   61323 machine.go:96] duration metric: took 1.444358893s to provisionDockerMachine
	I0914 18:02:47.264244   61323 start.go:293] postStartSetup for "kubernetes-upgrade-470019" (driver="kvm2")
	I0914 18:02:47.264253   61323 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0914 18:02:47.264270   61323 main.go:141] libmachine: (kubernetes-upgrade-470019) Calling .DriverName
	I0914 18:02:47.264581   61323 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0914 18:02:47.264636   61323 main.go:141] libmachine: (kubernetes-upgrade-470019) Calling .GetSSHHostname
	I0914 18:02:47.267237   61323 main.go:141] libmachine: (kubernetes-upgrade-470019) DBG | domain kubernetes-upgrade-470019 has defined MAC address 52:54:00:5c:27:09 in network mk-kubernetes-upgrade-470019
	I0914 18:02:47.267752   61323 main.go:141] libmachine: (kubernetes-upgrade-470019) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:27:09", ip: ""} in network mk-kubernetes-upgrade-470019: {Iface:virbr4 ExpiryTime:2024-09-14 19:02:19 +0000 UTC Type:0 Mac:52:54:00:5c:27:09 Iaid: IPaddr:192.168.72.202 Prefix:24 Hostname:kubernetes-upgrade-470019 Clientid:01:52:54:00:5c:27:09}
	I0914 18:02:47.267780   61323 main.go:141] libmachine: (kubernetes-upgrade-470019) DBG | domain kubernetes-upgrade-470019 has defined IP address 192.168.72.202 and MAC address 52:54:00:5c:27:09 in network mk-kubernetes-upgrade-470019
	I0914 18:02:47.267954   61323 main.go:141] libmachine: (kubernetes-upgrade-470019) Calling .GetSSHPort
	I0914 18:02:47.268142   61323 main.go:141] libmachine: (kubernetes-upgrade-470019) Calling .GetSSHKeyPath
	I0914 18:02:47.268322   61323 main.go:141] libmachine: (kubernetes-upgrade-470019) Calling .GetSSHUsername
	I0914 18:02:47.268436   61323 sshutil.go:53] new ssh client: &{IP:192.168.72.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/kubernetes-upgrade-470019/id_rsa Username:docker}
	I0914 18:02:47.352183   61323 ssh_runner.go:195] Run: cat /etc/os-release
	I0914 18:02:47.356389   61323 info.go:137] Remote host: Buildroot 2023.02.9
	I0914 18:02:47.356414   61323 filesync.go:126] Scanning /home/jenkins/minikube-integration/19643-8806/.minikube/addons for local assets ...
	I0914 18:02:47.356473   61323 filesync.go:126] Scanning /home/jenkins/minikube-integration/19643-8806/.minikube/files for local assets ...
	I0914 18:02:47.356554   61323 filesync.go:149] local asset: /home/jenkins/minikube-integration/19643-8806/.minikube/files/etc/ssl/certs/160162.pem -> 160162.pem in /etc/ssl/certs
	I0914 18:02:47.356656   61323 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0914 18:02:47.365760   61323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/files/etc/ssl/certs/160162.pem --> /etc/ssl/certs/160162.pem (1708 bytes)
	I0914 18:02:47.390487   61323 start.go:296] duration metric: took 126.231052ms for postStartSetup
	I0914 18:02:47.390543   61323 fix.go:56] duration metric: took 1.593832875s for fixHost
	I0914 18:02:47.390564   61323 main.go:141] libmachine: (kubernetes-upgrade-470019) Calling .GetSSHHostname
	I0914 18:02:47.393184   61323 main.go:141] libmachine: (kubernetes-upgrade-470019) DBG | domain kubernetes-upgrade-470019 has defined MAC address 52:54:00:5c:27:09 in network mk-kubernetes-upgrade-470019
	I0914 18:02:47.393548   61323 main.go:141] libmachine: (kubernetes-upgrade-470019) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:27:09", ip: ""} in network mk-kubernetes-upgrade-470019: {Iface:virbr4 ExpiryTime:2024-09-14 19:02:19 +0000 UTC Type:0 Mac:52:54:00:5c:27:09 Iaid: IPaddr:192.168.72.202 Prefix:24 Hostname:kubernetes-upgrade-470019 Clientid:01:52:54:00:5c:27:09}
	I0914 18:02:47.393579   61323 main.go:141] libmachine: (kubernetes-upgrade-470019) DBG | domain kubernetes-upgrade-470019 has defined IP address 192.168.72.202 and MAC address 52:54:00:5c:27:09 in network mk-kubernetes-upgrade-470019
	I0914 18:02:47.393722   61323 main.go:141] libmachine: (kubernetes-upgrade-470019) Calling .GetSSHPort
	I0914 18:02:47.393895   61323 main.go:141] libmachine: (kubernetes-upgrade-470019) Calling .GetSSHKeyPath
	I0914 18:02:47.394018   61323 main.go:141] libmachine: (kubernetes-upgrade-470019) Calling .GetSSHKeyPath
	I0914 18:02:47.394151   61323 main.go:141] libmachine: (kubernetes-upgrade-470019) Calling .GetSSHUsername
	I0914 18:02:47.394292   61323 main.go:141] libmachine: Using SSH client type: native
	I0914 18:02:47.394494   61323 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.72.202 22 <nil> <nil>}
	I0914 18:02:47.394510   61323 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0914 18:02:47.510913   61323 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726336967.501599875
	
	I0914 18:02:47.510939   61323 fix.go:216] guest clock: 1726336967.501599875
	I0914 18:02:47.510947   61323 fix.go:229] Guest: 2024-09-14 18:02:47.501599875 +0000 UTC Remote: 2024-09-14 18:02:47.390548421 +0000 UTC m=+1.739875523 (delta=111.051454ms)
	I0914 18:02:47.510966   61323 fix.go:200] guest clock delta is within tolerance: 111.051454ms
	I0914 18:02:47.510971   61323 start.go:83] releasing machines lock for "kubernetes-upgrade-470019", held for 1.714298712s
	I0914 18:02:47.510988   61323 main.go:141] libmachine: (kubernetes-upgrade-470019) Calling .DriverName
	I0914 18:02:47.511210   61323 main.go:141] libmachine: (kubernetes-upgrade-470019) Calling .GetIP
	I0914 18:02:47.514653   61323 main.go:141] libmachine: (kubernetes-upgrade-470019) DBG | domain kubernetes-upgrade-470019 has defined MAC address 52:54:00:5c:27:09 in network mk-kubernetes-upgrade-470019
	I0914 18:02:47.515206   61323 main.go:141] libmachine: (kubernetes-upgrade-470019) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:27:09", ip: ""} in network mk-kubernetes-upgrade-470019: {Iface:virbr4 ExpiryTime:2024-09-14 19:02:19 +0000 UTC Type:0 Mac:52:54:00:5c:27:09 Iaid: IPaddr:192.168.72.202 Prefix:24 Hostname:kubernetes-upgrade-470019 Clientid:01:52:54:00:5c:27:09}
	I0914 18:02:47.515247   61323 main.go:141] libmachine: (kubernetes-upgrade-470019) DBG | domain kubernetes-upgrade-470019 has defined IP address 192.168.72.202 and MAC address 52:54:00:5c:27:09 in network mk-kubernetes-upgrade-470019
	I0914 18:02:47.515434   61323 main.go:141] libmachine: (kubernetes-upgrade-470019) Calling .DriverName
	I0914 18:02:47.515925   61323 main.go:141] libmachine: (kubernetes-upgrade-470019) Calling .DriverName
	I0914 18:02:47.516079   61323 main.go:141] libmachine: (kubernetes-upgrade-470019) Calling .DriverName
	I0914 18:02:47.516194   61323 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0914 18:02:47.516237   61323 main.go:141] libmachine: (kubernetes-upgrade-470019) Calling .GetSSHHostname
	I0914 18:02:47.516252   61323 ssh_runner.go:195] Run: cat /version.json
	I0914 18:02:47.516274   61323 main.go:141] libmachine: (kubernetes-upgrade-470019) Calling .GetSSHHostname
	I0914 18:02:47.518984   61323 main.go:141] libmachine: (kubernetes-upgrade-470019) DBG | domain kubernetes-upgrade-470019 has defined MAC address 52:54:00:5c:27:09 in network mk-kubernetes-upgrade-470019
	I0914 18:02:47.519225   61323 main.go:141] libmachine: (kubernetes-upgrade-470019) DBG | domain kubernetes-upgrade-470019 has defined MAC address 52:54:00:5c:27:09 in network mk-kubernetes-upgrade-470019
	I0914 18:02:47.519385   61323 main.go:141] libmachine: (kubernetes-upgrade-470019) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:27:09", ip: ""} in network mk-kubernetes-upgrade-470019: {Iface:virbr4 ExpiryTime:2024-09-14 19:02:19 +0000 UTC Type:0 Mac:52:54:00:5c:27:09 Iaid: IPaddr:192.168.72.202 Prefix:24 Hostname:kubernetes-upgrade-470019 Clientid:01:52:54:00:5c:27:09}
	I0914 18:02:47.519408   61323 main.go:141] libmachine: (kubernetes-upgrade-470019) DBG | domain kubernetes-upgrade-470019 has defined IP address 192.168.72.202 and MAC address 52:54:00:5c:27:09 in network mk-kubernetes-upgrade-470019
	I0914 18:02:47.519534   61323 main.go:141] libmachine: (kubernetes-upgrade-470019) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:27:09", ip: ""} in network mk-kubernetes-upgrade-470019: {Iface:virbr4 ExpiryTime:2024-09-14 19:02:19 +0000 UTC Type:0 Mac:52:54:00:5c:27:09 Iaid: IPaddr:192.168.72.202 Prefix:24 Hostname:kubernetes-upgrade-470019 Clientid:01:52:54:00:5c:27:09}
	I0914 18:02:47.519570   61323 main.go:141] libmachine: (kubernetes-upgrade-470019) DBG | domain kubernetes-upgrade-470019 has defined IP address 192.168.72.202 and MAC address 52:54:00:5c:27:09 in network mk-kubernetes-upgrade-470019
	I0914 18:02:47.519580   61323 main.go:141] libmachine: (kubernetes-upgrade-470019) Calling .GetSSHPort
	I0914 18:02:47.519774   61323 main.go:141] libmachine: (kubernetes-upgrade-470019) Calling .GetSSHKeyPath
	I0914 18:02:47.519809   61323 main.go:141] libmachine: (kubernetes-upgrade-470019) Calling .GetSSHPort
	I0914 18:02:47.519932   61323 main.go:141] libmachine: (kubernetes-upgrade-470019) Calling .GetSSHKeyPath
	I0914 18:02:47.519952   61323 main.go:141] libmachine: (kubernetes-upgrade-470019) Calling .GetSSHUsername
	I0914 18:02:47.520146   61323 main.go:141] libmachine: (kubernetes-upgrade-470019) Calling .GetSSHUsername
	I0914 18:02:47.520153   61323 sshutil.go:53] new ssh client: &{IP:192.168.72.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/kubernetes-upgrade-470019/id_rsa Username:docker}
	I0914 18:02:47.520280   61323 sshutil.go:53] new ssh client: &{IP:192.168.72.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/kubernetes-upgrade-470019/id_rsa Username:docker}
	I0914 18:02:47.680119   61323 ssh_runner.go:195] Run: systemctl --version
	I0914 18:02:47.729065   61323 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0914 18:02:47.973890   61323 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0914 18:02:47.980309   61323 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0914 18:02:47.980396   61323 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0914 18:02:47.989909   61323 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0914 18:02:47.989945   61323 start.go:495] detecting cgroup driver to use...
	I0914 18:02:47.990022   61323 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0914 18:02:48.007690   61323 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0914 18:02:48.025817   61323 docker.go:217] disabling cri-docker service (if available) ...
	I0914 18:02:48.025886   61323 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0914 18:02:48.053795   61323 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0914 18:02:48.094897   61323 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0914 18:02:48.287963   61323 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0914 18:02:48.434940   61323 docker.go:233] disabling docker service ...
	I0914 18:02:48.435012   61323 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0914 18:02:48.453716   61323 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0914 18:02:48.473510   61323 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0914 18:02:48.649426   61323 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0914 18:02:48.819226   61323 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0914 18:02:48.835209   61323 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0914 18:02:48.854224   61323 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0914 18:02:48.854308   61323 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 18:02:48.870014   61323 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0914 18:02:48.870110   61323 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 18:02:48.883232   61323 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 18:02:48.894438   61323 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 18:02:48.908725   61323 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0914 18:02:48.920103   61323 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 18:02:48.937692   61323 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 18:02:48.955308   61323 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 18:02:48.965709   61323 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0914 18:02:48.978480   61323 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0914 18:02:48.988630   61323 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 18:02:49.144217   61323 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0914 18:02:49.476413   61323 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0914 18:02:49.476515   61323 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0914 18:02:49.481892   61323 start.go:563] Will wait 60s for crictl version
	I0914 18:02:49.481960   61323 ssh_runner.go:195] Run: which crictl
	I0914 18:02:49.485979   61323 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0914 18:02:49.527644   61323 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
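After restarting CRI-O the runner waits up to 60s for /var/run/crio/crio.sock before querying the runtime version. A stdlib-only Go sketch of that wait, assuming the same socket path (not the code minikube itself uses):

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// Poll the unix socket until it accepts connections or the deadline passes.
		deadline := time.Now().Add(60 * time.Second)
		for time.Now().Before(deadline) {
			conn, err := net.DialTimeout("unix", "/var/run/crio/crio.sock", time.Second)
			if err == nil {
				conn.Close()
				fmt.Println("crio.sock is accepting connections")
				return
			}
			time.Sleep(500 * time.Millisecond)
		}
		fmt.Println("timed out waiting for crio.sock")
	}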
	I0914 18:02:49.527732   61323 ssh_runner.go:195] Run: crio --version
	I0914 18:02:49.582662   61323 ssh_runner.go:195] Run: crio --version
	I0914 18:02:49.646612   61323 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0914 18:02:49.647815   61323 main.go:141] libmachine: (kubernetes-upgrade-470019) Calling .GetIP
	I0914 18:02:49.650733   61323 main.go:141] libmachine: (kubernetes-upgrade-470019) DBG | domain kubernetes-upgrade-470019 has defined MAC address 52:54:00:5c:27:09 in network mk-kubernetes-upgrade-470019
	I0914 18:02:49.651148   61323 main.go:141] libmachine: (kubernetes-upgrade-470019) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:27:09", ip: ""} in network mk-kubernetes-upgrade-470019: {Iface:virbr4 ExpiryTime:2024-09-14 19:02:19 +0000 UTC Type:0 Mac:52:54:00:5c:27:09 Iaid: IPaddr:192.168.72.202 Prefix:24 Hostname:kubernetes-upgrade-470019 Clientid:01:52:54:00:5c:27:09}
	I0914 18:02:49.651178   61323 main.go:141] libmachine: (kubernetes-upgrade-470019) DBG | domain kubernetes-upgrade-470019 has defined IP address 192.168.72.202 and MAC address 52:54:00:5c:27:09 in network mk-kubernetes-upgrade-470019
	I0914 18:02:49.651394   61323 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0914 18:02:49.673434   61323 kubeadm.go:883] updating cluster {Name:kubernetes-upgrade-470019 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19643/minikube-v1.34.0-1726281733-19643-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726281268-19643@sha256:cce8be0c1ac4e3d852132008ef1cc1dcf5b79f708d025db83f146ae65db32e8e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVe
rsion:v1.31.1 ClusterName:kubernetes-upgrade-470019 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.202 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountT
ype:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0914 18:02:49.673542   61323 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0914 18:02:49.673591   61323 ssh_runner.go:195] Run: sudo crictl images --output json
	I0914 18:02:49.839921   61323 crio.go:514] all images are preloaded for cri-o runtime.
	I0914 18:02:49.839950   61323 crio.go:433] Images already preloaded, skipping extraction
	I0914 18:02:49.839998   61323 ssh_runner.go:195] Run: sudo crictl images --output json
	I0914 18:02:49.878323   61323 crio.go:514] all images are preloaded for cri-o runtime.
	I0914 18:02:49.878353   61323 cache_images.go:84] Images are preloaded, skipping loading
	I0914 18:02:49.878360   61323 kubeadm.go:934] updating node { 192.168.72.202 8443 v1.31.1 crio true true} ...
	I0914 18:02:49.878486   61323 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=kubernetes-upgrade-470019 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.202
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:kubernetes-upgrade-470019 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
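The ExecStart override above is generated per node (binary version, hostname override, node IP). A minimal Go sketch, with hypothetical field names, that renders the same line from the values seen in this run:

	package main

	import (
		"os"
		"text/template"
	)

	// nodeFlags holds the per-node values substituted into the kubelet drop-in.
	type nodeFlags struct {
		KubernetesVersion string
		Hostname          string
		NodeIP            string
	}

	const execStart = `ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.Hostname}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}
	`

	func main() {
		t := template.Must(template.New("execstart").Parse(execStart))
		if err := t.Execute(os.Stdout, nodeFlags{
			KubernetesVersion: "v1.31.1",
			Hostname:          "kubernetes-upgrade-470019",
			NodeIP:            "192.168.72.202",
		}); err != nil {
			panic(err)
		}
	}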
	I0914 18:02:49.878580   61323 ssh_runner.go:195] Run: crio config
	I0914 18:02:49.936293   61323 cni.go:84] Creating CNI manager for ""
	I0914 18:02:49.936318   61323 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0914 18:02:49.936329   61323 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0914 18:02:49.936355   61323 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.202 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-470019 NodeName:kubernetes-upgrade-470019 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.202"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.202 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.
crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0914 18:02:49.936598   61323 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.202
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "kubernetes-upgrade-470019"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.202
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.202"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
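The generated kubeadm config above is a multi-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). A minimal Go sketch that decodes each document and reports the kubelet's cgroup driver and runtime endpoint; it assumes a local copy named kubeadm.yaml and uses the gopkg.in/yaml.v3 module:

	package main

	import (
		"fmt"
		"os"

		"gopkg.in/yaml.v3"
	)

	func main() {
		f, err := os.Open("kubeadm.yaml") // hypothetical local copy of /var/tmp/minikube/kubeadm.yaml.new
		if err != nil {
			panic(err)
		}
		defer f.Close()

		// Walk every "---"-separated document in the stream.
		dec := yaml.NewDecoder(f)
		for {
			var doc map[string]interface{}
			if err := dec.Decode(&doc); err != nil {
				break // io.EOF ends the multi-document stream
			}
			fmt.Println("kind:", doc["kind"])
			if doc["kind"] == "KubeletConfiguration" {
				fmt.Println("  cgroupDriver:", doc["cgroupDriver"])
				fmt.Println("  containerRuntimeEndpoint:", doc["containerRuntimeEndpoint"])
			}
		}
	}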
	
	I0914 18:02:49.936672   61323 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0914 18:02:49.947448   61323 binaries.go:44] Found k8s binaries, skipping transfer
	I0914 18:02:49.947531   61323 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0914 18:02:49.957412   61323 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (325 bytes)
	I0914 18:02:49.974471   61323 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0914 18:02:49.992259   61323 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2169 bytes)
	I0914 18:02:50.009121   61323 ssh_runner.go:195] Run: grep 192.168.72.202	control-plane.minikube.internal$ /etc/hosts
	I0914 18:02:50.012999   61323 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 18:02:50.130094   61323 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0914 18:02:50.144875   61323 certs.go:68] Setting up /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/kubernetes-upgrade-470019 for IP: 192.168.72.202
	I0914 18:02:50.144898   61323 certs.go:194] generating shared ca certs ...
	I0914 18:02:50.144912   61323 certs.go:226] acquiring lock for ca certs: {Name:mkb663a3180967f5f94f0c355b2cd55067394331 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 18:02:50.145204   61323 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19643-8806/.minikube/ca.key
	I0914 18:02:50.145313   61323 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19643-8806/.minikube/proxy-client-ca.key
	I0914 18:02:50.145333   61323 certs.go:256] generating profile certs ...
	I0914 18:02:50.145442   61323 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/kubernetes-upgrade-470019/client.key
	I0914 18:02:50.145521   61323 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/kubernetes-upgrade-470019/apiserver.key.f0b2fee7
	I0914 18:02:50.145566   61323 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/kubernetes-upgrade-470019/proxy-client.key
	I0914 18:02:50.145709   61323 certs.go:484] found cert: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/16016.pem (1338 bytes)
	W0914 18:02:50.145806   61323 certs.go:480] ignoring /home/jenkins/minikube-integration/19643-8806/.minikube/certs/16016_empty.pem, impossibly tiny 0 bytes
	I0914 18:02:50.145824   61323 certs.go:484] found cert: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca-key.pem (1679 bytes)
	I0914 18:02:50.145871   61323 certs.go:484] found cert: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca.pem (1082 bytes)
	I0914 18:02:50.145909   61323 certs.go:484] found cert: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/cert.pem (1123 bytes)
	I0914 18:02:50.145951   61323 certs.go:484] found cert: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/key.pem (1675 bytes)
	I0914 18:02:50.146019   61323 certs.go:484] found cert: /home/jenkins/minikube-integration/19643-8806/.minikube/files/etc/ssl/certs/160162.pem (1708 bytes)
	I0914 18:02:50.146748   61323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0914 18:02:50.170123   61323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0914 18:02:50.193429   61323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0914 18:02:50.218978   61323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0914 18:02:50.241758   61323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/kubernetes-upgrade-470019/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0914 18:02:50.265959   61323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/kubernetes-upgrade-470019/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0914 18:02:50.289392   61323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/kubernetes-upgrade-470019/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0914 18:02:50.313325   61323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/kubernetes-upgrade-470019/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0914 18:02:50.338026   61323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/certs/16016.pem --> /usr/share/ca-certificates/16016.pem (1338 bytes)
	I0914 18:02:50.362483   61323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/files/etc/ssl/certs/160162.pem --> /usr/share/ca-certificates/160162.pem (1708 bytes)
	I0914 18:02:50.384803   61323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0914 18:02:50.409725   61323 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0914 18:02:50.426087   61323 ssh_runner.go:195] Run: openssl version
	I0914 18:02:50.431783   61323 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16016.pem && ln -fs /usr/share/ca-certificates/16016.pem /etc/ssl/certs/16016.pem"
	I0914 18:02:50.442262   61323 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16016.pem
	I0914 18:02:50.446509   61323 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 14 17:01 /usr/share/ca-certificates/16016.pem
	I0914 18:02:50.446569   61323 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16016.pem
	I0914 18:02:50.451954   61323 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16016.pem /etc/ssl/certs/51391683.0"
	I0914 18:02:50.462075   61323 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/160162.pem && ln -fs /usr/share/ca-certificates/160162.pem /etc/ssl/certs/160162.pem"
	I0914 18:02:50.472734   61323 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/160162.pem
	I0914 18:02:50.477356   61323 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 14 17:01 /usr/share/ca-certificates/160162.pem
	I0914 18:02:50.477414   61323 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/160162.pem
	I0914 18:02:50.482935   61323 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/160162.pem /etc/ssl/certs/3ec20f2e.0"
	I0914 18:02:50.492714   61323 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0914 18:02:50.503076   61323 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0914 18:02:50.507479   61323 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 14 16:44 /usr/share/ca-certificates/minikubeCA.pem
	I0914 18:02:50.507528   61323 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0914 18:02:50.512781   61323 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0914 18:02:50.521642   61323 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0914 18:02:50.526008   61323 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0914 18:02:50.531486   61323 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0914 18:02:50.537162   61323 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0914 18:02:50.542729   61323 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0914 18:02:50.548383   61323 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0914 18:02:50.553834   61323 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
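The openssl runs above check that each certificate is still valid 86400 seconds (24 hours) from now. A stdlib-only Go sketch of the same check for one of the paths shown in the log (not minikube's implementation):

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	func main() {
		// Path taken from the log; any of the checked certificates works the same way.
		data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
		if err != nil {
			panic(err)
		}
		block, _ := pem.Decode(data)
		if block == nil {
			panic("no PEM block found")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			panic(err)
		}
		// Equivalent of `openssl x509 -checkend 86400`.
		deadline := time.Now().Add(86400 * time.Second)
		if cert.NotAfter.Before(deadline) {
			fmt.Println("certificate expires within 24h:", cert.NotAfter)
		} else {
			fmt.Println("certificate valid past 24h, expires:", cert.NotAfter)
		}
	}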
	I0914 18:02:50.559641   61323 kubeadm.go:392] StartCluster: {Name:kubernetes-upgrade-470019 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19643/minikube-v1.34.0-1726281733-19643-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726281268-19643@sha256:cce8be0c1ac4e3d852132008ef1cc1dcf5b79f708d025db83f146ae65db32e8e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersi
on:v1.31.1 ClusterName:kubernetes-upgrade-470019 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.202 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType
:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0914 18:02:50.559730   61323 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0914 18:02:50.559801   61323 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0914 18:02:50.595796   61323 cri.go:89] found id: "ef37197a0118c9e46a0d16b6c73a432d03e3cb757a969b2ae2834849c7bb9ea2"
	I0914 18:02:50.595828   61323 cri.go:89] found id: "6442f50e222c1da98b87413a2faa4aaa7c26009b79fa0bf17255914de7a252a7"
	I0914 18:02:50.595833   61323 cri.go:89] found id: "355ced2a24ec10f0d43f8508fd21dd01f7d1c67e342f2d1e74d90b49d678380c"
	I0914 18:02:50.595838   61323 cri.go:89] found id: "a05491726c33541175f463e40197edc9cac6d7b8a60a1dc647bca63971785b8c"
	I0914 18:02:50.595842   61323 cri.go:89] found id: ""
	I0914 18:02:50.595892   61323 ssh_runner.go:195] Run: sudo runc list -f json
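The container IDs found above come from crictl filtered by the kube-system namespace label. A minimal Go sketch that shells out to the same command and prints each ID, assuming sudo and crictl are available on the host:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// Same invocation as in the log: list all kube-system container IDs.
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
			"--label", "io.kubernetes.pod.namespace=kube-system").Output()
		if err != nil {
			panic(err)
		}
		for _, id := range strings.Fields(string(out)) {
			fmt.Println("found id:", id)
		}
	}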
	
	
	==> CRI-O <==
	Sep 14 18:02:59 kubernetes-upgrade-470019 crio[1842]: time="2024-09-14 18:02:59.527789712Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726336979527765048,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ccaacf96-6149-4495-8e3f-a7e42c6b93ed name=/runtime.v1.ImageService/ImageFsInfo
	Sep 14 18:02:59 kubernetes-upgrade-470019 crio[1842]: time="2024-09-14 18:02:59.528363559Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b4c6cd84-daff-45c1-95fb-cd618fa54ce1 name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 18:02:59 kubernetes-upgrade-470019 crio[1842]: time="2024-09-14 18:02:59.528420722Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b4c6cd84-daff-45c1-95fb-cd618fa54ce1 name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 18:02:59 kubernetes-upgrade-470019 crio[1842]: time="2024-09-14 18:02:59.528605291Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:21355103f54d67f171017539030d7f74f7e03442123c9db8e942b4f464a1fb9e,PodSandboxId:1fffbbb723f5ec7761788cbe6b7b91d9cfe794b4c2c3bb21bb2562ef4ff05247,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726336972478891871,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-470019,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 38a8cd22a8baa53bed5b6f8c954cc504,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a974fa41eeea8113d023fac2aeb1c19f07b23f7815c219bd9e9a8aa42c5cef5,PodSandboxId:e4d4b0e8e49dcf8e84a07462a390b4f93ef5160fc37b0b5aa7bb2637cbf24226,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726336972474805836,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-470019,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5fabd1de24261d50972be47767535dc7,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.contai
ner.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a6281035085d05372397e395c50e7b8e425575d8c3fa98990fb1cf6d450ff448,PodSandboxId:e89a0b5588a3f346ecc43c2cf08e87a35adac9bb93e6cd2013b2e66ea7d557f2,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726336972449212447,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-470019,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7b54eee556e6c08fd8ab85801004e65b,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:17f9d777d02b150898ab994a73c2d0f4ec5f67ed938fa387b4e6321415b08267,PodSandboxId:8d6fdd1a0c444637117ec2512ac6c099d10fa2294b22bedb775668a4de9612f3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726336972462765567,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-470019,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4e15708a245bc75b48ed8f3bb85a4c49,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.contai
ner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ef37197a0118c9e46a0d16b6c73a432d03e3cb757a969b2ae2834849c7bb9ea2,PodSandboxId:37fea44c9eeaf869519687460d0c6644fb7f59e871fa3ba546e40b1798104000,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726336967849555699,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-470019,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4e15708a245bc75b48ed8f3bb85a4c49,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.te
rminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6442f50e222c1da98b87413a2faa4aaa7c26009b79fa0bf17255914de7a252a7,PodSandboxId:52d21090db3e3f393d31d764acbffc7c30828582f40e9ee43bfd7d7e6e857535,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726336967805103259,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-470019,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5fabd1de24261d50972be47767535dc7,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.ku
bernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:355ced2a24ec10f0d43f8508fd21dd01f7d1c67e342f2d1e74d90b49d678380c,PodSandboxId:82e2e71b50bc559bfd5289417aaffb7d8ae9bce82d38cfd570a1ddfe315c0bdb,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726336967750282999,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-470019,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 38a8cd22a8baa53bed5b6f8c954cc504,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernet
es.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a05491726c33541175f463e40197edc9cac6d7b8a60a1dc647bca63971785b8c,PodSandboxId:df8b8a0f8db17f46d4270e8e55d90a2f8385a769854830616fb17411e0ec9660,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1726336967638986154,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-470019,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7b54eee556e6c08fd8ab85801004e65b,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b4c6cd84-daff-45c1-95fb-cd618fa54ce1 name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 18:02:59 kubernetes-upgrade-470019 crio[1842]: time="2024-09-14 18:02:59.578589872Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c4138595-8efc-4045-bab6-2fb2e612e3ca name=/runtime.v1.RuntimeService/Version
	Sep 14 18:02:59 kubernetes-upgrade-470019 crio[1842]: time="2024-09-14 18:02:59.578680004Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c4138595-8efc-4045-bab6-2fb2e612e3ca name=/runtime.v1.RuntimeService/Version
	Sep 14 18:02:59 kubernetes-upgrade-470019 crio[1842]: time="2024-09-14 18:02:59.583328276Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=48488610-9462-487f-9b02-5fa6f5c9c702 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 14 18:02:59 kubernetes-upgrade-470019 crio[1842]: time="2024-09-14 18:02:59.583704362Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726336979583680944,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=48488610-9462-487f-9b02-5fa6f5c9c702 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 14 18:02:59 kubernetes-upgrade-470019 crio[1842]: time="2024-09-14 18:02:59.585047365Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=cc277724-93e9-4850-8527-da04c74175e1 name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 18:02:59 kubernetes-upgrade-470019 crio[1842]: time="2024-09-14 18:02:59.585106372Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=cc277724-93e9-4850-8527-da04c74175e1 name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 18:02:59 kubernetes-upgrade-470019 crio[1842]: time="2024-09-14 18:02:59.585283671Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:21355103f54d67f171017539030d7f74f7e03442123c9db8e942b4f464a1fb9e,PodSandboxId:1fffbbb723f5ec7761788cbe6b7b91d9cfe794b4c2c3bb21bb2562ef4ff05247,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726336972478891871,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-470019,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 38a8cd22a8baa53bed5b6f8c954cc504,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a974fa41eeea8113d023fac2aeb1c19f07b23f7815c219bd9e9a8aa42c5cef5,PodSandboxId:e4d4b0e8e49dcf8e84a07462a390b4f93ef5160fc37b0b5aa7bb2637cbf24226,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726336972474805836,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-470019,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5fabd1de24261d50972be47767535dc7,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.contai
ner.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a6281035085d05372397e395c50e7b8e425575d8c3fa98990fb1cf6d450ff448,PodSandboxId:e89a0b5588a3f346ecc43c2cf08e87a35adac9bb93e6cd2013b2e66ea7d557f2,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726336972449212447,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-470019,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7b54eee556e6c08fd8ab85801004e65b,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:17f9d777d02b150898ab994a73c2d0f4ec5f67ed938fa387b4e6321415b08267,PodSandboxId:8d6fdd1a0c444637117ec2512ac6c099d10fa2294b22bedb775668a4de9612f3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726336972462765567,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-470019,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4e15708a245bc75b48ed8f3bb85a4c49,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.contai
ner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ef37197a0118c9e46a0d16b6c73a432d03e3cb757a969b2ae2834849c7bb9ea2,PodSandboxId:37fea44c9eeaf869519687460d0c6644fb7f59e871fa3ba546e40b1798104000,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726336967849555699,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-470019,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4e15708a245bc75b48ed8f3bb85a4c49,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.te
rminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6442f50e222c1da98b87413a2faa4aaa7c26009b79fa0bf17255914de7a252a7,PodSandboxId:52d21090db3e3f393d31d764acbffc7c30828582f40e9ee43bfd7d7e6e857535,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726336967805103259,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-470019,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5fabd1de24261d50972be47767535dc7,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.ku
bernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:355ced2a24ec10f0d43f8508fd21dd01f7d1c67e342f2d1e74d90b49d678380c,PodSandboxId:82e2e71b50bc559bfd5289417aaffb7d8ae9bce82d38cfd570a1ddfe315c0bdb,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726336967750282999,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-470019,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 38a8cd22a8baa53bed5b6f8c954cc504,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernet
es.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a05491726c33541175f463e40197edc9cac6d7b8a60a1dc647bca63971785b8c,PodSandboxId:df8b8a0f8db17f46d4270e8e55d90a2f8385a769854830616fb17411e0ec9660,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1726336967638986154,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-470019,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7b54eee556e6c08fd8ab85801004e65b,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=cc277724-93e9-4850-8527-da04c74175e1 name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 18:02:59 kubernetes-upgrade-470019 crio[1842]: time="2024-09-14 18:02:59.640816998Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=8333d9f6-7976-4bbc-97b7-4e61e5add696 name=/runtime.v1.RuntimeService/Version
	Sep 14 18:02:59 kubernetes-upgrade-470019 crio[1842]: time="2024-09-14 18:02:59.641144216Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=8333d9f6-7976-4bbc-97b7-4e61e5add696 name=/runtime.v1.RuntimeService/Version
	Sep 14 18:02:59 kubernetes-upgrade-470019 crio[1842]: time="2024-09-14 18:02:59.646452339Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=37b2eb92-55fd-4ded-b0d8-71c680fcbb74 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 14 18:02:59 kubernetes-upgrade-470019 crio[1842]: time="2024-09-14 18:02:59.646980242Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726336979646813492,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=37b2eb92-55fd-4ded-b0d8-71c680fcbb74 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 14 18:02:59 kubernetes-upgrade-470019 crio[1842]: time="2024-09-14 18:02:59.647627204Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=119754c6-2c15-4854-a1b5-16433757480f name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 18:02:59 kubernetes-upgrade-470019 crio[1842]: time="2024-09-14 18:02:59.647677937Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=119754c6-2c15-4854-a1b5-16433757480f name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 18:02:59 kubernetes-upgrade-470019 crio[1842]: time="2024-09-14 18:02:59.647921478Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:21355103f54d67f171017539030d7f74f7e03442123c9db8e942b4f464a1fb9e,PodSandboxId:1fffbbb723f5ec7761788cbe6b7b91d9cfe794b4c2c3bb21bb2562ef4ff05247,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726336972478891871,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-470019,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 38a8cd22a8baa53bed5b6f8c954cc504,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a974fa41eeea8113d023fac2aeb1c19f07b23f7815c219bd9e9a8aa42c5cef5,PodSandboxId:e4d4b0e8e49dcf8e84a07462a390b4f93ef5160fc37b0b5aa7bb2637cbf24226,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726336972474805836,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-470019,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5fabd1de24261d50972be47767535dc7,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.contai
ner.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a6281035085d05372397e395c50e7b8e425575d8c3fa98990fb1cf6d450ff448,PodSandboxId:e89a0b5588a3f346ecc43c2cf08e87a35adac9bb93e6cd2013b2e66ea7d557f2,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726336972449212447,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-470019,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7b54eee556e6c08fd8ab85801004e65b,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:17f9d777d02b150898ab994a73c2d0f4ec5f67ed938fa387b4e6321415b08267,PodSandboxId:8d6fdd1a0c444637117ec2512ac6c099d10fa2294b22bedb775668a4de9612f3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726336972462765567,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-470019,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4e15708a245bc75b48ed8f3bb85a4c49,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.contai
ner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ef37197a0118c9e46a0d16b6c73a432d03e3cb757a969b2ae2834849c7bb9ea2,PodSandboxId:37fea44c9eeaf869519687460d0c6644fb7f59e871fa3ba546e40b1798104000,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726336967849555699,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-470019,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4e15708a245bc75b48ed8f3bb85a4c49,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.te
rminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6442f50e222c1da98b87413a2faa4aaa7c26009b79fa0bf17255914de7a252a7,PodSandboxId:52d21090db3e3f393d31d764acbffc7c30828582f40e9ee43bfd7d7e6e857535,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726336967805103259,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-470019,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5fabd1de24261d50972be47767535dc7,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.ku
bernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:355ced2a24ec10f0d43f8508fd21dd01f7d1c67e342f2d1e74d90b49d678380c,PodSandboxId:82e2e71b50bc559bfd5289417aaffb7d8ae9bce82d38cfd570a1ddfe315c0bdb,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726336967750282999,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-470019,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 38a8cd22a8baa53bed5b6f8c954cc504,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernet
es.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a05491726c33541175f463e40197edc9cac6d7b8a60a1dc647bca63971785b8c,PodSandboxId:df8b8a0f8db17f46d4270e8e55d90a2f8385a769854830616fb17411e0ec9660,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1726336967638986154,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-470019,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7b54eee556e6c08fd8ab85801004e65b,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=119754c6-2c15-4854-a1b5-16433757480f name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 18:02:59 kubernetes-upgrade-470019 crio[1842]: time="2024-09-14 18:02:59.693790335Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=4a278d6c-2414-40d2-91df-52931b300690 name=/runtime.v1.RuntimeService/Version
	Sep 14 18:02:59 kubernetes-upgrade-470019 crio[1842]: time="2024-09-14 18:02:59.693910311Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=4a278d6c-2414-40d2-91df-52931b300690 name=/runtime.v1.RuntimeService/Version
	Sep 14 18:02:59 kubernetes-upgrade-470019 crio[1842]: time="2024-09-14 18:02:59.695382567Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=67b0dfc3-5920-4825-b3b0-7fefcd9d355c name=/runtime.v1.ImageService/ImageFsInfo
	Sep 14 18:02:59 kubernetes-upgrade-470019 crio[1842]: time="2024-09-14 18:02:59.695769923Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726336979695723991,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=67b0dfc3-5920-4825-b3b0-7fefcd9d355c name=/runtime.v1.ImageService/ImageFsInfo
	Sep 14 18:02:59 kubernetes-upgrade-470019 crio[1842]: time="2024-09-14 18:02:59.696493498Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1925a6af-7744-4998-ad32-a65855dccbee name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 18:02:59 kubernetes-upgrade-470019 crio[1842]: time="2024-09-14 18:02:59.696544740Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1925a6af-7744-4998-ad32-a65855dccbee name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 18:02:59 kubernetes-upgrade-470019 crio[1842]: time="2024-09-14 18:02:59.696713947Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:21355103f54d67f171017539030d7f74f7e03442123c9db8e942b4f464a1fb9e,PodSandboxId:1fffbbb723f5ec7761788cbe6b7b91d9cfe794b4c2c3bb21bb2562ef4ff05247,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726336972478891871,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-470019,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 38a8cd22a8baa53bed5b6f8c954cc504,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a974fa41eeea8113d023fac2aeb1c19f07b23f7815c219bd9e9a8aa42c5cef5,PodSandboxId:e4d4b0e8e49dcf8e84a07462a390b4f93ef5160fc37b0b5aa7bb2637cbf24226,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726336972474805836,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-470019,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5fabd1de24261d50972be47767535dc7,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.contai
ner.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a6281035085d05372397e395c50e7b8e425575d8c3fa98990fb1cf6d450ff448,PodSandboxId:e89a0b5588a3f346ecc43c2cf08e87a35adac9bb93e6cd2013b2e66ea7d557f2,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726336972449212447,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-470019,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7b54eee556e6c08fd8ab85801004e65b,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:17f9d777d02b150898ab994a73c2d0f4ec5f67ed938fa387b4e6321415b08267,PodSandboxId:8d6fdd1a0c444637117ec2512ac6c099d10fa2294b22bedb775668a4de9612f3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726336972462765567,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-470019,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4e15708a245bc75b48ed8f3bb85a4c49,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.contai
ner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ef37197a0118c9e46a0d16b6c73a432d03e3cb757a969b2ae2834849c7bb9ea2,PodSandboxId:37fea44c9eeaf869519687460d0c6644fb7f59e871fa3ba546e40b1798104000,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726336967849555699,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-470019,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4e15708a245bc75b48ed8f3bb85a4c49,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.te
rminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6442f50e222c1da98b87413a2faa4aaa7c26009b79fa0bf17255914de7a252a7,PodSandboxId:52d21090db3e3f393d31d764acbffc7c30828582f40e9ee43bfd7d7e6e857535,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726336967805103259,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-470019,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5fabd1de24261d50972be47767535dc7,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.ku
bernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:355ced2a24ec10f0d43f8508fd21dd01f7d1c67e342f2d1e74d90b49d678380c,PodSandboxId:82e2e71b50bc559bfd5289417aaffb7d8ae9bce82d38cfd570a1ddfe315c0bdb,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726336967750282999,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-470019,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 38a8cd22a8baa53bed5b6f8c954cc504,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernet
es.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a05491726c33541175f463e40197edc9cac6d7b8a60a1dc647bca63971785b8c,PodSandboxId:df8b8a0f8db17f46d4270e8e55d90a2f8385a769854830616fb17411e0ec9660,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1726336967638986154,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-470019,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7b54eee556e6c08fd8ab85801004e65b,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=1925a6af-7744-4998-ad32-a65855dccbee name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	21355103f54d6       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee   7 seconds ago       Running             kube-apiserver            2                   1fffbbb723f5e       kube-apiserver-kubernetes-upgrade-470019
	0a974fa41eeea       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1   7 seconds ago       Running             kube-controller-manager   2                   e4d4b0e8e49dc       kube-controller-manager-kubernetes-upgrade-470019
	17f9d777d02b1       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b   7 seconds ago       Running             kube-scheduler            2                   8d6fdd1a0c444       kube-scheduler-kubernetes-upgrade-470019
	a6281035085d0       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   7 seconds ago       Running             etcd                      2                   e89a0b5588a3f       etcd-kubernetes-upgrade-470019
	ef37197a0118c       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b   11 seconds ago      Exited              kube-scheduler            1                   37fea44c9eeaf       kube-scheduler-kubernetes-upgrade-470019
	6442f50e222c1       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1   11 seconds ago      Exited              kube-controller-manager   1                   52d21090db3e3       kube-controller-manager-kubernetes-upgrade-470019
	355ced2a24ec1       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee   12 seconds ago      Exited              kube-apiserver            1                   82e2e71b50bc5       kube-apiserver-kubernetes-upgrade-470019
	a05491726c335       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   12 seconds ago      Exited              etcd                      1                   df8b8a0f8db17       etcd-kubernetes-upgrade-470019
	
	
	==> describe nodes <==
	Name:               kubernetes-upgrade-470019
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=kubernetes-upgrade-470019
	                    kubernetes.io/os=linux
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 14 Sep 2024 18:02:40 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  kubernetes-upgrade-470019
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 14 Sep 2024 18:02:55 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 14 Sep 2024 18:02:56 +0000   Sat, 14 Sep 2024 18:02:39 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 14 Sep 2024 18:02:56 +0000   Sat, 14 Sep 2024 18:02:39 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 14 Sep 2024 18:02:56 +0000   Sat, 14 Sep 2024 18:02:39 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 14 Sep 2024 18:02:56 +0000   Sat, 14 Sep 2024 18:02:42 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.72.202
	  Hostname:    kubernetes-upgrade-470019
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 caaced22f8e64cbfa11034ade7d49660
	  System UUID:                caaced22-f8e6-4cbf-a110-34ade7d49660
	  Boot ID:                    0ec8f27a-8a1f-4822-ba76-0e65210cf1b2
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (4 in total)
	  Namespace                   Name                                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                 ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-kubernetes-upgrade-470019                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         16s
	  kube-system                 kube-apiserver-kubernetes-upgrade-470019             250m (12%)    0 (0%)      0 (0%)           0 (0%)         16s
	  kube-system                 kube-controller-manager-kubernetes-upgrade-470019    200m (10%)    0 (0%)      0 (0%)           0 (0%)         3s
	  kube-system                 storage-provisioner                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         14s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                550m (27%)  0 (0%)
	  memory             100Mi (4%)  0 (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 25s                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  23s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  22s (x8 over 25s)  kubelet          Node kubernetes-upgrade-470019 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    22s (x8 over 25s)  kubelet          Node kubernetes-upgrade-470019 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     22s (x7 over 25s)  kubelet          Node kubernetes-upgrade-470019 status is now: NodeHasSufficientPID
	  Normal  Starting                 8s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  7s (x8 over 7s)    kubelet          Node kubernetes-upgrade-470019 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7s (x8 over 7s)    kubelet          Node kubernetes-upgrade-470019 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7s (x7 over 7s)    kubelet          Node kubernetes-upgrade-470019 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  7s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           0s                 node-controller  Node kubernetes-upgrade-470019 event: Registered Node kubernetes-upgrade-470019 in Controller
	
	
	==> dmesg <==
	[  +1.902627] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.533555] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000009] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.947886] systemd-fstab-generator[552]: Ignoring "noauto" option for root device
	[  +0.059707] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.067174] systemd-fstab-generator[564]: Ignoring "noauto" option for root device
	[  +0.187786] systemd-fstab-generator[578]: Ignoring "noauto" option for root device
	[  +0.144498] systemd-fstab-generator[590]: Ignoring "noauto" option for root device
	[  +0.312527] systemd-fstab-generator[620]: Ignoring "noauto" option for root device
	[  +3.875723] systemd-fstab-generator[713]: Ignoring "noauto" option for root device
	[  +1.958141] systemd-fstab-generator[831]: Ignoring "noauto" option for root device
	[  +0.065926] kauditd_printk_skb: 158 callbacks suppressed
	[ +10.614588] systemd-fstab-generator[1219]: Ignoring "noauto" option for root device
	[  +0.082509] kauditd_printk_skb: 69 callbacks suppressed
	[  +3.277258] systemd-fstab-generator[1759]: Ignoring "noauto" option for root device
	[  +0.176097] systemd-fstab-generator[1775]: Ignoring "noauto" option for root device
	[  +0.195666] systemd-fstab-generator[1791]: Ignoring "noauto" option for root device
	[  +0.171510] systemd-fstab-generator[1803]: Ignoring "noauto" option for root device
	[  +0.320145] systemd-fstab-generator[1832]: Ignoring "noauto" option for root device
	[  +1.018388] systemd-fstab-generator[2156]: Ignoring "noauto" option for root device
	[  +0.071562] kauditd_printk_skb: 225 callbacks suppressed
	[  +1.655330] systemd-fstab-generator[2277]: Ignoring "noauto" option for root device
	[  +6.198521] systemd-fstab-generator[2535]: Ignoring "noauto" option for root device
	[  +0.081243] kauditd_printk_skb: 58 callbacks suppressed
	
	
	==> etcd [a05491726c33541175f463e40197edc9cac6d7b8a60a1dc647bca63971785b8c] <==
	{"level":"info","ts":"2024-09-14T18:02:48.039758Z","caller":"etcdserver/server.go:532","msg":"No snapshot found. Recovering WAL from scratch!"}
	{"level":"info","ts":"2024-09-14T18:02:48.053085Z","caller":"etcdserver/raft.go:530","msg":"restarting local member","cluster-id":"2983a042717f6402","local-member-id":"24127cc5a9ff87b6","commit-index":300}
	{"level":"info","ts":"2024-09-14T18:02:48.053266Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"24127cc5a9ff87b6 switched to configuration voters=()"}
	{"level":"info","ts":"2024-09-14T18:02:48.053294Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"24127cc5a9ff87b6 became follower at term 2"}
	{"level":"info","ts":"2024-09-14T18:02:48.053314Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft 24127cc5a9ff87b6 [peers: [], term: 2, commit: 300, applied: 0, lastindex: 300, lastterm: 2]"}
	{"level":"warn","ts":"2024-09-14T18:02:48.059021Z","caller":"auth/store.go:1241","msg":"simple token is not cryptographically signed"}
	{"level":"info","ts":"2024-09-14T18:02:48.070714Z","caller":"mvcc/kvstore.go:418","msg":"kvstore restored","current-rev":294}
	{"level":"info","ts":"2024-09-14T18:02:48.073926Z","caller":"etcdserver/quota.go:94","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
	{"level":"info","ts":"2024-09-14T18:02:48.081359Z","caller":"etcdserver/corrupt.go:96","msg":"starting initial corruption check","local-member-id":"24127cc5a9ff87b6","timeout":"7s"}
	{"level":"info","ts":"2024-09-14T18:02:48.081730Z","caller":"etcdserver/corrupt.go:177","msg":"initial corruption checking passed; no corruption","local-member-id":"24127cc5a9ff87b6"}
	{"level":"info","ts":"2024-09-14T18:02:48.081802Z","caller":"etcdserver/server.go:867","msg":"starting etcd server","local-member-id":"24127cc5a9ff87b6","local-server-version":"3.5.15","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2024-09-14T18:02:48.085060Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-09-14T18:02:48.085144Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-09-14T18:02:48.085153Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-09-14T18:02:48.088129Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-14T18:02:48.083219Z","caller":"etcdserver/server.go:767","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2024-09-14T18:02:48.091169Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"24127cc5a9ff87b6 switched to configuration voters=(2599277123348694966)"}
	{"level":"info","ts":"2024-09-14T18:02:48.091269Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"2983a042717f6402","local-member-id":"24127cc5a9ff87b6","added-peer-id":"24127cc5a9ff87b6","added-peer-peer-urls":["https://192.168.72.202:2380"]}
	{"level":"info","ts":"2024-09-14T18:02:48.091386Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"2983a042717f6402","local-member-id":"24127cc5a9ff87b6","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-14T18:02:48.091411Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-14T18:02:48.091724Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-09-14T18:02:48.092188Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"24127cc5a9ff87b6","initial-advertise-peer-urls":["https://192.168.72.202:2380"],"listen-peer-urls":["https://192.168.72.202:2380"],"advertise-client-urls":["https://192.168.72.202:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.72.202:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-09-14T18:02:48.092212Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-09-14T18:02:48.092268Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.72.202:2380"}
	{"level":"info","ts":"2024-09-14T18:02:48.092276Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.72.202:2380"}
	
	
	==> etcd [a6281035085d05372397e395c50e7b8e425575d8c3fa98990fb1cf6d450ff448] <==
	{"level":"info","ts":"2024-09-14T18:02:52.866190Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.72.202:2380"}
	{"level":"info","ts":"2024-09-14T18:02:52.871062Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.72.202:2380"}
	{"level":"info","ts":"2024-09-14T18:02:52.871288Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"24127cc5a9ff87b6","initial-advertise-peer-urls":["https://192.168.72.202:2380"],"listen-peer-urls":["https://192.168.72.202:2380"],"advertise-client-urls":["https://192.168.72.202:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.72.202:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-09-14T18:02:52.871332Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-09-14T18:02:52.871385Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-09-14T18:02:52.871431Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-09-14T18:02:52.872095Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"2983a042717f6402","local-member-id":"24127cc5a9ff87b6","added-peer-id":"24127cc5a9ff87b6","added-peer-peer-urls":["https://192.168.72.202:2380"]}
	{"level":"info","ts":"2024-09-14T18:02:52.874041Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"2983a042717f6402","local-member-id":"24127cc5a9ff87b6","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-14T18:02:52.874107Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-14T18:02:54.620887Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"24127cc5a9ff87b6 is starting a new election at term 2"}
	{"level":"info","ts":"2024-09-14T18:02:54.621002Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"24127cc5a9ff87b6 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-09-14T18:02:54.621060Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"24127cc5a9ff87b6 received MsgPreVoteResp from 24127cc5a9ff87b6 at term 2"}
	{"level":"info","ts":"2024-09-14T18:02:54.621098Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"24127cc5a9ff87b6 became candidate at term 3"}
	{"level":"info","ts":"2024-09-14T18:02:54.621123Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"24127cc5a9ff87b6 received MsgVoteResp from 24127cc5a9ff87b6 at term 3"}
	{"level":"info","ts":"2024-09-14T18:02:54.621154Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"24127cc5a9ff87b6 became leader at term 3"}
	{"level":"info","ts":"2024-09-14T18:02:54.621179Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 24127cc5a9ff87b6 elected leader 24127cc5a9ff87b6 at term 3"}
	{"level":"info","ts":"2024-09-14T18:02:54.625543Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"24127cc5a9ff87b6","local-member-attributes":"{Name:kubernetes-upgrade-470019 ClientURLs:[https://192.168.72.202:2379]}","request-path":"/0/members/24127cc5a9ff87b6/attributes","cluster-id":"2983a042717f6402","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-14T18:02:54.625641Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-14T18:02:54.625680Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-14T18:02:54.626220Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-14T18:02:54.626262Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-14T18:02:54.626921Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-14T18:02:54.626976Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-14T18:02:54.627810Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-14T18:02:54.627810Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.72.202:2379"}
	
	
	==> kernel <==
	 18:03:00 up 0 min,  0 users,  load average: 0.23, 0.07, 0.02
	Linux kubernetes-upgrade-470019 5.10.207 #1 SMP Sat Sep 14 07:35:46 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [21355103f54d67f171017539030d7f74f7e03442123c9db8e942b4f464a1fb9e] <==
	I0914 18:02:55.919266       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0914 18:02:55.919788       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0914 18:02:55.928755       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0914 18:02:55.931027       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0914 18:02:55.931185       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0914 18:02:55.931669       1 aggregator.go:171] initial CRD sync complete...
	I0914 18:02:55.931720       1 autoregister_controller.go:144] Starting autoregister controller
	I0914 18:02:55.931745       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0914 18:02:55.931767       1 cache.go:39] Caches are synced for autoregister controller
	I0914 18:02:55.997897       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0914 18:02:56.006125       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0914 18:02:56.006158       1 policy_source.go:224] refreshing policies
	I0914 18:02:56.020156       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0914 18:02:56.020222       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0914 18:02:56.022390       1 shared_informer.go:320] Caches are synced for configmaps
	I0914 18:02:56.022438       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0914 18:02:56.024004       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	E0914 18:02:56.044989       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0914 18:02:56.827447       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0914 18:02:57.669461       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0914 18:02:57.679784       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0914 18:02:57.714777       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0914 18:02:57.856288       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0914 18:02:57.862181       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0914 18:03:00.108324       1 controller.go:615] quota admission added evaluator for: endpoints
	
	
	==> kube-apiserver [355ced2a24ec10f0d43f8508fd21dd01f7d1c67e342f2d1e74d90b49d678380c] <==
	I0914 18:02:48.112134       1 options.go:228] external host was not specified, using 192.168.72.202
	I0914 18:02:48.117553       1 server.go:142] Version: v1.31.1
	I0914 18:02:48.117610       1 server.go:144] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0914 18:02:49.086019       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0914 18:02:49.100037       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0914 18:02:49.102558       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0914 18:02:49.104069       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0914 18:02:49.104338       1 instance.go:232] Using reconciler: lease
	W0914 18:02:49.192288       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: read tcp 127.0.0.1:47402->127.0.0.1:2379: read: connection reset by peer"
	W0914 18:02:49.192288       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: read tcp 127.0.0.1:47398->127.0.0.1:2379: read: connection reset by peer"
	W0914 18:02:49.192374       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: read tcp 127.0.0.1:47396->127.0.0.1:2379: read: connection reset by peer"
	
	
	==> kube-controller-manager [0a974fa41eeea8113d023fac2aeb1c19f07b23f7815c219bd9e9a8aa42c5cef5] <==
	I0914 18:02:59.673565       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0914 18:02:59.674395       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="kubernetes-upgrade-470019"
	I0914 18:02:59.680098       1 shared_informer.go:320] Caches are synced for PVC protection
	I0914 18:02:59.680329       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="kubernetes-upgrade-470019" podCIDRs=["10.244.0.0/24"]
	I0914 18:02:59.680360       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="kubernetes-upgrade-470019"
	I0914 18:02:59.682037       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="kubernetes-upgrade-470019"
	I0914 18:02:59.685499       1 shared_informer.go:320] Caches are synced for stateful set
	I0914 18:02:59.689206       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I0914 18:02:59.706038       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I0914 18:02:59.726941       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0914 18:02:59.727048       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I0914 18:02:59.727127       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-serving
	I0914 18:02:59.727250       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-client
	I0914 18:02:59.749733       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="kubernetes-upgrade-470019"
	I0914 18:02:59.803929       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0914 18:02:59.805004       1 shared_informer.go:320] Caches are synced for disruption
	I0914 18:02:59.807211       1 shared_informer.go:320] Caches are synced for TTL after finished
	I0914 18:02:59.811559       1 shared_informer.go:320] Caches are synced for resource quota
	I0914 18:02:59.820887       1 shared_informer.go:320] Caches are synced for job
	I0914 18:02:59.823471       1 shared_informer.go:320] Caches are synced for deployment
	I0914 18:02:59.838102       1 shared_informer.go:320] Caches are synced for cronjob
	I0914 18:02:59.854759       1 shared_informer.go:320] Caches are synced for attach detach
	I0914 18:02:59.860401       1 shared_informer.go:320] Caches are synced for persistent volume
	I0914 18:02:59.861642       1 shared_informer.go:320] Caches are synced for resource quota
	I0914 18:02:59.867268       1 shared_informer.go:320] Caches are synced for PV protection
	
	
	==> kube-controller-manager [6442f50e222c1da98b87413a2faa4aaa7c26009b79fa0bf17255914de7a252a7] <==
	
	
	==> kube-scheduler [17f9d777d02b150898ab994a73c2d0f4ec5f67ed938fa387b4e6321415b08267] <==
	I0914 18:02:53.725709       1 serving.go:386] Generated self-signed cert in-memory
	W0914 18:02:55.842783       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0914 18:02:55.842944       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0914 18:02:55.842980       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0914 18:02:55.843052       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0914 18:02:55.939506       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.1"
	I0914 18:02:55.939643       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0914 18:02:55.942866       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0914 18:02:55.942913       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0914 18:02:55.943096       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0914 18:02:55.943185       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0914 18:02:56.043450       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [ef37197a0118c9e46a0d16b6c73a432d03e3cb757a969b2ae2834849c7bb9ea2] <==
	
	
	==> kubelet <==
	Sep 14 18:02:52 kubernetes-upgrade-470019 kubelet[2284]: E0914 18:02:52.187314    2284 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kubernetes-upgrade-470019?timeout=10s\": dial tcp 192.168.72.202:8443: connect: connection refused" interval="400ms"
	Sep 14 18:02:52 kubernetes-upgrade-470019 kubelet[2284]: I0914 18:02:52.367958    2284 kubelet_node_status.go:72] "Attempting to register node" node="kubernetes-upgrade-470019"
	Sep 14 18:02:52 kubernetes-upgrade-470019 kubelet[2284]: E0914 18:02:52.368744    2284 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.72.202:8443: connect: connection refused" node="kubernetes-upgrade-470019"
	Sep 14 18:02:52 kubernetes-upgrade-470019 kubelet[2284]: I0914 18:02:52.439877    2284 scope.go:117] "RemoveContainer" containerID="ef37197a0118c9e46a0d16b6c73a432d03e3cb757a969b2ae2834849c7bb9ea2"
	Sep 14 18:02:52 kubernetes-upgrade-470019 kubelet[2284]: I0914 18:02:52.440734    2284 scope.go:117] "RemoveContainer" containerID="a05491726c33541175f463e40197edc9cac6d7b8a60a1dc647bca63971785b8c"
	Sep 14 18:02:52 kubernetes-upgrade-470019 kubelet[2284]: I0914 18:02:52.443348    2284 scope.go:117] "RemoveContainer" containerID="355ced2a24ec10f0d43f8508fd21dd01f7d1c67e342f2d1e74d90b49d678380c"
	Sep 14 18:02:52 kubernetes-upgrade-470019 kubelet[2284]: I0914 18:02:52.444637    2284 scope.go:117] "RemoveContainer" containerID="6442f50e222c1da98b87413a2faa4aaa7c26009b79fa0bf17255914de7a252a7"
	Sep 14 18:02:52 kubernetes-upgrade-470019 kubelet[2284]: E0914 18:02:52.588778    2284 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kubernetes-upgrade-470019?timeout=10s\": dial tcp 192.168.72.202:8443: connect: connection refused" interval="800ms"
	Sep 14 18:02:52 kubernetes-upgrade-470019 kubelet[2284]: I0914 18:02:52.770670    2284 kubelet_node_status.go:72] "Attempting to register node" node="kubernetes-upgrade-470019"
	Sep 14 18:02:52 kubernetes-upgrade-470019 kubelet[2284]: E0914 18:02:52.771658    2284 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.72.202:8443: connect: connection refused" node="kubernetes-upgrade-470019"
	Sep 14 18:02:53 kubernetes-upgrade-470019 kubelet[2284]: I0914 18:02:53.573580    2284 kubelet_node_status.go:72] "Attempting to register node" node="kubernetes-upgrade-470019"
	Sep 14 18:02:55 kubernetes-upgrade-470019 kubelet[2284]: I0914 18:02:55.969136    2284 apiserver.go:52] "Watching apiserver"
	Sep 14 18:02:55 kubernetes-upgrade-470019 kubelet[2284]: I0914 18:02:55.981950    2284 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Sep 14 18:02:56 kubernetes-upgrade-470019 kubelet[2284]: I0914 18:02:56.080598    2284 kubelet_node_status.go:111] "Node was previously registered" node="kubernetes-upgrade-470019"
	Sep 14 18:02:56 kubernetes-upgrade-470019 kubelet[2284]: I0914 18:02:56.080807    2284 kubelet_node_status.go:75] "Successfully registered node" node="kubernetes-upgrade-470019"
	Sep 14 18:02:59 kubernetes-upgrade-470019 kubelet[2284]: I0914 18:02:59.730712    2284 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-kubernetes-upgrade-470019" podStartSLOduration=3.730687167 podStartE2EDuration="3.730687167s" podCreationTimestamp="2024-09-14 18:02:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-14 18:02:59.697338307 +0000 UTC m=+7.830445586" watchObservedRunningTime="2024-09-14 18:02:59.730687167 +0000 UTC m=+7.863794448"
	Sep 14 18:02:59 kubernetes-upgrade-470019 kubelet[2284]: I0914 18:02:59.736725    2284 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/92316d21-0187-4eb7-abd5-e1e0ade5e16d-tmp\") pod \"storage-provisioner\" (UID: \"92316d21-0187-4eb7-abd5-e1e0ade5e16d\") " pod="kube-system/storage-provisioner"
	Sep 14 18:02:59 kubernetes-upgrade-470019 kubelet[2284]: I0914 18:02:59.736801    2284 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8w6m8\" (UniqueName: \"kubernetes.io/projected/92316d21-0187-4eb7-abd5-e1e0ade5e16d-kube-api-access-8w6m8\") pod \"storage-provisioner\" (UID: \"92316d21-0187-4eb7-abd5-e1e0ade5e16d\") " pod="kube-system/storage-provisioner"
	Sep 14 18:02:59 kubernetes-upgrade-470019 kubelet[2284]: E0914 18:02:59.843204    2284 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Sep 14 18:02:59 kubernetes-upgrade-470019 kubelet[2284]: E0914 18:02:59.843255    2284 projected.go:194] Error preparing data for projected volume kube-api-access-8w6m8 for pod kube-system/storage-provisioner: configmap "kube-root-ca.crt" not found
	Sep 14 18:02:59 kubernetes-upgrade-470019 kubelet[2284]: E0914 18:02:59.843329    2284 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/92316d21-0187-4eb7-abd5-e1e0ade5e16d-kube-api-access-8w6m8 podName:92316d21-0187-4eb7-abd5-e1e0ade5e16d nodeName:}" failed. No retries permitted until 2024-09-14 18:03:00.34330328 +0000 UTC m=+8.476410540 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-8w6m8" (UniqueName: "kubernetes.io/projected/92316d21-0187-4eb7-abd5-e1e0ade5e16d-kube-api-access-8w6m8") pod "storage-provisioner" (UID: "92316d21-0187-4eb7-abd5-e1e0ade5e16d") : configmap "kube-root-ca.crt" not found
	Sep 14 18:03:00 kubernetes-upgrade-470019 kubelet[2284]: I0914 18:03:00.339489    2284 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/de4a2022-f839-47f7-8ebf-8d1b7ddd6a7a-kube-proxy\") pod \"kube-proxy-wwzdb\" (UID: \"de4a2022-f839-47f7-8ebf-8d1b7ddd6a7a\") " pod="kube-system/kube-proxy-wwzdb"
	Sep 14 18:03:00 kubernetes-upgrade-470019 kubelet[2284]: I0914 18:03:00.339540    2284 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/de4a2022-f839-47f7-8ebf-8d1b7ddd6a7a-xtables-lock\") pod \"kube-proxy-wwzdb\" (UID: \"de4a2022-f839-47f7-8ebf-8d1b7ddd6a7a\") " pod="kube-system/kube-proxy-wwzdb"
	Sep 14 18:03:00 kubernetes-upgrade-470019 kubelet[2284]: I0914 18:03:00.339556    2284 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/de4a2022-f839-47f7-8ebf-8d1b7ddd6a7a-lib-modules\") pod \"kube-proxy-wwzdb\" (UID: \"de4a2022-f839-47f7-8ebf-8d1b7ddd6a7a\") " pod="kube-system/kube-proxy-wwzdb"
	Sep 14 18:03:00 kubernetes-upgrade-470019 kubelet[2284]: I0914 18:03:00.339587    2284 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j6ct6\" (UniqueName: \"kubernetes.io/projected/de4a2022-f839-47f7-8ebf-8d1b7ddd6a7a-kube-api-access-j6ct6\") pod \"kube-proxy-wwzdb\" (UID: \"de4a2022-f839-47f7-8ebf-8d1b7ddd6a7a\") " pod="kube-system/kube-proxy-wwzdb"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0914 18:02:59.188583   61520 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19643-8806/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p kubernetes-upgrade-470019 -n kubernetes-upgrade-470019
helpers_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-470019 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: coredns-7c65d6cfc9-cs2v8 coredns-7c65d6cfc9-slw4m kube-proxy-wwzdb storage-provisioner
helpers_test.go:274: ======> post-mortem[TestKubernetesUpgrade]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context kubernetes-upgrade-470019 describe pod coredns-7c65d6cfc9-cs2v8 coredns-7c65d6cfc9-slw4m kube-proxy-wwzdb storage-provisioner
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context kubernetes-upgrade-470019 describe pod coredns-7c65d6cfc9-cs2v8 coredns-7c65d6cfc9-slw4m kube-proxy-wwzdb storage-provisioner: exit status 1 (77.186104ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-7c65d6cfc9-cs2v8" not found
	Error from server (NotFound): pods "coredns-7c65d6cfc9-slw4m" not found
	Error from server (NotFound): pods "kube-proxy-wwzdb" not found
	Error from server (NotFound): pods "storage-provisioner" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context kubernetes-upgrade-470019 describe pod coredns-7c65d6cfc9-cs2v8 coredns-7c65d6cfc9-slw4m kube-proxy-wwzdb storage-provisioner: exit status 1
helpers_test.go:175: Cleaning up "kubernetes-upgrade-470019" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-470019
--- FAIL: TestKubernetesUpgrade (359.64s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (269.13s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-556121 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0
E0914 17:58:48.015412   16016 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/functional-764671/client.crt: no such file or directory" logger="UnhandledError"
E0914 17:59:04.946525   16016 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/functional-764671/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p old-k8s-version-556121 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0: exit status 109 (4m28.86159741s)

                                                
                                                
-- stdout --
	* [old-k8s-version-556121] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19643
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19643-8806/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19643-8806/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "old-k8s-version-556121" primary control-plane node in "old-k8s-version-556121" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0914 17:58:46.248253   58869 out.go:345] Setting OutFile to fd 1 ...
	I0914 17:58:46.248569   58869 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 17:58:46.248580   58869 out.go:358] Setting ErrFile to fd 2...
	I0914 17:58:46.248586   58869 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 17:58:46.248804   58869 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19643-8806/.minikube/bin
	I0914 17:58:46.249440   58869 out.go:352] Setting JSON to false
	I0914 17:58:46.250526   58869 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":6070,"bootTime":1726330656,"procs":211,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0914 17:58:46.250634   58869 start.go:139] virtualization: kvm guest
	I0914 17:58:46.253110   58869 out.go:177] * [old-k8s-version-556121] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0914 17:58:46.254623   58869 notify.go:220] Checking for updates...
	I0914 17:58:46.254639   58869 out.go:177]   - MINIKUBE_LOCATION=19643
	I0914 17:58:46.255872   58869 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0914 17:58:46.257119   58869 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19643-8806/kubeconfig
	I0914 17:58:46.258535   58869 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19643-8806/.minikube
	I0914 17:58:46.259842   58869 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0914 17:58:46.261127   58869 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0914 17:58:46.262978   58869 config.go:182] Loaded profile config "cert-expiration-724454": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0914 17:58:46.263116   58869 config.go:182] Loaded profile config "kubernetes-upgrade-470019": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0914 17:58:46.263264   58869 config.go:182] Loaded profile config "stopped-upgrade-319416": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.1
	I0914 17:58:46.263378   58869 driver.go:394] Setting default libvirt URI to qemu:///system
	I0914 17:58:46.303121   58869 out.go:177] * Using the kvm2 driver based on user configuration
	I0914 17:58:46.304469   58869 start.go:297] selected driver: kvm2
	I0914 17:58:46.304484   58869 start.go:901] validating driver "kvm2" against <nil>
	I0914 17:58:46.304500   58869 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0914 17:58:46.305205   58869 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 17:58:46.305288   58869 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19643-8806/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0914 17:58:46.321191   58869 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0914 17:58:46.321249   58869 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0914 17:58:46.321540   58869 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0914 17:58:46.321587   58869 cni.go:84] Creating CNI manager for ""
	I0914 17:58:46.321644   58869 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0914 17:58:46.321661   58869 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0914 17:58:46.321740   58869 start.go:340] cluster config:
	{Name:old-k8s-version-556121 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726281268-19643@sha256:cce8be0c1ac4e3d852132008ef1cc1dcf5b79f708d025db83f146ae65db32e8e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-556121 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0914 17:58:46.321893   58869 iso.go:125] acquiring lock: {Name:mk538f7a4abc7956f82511183088cbfac1e66ebb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 17:58:46.324006   58869 out.go:177] * Starting "old-k8s-version-556121" primary control-plane node in "old-k8s-version-556121" cluster
	I0914 17:58:46.325279   58869 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0914 17:58:46.325339   58869 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19643-8806/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0914 17:58:46.325364   58869 cache.go:56] Caching tarball of preloaded images
	I0914 17:58:46.325482   58869 preload.go:172] Found /home/jenkins/minikube-integration/19643-8806/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0914 17:58:46.325497   58869 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0914 17:58:46.325646   58869 profile.go:143] Saving config to /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/old-k8s-version-556121/config.json ...
	I0914 17:58:46.325686   58869 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/old-k8s-version-556121/config.json: {Name:mka2de300a20de90b964d8913e88dc862d0ec6c8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 17:58:46.325904   58869 start.go:360] acquireMachinesLock for old-k8s-version-556121: {Name:mk26748ac63472c3f8b6f3848c12e76160c2970c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0914 17:58:46.325958   58869 start.go:364] duration metric: took 29.115µs to acquireMachinesLock for "old-k8s-version-556121"
	I0914 17:58:46.325985   58869 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-556121 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19643/minikube-v1.34.0-1726281733-19643-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726281268-19643@sha256:cce8be0c1ac4e3d852132008ef1cc1dcf5b79f708d025db83f146ae65db32e8e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-556121 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0914 17:58:46.326073   58869 start.go:125] createHost starting for "" (driver="kvm2")
	I0914 17:58:46.328574   58869 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0914 17:58:46.328781   58869 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 17:58:46.328837   58869 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 17:58:46.344454   58869 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41631
	I0914 17:58:46.344944   58869 main.go:141] libmachine: () Calling .GetVersion
	I0914 17:58:46.345633   58869 main.go:141] libmachine: Using API Version  1
	I0914 17:58:46.345663   58869 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 17:58:46.346097   58869 main.go:141] libmachine: () Calling .GetMachineName
	I0914 17:58:46.346344   58869 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetMachineName
	I0914 17:58:46.346525   58869 main.go:141] libmachine: (old-k8s-version-556121) Calling .DriverName
	I0914 17:58:46.346691   58869 start.go:159] libmachine.API.Create for "old-k8s-version-556121" (driver="kvm2")
	I0914 17:58:46.346725   58869 client.go:168] LocalClient.Create starting
	I0914 17:58:46.346754   58869 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca.pem
	I0914 17:58:46.346789   58869 main.go:141] libmachine: Decoding PEM data...
	I0914 17:58:46.346806   58869 main.go:141] libmachine: Parsing certificate...
	I0914 17:58:46.346855   58869 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19643-8806/.minikube/certs/cert.pem
	I0914 17:58:46.346874   58869 main.go:141] libmachine: Decoding PEM data...
	I0914 17:58:46.346887   58869 main.go:141] libmachine: Parsing certificate...
	I0914 17:58:46.346906   58869 main.go:141] libmachine: Running pre-create checks...
	I0914 17:58:46.346920   58869 main.go:141] libmachine: (old-k8s-version-556121) Calling .PreCreateCheck
	I0914 17:58:46.347348   58869 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetConfigRaw
	I0914 17:58:46.347710   58869 main.go:141] libmachine: Creating machine...
	I0914 17:58:46.347723   58869 main.go:141] libmachine: (old-k8s-version-556121) Calling .Create
	I0914 17:58:46.347855   58869 main.go:141] libmachine: (old-k8s-version-556121) Creating KVM machine...
	I0914 17:58:46.349021   58869 main.go:141] libmachine: (old-k8s-version-556121) DBG | found existing default KVM network
	I0914 17:58:46.351684   58869 main.go:141] libmachine: (old-k8s-version-556121) DBG | I0914 17:58:46.351485   58892 network.go:209] skipping subnet 192.168.39.0/24 that is reserved: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0914 17:58:46.352444   58869 main.go:141] libmachine: (old-k8s-version-556121) DBG | I0914 17:58:46.352364   58892 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:51:2b:d7} reservation:<nil>}
	I0914 17:58:46.353250   58869 main.go:141] libmachine: (old-k8s-version-556121) DBG | I0914 17:58:46.353195   58892 network.go:211] skipping subnet 192.168.61.0/24 that is taken: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName:virbr3 IfaceIPv4:192.168.61.1 IfaceMTU:1500 IfaceMAC:52:54:00:2b:09:15} reservation:<nil>}
	I0914 17:58:46.353979   58869 main.go:141] libmachine: (old-k8s-version-556121) DBG | I0914 17:58:46.353919   58892 network.go:211] skipping subnet 192.168.72.0/24 that is taken: &{IP:192.168.72.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.72.0/24 Gateway:192.168.72.1 ClientMin:192.168.72.2 ClientMax:192.168.72.254 Broadcast:192.168.72.255 IsPrivate:true Interface:{IfaceName:virbr4 IfaceIPv4:192.168.72.1 IfaceMTU:1500 IfaceMAC:52:54:00:13:eb:94} reservation:<nil>}
	I0914 17:58:46.355201   58869 main.go:141] libmachine: (old-k8s-version-556121) DBG | I0914 17:58:46.355080   58892 network.go:206] using free private subnet 192.168.83.0/24: &{IP:192.168.83.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.83.0/24 Gateway:192.168.83.1 ClientMin:192.168.83.2 ClientMax:192.168.83.254 Broadcast:192.168.83.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00042c610}
	I0914 17:58:46.355234   58869 main.go:141] libmachine: (old-k8s-version-556121) DBG | created network xml: 
	I0914 17:58:46.355247   58869 main.go:141] libmachine: (old-k8s-version-556121) DBG | <network>
	I0914 17:58:46.355263   58869 main.go:141] libmachine: (old-k8s-version-556121) DBG |   <name>mk-old-k8s-version-556121</name>
	I0914 17:58:46.355276   58869 main.go:141] libmachine: (old-k8s-version-556121) DBG |   <dns enable='no'/>
	I0914 17:58:46.355285   58869 main.go:141] libmachine: (old-k8s-version-556121) DBG |   
	I0914 17:58:46.355302   58869 main.go:141] libmachine: (old-k8s-version-556121) DBG |   <ip address='192.168.83.1' netmask='255.255.255.0'>
	I0914 17:58:46.355322   58869 main.go:141] libmachine: (old-k8s-version-556121) DBG |     <dhcp>
	I0914 17:58:46.355347   58869 main.go:141] libmachine: (old-k8s-version-556121) DBG |       <range start='192.168.83.2' end='192.168.83.253'/>
	I0914 17:58:46.355364   58869 main.go:141] libmachine: (old-k8s-version-556121) DBG |     </dhcp>
	I0914 17:58:46.355374   58869 main.go:141] libmachine: (old-k8s-version-556121) DBG |   </ip>
	I0914 17:58:46.355381   58869 main.go:141] libmachine: (old-k8s-version-556121) DBG |   
	I0914 17:58:46.355390   58869 main.go:141] libmachine: (old-k8s-version-556121) DBG | </network>
	I0914 17:58:46.355399   58869 main.go:141] libmachine: (old-k8s-version-556121) DBG | 
	I0914 17:58:46.361028   58869 main.go:141] libmachine: (old-k8s-version-556121) DBG | trying to create private KVM network mk-old-k8s-version-556121 192.168.83.0/24...
	I0914 17:58:46.433621   58869 main.go:141] libmachine: (old-k8s-version-556121) DBG | private KVM network mk-old-k8s-version-556121 192.168.83.0/24 created
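At this point the kvm2 driver has defined and started a persistent libvirt network for the profile, using the XML printed above. As a rough sketch only, assuming virsh is installed on the test host and the caller has access to qemu:///system, the same network can be inspected directly:

    virsh --connect qemu:///system net-list --all                           # mk-old-k8s-version-556121 should appear as active
    virsh --connect qemu:///system net-dumpxml mk-old-k8s-version-556121    # prints the network XML the driver generated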
	I0914 17:58:46.433699   58869 main.go:141] libmachine: (old-k8s-version-556121) Setting up store path in /home/jenkins/minikube-integration/19643-8806/.minikube/machines/old-k8s-version-556121 ...
	I0914 17:58:46.433774   58869 main.go:141] libmachine: (old-k8s-version-556121) Building disk image from file:///home/jenkins/minikube-integration/19643-8806/.minikube/cache/iso/amd64/minikube-v1.34.0-1726281733-19643-amd64.iso
	I0914 17:58:46.433888   58869 main.go:141] libmachine: (old-k8s-version-556121) DBG | I0914 17:58:46.433578   58892 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19643-8806/.minikube
	I0914 17:58:46.434532   58869 main.go:141] libmachine: (old-k8s-version-556121) Downloading /home/jenkins/minikube-integration/19643-8806/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19643-8806/.minikube/cache/iso/amd64/minikube-v1.34.0-1726281733-19643-amd64.iso...
	I0914 17:58:46.685468   58869 main.go:141] libmachine: (old-k8s-version-556121) DBG | I0914 17:58:46.685300   58892 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19643-8806/.minikube/machines/old-k8s-version-556121/id_rsa...
	I0914 17:58:46.826926   58869 main.go:141] libmachine: (old-k8s-version-556121) DBG | I0914 17:58:46.826790   58892 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19643-8806/.minikube/machines/old-k8s-version-556121/old-k8s-version-556121.rawdisk...
	I0914 17:58:46.826958   58869 main.go:141] libmachine: (old-k8s-version-556121) DBG | Writing magic tar header
	I0914 17:58:46.826976   58869 main.go:141] libmachine: (old-k8s-version-556121) DBG | Writing SSH key tar header
	I0914 17:58:46.826990   58869 main.go:141] libmachine: (old-k8s-version-556121) DBG | I0914 17:58:46.826893   58892 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19643-8806/.minikube/machines/old-k8s-version-556121 ...
	I0914 17:58:46.827005   58869 main.go:141] libmachine: (old-k8s-version-556121) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19643-8806/.minikube/machines/old-k8s-version-556121
	I0914 17:58:46.827019   58869 main.go:141] libmachine: (old-k8s-version-556121) Setting executable bit set on /home/jenkins/minikube-integration/19643-8806/.minikube/machines/old-k8s-version-556121 (perms=drwx------)
	I0914 17:58:46.827033   58869 main.go:141] libmachine: (old-k8s-version-556121) Setting executable bit set on /home/jenkins/minikube-integration/19643-8806/.minikube/machines (perms=drwxr-xr-x)
	I0914 17:58:46.827046   58869 main.go:141] libmachine: (old-k8s-version-556121) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19643-8806/.minikube/machines
	I0914 17:58:46.827058   58869 main.go:141] libmachine: (old-k8s-version-556121) Setting executable bit set on /home/jenkins/minikube-integration/19643-8806/.minikube (perms=drwxr-xr-x)
	I0914 17:58:46.827073   58869 main.go:141] libmachine: (old-k8s-version-556121) Setting executable bit set on /home/jenkins/minikube-integration/19643-8806 (perms=drwxrwxr-x)
	I0914 17:58:46.827106   58869 main.go:141] libmachine: (old-k8s-version-556121) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0914 17:58:46.827119   58869 main.go:141] libmachine: (old-k8s-version-556121) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19643-8806/.minikube
	I0914 17:58:46.827129   58869 main.go:141] libmachine: (old-k8s-version-556121) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19643-8806
	I0914 17:58:46.827138   58869 main.go:141] libmachine: (old-k8s-version-556121) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0914 17:58:46.827148   58869 main.go:141] libmachine: (old-k8s-version-556121) Creating domain...
	I0914 17:58:46.827158   58869 main.go:141] libmachine: (old-k8s-version-556121) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0914 17:58:46.827167   58869 main.go:141] libmachine: (old-k8s-version-556121) DBG | Checking permissions on dir: /home/jenkins
	I0914 17:58:46.827181   58869 main.go:141] libmachine: (old-k8s-version-556121) DBG | Checking permissions on dir: /home
	I0914 17:58:46.827191   58869 main.go:141] libmachine: (old-k8s-version-556121) DBG | Skipping /home - not owner
	I0914 17:58:46.828505   58869 main.go:141] libmachine: (old-k8s-version-556121) define libvirt domain using xml: 
	I0914 17:58:46.828526   58869 main.go:141] libmachine: (old-k8s-version-556121) <domain type='kvm'>
	I0914 17:58:46.828535   58869 main.go:141] libmachine: (old-k8s-version-556121)   <name>old-k8s-version-556121</name>
	I0914 17:58:46.828544   58869 main.go:141] libmachine: (old-k8s-version-556121)   <memory unit='MiB'>2200</memory>
	I0914 17:58:46.828550   58869 main.go:141] libmachine: (old-k8s-version-556121)   <vcpu>2</vcpu>
	I0914 17:58:46.828556   58869 main.go:141] libmachine: (old-k8s-version-556121)   <features>
	I0914 17:58:46.828563   58869 main.go:141] libmachine: (old-k8s-version-556121)     <acpi/>
	I0914 17:58:46.828569   58869 main.go:141] libmachine: (old-k8s-version-556121)     <apic/>
	I0914 17:58:46.828576   58869 main.go:141] libmachine: (old-k8s-version-556121)     <pae/>
	I0914 17:58:46.828584   58869 main.go:141] libmachine: (old-k8s-version-556121)     
	I0914 17:58:46.828595   58869 main.go:141] libmachine: (old-k8s-version-556121)   </features>
	I0914 17:58:46.828603   58869 main.go:141] libmachine: (old-k8s-version-556121)   <cpu mode='host-passthrough'>
	I0914 17:58:46.828612   58869 main.go:141] libmachine: (old-k8s-version-556121)   
	I0914 17:58:46.828624   58869 main.go:141] libmachine: (old-k8s-version-556121)   </cpu>
	I0914 17:58:46.828653   58869 main.go:141] libmachine: (old-k8s-version-556121)   <os>
	I0914 17:58:46.828677   58869 main.go:141] libmachine: (old-k8s-version-556121)     <type>hvm</type>
	I0914 17:58:46.828686   58869 main.go:141] libmachine: (old-k8s-version-556121)     <boot dev='cdrom'/>
	I0914 17:58:46.828696   58869 main.go:141] libmachine: (old-k8s-version-556121)     <boot dev='hd'/>
	I0914 17:58:46.828705   58869 main.go:141] libmachine: (old-k8s-version-556121)     <bootmenu enable='no'/>
	I0914 17:58:46.828713   58869 main.go:141] libmachine: (old-k8s-version-556121)   </os>
	I0914 17:58:46.828720   58869 main.go:141] libmachine: (old-k8s-version-556121)   <devices>
	I0914 17:58:46.828730   58869 main.go:141] libmachine: (old-k8s-version-556121)     <disk type='file' device='cdrom'>
	I0914 17:58:46.828745   58869 main.go:141] libmachine: (old-k8s-version-556121)       <source file='/home/jenkins/minikube-integration/19643-8806/.minikube/machines/old-k8s-version-556121/boot2docker.iso'/>
	I0914 17:58:46.828759   58869 main.go:141] libmachine: (old-k8s-version-556121)       <target dev='hdc' bus='scsi'/>
	I0914 17:58:46.828847   58869 main.go:141] libmachine: (old-k8s-version-556121)       <readonly/>
	I0914 17:58:46.828883   58869 main.go:141] libmachine: (old-k8s-version-556121)     </disk>
	I0914 17:58:46.828906   58869 main.go:141] libmachine: (old-k8s-version-556121)     <disk type='file' device='disk'>
	I0914 17:58:46.828926   58869 main.go:141] libmachine: (old-k8s-version-556121)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0914 17:58:46.828966   58869 main.go:141] libmachine: (old-k8s-version-556121)       <source file='/home/jenkins/minikube-integration/19643-8806/.minikube/machines/old-k8s-version-556121/old-k8s-version-556121.rawdisk'/>
	I0914 17:58:46.828978   58869 main.go:141] libmachine: (old-k8s-version-556121)       <target dev='hda' bus='virtio'/>
	I0914 17:58:46.828986   58869 main.go:141] libmachine: (old-k8s-version-556121)     </disk>
	I0914 17:58:46.829001   58869 main.go:141] libmachine: (old-k8s-version-556121)     <interface type='network'>
	I0914 17:58:46.829014   58869 main.go:141] libmachine: (old-k8s-version-556121)       <source network='mk-old-k8s-version-556121'/>
	I0914 17:58:46.829024   58869 main.go:141] libmachine: (old-k8s-version-556121)       <model type='virtio'/>
	I0914 17:58:46.829035   58869 main.go:141] libmachine: (old-k8s-version-556121)     </interface>
	I0914 17:58:46.829045   58869 main.go:141] libmachine: (old-k8s-version-556121)     <interface type='network'>
	I0914 17:58:46.829055   58869 main.go:141] libmachine: (old-k8s-version-556121)       <source network='default'/>
	I0914 17:58:46.829070   58869 main.go:141] libmachine: (old-k8s-version-556121)       <model type='virtio'/>
	I0914 17:58:46.829082   58869 main.go:141] libmachine: (old-k8s-version-556121)     </interface>
	I0914 17:58:46.829096   58869 main.go:141] libmachine: (old-k8s-version-556121)     <serial type='pty'>
	I0914 17:58:46.829117   58869 main.go:141] libmachine: (old-k8s-version-556121)       <target port='0'/>
	I0914 17:58:46.829129   58869 main.go:141] libmachine: (old-k8s-version-556121)     </serial>
	I0914 17:58:46.829139   58869 main.go:141] libmachine: (old-k8s-version-556121)     <console type='pty'>
	I0914 17:58:46.829148   58869 main.go:141] libmachine: (old-k8s-version-556121)       <target type='serial' port='0'/>
	I0914 17:58:46.829159   58869 main.go:141] libmachine: (old-k8s-version-556121)     </console>
	I0914 17:58:46.829166   58869 main.go:141] libmachine: (old-k8s-version-556121)     <rng model='virtio'>
	I0914 17:58:46.829181   58869 main.go:141] libmachine: (old-k8s-version-556121)       <backend model='random'>/dev/random</backend>
	I0914 17:58:46.829200   58869 main.go:141] libmachine: (old-k8s-version-556121)     </rng>
	I0914 17:58:46.829225   58869 main.go:141] libmachine: (old-k8s-version-556121)     
	I0914 17:58:46.829238   58869 main.go:141] libmachine: (old-k8s-version-556121)     
	I0914 17:58:46.829249   58869 main.go:141] libmachine: (old-k8s-version-556121)   </devices>
	I0914 17:58:46.829261   58869 main.go:141] libmachine: (old-k8s-version-556121) </domain>
	I0914 17:58:46.829271   58869 main.go:141] libmachine: (old-k8s-version-556121) 
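The domain XML above is what libmachine hands to libvirt when it defines the guest. A quick way to confirm the definition from the host, again assuming virsh access to qemu:///system, is:

    virsh --connect qemu:///system dominfo old-k8s-version-556121      # state, vCPU count, memory
    virsh --connect qemu:///system domiflist old-k8s-version-556121    # both NICs: mk-old-k8s-version-556121 and default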
	I0914 17:58:46.833694   58869 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined MAC address 52:54:00:e6:18:3c in network default
	I0914 17:58:46.834361   58869 main.go:141] libmachine: (old-k8s-version-556121) Ensuring networks are active...
	I0914 17:58:46.834385   58869 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 17:58:46.834962   58869 main.go:141] libmachine: (old-k8s-version-556121) Ensuring network default is active
	I0914 17:58:46.835309   58869 main.go:141] libmachine: (old-k8s-version-556121) Ensuring network mk-old-k8s-version-556121 is active
	I0914 17:58:46.836020   58869 main.go:141] libmachine: (old-k8s-version-556121) Getting domain xml...
	I0914 17:58:46.836723   58869 main.go:141] libmachine: (old-k8s-version-556121) Creating domain...
	I0914 17:58:48.091663   58869 main.go:141] libmachine: (old-k8s-version-556121) Waiting to get IP...
	I0914 17:58:48.092646   58869 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 17:58:48.093187   58869 main.go:141] libmachine: (old-k8s-version-556121) DBG | unable to find current IP address of domain old-k8s-version-556121 in network mk-old-k8s-version-556121
	I0914 17:58:48.093211   58869 main.go:141] libmachine: (old-k8s-version-556121) DBG | I0914 17:58:48.093158   58892 retry.go:31] will retry after 211.891454ms: waiting for machine to come up
	I0914 17:58:48.306628   58869 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 17:58:48.307195   58869 main.go:141] libmachine: (old-k8s-version-556121) DBG | unable to find current IP address of domain old-k8s-version-556121 in network mk-old-k8s-version-556121
	I0914 17:58:48.307226   58869 main.go:141] libmachine: (old-k8s-version-556121) DBG | I0914 17:58:48.307169   58892 retry.go:31] will retry after 370.208351ms: waiting for machine to come up
	I0914 17:58:48.678687   58869 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 17:58:48.679267   58869 main.go:141] libmachine: (old-k8s-version-556121) DBG | unable to find current IP address of domain old-k8s-version-556121 in network mk-old-k8s-version-556121
	I0914 17:58:48.679299   58869 main.go:141] libmachine: (old-k8s-version-556121) DBG | I0914 17:58:48.679213   58892 retry.go:31] will retry after 334.144923ms: waiting for machine to come up
	I0914 17:58:49.014836   58869 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 17:58:49.015347   58869 main.go:141] libmachine: (old-k8s-version-556121) DBG | unable to find current IP address of domain old-k8s-version-556121 in network mk-old-k8s-version-556121
	I0914 17:58:49.015374   58869 main.go:141] libmachine: (old-k8s-version-556121) DBG | I0914 17:58:49.015280   58892 retry.go:31] will retry after 548.247462ms: waiting for machine to come up
	I0914 17:58:49.565051   58869 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 17:58:49.565492   58869 main.go:141] libmachine: (old-k8s-version-556121) DBG | unable to find current IP address of domain old-k8s-version-556121 in network mk-old-k8s-version-556121
	I0914 17:58:49.565521   58869 main.go:141] libmachine: (old-k8s-version-556121) DBG | I0914 17:58:49.565433   58892 retry.go:31] will retry after 595.859425ms: waiting for machine to come up
	I0914 17:58:50.163319   58869 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 17:58:50.164015   58869 main.go:141] libmachine: (old-k8s-version-556121) DBG | unable to find current IP address of domain old-k8s-version-556121 in network mk-old-k8s-version-556121
	I0914 17:58:50.164042   58869 main.go:141] libmachine: (old-k8s-version-556121) DBG | I0914 17:58:50.163957   58892 retry.go:31] will retry after 906.266357ms: waiting for machine to come up
	I0914 17:58:51.071872   58869 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 17:58:51.072323   58869 main.go:141] libmachine: (old-k8s-version-556121) DBG | unable to find current IP address of domain old-k8s-version-556121 in network mk-old-k8s-version-556121
	I0914 17:58:51.072355   58869 main.go:141] libmachine: (old-k8s-version-556121) DBG | I0914 17:58:51.072279   58892 retry.go:31] will retry after 758.956539ms: waiting for machine to come up
	I0914 17:58:51.832720   58869 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 17:58:51.833298   58869 main.go:141] libmachine: (old-k8s-version-556121) DBG | unable to find current IP address of domain old-k8s-version-556121 in network mk-old-k8s-version-556121
	I0914 17:58:51.833328   58869 main.go:141] libmachine: (old-k8s-version-556121) DBG | I0914 17:58:51.833270   58892 retry.go:31] will retry after 1.07773386s: waiting for machine to come up
	I0914 17:58:52.912835   58869 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 17:58:52.913490   58869 main.go:141] libmachine: (old-k8s-version-556121) DBG | unable to find current IP address of domain old-k8s-version-556121 in network mk-old-k8s-version-556121
	I0914 17:58:52.913519   58869 main.go:141] libmachine: (old-k8s-version-556121) DBG | I0914 17:58:52.913422   58892 retry.go:31] will retry after 1.489546928s: waiting for machine to come up
	I0914 17:58:54.405292   58869 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 17:58:54.405841   58869 main.go:141] libmachine: (old-k8s-version-556121) DBG | unable to find current IP address of domain old-k8s-version-556121 in network mk-old-k8s-version-556121
	I0914 17:58:54.405862   58869 main.go:141] libmachine: (old-k8s-version-556121) DBG | I0914 17:58:54.405794   58892 retry.go:31] will retry after 1.518422746s: waiting for machine to come up
	I0914 17:58:55.926086   58869 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 17:58:55.926695   58869 main.go:141] libmachine: (old-k8s-version-556121) DBG | unable to find current IP address of domain old-k8s-version-556121 in network mk-old-k8s-version-556121
	I0914 17:58:55.926753   58869 main.go:141] libmachine: (old-k8s-version-556121) DBG | I0914 17:58:55.926653   58892 retry.go:31] will retry after 2.571567034s: waiting for machine to come up
	I0914 17:58:58.500141   58869 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 17:58:58.500735   58869 main.go:141] libmachine: (old-k8s-version-556121) DBG | unable to find current IP address of domain old-k8s-version-556121 in network mk-old-k8s-version-556121
	I0914 17:58:58.500759   58869 main.go:141] libmachine: (old-k8s-version-556121) DBG | I0914 17:58:58.500681   58892 retry.go:31] will retry after 2.475374693s: waiting for machine to come up
	I0914 17:59:00.977288   58869 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 17:59:00.977811   58869 main.go:141] libmachine: (old-k8s-version-556121) DBG | unable to find current IP address of domain old-k8s-version-556121 in network mk-old-k8s-version-556121
	I0914 17:59:00.977838   58869 main.go:141] libmachine: (old-k8s-version-556121) DBG | I0914 17:59:00.977776   58892 retry.go:31] will retry after 2.747111234s: waiting for machine to come up
	I0914 17:59:03.728332   58869 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 17:59:03.728899   58869 main.go:141] libmachine: (old-k8s-version-556121) DBG | unable to find current IP address of domain old-k8s-version-556121 in network mk-old-k8s-version-556121
	I0914 17:59:03.728919   58869 main.go:141] libmachine: (old-k8s-version-556121) DBG | I0914 17:59:03.728865   58892 retry.go:31] will retry after 4.748088658s: waiting for machine to come up
	I0914 17:59:08.479466   58869 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 17:59:08.479988   58869 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has current primary IP address 192.168.83.80 and MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 17:59:08.480006   58869 main.go:141] libmachine: (old-k8s-version-556121) Found IP for machine: 192.168.83.80
	I0914 17:59:08.480015   58869 main.go:141] libmachine: (old-k8s-version-556121) Reserving static IP address...
	I0914 17:59:08.480540   58869 main.go:141] libmachine: (old-k8s-version-556121) DBG | unable to find host DHCP lease matching {name: "old-k8s-version-556121", mac: "52:54:00:76:25:ab", ip: "192.168.83.80"} in network mk-old-k8s-version-556121
	I0914 17:59:08.557991   58869 main.go:141] libmachine: (old-k8s-version-556121) DBG | Getting to WaitForSSH function...
	I0914 17:59:08.558020   58869 main.go:141] libmachine: (old-k8s-version-556121) Reserved static IP address: 192.168.83.80
	I0914 17:59:08.558031   58869 main.go:141] libmachine: (old-k8s-version-556121) Waiting for SSH to be available...
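The "waiting for machine to come up" retries above are the driver polling libvirt's DHCP lease table until the guest's MAC address 52:54:00:76:25:ab acquires an address. The same lease table can be read directly on the host, assuming virsh is available:

    virsh --connect qemu:///system net-dhcp-leases mk-old-k8s-version-556121    # should show 192.168.83.80 for 52:54:00:76:25:ab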
	I0914 17:59:08.560719   58869 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 17:59:08.561142   58869 main.go:141] libmachine: (old-k8s-version-556121) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:25:ab", ip: ""} in network mk-old-k8s-version-556121: {Iface:virbr1 ExpiryTime:2024-09-14 18:59:00 +0000 UTC Type:0 Mac:52:54:00:76:25:ab Iaid: IPaddr:192.168.83.80 Prefix:24 Hostname:minikube Clientid:01:52:54:00:76:25:ab}
	I0914 17:59:08.561176   58869 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined IP address 192.168.83.80 and MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 17:59:08.561275   58869 main.go:141] libmachine: (old-k8s-version-556121) DBG | Using SSH client type: external
	I0914 17:59:08.561297   58869 main.go:141] libmachine: (old-k8s-version-556121) DBG | Using SSH private key: /home/jenkins/minikube-integration/19643-8806/.minikube/machines/old-k8s-version-556121/id_rsa (-rw-------)
	I0914 17:59:08.561340   58869 main.go:141] libmachine: (old-k8s-version-556121) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.83.80 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19643-8806/.minikube/machines/old-k8s-version-556121/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0914 17:59:08.561355   58869 main.go:141] libmachine: (old-k8s-version-556121) DBG | About to run SSH command:
	I0914 17:59:08.561370   58869 main.go:141] libmachine: (old-k8s-version-556121) DBG | exit 0
	I0914 17:59:08.682256   58869 main.go:141] libmachine: (old-k8s-version-556121) DBG | SSH cmd err, output: <nil>: 
	I0914 17:59:08.682548   58869 main.go:141] libmachine: (old-k8s-version-556121) KVM machine creation complete!
	I0914 17:59:08.682850   58869 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetConfigRaw
	I0914 17:59:08.683432   58869 main.go:141] libmachine: (old-k8s-version-556121) Calling .DriverName
	I0914 17:59:08.683655   58869 main.go:141] libmachine: (old-k8s-version-556121) Calling .DriverName
	I0914 17:59:08.683846   58869 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0914 17:59:08.683863   58869 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetState
	I0914 17:59:08.685220   58869 main.go:141] libmachine: Detecting operating system of created instance...
	I0914 17:59:08.685236   58869 main.go:141] libmachine: Waiting for SSH to be available...
	I0914 17:59:08.685243   58869 main.go:141] libmachine: Getting to WaitForSSH function...
	I0914 17:59:08.685251   58869 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHHostname
	I0914 17:59:08.688277   58869 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 17:59:08.688766   58869 main.go:141] libmachine: (old-k8s-version-556121) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:25:ab", ip: ""} in network mk-old-k8s-version-556121: {Iface:virbr1 ExpiryTime:2024-09-14 18:59:00 +0000 UTC Type:0 Mac:52:54:00:76:25:ab Iaid: IPaddr:192.168.83.80 Prefix:24 Hostname:old-k8s-version-556121 Clientid:01:52:54:00:76:25:ab}
	I0914 17:59:08.688798   58869 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined IP address 192.168.83.80 and MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 17:59:08.688944   58869 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHPort
	I0914 17:59:08.689092   58869 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHKeyPath
	I0914 17:59:08.689278   58869 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHKeyPath
	I0914 17:59:08.689408   58869 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHUsername
	I0914 17:59:08.689577   58869 main.go:141] libmachine: Using SSH client type: native
	I0914 17:59:08.689839   58869 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.83.80 22 <nil> <nil>}
	I0914 17:59:08.689858   58869 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0914 17:59:08.785343   58869 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0914 17:59:08.785372   58869 main.go:141] libmachine: Detecting the provisioner...
	I0914 17:59:08.785381   58869 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHHostname
	I0914 17:59:08.788725   58869 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 17:59:08.789090   58869 main.go:141] libmachine: (old-k8s-version-556121) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:25:ab", ip: ""} in network mk-old-k8s-version-556121: {Iface:virbr1 ExpiryTime:2024-09-14 18:59:00 +0000 UTC Type:0 Mac:52:54:00:76:25:ab Iaid: IPaddr:192.168.83.80 Prefix:24 Hostname:old-k8s-version-556121 Clientid:01:52:54:00:76:25:ab}
	I0914 17:59:08.789128   58869 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined IP address 192.168.83.80 and MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 17:59:08.789325   58869 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHPort
	I0914 17:59:08.789536   58869 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHKeyPath
	I0914 17:59:08.789723   58869 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHKeyPath
	I0914 17:59:08.789858   58869 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHUsername
	I0914 17:59:08.790021   58869 main.go:141] libmachine: Using SSH client type: native
	I0914 17:59:08.790225   58869 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.83.80 22 <nil> <nil>}
	I0914 17:59:08.790239   58869 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0914 17:59:08.890861   58869 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0914 17:59:08.891001   58869 main.go:141] libmachine: found compatible host: buildroot
	I0914 17:59:08.891015   58869 main.go:141] libmachine: Provisioning with buildroot...
	I0914 17:59:08.891026   58869 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetMachineName
	I0914 17:59:08.891300   58869 buildroot.go:166] provisioning hostname "old-k8s-version-556121"
	I0914 17:59:08.891330   58869 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetMachineName
	I0914 17:59:08.891533   58869 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHHostname
	I0914 17:59:08.894829   58869 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 17:59:08.895257   58869 main.go:141] libmachine: (old-k8s-version-556121) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:25:ab", ip: ""} in network mk-old-k8s-version-556121: {Iface:virbr1 ExpiryTime:2024-09-14 18:59:00 +0000 UTC Type:0 Mac:52:54:00:76:25:ab Iaid: IPaddr:192.168.83.80 Prefix:24 Hostname:old-k8s-version-556121 Clientid:01:52:54:00:76:25:ab}
	I0914 17:59:08.895288   58869 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined IP address 192.168.83.80 and MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 17:59:08.895459   58869 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHPort
	I0914 17:59:08.895633   58869 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHKeyPath
	I0914 17:59:08.895789   58869 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHKeyPath
	I0914 17:59:08.895906   58869 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHUsername
	I0914 17:59:08.896046   58869 main.go:141] libmachine: Using SSH client type: native
	I0914 17:59:08.896245   58869 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.83.80 22 <nil> <nil>}
	I0914 17:59:08.896263   58869 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-556121 && echo "old-k8s-version-556121" | sudo tee /etc/hostname
	I0914 17:59:09.015912   58869 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-556121
	
	I0914 17:59:09.015947   58869 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHHostname
	I0914 17:59:09.019264   58869 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 17:59:09.019623   58869 main.go:141] libmachine: (old-k8s-version-556121) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:25:ab", ip: ""} in network mk-old-k8s-version-556121: {Iface:virbr1 ExpiryTime:2024-09-14 18:59:00 +0000 UTC Type:0 Mac:52:54:00:76:25:ab Iaid: IPaddr:192.168.83.80 Prefix:24 Hostname:old-k8s-version-556121 Clientid:01:52:54:00:76:25:ab}
	I0914 17:59:09.019653   58869 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined IP address 192.168.83.80 and MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 17:59:09.019852   58869 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHPort
	I0914 17:59:09.020030   58869 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHKeyPath
	I0914 17:59:09.020186   58869 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHKeyPath
	I0914 17:59:09.020401   58869 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHUsername
	I0914 17:59:09.020584   58869 main.go:141] libmachine: Using SSH client type: native
	I0914 17:59:09.020755   58869 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.83.80 22 <nil> <nil>}
	I0914 17:59:09.020770   58869 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-556121' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-556121/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-556121' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0914 17:59:09.131379   58869 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0914 17:59:09.131415   58869 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19643-8806/.minikube CaCertPath:/home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19643-8806/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19643-8806/.minikube}
	I0914 17:59:09.131454   58869 buildroot.go:174] setting up certificates
	I0914 17:59:09.131464   58869 provision.go:84] configureAuth start
	I0914 17:59:09.131473   58869 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetMachineName
	I0914 17:59:09.131778   58869 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetIP
	I0914 17:59:09.134689   58869 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 17:59:09.135145   58869 main.go:141] libmachine: (old-k8s-version-556121) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:25:ab", ip: ""} in network mk-old-k8s-version-556121: {Iface:virbr1 ExpiryTime:2024-09-14 18:59:00 +0000 UTC Type:0 Mac:52:54:00:76:25:ab Iaid: IPaddr:192.168.83.80 Prefix:24 Hostname:old-k8s-version-556121 Clientid:01:52:54:00:76:25:ab}
	I0914 17:59:09.135200   58869 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined IP address 192.168.83.80 and MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 17:59:09.135382   58869 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHHostname
	I0914 17:59:09.137829   58869 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 17:59:09.138266   58869 main.go:141] libmachine: (old-k8s-version-556121) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:25:ab", ip: ""} in network mk-old-k8s-version-556121: {Iface:virbr1 ExpiryTime:2024-09-14 18:59:00 +0000 UTC Type:0 Mac:52:54:00:76:25:ab Iaid: IPaddr:192.168.83.80 Prefix:24 Hostname:old-k8s-version-556121 Clientid:01:52:54:00:76:25:ab}
	I0914 17:59:09.138294   58869 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined IP address 192.168.83.80 and MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 17:59:09.138481   58869 provision.go:143] copyHostCerts
	I0914 17:59:09.138564   58869 exec_runner.go:144] found /home/jenkins/minikube-integration/19643-8806/.minikube/ca.pem, removing ...
	I0914 17:59:09.138590   58869 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19643-8806/.minikube/ca.pem
	I0914 17:59:09.138666   58869 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19643-8806/.minikube/ca.pem (1082 bytes)
	I0914 17:59:09.138795   58869 exec_runner.go:144] found /home/jenkins/minikube-integration/19643-8806/.minikube/cert.pem, removing ...
	I0914 17:59:09.138808   58869 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19643-8806/.minikube/cert.pem
	I0914 17:59:09.138843   58869 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19643-8806/.minikube/cert.pem (1123 bytes)
	I0914 17:59:09.138922   58869 exec_runner.go:144] found /home/jenkins/minikube-integration/19643-8806/.minikube/key.pem, removing ...
	I0914 17:59:09.138934   58869 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19643-8806/.minikube/key.pem
	I0914 17:59:09.138965   58869 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19643-8806/.minikube/key.pem (1675 bytes)
	I0914 17:59:09.139045   58869 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19643-8806/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-556121 san=[127.0.0.1 192.168.83.80 localhost minikube old-k8s-version-556121]
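The server certificate generated here is written on the test host, so its SAN list (which must include the guest IP 192.168.83.80) can be checked with openssl when provisioning failures are suspected; this is an illustrative check, not something the test itself runs:

    openssl x509 -noout -text -in /home/jenkins/minikube-integration/19643-8806/.minikube/machines/server.pem | grep -A1 'Subject Alternative Name'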
	I0914 17:59:09.365868   58869 provision.go:177] copyRemoteCerts
	I0914 17:59:09.365925   58869 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0914 17:59:09.365950   58869 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHHostname
	I0914 17:59:09.369043   58869 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 17:59:09.369426   58869 main.go:141] libmachine: (old-k8s-version-556121) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:25:ab", ip: ""} in network mk-old-k8s-version-556121: {Iface:virbr1 ExpiryTime:2024-09-14 18:59:00 +0000 UTC Type:0 Mac:52:54:00:76:25:ab Iaid: IPaddr:192.168.83.80 Prefix:24 Hostname:old-k8s-version-556121 Clientid:01:52:54:00:76:25:ab}
	I0914 17:59:09.369456   58869 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined IP address 192.168.83.80 and MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 17:59:09.369621   58869 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHPort
	I0914 17:59:09.369796   58869 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHKeyPath
	I0914 17:59:09.369922   58869 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHUsername
	I0914 17:59:09.370098   58869 sshutil.go:53] new ssh client: &{IP:192.168.83.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/old-k8s-version-556121/id_rsa Username:docker}
	I0914 17:59:09.452767   58869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0914 17:59:09.476935   58869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0914 17:59:09.502000   58869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0914 17:59:09.531885   58869 provision.go:87] duration metric: took 400.408175ms to configureAuth
	I0914 17:59:09.531934   58869 buildroot.go:189] setting minikube options for container-runtime
	I0914 17:59:09.532173   58869 config.go:182] Loaded profile config "old-k8s-version-556121": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0914 17:59:09.532278   58869 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHHostname
	I0914 17:59:09.534906   58869 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 17:59:09.535220   58869 main.go:141] libmachine: (old-k8s-version-556121) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:25:ab", ip: ""} in network mk-old-k8s-version-556121: {Iface:virbr1 ExpiryTime:2024-09-14 18:59:00 +0000 UTC Type:0 Mac:52:54:00:76:25:ab Iaid: IPaddr:192.168.83.80 Prefix:24 Hostname:old-k8s-version-556121 Clientid:01:52:54:00:76:25:ab}
	I0914 17:59:09.535245   58869 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined IP address 192.168.83.80 and MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 17:59:09.535413   58869 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHPort
	I0914 17:59:09.535587   58869 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHKeyPath
	I0914 17:59:09.535716   58869 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHKeyPath
	I0914 17:59:09.535823   58869 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHUsername
	I0914 17:59:09.535939   58869 main.go:141] libmachine: Using SSH client type: native
	I0914 17:59:09.536105   58869 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.83.80 22 <nil> <nil>}
	I0914 17:59:09.536122   58869 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0914 17:59:09.763699   58869 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0914 17:59:09.763729   58869 main.go:141] libmachine: Checking connection to Docker...
	I0914 17:59:09.763741   58869 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetURL
	I0914 17:59:09.765114   58869 main.go:141] libmachine: (old-k8s-version-556121) DBG | Using libvirt version 6000000
	I0914 17:59:09.767825   58869 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 17:59:09.768231   58869 main.go:141] libmachine: (old-k8s-version-556121) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:25:ab", ip: ""} in network mk-old-k8s-version-556121: {Iface:virbr1 ExpiryTime:2024-09-14 18:59:00 +0000 UTC Type:0 Mac:52:54:00:76:25:ab Iaid: IPaddr:192.168.83.80 Prefix:24 Hostname:old-k8s-version-556121 Clientid:01:52:54:00:76:25:ab}
	I0914 17:59:09.768289   58869 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined IP address 192.168.83.80 and MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 17:59:09.768426   58869 main.go:141] libmachine: Docker is up and running!
	I0914 17:59:09.768437   58869 main.go:141] libmachine: Reticulating splines...
	I0914 17:59:09.768443   58869 client.go:171] duration metric: took 23.42170954s to LocalClient.Create
	I0914 17:59:09.768467   58869 start.go:167] duration metric: took 23.421776036s to libmachine.API.Create "old-k8s-version-556121"
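By the time machine creation completes, the SSH step above has already written /etc/sysconfig/crio.minikube (the --insecure-registry 10.96.0.0/12 option) and restarted CRI-O inside the guest. A hypothetical manual check from the host would be something like:

    minikube -p old-k8s-version-556121 ssh -- "cat /etc/sysconfig/crio.minikube && sudo systemctl is-active crio"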
	I0914 17:59:09.768480   58869 start.go:293] postStartSetup for "old-k8s-version-556121" (driver="kvm2")
	I0914 17:59:09.768504   58869 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0914 17:59:09.768524   58869 main.go:141] libmachine: (old-k8s-version-556121) Calling .DriverName
	I0914 17:59:09.768845   58869 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0914 17:59:09.768878   58869 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHHostname
	I0914 17:59:09.771857   58869 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 17:59:09.772263   58869 main.go:141] libmachine: (old-k8s-version-556121) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:25:ab", ip: ""} in network mk-old-k8s-version-556121: {Iface:virbr1 ExpiryTime:2024-09-14 18:59:00 +0000 UTC Type:0 Mac:52:54:00:76:25:ab Iaid: IPaddr:192.168.83.80 Prefix:24 Hostname:old-k8s-version-556121 Clientid:01:52:54:00:76:25:ab}
	I0914 17:59:09.772293   58869 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined IP address 192.168.83.80 and MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 17:59:09.772525   58869 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHPort
	I0914 17:59:09.772736   58869 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHKeyPath
	I0914 17:59:09.772887   58869 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHUsername
	I0914 17:59:09.773010   58869 sshutil.go:53] new ssh client: &{IP:192.168.83.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/old-k8s-version-556121/id_rsa Username:docker}
	I0914 17:59:09.852816   58869 ssh_runner.go:195] Run: cat /etc/os-release
	I0914 17:59:09.857434   58869 info.go:137] Remote host: Buildroot 2023.02.9
	I0914 17:59:09.857466   58869 filesync.go:126] Scanning /home/jenkins/minikube-integration/19643-8806/.minikube/addons for local assets ...
	I0914 17:59:09.857545   58869 filesync.go:126] Scanning /home/jenkins/minikube-integration/19643-8806/.minikube/files for local assets ...
	I0914 17:59:09.857659   58869 filesync.go:149] local asset: /home/jenkins/minikube-integration/19643-8806/.minikube/files/etc/ssl/certs/160162.pem -> 160162.pem in /etc/ssl/certs
	I0914 17:59:09.857780   58869 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0914 17:59:09.868729   58869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/files/etc/ssl/certs/160162.pem --> /etc/ssl/certs/160162.pem (1708 bytes)
	I0914 17:59:09.897753   58869 start.go:296] duration metric: took 129.259758ms for postStartSetup
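The filesync scan above mirrors anything placed under the profile's local .minikube/files tree into the guest at the matching absolute path, which is how 160162.pem lands in /etc/ssl/certs. As a rough sketch of the same mechanism (my-extra-ca.pem is only an illustrative name, not something this run copies), an additional certificate could be staged before the profile is started:

    # files under the profile's .minikube/files/<path> are copied to /<path> in the guest
    mkdir -p ~/.minikube/files/etc/ssl/certs
    cp my-extra-ca.pem ~/.minikube/files/etc/ssl/certs/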
	I0914 17:59:09.897798   58869 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetConfigRaw
	I0914 17:59:09.898457   58869 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetIP
	I0914 17:59:09.901262   58869 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 17:59:09.901624   58869 main.go:141] libmachine: (old-k8s-version-556121) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:25:ab", ip: ""} in network mk-old-k8s-version-556121: {Iface:virbr1 ExpiryTime:2024-09-14 18:59:00 +0000 UTC Type:0 Mac:52:54:00:76:25:ab Iaid: IPaddr:192.168.83.80 Prefix:24 Hostname:old-k8s-version-556121 Clientid:01:52:54:00:76:25:ab}
	I0914 17:59:09.901658   58869 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined IP address 192.168.83.80 and MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 17:59:09.902016   58869 profile.go:143] Saving config to /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/old-k8s-version-556121/config.json ...
	I0914 17:59:09.902266   58869 start.go:128] duration metric: took 23.576180248s to createHost
	I0914 17:59:09.902303   58869 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHHostname
	I0914 17:59:09.904943   58869 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 17:59:09.905418   58869 main.go:141] libmachine: (old-k8s-version-556121) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:25:ab", ip: ""} in network mk-old-k8s-version-556121: {Iface:virbr1 ExpiryTime:2024-09-14 18:59:00 +0000 UTC Type:0 Mac:52:54:00:76:25:ab Iaid: IPaddr:192.168.83.80 Prefix:24 Hostname:old-k8s-version-556121 Clientid:01:52:54:00:76:25:ab}
	I0914 17:59:09.905562   58869 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined IP address 192.168.83.80 and MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 17:59:09.905633   58869 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHPort
	I0914 17:59:09.905838   58869 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHKeyPath
	I0914 17:59:09.906024   58869 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHKeyPath
	I0914 17:59:09.906186   58869 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHUsername
	I0914 17:59:09.906361   58869 main.go:141] libmachine: Using SSH client type: native
	I0914 17:59:09.906548   58869 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.83.80 22 <nil> <nil>}
	I0914 17:59:09.906561   58869 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0914 17:59:10.010627   58869 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726336749.969032342
	
	I0914 17:59:10.010653   58869 fix.go:216] guest clock: 1726336749.969032342
	I0914 17:59:10.010662   58869 fix.go:229] Guest: 2024-09-14 17:59:09.969032342 +0000 UTC Remote: 2024-09-14 17:59:09.902287679 +0000 UTC m=+23.697482421 (delta=66.744663ms)
	I0914 17:59:10.010688   58869 fix.go:200] guest clock delta is within tolerance: 66.744663ms
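The clock check above runs date +%s.%N over SSH and compares the guest's wall clock against the host-side reading taken just before, accepting the result when the drift stays within tolerance (66ms here). A minimal reproduction of the same comparison from the CI host might look like the following; the key path and user are taken from the sshutil line above, and the actual tolerance threshold is an internal detail not shown in this log:

    # compare guest vs host wall clock, mirroring the fix.go check
    key=/home/jenkins/minikube-integration/19643-8806/.minikube/machines/old-k8s-version-556121/id_rsa
    host_t=$(date +%s.%N)
    guest_t=$(ssh -i "$key" docker@192.168.83.80 'date +%s.%N')
    echo "delta: $(echo "$guest_t - $host_t" | bc) seconds"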
	I0914 17:59:10.010699   58869 start.go:83] releasing machines lock for "old-k8s-version-556121", held for 23.684728144s
	I0914 17:59:10.010722   58869 main.go:141] libmachine: (old-k8s-version-556121) Calling .DriverName
	I0914 17:59:10.010968   58869 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetIP
	I0914 17:59:10.013680   58869 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 17:59:10.014031   58869 main.go:141] libmachine: (old-k8s-version-556121) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:25:ab", ip: ""} in network mk-old-k8s-version-556121: {Iface:virbr1 ExpiryTime:2024-09-14 18:59:00 +0000 UTC Type:0 Mac:52:54:00:76:25:ab Iaid: IPaddr:192.168.83.80 Prefix:24 Hostname:old-k8s-version-556121 Clientid:01:52:54:00:76:25:ab}
	I0914 17:59:10.014056   58869 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined IP address 192.168.83.80 and MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 17:59:10.014226   58869 main.go:141] libmachine: (old-k8s-version-556121) Calling .DriverName
	I0914 17:59:10.014854   58869 main.go:141] libmachine: (old-k8s-version-556121) Calling .DriverName
	I0914 17:59:10.015048   58869 main.go:141] libmachine: (old-k8s-version-556121) Calling .DriverName
	I0914 17:59:10.015169   58869 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0914 17:59:10.015214   58869 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHHostname
	I0914 17:59:10.015336   58869 ssh_runner.go:195] Run: cat /version.json
	I0914 17:59:10.015365   58869 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHHostname
	I0914 17:59:10.017973   58869 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 17:59:10.018225   58869 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 17:59:10.018406   58869 main.go:141] libmachine: (old-k8s-version-556121) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:25:ab", ip: ""} in network mk-old-k8s-version-556121: {Iface:virbr1 ExpiryTime:2024-09-14 18:59:00 +0000 UTC Type:0 Mac:52:54:00:76:25:ab Iaid: IPaddr:192.168.83.80 Prefix:24 Hostname:old-k8s-version-556121 Clientid:01:52:54:00:76:25:ab}
	I0914 17:59:10.018454   58869 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined IP address 192.168.83.80 and MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 17:59:10.018585   58869 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHPort
	I0914 17:59:10.018742   58869 main.go:141] libmachine: (old-k8s-version-556121) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:25:ab", ip: ""} in network mk-old-k8s-version-556121: {Iface:virbr1 ExpiryTime:2024-09-14 18:59:00 +0000 UTC Type:0 Mac:52:54:00:76:25:ab Iaid: IPaddr:192.168.83.80 Prefix:24 Hostname:old-k8s-version-556121 Clientid:01:52:54:00:76:25:ab}
	I0914 17:59:10.018767   58869 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHKeyPath
	I0914 17:59:10.018814   58869 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined IP address 192.168.83.80 and MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 17:59:10.018930   58869 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHUsername
	I0914 17:59:10.018948   58869 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHPort
	I0914 17:59:10.019084   58869 sshutil.go:53] new ssh client: &{IP:192.168.83.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/old-k8s-version-556121/id_rsa Username:docker}
	I0914 17:59:10.019114   58869 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHKeyPath
	I0914 17:59:10.019238   58869 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHUsername
	I0914 17:59:10.019463   58869 sshutil.go:53] new ssh client: &{IP:192.168.83.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/old-k8s-version-556121/id_rsa Username:docker}
	I0914 17:59:10.129894   58869 ssh_runner.go:195] Run: systemctl --version
	I0914 17:59:10.136296   58869 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0914 17:59:10.299797   58869 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0914 17:59:10.306272   58869 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0914 17:59:10.306356   58869 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0914 17:59:10.322468   58869 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
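The find/mv pass above neutralizes any pre-existing bridge or podman CNI profiles by renaming them with a .mk_disabled suffix, so only the CNI that minikube lays down later is active; here it caught 87-podman-bridge.conflist. A short sketch for inspecting the renamed files on the guest, plus a hypothetical reverse rename that this run does not perform:

    # show what was parked
    ls -l /etc/cni/net.d/*.mk_disabled
    # hypothetical restore: strip the suffix again
    for f in /etc/cni/net.d/*.mk_disabled; do sudo mv "$f" "${f%.mk_disabled}"; done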
	I0914 17:59:10.322497   58869 start.go:495] detecting cgroup driver to use...
	I0914 17:59:10.322575   58869 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0914 17:59:10.338520   58869 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0914 17:59:10.352392   58869 docker.go:217] disabling cri-docker service (if available) ...
	I0914 17:59:10.352473   58869 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0914 17:59:10.366398   58869 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0914 17:59:10.380913   58869 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0914 17:59:10.505790   58869 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0914 17:59:10.660393   58869 docker.go:233] disabling docker service ...
	I0914 17:59:10.660494   58869 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0914 17:59:10.677139   58869 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0914 17:59:10.690467   58869 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0914 17:59:10.848041   58869 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0914 17:59:10.986028   58869 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0914 17:59:11.000510   58869 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0914 17:59:11.023864   58869 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0914 17:59:11.023928   58869 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 17:59:11.035815   58869 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0914 17:59:11.035876   58869 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 17:59:11.050311   58869 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 17:59:11.062231   58869 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
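The sed edits above point cri-o at the pause:3.2 image, switch its cgroup manager to cgroupfs, and pin conmon to the pod cgroup, all via the 02-crio.conf drop-in. Assuming each substitution matched, a quick grep of the drop-in should show the three keys as follows (expected values inferred from the sed expressions, not re-read from the guest):

    grep -E '^(pause_image|cgroup_manager|conmon_cgroup)' /etc/crio/crio.conf.d/02-crio.conf
    # pause_image = "registry.k8s.io/pause:3.2"
    # cgroup_manager = "cgroupfs"
    # conmon_cgroup = "pod"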
	I0914 17:59:11.074146   58869 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0914 17:59:11.088951   58869 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0914 17:59:11.100109   58869 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0914 17:59:11.100174   58869 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0914 17:59:11.114880   58869 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0914 17:59:11.127665   58869 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 17:59:11.268970   58869 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0914 17:59:11.375404   58869 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0914 17:59:11.375472   58869 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0914 17:59:11.385779   58869 start.go:563] Will wait 60s for crictl version
	I0914 17:59:11.385844   58869 ssh_runner.go:195] Run: which crictl
	I0914 17:59:11.389649   58869 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0914 17:59:11.453485   58869 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0914 17:59:11.453568   58869 ssh_runner.go:195] Run: crio --version
	I0914 17:59:11.483702   58869 ssh_runner.go:195] Run: crio --version
	I0914 17:59:11.522858   58869 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0914 17:59:11.523954   58869 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetIP
	I0914 17:59:11.526871   58869 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 17:59:11.527237   58869 main.go:141] libmachine: (old-k8s-version-556121) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:25:ab", ip: ""} in network mk-old-k8s-version-556121: {Iface:virbr1 ExpiryTime:2024-09-14 18:59:00 +0000 UTC Type:0 Mac:52:54:00:76:25:ab Iaid: IPaddr:192.168.83.80 Prefix:24 Hostname:old-k8s-version-556121 Clientid:01:52:54:00:76:25:ab}
	I0914 17:59:11.527274   58869 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined IP address 192.168.83.80 and MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 17:59:11.527500   58869 ssh_runner.go:195] Run: grep 192.168.83.1	host.minikube.internal$ /etc/hosts
	I0914 17:59:11.532395   58869 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.83.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
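The one-liner above is a replace-or-append edit of /etc/hosts: filter out any existing host.minikube.internal entry, append the fresh mapping for the gateway IP, and install the temp file over /etc/hosts with a single sudo cp so the file never sits half-written. The same pattern, unrolled for readability (a restatement of the command above, not an additional step in the run):

    # rebuild /etc/hosts without the old entry, then append the new mapping
    { grep -v $'\thost.minikube.internal$' /etc/hosts
      printf '192.168.83.1\thost.minikube.internal\n'
    } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts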
	I0914 17:59:11.545988   58869 kubeadm.go:883] updating cluster {Name:old-k8s-version-556121 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19643/minikube-v1.34.0-1726281733-19643-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726281268-19643@sha256:cce8be0c1ac4e3d852132008ef1cc1dcf5b79f708d025db83f146ae65db32e8e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersio
n:v1.20.0 ClusterName:old-k8s-version-556121 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.83.80 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Mo
untPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0914 17:59:11.546142   58869 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0914 17:59:11.546240   58869 ssh_runner.go:195] Run: sudo crictl images --output json
	I0914 17:59:11.584862   58869 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0914 17:59:11.584925   58869 ssh_runner.go:195] Run: which lz4
	I0914 17:59:11.588935   58869 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0914 17:59:11.593198   58869 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0914 17:59:11.593238   58869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0914 17:59:13.102261   58869 crio.go:462] duration metric: took 1.51337867s to copy over tarball
	I0914 17:59:13.102360   58869 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0914 17:59:15.596749   58869 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.49436094s)
	I0914 17:59:15.596774   58869 crio.go:469] duration metric: took 2.49447977s to extract the tarball
	I0914 17:59:15.596781   58869 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0914 17:59:15.637798   58869 ssh_runner.go:195] Run: sudo crictl images --output json
	I0914 17:59:15.681257   58869 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0914 17:59:15.681283   58869 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0914 17:59:15.681346   58869 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 17:59:15.681405   58869 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0914 17:59:15.681402   58869 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0914 17:59:15.681420   58869 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0914 17:59:15.681376   58869 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0914 17:59:15.681441   58869 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0914 17:59:15.681506   58869 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0914 17:59:15.681523   58869 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0914 17:59:15.682932   58869 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0914 17:59:15.682962   58869 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0914 17:59:15.682967   58869 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0914 17:59:15.682962   58869 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 17:59:15.682992   58869 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0914 17:59:15.683027   58869 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0914 17:59:15.683043   58869 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0914 17:59:15.682962   58869 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0914 17:59:15.877343   58869 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0914 17:59:15.881659   58869 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0914 17:59:15.897181   58869 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0914 17:59:15.906088   58869 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0914 17:59:15.907056   58869 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0914 17:59:15.915185   58869 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0914 17:59:15.926471   58869 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0914 17:59:15.939167   58869 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0914 17:59:15.939217   58869 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0914 17:59:15.939256   58869 ssh_runner.go:195] Run: which crictl
	I0914 17:59:15.991042   58869 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0914 17:59:15.991079   58869 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0914 17:59:15.991118   58869 ssh_runner.go:195] Run: which crictl
	I0914 17:59:16.012531   58869 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0914 17:59:16.012589   58869 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0914 17:59:16.012658   58869 ssh_runner.go:195] Run: which crictl
	I0914 17:59:16.024611   58869 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0914 17:59:16.024659   58869 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0914 17:59:16.024713   58869 ssh_runner.go:195] Run: which crictl
	I0914 17:59:16.041899   58869 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0914 17:59:16.041950   58869 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0914 17:59:16.042026   58869 ssh_runner.go:195] Run: which crictl
	I0914 17:59:16.054984   58869 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0914 17:59:16.055030   58869 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0914 17:59:16.055076   58869 ssh_runner.go:195] Run: which crictl
	I0914 17:59:16.059846   58869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0914 17:59:16.059884   58869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0914 17:59:16.059857   58869 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0914 17:59:16.059929   58869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0914 17:59:16.059936   58869 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0914 17:59:16.059988   58869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0914 17:59:16.060029   58869 ssh_runner.go:195] Run: which crictl
	I0914 17:59:16.060030   58869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0914 17:59:16.061788   58869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0914 17:59:16.198832   58869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0914 17:59:16.198876   58869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0914 17:59:16.198886   58869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0914 17:59:16.198892   58869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0914 17:59:16.198950   58869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0914 17:59:16.199039   58869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0914 17:59:16.199052   58869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0914 17:59:16.334414   58869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0914 17:59:16.334462   58869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0914 17:59:16.334463   58869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0914 17:59:16.334590   58869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0914 17:59:16.334618   58869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0914 17:59:16.334677   58869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0914 17:59:16.341383   58869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0914 17:59:16.464511   58869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0914 17:59:16.464582   58869 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19643-8806/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0914 17:59:16.475029   58869 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19643-8806/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0914 17:59:16.486652   58869 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19643-8806/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0914 17:59:16.494911   58869 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19643-8806/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0914 17:59:16.494940   58869 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19643-8806/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0914 17:59:16.509407   58869 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19643-8806/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0914 17:59:16.515409   58869 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19643-8806/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0914 17:59:16.911198   58869 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 17:59:17.052636   58869 cache_images.go:92] duration metric: took 1.371335838s to LoadCachedImages
	W0914 17:59:17.052735   58869 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19643-8806/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19643-8806/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0: no such file or directory
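The warning above is non-fatal: the v1.20.0 images were found neither in the extracted preload tarball nor under the local .minikube/cache/images directory, so the bring-up simply falls back to kubeadm's own preflight image pull later in this log. Had a pre-populated cache been wanted, it could be seeded from the minikube CLI ahead of the run, e.g. (a usage sketch, not something this test does):

    minikube cache add registry.k8s.io/kube-scheduler:v1.20.0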
	I0914 17:59:17.052754   58869 kubeadm.go:934] updating node { 192.168.83.80 8443 v1.20.0 crio true true} ...
	I0914 17:59:17.052885   58869 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-556121 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.83.80
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-556121 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0914 17:59:17.052976   58869 ssh_runner.go:195] Run: crio config
	I0914 17:59:17.099243   58869 cni.go:84] Creating CNI manager for ""
	I0914 17:59:17.099269   58869 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0914 17:59:17.099283   58869 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0914 17:59:17.099305   58869 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.83.80 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-556121 NodeName:old-k8s-version-556121 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.83.80"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.83.80 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0914 17:59:17.099450   58869 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.83.80
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-556121"
	  kubeletExtraArgs:
	    node-ip: 192.168.83.80
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.83.80"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
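The generated config above stitches InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration into a single kubeadm.yaml; a few lines further down it is written to /var/tmp/minikube/kubeadm.yaml.new and later copied into place before init runs. One way to sanity-check such a file by hand, assuming the v1.20.0 binaries are unpacked where the next step looks for them, is a dry run (an illustration only, not part of this test):

    sudo /var/lib/minikube/binaries/v1.20.0/kubeadm init \
        --config /var/tmp/minikube/kubeadm.yaml --dry-run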
	I0914 17:59:17.099522   58869 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0914 17:59:17.109589   58869 binaries.go:44] Found k8s binaries, skipping transfer
	I0914 17:59:17.109656   58869 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0914 17:59:17.118807   58869 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0914 17:59:17.134723   58869 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0914 17:59:17.150395   58869 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
	I0914 17:59:17.165952   58869 ssh_runner.go:195] Run: grep 192.168.83.80	control-plane.minikube.internal$ /etc/hosts
	I0914 17:59:17.169509   58869 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.83.80	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0914 17:59:17.180996   58869 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 17:59:17.295279   58869 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0914 17:59:17.312345   58869 certs.go:68] Setting up /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/old-k8s-version-556121 for IP: 192.168.83.80
	I0914 17:59:17.312372   58869 certs.go:194] generating shared ca certs ...
	I0914 17:59:17.312393   58869 certs.go:226] acquiring lock for ca certs: {Name:mkb663a3180967f5f94f0c355b2cd55067394331 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 17:59:17.312576   58869 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19643-8806/.minikube/ca.key
	I0914 17:59:17.312632   58869 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19643-8806/.minikube/proxy-client-ca.key
	I0914 17:59:17.312644   58869 certs.go:256] generating profile certs ...
	I0914 17:59:17.312699   58869 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/old-k8s-version-556121/client.key
	I0914 17:59:17.312713   58869 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/old-k8s-version-556121/client.crt with IP's: []
	I0914 17:59:17.632692   58869 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/old-k8s-version-556121/client.crt ...
	I0914 17:59:17.632725   58869 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/old-k8s-version-556121/client.crt: {Name:mk66d5dee7befcd4473acc1ed0432b5ce0c6ea84 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 17:59:17.632891   58869 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/old-k8s-version-556121/client.key ...
	I0914 17:59:17.632904   58869 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/old-k8s-version-556121/client.key: {Name:mk3111b842fe6f58b384cbc8c46298e4bb083a5c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 17:59:17.632975   58869 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/old-k8s-version-556121/apiserver.key.faf839ab
	I0914 17:59:17.632991   58869 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/old-k8s-version-556121/apiserver.crt.faf839ab with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.83.80]
	I0914 17:59:17.711094   58869 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/old-k8s-version-556121/apiserver.crt.faf839ab ...
	I0914 17:59:17.711130   58869 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/old-k8s-version-556121/apiserver.crt.faf839ab: {Name:mk70e7769fce18b4af7eb7ef0fbec926babf5c29 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 17:59:17.747701   58869 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/old-k8s-version-556121/apiserver.key.faf839ab ...
	I0914 17:59:17.747742   58869 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/old-k8s-version-556121/apiserver.key.faf839ab: {Name:mk321b0aa90ba08b3613d1c74a3903d985c53dc5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 17:59:17.747905   58869 certs.go:381] copying /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/old-k8s-version-556121/apiserver.crt.faf839ab -> /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/old-k8s-version-556121/apiserver.crt
	I0914 17:59:17.748006   58869 certs.go:385] copying /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/old-k8s-version-556121/apiserver.key.faf839ab -> /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/old-k8s-version-556121/apiserver.key
	I0914 17:59:17.748078   58869 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/old-k8s-version-556121/proxy-client.key
	I0914 17:59:17.748104   58869 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/old-k8s-version-556121/proxy-client.crt with IP's: []
	I0914 17:59:17.845869   58869 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/old-k8s-version-556121/proxy-client.crt ...
	I0914 17:59:17.845897   58869 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/old-k8s-version-556121/proxy-client.crt: {Name:mk74dbda75bd8161c4032753baabaf2aafdb6c03 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 17:59:17.846077   58869 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/old-k8s-version-556121/proxy-client.key ...
	I0914 17:59:17.846099   58869 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/old-k8s-version-556121/proxy-client.key: {Name:mkf4d25b5f584b927f1271ef3a5930dc4f76cf12 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 17:59:17.846346   58869 certs.go:484] found cert: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/16016.pem (1338 bytes)
	W0914 17:59:17.846390   58869 certs.go:480] ignoring /home/jenkins/minikube-integration/19643-8806/.minikube/certs/16016_empty.pem, impossibly tiny 0 bytes
	I0914 17:59:17.846405   58869 certs.go:484] found cert: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca-key.pem (1679 bytes)
	I0914 17:59:17.846435   58869 certs.go:484] found cert: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca.pem (1082 bytes)
	I0914 17:59:17.846463   58869 certs.go:484] found cert: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/cert.pem (1123 bytes)
	I0914 17:59:17.846503   58869 certs.go:484] found cert: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/key.pem (1675 bytes)
	I0914 17:59:17.846558   58869 certs.go:484] found cert: /home/jenkins/minikube-integration/19643-8806/.minikube/files/etc/ssl/certs/160162.pem (1708 bytes)
	I0914 17:59:17.847205   58869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0914 17:59:17.871811   58869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0914 17:59:17.895637   58869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0914 17:59:17.919416   58869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0914 17:59:17.943361   58869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/old-k8s-version-556121/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0914 17:59:17.968518   58869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/old-k8s-version-556121/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0914 17:59:17.995129   58869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/old-k8s-version-556121/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0914 17:59:18.020058   58869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/old-k8s-version-556121/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0914 17:59:18.043937   58869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/certs/16016.pem --> /usr/share/ca-certificates/16016.pem (1338 bytes)
	I0914 17:59:18.066888   58869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/files/etc/ssl/certs/160162.pem --> /usr/share/ca-certificates/160162.pem (1708 bytes)
	I0914 17:59:18.093834   58869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0914 17:59:18.120867   58869 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0914 17:59:18.148129   58869 ssh_runner.go:195] Run: openssl version
	I0914 17:59:18.155542   58869 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0914 17:59:18.178002   58869 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0914 17:59:18.184527   58869 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 14 16:44 /usr/share/ca-certificates/minikubeCA.pem
	I0914 17:59:18.184600   58869 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0914 17:59:18.195661   58869 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0914 17:59:18.208046   58869 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16016.pem && ln -fs /usr/share/ca-certificates/16016.pem /etc/ssl/certs/16016.pem"
	I0914 17:59:18.219004   58869 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16016.pem
	I0914 17:59:18.223596   58869 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 14 17:01 /usr/share/ca-certificates/16016.pem
	I0914 17:59:18.223676   58869 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16016.pem
	I0914 17:59:18.229166   58869 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16016.pem /etc/ssl/certs/51391683.0"
	I0914 17:59:18.239282   58869 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/160162.pem && ln -fs /usr/share/ca-certificates/160162.pem /etc/ssl/certs/160162.pem"
	I0914 17:59:18.249788   58869 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/160162.pem
	I0914 17:59:18.254271   58869 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 14 17:01 /usr/share/ca-certificates/160162.pem
	I0914 17:59:18.254345   58869 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/160162.pem
	I0914 17:59:18.260293   58869 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/160162.pem /etc/ssl/certs/3ec20f2e.0"
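The repeated test/ln/openssl sequence above builds OpenSSL's hashed certificate directory: each CA linked under /usr/share/ca-certificates gets a symlink in /etc/ssl/certs named after its subject hash (b5213941.0 for minikubeCA.pem, 51391683.0 and 3ec20f2e.0 for the two jenkins certs), which is how TLS clients locate trust anchors by hash lookup. The link name can be derived straight from the certificate, roughly:

    cert=/usr/share/ca-certificates/minikubeCA.pem
    hash=$(openssl x509 -hash -noout -in "$cert")
    sudo ln -fs "$cert" "/etc/ssl/certs/${hash}.0"   # yields b5213941.0 here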
	I0914 17:59:18.275466   58869 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0914 17:59:18.279924   58869 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0914 17:59:18.279993   58869 kubeadm.go:392] StartCluster: {Name:old-k8s-version-556121 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19643/minikube-v1.34.0-1726281733-19643-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726281268-19643@sha256:cce8be0c1ac4e3d852132008ef1cc1dcf5b79f708d025db83f146ae65db32e8e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.20.0 ClusterName:old-k8s-version-556121 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.83.80 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Mount
Port:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0914 17:59:18.280086   58869 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0914 17:59:18.280180   58869 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0914 17:59:18.322744   58869 cri.go:89] found id: ""
	I0914 17:59:18.322828   58869 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0914 17:59:18.332496   58869 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0914 17:59:18.342272   58869 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0914 17:59:18.351668   58869 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0914 17:59:18.351694   58869 kubeadm.go:157] found existing configuration files:
	
	I0914 17:59:18.351744   58869 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0914 17:59:18.361000   58869 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0914 17:59:18.361074   58869 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0914 17:59:18.371402   58869 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0914 17:59:18.380884   58869 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0914 17:59:18.380962   58869 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0914 17:59:18.391024   58869 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0914 17:59:18.401086   58869 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0914 17:59:18.401157   58869 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0914 17:59:18.411492   58869 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0914 17:59:18.420836   58869 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0914 17:59:18.420897   58869 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0914 17:59:18.430446   58869 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0914 17:59:18.536950   58869 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0914 17:59:18.537080   58869 kubeadm.go:310] [preflight] Running pre-flight checks
	I0914 17:59:18.686885   58869 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0914 17:59:18.687017   58869 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0914 17:59:18.687128   58869 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0914 17:59:18.878320   58869 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0914 17:59:18.941978   58869 out.go:235]   - Generating certificates and keys ...
	I0914 17:59:18.942098   58869 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0914 17:59:18.942232   58869 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0914 17:59:19.032813   58869 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0914 17:59:19.342427   58869 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0914 17:59:19.465411   58869 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0914 17:59:19.586765   58869 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0914 17:59:19.860884   58869 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0914 17:59:19.861449   58869 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-556121] and IPs [192.168.83.80 127.0.0.1 ::1]
	I0914 17:59:20.056525   58869 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0914 17:59:20.056724   58869 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-556121] and IPs [192.168.83.80 127.0.0.1 ::1]
	I0914 17:59:20.327760   58869 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0914 17:59:20.421703   58869 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0914 17:59:20.563779   58869 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0914 17:59:20.564025   58869 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0914 17:59:20.719205   58869 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0914 17:59:21.109744   58869 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0914 17:59:21.352057   58869 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0914 17:59:21.585753   58869 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0914 17:59:21.604837   58869 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0914 17:59:21.608901   58869 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0914 17:59:21.609081   58869 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0914 17:59:21.747627   58869 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0914 17:59:21.749642   58869 out.go:235]   - Booting up control plane ...
	I0914 17:59:21.749790   58869 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0914 17:59:21.758970   58869 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0914 17:59:21.760096   58869 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0914 17:59:21.760988   58869 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0914 17:59:21.774177   58869 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0914 18:00:01.741486   58869 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0914 18:00:01.742195   58869 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0914 18:00:01.742448   58869 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0914 18:00:06.741763   58869 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0914 18:00:06.742061   58869 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0914 18:00:16.740802   58869 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0914 18:00:16.741366   58869 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0914 18:00:36.741253   58869 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0914 18:00:36.741531   58869 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0914 18:01:16.739708   58869 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0914 18:01:16.739919   58869 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0914 18:01:16.739930   58869 kubeadm.go:310] 
	I0914 18:01:16.739993   58869 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0914 18:01:16.740061   58869 kubeadm.go:310] 		timed out waiting for the condition
	I0914 18:01:16.740070   58869 kubeadm.go:310] 
	I0914 18:01:16.740124   58869 kubeadm.go:310] 	This error is likely caused by:
	I0914 18:01:16.740178   58869 kubeadm.go:310] 		- The kubelet is not running
	I0914 18:01:16.740349   58869 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0914 18:01:16.740376   58869 kubeadm.go:310] 
	I0914 18:01:16.740523   58869 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0914 18:01:16.740585   58869 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0914 18:01:16.740636   58869 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0914 18:01:16.740648   58869 kubeadm.go:310] 
	I0914 18:01:16.740787   58869 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0914 18:01:16.740911   58869 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0914 18:01:16.740925   58869 kubeadm.go:310] 
	I0914 18:01:16.741054   58869 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0914 18:01:16.741173   58869 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0914 18:01:16.741284   58869 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0914 18:01:16.741393   58869 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0914 18:01:16.741415   58869 kubeadm.go:310] 
	I0914 18:01:16.741881   58869 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0914 18:01:16.742013   58869 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0914 18:01:16.742132   58869 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0914 18:01:16.742388   58869 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-556121] and IPs [192.168.83.80 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-556121] and IPs [192.168.83.80 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-556121] and IPs [192.168.83.80 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-556121] and IPs [192.168.83.80 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
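For reference, the kubelet checks that the kubeadm output above recommends can be run directly on the affected node. This is a sketch only; the CRI socket path matches the crio socket reported elsewhere in this run:

	# Is the kubelet service active, and what do its recent logs say?
	sudo systemctl status kubelet
	sudo journalctl -xeu kubelet

	# List any Kubernetes control-plane containers the runtime managed to start
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	# Then inspect a failing container's logs by its ID
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID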
	
	I0914 18:01:16.742436   58869 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0914 18:01:17.991424   58869 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.248954104s)
	I0914 18:01:17.991525   58869 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 18:01:18.005398   58869 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0914 18:01:18.014912   58869 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0914 18:01:18.014935   58869 kubeadm.go:157] found existing configuration files:
	
	I0914 18:01:18.014990   58869 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0914 18:01:18.023888   58869 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0914 18:01:18.023959   58869 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0914 18:01:18.033298   58869 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0914 18:01:18.042068   58869 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0914 18:01:18.042131   58869 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0914 18:01:18.051280   58869 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0914 18:01:18.060590   58869 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0914 18:01:18.060658   58869 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0914 18:01:18.070598   58869 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0914 18:01:18.079528   58869 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0914 18:01:18.079616   58869 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
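The cleanup above follows one pattern per kubeconfig under /etc/kubernetes: check whether the file already references the expected control-plane endpoint, and remove it if it does not (or is missing), so the retry starts clean. A minimal sketch of that check-and-remove step for the admin.conf case logged here (illustrative only):

	# If admin.conf does not reference the expected endpoint, remove it before retrying kubeadm init
	if ! sudo grep -q "https://control-plane.minikube.internal:8443" /etc/kubernetes/admin.conf; then
	  sudo rm -f /etc/kubernetes/admin.conf
	fi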
	I0914 18:01:18.088846   58869 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0914 18:01:18.153451   58869 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0914 18:01:18.153536   58869 kubeadm.go:310] [preflight] Running pre-flight checks
	I0914 18:01:18.286076   58869 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0914 18:01:18.286226   58869 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0914 18:01:18.286356   58869 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0914 18:01:18.472590   58869 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0914 18:01:18.475855   58869 out.go:235]   - Generating certificates and keys ...
	I0914 18:01:18.475965   58869 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0914 18:01:18.476059   58869 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0914 18:01:18.476171   58869 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0914 18:01:18.476263   58869 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0914 18:01:18.476390   58869 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0914 18:01:18.476509   58869 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0914 18:01:18.476617   58869 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0914 18:01:18.476700   58869 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0914 18:01:18.476797   58869 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0914 18:01:18.476884   58869 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0914 18:01:18.476937   58869 kubeadm.go:310] [certs] Using the existing "sa" key
	I0914 18:01:18.476988   58869 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0914 18:01:18.617127   58869 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0914 18:01:18.808731   58869 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0914 18:01:18.951934   58869 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0914 18:01:19.328137   58869 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0914 18:01:19.344036   58869 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0914 18:01:19.344156   58869 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0914 18:01:19.344236   58869 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0914 18:01:19.465625   58869 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0914 18:01:19.467752   58869 out.go:235]   - Booting up control plane ...
	I0914 18:01:19.467893   58869 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0914 18:01:19.470981   58869 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0914 18:01:19.473576   58869 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0914 18:01:19.474278   58869 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0914 18:01:19.476916   58869 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0914 18:01:59.478612   58869 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0914 18:01:59.479029   58869 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0914 18:01:59.479276   58869 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0914 18:02:04.479769   58869 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0914 18:02:04.479985   58869 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0914 18:02:14.480829   58869 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0914 18:02:14.481021   58869 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0914 18:02:34.482400   58869 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0914 18:02:34.482652   58869 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0914 18:03:14.481987   58869 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0914 18:03:14.482300   58869 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0914 18:03:14.482342   58869 kubeadm.go:310] 
	I0914 18:03:14.482403   58869 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0914 18:03:14.482453   58869 kubeadm.go:310] 		timed out waiting for the condition
	I0914 18:03:14.482462   58869 kubeadm.go:310] 
	I0914 18:03:14.482504   58869 kubeadm.go:310] 	This error is likely caused by:
	I0914 18:03:14.482550   58869 kubeadm.go:310] 		- The kubelet is not running
	I0914 18:03:14.482689   58869 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0914 18:03:14.482702   58869 kubeadm.go:310] 
	I0914 18:03:14.482823   58869 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0914 18:03:14.482868   58869 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0914 18:03:14.482907   58869 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0914 18:03:14.482915   58869 kubeadm.go:310] 
	I0914 18:03:14.483034   58869 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0914 18:03:14.483137   58869 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0914 18:03:14.483149   58869 kubeadm.go:310] 
	I0914 18:03:14.483356   58869 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0914 18:03:14.483441   58869 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0914 18:03:14.483507   58869 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0914 18:03:14.483569   58869 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0914 18:03:14.483576   58869 kubeadm.go:310] 
	I0914 18:03:14.484215   58869 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0914 18:03:14.484361   58869 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0914 18:03:14.484460   58869 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0914 18:03:14.484551   58869 kubeadm.go:394] duration metric: took 3m56.204563968s to StartCluster
	I0914 18:03:14.484627   58869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:03:14.484706   58869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:03:14.525072   58869 cri.go:89] found id: ""
	I0914 18:03:14.525113   58869 logs.go:276] 0 containers: []
	W0914 18:03:14.525126   58869 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:03:14.525133   58869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:03:14.525194   58869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:03:14.558129   58869 cri.go:89] found id: ""
	I0914 18:03:14.558180   58869 logs.go:276] 0 containers: []
	W0914 18:03:14.558193   58869 logs.go:278] No container was found matching "etcd"
	I0914 18:03:14.558201   58869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:03:14.558289   58869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:03:14.591135   58869 cri.go:89] found id: ""
	I0914 18:03:14.591165   58869 logs.go:276] 0 containers: []
	W0914 18:03:14.591175   58869 logs.go:278] No container was found matching "coredns"
	I0914 18:03:14.591182   58869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:03:14.591244   58869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:03:14.634334   58869 cri.go:89] found id: ""
	I0914 18:03:14.634362   58869 logs.go:276] 0 containers: []
	W0914 18:03:14.634372   58869 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:03:14.634379   58869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:03:14.634439   58869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:03:14.669809   58869 cri.go:89] found id: ""
	I0914 18:03:14.669846   58869 logs.go:276] 0 containers: []
	W0914 18:03:14.669860   58869 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:03:14.669869   58869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:03:14.669937   58869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:03:14.704889   58869 cri.go:89] found id: ""
	I0914 18:03:14.704912   58869 logs.go:276] 0 containers: []
	W0914 18:03:14.704924   58869 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:03:14.704938   58869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:03:14.704993   58869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:03:14.736149   58869 cri.go:89] found id: ""
	I0914 18:03:14.736178   58869 logs.go:276] 0 containers: []
	W0914 18:03:14.736189   58869 logs.go:278] No container was found matching "kindnet"
	I0914 18:03:14.736208   58869 logs.go:123] Gathering logs for kubelet ...
	I0914 18:03:14.736223   58869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:03:14.785300   58869 logs.go:123] Gathering logs for dmesg ...
	I0914 18:03:14.785347   58869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:03:14.798649   58869 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:03:14.798679   58869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:03:14.910913   58869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:03:14.910934   58869 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:03:14.910949   58869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:03:15.014543   58869 logs.go:123] Gathering logs for container status ...
	I0914 18:03:15.014580   58869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
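The diagnostics gathered above amount to a handful of node-level commands; run by hand they look roughly like this (a sketch based on the commands logged in this run):

	# Per-component check: does the runtime have any matching container at all?
	sudo crictl ps -a --quiet --name=kube-apiserver
	sudo crictl ps -a --quiet --name=etcd

	# Kubelet and CRI-O service logs, plus recent kernel warnings
	sudo journalctl -u kubelet -n 400
	sudo journalctl -u crio -n 400
	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400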
	W0914 18:03:15.051605   58869 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0914 18:03:15.051662   58869 out.go:270] * 
	* 
	W0914 18:03:15.051744   58869 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0914 18:03:15.051762   58869 out.go:270] * 
	* 
	W0914 18:03:15.052941   58869 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0914 18:03:15.055806   58869 out.go:201] 
	W0914 18:03:15.056727   58869 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtime's CLI.
	
	Here is one example of how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0914 18:03:15.056782   58869 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0914 18:03:15.056805   58869 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0914 18:03:15.058278   58869 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-linux-amd64 start -p old-k8s-version-556121 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0": exit status 109
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-556121 -n old-k8s-version-556121
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-556121 -n old-k8s-version-556121: exit status 6 (218.768216ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0914 18:03:15.316487   61953 status.go:417] kubeconfig endpoint: get endpoint: "old-k8s-version-556121" does not appear in /home/jenkins/minikube-integration/19643-8806/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-556121" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (269.13s)
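The log above suggests retrying with the kubelet cgroup driver pinned to systemd. A minimal retry sketch, reusing key flags from the failing invocation (not part of the recorded run):

	out/minikube-linux-amd64 start -p old-k8s-version-556121 --memory=2200 --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.20.0 --extra-config=kubelet.cgroup-driver=systemd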

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Stop (139.18s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-168587 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p no-preload-168587 --alsologtostderr -v=3: exit status 82 (2m0.538584515s)

                                                
                                                
-- stdout --
	* Stopping node "no-preload-168587"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0914 18:00:55.460692   60503 out.go:345] Setting OutFile to fd 1 ...
	I0914 18:00:55.460843   60503 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 18:00:55.460856   60503 out.go:358] Setting ErrFile to fd 2...
	I0914 18:00:55.460862   60503 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 18:00:55.461392   60503 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19643-8806/.minikube/bin
	I0914 18:00:55.461904   60503 out.go:352] Setting JSON to false
	I0914 18:00:55.462030   60503 mustload.go:65] Loading cluster: no-preload-168587
	I0914 18:00:55.462945   60503 config.go:182] Loaded profile config "no-preload-168587": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0914 18:00:55.463023   60503 profile.go:143] Saving config to /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/no-preload-168587/config.json ...
	I0914 18:00:55.463255   60503 mustload.go:65] Loading cluster: no-preload-168587
	I0914 18:00:55.463390   60503 config.go:182] Loaded profile config "no-preload-168587": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0914 18:00:55.463422   60503 stop.go:39] StopHost: no-preload-168587
	I0914 18:00:55.463820   60503 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 18:00:55.463872   60503 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 18:00:55.479666   60503 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45949
	I0914 18:00:55.480247   60503 main.go:141] libmachine: () Calling .GetVersion
	I0914 18:00:55.480920   60503 main.go:141] libmachine: Using API Version  1
	I0914 18:00:55.480937   60503 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 18:00:55.481293   60503 main.go:141] libmachine: () Calling .GetMachineName
	I0914 18:00:55.484125   60503 out.go:177] * Stopping node "no-preload-168587"  ...
	I0914 18:00:55.486233   60503 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0914 18:00:55.486269   60503 main.go:141] libmachine: (no-preload-168587) Calling .DriverName
	I0914 18:00:55.486597   60503 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0914 18:00:55.486624   60503 main.go:141] libmachine: (no-preload-168587) Calling .GetSSHHostname
	I0914 18:00:55.490248   60503 main.go:141] libmachine: (no-preload-168587) DBG | domain no-preload-168587 has defined MAC address 52:54:00:4c:40:8a in network mk-no-preload-168587
	I0914 18:00:55.490745   60503 main.go:141] libmachine: (no-preload-168587) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:40:8a", ip: ""} in network mk-no-preload-168587: {Iface:virbr2 ExpiryTime:2024-09-14 18:59:50 +0000 UTC Type:0 Mac:52:54:00:4c:40:8a Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:no-preload-168587 Clientid:01:52:54:00:4c:40:8a}
	I0914 18:00:55.490777   60503 main.go:141] libmachine: (no-preload-168587) DBG | domain no-preload-168587 has defined IP address 192.168.39.38 and MAC address 52:54:00:4c:40:8a in network mk-no-preload-168587
	I0914 18:00:55.490937   60503 main.go:141] libmachine: (no-preload-168587) Calling .GetSSHPort
	I0914 18:00:55.491139   60503 main.go:141] libmachine: (no-preload-168587) Calling .GetSSHKeyPath
	I0914 18:00:55.491332   60503 main.go:141] libmachine: (no-preload-168587) Calling .GetSSHUsername
	I0914 18:00:55.491514   60503 sshutil.go:53] new ssh client: &{IP:192.168.39.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/no-preload-168587/id_rsa Username:docker}
	I0914 18:00:55.611486   60503 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0914 18:00:55.668498   60503 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0914 18:00:55.732947   60503 main.go:141] libmachine: Stopping "no-preload-168587"...
	I0914 18:00:55.732985   60503 main.go:141] libmachine: (no-preload-168587) Calling .GetState
	I0914 18:00:55.734752   60503 main.go:141] libmachine: (no-preload-168587) Calling .Stop
	I0914 18:00:55.738776   60503 main.go:141] libmachine: (no-preload-168587) Waiting for machine to stop 0/120
	I0914 18:00:56.740730   60503 main.go:141] libmachine: (no-preload-168587) Waiting for machine to stop 1/120
	I0914 18:00:57.742876   60503 main.go:141] libmachine: (no-preload-168587) Waiting for machine to stop 2/120
	I0914 18:00:58.745003   60503 main.go:141] libmachine: (no-preload-168587) Waiting for machine to stop 3/120
	I0914 18:00:59.746445   60503 main.go:141] libmachine: (no-preload-168587) Waiting for machine to stop 4/120
	I0914 18:01:00.748558   60503 main.go:141] libmachine: (no-preload-168587) Waiting for machine to stop 5/120
	I0914 18:01:01.749902   60503 main.go:141] libmachine: (no-preload-168587) Waiting for machine to stop 6/120
	I0914 18:01:02.751273   60503 main.go:141] libmachine: (no-preload-168587) Waiting for machine to stop 7/120
	I0914 18:01:03.752630   60503 main.go:141] libmachine: (no-preload-168587) Waiting for machine to stop 8/120
	I0914 18:01:04.754230   60503 main.go:141] libmachine: (no-preload-168587) Waiting for machine to stop 9/120
	I0914 18:01:05.756222   60503 main.go:141] libmachine: (no-preload-168587) Waiting for machine to stop 10/120
	I0914 18:01:06.757718   60503 main.go:141] libmachine: (no-preload-168587) Waiting for machine to stop 11/120
	I0914 18:01:07.759225   60503 main.go:141] libmachine: (no-preload-168587) Waiting for machine to stop 12/120
	I0914 18:01:08.760570   60503 main.go:141] libmachine: (no-preload-168587) Waiting for machine to stop 13/120
	I0914 18:01:09.761988   60503 main.go:141] libmachine: (no-preload-168587) Waiting for machine to stop 14/120
	I0914 18:01:10.763416   60503 main.go:141] libmachine: (no-preload-168587) Waiting for machine to stop 15/120
	I0914 18:01:11.765068   60503 main.go:141] libmachine: (no-preload-168587) Waiting for machine to stop 16/120
	I0914 18:01:12.766321   60503 main.go:141] libmachine: (no-preload-168587) Waiting for machine to stop 17/120
	I0914 18:01:13.767818   60503 main.go:141] libmachine: (no-preload-168587) Waiting for machine to stop 18/120
	I0914 18:01:14.769250   60503 main.go:141] libmachine: (no-preload-168587) Waiting for machine to stop 19/120
	I0914 18:01:15.771500   60503 main.go:141] libmachine: (no-preload-168587) Waiting for machine to stop 20/120
	I0914 18:01:16.773084   60503 main.go:141] libmachine: (no-preload-168587) Waiting for machine to stop 21/120
	I0914 18:01:17.774360   60503 main.go:141] libmachine: (no-preload-168587) Waiting for machine to stop 22/120
	I0914 18:01:18.777121   60503 main.go:141] libmachine: (no-preload-168587) Waiting for machine to stop 23/120
	I0914 18:01:19.778557   60503 main.go:141] libmachine: (no-preload-168587) Waiting for machine to stop 24/120
	I0914 18:01:20.780938   60503 main.go:141] libmachine: (no-preload-168587) Waiting for machine to stop 25/120
	I0914 18:01:21.783156   60503 main.go:141] libmachine: (no-preload-168587) Waiting for machine to stop 26/120
	I0914 18:01:22.784619   60503 main.go:141] libmachine: (no-preload-168587) Waiting for machine to stop 27/120
	I0914 18:01:23.786254   60503 main.go:141] libmachine: (no-preload-168587) Waiting for machine to stop 28/120
	I0914 18:01:24.787485   60503 main.go:141] libmachine: (no-preload-168587) Waiting for machine to stop 29/120
	I0914 18:01:25.788688   60503 main.go:141] libmachine: (no-preload-168587) Waiting for machine to stop 30/120
	I0914 18:01:26.790141   60503 main.go:141] libmachine: (no-preload-168587) Waiting for machine to stop 31/120
	I0914 18:01:27.791563   60503 main.go:141] libmachine: (no-preload-168587) Waiting for machine to stop 32/120
	I0914 18:01:28.793182   60503 main.go:141] libmachine: (no-preload-168587) Waiting for machine to stop 33/120
	I0914 18:01:29.794542   60503 main.go:141] libmachine: (no-preload-168587) Waiting for machine to stop 34/120
	I0914 18:01:30.796598   60503 main.go:141] libmachine: (no-preload-168587) Waiting for machine to stop 35/120
	I0914 18:01:31.798027   60503 main.go:141] libmachine: (no-preload-168587) Waiting for machine to stop 36/120
	I0914 18:01:32.799462   60503 main.go:141] libmachine: (no-preload-168587) Waiting for machine to stop 37/120
	I0914 18:01:33.801104   60503 main.go:141] libmachine: (no-preload-168587) Waiting for machine to stop 38/120
	I0914 18:01:34.802604   60503 main.go:141] libmachine: (no-preload-168587) Waiting for machine to stop 39/120
	I0914 18:01:35.804704   60503 main.go:141] libmachine: (no-preload-168587) Waiting for machine to stop 40/120
	I0914 18:01:36.806780   60503 main.go:141] libmachine: (no-preload-168587) Waiting for machine to stop 41/120
	I0914 18:01:37.808891   60503 main.go:141] libmachine: (no-preload-168587) Waiting for machine to stop 42/120
	I0914 18:01:38.810464   60503 main.go:141] libmachine: (no-preload-168587) Waiting for machine to stop 43/120
	I0914 18:01:39.812051   60503 main.go:141] libmachine: (no-preload-168587) Waiting for machine to stop 44/120
	I0914 18:01:40.814241   60503 main.go:141] libmachine: (no-preload-168587) Waiting for machine to stop 45/120
	I0914 18:01:41.815634   60503 main.go:141] libmachine: (no-preload-168587) Waiting for machine to stop 46/120
	I0914 18:01:42.817326   60503 main.go:141] libmachine: (no-preload-168587) Waiting for machine to stop 47/120
	I0914 18:01:43.819168   60503 main.go:141] libmachine: (no-preload-168587) Waiting for machine to stop 48/120
	I0914 18:01:44.820989   60503 main.go:141] libmachine: (no-preload-168587) Waiting for machine to stop 49/120
	I0914 18:01:45.823352   60503 main.go:141] libmachine: (no-preload-168587) Waiting for machine to stop 50/120
	I0914 18:01:46.825003   60503 main.go:141] libmachine: (no-preload-168587) Waiting for machine to stop 51/120
	I0914 18:01:47.826495   60503 main.go:141] libmachine: (no-preload-168587) Waiting for machine to stop 52/120
	I0914 18:01:48.828139   60503 main.go:141] libmachine: (no-preload-168587) Waiting for machine to stop 53/120
	I0914 18:01:49.829598   60503 main.go:141] libmachine: (no-preload-168587) Waiting for machine to stop 54/120
	I0914 18:01:50.831836   60503 main.go:141] libmachine: (no-preload-168587) Waiting for machine to stop 55/120
	I0914 18:01:51.833431   60503 main.go:141] libmachine: (no-preload-168587) Waiting for machine to stop 56/120
	I0914 18:01:52.834910   60503 main.go:141] libmachine: (no-preload-168587) Waiting for machine to stop 57/120
	I0914 18:01:53.836710   60503 main.go:141] libmachine: (no-preload-168587) Waiting for machine to stop 58/120
	I0914 18:01:54.838253   60503 main.go:141] libmachine: (no-preload-168587) Waiting for machine to stop 59/120
	I0914 18:01:55.839742   60503 main.go:141] libmachine: (no-preload-168587) Waiting for machine to stop 60/120
	I0914 18:01:56.841426   60503 main.go:141] libmachine: (no-preload-168587) Waiting for machine to stop 61/120
	I0914 18:01:57.843024   60503 main.go:141] libmachine: (no-preload-168587) Waiting for machine to stop 62/120
	I0914 18:01:58.844611   60503 main.go:141] libmachine: (no-preload-168587) Waiting for machine to stop 63/120
	I0914 18:01:59.846234   60503 main.go:141] libmachine: (no-preload-168587) Waiting for machine to stop 64/120
	I0914 18:02:00.848332   60503 main.go:141] libmachine: (no-preload-168587) Waiting for machine to stop 65/120
	I0914 18:02:01.849841   60503 main.go:141] libmachine: (no-preload-168587) Waiting for machine to stop 66/120
	I0914 18:02:02.851248   60503 main.go:141] libmachine: (no-preload-168587) Waiting for machine to stop 67/120
	I0914 18:02:03.852739   60503 main.go:141] libmachine: (no-preload-168587) Waiting for machine to stop 68/120
	I0914 18:02:04.854224   60503 main.go:141] libmachine: (no-preload-168587) Waiting for machine to stop 69/120
	I0914 18:02:05.856331   60503 main.go:141] libmachine: (no-preload-168587) Waiting for machine to stop 70/120
	I0914 18:02:06.858084   60503 main.go:141] libmachine: (no-preload-168587) Waiting for machine to stop 71/120
	I0914 18:02:07.859719   60503 main.go:141] libmachine: (no-preload-168587) Waiting for machine to stop 72/120
	I0914 18:02:08.861604   60503 main.go:141] libmachine: (no-preload-168587) Waiting for machine to stop 73/120
	I0914 18:02:09.863669   60503 main.go:141] libmachine: (no-preload-168587) Waiting for machine to stop 74/120
	I0914 18:02:10.866102   60503 main.go:141] libmachine: (no-preload-168587) Waiting for machine to stop 75/120
	I0914 18:02:11.867603   60503 main.go:141] libmachine: (no-preload-168587) Waiting for machine to stop 76/120
	I0914 18:02:12.869792   60503 main.go:141] libmachine: (no-preload-168587) Waiting for machine to stop 77/120
	I0914 18:02:13.872027   60503 main.go:141] libmachine: (no-preload-168587) Waiting for machine to stop 78/120
	I0914 18:02:14.873762   60503 main.go:141] libmachine: (no-preload-168587) Waiting for machine to stop 79/120
	I0914 18:02:15.875941   60503 main.go:141] libmachine: (no-preload-168587) Waiting for machine to stop 80/120
	I0914 18:02:16.877752   60503 main.go:141] libmachine: (no-preload-168587) Waiting for machine to stop 81/120
	I0914 18:02:17.879728   60503 main.go:141] libmachine: (no-preload-168587) Waiting for machine to stop 82/120
	I0914 18:02:18.881453   60503 main.go:141] libmachine: (no-preload-168587) Waiting for machine to stop 83/120
	I0914 18:02:19.883701   60503 main.go:141] libmachine: (no-preload-168587) Waiting for machine to stop 84/120
	I0914 18:02:20.885594   60503 main.go:141] libmachine: (no-preload-168587) Waiting for machine to stop 85/120
	I0914 18:02:21.887041   60503 main.go:141] libmachine: (no-preload-168587) Waiting for machine to stop 86/120
	I0914 18:02:22.888780   60503 main.go:141] libmachine: (no-preload-168587) Waiting for machine to stop 87/120
	I0914 18:02:23.890182   60503 main.go:141] libmachine: (no-preload-168587) Waiting for machine to stop 88/120
	I0914 18:02:24.891535   60503 main.go:141] libmachine: (no-preload-168587) Waiting for machine to stop 89/120
	I0914 18:02:25.893308   60503 main.go:141] libmachine: (no-preload-168587) Waiting for machine to stop 90/120
	I0914 18:02:26.895007   60503 main.go:141] libmachine: (no-preload-168587) Waiting for machine to stop 91/120
	I0914 18:02:27.896645   60503 main.go:141] libmachine: (no-preload-168587) Waiting for machine to stop 92/120
	I0914 18:02:28.898718   60503 main.go:141] libmachine: (no-preload-168587) Waiting for machine to stop 93/120
	I0914 18:02:29.900838   60503 main.go:141] libmachine: (no-preload-168587) Waiting for machine to stop 94/120
	I0914 18:02:30.902869   60503 main.go:141] libmachine: (no-preload-168587) Waiting for machine to stop 95/120
	I0914 18:02:31.904814   60503 main.go:141] libmachine: (no-preload-168587) Waiting for machine to stop 96/120
	I0914 18:02:32.906224   60503 main.go:141] libmachine: (no-preload-168587) Waiting for machine to stop 97/120
	I0914 18:02:33.907548   60503 main.go:141] libmachine: (no-preload-168587) Waiting for machine to stop 98/120
	I0914 18:02:34.909722   60503 main.go:141] libmachine: (no-preload-168587) Waiting for machine to stop 99/120
	I0914 18:02:35.911274   60503 main.go:141] libmachine: (no-preload-168587) Waiting for machine to stop 100/120
	I0914 18:02:36.912992   60503 main.go:141] libmachine: (no-preload-168587) Waiting for machine to stop 101/120
	I0914 18:02:37.914553   60503 main.go:141] libmachine: (no-preload-168587) Waiting for machine to stop 102/120
	I0914 18:02:38.917286   60503 main.go:141] libmachine: (no-preload-168587) Waiting for machine to stop 103/120
	I0914 18:02:39.919345   60503 main.go:141] libmachine: (no-preload-168587) Waiting for machine to stop 104/120
	I0914 18:02:40.921091   60503 main.go:141] libmachine: (no-preload-168587) Waiting for machine to stop 105/120
	I0914 18:02:41.922455   60503 main.go:141] libmachine: (no-preload-168587) Waiting for machine to stop 106/120
	I0914 18:02:42.923645   60503 main.go:141] libmachine: (no-preload-168587) Waiting for machine to stop 107/120
	I0914 18:02:43.925087   60503 main.go:141] libmachine: (no-preload-168587) Waiting for machine to stop 108/120
	I0914 18:02:44.926531   60503 main.go:141] libmachine: (no-preload-168587) Waiting for machine to stop 109/120
	I0914 18:02:45.928843   60503 main.go:141] libmachine: (no-preload-168587) Waiting for machine to stop 110/120
	I0914 18:02:46.930572   60503 main.go:141] libmachine: (no-preload-168587) Waiting for machine to stop 111/120
	I0914 18:02:47.932686   60503 main.go:141] libmachine: (no-preload-168587) Waiting for machine to stop 112/120
	I0914 18:02:48.934268   60503 main.go:141] libmachine: (no-preload-168587) Waiting for machine to stop 113/120
	I0914 18:02:49.935608   60503 main.go:141] libmachine: (no-preload-168587) Waiting for machine to stop 114/120
	I0914 18:02:50.937538   60503 main.go:141] libmachine: (no-preload-168587) Waiting for machine to stop 115/120
	I0914 18:02:51.939032   60503 main.go:141] libmachine: (no-preload-168587) Waiting for machine to stop 116/120
	I0914 18:02:52.940749   60503 main.go:141] libmachine: (no-preload-168587) Waiting for machine to stop 117/120
	I0914 18:02:53.942145   60503 main.go:141] libmachine: (no-preload-168587) Waiting for machine to stop 118/120
	I0914 18:02:54.943735   60503 main.go:141] libmachine: (no-preload-168587) Waiting for machine to stop 119/120
	I0914 18:02:55.945125   60503 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0914 18:02:55.945190   60503 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0914 18:02:55.946817   60503 out.go:201] 
	W0914 18:02:55.947888   60503 out.go:270] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0914 18:02:55.947912   60503 out.go:270] * 
	* 
	W0914 18:02:55.950900   60503 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0914 18:02:55.952039   60503 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p no-preload-168587 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-168587 -n no-preload-168587
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-168587 -n no-preload-168587: exit status 3 (18.63973462s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0914 18:03:14.594526   61420 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.38:22: connect: no route to host
	E0914 18:03:14.594548   61420 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.38:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "no-preload-168587" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/no-preload/serial/Stop (139.18s)
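Per the error box above, the next step for this failure is to capture a log bundle for the profile; a sketch of that command (not executed as part of this run):

	out/minikube-linux-amd64 logs --file=logs.txt -p no-preload-168587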

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Stop (138.98s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-044534 --alsologtostderr -v=3
E0914 18:01:45.625809   16016 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/addons-996992/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p embed-certs-044534 --alsologtostderr -v=3: exit status 82 (2m0.527073012s)

                                                
                                                
-- stdout --
	* Stopping node "embed-certs-044534"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0914 18:01:33.544999   60755 out.go:345] Setting OutFile to fd 1 ...
	I0914 18:01:33.545264   60755 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 18:01:33.545275   60755 out.go:358] Setting ErrFile to fd 2...
	I0914 18:01:33.545282   60755 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 18:01:33.545459   60755 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19643-8806/.minikube/bin
	I0914 18:01:33.545711   60755 out.go:352] Setting JSON to false
	I0914 18:01:33.545803   60755 mustload.go:65] Loading cluster: embed-certs-044534
	I0914 18:01:33.546199   60755 config.go:182] Loaded profile config "embed-certs-044534": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0914 18:01:33.546283   60755 profile.go:143] Saving config to /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/embed-certs-044534/config.json ...
	I0914 18:01:33.546457   60755 mustload.go:65] Loading cluster: embed-certs-044534
	I0914 18:01:33.546602   60755 config.go:182] Loaded profile config "embed-certs-044534": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0914 18:01:33.546639   60755 stop.go:39] StopHost: embed-certs-044534
	I0914 18:01:33.547029   60755 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 18:01:33.547075   60755 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 18:01:33.562149   60755 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46663
	I0914 18:01:33.562727   60755 main.go:141] libmachine: () Calling .GetVersion
	I0914 18:01:33.563384   60755 main.go:141] libmachine: Using API Version  1
	I0914 18:01:33.563411   60755 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 18:01:33.563772   60755 main.go:141] libmachine: () Calling .GetMachineName
	I0914 18:01:33.565972   60755 out.go:177] * Stopping node "embed-certs-044534"  ...
	I0914 18:01:33.567047   60755 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0914 18:01:33.567074   60755 main.go:141] libmachine: (embed-certs-044534) Calling .DriverName
	I0914 18:01:33.567302   60755 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0914 18:01:33.567332   60755 main.go:141] libmachine: (embed-certs-044534) Calling .GetSSHHostname
	I0914 18:01:33.570321   60755 main.go:141] libmachine: (embed-certs-044534) DBG | domain embed-certs-044534 has defined MAC address 52:54:00:f7:d3:8e in network mk-embed-certs-044534
	I0914 18:01:33.570746   60755 main.go:141] libmachine: (embed-certs-044534) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:d3:8e", ip: ""} in network mk-embed-certs-044534: {Iface:virbr3 ExpiryTime:2024-09-14 19:00:16 +0000 UTC Type:0 Mac:52:54:00:f7:d3:8e Iaid: IPaddr:192.168.50.126 Prefix:24 Hostname:embed-certs-044534 Clientid:01:52:54:00:f7:d3:8e}
	I0914 18:01:33.570779   60755 main.go:141] libmachine: (embed-certs-044534) DBG | domain embed-certs-044534 has defined IP address 192.168.50.126 and MAC address 52:54:00:f7:d3:8e in network mk-embed-certs-044534
	I0914 18:01:33.570933   60755 main.go:141] libmachine: (embed-certs-044534) Calling .GetSSHPort
	I0914 18:01:33.571096   60755 main.go:141] libmachine: (embed-certs-044534) Calling .GetSSHKeyPath
	I0914 18:01:33.571261   60755 main.go:141] libmachine: (embed-certs-044534) Calling .GetSSHUsername
	I0914 18:01:33.571383   60755 sshutil.go:53] new ssh client: &{IP:192.168.50.126 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/embed-certs-044534/id_rsa Username:docker}
	I0914 18:01:33.677388   60755 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0914 18:01:33.736223   60755 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0914 18:01:33.798009   60755 main.go:141] libmachine: Stopping "embed-certs-044534"...
	I0914 18:01:33.798094   60755 main.go:141] libmachine: (embed-certs-044534) Calling .GetState
	I0914 18:01:33.800303   60755 main.go:141] libmachine: (embed-certs-044534) Calling .Stop
	I0914 18:01:33.804134   60755 main.go:141] libmachine: (embed-certs-044534) Waiting for machine to stop 0/120
	I0914 18:01:34.805219   60755 main.go:141] libmachine: (embed-certs-044534) Waiting for machine to stop 1/120
	I0914 18:01:35.807014   60755 main.go:141] libmachine: (embed-certs-044534) Waiting for machine to stop 2/120
	I0914 18:01:36.808399   60755 main.go:141] libmachine: (embed-certs-044534) Waiting for machine to stop 3/120
	I0914 18:01:37.809498   60755 main.go:141] libmachine: (embed-certs-044534) Waiting for machine to stop 4/120
	I0914 18:01:38.811301   60755 main.go:141] libmachine: (embed-certs-044534) Waiting for machine to stop 5/120
	I0914 18:01:39.812586   60755 main.go:141] libmachine: (embed-certs-044534) Waiting for machine to stop 6/120
	I0914 18:01:40.814648   60755 main.go:141] libmachine: (embed-certs-044534) Waiting for machine to stop 7/120
	I0914 18:01:41.816635   60755 main.go:141] libmachine: (embed-certs-044534) Waiting for machine to stop 8/120
	I0914 18:01:42.817883   60755 main.go:141] libmachine: (embed-certs-044534) Waiting for machine to stop 9/120
	I0914 18:01:43.820070   60755 main.go:141] libmachine: (embed-certs-044534) Waiting for machine to stop 10/120
	I0914 18:01:44.821344   60755 main.go:141] libmachine: (embed-certs-044534) Waiting for machine to stop 11/120
	I0914 18:01:45.823123   60755 main.go:141] libmachine: (embed-certs-044534) Waiting for machine to stop 12/120
	I0914 18:01:46.824845   60755 main.go:141] libmachine: (embed-certs-044534) Waiting for machine to stop 13/120
	I0914 18:01:47.826369   60755 main.go:141] libmachine: (embed-certs-044534) Waiting for machine to stop 14/120
	I0914 18:01:48.828376   60755 main.go:141] libmachine: (embed-certs-044534) Waiting for machine to stop 15/120
	I0914 18:01:49.829927   60755 main.go:141] libmachine: (embed-certs-044534) Waiting for machine to stop 16/120
	I0914 18:01:50.831839   60755 main.go:141] libmachine: (embed-certs-044534) Waiting for machine to stop 17/120
	I0914 18:01:51.833298   60755 main.go:141] libmachine: (embed-certs-044534) Waiting for machine to stop 18/120
	I0914 18:01:52.834723   60755 main.go:141] libmachine: (embed-certs-044534) Waiting for machine to stop 19/120
	I0914 18:01:53.836858   60755 main.go:141] libmachine: (embed-certs-044534) Waiting for machine to stop 20/120
	I0914 18:01:54.838395   60755 main.go:141] libmachine: (embed-certs-044534) Waiting for machine to stop 21/120
	I0914 18:01:55.840501   60755 main.go:141] libmachine: (embed-certs-044534) Waiting for machine to stop 22/120
	I0914 18:01:56.841905   60755 main.go:141] libmachine: (embed-certs-044534) Waiting for machine to stop 23/120
	I0914 18:01:57.843415   60755 main.go:141] libmachine: (embed-certs-044534) Waiting for machine to stop 24/120
	I0914 18:01:58.845102   60755 main.go:141] libmachine: (embed-certs-044534) Waiting for machine to stop 25/120
	I0914 18:01:59.847157   60755 main.go:141] libmachine: (embed-certs-044534) Waiting for machine to stop 26/120
	I0914 18:02:00.848831   60755 main.go:141] libmachine: (embed-certs-044534) Waiting for machine to stop 27/120
	I0914 18:02:01.850337   60755 main.go:141] libmachine: (embed-certs-044534) Waiting for machine to stop 28/120
	I0914 18:02:02.852450   60755 main.go:141] libmachine: (embed-certs-044534) Waiting for machine to stop 29/120
	I0914 18:02:03.854441   60755 main.go:141] libmachine: (embed-certs-044534) Waiting for machine to stop 30/120
	I0914 18:02:04.855843   60755 main.go:141] libmachine: (embed-certs-044534) Waiting for machine to stop 31/120
	I0914 18:02:05.857040   60755 main.go:141] libmachine: (embed-certs-044534) Waiting for machine to stop 32/120
	I0914 18:02:06.858885   60755 main.go:141] libmachine: (embed-certs-044534) Waiting for machine to stop 33/120
	I0914 18:02:07.860276   60755 main.go:141] libmachine: (embed-certs-044534) Waiting for machine to stop 34/120
	I0914 18:02:08.862594   60755 main.go:141] libmachine: (embed-certs-044534) Waiting for machine to stop 35/120
	I0914 18:02:09.864224   60755 main.go:141] libmachine: (embed-certs-044534) Waiting for machine to stop 36/120
	I0914 18:02:10.865857   60755 main.go:141] libmachine: (embed-certs-044534) Waiting for machine to stop 37/120
	I0914 18:02:11.867318   60755 main.go:141] libmachine: (embed-certs-044534) Waiting for machine to stop 38/120
	I0914 18:02:12.869173   60755 main.go:141] libmachine: (embed-certs-044534) Waiting for machine to stop 39/120
	I0914 18:02:13.871623   60755 main.go:141] libmachine: (embed-certs-044534) Waiting for machine to stop 40/120
	I0914 18:02:14.873380   60755 main.go:141] libmachine: (embed-certs-044534) Waiting for machine to stop 41/120
	I0914 18:02:15.875320   60755 main.go:141] libmachine: (embed-certs-044534) Waiting for machine to stop 42/120
	I0914 18:02:16.877158   60755 main.go:141] libmachine: (embed-certs-044534) Waiting for machine to stop 43/120
	I0914 18:02:17.879019   60755 main.go:141] libmachine: (embed-certs-044534) Waiting for machine to stop 44/120
	I0914 18:02:18.881242   60755 main.go:141] libmachine: (embed-certs-044534) Waiting for machine to stop 45/120
	I0914 18:02:19.883042   60755 main.go:141] libmachine: (embed-certs-044534) Waiting for machine to stop 46/120
	I0914 18:02:20.884724   60755 main.go:141] libmachine: (embed-certs-044534) Waiting for machine to stop 47/120
	I0914 18:02:21.886368   60755 main.go:141] libmachine: (embed-certs-044534) Waiting for machine to stop 48/120
	I0914 18:02:22.888985   60755 main.go:141] libmachine: (embed-certs-044534) Waiting for machine to stop 49/120
	I0914 18:02:23.891393   60755 main.go:141] libmachine: (embed-certs-044534) Waiting for machine to stop 50/120
	I0914 18:02:24.892374   60755 main.go:141] libmachine: (embed-certs-044534) Waiting for machine to stop 51/120
	I0914 18:02:25.893612   60755 main.go:141] libmachine: (embed-certs-044534) Waiting for machine to stop 52/120
	I0914 18:02:26.895150   60755 main.go:141] libmachine: (embed-certs-044534) Waiting for machine to stop 53/120
	I0914 18:02:27.897159   60755 main.go:141] libmachine: (embed-certs-044534) Waiting for machine to stop 54/120
	I0914 18:02:28.899419   60755 main.go:141] libmachine: (embed-certs-044534) Waiting for machine to stop 55/120
	I0914 18:02:29.900970   60755 main.go:141] libmachine: (embed-certs-044534) Waiting for machine to stop 56/120
	I0914 18:02:30.902678   60755 main.go:141] libmachine: (embed-certs-044534) Waiting for machine to stop 57/120
	I0914 18:02:31.904954   60755 main.go:141] libmachine: (embed-certs-044534) Waiting for machine to stop 58/120
	I0914 18:02:32.906398   60755 main.go:141] libmachine: (embed-certs-044534) Waiting for machine to stop 59/120
	I0914 18:02:33.908522   60755 main.go:141] libmachine: (embed-certs-044534) Waiting for machine to stop 60/120
	I0914 18:02:34.909992   60755 main.go:141] libmachine: (embed-certs-044534) Waiting for machine to stop 61/120
	I0914 18:02:35.911909   60755 main.go:141] libmachine: (embed-certs-044534) Waiting for machine to stop 62/120
	I0914 18:02:36.913375   60755 main.go:141] libmachine: (embed-certs-044534) Waiting for machine to stop 63/120
	I0914 18:02:37.914883   60755 main.go:141] libmachine: (embed-certs-044534) Waiting for machine to stop 64/120
	I0914 18:02:38.917000   60755 main.go:141] libmachine: (embed-certs-044534) Waiting for machine to stop 65/120
	I0914 18:02:39.918426   60755 main.go:141] libmachine: (embed-certs-044534) Waiting for machine to stop 66/120
	I0914 18:02:40.919808   60755 main.go:141] libmachine: (embed-certs-044534) Waiting for machine to stop 67/120
	I0914 18:02:41.921163   60755 main.go:141] libmachine: (embed-certs-044534) Waiting for machine to stop 68/120
	I0914 18:02:42.923427   60755 main.go:141] libmachine: (embed-certs-044534) Waiting for machine to stop 69/120
	I0914 18:02:43.924854   60755 main.go:141] libmachine: (embed-certs-044534) Waiting for machine to stop 70/120
	I0914 18:02:44.926447   60755 main.go:141] libmachine: (embed-certs-044534) Waiting for machine to stop 71/120
	I0914 18:02:45.928184   60755 main.go:141] libmachine: (embed-certs-044534) Waiting for machine to stop 72/120
	I0914 18:02:46.929886   60755 main.go:141] libmachine: (embed-certs-044534) Waiting for machine to stop 73/120
	I0914 18:02:47.932415   60755 main.go:141] libmachine: (embed-certs-044534) Waiting for machine to stop 74/120
	I0914 18:02:48.934806   60755 main.go:141] libmachine: (embed-certs-044534) Waiting for machine to stop 75/120
	I0914 18:02:49.937883   60755 main.go:141] libmachine: (embed-certs-044534) Waiting for machine to stop 76/120
	I0914 18:02:50.940245   60755 main.go:141] libmachine: (embed-certs-044534) Waiting for machine to stop 77/120
	I0914 18:02:51.941651   60755 main.go:141] libmachine: (embed-certs-044534) Waiting for machine to stop 78/120
	I0914 18:02:52.943002   60755 main.go:141] libmachine: (embed-certs-044534) Waiting for machine to stop 79/120
	I0914 18:02:53.944995   60755 main.go:141] libmachine: (embed-certs-044534) Waiting for machine to stop 80/120
	I0914 18:02:54.946384   60755 main.go:141] libmachine: (embed-certs-044534) Waiting for machine to stop 81/120
	I0914 18:02:55.948833   60755 main.go:141] libmachine: (embed-certs-044534) Waiting for machine to stop 82/120
	I0914 18:02:56.950316   60755 main.go:141] libmachine: (embed-certs-044534) Waiting for machine to stop 83/120
	I0914 18:02:57.952830   60755 main.go:141] libmachine: (embed-certs-044534) Waiting for machine to stop 84/120
	I0914 18:02:58.955039   60755 main.go:141] libmachine: (embed-certs-044534) Waiting for machine to stop 85/120
	I0914 18:02:59.956739   60755 main.go:141] libmachine: (embed-certs-044534) Waiting for machine to stop 86/120
	I0914 18:03:00.957972   60755 main.go:141] libmachine: (embed-certs-044534) Waiting for machine to stop 87/120
	I0914 18:03:01.959707   60755 main.go:141] libmachine: (embed-certs-044534) Waiting for machine to stop 88/120
	I0914 18:03:02.961877   60755 main.go:141] libmachine: (embed-certs-044534) Waiting for machine to stop 89/120
	I0914 18:03:03.963538   60755 main.go:141] libmachine: (embed-certs-044534) Waiting for machine to stop 90/120
	I0914 18:03:04.965261   60755 main.go:141] libmachine: (embed-certs-044534) Waiting for machine to stop 91/120
	I0914 18:03:05.966771   60755 main.go:141] libmachine: (embed-certs-044534) Waiting for machine to stop 92/120
	I0914 18:03:06.968091   60755 main.go:141] libmachine: (embed-certs-044534) Waiting for machine to stop 93/120
	I0914 18:03:07.969815   60755 main.go:141] libmachine: (embed-certs-044534) Waiting for machine to stop 94/120
	I0914 18:03:08.971849   60755 main.go:141] libmachine: (embed-certs-044534) Waiting for machine to stop 95/120
	I0914 18:03:09.973404   60755 main.go:141] libmachine: (embed-certs-044534) Waiting for machine to stop 96/120
	I0914 18:03:10.975122   60755 main.go:141] libmachine: (embed-certs-044534) Waiting for machine to stop 97/120
	I0914 18:03:11.976770   60755 main.go:141] libmachine: (embed-certs-044534) Waiting for machine to stop 98/120
	I0914 18:03:12.979134   60755 main.go:141] libmachine: (embed-certs-044534) Waiting for machine to stop 99/120
	I0914 18:03:13.981284   60755 main.go:141] libmachine: (embed-certs-044534) Waiting for machine to stop 100/120
	I0914 18:03:14.982967   60755 main.go:141] libmachine: (embed-certs-044534) Waiting for machine to stop 101/120
	I0914 18:03:15.984593   60755 main.go:141] libmachine: (embed-certs-044534) Waiting for machine to stop 102/120
	I0914 18:03:16.986005   60755 main.go:141] libmachine: (embed-certs-044534) Waiting for machine to stop 103/120
	I0914 18:03:17.987498   60755 main.go:141] libmachine: (embed-certs-044534) Waiting for machine to stop 104/120
	I0914 18:03:18.989648   60755 main.go:141] libmachine: (embed-certs-044534) Waiting for machine to stop 105/120
	I0914 18:03:19.991093   60755 main.go:141] libmachine: (embed-certs-044534) Waiting for machine to stop 106/120
	I0914 18:03:20.992607   60755 main.go:141] libmachine: (embed-certs-044534) Waiting for machine to stop 107/120
	I0914 18:03:21.993935   60755 main.go:141] libmachine: (embed-certs-044534) Waiting for machine to stop 108/120
	I0914 18:03:22.995538   60755 main.go:141] libmachine: (embed-certs-044534) Waiting for machine to stop 109/120
	I0914 18:03:23.997161   60755 main.go:141] libmachine: (embed-certs-044534) Waiting for machine to stop 110/120
	I0914 18:03:24.998740   60755 main.go:141] libmachine: (embed-certs-044534) Waiting for machine to stop 111/120
	I0914 18:03:26.000974   60755 main.go:141] libmachine: (embed-certs-044534) Waiting for machine to stop 112/120
	I0914 18:03:27.002900   60755 main.go:141] libmachine: (embed-certs-044534) Waiting for machine to stop 113/120
	I0914 18:03:28.004325   60755 main.go:141] libmachine: (embed-certs-044534) Waiting for machine to stop 114/120
	I0914 18:03:29.006540   60755 main.go:141] libmachine: (embed-certs-044534) Waiting for machine to stop 115/120
	I0914 18:03:30.008597   60755 main.go:141] libmachine: (embed-certs-044534) Waiting for machine to stop 116/120
	I0914 18:03:31.011032   60755 main.go:141] libmachine: (embed-certs-044534) Waiting for machine to stop 117/120
	I0914 18:03:32.013114   60755 main.go:141] libmachine: (embed-certs-044534) Waiting for machine to stop 118/120
	I0914 18:03:33.019016   60755 main.go:141] libmachine: (embed-certs-044534) Waiting for machine to stop 119/120
	I0914 18:03:34.019557   60755 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0914 18:03:34.019628   60755 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0914 18:03:34.021985   60755 out.go:201] 
	W0914 18:03:34.023716   60755 out.go:270] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0914 18:03:34.023742   60755 out.go:270] * 
	* 
	W0914 18:03:34.026799   60755 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0914 18:03:34.029548   60755 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p embed-certs-044534 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-044534 -n embed-certs-044534
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-044534 -n embed-certs-044534: exit status 3 (18.450183857s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0914 18:03:52.482474   62282 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.126:22: connect: no route to host
	E0914 18:03:52.482497   62282 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.50.126:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "embed-certs-044534" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/embed-certs/serial/Stop (138.98s)
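The guest reported "Running" for all 120 stop polls at roughly one-second intervals, which matches the 2m0.5s elapsed time. As a hedged manual follow-up outside the test harness, the libvirt domain (whose name matches the profile name shown in the log above) could be inspected and force-stopped with virsh:

	sudo virsh list --all
	sudo virsh destroy embed-certs-044534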

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-168587 -n no-preload-168587
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-168587 -n no-preload-168587: exit status 3 (3.167639158s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0914 18:03:17.762473   61922 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.38:22: connect: no route to host
	E0914 18:03:17.762492   61922 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.38:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-168587 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p no-preload-168587 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.151302308s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.38:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p no-preload-168587 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-168587 -n no-preload-168587
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-168587 -n no-preload-168587: exit status 3 (3.064861286s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0914 18:03:26.978505   62134 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.38:22: connect: no route to host
	E0914 18:03:26.978533   62134 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.38:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "no-preload-168587" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (12.38s)
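For a broader post-stop probe, the same status command accepts additional Go-template fields; a sketch (field names assumed from minikube's standard status output, not taken from this run):

	out/minikube-linux-amd64 status -p no-preload-168587 --format='{{.Host}} {{.Kubelet}} {{.APIServer}} {{.Kubeconfig}}'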

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/DeployApp (0.46s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-556121 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context old-k8s-version-556121 create -f testdata/busybox.yaml: exit status 1 (43.781626ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-556121" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:196: kubectl --context old-k8s-version-556121 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-556121 -n old-k8s-version-556121
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-556121 -n old-k8s-version-556121: exit status 6 (205.663967ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0914 18:03:15.567601   61992 status.go:417] kubeconfig endpoint: get endpoint: "old-k8s-version-556121" does not appear in /home/jenkins/minikube-integration/19643-8806/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-556121" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-556121 -n old-k8s-version-556121
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-556121 -n old-k8s-version-556121: exit status 6 (212.990931ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0914 18:03:15.780045   62021 status.go:417] kubeconfig endpoint: get endpoint: "old-k8s-version-556121" does not appear in /home/jenkins/minikube-integration/19643-8806/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-556121" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.46s)
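
The DeployApp failure never reaches the manifest: kubectl rejects the command because the "old-k8s-version-556121" context is missing from the kubeconfig, which is also why the status probe prints the stale-context warning and suggests `minikube update-context`. As an illustrative aside (not part of the captured output), a small Go sketch that checks for the context before attempting the same kubectl create; the context name and manifest path are copied from the log, everything else is an assumption for the example.

package main

import (
    "fmt"
    "os/exec"
    "strings"
)

func main() {
    ctx := "old-k8s-version-556121" // context name taken from the log above

    // List the contexts kubectl knows about; the failure above is simply that
    // this list no longer contains the profile's context.
    out, err := exec.Command("kubectl", "config", "get-contexts", "-o", "name").Output()
    if err != nil {
        fmt.Println("listing contexts failed:", err)
        return
    }
    found := false
    for _, name := range strings.Fields(string(out)) {
        if name == ctx {
            found = true
            break
        }
    }
    if !found {
        // The report's own advice for this situation is `minikube update-context`.
        fmt.Printf("context %q not found; the report suggests running `minikube update-context`\n", ctx)
        return
    }

    // Only now attempt the create the test performs.
    cmd := exec.Command("kubectl", "--context", ctx, "create", "-f", "testdata/busybox.yaml")
    if createOut, err := cmd.CombinedOutput(); err != nil {
        fmt.Printf("create failed: %v\n%s", err, createOut)
    }
}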

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (105.04s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-556121 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-556121 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 10 (1m44.779893239s)

                                                
                                                
-- stdout --
	* metrics-server is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	]
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:207: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-556121 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 10
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-556121 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context old-k8s-version-556121 describe deploy/metrics-server -n kube-system: exit status 1 (43.545054ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-556121" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-556121 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-556121 -n old-k8s-version-556121
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-556121 -n old-k8s-version-556121: exit status 6 (214.376387ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0914 18:05:00.817794   62867 status.go:417] kubeconfig endpoint: get endpoint: "old-k8s-version-556121" does not appear in /home/jenkins/minikube-integration/19643-8806/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-556121" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (105.04s)
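
Here the addon manifests are applied inside the VM against localhost:8443, and the apply fails because the apiserver is refusing connections, so the metrics-server deployment never exists for the later describe. Purely as an illustration (not part of the captured output and not how the harness works), a sketch that waits for the apiserver's readiness endpoint before enabling the addon; the context name comes from the log, while the /readyz poll and the timings are assumptions for the example.

package main

import (
    "fmt"
    "os/exec"
    "time"
)

func main() {
    ctx := "old-k8s-version-556121" // context name taken from the log above

    // Poll the apiserver readiness endpoint for up to two minutes.
    ready := false
    for i := 0; i < 24; i++ {
        if err := exec.Command("kubectl", "--context", ctx, "get", "--raw", "/readyz").Run(); err == nil {
            ready = true
            break
        }
        time.Sleep(5 * time.Second)
    }
    if !ready {
        fmt.Println("apiserver never became ready; the addon apply would fail as in the log above")
        return
    }

    // With the apiserver reachable, the enable would not hit the connection-refused error.
    out, err := exec.Command("minikube", "addons", "enable", "metrics-server", "-p", ctx).CombinedOutput()
    fmt.Printf("%s(err: %v)\n", out, err)
}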

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-044534 -n embed-certs-044534
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-044534 -n embed-certs-044534: exit status 3 (3.169229162s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0914 18:03:55.650558   62406 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.126:22: connect: no route to host
	E0914 18:03:55.650577   62406 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.50.126:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-044534 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p embed-certs-044534 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.151841447s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.50.126:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p embed-certs-044534 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-044534 -n embed-certs-044534
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-044534 -n embed-certs-044534: exit status 3 (3.06252693s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0914 18:04:04.866472   62523 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.126:22: connect: no route to host
	E0914 18:04:04.866494   62523 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.50.126:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "embed-certs-044534" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (12.38s)
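
The embed-certs run repeats the pattern above: after the failed stop, the host reports "Error" instead of "Stopped" because SSH to 192.168.50.126:22 is unreachable, so the addon enable exits 11. Purely as an illustration (not how the test is written), a sketch that polls minikube status for a "Stopped" host instead of asserting on a single sample; the profile name is from the log, the polling interval and deadline are assumptions for the example.

package main

import (
    "fmt"
    "os/exec"
    "strings"
    "time"
)

// hostStatus returns the trimmed output of `minikube status --format={{.Host}}`.
func hostStatus(profile string) string {
    out, _ := exec.Command("minikube", "status", "--format={{.Host}}", "-p", profile).CombinedOutput()
    return strings.TrimSpace(string(out))
}

func main() {
    profile := "embed-certs-044534" // profile name taken from the log above
    deadline := time.Now().Add(2 * time.Minute)

    for {
        state := hostStatus(profile)
        if state == "Stopped" {
            fmt.Println("host reports Stopped")
            return
        }
        if time.Now().After(deadline) {
            // In the run above this never converges: the host stays in "Error"
            // because SSH to the VM keeps failing with "no route to host".
            fmt.Printf("gave up; last state %q\n", state)
            return
        }
        time.Sleep(5 * time.Second)
    }
}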

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (138.96s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-243449 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p default-k8s-diff-port-243449 --alsologtostderr -v=3: exit status 82 (2m0.48205286s)

                                                
                                                
-- stdout --
	* Stopping node "default-k8s-diff-port-243449"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0914 18:04:08.959826   62655 out.go:345] Setting OutFile to fd 1 ...
	I0914 18:04:08.959990   62655 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 18:04:08.960004   62655 out.go:358] Setting ErrFile to fd 2...
	I0914 18:04:08.960011   62655 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 18:04:08.960376   62655 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19643-8806/.minikube/bin
	I0914 18:04:08.960774   62655 out.go:352] Setting JSON to false
	I0914 18:04:08.960910   62655 mustload.go:65] Loading cluster: default-k8s-diff-port-243449
	I0914 18:04:08.961473   62655 config.go:182] Loaded profile config "default-k8s-diff-port-243449": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0914 18:04:08.961566   62655 profile.go:143] Saving config to /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/default-k8s-diff-port-243449/config.json ...
	I0914 18:04:08.961814   62655 mustload.go:65] Loading cluster: default-k8s-diff-port-243449
	I0914 18:04:08.961987   62655 config.go:182] Loaded profile config "default-k8s-diff-port-243449": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0914 18:04:08.962030   62655 stop.go:39] StopHost: default-k8s-diff-port-243449
	I0914 18:04:08.962634   62655 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 18:04:08.962693   62655 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 18:04:08.977999   62655 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37143
	I0914 18:04:08.978525   62655 main.go:141] libmachine: () Calling .GetVersion
	I0914 18:04:08.979059   62655 main.go:141] libmachine: Using API Version  1
	I0914 18:04:08.979092   62655 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 18:04:08.979522   62655 main.go:141] libmachine: () Calling .GetMachineName
	I0914 18:04:08.982112   62655 out.go:177] * Stopping node "default-k8s-diff-port-243449"  ...
	I0914 18:04:08.983314   62655 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0914 18:04:08.983360   62655 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .DriverName
	I0914 18:04:08.983675   62655 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0914 18:04:08.983723   62655 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetSSHHostname
	I0914 18:04:08.987020   62655 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | domain default-k8s-diff-port-243449 has defined MAC address 52:54:00:6e:0b:a7 in network mk-default-k8s-diff-port-243449
	I0914 18:04:08.987595   62655 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:0b:a7", ip: ""} in network mk-default-k8s-diff-port-243449: {Iface:virbr4 ExpiryTime:2024-09-14 19:03:16 +0000 UTC Type:0 Mac:52:54:00:6e:0b:a7 Iaid: IPaddr:192.168.61.38 Prefix:24 Hostname:default-k8s-diff-port-243449 Clientid:01:52:54:00:6e:0b:a7}
	I0914 18:04:08.987627   62655 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | domain default-k8s-diff-port-243449 has defined IP address 192.168.61.38 and MAC address 52:54:00:6e:0b:a7 in network mk-default-k8s-diff-port-243449
	I0914 18:04:08.987841   62655 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetSSHPort
	I0914 18:04:08.988046   62655 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetSSHKeyPath
	I0914 18:04:08.988186   62655 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetSSHUsername
	I0914 18:04:08.988322   62655 sshutil.go:53] new ssh client: &{IP:192.168.61.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/default-k8s-diff-port-243449/id_rsa Username:docker}
	I0914 18:04:09.080100   62655 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0914 18:04:09.134199   62655 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0914 18:04:09.189892   62655 main.go:141] libmachine: Stopping "default-k8s-diff-port-243449"...
	I0914 18:04:09.189926   62655 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetState
	I0914 18:04:09.191962   62655 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .Stop
	I0914 18:04:09.195692   62655 main.go:141] libmachine: (default-k8s-diff-port-243449) Waiting for machine to stop 0/120
	I0914 18:04:10.197231   62655 main.go:141] libmachine: (default-k8s-diff-port-243449) Waiting for machine to stop 1/120
	I0914 18:04:11.198663   62655 main.go:141] libmachine: (default-k8s-diff-port-243449) Waiting for machine to stop 2/120
	I0914 18:04:12.200244   62655 main.go:141] libmachine: (default-k8s-diff-port-243449) Waiting for machine to stop 3/120
	I0914 18:04:13.201633   62655 main.go:141] libmachine: (default-k8s-diff-port-243449) Waiting for machine to stop 4/120
	I0914 18:04:14.204074   62655 main.go:141] libmachine: (default-k8s-diff-port-243449) Waiting for machine to stop 5/120
	I0914 18:04:15.205859   62655 main.go:141] libmachine: (default-k8s-diff-port-243449) Waiting for machine to stop 6/120
	I0914 18:04:16.207184   62655 main.go:141] libmachine: (default-k8s-diff-port-243449) Waiting for machine to stop 7/120
	I0914 18:04:17.208642   62655 main.go:141] libmachine: (default-k8s-diff-port-243449) Waiting for machine to stop 8/120
	I0914 18:04:18.210057   62655 main.go:141] libmachine: (default-k8s-diff-port-243449) Waiting for machine to stop 9/120
	I0914 18:04:19.211746   62655 main.go:141] libmachine: (default-k8s-diff-port-243449) Waiting for machine to stop 10/120
	I0914 18:04:20.213087   62655 main.go:141] libmachine: (default-k8s-diff-port-243449) Waiting for machine to stop 11/120
	I0914 18:04:21.214625   62655 main.go:141] libmachine: (default-k8s-diff-port-243449) Waiting for machine to stop 12/120
	I0914 18:04:22.216038   62655 main.go:141] libmachine: (default-k8s-diff-port-243449) Waiting for machine to stop 13/120
	I0914 18:04:23.217544   62655 main.go:141] libmachine: (default-k8s-diff-port-243449) Waiting for machine to stop 14/120
	I0914 18:04:24.219903   62655 main.go:141] libmachine: (default-k8s-diff-port-243449) Waiting for machine to stop 15/120
	I0914 18:04:25.221934   62655 main.go:141] libmachine: (default-k8s-diff-port-243449) Waiting for machine to stop 16/120
	I0914 18:04:26.223363   62655 main.go:141] libmachine: (default-k8s-diff-port-243449) Waiting for machine to stop 17/120
	I0914 18:04:27.224600   62655 main.go:141] libmachine: (default-k8s-diff-port-243449) Waiting for machine to stop 18/120
	I0914 18:04:28.226151   62655 main.go:141] libmachine: (default-k8s-diff-port-243449) Waiting for machine to stop 19/120
	I0914 18:04:29.227431   62655 main.go:141] libmachine: (default-k8s-diff-port-243449) Waiting for machine to stop 20/120
	I0914 18:04:30.228787   62655 main.go:141] libmachine: (default-k8s-diff-port-243449) Waiting for machine to stop 21/120
	I0914 18:04:31.230109   62655 main.go:141] libmachine: (default-k8s-diff-port-243449) Waiting for machine to stop 22/120
	I0914 18:04:32.231766   62655 main.go:141] libmachine: (default-k8s-diff-port-243449) Waiting for machine to stop 23/120
	I0914 18:04:33.233241   62655 main.go:141] libmachine: (default-k8s-diff-port-243449) Waiting for machine to stop 24/120
	I0914 18:04:34.235260   62655 main.go:141] libmachine: (default-k8s-diff-port-243449) Waiting for machine to stop 25/120
	I0914 18:04:35.237732   62655 main.go:141] libmachine: (default-k8s-diff-port-243449) Waiting for machine to stop 26/120
	I0914 18:04:36.239567   62655 main.go:141] libmachine: (default-k8s-diff-port-243449) Waiting for machine to stop 27/120
	I0914 18:04:37.241379   62655 main.go:141] libmachine: (default-k8s-diff-port-243449) Waiting for machine to stop 28/120
	I0914 18:04:38.243071   62655 main.go:141] libmachine: (default-k8s-diff-port-243449) Waiting for machine to stop 29/120
	I0914 18:04:39.245432   62655 main.go:141] libmachine: (default-k8s-diff-port-243449) Waiting for machine to stop 30/120
	I0914 18:04:40.246958   62655 main.go:141] libmachine: (default-k8s-diff-port-243449) Waiting for machine to stop 31/120
	I0914 18:04:41.248600   62655 main.go:141] libmachine: (default-k8s-diff-port-243449) Waiting for machine to stop 32/120
	I0914 18:04:42.250078   62655 main.go:141] libmachine: (default-k8s-diff-port-243449) Waiting for machine to stop 33/120
	I0914 18:04:43.251473   62655 main.go:141] libmachine: (default-k8s-diff-port-243449) Waiting for machine to stop 34/120
	I0914 18:04:44.253900   62655 main.go:141] libmachine: (default-k8s-diff-port-243449) Waiting for machine to stop 35/120
	I0914 18:04:45.255302   62655 main.go:141] libmachine: (default-k8s-diff-port-243449) Waiting for machine to stop 36/120
	I0914 18:04:46.256693   62655 main.go:141] libmachine: (default-k8s-diff-port-243449) Waiting for machine to stop 37/120
	I0914 18:04:47.258098   62655 main.go:141] libmachine: (default-k8s-diff-port-243449) Waiting for machine to stop 38/120
	I0914 18:04:48.259609   62655 main.go:141] libmachine: (default-k8s-diff-port-243449) Waiting for machine to stop 39/120
	I0914 18:04:49.262251   62655 main.go:141] libmachine: (default-k8s-diff-port-243449) Waiting for machine to stop 40/120
	I0914 18:04:50.263747   62655 main.go:141] libmachine: (default-k8s-diff-port-243449) Waiting for machine to stop 41/120
	I0914 18:04:51.265309   62655 main.go:141] libmachine: (default-k8s-diff-port-243449) Waiting for machine to stop 42/120
	I0914 18:04:52.266930   62655 main.go:141] libmachine: (default-k8s-diff-port-243449) Waiting for machine to stop 43/120
	I0914 18:04:53.268315   62655 main.go:141] libmachine: (default-k8s-diff-port-243449) Waiting for machine to stop 44/120
	I0914 18:04:54.270568   62655 main.go:141] libmachine: (default-k8s-diff-port-243449) Waiting for machine to stop 45/120
	I0914 18:04:55.272097   62655 main.go:141] libmachine: (default-k8s-diff-port-243449) Waiting for machine to stop 46/120
	I0914 18:04:56.273629   62655 main.go:141] libmachine: (default-k8s-diff-port-243449) Waiting for machine to stop 47/120
	I0914 18:04:57.274934   62655 main.go:141] libmachine: (default-k8s-diff-port-243449) Waiting for machine to stop 48/120
	I0914 18:04:58.276395   62655 main.go:141] libmachine: (default-k8s-diff-port-243449) Waiting for machine to stop 49/120
	I0914 18:04:59.278731   62655 main.go:141] libmachine: (default-k8s-diff-port-243449) Waiting for machine to stop 50/120
	I0914 18:05:00.280092   62655 main.go:141] libmachine: (default-k8s-diff-port-243449) Waiting for machine to stop 51/120
	I0914 18:05:01.281649   62655 main.go:141] libmachine: (default-k8s-diff-port-243449) Waiting for machine to stop 52/120
	I0914 18:05:02.283167   62655 main.go:141] libmachine: (default-k8s-diff-port-243449) Waiting for machine to stop 53/120
	I0914 18:05:03.284526   62655 main.go:141] libmachine: (default-k8s-diff-port-243449) Waiting for machine to stop 54/120
	I0914 18:05:04.286686   62655 main.go:141] libmachine: (default-k8s-diff-port-243449) Waiting for machine to stop 55/120
	I0914 18:05:05.288859   62655 main.go:141] libmachine: (default-k8s-diff-port-243449) Waiting for machine to stop 56/120
	I0914 18:05:06.290844   62655 main.go:141] libmachine: (default-k8s-diff-port-243449) Waiting for machine to stop 57/120
	I0914 18:05:07.292261   62655 main.go:141] libmachine: (default-k8s-diff-port-243449) Waiting for machine to stop 58/120
	I0914 18:05:08.293838   62655 main.go:141] libmachine: (default-k8s-diff-port-243449) Waiting for machine to stop 59/120
	I0914 18:05:09.295581   62655 main.go:141] libmachine: (default-k8s-diff-port-243449) Waiting for machine to stop 60/120
	I0914 18:05:10.296965   62655 main.go:141] libmachine: (default-k8s-diff-port-243449) Waiting for machine to stop 61/120
	I0914 18:05:11.298412   62655 main.go:141] libmachine: (default-k8s-diff-port-243449) Waiting for machine to stop 62/120
	I0914 18:05:12.299764   62655 main.go:141] libmachine: (default-k8s-diff-port-243449) Waiting for machine to stop 63/120
	I0914 18:05:13.301247   62655 main.go:141] libmachine: (default-k8s-diff-port-243449) Waiting for machine to stop 64/120
	I0914 18:05:14.303397   62655 main.go:141] libmachine: (default-k8s-diff-port-243449) Waiting for machine to stop 65/120
	I0914 18:05:15.304877   62655 main.go:141] libmachine: (default-k8s-diff-port-243449) Waiting for machine to stop 66/120
	I0914 18:05:16.306442   62655 main.go:141] libmachine: (default-k8s-diff-port-243449) Waiting for machine to stop 67/120
	I0914 18:05:17.307652   62655 main.go:141] libmachine: (default-k8s-diff-port-243449) Waiting for machine to stop 68/120
	I0914 18:05:18.308910   62655 main.go:141] libmachine: (default-k8s-diff-port-243449) Waiting for machine to stop 69/120
	I0914 18:05:19.311187   62655 main.go:141] libmachine: (default-k8s-diff-port-243449) Waiting for machine to stop 70/120
	I0914 18:05:20.312589   62655 main.go:141] libmachine: (default-k8s-diff-port-243449) Waiting for machine to stop 71/120
	I0914 18:05:21.313882   62655 main.go:141] libmachine: (default-k8s-diff-port-243449) Waiting for machine to stop 72/120
	I0914 18:05:22.315343   62655 main.go:141] libmachine: (default-k8s-diff-port-243449) Waiting for machine to stop 73/120
	I0914 18:05:23.316701   62655 main.go:141] libmachine: (default-k8s-diff-port-243449) Waiting for machine to stop 74/120
	I0914 18:05:24.318808   62655 main.go:141] libmachine: (default-k8s-diff-port-243449) Waiting for machine to stop 75/120
	I0914 18:05:25.320105   62655 main.go:141] libmachine: (default-k8s-diff-port-243449) Waiting for machine to stop 76/120
	I0914 18:05:26.322104   62655 main.go:141] libmachine: (default-k8s-diff-port-243449) Waiting for machine to stop 77/120
	I0914 18:05:27.323311   62655 main.go:141] libmachine: (default-k8s-diff-port-243449) Waiting for machine to stop 78/120
	I0914 18:05:28.324838   62655 main.go:141] libmachine: (default-k8s-diff-port-243449) Waiting for machine to stop 79/120
	I0914 18:05:29.326985   62655 main.go:141] libmachine: (default-k8s-diff-port-243449) Waiting for machine to stop 80/120
	I0914 18:05:30.328266   62655 main.go:141] libmachine: (default-k8s-diff-port-243449) Waiting for machine to stop 81/120
	I0914 18:05:31.329544   62655 main.go:141] libmachine: (default-k8s-diff-port-243449) Waiting for machine to stop 82/120
	I0914 18:05:32.330923   62655 main.go:141] libmachine: (default-k8s-diff-port-243449) Waiting for machine to stop 83/120
	I0914 18:05:33.332503   62655 main.go:141] libmachine: (default-k8s-diff-port-243449) Waiting for machine to stop 84/120
	I0914 18:05:34.334318   62655 main.go:141] libmachine: (default-k8s-diff-port-243449) Waiting for machine to stop 85/120
	I0914 18:05:35.335648   62655 main.go:141] libmachine: (default-k8s-diff-port-243449) Waiting for machine to stop 86/120
	I0914 18:05:36.337031   62655 main.go:141] libmachine: (default-k8s-diff-port-243449) Waiting for machine to stop 87/120
	I0914 18:05:37.338515   62655 main.go:141] libmachine: (default-k8s-diff-port-243449) Waiting for machine to stop 88/120
	I0914 18:05:38.339870   62655 main.go:141] libmachine: (default-k8s-diff-port-243449) Waiting for machine to stop 89/120
	I0914 18:05:39.342249   62655 main.go:141] libmachine: (default-k8s-diff-port-243449) Waiting for machine to stop 90/120
	I0914 18:05:40.343652   62655 main.go:141] libmachine: (default-k8s-diff-port-243449) Waiting for machine to stop 91/120
	I0914 18:05:41.345059   62655 main.go:141] libmachine: (default-k8s-diff-port-243449) Waiting for machine to stop 92/120
	I0914 18:05:42.346541   62655 main.go:141] libmachine: (default-k8s-diff-port-243449) Waiting for machine to stop 93/120
	I0914 18:05:43.347974   62655 main.go:141] libmachine: (default-k8s-diff-port-243449) Waiting for machine to stop 94/120
	I0914 18:05:44.350091   62655 main.go:141] libmachine: (default-k8s-diff-port-243449) Waiting for machine to stop 95/120
	I0914 18:05:45.351577   62655 main.go:141] libmachine: (default-k8s-diff-port-243449) Waiting for machine to stop 96/120
	I0914 18:05:46.353125   62655 main.go:141] libmachine: (default-k8s-diff-port-243449) Waiting for machine to stop 97/120
	I0914 18:05:47.354712   62655 main.go:141] libmachine: (default-k8s-diff-port-243449) Waiting for machine to stop 98/120
	I0914 18:05:48.356263   62655 main.go:141] libmachine: (default-k8s-diff-port-243449) Waiting for machine to stop 99/120
	I0914 18:05:49.357736   62655 main.go:141] libmachine: (default-k8s-diff-port-243449) Waiting for machine to stop 100/120
	I0914 18:05:50.359284   62655 main.go:141] libmachine: (default-k8s-diff-port-243449) Waiting for machine to stop 101/120
	I0914 18:05:51.360872   62655 main.go:141] libmachine: (default-k8s-diff-port-243449) Waiting for machine to stop 102/120
	I0914 18:05:52.362347   62655 main.go:141] libmachine: (default-k8s-diff-port-243449) Waiting for machine to stop 103/120
	I0914 18:05:53.363715   62655 main.go:141] libmachine: (default-k8s-diff-port-243449) Waiting for machine to stop 104/120
	I0914 18:05:54.365836   62655 main.go:141] libmachine: (default-k8s-diff-port-243449) Waiting for machine to stop 105/120
	I0914 18:05:55.367177   62655 main.go:141] libmachine: (default-k8s-diff-port-243449) Waiting for machine to stop 106/120
	I0914 18:05:56.368558   62655 main.go:141] libmachine: (default-k8s-diff-port-243449) Waiting for machine to stop 107/120
	I0914 18:05:57.369976   62655 main.go:141] libmachine: (default-k8s-diff-port-243449) Waiting for machine to stop 108/120
	I0914 18:05:58.371647   62655 main.go:141] libmachine: (default-k8s-diff-port-243449) Waiting for machine to stop 109/120
	I0914 18:05:59.373174   62655 main.go:141] libmachine: (default-k8s-diff-port-243449) Waiting for machine to stop 110/120
	I0914 18:06:00.374471   62655 main.go:141] libmachine: (default-k8s-diff-port-243449) Waiting for machine to stop 111/120
	I0914 18:06:01.375915   62655 main.go:141] libmachine: (default-k8s-diff-port-243449) Waiting for machine to stop 112/120
	I0914 18:06:02.377269   62655 main.go:141] libmachine: (default-k8s-diff-port-243449) Waiting for machine to stop 113/120
	I0914 18:06:03.378726   62655 main.go:141] libmachine: (default-k8s-diff-port-243449) Waiting for machine to stop 114/120
	I0914 18:06:04.380870   62655 main.go:141] libmachine: (default-k8s-diff-port-243449) Waiting for machine to stop 115/120
	I0914 18:06:05.382281   62655 main.go:141] libmachine: (default-k8s-diff-port-243449) Waiting for machine to stop 116/120
	I0914 18:06:06.383708   62655 main.go:141] libmachine: (default-k8s-diff-port-243449) Waiting for machine to stop 117/120
	I0914 18:06:07.385532   62655 main.go:141] libmachine: (default-k8s-diff-port-243449) Waiting for machine to stop 118/120
	I0914 18:06:08.386943   62655 main.go:141] libmachine: (default-k8s-diff-port-243449) Waiting for machine to stop 119/120
	I0914 18:06:09.388522   62655 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0914 18:06:09.388578   62655 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0914 18:06:09.390776   62655 out.go:201] 
	W0914 18:06:09.392262   62655 out.go:270] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0914 18:06:09.392280   62655 out.go:270] * 
	* 
	W0914 18:06:09.394857   62655 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0914 18:06:09.396124   62655 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p default-k8s-diff-port-243449 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-243449 -n default-k8s-diff-port-243449
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-243449 -n default-k8s-diff-port-243449: exit status 3 (18.477053908s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0914 18:06:27.874548   63244 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.38:22: connect: no route to host
	E0914 18:06:27.874568   63244 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.61.38:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-243449" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Stop (138.96s)
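
The stop itself is what fails here: the driver requests a shutdown and then polls the VM state once per second for 120 attempts, and because the machine never leaves "Running" the command exits 82 with GUEST_STOP_TIMEOUT. The Go sketch below is only an illustration of the shape of that wait loop as it appears in the log; stopVM and vmState are placeholders, not the libmachine API.

package main

import (
    "errors"
    "fmt"
    "time"
)

// stopVM stands in for the driver's .Stop call seen in the log; it only requests shutdown.
func stopVM() error { return nil }

// vmState stands in for .GetState; returning "Running" forever reproduces the failing run.
func vmState() string { return "Running" }

// waitForStop requests a stop and then polls the state once per second, giving up
// after the requested number of attempts, mirroring the 0/120 ... 119/120 lines above.
func waitForStop(attempts int) error {
    if err := stopVM(); err != nil {
        return err
    }
    for i := 0; i < attempts; i++ {
        if vmState() != "Running" {
            return nil
        }
        fmt.Printf("Waiting for machine to stop %d/%d\n", i, attempts)
        time.Sleep(time.Second)
    }
    return errors.New(`unable to stop vm, current state "Running"`)
}

func main() {
    if err := waitForStop(120); err != nil {
        // In the run above, minikube surfaces this as exit status 82 (GUEST_STOP_TIMEOUT).
        fmt.Println("stop err:", err)
    }
}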

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (709.35s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-556121 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p old-k8s-version-556121 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0: exit status 109 (11m45.602481066s)

                                                
                                                
-- stdout --
	* [old-k8s-version-556121] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19643
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19643-8806/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19643-8806/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	* Using the kvm2 driver based on existing profile
	* Starting "old-k8s-version-556121" primary control-plane node in "old-k8s-version-556121" cluster
	* Restarting existing kvm2 VM for "old-k8s-version-556121" ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0914 18:05:05.340813   62996 out.go:345] Setting OutFile to fd 1 ...
	I0914 18:05:05.340916   62996 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 18:05:05.340921   62996 out.go:358] Setting ErrFile to fd 2...
	I0914 18:05:05.340925   62996 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 18:05:05.341126   62996 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19643-8806/.minikube/bin
	I0914 18:05:05.341711   62996 out.go:352] Setting JSON to false
	I0914 18:05:05.342644   62996 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":6449,"bootTime":1726330656,"procs":203,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0914 18:05:05.342744   62996 start.go:139] virtualization: kvm guest
	I0914 18:05:05.344944   62996 out.go:177] * [old-k8s-version-556121] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0914 18:05:05.346289   62996 out.go:177]   - MINIKUBE_LOCATION=19643
	I0914 18:05:05.346350   62996 notify.go:220] Checking for updates...
	I0914 18:05:05.348946   62996 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0914 18:05:05.350360   62996 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19643-8806/kubeconfig
	I0914 18:05:05.351554   62996 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19643-8806/.minikube
	I0914 18:05:05.352705   62996 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0914 18:05:05.353950   62996 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0914 18:05:05.355771   62996 config.go:182] Loaded profile config "old-k8s-version-556121": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0914 18:05:05.356136   62996 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 18:05:05.356181   62996 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 18:05:05.371164   62996 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33335
	I0914 18:05:05.371612   62996 main.go:141] libmachine: () Calling .GetVersion
	I0914 18:05:05.372169   62996 main.go:141] libmachine: Using API Version  1
	I0914 18:05:05.372189   62996 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 18:05:05.372549   62996 main.go:141] libmachine: () Calling .GetMachineName
	I0914 18:05:05.372743   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .DriverName
	I0914 18:05:05.374830   62996 out.go:177] * Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	I0914 18:05:05.376741   62996 driver.go:394] Setting default libvirt URI to qemu:///system
	I0914 18:05:05.377088   62996 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 18:05:05.377136   62996 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 18:05:05.392353   62996 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41555
	I0914 18:05:05.392796   62996 main.go:141] libmachine: () Calling .GetVersion
	I0914 18:05:05.393314   62996 main.go:141] libmachine: Using API Version  1
	I0914 18:05:05.393341   62996 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 18:05:05.393647   62996 main.go:141] libmachine: () Calling .GetMachineName
	I0914 18:05:05.393815   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .DriverName
	I0914 18:05:05.429186   62996 out.go:177] * Using the kvm2 driver based on existing profile
	I0914 18:05:05.430443   62996 start.go:297] selected driver: kvm2
	I0914 18:05:05.430457   62996 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-556121 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19643/minikube-v1.34.0-1726281733-19643-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726281268-19643@sha256:cce8be0c1ac4e3d852132008ef1cc1dcf5b79f708d025db83f146ae65db32e8e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.20.0 ClusterName:old-k8s-version-556121 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.83.80 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280
h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0914 18:05:05.430580   62996 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0914 18:05:05.431220   62996 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 18:05:05.431297   62996 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19643-8806/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0914 18:05:05.446676   62996 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0914 18:05:05.447083   62996 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0914 18:05:05.447118   62996 cni.go:84] Creating CNI manager for ""
	I0914 18:05:05.447157   62996 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0914 18:05:05.447193   62996 start.go:340] cluster config:
	{Name:old-k8s-version-556121 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19643/minikube-v1.34.0-1726281733-19643-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726281268-19643@sha256:cce8be0c1ac4e3d852132008ef1cc1dcf5b79f708d025db83f146ae65db32e8e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-556121 Namespace:default
APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.83.80 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p20
00.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0914 18:05:05.447303   62996 iso.go:125] acquiring lock: {Name:mk538f7a4abc7956f82511183088cbfac1e66ebb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 18:05:05.449222   62996 out.go:177] * Starting "old-k8s-version-556121" primary control-plane node in "old-k8s-version-556121" cluster
	I0914 18:05:05.450471   62996 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0914 18:05:05.450505   62996 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19643-8806/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0914 18:05:05.450516   62996 cache.go:56] Caching tarball of preloaded images
	I0914 18:05:05.450616   62996 preload.go:172] Found /home/jenkins/minikube-integration/19643-8806/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0914 18:05:05.450627   62996 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0914 18:05:05.450717   62996 profile.go:143] Saving config to /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/old-k8s-version-556121/config.json ...
	I0914 18:05:05.450890   62996 start.go:360] acquireMachinesLock for old-k8s-version-556121: {Name:mk26748ac63472c3f8b6f3848c12e76160c2970c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0914 18:08:24.143036   62996 start.go:364] duration metric: took 3m18.692107902s to acquireMachinesLock for "old-k8s-version-556121"
	I0914 18:08:24.143089   62996 start.go:96] Skipping create...Using existing machine configuration
	I0914 18:08:24.143094   62996 fix.go:54] fixHost starting: 
	I0914 18:08:24.143474   62996 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 18:08:24.143527   62996 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 18:08:24.160421   62996 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44345
	I0914 18:08:24.160864   62996 main.go:141] libmachine: () Calling .GetVersion
	I0914 18:08:24.161467   62996 main.go:141] libmachine: Using API Version  1
	I0914 18:08:24.161495   62996 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 18:08:24.161913   62996 main.go:141] libmachine: () Calling .GetMachineName
	I0914 18:08:24.162137   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .DriverName
	I0914 18:08:24.162322   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetState
	I0914 18:08:24.163974   62996 fix.go:112] recreateIfNeeded on old-k8s-version-556121: state=Stopped err=<nil>
	I0914 18:08:24.164020   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .DriverName
	W0914 18:08:24.164197   62996 fix.go:138] unexpected machine state, will restart: <nil>
	I0914 18:08:24.166624   62996 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-556121" ...
	I0914 18:08:24.167885   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .Start
	I0914 18:08:24.168096   62996 main.go:141] libmachine: (old-k8s-version-556121) Ensuring networks are active...
	I0914 18:08:24.169086   62996 main.go:141] libmachine: (old-k8s-version-556121) Ensuring network default is active
	I0914 18:08:24.169493   62996 main.go:141] libmachine: (old-k8s-version-556121) Ensuring network mk-old-k8s-version-556121 is active
	I0914 18:08:24.170025   62996 main.go:141] libmachine: (old-k8s-version-556121) Getting domain xml...
	I0914 18:08:24.170619   62996 main.go:141] libmachine: (old-k8s-version-556121) Creating domain...
	I0914 18:08:25.409780   62996 main.go:141] libmachine: (old-k8s-version-556121) Waiting to get IP...
	I0914 18:08:25.410880   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 18:08:25.411287   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | unable to find current IP address of domain old-k8s-version-556121 in network mk-old-k8s-version-556121
	I0914 18:08:25.411359   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | I0914 18:08:25.411268   63916 retry.go:31] will retry after 190.165859ms: waiting for machine to come up
	I0914 18:08:25.602661   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 18:08:25.603210   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | unable to find current IP address of domain old-k8s-version-556121 in network mk-old-k8s-version-556121
	I0914 18:08:25.603235   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | I0914 18:08:25.603161   63916 retry.go:31] will retry after 274.368109ms: waiting for machine to come up
	I0914 18:08:25.879976   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 18:08:25.880476   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | unable to find current IP address of domain old-k8s-version-556121 in network mk-old-k8s-version-556121
	I0914 18:08:25.880509   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | I0914 18:08:25.880412   63916 retry.go:31] will retry after 476.865698ms: waiting for machine to come up
	I0914 18:08:26.359279   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 18:08:26.359815   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | unable to find current IP address of domain old-k8s-version-556121 in network mk-old-k8s-version-556121
	I0914 18:08:26.359845   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | I0914 18:08:26.359775   63916 retry.go:31] will retry after 474.163339ms: waiting for machine to come up
	I0914 18:08:26.835268   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 18:08:26.835953   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | unable to find current IP address of domain old-k8s-version-556121 in network mk-old-k8s-version-556121
	I0914 18:08:26.835983   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | I0914 18:08:26.835914   63916 retry.go:31] will retry after 567.661702ms: waiting for machine to come up
	I0914 18:08:27.404884   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 18:08:27.405341   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | unable to find current IP address of domain old-k8s-version-556121 in network mk-old-k8s-version-556121
	I0914 18:08:27.405370   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | I0914 18:08:27.405297   63916 retry.go:31] will retry after 852.429203ms: waiting for machine to come up
	I0914 18:08:28.259542   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 18:08:28.260217   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | unable to find current IP address of domain old-k8s-version-556121 in network mk-old-k8s-version-556121
	I0914 18:08:28.260243   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | I0914 18:08:28.260154   63916 retry.go:31] will retry after 1.085703288s: waiting for machine to come up
	I0914 18:08:29.347849   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 18:08:29.348268   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | unable to find current IP address of domain old-k8s-version-556121 in network mk-old-k8s-version-556121
	I0914 18:08:29.348289   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | I0914 18:08:29.348235   63916 retry.go:31] will retry after 1.387665735s: waiting for machine to come up
	I0914 18:08:30.737338   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 18:08:30.737792   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | unable to find current IP address of domain old-k8s-version-556121 in network mk-old-k8s-version-556121
	I0914 18:08:30.737844   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | I0914 18:08:30.737738   63916 retry.go:31] will retry after 1.803773185s: waiting for machine to come up
	I0914 18:08:32.543684   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 18:08:32.544156   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | unable to find current IP address of domain old-k8s-version-556121 in network mk-old-k8s-version-556121
	I0914 18:08:32.544182   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | I0914 18:08:32.544107   63916 retry.go:31] will retry after 1.828120666s: waiting for machine to come up
	I0914 18:08:34.373701   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 18:08:34.374182   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | unable to find current IP address of domain old-k8s-version-556121 in network mk-old-k8s-version-556121
	I0914 18:08:34.374211   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | I0914 18:08:34.374120   63916 retry.go:31] will retry after 2.720782735s: waiting for machine to come up
	I0914 18:08:37.097976   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 18:08:37.098462   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | unable to find current IP address of domain old-k8s-version-556121 in network mk-old-k8s-version-556121
	I0914 18:08:37.098499   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | I0914 18:08:37.098402   63916 retry.go:31] will retry after 2.748765758s: waiting for machine to come up
	I0914 18:08:39.849058   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 18:08:39.849634   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | unable to find current IP address of domain old-k8s-version-556121 in network mk-old-k8s-version-556121
	I0914 18:08:39.849665   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | I0914 18:08:39.849559   63916 retry.go:31] will retry after 3.687679512s: waiting for machine to come up
	I0914 18:08:43.541607   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 18:08:43.542188   62996 main.go:141] libmachine: (old-k8s-version-556121) Found IP for machine: 192.168.83.80
	I0914 18:08:43.542220   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has current primary IP address 192.168.83.80 and MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 18:08:43.542230   62996 main.go:141] libmachine: (old-k8s-version-556121) Reserving static IP address...
	I0914 18:08:43.542686   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | found host DHCP lease matching {name: "old-k8s-version-556121", mac: "52:54:00:76:25:ab", ip: "192.168.83.80"} in network mk-old-k8s-version-556121: {Iface:virbr1 ExpiryTime:2024-09-14 19:08:35 +0000 UTC Type:0 Mac:52:54:00:76:25:ab Iaid: IPaddr:192.168.83.80 Prefix:24 Hostname:old-k8s-version-556121 Clientid:01:52:54:00:76:25:ab}
	I0914 18:08:43.542711   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | skip adding static IP to network mk-old-k8s-version-556121 - found existing host DHCP lease matching {name: "old-k8s-version-556121", mac: "52:54:00:76:25:ab", ip: "192.168.83.80"}
	I0914 18:08:43.542728   62996 main.go:141] libmachine: (old-k8s-version-556121) Reserved static IP address: 192.168.83.80
	I0914 18:08:43.542748   62996 main.go:141] libmachine: (old-k8s-version-556121) Waiting for SSH to be available...
	I0914 18:08:43.542770   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | Getting to WaitForSSH function...
	I0914 18:08:43.545361   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 18:08:43.545798   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:25:ab", ip: ""} in network mk-old-k8s-version-556121: {Iface:virbr1 ExpiryTime:2024-09-14 19:08:35 +0000 UTC Type:0 Mac:52:54:00:76:25:ab Iaid: IPaddr:192.168.83.80 Prefix:24 Hostname:old-k8s-version-556121 Clientid:01:52:54:00:76:25:ab}
	I0914 18:08:43.545828   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined IP address 192.168.83.80 and MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 18:08:43.545984   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | Using SSH client type: external
	I0914 18:08:43.546021   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | Using SSH private key: /home/jenkins/minikube-integration/19643-8806/.minikube/machines/old-k8s-version-556121/id_rsa (-rw-------)
	I0914 18:08:43.546067   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.83.80 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19643-8806/.minikube/machines/old-k8s-version-556121/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0914 18:08:43.546091   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | About to run SSH command:
	I0914 18:08:43.546109   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | exit 0
	I0914 18:08:43.686605   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | SSH cmd err, output: <nil>: 
	I0914 18:08:43.687033   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetConfigRaw
	I0914 18:08:43.750102   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetIP
	I0914 18:08:43.753303   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 18:08:43.753653   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:25:ab", ip: ""} in network mk-old-k8s-version-556121: {Iface:virbr1 ExpiryTime:2024-09-14 19:08:35 +0000 UTC Type:0 Mac:52:54:00:76:25:ab Iaid: IPaddr:192.168.83.80 Prefix:24 Hostname:old-k8s-version-556121 Clientid:01:52:54:00:76:25:ab}
	I0914 18:08:43.753696   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined IP address 192.168.83.80 and MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 18:08:43.754107   62996 profile.go:143] Saving config to /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/old-k8s-version-556121/config.json ...
	I0914 18:08:43.802426   62996 machine.go:93] provisionDockerMachine start ...
	I0914 18:08:43.802497   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .DriverName
	I0914 18:08:43.802858   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHHostname
	I0914 18:08:43.805944   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 18:08:43.806303   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:25:ab", ip: ""} in network mk-old-k8s-version-556121: {Iface:virbr1 ExpiryTime:2024-09-14 19:08:35 +0000 UTC Type:0 Mac:52:54:00:76:25:ab Iaid: IPaddr:192.168.83.80 Prefix:24 Hostname:old-k8s-version-556121 Clientid:01:52:54:00:76:25:ab}
	I0914 18:08:43.806346   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined IP address 192.168.83.80 and MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 18:08:43.806722   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHPort
	I0914 18:08:43.806951   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHKeyPath
	I0914 18:08:43.807130   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHKeyPath
	I0914 18:08:43.807298   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHUsername
	I0914 18:08:43.807469   62996 main.go:141] libmachine: Using SSH client type: native
	I0914 18:08:43.807687   62996 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.83.80 22 <nil> <nil>}
	I0914 18:08:43.807700   62996 main.go:141] libmachine: About to run SSH command:
	hostname
	I0914 18:08:43.906427   62996 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0914 18:08:43.906467   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetMachineName
	I0914 18:08:43.906725   62996 buildroot.go:166] provisioning hostname "old-k8s-version-556121"
	I0914 18:08:43.906787   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetMachineName
	I0914 18:08:43.906978   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHHostname
	I0914 18:08:43.909891   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 18:08:43.910262   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:25:ab", ip: ""} in network mk-old-k8s-version-556121: {Iface:virbr1 ExpiryTime:2024-09-14 19:08:35 +0000 UTC Type:0 Mac:52:54:00:76:25:ab Iaid: IPaddr:192.168.83.80 Prefix:24 Hostname:old-k8s-version-556121 Clientid:01:52:54:00:76:25:ab}
	I0914 18:08:43.910295   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined IP address 192.168.83.80 and MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 18:08:43.910545   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHPort
	I0914 18:08:43.910771   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHKeyPath
	I0914 18:08:43.910908   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHKeyPath
	I0914 18:08:43.911062   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHUsername
	I0914 18:08:43.911221   62996 main.go:141] libmachine: Using SSH client type: native
	I0914 18:08:43.911418   62996 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.83.80 22 <nil> <nil>}
	I0914 18:08:43.911430   62996 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-556121 && echo "old-k8s-version-556121" | sudo tee /etc/hostname
	I0914 18:08:44.028748   62996 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-556121
	
	I0914 18:08:44.028774   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHHostname
	I0914 18:08:44.031512   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 18:08:44.031824   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:25:ab", ip: ""} in network mk-old-k8s-version-556121: {Iface:virbr1 ExpiryTime:2024-09-14 19:08:35 +0000 UTC Type:0 Mac:52:54:00:76:25:ab Iaid: IPaddr:192.168.83.80 Prefix:24 Hostname:old-k8s-version-556121 Clientid:01:52:54:00:76:25:ab}
	I0914 18:08:44.031848   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined IP address 192.168.83.80 and MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 18:08:44.032009   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHPort
	I0914 18:08:44.032145   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHKeyPath
	I0914 18:08:44.032311   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHKeyPath
	I0914 18:08:44.032445   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHUsername
	I0914 18:08:44.032583   62996 main.go:141] libmachine: Using SSH client type: native
	I0914 18:08:44.032792   62996 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.83.80 22 <nil> <nil>}
	I0914 18:08:44.032809   62996 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-556121' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-556121/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-556121' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0914 18:08:44.140041   62996 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0914 18:08:44.140068   62996 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19643-8806/.minikube CaCertPath:/home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19643-8806/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19643-8806/.minikube}
	I0914 18:08:44.140094   62996 buildroot.go:174] setting up certificates
	I0914 18:08:44.140103   62996 provision.go:84] configureAuth start
	I0914 18:08:44.140111   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetMachineName
	I0914 18:08:44.140439   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetIP
	I0914 18:08:44.143050   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 18:08:44.143454   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:25:ab", ip: ""} in network mk-old-k8s-version-556121: {Iface:virbr1 ExpiryTime:2024-09-14 19:08:35 +0000 UTC Type:0 Mac:52:54:00:76:25:ab Iaid: IPaddr:192.168.83.80 Prefix:24 Hostname:old-k8s-version-556121 Clientid:01:52:54:00:76:25:ab}
	I0914 18:08:44.143492   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined IP address 192.168.83.80 and MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 18:08:44.143678   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHHostname
	I0914 18:08:44.146487   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 18:08:44.146947   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:25:ab", ip: ""} in network mk-old-k8s-version-556121: {Iface:virbr1 ExpiryTime:2024-09-14 19:08:35 +0000 UTC Type:0 Mac:52:54:00:76:25:ab Iaid: IPaddr:192.168.83.80 Prefix:24 Hostname:old-k8s-version-556121 Clientid:01:52:54:00:76:25:ab}
	I0914 18:08:44.146971   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined IP address 192.168.83.80 and MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 18:08:44.147147   62996 provision.go:143] copyHostCerts
	I0914 18:08:44.147213   62996 exec_runner.go:144] found /home/jenkins/minikube-integration/19643-8806/.minikube/ca.pem, removing ...
	I0914 18:08:44.147224   62996 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19643-8806/.minikube/ca.pem
	I0914 18:08:44.147287   62996 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19643-8806/.minikube/ca.pem (1082 bytes)
	I0914 18:08:44.147440   62996 exec_runner.go:144] found /home/jenkins/minikube-integration/19643-8806/.minikube/cert.pem, removing ...
	I0914 18:08:44.147450   62996 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19643-8806/.minikube/cert.pem
	I0914 18:08:44.147475   62996 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19643-8806/.minikube/cert.pem (1123 bytes)
	I0914 18:08:44.147530   62996 exec_runner.go:144] found /home/jenkins/minikube-integration/19643-8806/.minikube/key.pem, removing ...
	I0914 18:08:44.147538   62996 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19643-8806/.minikube/key.pem
	I0914 18:08:44.147558   62996 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19643-8806/.minikube/key.pem (1675 bytes)
	I0914 18:08:44.147613   62996 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19643-8806/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-556121 san=[127.0.0.1 192.168.83.80 localhost minikube old-k8s-version-556121]
	I0914 18:08:44.500305   62996 provision.go:177] copyRemoteCerts
	I0914 18:08:44.500395   62996 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0914 18:08:44.500430   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHHostname
	I0914 18:08:44.503376   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 18:08:44.503790   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:25:ab", ip: ""} in network mk-old-k8s-version-556121: {Iface:virbr1 ExpiryTime:2024-09-14 19:08:35 +0000 UTC Type:0 Mac:52:54:00:76:25:ab Iaid: IPaddr:192.168.83.80 Prefix:24 Hostname:old-k8s-version-556121 Clientid:01:52:54:00:76:25:ab}
	I0914 18:08:44.503828   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined IP address 192.168.83.80 and MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 18:08:44.503972   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHPort
	I0914 18:08:44.504194   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHKeyPath
	I0914 18:08:44.504352   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHUsername
	I0914 18:08:44.504531   62996 sshutil.go:53] new ssh client: &{IP:192.168.83.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/old-k8s-version-556121/id_rsa Username:docker}
	I0914 18:08:44.584362   62996 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0914 18:08:44.607734   62996 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0914 18:08:44.630267   62996 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0914 18:08:44.653997   62996 provision.go:87] duration metric: took 513.857804ms to configureAuth
	I0914 18:08:44.654029   62996 buildroot.go:189] setting minikube options for container-runtime
	I0914 18:08:44.654259   62996 config.go:182] Loaded profile config "old-k8s-version-556121": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0914 18:08:44.654338   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHHostname
	I0914 18:08:44.657020   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 18:08:44.657416   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:25:ab", ip: ""} in network mk-old-k8s-version-556121: {Iface:virbr1 ExpiryTime:2024-09-14 19:08:35 +0000 UTC Type:0 Mac:52:54:00:76:25:ab Iaid: IPaddr:192.168.83.80 Prefix:24 Hostname:old-k8s-version-556121 Clientid:01:52:54:00:76:25:ab}
	I0914 18:08:44.657442   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined IP address 192.168.83.80 and MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 18:08:44.657676   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHPort
	I0914 18:08:44.657884   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHKeyPath
	I0914 18:08:44.658047   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHKeyPath
	I0914 18:08:44.658228   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHUsername
	I0914 18:08:44.658382   62996 main.go:141] libmachine: Using SSH client type: native
	I0914 18:08:44.658584   62996 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.83.80 22 <nil> <nil>}
	I0914 18:08:44.658602   62996 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0914 18:08:44.877074   62996 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0914 18:08:44.877103   62996 machine.go:96] duration metric: took 1.074648772s to provisionDockerMachine
	I0914 18:08:44.877117   62996 start.go:293] postStartSetup for "old-k8s-version-556121" (driver="kvm2")
	I0914 18:08:44.877128   62996 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0914 18:08:44.877155   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .DriverName
	I0914 18:08:44.877491   62996 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0914 18:08:44.877522   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHHostname
	I0914 18:08:44.880792   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 18:08:44.881167   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:25:ab", ip: ""} in network mk-old-k8s-version-556121: {Iface:virbr1 ExpiryTime:2024-09-14 19:08:35 +0000 UTC Type:0 Mac:52:54:00:76:25:ab Iaid: IPaddr:192.168.83.80 Prefix:24 Hostname:old-k8s-version-556121 Clientid:01:52:54:00:76:25:ab}
	I0914 18:08:44.881197   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined IP address 192.168.83.80 and MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 18:08:44.881472   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHPort
	I0914 18:08:44.881693   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHKeyPath
	I0914 18:08:44.881853   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHUsername
	I0914 18:08:44.881984   62996 sshutil.go:53] new ssh client: &{IP:192.168.83.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/old-k8s-version-556121/id_rsa Username:docker}
	I0914 18:08:44.961211   62996 ssh_runner.go:195] Run: cat /etc/os-release
	I0914 18:08:44.965472   62996 info.go:137] Remote host: Buildroot 2023.02.9
	I0914 18:08:44.965507   62996 filesync.go:126] Scanning /home/jenkins/minikube-integration/19643-8806/.minikube/addons for local assets ...
	I0914 18:08:44.965583   62996 filesync.go:126] Scanning /home/jenkins/minikube-integration/19643-8806/.minikube/files for local assets ...
	I0914 18:08:44.965671   62996 filesync.go:149] local asset: /home/jenkins/minikube-integration/19643-8806/.minikube/files/etc/ssl/certs/160162.pem -> 160162.pem in /etc/ssl/certs
	I0914 18:08:44.965765   62996 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0914 18:08:44.975476   62996 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/files/etc/ssl/certs/160162.pem --> /etc/ssl/certs/160162.pem (1708 bytes)
	I0914 18:08:45.000248   62996 start.go:296] duration metric: took 123.115178ms for postStartSetup
	I0914 18:08:45.000299   62996 fix.go:56] duration metric: took 20.85719914s for fixHost
	I0914 18:08:45.000326   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHHostname
	I0914 18:08:45.002894   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 18:08:45.003216   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:25:ab", ip: ""} in network mk-old-k8s-version-556121: {Iface:virbr1 ExpiryTime:2024-09-14 19:08:35 +0000 UTC Type:0 Mac:52:54:00:76:25:ab Iaid: IPaddr:192.168.83.80 Prefix:24 Hostname:old-k8s-version-556121 Clientid:01:52:54:00:76:25:ab}
	I0914 18:08:45.003247   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined IP address 192.168.83.80 and MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 18:08:45.003407   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHPort
	I0914 18:08:45.003585   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHKeyPath
	I0914 18:08:45.003749   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHKeyPath
	I0914 18:08:45.003880   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHUsername
	I0914 18:08:45.004041   62996 main.go:141] libmachine: Using SSH client type: native
	I0914 18:08:45.004211   62996 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.83.80 22 <nil> <nil>}
	I0914 18:08:45.004221   62996 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0914 18:08:45.102905   62996 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726337325.064071007
	
	I0914 18:08:45.102933   62996 fix.go:216] guest clock: 1726337325.064071007
	I0914 18:08:45.102944   62996 fix.go:229] Guest: 2024-09-14 18:08:45.064071007 +0000 UTC Remote: 2024-09-14 18:08:45.000305051 +0000 UTC m=+219.697616364 (delta=63.765956ms)
	I0914 18:08:45.102967   62996 fix.go:200] guest clock delta is within tolerance: 63.765956ms
	I0914 18:08:45.102973   62996 start.go:83] releasing machines lock for "old-k8s-version-556121", held for 20.959903428s
	I0914 18:08:45.102999   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .DriverName
	I0914 18:08:45.103277   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetIP
	I0914 18:08:45.105995   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 18:08:45.106435   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:25:ab", ip: ""} in network mk-old-k8s-version-556121: {Iface:virbr1 ExpiryTime:2024-09-14 19:08:35 +0000 UTC Type:0 Mac:52:54:00:76:25:ab Iaid: IPaddr:192.168.83.80 Prefix:24 Hostname:old-k8s-version-556121 Clientid:01:52:54:00:76:25:ab}
	I0914 18:08:45.106463   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined IP address 192.168.83.80 and MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 18:08:45.106684   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .DriverName
	I0914 18:08:45.107224   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .DriverName
	I0914 18:08:45.107415   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .DriverName
	I0914 18:08:45.107506   62996 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0914 18:08:45.107556   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHHostname
	I0914 18:08:45.107675   62996 ssh_runner.go:195] Run: cat /version.json
	I0914 18:08:45.107699   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHHostname
	I0914 18:08:45.110528   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 18:08:45.110558   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 18:08:45.110917   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:25:ab", ip: ""} in network mk-old-k8s-version-556121: {Iface:virbr1 ExpiryTime:2024-09-14 19:08:35 +0000 UTC Type:0 Mac:52:54:00:76:25:ab Iaid: IPaddr:192.168.83.80 Prefix:24 Hostname:old-k8s-version-556121 Clientid:01:52:54:00:76:25:ab}
	I0914 18:08:45.110947   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:25:ab", ip: ""} in network mk-old-k8s-version-556121: {Iface:virbr1 ExpiryTime:2024-09-14 19:08:35 +0000 UTC Type:0 Mac:52:54:00:76:25:ab Iaid: IPaddr:192.168.83.80 Prefix:24 Hostname:old-k8s-version-556121 Clientid:01:52:54:00:76:25:ab}
	I0914 18:08:45.110969   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined IP address 192.168.83.80 and MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 18:08:45.111062   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined IP address 192.168.83.80 and MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 18:08:45.111157   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHPort
	I0914 18:08:45.111388   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHKeyPath
	I0914 18:08:45.111407   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHPort
	I0914 18:08:45.111564   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHUsername
	I0914 18:08:45.111582   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHKeyPath
	I0914 18:08:45.111716   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHUsername
	I0914 18:08:45.111758   62996 sshutil.go:53] new ssh client: &{IP:192.168.83.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/old-k8s-version-556121/id_rsa Username:docker}
	I0914 18:08:45.111829   62996 sshutil.go:53] new ssh client: &{IP:192.168.83.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/old-k8s-version-556121/id_rsa Username:docker}
	I0914 18:08:45.187315   62996 ssh_runner.go:195] Run: systemctl --version
	I0914 18:08:45.222737   62996 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0914 18:08:45.372449   62996 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0914 18:08:45.378337   62996 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0914 18:08:45.378395   62996 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0914 18:08:45.396041   62996 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0914 18:08:45.396072   62996 start.go:495] detecting cgroup driver to use...
	I0914 18:08:45.396148   62996 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0914 18:08:45.413530   62996 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0914 18:08:45.428876   62996 docker.go:217] disabling cri-docker service (if available) ...
	I0914 18:08:45.428950   62996 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0914 18:08:45.444066   62996 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0914 18:08:45.458976   62996 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0914 18:08:45.591808   62996 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0914 18:08:45.737299   62996 docker.go:233] disabling docker service ...
	I0914 18:08:45.737382   62996 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0914 18:08:45.752471   62996 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0914 18:08:45.770192   62996 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0914 18:08:45.923691   62996 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0914 18:08:46.054919   62996 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0914 18:08:46.068923   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0914 18:08:46.089366   62996 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0914 18:08:46.089441   62996 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 18:08:46.100025   62996 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0914 18:08:46.100100   62996 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 18:08:46.111015   62996 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 18:08:46.123133   62996 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 18:08:46.135582   62996 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0914 18:08:46.146937   62996 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0914 18:08:46.158542   62996 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0914 18:08:46.158618   62996 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0914 18:08:46.178181   62996 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0914 18:08:46.188291   62996 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 18:08:46.316875   62996 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0914 18:08:46.407391   62996 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0914 18:08:46.407470   62996 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0914 18:08:46.412103   62996 start.go:563] Will wait 60s for crictl version
	I0914 18:08:46.412164   62996 ssh_runner.go:195] Run: which crictl
	I0914 18:08:46.415903   62996 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0914 18:08:46.457124   62996 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0914 18:08:46.457224   62996 ssh_runner.go:195] Run: crio --version
	I0914 18:08:46.485380   62996 ssh_runner.go:195] Run: crio --version
	I0914 18:08:46.513525   62996 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0914 18:08:46.515031   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetIP
	I0914 18:08:46.517851   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 18:08:46.518301   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:25:ab", ip: ""} in network mk-old-k8s-version-556121: {Iface:virbr1 ExpiryTime:2024-09-14 19:08:35 +0000 UTC Type:0 Mac:52:54:00:76:25:ab Iaid: IPaddr:192.168.83.80 Prefix:24 Hostname:old-k8s-version-556121 Clientid:01:52:54:00:76:25:ab}
	I0914 18:08:46.518329   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined IP address 192.168.83.80 and MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 18:08:46.518560   62996 ssh_runner.go:195] Run: grep 192.168.83.1	host.minikube.internal$ /etc/hosts
	I0914 18:08:46.522559   62996 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.83.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0914 18:08:46.536122   62996 kubeadm.go:883] updating cluster {Name:old-k8s-version-556121 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19643/minikube-v1.34.0-1726281733-19643-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726281268-19643@sha256:cce8be0c1ac4e3d852132008ef1cc1dcf5b79f708d025db83f146ae65db32e8e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-556121 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.83.80 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0914 18:08:46.536233   62996 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0914 18:08:46.536272   62996 ssh_runner.go:195] Run: sudo crictl images --output json
	I0914 18:08:46.582326   62996 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0914 18:08:46.582385   62996 ssh_runner.go:195] Run: which lz4
	I0914 18:08:46.586381   62996 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0914 18:08:46.590252   62996 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0914 18:08:46.590302   62996 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0914 18:08:48.262036   62996 crio.go:462] duration metric: took 1.6757003s to copy over tarball
	I0914 18:08:48.262113   62996 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0914 18:08:51.259991   62996 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.997823346s)
	I0914 18:08:51.260027   62996 crio.go:469] duration metric: took 2.997963105s to extract the tarball
	I0914 18:08:51.260037   62996 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0914 18:08:51.303210   62996 ssh_runner.go:195] Run: sudo crictl images --output json
	I0914 18:08:51.337655   62996 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0914 18:08:51.337685   62996 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0914 18:08:51.337793   62996 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0914 18:08:51.337910   62996 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0914 18:08:51.337941   62996 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0914 18:08:51.337950   62996 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0914 18:08:51.337800   62996 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0914 18:08:51.337803   62996 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0914 18:08:51.337791   62996 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 18:08:51.337823   62996 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0914 18:08:51.339846   62996 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0914 18:08:51.339855   62996 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0914 18:08:51.339875   62996 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0914 18:08:51.339865   62996 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 18:08:51.339901   62996 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0914 18:08:51.339935   62996 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0914 18:08:51.339958   62996 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0914 18:08:51.339949   62996 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0914 18:08:51.528665   62996 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0914 18:08:51.570817   62996 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0914 18:08:51.575861   62996 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0914 18:08:51.575917   62996 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0914 18:08:51.575968   62996 ssh_runner.go:195] Run: which crictl
	I0914 18:08:51.576612   62996 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0914 18:08:51.577804   62996 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0914 18:08:51.578496   62996 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0914 18:08:51.581833   62996 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0914 18:08:51.613046   62996 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0914 18:08:51.724554   62996 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0914 18:08:51.724608   62996 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0914 18:08:51.724611   62996 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0914 18:08:51.724713   62996 ssh_runner.go:195] Run: which crictl
	I0914 18:08:51.757578   62996 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0914 18:08:51.757628   62996 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0914 18:08:51.757677   62996 ssh_runner.go:195] Run: which crictl
	I0914 18:08:51.772578   62996 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0914 18:08:51.772597   62996 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0914 18:08:51.772629   62996 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0914 18:08:51.772634   62996 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0914 18:08:51.772659   62996 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0914 18:08:51.772690   62996 ssh_runner.go:195] Run: which crictl
	I0914 18:08:51.772704   62996 ssh_runner.go:195] Run: which crictl
	I0914 18:08:51.772633   62996 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0914 18:08:51.772748   62996 ssh_runner.go:195] Run: which crictl
	I0914 18:08:51.790305   62996 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0914 18:08:51.790442   62996 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0914 18:08:51.790492   62996 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0914 18:08:51.790534   62996 ssh_runner.go:195] Run: which crictl
	I0914 18:08:51.799286   62996 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0914 18:08:51.799338   62996 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0914 18:08:51.799395   62996 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0914 18:08:51.799446   62996 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0914 18:08:51.799486   62996 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0914 18:08:51.937830   62996 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0914 18:08:51.937839   62996 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0914 18:08:51.937918   62996 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0914 18:08:51.940605   62996 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0914 18:08:51.940670   62996 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0914 18:08:51.940723   62996 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0914 18:08:51.962218   62996 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0914 18:08:52.063106   62996 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0914 18:08:52.112424   62996 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0914 18:08:52.112498   62996 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0914 18:08:52.112521   62996 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0914 18:08:52.112602   62996 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19643-8806/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0914 18:08:52.112608   62996 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0914 18:08:52.112737   62996 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0914 18:08:52.149523   62996 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19643-8806/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0914 18:08:52.230998   62996 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0914 18:08:52.231015   62996 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19643-8806/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0914 18:08:52.234715   62996 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19643-8806/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0914 18:08:52.234737   62996 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19643-8806/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0914 18:08:52.234813   62996 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19643-8806/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0914 18:08:52.268145   62996 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19643-8806/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0914 18:08:52.500688   62996 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 18:08:52.641559   62996 cache_images.go:92] duration metric: took 1.303851383s to LoadCachedImages
	W0914 18:08:52.641671   62996 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19643-8806/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19643-8806/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0: no such file or directory
	I0914 18:08:52.641690   62996 kubeadm.go:934] updating node { 192.168.83.80 8443 v1.20.0 crio true true} ...
	I0914 18:08:52.641822   62996 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-556121 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.83.80
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-556121 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0914 18:08:52.641918   62996 ssh_runner.go:195] Run: crio config
	I0914 18:08:52.691852   62996 cni.go:84] Creating CNI manager for ""
	I0914 18:08:52.691878   62996 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0914 18:08:52.691888   62996 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0914 18:08:52.691906   62996 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.83.80 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-556121 NodeName:old-k8s-version-556121 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.83.80"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.83.80 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0914 18:08:52.692037   62996 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.83.80
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-556121"
	  kubeletExtraArgs:
	    node-ip: 192.168.83.80
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.83.80"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
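The kubeadm config printed above is a single multi-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) that minikube ships to the node as /var/tmp/minikube/kubeadm.yaml.new. As a minimal sketch of how that stream could be inspected offline, assuming gopkg.in/yaml.v3 is available and the file has been copied locally as kubeadm.yaml (both assumptions for illustration, not part of the test run):

package main

import (
	"fmt"
	"io"
	"log"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	f, err := os.Open("kubeadm.yaml") // hypothetical local copy of the generated config
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	dec := yaml.NewDecoder(f)
	for {
		var doc map[string]interface{}
		if err := dec.Decode(&doc); err != nil {
			if err == io.EOF {
				break
			}
			log.Fatalf("malformed YAML document: %v", err)
		}
		// e.g. "kubeadm.k8s.io/v1beta2 ClusterConfiguration"
		fmt.Printf("%v %v\n", doc["apiVersion"], doc["kind"])
	}
}

Running this against the config above should list the four documents in order, which is a quick way to confirm the stream was not truncated in transfer.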
	I0914 18:08:52.692122   62996 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0914 18:08:52.701735   62996 binaries.go:44] Found k8s binaries, skipping transfer
	I0914 18:08:52.701810   62996 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0914 18:08:52.711224   62996 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0914 18:08:52.728991   62996 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0914 18:08:52.746689   62996 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
	I0914 18:08:52.765724   62996 ssh_runner.go:195] Run: grep 192.168.83.80	control-plane.minikube.internal$ /etc/hosts
	I0914 18:08:52.769968   62996 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.83.80	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0914 18:08:52.782728   62996 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 18:08:52.910650   62996 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0914 18:08:52.927202   62996 certs.go:68] Setting up /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/old-k8s-version-556121 for IP: 192.168.83.80
	I0914 18:08:52.927226   62996 certs.go:194] generating shared ca certs ...
	I0914 18:08:52.927247   62996 certs.go:226] acquiring lock for ca certs: {Name:mkb663a3180967f5f94f0c355b2cd55067394331 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 18:08:52.927426   62996 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19643-8806/.minikube/ca.key
	I0914 18:08:52.927478   62996 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19643-8806/.minikube/proxy-client-ca.key
	I0914 18:08:52.927488   62996 certs.go:256] generating profile certs ...
	I0914 18:08:52.927584   62996 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/old-k8s-version-556121/client.key
	I0914 18:08:52.927642   62996 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/old-k8s-version-556121/apiserver.key.faf839ab
	I0914 18:08:52.927706   62996 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/old-k8s-version-556121/proxy-client.key
	I0914 18:08:52.927873   62996 certs.go:484] found cert: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/16016.pem (1338 bytes)
	W0914 18:08:52.927906   62996 certs.go:480] ignoring /home/jenkins/minikube-integration/19643-8806/.minikube/certs/16016_empty.pem, impossibly tiny 0 bytes
	I0914 18:08:52.927916   62996 certs.go:484] found cert: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca-key.pem (1679 bytes)
	I0914 18:08:52.927938   62996 certs.go:484] found cert: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca.pem (1082 bytes)
	I0914 18:08:52.927960   62996 certs.go:484] found cert: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/cert.pem (1123 bytes)
	I0914 18:08:52.927982   62996 certs.go:484] found cert: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/key.pem (1675 bytes)
	I0914 18:08:52.928018   62996 certs.go:484] found cert: /home/jenkins/minikube-integration/19643-8806/.minikube/files/etc/ssl/certs/160162.pem (1708 bytes)
	I0914 18:08:52.928623   62996 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0914 18:08:52.991610   62996 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0914 18:08:53.017660   62996 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0914 18:08:53.044552   62996 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0914 18:08:53.073612   62996 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/old-k8s-version-556121/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0914 18:08:53.125813   62996 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/old-k8s-version-556121/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0914 18:08:53.157202   62996 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/old-k8s-version-556121/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0914 18:08:53.201480   62996 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/old-k8s-version-556121/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0914 18:08:53.226725   62996 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/files/etc/ssl/certs/160162.pem --> /usr/share/ca-certificates/160162.pem (1708 bytes)
	I0914 18:08:53.250793   62996 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0914 18:08:53.275519   62996 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/certs/16016.pem --> /usr/share/ca-certificates/16016.pem (1338 bytes)
	I0914 18:08:53.300545   62996 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0914 18:08:53.317709   62996 ssh_runner.go:195] Run: openssl version
	I0914 18:08:53.323602   62996 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/160162.pem && ln -fs /usr/share/ca-certificates/160162.pem /etc/ssl/certs/160162.pem"
	I0914 18:08:53.335011   62996 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/160162.pem
	I0914 18:08:53.339838   62996 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 14 17:01 /usr/share/ca-certificates/160162.pem
	I0914 18:08:53.339909   62996 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/160162.pem
	I0914 18:08:53.346100   62996 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/160162.pem /etc/ssl/certs/3ec20f2e.0"
	I0914 18:08:53.359186   62996 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0914 18:08:53.370507   62996 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0914 18:08:53.375153   62996 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 14 16:44 /usr/share/ca-certificates/minikubeCA.pem
	I0914 18:08:53.375223   62996 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0914 18:08:53.380939   62996 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0914 18:08:53.392163   62996 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16016.pem && ln -fs /usr/share/ca-certificates/16016.pem /etc/ssl/certs/16016.pem"
	I0914 18:08:53.404356   62996 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16016.pem
	I0914 18:08:53.409052   62996 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 14 17:01 /usr/share/ca-certificates/16016.pem
	I0914 18:08:53.409134   62996 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16016.pem
	I0914 18:08:53.415280   62996 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16016.pem /etc/ssl/certs/51391683.0"
	I0914 18:08:53.426864   62996 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0914 18:08:53.431690   62996 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0914 18:08:53.437920   62996 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0914 18:08:53.444244   62996 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0914 18:08:53.450762   62996 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0914 18:08:53.457107   62996 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0914 18:08:53.463041   62996 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
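Each openssl x509 -noout -in <cert> -checkend 86400 run above exits non-zero when the certificate expires within the next 86400 seconds (24 hours), which is what prompts minikube to regenerate it. A rough stand-alone equivalent in Go, offered only as an illustrative sketch (the certificate path is passed on the command line and is an assumption, not something the harness does):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

func main() {
	// usage: checkcert /var/lib/minikube/certs/apiserver-kubelet-client.crt
	data, err := os.ReadFile(os.Args[1])
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		log.Fatal("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	if time.Until(cert.NotAfter) < 24*time.Hour {
		fmt.Println("certificate expires within 24h; would be regenerated")
		os.Exit(1)
	}
	fmt.Printf("certificate valid until %s\n", cert.NotAfter.Format(time.RFC3339))
}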
	I0914 18:08:53.469401   62996 kubeadm.go:392] StartCluster: {Name:old-k8s-version-556121 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19643/minikube-v1.34.0-1726281733-19643-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726281268-19643@sha256:cce8be0c1ac4e3d852132008ef1cc1dcf5b79f708d025db83f146ae65db32e8e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-556121 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.83.80 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0914 18:08:53.469509   62996 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0914 18:08:53.469568   62996 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0914 18:08:53.508602   62996 cri.go:89] found id: ""
	I0914 18:08:53.508668   62996 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0914 18:08:53.518645   62996 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0914 18:08:53.518666   62996 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0914 18:08:53.518719   62996 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0914 18:08:53.530459   62996 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0914 18:08:53.531439   62996 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-556121" does not appear in /home/jenkins/minikube-integration/19643-8806/kubeconfig
	I0914 18:08:53.532109   62996 kubeconfig.go:62] /home/jenkins/minikube-integration/19643-8806/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-556121" cluster setting kubeconfig missing "old-k8s-version-556121" context setting]
	I0914 18:08:53.532952   62996 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19643-8806/kubeconfig: {Name:mk850f3e2f3c2ef1e81c795ae72e22e459e27140 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 18:08:53.611765   62996 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0914 18:08:53.622817   62996 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.83.80
	I0914 18:08:53.622854   62996 kubeadm.go:1160] stopping kube-system containers ...
	I0914 18:08:53.622866   62996 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0914 18:08:53.622919   62996 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0914 18:08:53.659041   62996 cri.go:89] found id: ""
	I0914 18:08:53.659191   62996 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0914 18:08:53.680543   62996 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0914 18:08:53.693835   62996 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0914 18:08:53.693854   62996 kubeadm.go:157] found existing configuration files:
	
	I0914 18:08:53.693907   62996 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0914 18:08:53.704221   62996 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0914 18:08:53.704300   62996 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0914 18:08:53.713947   62996 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0914 18:08:53.722981   62996 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0914 18:08:53.723056   62996 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0914 18:08:53.733059   62996 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0914 18:08:53.742233   62996 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0914 18:08:53.742305   62996 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0914 18:08:53.752182   62996 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0914 18:08:53.761890   62996 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0914 18:08:53.761965   62996 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0914 18:08:53.771448   62996 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0914 18:08:53.781385   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 18:08:53.911483   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 18:08:55.409007   62996 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.497486764s)
	I0914 18:08:55.409041   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0914 18:08:55.640260   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 18:08:55.761785   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
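The five commands above replay the standard kubeadm init phases in the order a control-plane restart needs them: certs, kubeconfig, kubelet-start, control-plane, and etcd, all against the same /var/tmp/minikube/kubeadm.yaml. A condensed sketch of that sequence as a stand-alone program (it assumes it is run as root directly on the node, whereas the harness drives the same commands over SSH):

package main

import (
	"log"
	"os/exec"
)

func main() {
	kubeadm := "/var/lib/minikube/binaries/v1.20.0/kubeadm"
	config := "/var/tmp/minikube/kubeadm.yaml"
	phases := [][]string{
		{"init", "phase", "certs", "all"},
		{"init", "phase", "kubeconfig", "all"},
		{"init", "phase", "kubelet-start"},
		{"init", "phase", "control-plane", "all"},
		{"init", "phase", "etcd", "local"},
	}
	for _, phase := range phases {
		args := append(phase, "--config", config)
		// each phase must succeed before the next one makes sense
		if out, err := exec.Command(kubeadm, args...).CombinedOutput(); err != nil {
			log.Fatalf("phase %v failed: %v\n%s", phase, err, out)
		}
	}
}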
	I0914 18:08:55.873260   62996 api_server.go:52] waiting for apiserver process to appear ...
	I0914 18:08:55.873350   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:08:56.373512   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:08:56.874440   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:08:57.374464   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:08:57.874099   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:08:58.374014   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:08:58.873763   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:08:59.373845   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:08:59.873929   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:00.373968   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:00.874316   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:01.373792   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:01.873684   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:02.373524   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:02.874399   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:03.373728   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:03.874267   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:04.374018   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:04.873685   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:05.374034   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:05.873992   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:06.374407   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:06.873737   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:07.373665   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:07.874486   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:08.374017   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:08.874365   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:09.374221   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:09.874108   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:10.373394   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:10.873498   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:11.373841   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:11.873492   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:12.374179   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:12.873586   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:13.374405   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:13.873518   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:14.374018   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:14.873905   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:15.374447   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:15.873830   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:16.373497   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:16.874326   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:17.373994   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:17.873394   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:18.373596   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:18.874350   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:19.374434   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:19.873774   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:20.374108   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:20.874167   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:21.374108   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:21.873539   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:22.374451   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:22.874481   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:23.374533   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:23.873433   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:24.374284   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:24.873466   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:25.374144   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:25.874109   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:26.374422   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:26.873444   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:27.373615   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:27.873395   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:28.373886   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:28.873510   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:29.374027   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:29.873502   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:30.373878   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:30.874351   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:31.373651   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:31.873914   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:32.373522   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:32.874439   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:33.373991   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:33.874056   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:34.373566   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:34.874140   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:35.374151   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:35.873725   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:36.373500   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:36.873617   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:37.373826   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:37.874068   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:38.373459   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:38.873666   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:39.373936   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:39.873551   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:40.374231   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:40.873955   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:41.374306   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:41.873511   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:42.373419   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:42.874077   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:43.374329   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:43.873782   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:44.373478   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:44.874120   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:45.374173   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:45.873537   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:46.373462   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:46.874196   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:47.374297   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:47.874112   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:48.373627   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:48.873473   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:49.374289   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:49.873411   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:50.374229   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:50.873429   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:51.373547   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:51.874090   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:52.373513   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:52.874222   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:53.374123   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:53.873893   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:54.373451   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:54.873583   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:55.374078   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
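The run of pgrep commands above is minikube polling roughly every 500ms for a kube-apiserver process before it gives up and falls back to collecting diagnostics. A simplified local sketch of that wait loop (the one-minute deadline and local execution are assumptions for illustration; the real harness runs pgrep on the node over SSH):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	deadline := time.Now().Add(1 * time.Minute) // assumed cutoff for this sketch
	for time.Now().Before(deadline) {
		// pgrep exits 0 only when a process matches the full-command pattern
		if exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
			fmt.Println("kube-apiserver process found")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for kube-apiserver; collecting diagnostics instead")
}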
	I0914 18:09:55.873810   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:09:55.873965   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:09:55.913981   62996 cri.go:89] found id: ""
	I0914 18:09:55.914011   62996 logs.go:276] 0 containers: []
	W0914 18:09:55.914023   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:09:55.914030   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:09:55.914091   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:09:55.948423   62996 cri.go:89] found id: ""
	I0914 18:09:55.948459   62996 logs.go:276] 0 containers: []
	W0914 18:09:55.948467   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:09:55.948472   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:09:55.948530   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:09:55.986470   62996 cri.go:89] found id: ""
	I0914 18:09:55.986507   62996 logs.go:276] 0 containers: []
	W0914 18:09:55.986520   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:09:55.986530   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:09:55.986598   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:09:56.022172   62996 cri.go:89] found id: ""
	I0914 18:09:56.022200   62996 logs.go:276] 0 containers: []
	W0914 18:09:56.022214   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:09:56.022220   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:09:56.022267   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:09:56.065503   62996 cri.go:89] found id: ""
	I0914 18:09:56.065552   62996 logs.go:276] 0 containers: []
	W0914 18:09:56.065564   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:09:56.065572   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:09:56.065632   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:09:56.101043   62996 cri.go:89] found id: ""
	I0914 18:09:56.101072   62996 logs.go:276] 0 containers: []
	W0914 18:09:56.101082   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:09:56.101089   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:09:56.101156   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:09:56.133820   62996 cri.go:89] found id: ""
	I0914 18:09:56.133852   62996 logs.go:276] 0 containers: []
	W0914 18:09:56.133864   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:09:56.133872   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:09:56.133925   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:09:56.172334   62996 cri.go:89] found id: ""
	I0914 18:09:56.172358   62996 logs.go:276] 0 containers: []
	W0914 18:09:56.172369   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:09:56.172380   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:09:56.172398   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:09:56.186476   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:09:56.186513   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:09:56.308336   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:09:56.308366   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:09:56.308388   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:09:56.386374   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:09:56.386410   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:09:56.426333   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:09:56.426360   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:09:58.978306   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:58.991093   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:09:58.991175   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:09:59.029861   62996 cri.go:89] found id: ""
	I0914 18:09:59.029890   62996 logs.go:276] 0 containers: []
	W0914 18:09:59.029899   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:09:59.029905   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:09:59.029962   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:09:59.067744   62996 cri.go:89] found id: ""
	I0914 18:09:59.067772   62996 logs.go:276] 0 containers: []
	W0914 18:09:59.067783   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:09:59.067791   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:09:59.067973   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:09:59.105666   62996 cri.go:89] found id: ""
	I0914 18:09:59.105695   62996 logs.go:276] 0 containers: []
	W0914 18:09:59.105707   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:09:59.105714   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:09:59.105796   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:09:59.153884   62996 cri.go:89] found id: ""
	I0914 18:09:59.153916   62996 logs.go:276] 0 containers: []
	W0914 18:09:59.153929   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:09:59.153937   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:09:59.154007   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:09:59.191462   62996 cri.go:89] found id: ""
	I0914 18:09:59.191492   62996 logs.go:276] 0 containers: []
	W0914 18:09:59.191503   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:09:59.191509   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:09:59.191574   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:09:59.246299   62996 cri.go:89] found id: ""
	I0914 18:09:59.246326   62996 logs.go:276] 0 containers: []
	W0914 18:09:59.246336   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:09:59.246357   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:09:59.246413   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:09:59.292821   62996 cri.go:89] found id: ""
	I0914 18:09:59.292847   62996 logs.go:276] 0 containers: []
	W0914 18:09:59.292856   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:09:59.292862   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:09:59.292918   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:09:59.334130   62996 cri.go:89] found id: ""
	I0914 18:09:59.334176   62996 logs.go:276] 0 containers: []
	W0914 18:09:59.334187   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:09:59.334198   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:09:59.334211   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:09:59.386847   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:09:59.386884   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:09:59.400163   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:09:59.400193   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:09:59.476375   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:09:59.476400   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:09:59.476416   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:09:59.554564   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:09:59.554599   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:10:02.095079   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:10:02.108933   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:10:02.109003   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:10:02.141838   62996 cri.go:89] found id: ""
	I0914 18:10:02.141861   62996 logs.go:276] 0 containers: []
	W0914 18:10:02.141869   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:10:02.141875   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:10:02.141934   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:10:02.176437   62996 cri.go:89] found id: ""
	I0914 18:10:02.176460   62996 logs.go:276] 0 containers: []
	W0914 18:10:02.176467   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:10:02.176472   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:10:02.176516   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:10:02.210341   62996 cri.go:89] found id: ""
	I0914 18:10:02.210369   62996 logs.go:276] 0 containers: []
	W0914 18:10:02.210381   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:10:02.210388   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:10:02.210434   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:10:02.243343   62996 cri.go:89] found id: ""
	I0914 18:10:02.243373   62996 logs.go:276] 0 containers: []
	W0914 18:10:02.243384   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:10:02.243391   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:10:02.243461   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:10:02.276630   62996 cri.go:89] found id: ""
	I0914 18:10:02.276657   62996 logs.go:276] 0 containers: []
	W0914 18:10:02.276668   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:10:02.276675   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:10:02.276736   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:10:02.311626   62996 cri.go:89] found id: ""
	I0914 18:10:02.311659   62996 logs.go:276] 0 containers: []
	W0914 18:10:02.311674   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:10:02.311682   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:10:02.311748   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:10:02.345868   62996 cri.go:89] found id: ""
	I0914 18:10:02.345892   62996 logs.go:276] 0 containers: []
	W0914 18:10:02.345901   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:10:02.345908   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:10:02.345966   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:10:02.380111   62996 cri.go:89] found id: ""
	I0914 18:10:02.380139   62996 logs.go:276] 0 containers: []
	W0914 18:10:02.380147   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:10:02.380156   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:10:02.380167   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:10:02.421061   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:10:02.421094   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:10:02.474596   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:10:02.474633   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:10:02.487460   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:10:02.487491   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:10:02.554178   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:10:02.554206   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:10:02.554218   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:10:05.138863   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:10:05.152233   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:10:05.152299   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:10:05.187891   62996 cri.go:89] found id: ""
	I0914 18:10:05.187918   62996 logs.go:276] 0 containers: []
	W0914 18:10:05.187929   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:10:05.187936   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:10:05.188000   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:10:05.231634   62996 cri.go:89] found id: ""
	I0914 18:10:05.231667   62996 logs.go:276] 0 containers: []
	W0914 18:10:05.231679   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:10:05.231686   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:10:05.231748   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:10:05.273445   62996 cri.go:89] found id: ""
	I0914 18:10:05.273469   62996 logs.go:276] 0 containers: []
	W0914 18:10:05.273478   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:10:05.273492   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:10:05.273551   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:10:05.308168   62996 cri.go:89] found id: ""
	I0914 18:10:05.308205   62996 logs.go:276] 0 containers: []
	W0914 18:10:05.308216   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:10:05.308224   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:10:05.308285   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:10:05.343292   62996 cri.go:89] found id: ""
	I0914 18:10:05.343325   62996 logs.go:276] 0 containers: []
	W0914 18:10:05.343336   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:10:05.343343   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:10:05.343404   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:10:05.380420   62996 cri.go:89] found id: ""
	I0914 18:10:05.380445   62996 logs.go:276] 0 containers: []
	W0914 18:10:05.380452   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:10:05.380458   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:10:05.380503   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:10:05.415585   62996 cri.go:89] found id: ""
	I0914 18:10:05.415609   62996 logs.go:276] 0 containers: []
	W0914 18:10:05.415617   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:10:05.415623   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:10:05.415687   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:10:05.457170   62996 cri.go:89] found id: ""
	I0914 18:10:05.457198   62996 logs.go:276] 0 containers: []
	W0914 18:10:05.457208   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:10:05.457219   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:10:05.457234   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:10:05.495647   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:10:05.495681   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:10:05.543775   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:10:05.543813   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:10:05.556717   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:10:05.556750   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:10:05.624690   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:10:05.624713   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:10:05.624728   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:10:08.205292   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:10:08.217720   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:10:08.217786   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:10:08.250560   62996 cri.go:89] found id: ""
	I0914 18:10:08.250590   62996 logs.go:276] 0 containers: []
	W0914 18:10:08.250598   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:10:08.250604   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:10:08.250669   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:10:08.282085   62996 cri.go:89] found id: ""
	I0914 18:10:08.282115   62996 logs.go:276] 0 containers: []
	W0914 18:10:08.282123   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:10:08.282129   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:10:08.282202   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:10:08.314350   62996 cri.go:89] found id: ""
	I0914 18:10:08.314379   62996 logs.go:276] 0 containers: []
	W0914 18:10:08.314391   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:10:08.314398   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:10:08.314461   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:10:08.347672   62996 cri.go:89] found id: ""
	I0914 18:10:08.347703   62996 logs.go:276] 0 containers: []
	W0914 18:10:08.347714   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:10:08.347721   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:10:08.347780   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:10:08.385583   62996 cri.go:89] found id: ""
	I0914 18:10:08.385616   62996 logs.go:276] 0 containers: []
	W0914 18:10:08.385628   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:10:08.385636   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:10:08.385717   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:10:08.421135   62996 cri.go:89] found id: ""
	I0914 18:10:08.421166   62996 logs.go:276] 0 containers: []
	W0914 18:10:08.421176   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:10:08.421184   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:10:08.421242   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:10:08.456784   62996 cri.go:89] found id: ""
	I0914 18:10:08.456811   62996 logs.go:276] 0 containers: []
	W0914 18:10:08.456821   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:10:08.456828   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:10:08.456890   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:10:08.491658   62996 cri.go:89] found id: ""
	I0914 18:10:08.491690   62996 logs.go:276] 0 containers: []
	W0914 18:10:08.491698   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:10:08.491707   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:10:08.491718   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:10:08.544008   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:10:08.544045   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:10:08.557780   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:10:08.557813   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:10:08.631319   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:10:08.631354   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:10:08.631371   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:10:08.709845   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:10:08.709882   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:10:11.248034   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:10:11.261403   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:10:11.261471   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:10:11.294260   62996 cri.go:89] found id: ""
	I0914 18:10:11.294287   62996 logs.go:276] 0 containers: []
	W0914 18:10:11.294298   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:10:11.294305   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:10:11.294376   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:10:11.326784   62996 cri.go:89] found id: ""
	I0914 18:10:11.326811   62996 logs.go:276] 0 containers: []
	W0914 18:10:11.326822   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:10:11.326829   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:10:11.326878   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:10:11.359209   62996 cri.go:89] found id: ""
	I0914 18:10:11.359234   62996 logs.go:276] 0 containers: []
	W0914 18:10:11.359242   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:10:11.359247   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:10:11.359316   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:10:11.393800   62996 cri.go:89] found id: ""
	I0914 18:10:11.393828   62996 logs.go:276] 0 containers: []
	W0914 18:10:11.393836   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:10:11.393842   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:10:11.393889   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:10:11.425772   62996 cri.go:89] found id: ""
	I0914 18:10:11.425798   62996 logs.go:276] 0 containers: []
	W0914 18:10:11.425808   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:10:11.425815   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:10:11.425877   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:10:11.464139   62996 cri.go:89] found id: ""
	I0914 18:10:11.464165   62996 logs.go:276] 0 containers: []
	W0914 18:10:11.464174   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:10:11.464180   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:10:11.464230   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:10:11.498822   62996 cri.go:89] found id: ""
	I0914 18:10:11.498848   62996 logs.go:276] 0 containers: []
	W0914 18:10:11.498859   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:10:11.498869   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:10:11.498925   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:10:11.532591   62996 cri.go:89] found id: ""
	I0914 18:10:11.532623   62996 logs.go:276] 0 containers: []
	W0914 18:10:11.532634   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:10:11.532646   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:10:11.532660   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:10:11.608873   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:10:11.608892   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:10:11.608903   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:10:11.684622   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:10:11.684663   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:10:11.726639   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:10:11.726667   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:10:11.780380   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:10:11.780415   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:10:14.294514   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:10:14.308716   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:10:14.308779   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:10:14.348399   62996 cri.go:89] found id: ""
	I0914 18:10:14.348423   62996 logs.go:276] 0 containers: []
	W0914 18:10:14.348431   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:10:14.348437   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:10:14.348485   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:10:14.387040   62996 cri.go:89] found id: ""
	I0914 18:10:14.387071   62996 logs.go:276] 0 containers: []
	W0914 18:10:14.387082   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:10:14.387088   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:10:14.387144   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:10:14.424704   62996 cri.go:89] found id: ""
	I0914 18:10:14.424733   62996 logs.go:276] 0 containers: []
	W0914 18:10:14.424741   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:10:14.424746   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:10:14.424793   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:10:14.464395   62996 cri.go:89] found id: ""
	I0914 18:10:14.464431   62996 logs.go:276] 0 containers: []
	W0914 18:10:14.464442   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:10:14.464450   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:10:14.464511   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:10:14.495895   62996 cri.go:89] found id: ""
	I0914 18:10:14.495921   62996 logs.go:276] 0 containers: []
	W0914 18:10:14.495931   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:10:14.495938   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:10:14.496001   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:10:14.532877   62996 cri.go:89] found id: ""
	I0914 18:10:14.532904   62996 logs.go:276] 0 containers: []
	W0914 18:10:14.532914   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:10:14.532921   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:10:14.532987   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:10:14.568381   62996 cri.go:89] found id: ""
	I0914 18:10:14.568408   62996 logs.go:276] 0 containers: []
	W0914 18:10:14.568423   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:10:14.568430   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:10:14.568491   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:10:14.603867   62996 cri.go:89] found id: ""
	I0914 18:10:14.603897   62996 logs.go:276] 0 containers: []
	W0914 18:10:14.603908   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:10:14.603917   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:10:14.603933   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:10:14.616681   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:10:14.616705   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:10:14.687817   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:10:14.687852   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:10:14.687866   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:10:14.761672   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:10:14.761714   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:10:14.802676   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:10:14.802705   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:10:17.353218   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:10:17.366139   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:10:17.366224   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:10:17.404478   62996 cri.go:89] found id: ""
	I0914 18:10:17.404511   62996 logs.go:276] 0 containers: []
	W0914 18:10:17.404522   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:10:17.404530   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:10:17.404608   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:10:17.437553   62996 cri.go:89] found id: ""
	I0914 18:10:17.437579   62996 logs.go:276] 0 containers: []
	W0914 18:10:17.437588   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:10:17.437593   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:10:17.437648   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:10:17.473815   62996 cri.go:89] found id: ""
	I0914 18:10:17.473842   62996 logs.go:276] 0 containers: []
	W0914 18:10:17.473850   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:10:17.473855   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:10:17.473919   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:10:17.518593   62996 cri.go:89] found id: ""
	I0914 18:10:17.518617   62996 logs.go:276] 0 containers: []
	W0914 18:10:17.518625   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:10:17.518631   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:10:17.518679   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:10:17.554631   62996 cri.go:89] found id: ""
	I0914 18:10:17.554663   62996 logs.go:276] 0 containers: []
	W0914 18:10:17.554675   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:10:17.554682   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:10:17.554742   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:10:17.591485   62996 cri.go:89] found id: ""
	I0914 18:10:17.591512   62996 logs.go:276] 0 containers: []
	W0914 18:10:17.591520   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:10:17.591525   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:10:17.591582   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:10:17.629883   62996 cri.go:89] found id: ""
	I0914 18:10:17.629910   62996 logs.go:276] 0 containers: []
	W0914 18:10:17.629918   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:10:17.629925   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:10:17.629973   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:10:17.670639   62996 cri.go:89] found id: ""
	I0914 18:10:17.670666   62996 logs.go:276] 0 containers: []
	W0914 18:10:17.670677   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:10:17.670688   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:10:17.670700   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:10:17.725056   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:10:17.725095   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:10:17.738236   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:10:17.738267   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:10:17.812931   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:10:17.812963   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:10:17.812978   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:10:17.896394   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:10:17.896426   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:10:20.434465   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:10:20.448801   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:10:20.448878   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:10:20.482909   62996 cri.go:89] found id: ""
	I0914 18:10:20.482937   62996 logs.go:276] 0 containers: []
	W0914 18:10:20.482949   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:10:20.482956   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:10:20.483017   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:10:20.516865   62996 cri.go:89] found id: ""
	I0914 18:10:20.516888   62996 logs.go:276] 0 containers: []
	W0914 18:10:20.516896   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:10:20.516902   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:10:20.516961   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:10:20.556131   62996 cri.go:89] found id: ""
	I0914 18:10:20.556164   62996 logs.go:276] 0 containers: []
	W0914 18:10:20.556174   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:10:20.556182   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:10:20.556246   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:10:20.594755   62996 cri.go:89] found id: ""
	I0914 18:10:20.594779   62996 logs.go:276] 0 containers: []
	W0914 18:10:20.594787   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:10:20.594795   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:10:20.594841   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:10:20.630259   62996 cri.go:89] found id: ""
	I0914 18:10:20.630290   62996 logs.go:276] 0 containers: []
	W0914 18:10:20.630300   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:10:20.630307   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:10:20.630379   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:10:20.667721   62996 cri.go:89] found id: ""
	I0914 18:10:20.667754   62996 logs.go:276] 0 containers: []
	W0914 18:10:20.667763   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:10:20.667769   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:10:20.667826   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:10:20.706358   62996 cri.go:89] found id: ""
	I0914 18:10:20.706387   62996 logs.go:276] 0 containers: []
	W0914 18:10:20.706396   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:10:20.706401   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:10:20.706462   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:10:20.738514   62996 cri.go:89] found id: ""
	I0914 18:10:20.738541   62996 logs.go:276] 0 containers: []
	W0914 18:10:20.738549   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:10:20.738557   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:10:20.738576   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:10:20.775075   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:10:20.775105   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:10:20.825988   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:10:20.826026   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:10:20.839157   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:10:20.839194   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:10:20.915730   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:10:20.915750   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:10:20.915762   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:10:23.497427   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:10:23.511559   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:10:23.511633   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:10:23.546913   62996 cri.go:89] found id: ""
	I0914 18:10:23.546945   62996 logs.go:276] 0 containers: []
	W0914 18:10:23.546958   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:10:23.546969   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:10:23.547034   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:10:23.584438   62996 cri.go:89] found id: ""
	I0914 18:10:23.584457   62996 logs.go:276] 0 containers: []
	W0914 18:10:23.584463   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:10:23.584469   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:10:23.584517   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:10:23.618777   62996 cri.go:89] found id: ""
	I0914 18:10:23.618804   62996 logs.go:276] 0 containers: []
	W0914 18:10:23.618812   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:10:23.618817   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:10:23.618876   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:10:23.652197   62996 cri.go:89] found id: ""
	I0914 18:10:23.652225   62996 logs.go:276] 0 containers: []
	W0914 18:10:23.652236   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:10:23.652244   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:10:23.652304   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:10:23.687678   62996 cri.go:89] found id: ""
	I0914 18:10:23.687712   62996 logs.go:276] 0 containers: []
	W0914 18:10:23.687725   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:10:23.687733   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:10:23.687790   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:10:23.720884   62996 cri.go:89] found id: ""
	I0914 18:10:23.720918   62996 logs.go:276] 0 containers: []
	W0914 18:10:23.720929   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:10:23.720936   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:10:23.721004   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:10:23.753335   62996 cri.go:89] found id: ""
	I0914 18:10:23.753365   62996 logs.go:276] 0 containers: []
	W0914 18:10:23.753376   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:10:23.753384   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:10:23.753431   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:10:23.787177   62996 cri.go:89] found id: ""
	I0914 18:10:23.787209   62996 logs.go:276] 0 containers: []
	W0914 18:10:23.787230   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:10:23.787241   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:10:23.787254   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:10:23.864763   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:10:23.864802   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:10:23.903394   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:10:23.903424   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:10:23.952696   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:10:23.952734   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:10:23.967115   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:10:23.967142   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:10:24.035394   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:10:26.536361   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:10:26.550666   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:10:26.550746   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:10:26.588940   62996 cri.go:89] found id: ""
	I0914 18:10:26.588974   62996 logs.go:276] 0 containers: []
	W0914 18:10:26.588988   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:10:26.588997   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:10:26.589064   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:10:26.627475   62996 cri.go:89] found id: ""
	I0914 18:10:26.627523   62996 logs.go:276] 0 containers: []
	W0914 18:10:26.627537   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:10:26.627546   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:10:26.627619   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:10:26.664995   62996 cri.go:89] found id: ""
	I0914 18:10:26.665021   62996 logs.go:276] 0 containers: []
	W0914 18:10:26.665029   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:10:26.665034   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:10:26.665087   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:10:26.699195   62996 cri.go:89] found id: ""
	I0914 18:10:26.699223   62996 logs.go:276] 0 containers: []
	W0914 18:10:26.699234   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:10:26.699241   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:10:26.699300   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:10:26.735746   62996 cri.go:89] found id: ""
	I0914 18:10:26.735781   62996 logs.go:276] 0 containers: []
	W0914 18:10:26.735790   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:10:26.735795   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:10:26.735857   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:10:26.772220   62996 cri.go:89] found id: ""
	I0914 18:10:26.772251   62996 logs.go:276] 0 containers: []
	W0914 18:10:26.772261   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:10:26.772270   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:10:26.772331   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:10:26.808301   62996 cri.go:89] found id: ""
	I0914 18:10:26.808330   62996 logs.go:276] 0 containers: []
	W0914 18:10:26.808339   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:10:26.808346   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:10:26.808412   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:10:26.844824   62996 cri.go:89] found id: ""
	I0914 18:10:26.844858   62996 logs.go:276] 0 containers: []
	W0914 18:10:26.844870   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:10:26.844880   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:10:26.844916   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:10:26.899960   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:10:26.899999   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:10:26.914413   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:10:26.914438   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:10:26.990599   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:10:26.990620   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:10:26.990632   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:10:27.067822   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:10:27.067872   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:10:29.610959   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:10:29.625456   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:10:29.625517   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:10:29.662963   62996 cri.go:89] found id: ""
	I0914 18:10:29.662990   62996 logs.go:276] 0 containers: []
	W0914 18:10:29.663002   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:10:29.663009   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:10:29.663078   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:10:29.702141   62996 cri.go:89] found id: ""
	I0914 18:10:29.702189   62996 logs.go:276] 0 containers: []
	W0914 18:10:29.702201   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:10:29.702208   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:10:29.702265   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:10:29.737559   62996 cri.go:89] found id: ""
	I0914 18:10:29.737584   62996 logs.go:276] 0 containers: []
	W0914 18:10:29.737592   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:10:29.737598   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:10:29.737644   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:10:29.773544   62996 cri.go:89] found id: ""
	I0914 18:10:29.773570   62996 logs.go:276] 0 containers: []
	W0914 18:10:29.773578   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:10:29.773586   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:10:29.773639   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:10:29.815355   62996 cri.go:89] found id: ""
	I0914 18:10:29.815401   62996 logs.go:276] 0 containers: []
	W0914 18:10:29.815414   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:10:29.815422   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:10:29.815490   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:10:29.855729   62996 cri.go:89] found id: ""
	I0914 18:10:29.855755   62996 logs.go:276] 0 containers: []
	W0914 18:10:29.855765   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:10:29.855772   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:10:29.855835   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:10:29.894023   62996 cri.go:89] found id: ""
	I0914 18:10:29.894048   62996 logs.go:276] 0 containers: []
	W0914 18:10:29.894056   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:10:29.894063   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:10:29.894120   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:10:29.928873   62996 cri.go:89] found id: ""
	I0914 18:10:29.928900   62996 logs.go:276] 0 containers: []
	W0914 18:10:29.928910   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:10:29.928921   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:10:29.928937   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:10:30.005879   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:10:30.005904   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:10:30.005917   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:10:30.087160   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:10:30.087196   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:10:30.126027   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:10:30.126058   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:10:30.178901   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:10:30.178941   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:10:32.692789   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:10:32.708884   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:10:32.708942   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:10:32.744684   62996 cri.go:89] found id: ""
	I0914 18:10:32.744711   62996 logs.go:276] 0 containers: []
	W0914 18:10:32.744722   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:10:32.744729   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:10:32.744789   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:10:32.778311   62996 cri.go:89] found id: ""
	I0914 18:10:32.778345   62996 logs.go:276] 0 containers: []
	W0914 18:10:32.778355   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:10:32.778362   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:10:32.778421   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:10:32.820122   62996 cri.go:89] found id: ""
	I0914 18:10:32.820150   62996 logs.go:276] 0 containers: []
	W0914 18:10:32.820158   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:10:32.820163   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:10:32.820213   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:10:32.856507   62996 cri.go:89] found id: ""
	I0914 18:10:32.856541   62996 logs.go:276] 0 containers: []
	W0914 18:10:32.856552   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:10:32.856559   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:10:32.856622   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:10:32.891891   62996 cri.go:89] found id: ""
	I0914 18:10:32.891922   62996 logs.go:276] 0 containers: []
	W0914 18:10:32.891934   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:10:32.891942   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:10:32.892001   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:10:32.936666   62996 cri.go:89] found id: ""
	I0914 18:10:32.936696   62996 logs.go:276] 0 containers: []
	W0914 18:10:32.936708   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:10:32.936715   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:10:32.936783   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:10:32.972287   62996 cri.go:89] found id: ""
	I0914 18:10:32.972321   62996 logs.go:276] 0 containers: []
	W0914 18:10:32.972333   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:10:32.972341   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:10:32.972406   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:10:33.028398   62996 cri.go:89] found id: ""
	I0914 18:10:33.028423   62996 logs.go:276] 0 containers: []
	W0914 18:10:33.028430   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:10:33.028438   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:10:33.028447   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:10:33.041604   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:10:33.041631   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:10:33.116278   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:10:33.116310   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:10:33.116325   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:10:33.194720   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:10:33.194755   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:10:33.235741   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:10:33.235778   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:10:35.787601   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:10:35.801819   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:10:35.801895   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:10:35.837381   62996 cri.go:89] found id: ""
	I0914 18:10:35.837409   62996 logs.go:276] 0 containers: []
	W0914 18:10:35.837417   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:10:35.837423   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:10:35.837473   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:10:35.872876   62996 cri.go:89] found id: ""
	I0914 18:10:35.872907   62996 logs.go:276] 0 containers: []
	W0914 18:10:35.872915   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:10:35.872921   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:10:35.872972   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:10:35.908885   62996 cri.go:89] found id: ""
	I0914 18:10:35.908912   62996 logs.go:276] 0 containers: []
	W0914 18:10:35.908927   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:10:35.908932   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:10:35.908991   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:10:35.943358   62996 cri.go:89] found id: ""
	I0914 18:10:35.943386   62996 logs.go:276] 0 containers: []
	W0914 18:10:35.943395   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:10:35.943400   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:10:35.943450   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:10:35.978387   62996 cri.go:89] found id: ""
	I0914 18:10:35.978416   62996 logs.go:276] 0 containers: []
	W0914 18:10:35.978427   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:10:35.978434   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:10:35.978486   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:10:36.012836   62996 cri.go:89] found id: ""
	I0914 18:10:36.012863   62996 logs.go:276] 0 containers: []
	W0914 18:10:36.012874   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:10:36.012881   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:10:36.012931   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:10:36.048243   62996 cri.go:89] found id: ""
	I0914 18:10:36.048272   62996 logs.go:276] 0 containers: []
	W0914 18:10:36.048283   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:10:36.048290   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:10:36.048378   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:10:36.089415   62996 cri.go:89] found id: ""
	I0914 18:10:36.089449   62996 logs.go:276] 0 containers: []
	W0914 18:10:36.089460   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:10:36.089471   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:10:36.089484   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:10:36.141287   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:10:36.141324   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:10:36.154418   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:10:36.154444   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:10:36.228454   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:10:36.228483   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:10:36.228500   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:10:36.302020   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:10:36.302063   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:10:38.841946   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:10:38.855010   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:10:38.855072   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:10:38.890835   62996 cri.go:89] found id: ""
	I0914 18:10:38.890867   62996 logs.go:276] 0 containers: []
	W0914 18:10:38.890878   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:10:38.890886   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:10:38.890945   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:10:38.924675   62996 cri.go:89] found id: ""
	I0914 18:10:38.924700   62996 logs.go:276] 0 containers: []
	W0914 18:10:38.924708   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:10:38.924713   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:10:38.924761   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:10:38.959999   62996 cri.go:89] found id: ""
	I0914 18:10:38.960024   62996 logs.go:276] 0 containers: []
	W0914 18:10:38.960032   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:10:38.960038   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:10:38.960097   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:10:38.995718   62996 cri.go:89] found id: ""
	I0914 18:10:38.995747   62996 logs.go:276] 0 containers: []
	W0914 18:10:38.995755   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:10:38.995761   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:10:38.995810   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:10:39.031178   62996 cri.go:89] found id: ""
	I0914 18:10:39.031208   62996 logs.go:276] 0 containers: []
	W0914 18:10:39.031224   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:10:39.031232   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:10:39.031292   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:10:39.065511   62996 cri.go:89] found id: ""
	I0914 18:10:39.065540   62996 logs.go:276] 0 containers: []
	W0914 18:10:39.065560   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:10:39.065569   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:10:39.065628   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:10:39.103625   62996 cri.go:89] found id: ""
	I0914 18:10:39.103655   62996 logs.go:276] 0 containers: []
	W0914 18:10:39.103671   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:10:39.103678   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:10:39.103772   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:10:39.140140   62996 cri.go:89] found id: ""
	I0914 18:10:39.140169   62996 logs.go:276] 0 containers: []
	W0914 18:10:39.140179   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:10:39.140189   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:10:39.140205   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:10:39.154953   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:10:39.154980   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:10:39.226745   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:10:39.226778   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:10:39.226794   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:10:39.305268   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:10:39.305310   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:10:39.345363   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:10:39.345389   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:10:41.897635   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:10:41.910895   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:10:41.910962   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:10:41.946302   62996 cri.go:89] found id: ""
	I0914 18:10:41.946327   62996 logs.go:276] 0 containers: []
	W0914 18:10:41.946338   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:10:41.946345   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:10:41.946405   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:10:41.983180   62996 cri.go:89] found id: ""
	I0914 18:10:41.983210   62996 logs.go:276] 0 containers: []
	W0914 18:10:41.983221   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:10:41.983231   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:10:41.983294   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:10:42.017923   62996 cri.go:89] found id: ""
	I0914 18:10:42.017946   62996 logs.go:276] 0 containers: []
	W0914 18:10:42.017954   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:10:42.017959   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:10:42.018006   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:10:42.052086   62996 cri.go:89] found id: ""
	I0914 18:10:42.052122   62996 logs.go:276] 0 containers: []
	W0914 18:10:42.052133   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:10:42.052140   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:10:42.052206   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:10:42.092000   62996 cri.go:89] found id: ""
	I0914 18:10:42.092029   62996 logs.go:276] 0 containers: []
	W0914 18:10:42.092040   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:10:42.092048   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:10:42.092114   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:10:42.130402   62996 cri.go:89] found id: ""
	I0914 18:10:42.130436   62996 logs.go:276] 0 containers: []
	W0914 18:10:42.130447   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:10:42.130455   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:10:42.130505   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:10:42.166614   62996 cri.go:89] found id: ""
	I0914 18:10:42.166639   62996 logs.go:276] 0 containers: []
	W0914 18:10:42.166647   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:10:42.166653   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:10:42.166704   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:10:42.199763   62996 cri.go:89] found id: ""
	I0914 18:10:42.199795   62996 logs.go:276] 0 containers: []
	W0914 18:10:42.199808   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:10:42.199820   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:10:42.199835   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:10:42.251564   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:10:42.251597   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:10:42.264771   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:10:42.264806   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:10:42.335441   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:10:42.335465   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:10:42.335489   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:10:42.417678   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:10:42.417715   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:10:44.956372   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:10:44.970643   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:10:44.970717   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:10:45.011625   62996 cri.go:89] found id: ""
	I0914 18:10:45.011659   62996 logs.go:276] 0 containers: []
	W0914 18:10:45.011671   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:10:45.011678   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:10:45.011738   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:10:45.047489   62996 cri.go:89] found id: ""
	I0914 18:10:45.047515   62996 logs.go:276] 0 containers: []
	W0914 18:10:45.047526   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:10:45.047541   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:10:45.047610   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:10:45.084909   62996 cri.go:89] found id: ""
	I0914 18:10:45.084935   62996 logs.go:276] 0 containers: []
	W0914 18:10:45.084957   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:10:45.084964   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:10:45.085035   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:10:45.120074   62996 cri.go:89] found id: ""
	I0914 18:10:45.120104   62996 logs.go:276] 0 containers: []
	W0914 18:10:45.120115   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:10:45.120123   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:10:45.120181   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:10:45.164010   62996 cri.go:89] found id: ""
	I0914 18:10:45.164039   62996 logs.go:276] 0 containers: []
	W0914 18:10:45.164050   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:10:45.164058   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:10:45.164128   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:10:45.209565   62996 cri.go:89] found id: ""
	I0914 18:10:45.209590   62996 logs.go:276] 0 containers: []
	W0914 18:10:45.209598   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:10:45.209604   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:10:45.209651   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:10:45.265484   62996 cri.go:89] found id: ""
	I0914 18:10:45.265513   62996 logs.go:276] 0 containers: []
	W0914 18:10:45.265521   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:10:45.265527   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:10:45.265593   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:10:45.300671   62996 cri.go:89] found id: ""
	I0914 18:10:45.300700   62996 logs.go:276] 0 containers: []
	W0914 18:10:45.300711   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:10:45.300722   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:10:45.300739   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:10:45.352657   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:10:45.352699   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:10:45.366347   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:10:45.366381   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:10:45.442993   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:10:45.443013   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:10:45.443024   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:10:45.523475   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:10:45.523522   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:10:48.062222   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:10:48.075764   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:10:48.075832   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:10:48.111836   62996 cri.go:89] found id: ""
	I0914 18:10:48.111864   62996 logs.go:276] 0 containers: []
	W0914 18:10:48.111876   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:10:48.111884   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:10:48.111942   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:10:48.144440   62996 cri.go:89] found id: ""
	I0914 18:10:48.144471   62996 logs.go:276] 0 containers: []
	W0914 18:10:48.144483   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:10:48.144490   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:10:48.144553   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:10:48.179694   62996 cri.go:89] found id: ""
	I0914 18:10:48.179724   62996 logs.go:276] 0 containers: []
	W0914 18:10:48.179732   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:10:48.179738   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:10:48.179799   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:10:48.217290   62996 cri.go:89] found id: ""
	I0914 18:10:48.217320   62996 logs.go:276] 0 containers: []
	W0914 18:10:48.217331   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:10:48.217337   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:10:48.217384   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:10:48.252071   62996 cri.go:89] found id: ""
	I0914 18:10:48.252098   62996 logs.go:276] 0 containers: []
	W0914 18:10:48.252105   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:10:48.252111   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:10:48.252172   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:10:48.285372   62996 cri.go:89] found id: ""
	I0914 18:10:48.285399   62996 logs.go:276] 0 containers: []
	W0914 18:10:48.285407   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:10:48.285414   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:10:48.285461   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:10:48.318015   62996 cri.go:89] found id: ""
	I0914 18:10:48.318040   62996 logs.go:276] 0 containers: []
	W0914 18:10:48.318048   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:10:48.318054   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:10:48.318099   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:10:48.350976   62996 cri.go:89] found id: ""
	I0914 18:10:48.351006   62996 logs.go:276] 0 containers: []
	W0914 18:10:48.351018   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:10:48.351027   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:10:48.351040   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:10:48.364707   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:10:48.364731   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:10:48.436438   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:10:48.436472   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:10:48.436488   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:10:48.517132   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:10:48.517165   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:10:48.555153   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:10:48.555182   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:10:51.108066   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:10:51.121176   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:10:51.121254   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:10:51.155641   62996 cri.go:89] found id: ""
	I0914 18:10:51.155675   62996 logs.go:276] 0 containers: []
	W0914 18:10:51.155687   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:10:51.155693   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:10:51.155744   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:10:51.189642   62996 cri.go:89] found id: ""
	I0914 18:10:51.189677   62996 logs.go:276] 0 containers: []
	W0914 18:10:51.189691   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:10:51.189698   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:10:51.189763   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:10:51.223337   62996 cri.go:89] found id: ""
	I0914 18:10:51.223365   62996 logs.go:276] 0 containers: []
	W0914 18:10:51.223375   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:10:51.223383   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:10:51.223446   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:10:51.259524   62996 cri.go:89] found id: ""
	I0914 18:10:51.259549   62996 logs.go:276] 0 containers: []
	W0914 18:10:51.259557   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:10:51.259568   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:10:51.259625   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:10:51.295307   62996 cri.go:89] found id: ""
	I0914 18:10:51.295336   62996 logs.go:276] 0 containers: []
	W0914 18:10:51.295347   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:10:51.295354   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:10:51.295419   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:10:51.330619   62996 cri.go:89] found id: ""
	I0914 18:10:51.330658   62996 logs.go:276] 0 containers: []
	W0914 18:10:51.330670   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:10:51.330677   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:10:51.330741   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:10:51.365146   62996 cri.go:89] found id: ""
	I0914 18:10:51.365178   62996 logs.go:276] 0 containers: []
	W0914 18:10:51.365191   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:10:51.365200   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:10:51.365263   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:10:51.403295   62996 cri.go:89] found id: ""
	I0914 18:10:51.403330   62996 logs.go:276] 0 containers: []
	W0914 18:10:51.403342   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:10:51.403353   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:10:51.403369   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:10:51.467426   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:10:51.467452   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:10:51.467471   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:10:51.552003   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:10:51.552037   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:10:51.591888   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:10:51.591921   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:10:51.645437   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:10:51.645472   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:10:54.160542   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:10:54.173965   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:10:54.174040   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:10:54.209242   62996 cri.go:89] found id: ""
	I0914 18:10:54.209270   62996 logs.go:276] 0 containers: []
	W0914 18:10:54.209281   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:10:54.209288   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:10:54.209365   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:10:54.242345   62996 cri.go:89] found id: ""
	I0914 18:10:54.242374   62996 logs.go:276] 0 containers: []
	W0914 18:10:54.242384   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:10:54.242392   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:10:54.242453   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:10:54.278677   62996 cri.go:89] found id: ""
	I0914 18:10:54.278707   62996 logs.go:276] 0 containers: []
	W0914 18:10:54.278718   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:10:54.278725   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:10:54.278793   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:10:54.314802   62996 cri.go:89] found id: ""
	I0914 18:10:54.314831   62996 logs.go:276] 0 containers: []
	W0914 18:10:54.314842   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:10:54.314849   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:10:54.314920   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:10:54.349075   62996 cri.go:89] found id: ""
	I0914 18:10:54.349100   62996 logs.go:276] 0 containers: []
	W0914 18:10:54.349120   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:10:54.349127   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:10:54.349189   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:10:54.382337   62996 cri.go:89] found id: ""
	I0914 18:10:54.382363   62996 logs.go:276] 0 containers: []
	W0914 18:10:54.382371   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:10:54.382376   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:10:54.382423   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:10:54.416613   62996 cri.go:89] found id: ""
	I0914 18:10:54.416640   62996 logs.go:276] 0 containers: []
	W0914 18:10:54.416649   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:10:54.416654   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:10:54.416701   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:10:54.449563   62996 cri.go:89] found id: ""
	I0914 18:10:54.449596   62996 logs.go:276] 0 containers: []
	W0914 18:10:54.449606   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:10:54.449617   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:10:54.449631   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:10:54.487454   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:10:54.487489   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:10:54.541679   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:10:54.541720   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:10:54.555267   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:10:54.555299   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:10:54.630280   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:10:54.630313   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:10:54.630323   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:10:57.215606   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:10:57.228469   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:10:57.228550   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:10:57.260643   62996 cri.go:89] found id: ""
	I0914 18:10:57.260675   62996 logs.go:276] 0 containers: []
	W0914 18:10:57.260684   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:10:57.260690   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:10:57.260750   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:10:57.294125   62996 cri.go:89] found id: ""
	I0914 18:10:57.294174   62996 logs.go:276] 0 containers: []
	W0914 18:10:57.294186   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:10:57.294196   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:10:57.294259   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:10:57.328078   62996 cri.go:89] found id: ""
	I0914 18:10:57.328101   62996 logs.go:276] 0 containers: []
	W0914 18:10:57.328108   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:10:57.328114   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:10:57.328173   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:10:57.362451   62996 cri.go:89] found id: ""
	I0914 18:10:57.362476   62996 logs.go:276] 0 containers: []
	W0914 18:10:57.362483   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:10:57.362489   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:10:57.362556   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:10:57.398273   62996 cri.go:89] found id: ""
	I0914 18:10:57.398298   62996 logs.go:276] 0 containers: []
	W0914 18:10:57.398306   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:10:57.398311   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:10:57.398374   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:10:57.431112   62996 cri.go:89] found id: ""
	I0914 18:10:57.431137   62996 logs.go:276] 0 containers: []
	W0914 18:10:57.431145   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:10:57.431151   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:10:57.431197   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:10:57.464930   62996 cri.go:89] found id: ""
	I0914 18:10:57.464956   62996 logs.go:276] 0 containers: []
	W0914 18:10:57.464966   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:10:57.464973   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:10:57.465033   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:10:57.501233   62996 cri.go:89] found id: ""
	I0914 18:10:57.501263   62996 logs.go:276] 0 containers: []
	W0914 18:10:57.501276   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:10:57.501287   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:10:57.501302   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:10:57.550798   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:10:57.550836   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:10:57.564238   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:10:57.564263   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:10:57.634387   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:10:57.634414   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:10:57.634424   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:10:57.714218   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:10:57.714253   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:11:00.251944   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:11:00.264817   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:11:00.264881   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:11:00.306613   62996 cri.go:89] found id: ""
	I0914 18:11:00.306641   62996 logs.go:276] 0 containers: []
	W0914 18:11:00.306651   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:11:00.306658   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:11:00.306717   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:11:00.340297   62996 cri.go:89] found id: ""
	I0914 18:11:00.340327   62996 logs.go:276] 0 containers: []
	W0914 18:11:00.340338   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:11:00.340346   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:11:00.340404   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:11:00.373553   62996 cri.go:89] found id: ""
	I0914 18:11:00.373594   62996 logs.go:276] 0 containers: []
	W0914 18:11:00.373603   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:11:00.373609   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:11:00.373657   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:11:00.407351   62996 cri.go:89] found id: ""
	I0914 18:11:00.407381   62996 logs.go:276] 0 containers: []
	W0914 18:11:00.407392   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:11:00.407400   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:11:00.407461   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:11:00.440976   62996 cri.go:89] found id: ""
	I0914 18:11:00.441005   62996 logs.go:276] 0 containers: []
	W0914 18:11:00.441016   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:11:00.441024   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:11:00.441085   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:11:00.478138   62996 cri.go:89] found id: ""
	I0914 18:11:00.478180   62996 logs.go:276] 0 containers: []
	W0914 18:11:00.478193   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:11:00.478201   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:11:00.478264   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:11:00.513861   62996 cri.go:89] found id: ""
	I0914 18:11:00.513885   62996 logs.go:276] 0 containers: []
	W0914 18:11:00.513897   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:11:00.513905   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:11:00.513955   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:11:00.547295   62996 cri.go:89] found id: ""
	I0914 18:11:00.547338   62996 logs.go:276] 0 containers: []
	W0914 18:11:00.547348   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:11:00.547357   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:11:00.547367   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:11:00.598108   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:11:00.598146   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:11:00.611751   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:11:00.611778   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:11:00.688767   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:11:00.688788   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:11:00.688803   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:11:00.771892   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:11:00.771929   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:11:03.310816   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:11:03.323773   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:11:03.323838   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:11:03.357873   62996 cri.go:89] found id: ""
	I0914 18:11:03.357910   62996 logs.go:276] 0 containers: []
	W0914 18:11:03.357922   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:11:03.357934   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:11:03.357995   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:11:03.394978   62996 cri.go:89] found id: ""
	I0914 18:11:03.395012   62996 logs.go:276] 0 containers: []
	W0914 18:11:03.395024   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:11:03.395032   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:11:03.395098   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:11:03.429699   62996 cri.go:89] found id: ""
	I0914 18:11:03.429725   62996 logs.go:276] 0 containers: []
	W0914 18:11:03.429734   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:11:03.429740   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:11:03.429794   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:11:03.462616   62996 cri.go:89] found id: ""
	I0914 18:11:03.462648   62996 logs.go:276] 0 containers: []
	W0914 18:11:03.462660   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:11:03.462692   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:11:03.462759   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:11:03.496464   62996 cri.go:89] found id: ""
	I0914 18:11:03.496495   62996 logs.go:276] 0 containers: []
	W0914 18:11:03.496506   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:11:03.496513   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:11:03.496573   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:11:03.529655   62996 cri.go:89] found id: ""
	I0914 18:11:03.529687   62996 logs.go:276] 0 containers: []
	W0914 18:11:03.529697   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:11:03.529704   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:11:03.529767   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:11:03.563025   62996 cri.go:89] found id: ""
	I0914 18:11:03.563055   62996 logs.go:276] 0 containers: []
	W0914 18:11:03.563064   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:11:03.563069   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:11:03.563123   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:11:03.604066   62996 cri.go:89] found id: ""
	I0914 18:11:03.604088   62996 logs.go:276] 0 containers: []
	W0914 18:11:03.604095   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:11:03.604103   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:11:03.604114   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:11:03.656607   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:11:03.656647   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:11:03.669974   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:11:03.670004   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:11:03.742295   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:11:03.742324   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:11:03.742343   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:11:03.817527   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:11:03.817566   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:11:06.355023   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:11:06.368376   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:11:06.368445   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:11:06.403876   62996 cri.go:89] found id: ""
	I0914 18:11:06.403904   62996 logs.go:276] 0 containers: []
	W0914 18:11:06.403916   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:11:06.403924   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:11:06.403997   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:11:06.438187   62996 cri.go:89] found id: ""
	I0914 18:11:06.438217   62996 logs.go:276] 0 containers: []
	W0914 18:11:06.438229   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:11:06.438236   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:11:06.438302   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:11:06.477599   62996 cri.go:89] found id: ""
	I0914 18:11:06.477628   62996 logs.go:276] 0 containers: []
	W0914 18:11:06.477639   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:11:06.477646   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:11:06.477718   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:11:06.514878   62996 cri.go:89] found id: ""
	I0914 18:11:06.514905   62996 logs.go:276] 0 containers: []
	W0914 18:11:06.514914   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:11:06.514920   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:11:06.514979   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:11:06.552228   62996 cri.go:89] found id: ""
	I0914 18:11:06.552260   62996 logs.go:276] 0 containers: []
	W0914 18:11:06.552272   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:11:06.552279   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:11:06.552346   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:11:06.594600   62996 cri.go:89] found id: ""
	I0914 18:11:06.594630   62996 logs.go:276] 0 containers: []
	W0914 18:11:06.594641   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:11:06.594649   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:11:06.594713   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:11:06.630977   62996 cri.go:89] found id: ""
	I0914 18:11:06.631017   62996 logs.go:276] 0 containers: []
	W0914 18:11:06.631029   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:11:06.631036   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:11:06.631095   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:11:06.666717   62996 cri.go:89] found id: ""
	I0914 18:11:06.666749   62996 logs.go:276] 0 containers: []
	W0914 18:11:06.666760   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:11:06.666771   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:11:06.666784   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:11:06.720438   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:11:06.720474   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:11:06.734264   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:11:06.734299   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:11:06.802999   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:11:06.803020   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:11:06.803039   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:11:06.881422   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:11:06.881462   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:11:09.420948   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:11:09.435498   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:11:09.435582   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:11:09.470441   62996 cri.go:89] found id: ""
	I0914 18:11:09.470473   62996 logs.go:276] 0 containers: []
	W0914 18:11:09.470485   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:11:09.470493   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:11:09.470568   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:11:09.506101   62996 cri.go:89] found id: ""
	I0914 18:11:09.506124   62996 logs.go:276] 0 containers: []
	W0914 18:11:09.506142   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:11:09.506147   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:11:09.506227   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:11:09.541518   62996 cri.go:89] found id: ""
	I0914 18:11:09.541545   62996 logs.go:276] 0 containers: []
	W0914 18:11:09.541553   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:11:09.541558   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:11:09.541618   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:11:09.582697   62996 cri.go:89] found id: ""
	I0914 18:11:09.582725   62996 logs.go:276] 0 containers: []
	W0914 18:11:09.582735   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:11:09.582743   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:11:09.582805   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:11:09.621060   62996 cri.go:89] found id: ""
	I0914 18:11:09.621088   62996 logs.go:276] 0 containers: []
	W0914 18:11:09.621097   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:11:09.621102   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:11:09.621161   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:11:09.657967   62996 cri.go:89] found id: ""
	I0914 18:11:09.657994   62996 logs.go:276] 0 containers: []
	W0914 18:11:09.658003   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:11:09.658008   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:11:09.658060   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:11:09.693397   62996 cri.go:89] found id: ""
	I0914 18:11:09.693432   62996 logs.go:276] 0 containers: []
	W0914 18:11:09.693444   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:11:09.693451   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:11:09.693505   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:11:09.730819   62996 cri.go:89] found id: ""
	I0914 18:11:09.730850   62996 logs.go:276] 0 containers: []
	W0914 18:11:09.730860   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:11:09.730871   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:11:09.730887   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:11:09.745106   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:11:09.745146   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:11:09.817032   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:11:09.817059   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:11:09.817085   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:11:09.897335   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:11:09.897383   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:11:09.939036   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:11:09.939081   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:11:12.493075   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:11:12.506832   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:11:12.506889   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:11:12.545417   62996 cri.go:89] found id: ""
	I0914 18:11:12.545448   62996 logs.go:276] 0 containers: []
	W0914 18:11:12.545456   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:11:12.545464   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:11:12.545516   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:11:12.580346   62996 cri.go:89] found id: ""
	I0914 18:11:12.580379   62996 logs.go:276] 0 containers: []
	W0914 18:11:12.580389   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:11:12.580397   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:11:12.580457   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:11:12.616540   62996 cri.go:89] found id: ""
	I0914 18:11:12.616570   62996 logs.go:276] 0 containers: []
	W0914 18:11:12.616577   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:11:12.616586   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:11:12.616644   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:11:12.649673   62996 cri.go:89] found id: ""
	I0914 18:11:12.649700   62996 logs.go:276] 0 containers: []
	W0914 18:11:12.649709   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:11:12.649714   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:11:12.649767   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:11:12.683840   62996 cri.go:89] found id: ""
	I0914 18:11:12.683868   62996 logs.go:276] 0 containers: []
	W0914 18:11:12.683879   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:11:12.683886   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:11:12.683946   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:11:12.716862   62996 cri.go:89] found id: ""
	I0914 18:11:12.716889   62996 logs.go:276] 0 containers: []
	W0914 18:11:12.716897   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:11:12.716903   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:11:12.716952   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:11:12.751364   62996 cri.go:89] found id: ""
	I0914 18:11:12.751395   62996 logs.go:276] 0 containers: []
	W0914 18:11:12.751406   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:11:12.751414   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:11:12.751471   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:11:12.786425   62996 cri.go:89] found id: ""
	I0914 18:11:12.786457   62996 logs.go:276] 0 containers: []
	W0914 18:11:12.786468   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:11:12.786477   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:11:12.786487   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:11:12.853890   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:11:12.853920   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:11:12.853936   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:11:12.938058   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:11:12.938107   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:11:12.985406   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:11:12.985441   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:11:13.039040   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:11:13.039077   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:11:15.554110   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:11:15.567977   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:11:15.568051   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:11:15.604851   62996 cri.go:89] found id: ""
	I0914 18:11:15.604879   62996 logs.go:276] 0 containers: []
	W0914 18:11:15.604887   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:11:15.604892   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:11:15.604945   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:11:15.641180   62996 cri.go:89] found id: ""
	I0914 18:11:15.641209   62996 logs.go:276] 0 containers: []
	W0914 18:11:15.641221   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:11:15.641229   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:11:15.641324   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:11:15.680284   62996 cri.go:89] found id: ""
	I0914 18:11:15.680310   62996 logs.go:276] 0 containers: []
	W0914 18:11:15.680327   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:11:15.680334   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:11:15.680395   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:11:15.718118   62996 cri.go:89] found id: ""
	I0914 18:11:15.718152   62996 logs.go:276] 0 containers: []
	W0914 18:11:15.718173   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:11:15.718181   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:11:15.718237   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:11:15.753998   62996 cri.go:89] found id: ""
	I0914 18:11:15.754020   62996 logs.go:276] 0 containers: []
	W0914 18:11:15.754028   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:11:15.754033   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:11:15.754081   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:11:15.790026   62996 cri.go:89] found id: ""
	I0914 18:11:15.790066   62996 logs.go:276] 0 containers: []
	W0914 18:11:15.790084   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:11:15.790093   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:11:15.790179   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:11:15.828050   62996 cri.go:89] found id: ""
	I0914 18:11:15.828078   62996 logs.go:276] 0 containers: []
	W0914 18:11:15.828086   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:11:15.828094   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:11:15.828162   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:11:15.861289   62996 cri.go:89] found id: ""
	I0914 18:11:15.861322   62996 logs.go:276] 0 containers: []
	W0914 18:11:15.861330   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:11:15.861338   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:11:15.861348   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:11:15.875023   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:11:15.875054   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:11:15.943002   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:11:15.943025   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:11:15.943038   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:11:16.027747   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:11:16.027785   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:11:16.067097   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:11:16.067133   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:11:18.621376   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:11:18.634005   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:11:18.634093   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:11:18.667089   62996 cri.go:89] found id: ""
	I0914 18:11:18.667118   62996 logs.go:276] 0 containers: []
	W0914 18:11:18.667127   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:11:18.667132   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:11:18.667184   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:11:18.700518   62996 cri.go:89] found id: ""
	I0914 18:11:18.700547   62996 logs.go:276] 0 containers: []
	W0914 18:11:18.700563   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:11:18.700571   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:11:18.700643   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:11:18.733724   62996 cri.go:89] found id: ""
	I0914 18:11:18.733755   62996 logs.go:276] 0 containers: []
	W0914 18:11:18.733767   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:11:18.733778   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:11:18.733840   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:11:18.768696   62996 cri.go:89] found id: ""
	I0914 18:11:18.768739   62996 logs.go:276] 0 containers: []
	W0914 18:11:18.768750   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:11:18.768757   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:11:18.768816   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:11:18.803603   62996 cri.go:89] found id: ""
	I0914 18:11:18.803636   62996 logs.go:276] 0 containers: []
	W0914 18:11:18.803647   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:11:18.803653   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:11:18.803707   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:11:18.837019   62996 cri.go:89] found id: ""
	I0914 18:11:18.837044   62996 logs.go:276] 0 containers: []
	W0914 18:11:18.837052   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:11:18.837058   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:11:18.837107   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:11:18.871470   62996 cri.go:89] found id: ""
	I0914 18:11:18.871496   62996 logs.go:276] 0 containers: []
	W0914 18:11:18.871504   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:11:18.871515   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:11:18.871573   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:11:18.904439   62996 cri.go:89] found id: ""
	I0914 18:11:18.904474   62996 logs.go:276] 0 containers: []
	W0914 18:11:18.904485   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:11:18.904494   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:11:18.904504   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:11:18.978025   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:11:18.978065   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:11:19.031667   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:11:19.031709   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:11:19.083360   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:11:19.083398   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:11:19.097770   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:11:19.097796   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:11:19.167712   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:11:21.668470   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:11:21.681917   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:11:21.681994   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:11:21.717243   62996 cri.go:89] found id: ""
	I0914 18:11:21.717272   62996 logs.go:276] 0 containers: []
	W0914 18:11:21.717281   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:11:21.717286   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:11:21.717341   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:11:21.748801   62996 cri.go:89] found id: ""
	I0914 18:11:21.748853   62996 logs.go:276] 0 containers: []
	W0914 18:11:21.748863   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:11:21.748871   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:11:21.748930   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:11:21.785146   62996 cri.go:89] found id: ""
	I0914 18:11:21.785171   62996 logs.go:276] 0 containers: []
	W0914 18:11:21.785180   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:11:21.785185   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:11:21.785242   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:11:21.819949   62996 cri.go:89] found id: ""
	I0914 18:11:21.819977   62996 logs.go:276] 0 containers: []
	W0914 18:11:21.819984   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:11:21.819990   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:11:21.820039   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:11:21.852418   62996 cri.go:89] found id: ""
	I0914 18:11:21.852451   62996 logs.go:276] 0 containers: []
	W0914 18:11:21.852461   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:11:21.852468   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:11:21.852535   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:11:21.890170   62996 cri.go:89] found id: ""
	I0914 18:11:21.890205   62996 logs.go:276] 0 containers: []
	W0914 18:11:21.890216   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:11:21.890223   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:11:21.890283   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:11:21.924386   62996 cri.go:89] found id: ""
	I0914 18:11:21.924420   62996 logs.go:276] 0 containers: []
	W0914 18:11:21.924432   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:11:21.924439   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:11:21.924505   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:11:21.960302   62996 cri.go:89] found id: ""
	I0914 18:11:21.960328   62996 logs.go:276] 0 containers: []
	W0914 18:11:21.960337   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:11:21.960346   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:11:21.960360   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:11:22.038804   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:11:22.038839   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:11:22.082411   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:11:22.082444   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:11:22.134306   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:11:22.134339   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:11:22.147891   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:11:22.147919   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:11:22.216582   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:11:24.716879   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:11:24.729436   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:11:24.729506   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:11:24.782796   62996 cri.go:89] found id: ""
	I0914 18:11:24.782822   62996 logs.go:276] 0 containers: []
	W0914 18:11:24.782833   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:11:24.782842   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:11:24.782897   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:11:24.819075   62996 cri.go:89] found id: ""
	I0914 18:11:24.819101   62996 logs.go:276] 0 containers: []
	W0914 18:11:24.819108   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:11:24.819113   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:11:24.819157   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:11:24.852976   62996 cri.go:89] found id: ""
	I0914 18:11:24.853003   62996 logs.go:276] 0 containers: []
	W0914 18:11:24.853013   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:11:24.853020   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:11:24.853083   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:11:24.888010   62996 cri.go:89] found id: ""
	I0914 18:11:24.888041   62996 logs.go:276] 0 containers: []
	W0914 18:11:24.888053   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:11:24.888061   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:11:24.888140   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:11:24.923467   62996 cri.go:89] found id: ""
	I0914 18:11:24.923500   62996 logs.go:276] 0 containers: []
	W0914 18:11:24.923514   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:11:24.923522   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:11:24.923575   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:11:24.961976   62996 cri.go:89] found id: ""
	I0914 18:11:24.962003   62996 logs.go:276] 0 containers: []
	W0914 18:11:24.962011   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:11:24.962018   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:11:24.962079   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:11:24.995831   62996 cri.go:89] found id: ""
	I0914 18:11:24.995854   62996 logs.go:276] 0 containers: []
	W0914 18:11:24.995862   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:11:24.995868   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:11:24.995929   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:11:25.034793   62996 cri.go:89] found id: ""
	I0914 18:11:25.034822   62996 logs.go:276] 0 containers: []
	W0914 18:11:25.034832   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:11:25.034840   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:11:25.034855   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:11:25.048500   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:11:25.048531   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:11:25.120313   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:11:25.120346   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:11:25.120361   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:11:25.200361   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:11:25.200395   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:11:25.238898   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:11:25.238928   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:11:27.791098   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:11:27.803729   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:11:27.803785   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:11:27.840688   62996 cri.go:89] found id: ""
	I0914 18:11:27.840711   62996 logs.go:276] 0 containers: []
	W0914 18:11:27.840719   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:11:27.840725   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:11:27.840775   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:11:27.874108   62996 cri.go:89] found id: ""
	I0914 18:11:27.874140   62996 logs.go:276] 0 containers: []
	W0914 18:11:27.874151   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:11:27.874176   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:11:27.874241   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:11:27.909352   62996 cri.go:89] found id: ""
	I0914 18:11:27.909392   62996 logs.go:276] 0 containers: []
	W0914 18:11:27.909403   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:11:27.909410   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:11:27.909460   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:11:27.942751   62996 cri.go:89] found id: ""
	I0914 18:11:27.942777   62996 logs.go:276] 0 containers: []
	W0914 18:11:27.942786   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:11:27.942791   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:11:27.942852   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:11:27.977714   62996 cri.go:89] found id: ""
	I0914 18:11:27.977745   62996 logs.go:276] 0 containers: []
	W0914 18:11:27.977756   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:11:27.977764   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:11:27.977830   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:11:28.013681   62996 cri.go:89] found id: ""
	I0914 18:11:28.013711   62996 logs.go:276] 0 containers: []
	W0914 18:11:28.013722   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:11:28.013730   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:11:28.013791   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:11:28.047112   62996 cri.go:89] found id: ""
	I0914 18:11:28.047138   62996 logs.go:276] 0 containers: []
	W0914 18:11:28.047146   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:11:28.047152   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:11:28.047199   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:11:28.084290   62996 cri.go:89] found id: ""
	I0914 18:11:28.084317   62996 logs.go:276] 0 containers: []
	W0914 18:11:28.084331   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:11:28.084340   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:11:28.084351   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:11:28.097720   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:11:28.097756   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:11:28.172054   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:11:28.172074   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:11:28.172085   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:11:28.253611   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:11:28.253644   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:11:28.289904   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:11:28.289938   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:11:30.839215   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:11:30.851580   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:11:30.851654   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:11:30.891232   62996 cri.go:89] found id: ""
	I0914 18:11:30.891261   62996 logs.go:276] 0 containers: []
	W0914 18:11:30.891272   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:11:30.891279   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:11:30.891346   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:11:30.930144   62996 cri.go:89] found id: ""
	I0914 18:11:30.930187   62996 logs.go:276] 0 containers: []
	W0914 18:11:30.930197   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:11:30.930204   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:11:30.930265   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:11:30.965034   62996 cri.go:89] found id: ""
	I0914 18:11:30.965068   62996 logs.go:276] 0 containers: []
	W0914 18:11:30.965080   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:11:30.965087   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:11:30.965150   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:11:30.998927   62996 cri.go:89] found id: ""
	I0914 18:11:30.998955   62996 logs.go:276] 0 containers: []
	W0914 18:11:30.998966   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:11:30.998974   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:11:30.999039   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:11:31.033789   62996 cri.go:89] found id: ""
	I0914 18:11:31.033820   62996 logs.go:276] 0 containers: []
	W0914 18:11:31.033830   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:11:31.033838   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:11:31.033892   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:11:31.068988   62996 cri.go:89] found id: ""
	I0914 18:11:31.069020   62996 logs.go:276] 0 containers: []
	W0914 18:11:31.069029   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:11:31.069035   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:11:31.069085   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:11:31.105904   62996 cri.go:89] found id: ""
	I0914 18:11:31.105932   62996 logs.go:276] 0 containers: []
	W0914 18:11:31.105944   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:11:31.105951   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:11:31.106018   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:11:31.147560   62996 cri.go:89] found id: ""
	I0914 18:11:31.147593   62996 logs.go:276] 0 containers: []
	W0914 18:11:31.147606   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:11:31.147618   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:11:31.147633   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:11:31.237347   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:11:31.237373   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:11:31.237389   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:11:31.322978   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:11:31.323012   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:11:31.361464   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:11:31.361495   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:11:31.417255   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:11:31.417299   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:11:33.930962   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:11:33.944431   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:11:33.944514   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:11:33.979727   62996 cri.go:89] found id: ""
	I0914 18:11:33.979761   62996 logs.go:276] 0 containers: []
	W0914 18:11:33.979772   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:11:33.979779   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:11:33.979840   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:11:34.015069   62996 cri.go:89] found id: ""
	I0914 18:11:34.015100   62996 logs.go:276] 0 containers: []
	W0914 18:11:34.015111   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:11:34.015117   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:11:34.015168   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:11:34.049230   62996 cri.go:89] found id: ""
	I0914 18:11:34.049262   62996 logs.go:276] 0 containers: []
	W0914 18:11:34.049274   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:11:34.049282   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:11:34.049345   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:11:34.086175   62996 cri.go:89] found id: ""
	I0914 18:11:34.086205   62996 logs.go:276] 0 containers: []
	W0914 18:11:34.086216   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:11:34.086225   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:11:34.086286   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:11:34.123534   62996 cri.go:89] found id: ""
	I0914 18:11:34.123563   62996 logs.go:276] 0 containers: []
	W0914 18:11:34.123573   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:11:34.123581   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:11:34.123645   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:11:34.160782   62996 cri.go:89] found id: ""
	I0914 18:11:34.160812   62996 logs.go:276] 0 containers: []
	W0914 18:11:34.160822   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:11:34.160830   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:11:34.160891   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:11:34.193240   62996 cri.go:89] found id: ""
	I0914 18:11:34.193264   62996 logs.go:276] 0 containers: []
	W0914 18:11:34.193272   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:11:34.193278   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:11:34.193336   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:11:34.232788   62996 cri.go:89] found id: ""
	I0914 18:11:34.232816   62996 logs.go:276] 0 containers: []
	W0914 18:11:34.232827   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:11:34.232838   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:11:34.232851   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:11:34.284953   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:11:34.284993   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:11:34.299462   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:11:34.299491   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:11:34.370596   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:11:34.370623   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:11:34.370638   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:11:34.450082   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:11:34.450118   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:11:36.991625   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:11:37.009170   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:11:37.009229   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:11:37.044035   62996 cri.go:89] found id: ""
	I0914 18:11:37.044058   62996 logs.go:276] 0 containers: []
	W0914 18:11:37.044066   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:11:37.044072   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:11:37.044126   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:11:37.076288   62996 cri.go:89] found id: ""
	I0914 18:11:37.076318   62996 logs.go:276] 0 containers: []
	W0914 18:11:37.076328   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:11:37.076336   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:11:37.076399   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:11:37.110509   62996 cri.go:89] found id: ""
	I0914 18:11:37.110533   62996 logs.go:276] 0 containers: []
	W0914 18:11:37.110541   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:11:37.110553   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:11:37.110603   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:11:37.143688   62996 cri.go:89] found id: ""
	I0914 18:11:37.143713   62996 logs.go:276] 0 containers: []
	W0914 18:11:37.143721   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:11:37.143726   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:11:37.143781   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:11:37.180802   62996 cri.go:89] found id: ""
	I0914 18:11:37.180828   62996 logs.go:276] 0 containers: []
	W0914 18:11:37.180839   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:11:37.180846   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:11:37.180907   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:11:37.214590   62996 cri.go:89] found id: ""
	I0914 18:11:37.214615   62996 logs.go:276] 0 containers: []
	W0914 18:11:37.214623   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:11:37.214628   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:11:37.214674   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:11:37.246039   62996 cri.go:89] found id: ""
	I0914 18:11:37.246067   62996 logs.go:276] 0 containers: []
	W0914 18:11:37.246078   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:11:37.246085   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:11:37.246152   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:11:37.278258   62996 cri.go:89] found id: ""
	I0914 18:11:37.278299   62996 logs.go:276] 0 containers: []
	W0914 18:11:37.278307   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:11:37.278315   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:11:37.278325   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:11:37.315788   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:11:37.315817   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:11:37.367286   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:11:37.367322   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:11:37.380863   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:11:37.380894   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:11:37.447925   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:11:37.447948   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:11:37.447959   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:11:40.025419   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:11:40.038279   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:11:40.038361   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:11:40.072986   62996 cri.go:89] found id: ""
	I0914 18:11:40.073021   62996 logs.go:276] 0 containers: []
	W0914 18:11:40.073033   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:11:40.073041   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:11:40.073102   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:11:40.107636   62996 cri.go:89] found id: ""
	I0914 18:11:40.107657   62996 logs.go:276] 0 containers: []
	W0914 18:11:40.107665   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:11:40.107670   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:11:40.107723   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:11:40.145308   62996 cri.go:89] found id: ""
	I0914 18:11:40.145347   62996 logs.go:276] 0 containers: []
	W0914 18:11:40.145359   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:11:40.145366   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:11:40.145412   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:11:40.182409   62996 cri.go:89] found id: ""
	I0914 18:11:40.182439   62996 logs.go:276] 0 containers: []
	W0914 18:11:40.182449   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:11:40.182457   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:11:40.182522   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:11:40.217621   62996 cri.go:89] found id: ""
	I0914 18:11:40.217655   62996 logs.go:276] 0 containers: []
	W0914 18:11:40.217667   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:11:40.217675   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:11:40.217738   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:11:40.253159   62996 cri.go:89] found id: ""
	I0914 18:11:40.253186   62996 logs.go:276] 0 containers: []
	W0914 18:11:40.253197   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:11:40.253205   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:11:40.253263   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:11:40.286808   62996 cri.go:89] found id: ""
	I0914 18:11:40.286838   62996 logs.go:276] 0 containers: []
	W0914 18:11:40.286847   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:11:40.286855   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:11:40.286910   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:11:40.324265   62996 cri.go:89] found id: ""
	I0914 18:11:40.324292   62996 logs.go:276] 0 containers: []
	W0914 18:11:40.324299   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:11:40.324307   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:11:40.324318   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:11:40.376962   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:11:40.376996   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:11:40.390564   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:11:40.390594   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:11:40.460934   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:11:40.460956   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:11:40.460967   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:11:40.537058   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:11:40.537099   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:11:43.075401   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:11:43.088488   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:11:43.088559   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:11:43.122777   62996 cri.go:89] found id: ""
	I0914 18:11:43.122802   62996 logs.go:276] 0 containers: []
	W0914 18:11:43.122811   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:11:43.122818   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:11:43.122878   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:11:43.155343   62996 cri.go:89] found id: ""
	I0914 18:11:43.155369   62996 logs.go:276] 0 containers: []
	W0914 18:11:43.155378   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:11:43.155383   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:11:43.155443   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:11:43.190350   62996 cri.go:89] found id: ""
	I0914 18:11:43.190379   62996 logs.go:276] 0 containers: []
	W0914 18:11:43.190390   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:11:43.190398   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:11:43.190460   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:11:43.222930   62996 cri.go:89] found id: ""
	I0914 18:11:43.222961   62996 logs.go:276] 0 containers: []
	W0914 18:11:43.222972   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:11:43.222979   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:11:43.223042   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:11:43.256931   62996 cri.go:89] found id: ""
	I0914 18:11:43.256959   62996 logs.go:276] 0 containers: []
	W0914 18:11:43.256971   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:11:43.256977   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:11:43.257044   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:11:43.287691   62996 cri.go:89] found id: ""
	I0914 18:11:43.287720   62996 logs.go:276] 0 containers: []
	W0914 18:11:43.287729   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:11:43.287734   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:11:43.287790   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:11:43.320633   62996 cri.go:89] found id: ""
	I0914 18:11:43.320658   62996 logs.go:276] 0 containers: []
	W0914 18:11:43.320666   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:11:43.320677   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:11:43.320738   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:11:43.354230   62996 cri.go:89] found id: ""
	I0914 18:11:43.354269   62996 logs.go:276] 0 containers: []
	W0914 18:11:43.354280   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:11:43.354291   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:11:43.354304   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:11:43.429256   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:11:43.429293   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:11:43.467929   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:11:43.467957   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:11:43.521266   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:11:43.521305   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:11:43.536471   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:11:43.536511   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:11:43.607588   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:11:46.108756   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:11:46.121231   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:11:46.121314   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:11:46.156499   62996 cri.go:89] found id: ""
	I0914 18:11:46.156528   62996 logs.go:276] 0 containers: []
	W0914 18:11:46.156537   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:11:46.156543   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:11:46.156591   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:11:46.192161   62996 cri.go:89] found id: ""
	I0914 18:11:46.192188   62996 logs.go:276] 0 containers: []
	W0914 18:11:46.192197   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:11:46.192203   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:11:46.192263   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:11:46.222784   62996 cri.go:89] found id: ""
	I0914 18:11:46.222816   62996 logs.go:276] 0 containers: []
	W0914 18:11:46.222826   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:11:46.222834   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:11:46.222894   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:11:46.261551   62996 cri.go:89] found id: ""
	I0914 18:11:46.261577   62996 logs.go:276] 0 containers: []
	W0914 18:11:46.261587   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:11:46.261594   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:11:46.261659   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:11:46.298263   62996 cri.go:89] found id: ""
	I0914 18:11:46.298293   62996 logs.go:276] 0 containers: []
	W0914 18:11:46.298303   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:11:46.298311   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:11:46.298387   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:11:46.333477   62996 cri.go:89] found id: ""
	I0914 18:11:46.333502   62996 logs.go:276] 0 containers: []
	W0914 18:11:46.333510   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:11:46.333516   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:11:46.333581   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:11:46.367975   62996 cri.go:89] found id: ""
	I0914 18:11:46.367998   62996 logs.go:276] 0 containers: []
	W0914 18:11:46.368005   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:11:46.368011   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:11:46.368063   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:11:46.402252   62996 cri.go:89] found id: ""
	I0914 18:11:46.402281   62996 logs.go:276] 0 containers: []
	W0914 18:11:46.402293   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:11:46.402310   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:11:46.402329   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:11:46.477212   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:11:46.477252   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:11:46.515542   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:11:46.515568   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:11:46.570108   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:11:46.570146   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:11:46.585989   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:11:46.586019   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:11:46.658769   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:11:49.159920   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:11:49.172748   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:11:49.172810   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:11:49.213555   62996 cri.go:89] found id: ""
	I0914 18:11:49.213585   62996 logs.go:276] 0 containers: []
	W0914 18:11:49.213595   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:11:49.213601   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:11:49.213660   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:11:49.246022   62996 cri.go:89] found id: ""
	I0914 18:11:49.246050   62996 logs.go:276] 0 containers: []
	W0914 18:11:49.246061   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:11:49.246068   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:11:49.246132   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:11:49.279131   62996 cri.go:89] found id: ""
	I0914 18:11:49.279157   62996 logs.go:276] 0 containers: []
	W0914 18:11:49.279167   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:11:49.279175   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:11:49.279236   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:11:49.313159   62996 cri.go:89] found id: ""
	I0914 18:11:49.313187   62996 logs.go:276] 0 containers: []
	W0914 18:11:49.313199   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:11:49.313207   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:11:49.313272   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:11:49.347837   62996 cri.go:89] found id: ""
	I0914 18:11:49.347861   62996 logs.go:276] 0 containers: []
	W0914 18:11:49.347870   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:11:49.347875   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:11:49.347932   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:11:49.381478   62996 cri.go:89] found id: ""
	I0914 18:11:49.381507   62996 logs.go:276] 0 containers: []
	W0914 18:11:49.381516   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:11:49.381522   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:11:49.381577   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:11:49.417197   62996 cri.go:89] found id: ""
	I0914 18:11:49.417224   62996 logs.go:276] 0 containers: []
	W0914 18:11:49.417238   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:11:49.417244   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:11:49.417313   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:11:49.450806   62996 cri.go:89] found id: ""
	I0914 18:11:49.450843   62996 logs.go:276] 0 containers: []
	W0914 18:11:49.450857   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:11:49.450870   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:11:49.450889   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:11:49.519573   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:11:49.519620   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:11:49.519639   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:11:49.595525   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:11:49.595565   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:11:49.633229   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:11:49.633259   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:11:49.688667   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:11:49.688710   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:11:52.206555   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:11:52.218920   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:11:52.218996   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:11:52.253986   62996 cri.go:89] found id: ""
	I0914 18:11:52.254010   62996 logs.go:276] 0 containers: []
	W0914 18:11:52.254018   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:11:52.254023   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:11:52.254070   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:11:52.286590   62996 cri.go:89] found id: ""
	I0914 18:11:52.286618   62996 logs.go:276] 0 containers: []
	W0914 18:11:52.286629   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:11:52.286636   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:11:52.286698   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:11:52.325419   62996 cri.go:89] found id: ""
	I0914 18:11:52.325454   62996 logs.go:276] 0 containers: []
	W0914 18:11:52.325464   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:11:52.325471   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:11:52.325533   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:11:52.363050   62996 cri.go:89] found id: ""
	I0914 18:11:52.363079   62996 logs.go:276] 0 containers: []
	W0914 18:11:52.363091   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:11:52.363098   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:11:52.363160   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:11:52.400107   62996 cri.go:89] found id: ""
	I0914 18:11:52.400142   62996 logs.go:276] 0 containers: []
	W0914 18:11:52.400153   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:11:52.400162   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:11:52.400229   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:11:52.435711   62996 cri.go:89] found id: ""
	I0914 18:11:52.435735   62996 logs.go:276] 0 containers: []
	W0914 18:11:52.435744   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:11:52.435752   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:11:52.435806   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:11:52.470761   62996 cri.go:89] found id: ""
	I0914 18:11:52.470789   62996 logs.go:276] 0 containers: []
	W0914 18:11:52.470800   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:11:52.470808   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:11:52.470875   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:11:52.505680   62996 cri.go:89] found id: ""
	I0914 18:11:52.505705   62996 logs.go:276] 0 containers: []
	W0914 18:11:52.505714   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:11:52.505725   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:11:52.505745   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:11:52.557577   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:11:52.557616   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:11:52.571785   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:11:52.571817   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:11:52.639759   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:11:52.639790   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:11:52.639805   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:11:52.727022   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:11:52.727072   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:11:55.266381   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:11:55.279300   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:11:55.279376   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:11:55.315414   62996 cri.go:89] found id: ""
	I0914 18:11:55.315455   62996 logs.go:276] 0 containers: []
	W0914 18:11:55.315463   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:11:55.315472   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:11:55.315539   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:11:55.350153   62996 cri.go:89] found id: ""
	I0914 18:11:55.350203   62996 logs.go:276] 0 containers: []
	W0914 18:11:55.350213   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:11:55.350218   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:11:55.350296   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:11:55.387403   62996 cri.go:89] found id: ""
	I0914 18:11:55.387437   62996 logs.go:276] 0 containers: []
	W0914 18:11:55.387459   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:11:55.387467   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:11:55.387522   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:11:55.424532   62996 cri.go:89] found id: ""
	I0914 18:11:55.424558   62996 logs.go:276] 0 containers: []
	W0914 18:11:55.424566   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:11:55.424575   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:11:55.424664   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:11:55.462423   62996 cri.go:89] found id: ""
	I0914 18:11:55.462458   62996 logs.go:276] 0 containers: []
	W0914 18:11:55.462468   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:11:55.462475   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:11:55.462536   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:11:55.496865   62996 cri.go:89] found id: ""
	I0914 18:11:55.496900   62996 logs.go:276] 0 containers: []
	W0914 18:11:55.496911   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:11:55.496921   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:11:55.496986   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:11:55.531524   62996 cri.go:89] found id: ""
	I0914 18:11:55.531566   62996 logs.go:276] 0 containers: []
	W0914 18:11:55.531577   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:11:55.531598   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:11:55.531663   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:11:55.566579   62996 cri.go:89] found id: ""
	I0914 18:11:55.566606   62996 logs.go:276] 0 containers: []
	W0914 18:11:55.566615   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:11:55.566623   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:11:55.566635   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:11:55.621074   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:11:55.621122   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:11:55.635805   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:11:55.635832   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:11:55.702346   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:11:55.702373   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:11:55.702387   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:11:55.778589   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:11:55.778639   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:11:58.317118   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:11:58.330312   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:11:58.330382   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:11:58.363550   62996 cri.go:89] found id: ""
	I0914 18:11:58.363587   62996 logs.go:276] 0 containers: []
	W0914 18:11:58.363598   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:11:58.363606   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:11:58.363669   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:11:58.397152   62996 cri.go:89] found id: ""
	I0914 18:11:58.397183   62996 logs.go:276] 0 containers: []
	W0914 18:11:58.397194   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:11:58.397201   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:11:58.397259   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:11:58.435076   62996 cri.go:89] found id: ""
	I0914 18:11:58.435102   62996 logs.go:276] 0 containers: []
	W0914 18:11:58.435111   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:11:58.435116   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:11:58.435184   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:11:58.471455   62996 cri.go:89] found id: ""
	I0914 18:11:58.471479   62996 logs.go:276] 0 containers: []
	W0914 18:11:58.471487   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:11:58.471493   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:11:58.471551   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:11:58.504545   62996 cri.go:89] found id: ""
	I0914 18:11:58.504586   62996 logs.go:276] 0 containers: []
	W0914 18:11:58.504596   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:11:58.504603   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:11:58.504662   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:11:58.539335   62996 cri.go:89] found id: ""
	I0914 18:11:58.539362   62996 logs.go:276] 0 containers: []
	W0914 18:11:58.539376   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:11:58.539383   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:11:58.539431   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:11:58.579707   62996 cri.go:89] found id: ""
	I0914 18:11:58.579737   62996 logs.go:276] 0 containers: []
	W0914 18:11:58.579747   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:11:58.579755   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:11:58.579814   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:11:58.614227   62996 cri.go:89] found id: ""
	I0914 18:11:58.614250   62996 logs.go:276] 0 containers: []
	W0914 18:11:58.614259   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:11:58.614266   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:11:58.614279   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:11:58.699846   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:11:58.699888   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:11:58.738513   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:11:58.738542   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:11:58.787858   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:11:58.787895   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:11:58.801103   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:11:58.801137   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:11:58.868291   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:12:01.368810   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:12:01.381287   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:12:01.381359   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:12:01.414556   62996 cri.go:89] found id: ""
	I0914 18:12:01.414587   62996 logs.go:276] 0 containers: []
	W0914 18:12:01.414599   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:12:01.414611   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:12:01.414661   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:12:01.447765   62996 cri.go:89] found id: ""
	I0914 18:12:01.447795   62996 logs.go:276] 0 containers: []
	W0914 18:12:01.447806   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:12:01.447813   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:12:01.447875   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:12:01.481012   62996 cri.go:89] found id: ""
	I0914 18:12:01.481045   62996 logs.go:276] 0 containers: []
	W0914 18:12:01.481057   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:12:01.481065   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:12:01.481126   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:12:01.516999   62996 cri.go:89] found id: ""
	I0914 18:12:01.517024   62996 logs.go:276] 0 containers: []
	W0914 18:12:01.517031   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:12:01.517037   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:12:01.517088   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:12:01.555520   62996 cri.go:89] found id: ""
	I0914 18:12:01.555548   62996 logs.go:276] 0 containers: []
	W0914 18:12:01.555559   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:12:01.555566   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:12:01.555642   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:12:01.589581   62996 cri.go:89] found id: ""
	I0914 18:12:01.589606   62996 logs.go:276] 0 containers: []
	W0914 18:12:01.589616   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:12:01.589624   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:12:01.589691   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:12:01.623955   62996 cri.go:89] found id: ""
	I0914 18:12:01.623983   62996 logs.go:276] 0 containers: []
	W0914 18:12:01.623995   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:12:01.624002   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:12:01.624067   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:12:01.659136   62996 cri.go:89] found id: ""
	I0914 18:12:01.659166   62996 logs.go:276] 0 containers: []
	W0914 18:12:01.659177   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:12:01.659187   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:12:01.659206   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:12:01.711812   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:12:01.711849   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:12:01.724934   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:12:01.724968   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:12:01.793052   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:12:01.793079   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:12:01.793091   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:12:01.866761   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:12:01.866799   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:12:04.406435   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:12:04.419756   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:12:04.419818   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:12:04.456593   62996 cri.go:89] found id: ""
	I0914 18:12:04.456621   62996 logs.go:276] 0 containers: []
	W0914 18:12:04.456632   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:12:04.456639   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:12:04.456689   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:12:04.489281   62996 cri.go:89] found id: ""
	I0914 18:12:04.489314   62996 logs.go:276] 0 containers: []
	W0914 18:12:04.489326   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:12:04.489333   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:12:04.489399   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:12:04.525353   62996 cri.go:89] found id: ""
	I0914 18:12:04.525381   62996 logs.go:276] 0 containers: []
	W0914 18:12:04.525391   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:12:04.525398   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:12:04.525464   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:12:04.558495   62996 cri.go:89] found id: ""
	I0914 18:12:04.558520   62996 logs.go:276] 0 containers: []
	W0914 18:12:04.558531   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:12:04.558539   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:12:04.558598   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:12:04.594815   62996 cri.go:89] found id: ""
	I0914 18:12:04.594837   62996 logs.go:276] 0 containers: []
	W0914 18:12:04.594845   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:12:04.594851   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:12:04.594899   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:12:04.630198   62996 cri.go:89] found id: ""
	I0914 18:12:04.630224   62996 logs.go:276] 0 containers: []
	W0914 18:12:04.630232   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:12:04.630238   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:12:04.630294   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:12:04.665328   62996 cri.go:89] found id: ""
	I0914 18:12:04.665358   62996 logs.go:276] 0 containers: []
	W0914 18:12:04.665368   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:12:04.665373   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:12:04.665432   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:12:04.699778   62996 cri.go:89] found id: ""
	I0914 18:12:04.699801   62996 logs.go:276] 0 containers: []
	W0914 18:12:04.699809   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:12:04.699816   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:12:04.699877   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:12:04.750978   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:12:04.751022   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:12:04.764968   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:12:04.764998   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:12:04.839464   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:12:04.839494   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:12:04.839509   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:12:04.917939   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:12:04.917979   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:12:07.459389   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:12:07.472630   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:12:07.472691   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:12:07.507993   62996 cri.go:89] found id: ""
	I0914 18:12:07.508029   62996 logs.go:276] 0 containers: []
	W0914 18:12:07.508040   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:12:07.508047   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:12:07.508110   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:12:07.541083   62996 cri.go:89] found id: ""
	I0914 18:12:07.541108   62996 logs.go:276] 0 containers: []
	W0914 18:12:07.541116   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:12:07.541121   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:12:07.541184   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:12:07.574973   62996 cri.go:89] found id: ""
	I0914 18:12:07.574995   62996 logs.go:276] 0 containers: []
	W0914 18:12:07.575003   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:12:07.575008   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:12:07.575052   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:12:07.610166   62996 cri.go:89] found id: ""
	I0914 18:12:07.610189   62996 logs.go:276] 0 containers: []
	W0914 18:12:07.610196   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:12:07.610202   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:12:07.610247   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:12:07.643090   62996 cri.go:89] found id: ""
	I0914 18:12:07.643118   62996 logs.go:276] 0 containers: []
	W0914 18:12:07.643129   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:12:07.643140   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:12:07.643201   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:12:07.676788   62996 cri.go:89] found id: ""
	I0914 18:12:07.676814   62996 logs.go:276] 0 containers: []
	W0914 18:12:07.676825   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:12:07.676832   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:12:07.676895   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:12:07.714122   62996 cri.go:89] found id: ""
	I0914 18:12:07.714147   62996 logs.go:276] 0 containers: []
	W0914 18:12:07.714173   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:12:07.714179   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:12:07.714226   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:12:07.748168   62996 cri.go:89] found id: ""
	I0914 18:12:07.748193   62996 logs.go:276] 0 containers: []
	W0914 18:12:07.748204   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:12:07.748214   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:12:07.748230   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:12:07.784739   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:12:07.784766   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:12:07.833431   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:12:07.833467   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:12:07.846072   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:12:07.846100   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:12:07.912540   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:12:07.912560   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:12:07.912584   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:12:10.488543   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:12:10.502119   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:12:10.502203   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:12:10.535390   62996 cri.go:89] found id: ""
	I0914 18:12:10.535420   62996 logs.go:276] 0 containers: []
	W0914 18:12:10.535429   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:12:10.535435   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:12:10.535487   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:12:10.572013   62996 cri.go:89] found id: ""
	I0914 18:12:10.572044   62996 logs.go:276] 0 containers: []
	W0914 18:12:10.572052   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:12:10.572057   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:12:10.572105   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:12:10.613597   62996 cri.go:89] found id: ""
	I0914 18:12:10.613621   62996 logs.go:276] 0 containers: []
	W0914 18:12:10.613628   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:12:10.613634   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:12:10.613693   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:12:10.646086   62996 cri.go:89] found id: ""
	I0914 18:12:10.646116   62996 logs.go:276] 0 containers: []
	W0914 18:12:10.646127   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:12:10.646134   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:12:10.646219   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:12:10.679228   62996 cri.go:89] found id: ""
	I0914 18:12:10.679261   62996 logs.go:276] 0 containers: []
	W0914 18:12:10.679273   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:12:10.679281   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:12:10.679340   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:12:10.713321   62996 cri.go:89] found id: ""
	I0914 18:12:10.713350   62996 logs.go:276] 0 containers: []
	W0914 18:12:10.713359   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:12:10.713365   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:12:10.713413   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:12:10.757767   62996 cri.go:89] found id: ""
	I0914 18:12:10.757794   62996 logs.go:276] 0 containers: []
	W0914 18:12:10.757802   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:12:10.757809   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:12:10.757854   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:12:10.797709   62996 cri.go:89] found id: ""
	I0914 18:12:10.797731   62996 logs.go:276] 0 containers: []
	W0914 18:12:10.797739   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:12:10.797747   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:12:10.797757   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:12:10.848431   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:12:10.848474   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:12:10.862205   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:12:10.862239   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:12:10.935215   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:12:10.935242   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:12:10.935260   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:12:11.019021   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:12:11.019056   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:12:13.560773   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:12:13.574835   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:12:13.574899   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:12:13.613543   62996 cri.go:89] found id: ""
	I0914 18:12:13.613569   62996 logs.go:276] 0 containers: []
	W0914 18:12:13.613582   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:12:13.613587   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:12:13.613646   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:12:13.650721   62996 cri.go:89] found id: ""
	I0914 18:12:13.650755   62996 logs.go:276] 0 containers: []
	W0914 18:12:13.650767   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:12:13.650775   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:12:13.650836   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:12:13.684269   62996 cri.go:89] found id: ""
	I0914 18:12:13.684299   62996 logs.go:276] 0 containers: []
	W0914 18:12:13.684310   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:12:13.684317   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:12:13.684376   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:12:13.726440   62996 cri.go:89] found id: ""
	I0914 18:12:13.726474   62996 logs.go:276] 0 containers: []
	W0914 18:12:13.726486   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:12:13.726503   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:12:13.726567   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:12:13.760835   62996 cri.go:89] found id: ""
	I0914 18:12:13.760865   62996 logs.go:276] 0 containers: []
	W0914 18:12:13.760876   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:12:13.760884   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:12:13.760957   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:12:13.801341   62996 cri.go:89] found id: ""
	I0914 18:12:13.801375   62996 logs.go:276] 0 containers: []
	W0914 18:12:13.801386   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:12:13.801394   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:12:13.801456   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:12:13.834307   62996 cri.go:89] found id: ""
	I0914 18:12:13.834332   62996 logs.go:276] 0 containers: []
	W0914 18:12:13.834350   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:12:13.834357   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:12:13.834439   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:12:13.868838   62996 cri.go:89] found id: ""
	I0914 18:12:13.868871   62996 logs.go:276] 0 containers: []
	W0914 18:12:13.868880   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:12:13.868889   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:12:13.868900   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:12:13.919867   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:12:13.919906   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:12:13.933383   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:12:13.933423   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:12:14.010559   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:12:14.010592   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:12:14.010606   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:12:14.087876   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:12:14.087913   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:12:16.630473   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:12:16.643114   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:12:16.643196   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:12:16.680922   62996 cri.go:89] found id: ""
	I0914 18:12:16.680954   62996 logs.go:276] 0 containers: []
	W0914 18:12:16.680962   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:12:16.680968   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:12:16.681015   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:12:16.715549   62996 cri.go:89] found id: ""
	I0914 18:12:16.715582   62996 logs.go:276] 0 containers: []
	W0914 18:12:16.715592   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:12:16.715598   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:12:16.715666   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:12:16.753928   62996 cri.go:89] found id: ""
	I0914 18:12:16.753951   62996 logs.go:276] 0 containers: []
	W0914 18:12:16.753962   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:12:16.753969   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:12:16.754033   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:12:16.787677   62996 cri.go:89] found id: ""
	I0914 18:12:16.787705   62996 logs.go:276] 0 containers: []
	W0914 18:12:16.787716   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:12:16.787723   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:12:16.787776   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:12:16.823638   62996 cri.go:89] found id: ""
	I0914 18:12:16.823667   62996 logs.go:276] 0 containers: []
	W0914 18:12:16.823678   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:12:16.823686   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:12:16.823748   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:12:16.860204   62996 cri.go:89] found id: ""
	I0914 18:12:16.860238   62996 logs.go:276] 0 containers: []
	W0914 18:12:16.860249   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:12:16.860257   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:12:16.860329   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:12:16.898802   62996 cri.go:89] found id: ""
	I0914 18:12:16.898827   62996 logs.go:276] 0 containers: []
	W0914 18:12:16.898837   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:12:16.898854   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:12:16.898941   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:12:16.932719   62996 cri.go:89] found id: ""
	I0914 18:12:16.932745   62996 logs.go:276] 0 containers: []
	W0914 18:12:16.932753   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:12:16.932762   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:12:16.932779   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:12:16.986217   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:12:16.986257   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:12:17.003243   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:12:17.003278   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:12:17.071374   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:12:17.071397   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:12:17.071409   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:12:17.152058   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:12:17.152112   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:12:19.717782   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:12:19.731122   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:12:19.731199   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:12:19.769042   62996 cri.go:89] found id: ""
	I0914 18:12:19.769070   62996 logs.go:276] 0 containers: []
	W0914 18:12:19.769079   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:12:19.769084   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:12:19.769154   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:12:19.804666   62996 cri.go:89] found id: ""
	I0914 18:12:19.804691   62996 logs.go:276] 0 containers: []
	W0914 18:12:19.804698   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:12:19.804704   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:12:19.804761   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:12:19.838705   62996 cri.go:89] found id: ""
	I0914 18:12:19.838729   62996 logs.go:276] 0 containers: []
	W0914 18:12:19.838738   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:12:19.838744   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:12:19.838790   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:12:19.873412   62996 cri.go:89] found id: ""
	I0914 18:12:19.873441   62996 logs.go:276] 0 containers: []
	W0914 18:12:19.873449   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:12:19.873455   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:12:19.873535   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:12:19.917706   62996 cri.go:89] found id: ""
	I0914 18:12:19.917734   62996 logs.go:276] 0 containers: []
	W0914 18:12:19.917746   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:12:19.917754   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:12:19.917813   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:12:19.956149   62996 cri.go:89] found id: ""
	I0914 18:12:19.956177   62996 logs.go:276] 0 containers: []
	W0914 18:12:19.956188   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:12:19.956196   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:12:19.956255   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:12:19.988903   62996 cri.go:89] found id: ""
	I0914 18:12:19.988926   62996 logs.go:276] 0 containers: []
	W0914 18:12:19.988934   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:12:19.988939   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:12:19.988988   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:12:20.023785   62996 cri.go:89] found id: ""
	I0914 18:12:20.023814   62996 logs.go:276] 0 containers: []
	W0914 18:12:20.023823   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:12:20.023833   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:12:20.023846   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:12:20.036891   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:12:20.036918   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:12:20.112397   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:12:20.112422   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:12:20.112437   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:12:20.195767   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:12:20.195801   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:12:20.235439   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:12:20.235467   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:12:22.784765   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:12:22.799193   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:12:22.799267   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:12:22.840939   62996 cri.go:89] found id: ""
	I0914 18:12:22.840974   62996 logs.go:276] 0 containers: []
	W0914 18:12:22.840983   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:12:22.840990   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:12:22.841051   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:12:22.878920   62996 cri.go:89] found id: ""
	I0914 18:12:22.878951   62996 logs.go:276] 0 containers: []
	W0914 18:12:22.878962   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:12:22.878970   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:12:22.879021   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:12:22.926127   62996 cri.go:89] found id: ""
	I0914 18:12:22.926175   62996 logs.go:276] 0 containers: []
	W0914 18:12:22.926187   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:12:22.926195   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:12:22.926250   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:12:22.972041   62996 cri.go:89] found id: ""
	I0914 18:12:22.972068   62996 logs.go:276] 0 containers: []
	W0914 18:12:22.972076   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:12:22.972082   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:12:22.972137   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:12:23.012662   62996 cri.go:89] found id: ""
	I0914 18:12:23.012694   62996 logs.go:276] 0 containers: []
	W0914 18:12:23.012705   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:12:23.012712   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:12:23.012772   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:12:23.058923   62996 cri.go:89] found id: ""
	I0914 18:12:23.058950   62996 logs.go:276] 0 containers: []
	W0914 18:12:23.058958   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:12:23.058963   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:12:23.059011   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:12:23.098275   62996 cri.go:89] found id: ""
	I0914 18:12:23.098308   62996 logs.go:276] 0 containers: []
	W0914 18:12:23.098320   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:12:23.098327   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:12:23.098380   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:12:23.133498   62996 cri.go:89] found id: ""
	I0914 18:12:23.133525   62996 logs.go:276] 0 containers: []
	W0914 18:12:23.133534   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:12:23.133542   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:12:23.133554   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:12:23.201430   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:12:23.201456   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:12:23.201470   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:12:23.282388   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:12:23.282424   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:12:23.319896   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:12:23.319924   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:12:23.373629   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:12:23.373664   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
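	(Editor's note, not part of the captured log.) The cycle above repeats roughly every three seconds until the control-plane restart times out: minikube probes for a kube-apiserver process with pgrep, and while none is found it lists the expected control-plane containers with crictl and gathers kubelet, dmesg, describe-nodes, CRI-O and container-status output for the report. A minimal Go sketch of that pattern follows; it is illustrative only, and runSSH is a hypothetical stand-in for minikube's ssh_runner, not the real API.

	// A minimal sketch (not the minikube source) of the wait loop the log
	// lines above trace. runSSH is a hypothetical stand-in for ssh_runner.
	package main

	import (
		"fmt"
		"time"
	)

	// runSSH would execute a command on the node over SSH and return its stdout.
	func runSSH(cmd string) (string, error) {
		return "", fmt.Errorf("not connected") // placeholder for the real runner
	}

	func waitForAPIServer(timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			// Is a kube-apiserver process for this profile running yet?
			if _, err := runSSH(`sudo pgrep -xnf kube-apiserver.*minikube.*`); err == nil {
				return nil
			}
			// Not yet: record which control-plane containers exist (here: none).
			for _, name := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
				if out, _ := runSSH("sudo crictl ps -a --quiet --name=" + name); out == "" {
					fmt.Printf("No container was found matching %q\n", name)
				}
			}
			// kubelet, dmesg, CRI-O and container-status logs are gathered here,
			// then the probe is retried until the timeout expires.
			time.Sleep(3 * time.Second)
		}
		return fmt.Errorf("timed out waiting for kube-apiserver")
	}

	func main() {
		if err := waitForAPIServer(10 * time.Second); err != nil {
			fmt.Println(err)
		}
	}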
	I0914 18:12:25.887183   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:12:25.901089   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:12:25.901168   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:12:25.934112   62996 cri.go:89] found id: ""
	I0914 18:12:25.934138   62996 logs.go:276] 0 containers: []
	W0914 18:12:25.934147   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:12:25.934153   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:12:25.934210   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:12:25.969202   62996 cri.go:89] found id: ""
	I0914 18:12:25.969228   62996 logs.go:276] 0 containers: []
	W0914 18:12:25.969236   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:12:25.969242   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:12:25.969300   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:12:26.005516   62996 cri.go:89] found id: ""
	I0914 18:12:26.005537   62996 logs.go:276] 0 containers: []
	W0914 18:12:26.005545   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:12:26.005551   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:12:26.005622   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:12:26.039162   62996 cri.go:89] found id: ""
	I0914 18:12:26.039189   62996 logs.go:276] 0 containers: []
	W0914 18:12:26.039199   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:12:26.039206   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:12:26.039266   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:12:26.073626   62996 cri.go:89] found id: ""
	I0914 18:12:26.073660   62996 logs.go:276] 0 containers: []
	W0914 18:12:26.073674   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:12:26.073682   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:12:26.073752   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:12:26.112057   62996 cri.go:89] found id: ""
	I0914 18:12:26.112086   62996 logs.go:276] 0 containers: []
	W0914 18:12:26.112097   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:12:26.112104   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:12:26.112168   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:12:26.145874   62996 cri.go:89] found id: ""
	I0914 18:12:26.145903   62996 logs.go:276] 0 containers: []
	W0914 18:12:26.145915   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:12:26.145923   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:12:26.145978   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:12:26.178959   62996 cri.go:89] found id: ""
	I0914 18:12:26.178989   62996 logs.go:276] 0 containers: []
	W0914 18:12:26.178997   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:12:26.179005   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:12:26.179018   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:12:26.251132   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:12:26.251156   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:12:26.251174   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:12:26.327488   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:12:26.327528   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:12:26.368444   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:12:26.368471   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:12:26.422676   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:12:26.422715   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:12:28.936784   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:12:28.960435   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:12:28.960515   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:12:29.012679   62996 cri.go:89] found id: ""
	I0914 18:12:29.012710   62996 logs.go:276] 0 containers: []
	W0914 18:12:29.012721   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:12:29.012729   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:12:29.012786   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:12:29.045058   62996 cri.go:89] found id: ""
	I0914 18:12:29.045091   62996 logs.go:276] 0 containers: []
	W0914 18:12:29.045102   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:12:29.045115   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:12:29.045180   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:12:29.079176   62996 cri.go:89] found id: ""
	I0914 18:12:29.079202   62996 logs.go:276] 0 containers: []
	W0914 18:12:29.079209   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:12:29.079216   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:12:29.079279   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:12:29.114288   62996 cri.go:89] found id: ""
	I0914 18:12:29.114317   62996 logs.go:276] 0 containers: []
	W0914 18:12:29.114337   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:12:29.114344   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:12:29.114404   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:12:29.147554   62996 cri.go:89] found id: ""
	I0914 18:12:29.147578   62996 logs.go:276] 0 containers: []
	W0914 18:12:29.147586   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:12:29.147592   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:12:29.147653   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:12:29.181739   62996 cri.go:89] found id: ""
	I0914 18:12:29.181767   62996 logs.go:276] 0 containers: []
	W0914 18:12:29.181775   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:12:29.181781   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:12:29.181825   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:12:29.220328   62996 cri.go:89] found id: ""
	I0914 18:12:29.220356   62996 logs.go:276] 0 containers: []
	W0914 18:12:29.220364   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:12:29.220373   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:12:29.220429   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:12:29.250900   62996 cri.go:89] found id: ""
	I0914 18:12:29.250929   62996 logs.go:276] 0 containers: []
	W0914 18:12:29.250941   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:12:29.250951   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:12:29.250966   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:12:29.287790   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:12:29.287820   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:12:29.338153   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:12:29.338194   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:12:29.351520   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:12:29.351547   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:12:29.421429   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:12:29.421457   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:12:29.421471   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:12:31.997578   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:12:32.011256   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:12:32.011331   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:12:32.043761   62996 cri.go:89] found id: ""
	I0914 18:12:32.043793   62996 logs.go:276] 0 containers: []
	W0914 18:12:32.043801   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:12:32.043806   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:12:32.043859   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:12:32.076497   62996 cri.go:89] found id: ""
	I0914 18:12:32.076526   62996 logs.go:276] 0 containers: []
	W0914 18:12:32.076536   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:12:32.076543   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:12:32.076609   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:12:32.115059   62996 cri.go:89] found id: ""
	I0914 18:12:32.115084   62996 logs.go:276] 0 containers: []
	W0914 18:12:32.115094   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:12:32.115100   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:12:32.115159   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:12:32.153078   62996 cri.go:89] found id: ""
	I0914 18:12:32.153109   62996 logs.go:276] 0 containers: []
	W0914 18:12:32.153124   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:12:32.153130   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:12:32.153179   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:12:32.190539   62996 cri.go:89] found id: ""
	I0914 18:12:32.190621   62996 logs.go:276] 0 containers: []
	W0914 18:12:32.190638   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:12:32.190647   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:12:32.190700   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:12:32.231917   62996 cri.go:89] found id: ""
	I0914 18:12:32.231941   62996 logs.go:276] 0 containers: []
	W0914 18:12:32.231949   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:12:32.231955   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:12:32.232013   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:12:32.266197   62996 cri.go:89] found id: ""
	I0914 18:12:32.266227   62996 logs.go:276] 0 containers: []
	W0914 18:12:32.266238   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:12:32.266245   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:12:32.266312   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:12:32.299357   62996 cri.go:89] found id: ""
	I0914 18:12:32.299387   62996 logs.go:276] 0 containers: []
	W0914 18:12:32.299398   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:12:32.299409   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:12:32.299424   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:12:32.353225   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:12:32.353268   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:12:32.368228   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:12:32.368280   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:12:32.447802   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:12:32.447829   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:12:32.447847   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:12:32.523749   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:12:32.523788   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:12:35.063750   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:12:35.078487   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:12:35.078565   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:12:35.112949   62996 cri.go:89] found id: ""
	I0914 18:12:35.112994   62996 logs.go:276] 0 containers: []
	W0914 18:12:35.113008   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:12:35.113015   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:12:35.113068   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:12:35.146890   62996 cri.go:89] found id: ""
	I0914 18:12:35.146921   62996 logs.go:276] 0 containers: []
	W0914 18:12:35.146933   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:12:35.146941   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:12:35.147019   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:12:35.181077   62996 cri.go:89] found id: ""
	I0914 18:12:35.181106   62996 logs.go:276] 0 containers: []
	W0914 18:12:35.181116   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:12:35.181123   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:12:35.181194   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:12:35.214142   62996 cri.go:89] found id: ""
	I0914 18:12:35.214191   62996 logs.go:276] 0 containers: []
	W0914 18:12:35.214203   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:12:35.214215   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:12:35.214275   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:12:35.246615   62996 cri.go:89] found id: ""
	I0914 18:12:35.246644   62996 logs.go:276] 0 containers: []
	W0914 18:12:35.246655   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:12:35.246662   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:12:35.246722   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:12:35.278996   62996 cri.go:89] found id: ""
	I0914 18:12:35.279027   62996 logs.go:276] 0 containers: []
	W0914 18:12:35.279038   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:12:35.279047   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:12:35.279104   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:12:35.312612   62996 cri.go:89] found id: ""
	I0914 18:12:35.312641   62996 logs.go:276] 0 containers: []
	W0914 18:12:35.312650   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:12:35.312655   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:12:35.312711   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:12:35.347717   62996 cri.go:89] found id: ""
	I0914 18:12:35.347741   62996 logs.go:276] 0 containers: []
	W0914 18:12:35.347749   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:12:35.347757   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:12:35.347767   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:12:35.389062   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:12:35.389090   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:12:35.437235   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:12:35.437277   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:12:35.452236   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:12:35.452275   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:12:35.523334   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:12:35.523371   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:12:35.523396   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:12:38.105613   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:12:38.119147   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:12:38.119214   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:12:38.158373   62996 cri.go:89] found id: ""
	I0914 18:12:38.158397   62996 logs.go:276] 0 containers: []
	W0914 18:12:38.158404   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:12:38.158410   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:12:38.158467   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:12:38.192376   62996 cri.go:89] found id: ""
	I0914 18:12:38.192409   62996 logs.go:276] 0 containers: []
	W0914 18:12:38.192421   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:12:38.192429   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:12:38.192490   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:12:38.230390   62996 cri.go:89] found id: ""
	I0914 18:12:38.230413   62996 logs.go:276] 0 containers: []
	W0914 18:12:38.230422   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:12:38.230427   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:12:38.230476   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:12:38.266608   62996 cri.go:89] found id: ""
	I0914 18:12:38.266634   62996 logs.go:276] 0 containers: []
	W0914 18:12:38.266642   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:12:38.266648   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:12:38.266704   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:12:38.299437   62996 cri.go:89] found id: ""
	I0914 18:12:38.299462   62996 logs.go:276] 0 containers: []
	W0914 18:12:38.299471   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:12:38.299477   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:12:38.299548   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:12:38.331092   62996 cri.go:89] found id: ""
	I0914 18:12:38.331119   62996 logs.go:276] 0 containers: []
	W0914 18:12:38.331128   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:12:38.331135   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:12:38.331194   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:12:38.364447   62996 cri.go:89] found id: ""
	I0914 18:12:38.364475   62996 logs.go:276] 0 containers: []
	W0914 18:12:38.364485   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:12:38.364491   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:12:38.364564   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:12:38.396977   62996 cri.go:89] found id: ""
	I0914 18:12:38.397001   62996 logs.go:276] 0 containers: []
	W0914 18:12:38.397011   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:12:38.397022   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:12:38.397036   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:12:38.477413   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:12:38.477449   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:12:38.515003   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:12:38.515031   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:12:38.567177   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:12:38.567222   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:12:38.580840   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:12:38.580876   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:12:38.654520   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:12:41.154728   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:12:41.167501   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:12:41.167578   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:12:41.200209   62996 cri.go:89] found id: ""
	I0914 18:12:41.200243   62996 logs.go:276] 0 containers: []
	W0914 18:12:41.200254   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:12:41.200260   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:12:41.200309   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:12:41.232386   62996 cri.go:89] found id: ""
	I0914 18:12:41.232415   62996 logs.go:276] 0 containers: []
	W0914 18:12:41.232425   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:12:41.232432   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:12:41.232515   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:12:41.268259   62996 cri.go:89] found id: ""
	I0914 18:12:41.268285   62996 logs.go:276] 0 containers: []
	W0914 18:12:41.268295   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:12:41.268303   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:12:41.268374   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:12:41.299952   62996 cri.go:89] found id: ""
	I0914 18:12:41.299984   62996 logs.go:276] 0 containers: []
	W0914 18:12:41.299992   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:12:41.299998   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:12:41.300055   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:12:41.331851   62996 cri.go:89] found id: ""
	I0914 18:12:41.331877   62996 logs.go:276] 0 containers: []
	W0914 18:12:41.331886   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:12:41.331892   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:12:41.331941   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:12:41.373747   62996 cri.go:89] found id: ""
	I0914 18:12:41.373778   62996 logs.go:276] 0 containers: []
	W0914 18:12:41.373789   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:12:41.373797   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:12:41.373847   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:12:41.410186   62996 cri.go:89] found id: ""
	I0914 18:12:41.410217   62996 logs.go:276] 0 containers: []
	W0914 18:12:41.410228   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:12:41.410235   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:12:41.410296   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:12:41.443926   62996 cri.go:89] found id: ""
	I0914 18:12:41.443961   62996 logs.go:276] 0 containers: []
	W0914 18:12:41.443972   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:12:41.443983   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:12:41.443998   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:12:41.457188   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:12:41.457226   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:12:41.525140   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:12:41.525165   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:12:41.525179   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:12:41.603829   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:12:41.603858   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:12:41.641462   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:12:41.641495   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:12:44.194009   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:12:44.207043   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:12:44.207112   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:12:44.240082   62996 cri.go:89] found id: ""
	I0914 18:12:44.240104   62996 logs.go:276] 0 containers: []
	W0914 18:12:44.240112   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:12:44.240117   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:12:44.240177   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:12:44.271608   62996 cri.go:89] found id: ""
	I0914 18:12:44.271642   62996 logs.go:276] 0 containers: []
	W0914 18:12:44.271653   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:12:44.271660   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:12:44.271721   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:12:44.308447   62996 cri.go:89] found id: ""
	I0914 18:12:44.308475   62996 logs.go:276] 0 containers: []
	W0914 18:12:44.308484   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:12:44.308490   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:12:44.308552   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:12:44.340399   62996 cri.go:89] found id: ""
	I0914 18:12:44.340430   62996 logs.go:276] 0 containers: []
	W0914 18:12:44.340440   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:12:44.340446   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:12:44.340502   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:12:44.374078   62996 cri.go:89] found id: ""
	I0914 18:12:44.374104   62996 logs.go:276] 0 containers: []
	W0914 18:12:44.374112   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:12:44.374118   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:12:44.374190   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:12:44.408933   62996 cri.go:89] found id: ""
	I0914 18:12:44.408963   62996 logs.go:276] 0 containers: []
	W0914 18:12:44.408974   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:12:44.408982   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:12:44.409040   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:12:44.444019   62996 cri.go:89] found id: ""
	I0914 18:12:44.444046   62996 logs.go:276] 0 containers: []
	W0914 18:12:44.444063   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:12:44.444070   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:12:44.444126   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:12:44.477033   62996 cri.go:89] found id: ""
	I0914 18:12:44.477058   62996 logs.go:276] 0 containers: []
	W0914 18:12:44.477066   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:12:44.477075   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:12:44.477086   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:12:44.530118   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:12:44.530151   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:12:44.543295   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:12:44.543327   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:12:44.614448   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:12:44.614474   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:12:44.614488   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:12:44.690708   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:12:44.690744   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:12:47.229658   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:12:47.242715   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:12:47.242785   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:12:47.278275   62996 cri.go:89] found id: ""
	I0914 18:12:47.278298   62996 logs.go:276] 0 containers: []
	W0914 18:12:47.278305   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:12:47.278311   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:12:47.278365   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:12:47.313954   62996 cri.go:89] found id: ""
	I0914 18:12:47.313977   62996 logs.go:276] 0 containers: []
	W0914 18:12:47.313985   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:12:47.313991   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:12:47.314045   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:12:47.350944   62996 cri.go:89] found id: ""
	I0914 18:12:47.350972   62996 logs.go:276] 0 containers: []
	W0914 18:12:47.350983   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:12:47.350990   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:12:47.351052   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:12:47.384810   62996 cri.go:89] found id: ""
	I0914 18:12:47.384838   62996 logs.go:276] 0 containers: []
	W0914 18:12:47.384850   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:12:47.384857   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:12:47.384918   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:12:47.420380   62996 cri.go:89] found id: ""
	I0914 18:12:47.420406   62996 logs.go:276] 0 containers: []
	W0914 18:12:47.420419   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:12:47.420425   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:12:47.420476   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:12:47.453967   62996 cri.go:89] found id: ""
	I0914 18:12:47.453995   62996 logs.go:276] 0 containers: []
	W0914 18:12:47.454003   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:12:47.454009   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:12:47.454060   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:12:47.488588   62996 cri.go:89] found id: ""
	I0914 18:12:47.488616   62996 logs.go:276] 0 containers: []
	W0914 18:12:47.488627   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:12:47.488633   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:12:47.488696   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:12:47.522970   62996 cri.go:89] found id: ""
	I0914 18:12:47.523004   62996 logs.go:276] 0 containers: []
	W0914 18:12:47.523015   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:12:47.523025   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:12:47.523039   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:12:47.575977   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:12:47.576026   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:12:47.590854   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:12:47.590884   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:12:47.662149   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:12:47.662200   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:12:47.662215   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:12:47.740447   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:12:47.740482   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:12:50.279512   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:12:50.292294   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:12:50.292377   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:12:50.330928   62996 cri.go:89] found id: ""
	I0914 18:12:50.330960   62996 logs.go:276] 0 containers: []
	W0914 18:12:50.330972   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:12:50.330980   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:12:50.331036   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:12:50.363656   62996 cri.go:89] found id: ""
	I0914 18:12:50.363687   62996 logs.go:276] 0 containers: []
	W0914 18:12:50.363696   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:12:50.363702   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:12:50.363756   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:12:50.395071   62996 cri.go:89] found id: ""
	I0914 18:12:50.395096   62996 logs.go:276] 0 containers: []
	W0914 18:12:50.395107   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:12:50.395113   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:12:50.395172   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:12:50.428461   62996 cri.go:89] found id: ""
	I0914 18:12:50.428487   62996 logs.go:276] 0 containers: []
	W0914 18:12:50.428495   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:12:50.428502   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:12:50.428549   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:12:50.461059   62996 cri.go:89] found id: ""
	I0914 18:12:50.461089   62996 logs.go:276] 0 containers: []
	W0914 18:12:50.461098   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:12:50.461105   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:12:50.461155   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:12:50.495447   62996 cri.go:89] found id: ""
	I0914 18:12:50.495481   62996 logs.go:276] 0 containers: []
	W0914 18:12:50.495492   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:12:50.495500   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:12:50.495574   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:12:50.529535   62996 cri.go:89] found id: ""
	I0914 18:12:50.529563   62996 logs.go:276] 0 containers: []
	W0914 18:12:50.529573   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:12:50.529580   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:12:50.529640   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:12:50.564648   62996 cri.go:89] found id: ""
	I0914 18:12:50.564679   62996 logs.go:276] 0 containers: []
	W0914 18:12:50.564689   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:12:50.564699   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:12:50.564710   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:12:50.639039   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:12:50.639066   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:12:50.639081   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:12:50.715636   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:12:50.715675   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:12:50.752973   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:12:50.753002   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:12:50.804654   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:12:50.804692   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:12:53.319420   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:12:53.332322   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:12:53.332414   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:12:53.370250   62996 cri.go:89] found id: ""
	I0914 18:12:53.370287   62996 logs.go:276] 0 containers: []
	W0914 18:12:53.370298   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:12:53.370306   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:12:53.370359   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:12:53.405394   62996 cri.go:89] found id: ""
	I0914 18:12:53.405422   62996 logs.go:276] 0 containers: []
	W0914 18:12:53.405434   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:12:53.405442   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:12:53.405501   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:12:53.439653   62996 cri.go:89] found id: ""
	I0914 18:12:53.439684   62996 logs.go:276] 0 containers: []
	W0914 18:12:53.439693   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:12:53.439699   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:12:53.439747   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:12:53.472491   62996 cri.go:89] found id: ""
	I0914 18:12:53.472520   62996 logs.go:276] 0 containers: []
	W0914 18:12:53.472531   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:12:53.472537   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:12:53.472598   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:12:53.506837   62996 cri.go:89] found id: ""
	I0914 18:12:53.506862   62996 logs.go:276] 0 containers: []
	W0914 18:12:53.506870   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:12:53.506877   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:12:53.506940   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:12:53.538229   62996 cri.go:89] found id: ""
	I0914 18:12:53.538256   62996 logs.go:276] 0 containers: []
	W0914 18:12:53.538267   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:12:53.538274   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:12:53.538340   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:12:53.570628   62996 cri.go:89] found id: ""
	I0914 18:12:53.570654   62996 logs.go:276] 0 containers: []
	W0914 18:12:53.570665   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:12:53.570672   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:12:53.570736   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:12:53.606147   62996 cri.go:89] found id: ""
	I0914 18:12:53.606188   62996 logs.go:276] 0 containers: []
	W0914 18:12:53.606199   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:12:53.606210   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:12:53.606236   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:12:53.675807   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:12:53.675829   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:12:53.675844   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:12:53.758491   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:12:53.758530   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:12:53.796006   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:12:53.796038   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:12:53.844935   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:12:53.844972   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:12:56.360696   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:12:56.374916   62996 kubeadm.go:597] duration metric: took 4m2.856242026s to restartPrimaryControlPlane
	W0914 18:12:56.374982   62996 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0914 18:12:56.375003   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0914 18:12:57.043509   62996 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 18:12:57.059022   62996 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0914 18:12:57.070295   62996 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0914 18:12:57.080854   62996 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0914 18:12:57.080875   62996 kubeadm.go:157] found existing configuration files:
	
	I0914 18:12:57.080917   62996 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0914 18:12:57.091221   62996 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0914 18:12:57.091320   62996 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0914 18:12:57.102011   62996 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0914 18:12:57.111389   62996 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0914 18:12:57.111451   62996 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0914 18:12:57.120508   62996 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0914 18:12:57.129086   62996 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0914 18:12:57.129162   62996 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0914 18:12:57.138193   62996 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0914 18:12:57.146637   62996 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0914 18:12:57.146694   62996 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
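	(Editor's note, not part of the captured log.) The lines above show the stale-config check that runs before kubeadm init: each kubeconfig under /etc/kubernetes is kept only if it already references https://control-plane.minikube.internal:8443, and is otherwise removed so kubeadm can regenerate it. Below is a self-contained Go sketch of that check under the same assumptions as the earlier one (runSSH is hypothetical, not minikube's actual helper).

	// A minimal sketch (not the minikube source) of the stale-kubeconfig
	// cleanup traced above.
	package main

	import "fmt"

	// runSSH would execute a command on the node over SSH and return its stdout.
	func runSSH(cmd string) (string, error) {
		return "", fmt.Errorf("not connected") // placeholder for the real runner
	}

	func cleanupStaleConfigs() {
		const endpoint = "https://control-plane.minikube.internal:8443"
		for _, conf := range []string{"admin.conf", "kubelet.conf", "controller-manager.conf", "scheduler.conf"} {
			path := "/etc/kubernetes/" + conf
			// grep exits non-zero when the endpoint is missing or the file does not exist.
			if _, err := runSSH(fmt.Sprintf("sudo grep %s %s", endpoint, path)); err != nil {
				fmt.Printf("%q may not be in %s - will remove\n", endpoint, path)
				runSSH("sudo rm -f " + path) // safe to re-run; kubeadm regenerates the file
			}
		}
	}

	func main() { cleanupStaleConfigs() }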
	I0914 18:12:57.155659   62996 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0914 18:12:57.230872   62996 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0914 18:12:57.230955   62996 kubeadm.go:310] [preflight] Running pre-flight checks
	I0914 18:12:57.369118   62996 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0914 18:12:57.369267   62996 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0914 18:12:57.369422   62996 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0914 18:12:57.560020   62996 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0914 18:12:57.561972   62996 out.go:235]   - Generating certificates and keys ...
	I0914 18:12:57.562086   62996 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0914 18:12:57.562180   62996 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0914 18:12:57.562311   62996 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0914 18:12:57.562370   62996 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0914 18:12:57.562426   62996 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0914 18:12:57.562473   62996 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0914 18:12:57.562562   62996 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0914 18:12:57.562654   62996 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0914 18:12:57.563036   62996 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0914 18:12:57.563429   62996 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0914 18:12:57.563514   62996 kubeadm.go:310] [certs] Using the existing "sa" key
	I0914 18:12:57.563592   62996 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0914 18:12:57.677534   62996 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0914 18:12:57.910852   62996 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0914 18:12:58.037495   62996 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0914 18:12:58.325552   62996 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0914 18:12:58.339574   62996 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0914 18:12:58.340671   62996 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0914 18:12:58.340740   62996 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0914 18:12:58.485582   62996 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0914 18:12:58.488706   62996 out.go:235]   - Booting up control plane ...
	I0914 18:12:58.488863   62996 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0914 18:12:58.496924   62996 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0914 18:12:58.499125   62996 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0914 18:12:58.500762   62996 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0914 18:12:58.504049   62996 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0914 18:13:38.505090   62996 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0914 18:13:38.505605   62996 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0914 18:13:38.505837   62996 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0914 18:13:43.506241   62996 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0914 18:13:43.506502   62996 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0914 18:13:53.506772   62996 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0914 18:13:53.506959   62996 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0914 18:14:13.507627   62996 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0914 18:14:13.507840   62996 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0914 18:14:53.509475   62996 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0914 18:14:53.509669   62996 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0914 18:14:53.509699   62996 kubeadm.go:310] 
	I0914 18:14:53.509778   62996 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0914 18:14:53.509838   62996 kubeadm.go:310] 		timed out waiting for the condition
	I0914 18:14:53.509849   62996 kubeadm.go:310] 
	I0914 18:14:53.509901   62996 kubeadm.go:310] 	This error is likely caused by:
	I0914 18:14:53.509966   62996 kubeadm.go:310] 		- The kubelet is not running
	I0914 18:14:53.510115   62996 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0914 18:14:53.510126   62996 kubeadm.go:310] 
	I0914 18:14:53.510293   62996 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0914 18:14:53.510346   62996 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0914 18:14:53.510386   62996 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0914 18:14:53.510394   62996 kubeadm.go:310] 
	I0914 18:14:53.510487   62996 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0914 18:14:53.510567   62996 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0914 18:14:53.510582   62996 kubeadm.go:310] 
	I0914 18:14:53.510758   62996 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0914 18:14:53.510852   62996 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0914 18:14:53.510953   62996 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0914 18:14:53.511074   62996 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0914 18:14:53.511085   62996 kubeadm.go:310] 
	I0914 18:14:53.511727   62996 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0914 18:14:53.511824   62996 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0914 18:14:53.511904   62996 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0914 18:14:53.512051   62996 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0914 18:14:53.512098   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
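Before the retry, the failed attempt is torn down with the 'kubeadm reset' shown above. The repeated [kubelet-check] failures in the first attempt all come from kubeadm polling the kubelet's local health endpoint; the same probe, together with the service and container checks that the kubeadm output itself suggests, can be run by hand on the node. A minimal sketch using only commands already quoted in the log:

    # Probe the kubelet health endpoint kubeadm polls (localhost:10248).
    curl -sSL http://localhost:10248/healthz
    # Check the kubelet service and its recent logs.
    systemctl status kubelet
    journalctl -xeu kubelet
    # List any control-plane containers CRI-O managed to start.
    sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause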
	I0914 18:14:53.965324   62996 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 18:14:53.982028   62996 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0914 18:14:53.993640   62996 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0914 18:14:53.993674   62996 kubeadm.go:157] found existing configuration files:
	
	I0914 18:14:53.993745   62996 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0914 18:14:54.004600   62996 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0914 18:14:54.004669   62996 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0914 18:14:54.015315   62996 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0914 18:14:54.025727   62996 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0914 18:14:54.025795   62996 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0914 18:14:54.035619   62996 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0914 18:14:54.044936   62996 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0914 18:14:54.045003   62996 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0914 18:14:54.055091   62996 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0914 18:14:54.064576   62996 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0914 18:14:54.064630   62996 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0914 18:14:54.074698   62996 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0914 18:14:54.143625   62996 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0914 18:14:54.143712   62996 kubeadm.go:310] [preflight] Running pre-flight checks
	I0914 18:14:54.289361   62996 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0914 18:14:54.289488   62996 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0914 18:14:54.289629   62996 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0914 18:14:54.479052   62996 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0914 18:14:54.481175   62996 out.go:235]   - Generating certificates and keys ...
	I0914 18:14:54.481284   62996 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0914 18:14:54.481391   62996 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0914 18:14:54.481469   62996 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0914 18:14:54.481522   62996 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0914 18:14:54.481585   62996 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0914 18:14:54.481631   62996 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0914 18:14:54.481685   62996 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0914 18:14:54.481737   62996 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0914 18:14:54.481829   62996 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0914 18:14:54.481926   62996 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0914 18:14:54.481977   62996 kubeadm.go:310] [certs] Using the existing "sa" key
	I0914 18:14:54.482063   62996 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0914 18:14:54.695002   62996 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0914 18:14:54.850598   62996 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0914 18:14:54.964590   62996 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0914 18:14:55.108047   62996 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0914 18:14:55.126530   62996 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0914 18:14:55.128690   62996 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0914 18:14:55.128760   62996 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0914 18:14:55.272139   62996 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0914 18:14:55.274365   62996 out.go:235]   - Booting up control plane ...
	I0914 18:14:55.274529   62996 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0914 18:14:55.279796   62996 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0914 18:14:55.281097   62996 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0914 18:14:55.281998   62996 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0914 18:14:55.285620   62996 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0914 18:15:35.288294   62996 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0914 18:15:35.288485   62996 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0914 18:15:35.288693   62996 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0914 18:15:40.289032   62996 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0914 18:15:40.289327   62996 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0914 18:15:50.289795   62996 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0914 18:15:50.290023   62996 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0914 18:16:10.291201   62996 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0914 18:16:10.291427   62996 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0914 18:16:50.292253   62996 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0914 18:16:50.292481   62996 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0914 18:16:50.292503   62996 kubeadm.go:310] 
	I0914 18:16:50.292554   62996 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0914 18:16:50.292606   62996 kubeadm.go:310] 		timed out waiting for the condition
	I0914 18:16:50.292615   62996 kubeadm.go:310] 
	I0914 18:16:50.292654   62996 kubeadm.go:310] 	This error is likely caused by:
	I0914 18:16:50.292685   62996 kubeadm.go:310] 		- The kubelet is not running
	I0914 18:16:50.292773   62996 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0914 18:16:50.292780   62996 kubeadm.go:310] 
	I0914 18:16:50.292912   62996 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0914 18:16:50.292953   62996 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0914 18:16:50.292993   62996 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0914 18:16:50.293022   62996 kubeadm.go:310] 
	I0914 18:16:50.293176   62996 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0914 18:16:50.293293   62996 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0914 18:16:50.293308   62996 kubeadm.go:310] 
	I0914 18:16:50.293470   62996 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0914 18:16:50.293602   62996 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0914 18:16:50.293709   62996 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0914 18:16:50.293810   62996 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0914 18:16:50.293830   62996 kubeadm.go:310] 
	I0914 18:16:50.294646   62996 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0914 18:16:50.294759   62996 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0914 18:16:50.294871   62996 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0914 18:16:50.294910   62996 kubeadm.go:394] duration metric: took 7m56.82551772s to StartCluster
	I0914 18:16:50.294961   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:16:50.295021   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:16:50.341859   62996 cri.go:89] found id: ""
	I0914 18:16:50.341894   62996 logs.go:276] 0 containers: []
	W0914 18:16:50.341908   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:16:50.341916   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:16:50.341983   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:16:50.380725   62996 cri.go:89] found id: ""
	I0914 18:16:50.380755   62996 logs.go:276] 0 containers: []
	W0914 18:16:50.380766   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:16:50.380773   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:16:50.380842   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:16:50.415978   62996 cri.go:89] found id: ""
	I0914 18:16:50.416003   62996 logs.go:276] 0 containers: []
	W0914 18:16:50.416012   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:16:50.416017   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:16:50.416065   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:16:50.452823   62996 cri.go:89] found id: ""
	I0914 18:16:50.452859   62996 logs.go:276] 0 containers: []
	W0914 18:16:50.452872   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:16:50.452882   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:16:50.452939   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:16:50.487240   62996 cri.go:89] found id: ""
	I0914 18:16:50.487272   62996 logs.go:276] 0 containers: []
	W0914 18:16:50.487283   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:16:50.487291   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:16:50.487353   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:16:50.520690   62996 cri.go:89] found id: ""
	I0914 18:16:50.520719   62996 logs.go:276] 0 containers: []
	W0914 18:16:50.520728   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:16:50.520735   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:16:50.520783   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:16:50.558150   62996 cri.go:89] found id: ""
	I0914 18:16:50.558191   62996 logs.go:276] 0 containers: []
	W0914 18:16:50.558200   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:16:50.558206   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:16:50.558266   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:16:50.595843   62996 cri.go:89] found id: ""
	I0914 18:16:50.595879   62996 logs.go:276] 0 containers: []
	W0914 18:16:50.595893   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:16:50.595905   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:16:50.595920   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:16:50.650623   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:16:50.650659   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:16:50.664991   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:16:50.665018   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:16:50.747876   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:16:50.747899   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:16:50.747915   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:16:50.849314   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:16:50.849354   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
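The diagnostics pass above queries each control-plane component the same way before dumping kubelet, dmesg, and CRI-O logs. A compact sketch of what those Run lines amount to (the component names and the crictl/docker fallback are taken from the log; the loop itself is illustrative):

    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet kubernetes-dashboard; do
      echo "== $name =="
      sudo crictl ps -a --quiet --name="$name"
    done
    # Fallback used for the final 'container status' dump:
    sudo $(which crictl || echo crictl) ps -a || sudo docker ps -a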
	W0914 18:16:50.889101   62996 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0914 18:16:50.889181   62996 out.go:270] * 
	W0914 18:16:50.889263   62996 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0914 18:16:50.889287   62996 out.go:270] * 
	W0914 18:16:50.890531   62996 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0914 18:16:50.893666   62996 out.go:201] 
	W0914 18:16:50.894916   62996 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0914 18:16:50.894958   62996 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0914 18:16:50.894991   62996 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0914 18:16:50.896591   62996 out.go:201] 

                                                
                                                
** /stderr **
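The kubeadm output captured above points at the kubelet as the failing component and prints its own troubleshooting suggestions. A minimal sketch of running those checks by hand inside the affected VM (profile name old-k8s-version-556121 taken from this report; the crictl socket path is the one kubeadm suggests for cri-o, not separately verified against this host):

	minikube ssh -p old-k8s-version-556121
	# inside the VM:
	sudo systemctl status kubelet
	sudo journalctl -xeu kubelet --no-pager | tail -n 100
	# the health endpoint kubeadm was polling:
	curl -sSL http://localhost:10248/healthz
	# list control-plane containers under cri-o:
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause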
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-linux-amd64 start -p old-k8s-version-556121 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0": exit status 109
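The same failure output carries minikube's own suggestion of passing --extra-config=kubelet.cgroup-driver=systemd. A sketch of retrying the start with that flag appended (other flags abridged from the invocation above; whether it resolves this particular v1.20.0 failure is not verified here):

	out/minikube-linux-amd64 start -p old-k8s-version-556121 --memory=2200 \
	  --alsologtostderr --wait=true --driver=kvm2 --container-runtime=crio \
	  --kubernetes-version=v1.20.0 \
	  --extra-config=kubelet.cgroup-driver=systemd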
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-556121 -n old-k8s-version-556121
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-556121 -n old-k8s-version-556121: exit status 2 (223.077507ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-556121 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-556121 logs -n 25: (1.73080523s)
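The harness collects its post-mortem with the status and logs subcommands shown above; the same data can be pulled by hand when reproducing this failure locally (profile name reused from this report; the status fields are assumed from minikube's standard status template, not confirmed by this run):

	out/minikube-linux-amd64 status -p old-k8s-version-556121 --format='{{.Host}} {{.Kubelet}} {{.APIServer}}'
	out/minikube-linux-amd64 logs -p old-k8s-version-556121 -n 25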
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p stopped-upgrade-319416                              | stopped-upgrade-319416       | jenkins | v1.34.0 | 14 Sep 24 18:00 UTC | 14 Sep 24 18:00 UTC |
	| start   | -p embed-certs-044534                                  | embed-certs-044534           | jenkins | v1.34.0 | 14 Sep 24 18:00 UTC | 14 Sep 24 18:01 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-168587             | no-preload-168587            | jenkins | v1.34.0 | 14 Sep 24 18:00 UTC | 14 Sep 24 18:00 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-168587                                   | no-preload-168587            | jenkins | v1.34.0 | 14 Sep 24 18:00 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-044534            | embed-certs-044534           | jenkins | v1.34.0 | 14 Sep 24 18:01 UTC | 14 Sep 24 18:01 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-044534                                  | embed-certs-044534           | jenkins | v1.34.0 | 14 Sep 24 18:01 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-470019                           | kubernetes-upgrade-470019    | jenkins | v1.34.0 | 14 Sep 24 18:02 UTC | 14 Sep 24 18:02 UTC |
	| start   | -p kubernetes-upgrade-470019                           | kubernetes-upgrade-470019    | jenkins | v1.34.0 | 14 Sep 24 18:02 UTC | 14 Sep 24 18:02 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-470019                           | kubernetes-upgrade-470019    | jenkins | v1.34.0 | 14 Sep 24 18:02 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-470019                           | kubernetes-upgrade-470019    | jenkins | v1.34.0 | 14 Sep 24 18:02 UTC | 14 Sep 24 18:02 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-470019                           | kubernetes-upgrade-470019    | jenkins | v1.34.0 | 14 Sep 24 18:03 UTC | 14 Sep 24 18:03 UTC |
	| delete  | -p                                                     | disable-driver-mounts-444413 | jenkins | v1.34.0 | 14 Sep 24 18:03 UTC | 14 Sep 24 18:03 UTC |
	|         | disable-driver-mounts-444413                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-243449 | jenkins | v1.34.0 | 14 Sep 24 18:03 UTC | 14 Sep 24 18:03 UTC |
	|         | default-k8s-diff-port-243449                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-556121        | old-k8s-version-556121       | jenkins | v1.34.0 | 14 Sep 24 18:03 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-168587                  | no-preload-168587            | jenkins | v1.34.0 | 14 Sep 24 18:03 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-168587                                   | no-preload-168587            | jenkins | v1.34.0 | 14 Sep 24 18:03 UTC | 14 Sep 24 18:14 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-044534                 | embed-certs-044534           | jenkins | v1.34.0 | 14 Sep 24 18:03 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-044534                                  | embed-certs-044534           | jenkins | v1.34.0 | 14 Sep 24 18:04 UTC | 14 Sep 24 18:13 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-243449  | default-k8s-diff-port-243449 | jenkins | v1.34.0 | 14 Sep 24 18:04 UTC | 14 Sep 24 18:04 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-243449 | jenkins | v1.34.0 | 14 Sep 24 18:04 UTC |                     |
	|         | default-k8s-diff-port-243449                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-556121                              | old-k8s-version-556121       | jenkins | v1.34.0 | 14 Sep 24 18:05 UTC | 14 Sep 24 18:05 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-556121             | old-k8s-version-556121       | jenkins | v1.34.0 | 14 Sep 24 18:05 UTC | 14 Sep 24 18:05 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-556121                              | old-k8s-version-556121       | jenkins | v1.34.0 | 14 Sep 24 18:05 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-243449       | default-k8s-diff-port-243449 | jenkins | v1.34.0 | 14 Sep 24 18:06 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-243449 | jenkins | v1.34.0 | 14 Sep 24 18:06 UTC | 14 Sep 24 18:13 UTC |
	|         | default-k8s-diff-port-243449                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/14 18:06:40
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0914 18:06:40.299903   63448 out.go:345] Setting OutFile to fd 1 ...
	I0914 18:06:40.300039   63448 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 18:06:40.300049   63448 out.go:358] Setting ErrFile to fd 2...
	I0914 18:06:40.300054   63448 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 18:06:40.300240   63448 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19643-8806/.minikube/bin
	I0914 18:06:40.300801   63448 out.go:352] Setting JSON to false
	I0914 18:06:40.301779   63448 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":6544,"bootTime":1726330656,"procs":200,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0914 18:06:40.301879   63448 start.go:139] virtualization: kvm guest
	I0914 18:06:40.303963   63448 out.go:177] * [default-k8s-diff-port-243449] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0914 18:06:40.305394   63448 out.go:177]   - MINIKUBE_LOCATION=19643
	I0914 18:06:40.305429   63448 notify.go:220] Checking for updates...
	I0914 18:06:40.308148   63448 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0914 18:06:40.309226   63448 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19643-8806/kubeconfig
	I0914 18:06:40.310360   63448 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19643-8806/.minikube
	I0914 18:06:40.311509   63448 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0914 18:06:40.312543   63448 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0914 18:06:40.314418   63448 config.go:182] Loaded profile config "default-k8s-diff-port-243449": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0914 18:06:40.315063   63448 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 18:06:40.315154   63448 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 18:06:40.330033   63448 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44655
	I0914 18:06:40.330502   63448 main.go:141] libmachine: () Calling .GetVersion
	I0914 18:06:40.331014   63448 main.go:141] libmachine: Using API Version  1
	I0914 18:06:40.331035   63448 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 18:06:40.331372   63448 main.go:141] libmachine: () Calling .GetMachineName
	I0914 18:06:40.331519   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .DriverName
	I0914 18:06:40.331729   63448 driver.go:394] Setting default libvirt URI to qemu:///system
	I0914 18:06:40.332043   63448 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 18:06:40.332089   63448 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 18:06:40.346598   63448 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37837
	I0914 18:06:40.347021   63448 main.go:141] libmachine: () Calling .GetVersion
	I0914 18:06:40.347501   63448 main.go:141] libmachine: Using API Version  1
	I0914 18:06:40.347536   63448 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 18:06:40.347863   63448 main.go:141] libmachine: () Calling .GetMachineName
	I0914 18:06:40.348042   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .DriverName
	I0914 18:06:40.380416   63448 out.go:177] * Using the kvm2 driver based on existing profile
	I0914 18:06:40.381578   63448 start.go:297] selected driver: kvm2
	I0914 18:06:40.381589   63448 start.go:901] validating driver "kvm2" against &{Name:default-k8s-diff-port-243449 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19643/minikube-v1.34.0-1726281733-19643-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726281268-19643@sha256:cce8be0c1ac4e3d852132008ef1cc1dcf5b79f708d025db83f146ae65db32e8e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesCo
nfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-243449 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.38 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks
:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0914 18:06:40.381693   63448 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0914 18:06:40.382390   63448 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 18:06:40.382478   63448 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19643-8806/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0914 18:06:40.397521   63448 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0914 18:06:40.397921   63448 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0914 18:06:40.397959   63448 cni.go:84] Creating CNI manager for ""
	I0914 18:06:40.398002   63448 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0914 18:06:40.398040   63448 start.go:340] cluster config:
	{Name:default-k8s-diff-port-243449 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19643/minikube-v1.34.0-1726281733-19643-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726281268-19643@sha256:cce8be0c1ac4e3d852132008ef1cc1dcf5b79f708d025db83f146ae65db32e8e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-243449 Names
pace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.38 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-h
ost Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0914 18:06:40.398145   63448 iso.go:125] acquiring lock: {Name:mk538f7a4abc7956f82511183088cbfac1e66ebb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 18:06:40.399920   63448 out.go:177] * Starting "default-k8s-diff-port-243449" primary control-plane node in "default-k8s-diff-port-243449" cluster
	I0914 18:06:39.170425   62207 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0914 18:06:40.400913   63448 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0914 18:06:40.400954   63448 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19643-8806/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0914 18:06:40.400966   63448 cache.go:56] Caching tarball of preloaded images
	I0914 18:06:40.401038   63448 preload.go:172] Found /home/jenkins/minikube-integration/19643-8806/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0914 18:06:40.401055   63448 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0914 18:06:40.401185   63448 profile.go:143] Saving config to /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/default-k8s-diff-port-243449/config.json ...
	I0914 18:06:40.401421   63448 start.go:360] acquireMachinesLock for default-k8s-diff-port-243449: {Name:mk26748ac63472c3f8b6f3848c12e76160c2970c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0914 18:06:45.250426   62207 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0914 18:06:48.322531   62207 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0914 18:06:54.402441   62207 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0914 18:06:57.474440   62207 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0914 18:07:03.554541   62207 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0914 18:07:06.626472   62207 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0914 18:07:12.706430   62207 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0914 18:07:15.778448   62207 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0914 18:07:21.858453   62207 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0914 18:07:24.930473   62207 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0914 18:07:31.010432   62207 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0914 18:07:34.082423   62207 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0914 18:07:40.162417   62207 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0914 18:07:43.234501   62207 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0914 18:07:49.314533   62207 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0914 18:07:52.386453   62207 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0914 18:07:58.466444   62207 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0914 18:08:01.538476   62207 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0914 18:08:04.546206   62554 start.go:364] duration metric: took 3m59.524513317s to acquireMachinesLock for "embed-certs-044534"
	I0914 18:08:04.546263   62554 start.go:96] Skipping create...Using existing machine configuration
	I0914 18:08:04.546275   62554 fix.go:54] fixHost starting: 
	I0914 18:08:04.546585   62554 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 18:08:04.546636   62554 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 18:08:04.562182   62554 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46087
	I0914 18:08:04.562704   62554 main.go:141] libmachine: () Calling .GetVersion
	I0914 18:08:04.563264   62554 main.go:141] libmachine: Using API Version  1
	I0914 18:08:04.563300   62554 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 18:08:04.563714   62554 main.go:141] libmachine: () Calling .GetMachineName
	I0914 18:08:04.563947   62554 main.go:141] libmachine: (embed-certs-044534) Calling .DriverName
	I0914 18:08:04.564131   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetState
	I0914 18:08:04.566043   62554 fix.go:112] recreateIfNeeded on embed-certs-044534: state=Stopped err=<nil>
	I0914 18:08:04.566073   62554 main.go:141] libmachine: (embed-certs-044534) Calling .DriverName
	W0914 18:08:04.566289   62554 fix.go:138] unexpected machine state, will restart: <nil>
	I0914 18:08:04.567993   62554 out.go:177] * Restarting existing kvm2 VM for "embed-certs-044534" ...
	I0914 18:08:04.570182   62554 main.go:141] libmachine: (embed-certs-044534) Calling .Start
	I0914 18:08:04.570431   62554 main.go:141] libmachine: (embed-certs-044534) Ensuring networks are active...
	I0914 18:08:04.571374   62554 main.go:141] libmachine: (embed-certs-044534) Ensuring network default is active
	I0914 18:08:04.571748   62554 main.go:141] libmachine: (embed-certs-044534) Ensuring network mk-embed-certs-044534 is active
	I0914 18:08:04.572124   62554 main.go:141] libmachine: (embed-certs-044534) Getting domain xml...
	I0914 18:08:04.572852   62554 main.go:141] libmachine: (embed-certs-044534) Creating domain...
	I0914 18:08:04.540924   62207 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0914 18:08:04.540957   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetMachineName
	I0914 18:08:04.541310   62207 buildroot.go:166] provisioning hostname "no-preload-168587"
	I0914 18:08:04.541335   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetMachineName
	I0914 18:08:04.541586   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetSSHHostname
	I0914 18:08:04.546055   62207 machine.go:96] duration metric: took 4m34.63489942s to provisionDockerMachine
	I0914 18:08:04.546096   62207 fix.go:56] duration metric: took 4m34.662932355s for fixHost
	I0914 18:08:04.546102   62207 start.go:83] releasing machines lock for "no-preload-168587", held for 4m34.66297244s
	W0914 18:08:04.546122   62207 start.go:714] error starting host: provision: host is not running
	W0914 18:08:04.546220   62207 out.go:270] ! StartHost failed, but will try again: provision: host is not running
	I0914 18:08:04.546231   62207 start.go:729] Will try again in 5 seconds ...
	I0914 18:08:05.812076   62554 main.go:141] libmachine: (embed-certs-044534) Waiting to get IP...
	I0914 18:08:05.812955   62554 main.go:141] libmachine: (embed-certs-044534) DBG | domain embed-certs-044534 has defined MAC address 52:54:00:f7:d3:8e in network mk-embed-certs-044534
	I0914 18:08:05.813302   62554 main.go:141] libmachine: (embed-certs-044534) DBG | unable to find current IP address of domain embed-certs-044534 in network mk-embed-certs-044534
	I0914 18:08:05.813380   62554 main.go:141] libmachine: (embed-certs-044534) DBG | I0914 18:08:05.813279   63779 retry.go:31] will retry after 298.8389ms: waiting for machine to come up
	I0914 18:08:06.114130   62554 main.go:141] libmachine: (embed-certs-044534) DBG | domain embed-certs-044534 has defined MAC address 52:54:00:f7:d3:8e in network mk-embed-certs-044534
	I0914 18:08:06.114575   62554 main.go:141] libmachine: (embed-certs-044534) DBG | unable to find current IP address of domain embed-certs-044534 in network mk-embed-certs-044534
	I0914 18:08:06.114604   62554 main.go:141] libmachine: (embed-certs-044534) DBG | I0914 18:08:06.114530   63779 retry.go:31] will retry after 359.694721ms: waiting for machine to come up
	I0914 18:08:06.476183   62554 main.go:141] libmachine: (embed-certs-044534) DBG | domain embed-certs-044534 has defined MAC address 52:54:00:f7:d3:8e in network mk-embed-certs-044534
	I0914 18:08:06.476801   62554 main.go:141] libmachine: (embed-certs-044534) DBG | unable to find current IP address of domain embed-certs-044534 in network mk-embed-certs-044534
	I0914 18:08:06.476828   62554 main.go:141] libmachine: (embed-certs-044534) DBG | I0914 18:08:06.476745   63779 retry.go:31] will retry after 425.650219ms: waiting for machine to come up
	I0914 18:08:06.904358   62554 main.go:141] libmachine: (embed-certs-044534) DBG | domain embed-certs-044534 has defined MAC address 52:54:00:f7:d3:8e in network mk-embed-certs-044534
	I0914 18:08:06.904794   62554 main.go:141] libmachine: (embed-certs-044534) DBG | unable to find current IP address of domain embed-certs-044534 in network mk-embed-certs-044534
	I0914 18:08:06.904816   62554 main.go:141] libmachine: (embed-certs-044534) DBG | I0914 18:08:06.904749   63779 retry.go:31] will retry after 433.157325ms: waiting for machine to come up
	I0914 18:08:07.339139   62554 main.go:141] libmachine: (embed-certs-044534) DBG | domain embed-certs-044534 has defined MAC address 52:54:00:f7:d3:8e in network mk-embed-certs-044534
	I0914 18:08:07.339578   62554 main.go:141] libmachine: (embed-certs-044534) DBG | unable to find current IP address of domain embed-certs-044534 in network mk-embed-certs-044534
	I0914 18:08:07.339602   62554 main.go:141] libmachine: (embed-certs-044534) DBG | I0914 18:08:07.339512   63779 retry.go:31] will retry after 547.817102ms: waiting for machine to come up
	I0914 18:08:07.889390   62554 main.go:141] libmachine: (embed-certs-044534) DBG | domain embed-certs-044534 has defined MAC address 52:54:00:f7:d3:8e in network mk-embed-certs-044534
	I0914 18:08:07.889888   62554 main.go:141] libmachine: (embed-certs-044534) DBG | unable to find current IP address of domain embed-certs-044534 in network mk-embed-certs-044534
	I0914 18:08:07.889993   62554 main.go:141] libmachine: (embed-certs-044534) DBG | I0914 18:08:07.889820   63779 retry.go:31] will retry after 603.749753ms: waiting for machine to come up
	I0914 18:08:08.495673   62554 main.go:141] libmachine: (embed-certs-044534) DBG | domain embed-certs-044534 has defined MAC address 52:54:00:f7:d3:8e in network mk-embed-certs-044534
	I0914 18:08:08.496047   62554 main.go:141] libmachine: (embed-certs-044534) DBG | unable to find current IP address of domain embed-certs-044534 in network mk-embed-certs-044534
	I0914 18:08:08.496076   62554 main.go:141] libmachine: (embed-certs-044534) DBG | I0914 18:08:08.495995   63779 retry.go:31] will retry after 831.027535ms: waiting for machine to come up
	I0914 18:08:09.329209   62554 main.go:141] libmachine: (embed-certs-044534) DBG | domain embed-certs-044534 has defined MAC address 52:54:00:f7:d3:8e in network mk-embed-certs-044534
	I0914 18:08:09.329622   62554 main.go:141] libmachine: (embed-certs-044534) DBG | unable to find current IP address of domain embed-certs-044534 in network mk-embed-certs-044534
	I0914 18:08:09.329643   62554 main.go:141] libmachine: (embed-certs-044534) DBG | I0914 18:08:09.329591   63779 retry.go:31] will retry after 1.429850518s: waiting for machine to come up
	I0914 18:08:09.548738   62207 start.go:360] acquireMachinesLock for no-preload-168587: {Name:mk26748ac63472c3f8b6f3848c12e76160c2970c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0914 18:08:10.761510   62554 main.go:141] libmachine: (embed-certs-044534) DBG | domain embed-certs-044534 has defined MAC address 52:54:00:f7:d3:8e in network mk-embed-certs-044534
	I0914 18:08:10.761884   62554 main.go:141] libmachine: (embed-certs-044534) DBG | unable to find current IP address of domain embed-certs-044534 in network mk-embed-certs-044534
	I0914 18:08:10.761915   62554 main.go:141] libmachine: (embed-certs-044534) DBG | I0914 18:08:10.761839   63779 retry.go:31] will retry after 1.146619754s: waiting for machine to come up
	I0914 18:08:11.910130   62554 main.go:141] libmachine: (embed-certs-044534) DBG | domain embed-certs-044534 has defined MAC address 52:54:00:f7:d3:8e in network mk-embed-certs-044534
	I0914 18:08:11.910542   62554 main.go:141] libmachine: (embed-certs-044534) DBG | unable to find current IP address of domain embed-certs-044534 in network mk-embed-certs-044534
	I0914 18:08:11.910568   62554 main.go:141] libmachine: (embed-certs-044534) DBG | I0914 18:08:11.910500   63779 retry.go:31] will retry after 1.582382319s: waiting for machine to come up
	I0914 18:08:13.495352   62554 main.go:141] libmachine: (embed-certs-044534) DBG | domain embed-certs-044534 has defined MAC address 52:54:00:f7:d3:8e in network mk-embed-certs-044534
	I0914 18:08:13.495852   62554 main.go:141] libmachine: (embed-certs-044534) DBG | unable to find current IP address of domain embed-certs-044534 in network mk-embed-certs-044534
	I0914 18:08:13.495872   62554 main.go:141] libmachine: (embed-certs-044534) DBG | I0914 18:08:13.495808   63779 retry.go:31] will retry after 2.117717335s: waiting for machine to come up
	I0914 18:08:15.615461   62554 main.go:141] libmachine: (embed-certs-044534) DBG | domain embed-certs-044534 has defined MAC address 52:54:00:f7:d3:8e in network mk-embed-certs-044534
	I0914 18:08:15.615896   62554 main.go:141] libmachine: (embed-certs-044534) DBG | unable to find current IP address of domain embed-certs-044534 in network mk-embed-certs-044534
	I0914 18:08:15.615918   62554 main.go:141] libmachine: (embed-certs-044534) DBG | I0914 18:08:15.615846   63779 retry.go:31] will retry after 3.071486865s: waiting for machine to come up
	I0914 18:08:18.691109   62554 main.go:141] libmachine: (embed-certs-044534) DBG | domain embed-certs-044534 has defined MAC address 52:54:00:f7:d3:8e in network mk-embed-certs-044534
	I0914 18:08:18.691572   62554 main.go:141] libmachine: (embed-certs-044534) DBG | unable to find current IP address of domain embed-certs-044534 in network mk-embed-certs-044534
	I0914 18:08:18.691605   62554 main.go:141] libmachine: (embed-certs-044534) DBG | I0914 18:08:18.691513   63779 retry.go:31] will retry after 4.250544955s: waiting for machine to come up
	I0914 18:08:24.143036   62996 start.go:364] duration metric: took 3m18.692107902s to acquireMachinesLock for "old-k8s-version-556121"
	I0914 18:08:24.143089   62996 start.go:96] Skipping create...Using existing machine configuration
	I0914 18:08:24.143094   62996 fix.go:54] fixHost starting: 
	I0914 18:08:24.143474   62996 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 18:08:24.143527   62996 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 18:08:24.160421   62996 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44345
	I0914 18:08:24.160864   62996 main.go:141] libmachine: () Calling .GetVersion
	I0914 18:08:24.161467   62996 main.go:141] libmachine: Using API Version  1
	I0914 18:08:24.161495   62996 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 18:08:24.161913   62996 main.go:141] libmachine: () Calling .GetMachineName
	I0914 18:08:24.162137   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .DriverName
	I0914 18:08:24.162322   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetState
	I0914 18:08:24.163974   62996 fix.go:112] recreateIfNeeded on old-k8s-version-556121: state=Stopped err=<nil>
	I0914 18:08:24.164020   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .DriverName
	W0914 18:08:24.164197   62996 fix.go:138] unexpected machine state, will restart: <nil>
	I0914 18:08:24.166624   62996 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-556121" ...
	I0914 18:08:22.946247   62554 main.go:141] libmachine: (embed-certs-044534) DBG | domain embed-certs-044534 has defined MAC address 52:54:00:f7:d3:8e in network mk-embed-certs-044534
	I0914 18:08:22.946662   62554 main.go:141] libmachine: (embed-certs-044534) DBG | domain embed-certs-044534 has current primary IP address 192.168.50.126 and MAC address 52:54:00:f7:d3:8e in network mk-embed-certs-044534
	I0914 18:08:22.946687   62554 main.go:141] libmachine: (embed-certs-044534) Found IP for machine: 192.168.50.126
	I0914 18:08:22.946700   62554 main.go:141] libmachine: (embed-certs-044534) Reserving static IP address...
	I0914 18:08:22.947052   62554 main.go:141] libmachine: (embed-certs-044534) DBG | found host DHCP lease matching {name: "embed-certs-044534", mac: "52:54:00:f7:d3:8e", ip: "192.168.50.126"} in network mk-embed-certs-044534: {Iface:virbr3 ExpiryTime:2024-09-14 19:00:16 +0000 UTC Type:0 Mac:52:54:00:f7:d3:8e Iaid: IPaddr:192.168.50.126 Prefix:24 Hostname:embed-certs-044534 Clientid:01:52:54:00:f7:d3:8e}
	I0914 18:08:22.947068   62554 main.go:141] libmachine: (embed-certs-044534) Reserved static IP address: 192.168.50.126
	I0914 18:08:22.947080   62554 main.go:141] libmachine: (embed-certs-044534) DBG | skip adding static IP to network mk-embed-certs-044534 - found existing host DHCP lease matching {name: "embed-certs-044534", mac: "52:54:00:f7:d3:8e", ip: "192.168.50.126"}
	I0914 18:08:22.947093   62554 main.go:141] libmachine: (embed-certs-044534) DBG | Getting to WaitForSSH function...
	I0914 18:08:22.947108   62554 main.go:141] libmachine: (embed-certs-044534) Waiting for SSH to be available...
	I0914 18:08:22.949354   62554 main.go:141] libmachine: (embed-certs-044534) DBG | domain embed-certs-044534 has defined MAC address 52:54:00:f7:d3:8e in network mk-embed-certs-044534
	I0914 18:08:22.949623   62554 main.go:141] libmachine: (embed-certs-044534) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:d3:8e", ip: ""} in network mk-embed-certs-044534: {Iface:virbr3 ExpiryTime:2024-09-14 19:00:16 +0000 UTC Type:0 Mac:52:54:00:f7:d3:8e Iaid: IPaddr:192.168.50.126 Prefix:24 Hostname:embed-certs-044534 Clientid:01:52:54:00:f7:d3:8e}
	I0914 18:08:22.949645   62554 main.go:141] libmachine: (embed-certs-044534) DBG | domain embed-certs-044534 has defined IP address 192.168.50.126 and MAC address 52:54:00:f7:d3:8e in network mk-embed-certs-044534
	I0914 18:08:22.949798   62554 main.go:141] libmachine: (embed-certs-044534) DBG | Using SSH client type: external
	I0914 18:08:22.949822   62554 main.go:141] libmachine: (embed-certs-044534) DBG | Using SSH private key: /home/jenkins/minikube-integration/19643-8806/.minikube/machines/embed-certs-044534/id_rsa (-rw-------)
	I0914 18:08:22.949886   62554 main.go:141] libmachine: (embed-certs-044534) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.126 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19643-8806/.minikube/machines/embed-certs-044534/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0914 18:08:22.949911   62554 main.go:141] libmachine: (embed-certs-044534) DBG | About to run SSH command:
	I0914 18:08:22.949926   62554 main.go:141] libmachine: (embed-certs-044534) DBG | exit 0
	I0914 18:08:23.074248   62554 main.go:141] libmachine: (embed-certs-044534) DBG | SSH cmd err, output: <nil>: 
	I0914 18:08:23.074559   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetConfigRaw
	I0914 18:08:23.075190   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetIP
	I0914 18:08:23.077682   62554 main.go:141] libmachine: (embed-certs-044534) DBG | domain embed-certs-044534 has defined MAC address 52:54:00:f7:d3:8e in network mk-embed-certs-044534
	I0914 18:08:23.078007   62554 main.go:141] libmachine: (embed-certs-044534) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:d3:8e", ip: ""} in network mk-embed-certs-044534: {Iface:virbr3 ExpiryTime:2024-09-14 19:00:16 +0000 UTC Type:0 Mac:52:54:00:f7:d3:8e Iaid: IPaddr:192.168.50.126 Prefix:24 Hostname:embed-certs-044534 Clientid:01:52:54:00:f7:d3:8e}
	I0914 18:08:23.078040   62554 main.go:141] libmachine: (embed-certs-044534) DBG | domain embed-certs-044534 has defined IP address 192.168.50.126 and MAC address 52:54:00:f7:d3:8e in network mk-embed-certs-044534
	I0914 18:08:23.078309   62554 profile.go:143] Saving config to /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/embed-certs-044534/config.json ...
	I0914 18:08:23.078494   62554 machine.go:93] provisionDockerMachine start ...
	I0914 18:08:23.078510   62554 main.go:141] libmachine: (embed-certs-044534) Calling .DriverName
	I0914 18:08:23.078723   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetSSHHostname
	I0914 18:08:23.081444   62554 main.go:141] libmachine: (embed-certs-044534) DBG | domain embed-certs-044534 has defined MAC address 52:54:00:f7:d3:8e in network mk-embed-certs-044534
	I0914 18:08:23.081846   62554 main.go:141] libmachine: (embed-certs-044534) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:d3:8e", ip: ""} in network mk-embed-certs-044534: {Iface:virbr3 ExpiryTime:2024-09-14 19:00:16 +0000 UTC Type:0 Mac:52:54:00:f7:d3:8e Iaid: IPaddr:192.168.50.126 Prefix:24 Hostname:embed-certs-044534 Clientid:01:52:54:00:f7:d3:8e}
	I0914 18:08:23.081891   62554 main.go:141] libmachine: (embed-certs-044534) DBG | domain embed-certs-044534 has defined IP address 192.168.50.126 and MAC address 52:54:00:f7:d3:8e in network mk-embed-certs-044534
	I0914 18:08:23.082026   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetSSHPort
	I0914 18:08:23.082209   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetSSHKeyPath
	I0914 18:08:23.082398   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetSSHKeyPath
	I0914 18:08:23.082573   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetSSHUsername
	I0914 18:08:23.082739   62554 main.go:141] libmachine: Using SSH client type: native
	I0914 18:08:23.082961   62554 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.50.126 22 <nil> <nil>}
	I0914 18:08:23.082984   62554 main.go:141] libmachine: About to run SSH command:
	hostname
	I0914 18:08:23.186143   62554 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0914 18:08:23.186193   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetMachineName
	I0914 18:08:23.186424   62554 buildroot.go:166] provisioning hostname "embed-certs-044534"
	I0914 18:08:23.186447   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetMachineName
	I0914 18:08:23.186622   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetSSHHostname
	I0914 18:08:23.189085   62554 main.go:141] libmachine: (embed-certs-044534) DBG | domain embed-certs-044534 has defined MAC address 52:54:00:f7:d3:8e in network mk-embed-certs-044534
	I0914 18:08:23.189453   62554 main.go:141] libmachine: (embed-certs-044534) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:d3:8e", ip: ""} in network mk-embed-certs-044534: {Iface:virbr3 ExpiryTime:2024-09-14 19:00:16 +0000 UTC Type:0 Mac:52:54:00:f7:d3:8e Iaid: IPaddr:192.168.50.126 Prefix:24 Hostname:embed-certs-044534 Clientid:01:52:54:00:f7:d3:8e}
	I0914 18:08:23.189482   62554 main.go:141] libmachine: (embed-certs-044534) DBG | domain embed-certs-044534 has defined IP address 192.168.50.126 and MAC address 52:54:00:f7:d3:8e in network mk-embed-certs-044534
	I0914 18:08:23.189615   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetSSHPort
	I0914 18:08:23.189802   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetSSHKeyPath
	I0914 18:08:23.190032   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetSSHKeyPath
	I0914 18:08:23.190168   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetSSHUsername
	I0914 18:08:23.190422   62554 main.go:141] libmachine: Using SSH client type: native
	I0914 18:08:23.190587   62554 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.50.126 22 <nil> <nil>}
	I0914 18:08:23.190601   62554 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-044534 && echo "embed-certs-044534" | sudo tee /etc/hostname
	I0914 18:08:23.307484   62554 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-044534
	
	I0914 18:08:23.307512   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetSSHHostname
	I0914 18:08:23.310220   62554 main.go:141] libmachine: (embed-certs-044534) DBG | domain embed-certs-044534 has defined MAC address 52:54:00:f7:d3:8e in network mk-embed-certs-044534
	I0914 18:08:23.310634   62554 main.go:141] libmachine: (embed-certs-044534) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:d3:8e", ip: ""} in network mk-embed-certs-044534: {Iface:virbr3 ExpiryTime:2024-09-14 19:00:16 +0000 UTC Type:0 Mac:52:54:00:f7:d3:8e Iaid: IPaddr:192.168.50.126 Prefix:24 Hostname:embed-certs-044534 Clientid:01:52:54:00:f7:d3:8e}
	I0914 18:08:23.310664   62554 main.go:141] libmachine: (embed-certs-044534) DBG | domain embed-certs-044534 has defined IP address 192.168.50.126 and MAC address 52:54:00:f7:d3:8e in network mk-embed-certs-044534
	I0914 18:08:23.310764   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetSSHPort
	I0914 18:08:23.310969   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetSSHKeyPath
	I0914 18:08:23.311206   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetSSHKeyPath
	I0914 18:08:23.311438   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetSSHUsername
	I0914 18:08:23.311594   62554 main.go:141] libmachine: Using SSH client type: native
	I0914 18:08:23.311802   62554 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.50.126 22 <nil> <nil>}
	I0914 18:08:23.311820   62554 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-044534' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-044534/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-044534' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0914 18:08:23.422574   62554 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0914 18:08:23.422603   62554 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19643-8806/.minikube CaCertPath:/home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19643-8806/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19643-8806/.minikube}
	I0914 18:08:23.422623   62554 buildroot.go:174] setting up certificates
	I0914 18:08:23.422634   62554 provision.go:84] configureAuth start
	I0914 18:08:23.422643   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetMachineName
	I0914 18:08:23.422905   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetIP
	I0914 18:08:23.426201   62554 main.go:141] libmachine: (embed-certs-044534) DBG | domain embed-certs-044534 has defined MAC address 52:54:00:f7:d3:8e in network mk-embed-certs-044534
	I0914 18:08:23.426557   62554 main.go:141] libmachine: (embed-certs-044534) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:d3:8e", ip: ""} in network mk-embed-certs-044534: {Iface:virbr3 ExpiryTime:2024-09-14 19:00:16 +0000 UTC Type:0 Mac:52:54:00:f7:d3:8e Iaid: IPaddr:192.168.50.126 Prefix:24 Hostname:embed-certs-044534 Clientid:01:52:54:00:f7:d3:8e}
	I0914 18:08:23.426584   62554 main.go:141] libmachine: (embed-certs-044534) DBG | domain embed-certs-044534 has defined IP address 192.168.50.126 and MAC address 52:54:00:f7:d3:8e in network mk-embed-certs-044534
	I0914 18:08:23.426745   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetSSHHostname
	I0914 18:08:23.428607   62554 main.go:141] libmachine: (embed-certs-044534) DBG | domain embed-certs-044534 has defined MAC address 52:54:00:f7:d3:8e in network mk-embed-certs-044534
	I0914 18:08:23.428985   62554 main.go:141] libmachine: (embed-certs-044534) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:d3:8e", ip: ""} in network mk-embed-certs-044534: {Iface:virbr3 ExpiryTime:2024-09-14 19:00:16 +0000 UTC Type:0 Mac:52:54:00:f7:d3:8e Iaid: IPaddr:192.168.50.126 Prefix:24 Hostname:embed-certs-044534 Clientid:01:52:54:00:f7:d3:8e}
	I0914 18:08:23.429016   62554 main.go:141] libmachine: (embed-certs-044534) DBG | domain embed-certs-044534 has defined IP address 192.168.50.126 and MAC address 52:54:00:f7:d3:8e in network mk-embed-certs-044534
	I0914 18:08:23.429138   62554 provision.go:143] copyHostCerts
	I0914 18:08:23.429198   62554 exec_runner.go:144] found /home/jenkins/minikube-integration/19643-8806/.minikube/ca.pem, removing ...
	I0914 18:08:23.429211   62554 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19643-8806/.minikube/ca.pem
	I0914 18:08:23.429295   62554 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19643-8806/.minikube/ca.pem (1082 bytes)
	I0914 18:08:23.429437   62554 exec_runner.go:144] found /home/jenkins/minikube-integration/19643-8806/.minikube/cert.pem, removing ...
	I0914 18:08:23.429452   62554 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19643-8806/.minikube/cert.pem
	I0914 18:08:23.429498   62554 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19643-8806/.minikube/cert.pem (1123 bytes)
	I0914 18:08:23.429592   62554 exec_runner.go:144] found /home/jenkins/minikube-integration/19643-8806/.minikube/key.pem, removing ...
	I0914 18:08:23.429600   62554 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19643-8806/.minikube/key.pem
	I0914 18:08:23.429626   62554 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19643-8806/.minikube/key.pem (1675 bytes)
	I0914 18:08:23.429680   62554 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19643-8806/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca-key.pem org=jenkins.embed-certs-044534 san=[127.0.0.1 192.168.50.126 embed-certs-044534 localhost minikube]
	I0914 18:08:23.538590   62554 provision.go:177] copyRemoteCerts
	I0914 18:08:23.538662   62554 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0914 18:08:23.538689   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetSSHHostname
	I0914 18:08:23.541366   62554 main.go:141] libmachine: (embed-certs-044534) DBG | domain embed-certs-044534 has defined MAC address 52:54:00:f7:d3:8e in network mk-embed-certs-044534
	I0914 18:08:23.541723   62554 main.go:141] libmachine: (embed-certs-044534) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:d3:8e", ip: ""} in network mk-embed-certs-044534: {Iface:virbr3 ExpiryTime:2024-09-14 19:00:16 +0000 UTC Type:0 Mac:52:54:00:f7:d3:8e Iaid: IPaddr:192.168.50.126 Prefix:24 Hostname:embed-certs-044534 Clientid:01:52:54:00:f7:d3:8e}
	I0914 18:08:23.541746   62554 main.go:141] libmachine: (embed-certs-044534) DBG | domain embed-certs-044534 has defined IP address 192.168.50.126 and MAC address 52:54:00:f7:d3:8e in network mk-embed-certs-044534
	I0914 18:08:23.541938   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetSSHPort
	I0914 18:08:23.542120   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetSSHKeyPath
	I0914 18:08:23.542303   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetSSHUsername
	I0914 18:08:23.542413   62554 sshutil.go:53] new ssh client: &{IP:192.168.50.126 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/embed-certs-044534/id_rsa Username:docker}
	I0914 18:08:23.623698   62554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0914 18:08:23.647378   62554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0914 18:08:23.671327   62554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0914 18:08:23.694570   62554 provision.go:87] duration metric: took 271.923979ms to configureAuth
	I0914 18:08:23.694598   62554 buildroot.go:189] setting minikube options for container-runtime
	I0914 18:08:23.694779   62554 config.go:182] Loaded profile config "embed-certs-044534": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0914 18:08:23.694868   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetSSHHostname
	I0914 18:08:23.697467   62554 main.go:141] libmachine: (embed-certs-044534) DBG | domain embed-certs-044534 has defined MAC address 52:54:00:f7:d3:8e in network mk-embed-certs-044534
	I0914 18:08:23.697828   62554 main.go:141] libmachine: (embed-certs-044534) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:d3:8e", ip: ""} in network mk-embed-certs-044534: {Iface:virbr3 ExpiryTime:2024-09-14 19:00:16 +0000 UTC Type:0 Mac:52:54:00:f7:d3:8e Iaid: IPaddr:192.168.50.126 Prefix:24 Hostname:embed-certs-044534 Clientid:01:52:54:00:f7:d3:8e}
	I0914 18:08:23.697862   62554 main.go:141] libmachine: (embed-certs-044534) DBG | domain embed-certs-044534 has defined IP address 192.168.50.126 and MAC address 52:54:00:f7:d3:8e in network mk-embed-certs-044534
	I0914 18:08:23.698042   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetSSHPort
	I0914 18:08:23.698249   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetSSHKeyPath
	I0914 18:08:23.698421   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetSSHKeyPath
	I0914 18:08:23.698571   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetSSHUsername
	I0914 18:08:23.698692   62554 main.go:141] libmachine: Using SSH client type: native
	I0914 18:08:23.698945   62554 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.50.126 22 <nil> <nil>}
	I0914 18:08:23.698963   62554 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0914 18:08:23.911661   62554 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0914 18:08:23.911697   62554 machine.go:96] duration metric: took 833.189197ms to provisionDockerMachine
	I0914 18:08:23.911712   62554 start.go:293] postStartSetup for "embed-certs-044534" (driver="kvm2")
	I0914 18:08:23.911726   62554 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0914 18:08:23.911751   62554 main.go:141] libmachine: (embed-certs-044534) Calling .DriverName
	I0914 18:08:23.912134   62554 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0914 18:08:23.912169   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetSSHHostname
	I0914 18:08:23.914579   62554 main.go:141] libmachine: (embed-certs-044534) DBG | domain embed-certs-044534 has defined MAC address 52:54:00:f7:d3:8e in network mk-embed-certs-044534
	I0914 18:08:23.914974   62554 main.go:141] libmachine: (embed-certs-044534) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:d3:8e", ip: ""} in network mk-embed-certs-044534: {Iface:virbr3 ExpiryTime:2024-09-14 19:00:16 +0000 UTC Type:0 Mac:52:54:00:f7:d3:8e Iaid: IPaddr:192.168.50.126 Prefix:24 Hostname:embed-certs-044534 Clientid:01:52:54:00:f7:d3:8e}
	I0914 18:08:23.915011   62554 main.go:141] libmachine: (embed-certs-044534) DBG | domain embed-certs-044534 has defined IP address 192.168.50.126 and MAC address 52:54:00:f7:d3:8e in network mk-embed-certs-044534
	I0914 18:08:23.915121   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetSSHPort
	I0914 18:08:23.915322   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetSSHKeyPath
	I0914 18:08:23.915582   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetSSHUsername
	I0914 18:08:23.915710   62554 sshutil.go:53] new ssh client: &{IP:192.168.50.126 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/embed-certs-044534/id_rsa Username:docker}
	I0914 18:08:23.996910   62554 ssh_runner.go:195] Run: cat /etc/os-release
	I0914 18:08:24.000900   62554 info.go:137] Remote host: Buildroot 2023.02.9
	I0914 18:08:24.000926   62554 filesync.go:126] Scanning /home/jenkins/minikube-integration/19643-8806/.minikube/addons for local assets ...
	I0914 18:08:24.000998   62554 filesync.go:126] Scanning /home/jenkins/minikube-integration/19643-8806/.minikube/files for local assets ...
	I0914 18:08:24.001099   62554 filesync.go:149] local asset: /home/jenkins/minikube-integration/19643-8806/.minikube/files/etc/ssl/certs/160162.pem -> 160162.pem in /etc/ssl/certs
	I0914 18:08:24.001222   62554 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0914 18:08:24.010496   62554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/files/etc/ssl/certs/160162.pem --> /etc/ssl/certs/160162.pem (1708 bytes)
	I0914 18:08:24.033377   62554 start.go:296] duration metric: took 121.65145ms for postStartSetup
	I0914 18:08:24.033414   62554 fix.go:56] duration metric: took 19.487140172s for fixHost
	I0914 18:08:24.033434   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetSSHHostname
	I0914 18:08:24.036188   62554 main.go:141] libmachine: (embed-certs-044534) DBG | domain embed-certs-044534 has defined MAC address 52:54:00:f7:d3:8e in network mk-embed-certs-044534
	I0914 18:08:24.036494   62554 main.go:141] libmachine: (embed-certs-044534) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:d3:8e", ip: ""} in network mk-embed-certs-044534: {Iface:virbr3 ExpiryTime:2024-09-14 19:00:16 +0000 UTC Type:0 Mac:52:54:00:f7:d3:8e Iaid: IPaddr:192.168.50.126 Prefix:24 Hostname:embed-certs-044534 Clientid:01:52:54:00:f7:d3:8e}
	I0914 18:08:24.036524   62554 main.go:141] libmachine: (embed-certs-044534) DBG | domain embed-certs-044534 has defined IP address 192.168.50.126 and MAC address 52:54:00:f7:d3:8e in network mk-embed-certs-044534
	I0914 18:08:24.036672   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetSSHPort
	I0914 18:08:24.036886   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetSSHKeyPath
	I0914 18:08:24.037082   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetSSHKeyPath
	I0914 18:08:24.037216   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetSSHUsername
	I0914 18:08:24.037375   62554 main.go:141] libmachine: Using SSH client type: native
	I0914 18:08:24.037542   62554 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.50.126 22 <nil> <nil>}
	I0914 18:08:24.037554   62554 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0914 18:08:24.142822   62554 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726337304.118879777
	
	I0914 18:08:24.142851   62554 fix.go:216] guest clock: 1726337304.118879777
	I0914 18:08:24.142862   62554 fix.go:229] Guest: 2024-09-14 18:08:24.118879777 +0000 UTC Remote: 2024-09-14 18:08:24.03341777 +0000 UTC m=+259.160200473 (delta=85.462007ms)
	I0914 18:08:24.142936   62554 fix.go:200] guest clock delta is within tolerance: 85.462007ms
	I0914 18:08:24.142960   62554 start.go:83] releasing machines lock for "embed-certs-044534", held for 19.596720856s
	I0914 18:08:24.142992   62554 main.go:141] libmachine: (embed-certs-044534) Calling .DriverName
	I0914 18:08:24.143262   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetIP
	I0914 18:08:24.146122   62554 main.go:141] libmachine: (embed-certs-044534) DBG | domain embed-certs-044534 has defined MAC address 52:54:00:f7:d3:8e in network mk-embed-certs-044534
	I0914 18:08:24.146501   62554 main.go:141] libmachine: (embed-certs-044534) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:d3:8e", ip: ""} in network mk-embed-certs-044534: {Iface:virbr3 ExpiryTime:2024-09-14 19:00:16 +0000 UTC Type:0 Mac:52:54:00:f7:d3:8e Iaid: IPaddr:192.168.50.126 Prefix:24 Hostname:embed-certs-044534 Clientid:01:52:54:00:f7:d3:8e}
	I0914 18:08:24.146537   62554 main.go:141] libmachine: (embed-certs-044534) DBG | domain embed-certs-044534 has defined IP address 192.168.50.126 and MAC address 52:54:00:f7:d3:8e in network mk-embed-certs-044534
	I0914 18:08:24.146711   62554 main.go:141] libmachine: (embed-certs-044534) Calling .DriverName
	I0914 18:08:24.147204   62554 main.go:141] libmachine: (embed-certs-044534) Calling .DriverName
	I0914 18:08:24.147430   62554 main.go:141] libmachine: (embed-certs-044534) Calling .DriverName
	I0914 18:08:24.147532   62554 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0914 18:08:24.147589   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetSSHHostname
	I0914 18:08:24.147813   62554 ssh_runner.go:195] Run: cat /version.json
	I0914 18:08:24.147839   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetSSHHostname
	I0914 18:08:24.150691   62554 main.go:141] libmachine: (embed-certs-044534) DBG | domain embed-certs-044534 has defined MAC address 52:54:00:f7:d3:8e in network mk-embed-certs-044534
	I0914 18:08:24.150736   62554 main.go:141] libmachine: (embed-certs-044534) DBG | domain embed-certs-044534 has defined MAC address 52:54:00:f7:d3:8e in network mk-embed-certs-044534
	I0914 18:08:24.151012   62554 main.go:141] libmachine: (embed-certs-044534) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:d3:8e", ip: ""} in network mk-embed-certs-044534: {Iface:virbr3 ExpiryTime:2024-09-14 19:00:16 +0000 UTC Type:0 Mac:52:54:00:f7:d3:8e Iaid: IPaddr:192.168.50.126 Prefix:24 Hostname:embed-certs-044534 Clientid:01:52:54:00:f7:d3:8e}
	I0914 18:08:24.151056   62554 main.go:141] libmachine: (embed-certs-044534) DBG | domain embed-certs-044534 has defined IP address 192.168.50.126 and MAC address 52:54:00:f7:d3:8e in network mk-embed-certs-044534
	I0914 18:08:24.151149   62554 main.go:141] libmachine: (embed-certs-044534) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:d3:8e", ip: ""} in network mk-embed-certs-044534: {Iface:virbr3 ExpiryTime:2024-09-14 19:00:16 +0000 UTC Type:0 Mac:52:54:00:f7:d3:8e Iaid: IPaddr:192.168.50.126 Prefix:24 Hostname:embed-certs-044534 Clientid:01:52:54:00:f7:d3:8e}
	I0914 18:08:24.151179   62554 main.go:141] libmachine: (embed-certs-044534) DBG | domain embed-certs-044534 has defined IP address 192.168.50.126 and MAC address 52:54:00:f7:d3:8e in network mk-embed-certs-044534
	I0914 18:08:24.151431   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetSSHPort
	I0914 18:08:24.151468   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetSSHPort
	I0914 18:08:24.151586   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetSSHKeyPath
	I0914 18:08:24.151751   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetSSHKeyPath
	I0914 18:08:24.151772   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetSSHUsername
	I0914 18:08:24.151938   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetSSHUsername
	I0914 18:08:24.151944   62554 sshutil.go:53] new ssh client: &{IP:192.168.50.126 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/embed-certs-044534/id_rsa Username:docker}
	I0914 18:08:24.152034   62554 sshutil.go:53] new ssh client: &{IP:192.168.50.126 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/embed-certs-044534/id_rsa Username:docker}
	I0914 18:08:24.256821   62554 ssh_runner.go:195] Run: systemctl --version
	I0914 18:08:24.263249   62554 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0914 18:08:24.411996   62554 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0914 18:08:24.418685   62554 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0914 18:08:24.418759   62554 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0914 18:08:24.434541   62554 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0914 18:08:24.434569   62554 start.go:495] detecting cgroup driver to use...
	I0914 18:08:24.434655   62554 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0914 18:08:24.452550   62554 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0914 18:08:24.467548   62554 docker.go:217] disabling cri-docker service (if available) ...
	I0914 18:08:24.467602   62554 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0914 18:08:24.482556   62554 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0914 18:08:24.497198   62554 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0914 18:08:24.625300   62554 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0914 18:08:24.805163   62554 docker.go:233] disabling docker service ...
	I0914 18:08:24.805248   62554 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0914 18:08:24.821164   62554 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0914 18:08:24.834886   62554 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0914 18:08:24.167885   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .Start
	I0914 18:08:24.168096   62996 main.go:141] libmachine: (old-k8s-version-556121) Ensuring networks are active...
	I0914 18:08:24.169086   62996 main.go:141] libmachine: (old-k8s-version-556121) Ensuring network default is active
	I0914 18:08:24.169493   62996 main.go:141] libmachine: (old-k8s-version-556121) Ensuring network mk-old-k8s-version-556121 is active
	I0914 18:08:24.170025   62996 main.go:141] libmachine: (old-k8s-version-556121) Getting domain xml...
	I0914 18:08:24.170619   62996 main.go:141] libmachine: (old-k8s-version-556121) Creating domain...
	I0914 18:08:24.963694   62554 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0914 18:08:25.081720   62554 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0914 18:08:25.097176   62554 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0914 18:08:25.116611   62554 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0914 18:08:25.116677   62554 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 18:08:25.129500   62554 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0914 18:08:25.129586   62554 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 18:08:25.140281   62554 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 18:08:25.150925   62554 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 18:08:25.166139   62554 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0914 18:08:25.177340   62554 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 18:08:25.187662   62554 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 18:08:25.207019   62554 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 18:08:25.217207   62554 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0914 18:08:25.226988   62554 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0914 18:08:25.227065   62554 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0914 18:08:25.248357   62554 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0914 18:08:25.258467   62554 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 18:08:25.375359   62554 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0914 18:08:25.470389   62554 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0914 18:08:25.470470   62554 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0914 18:08:25.475526   62554 start.go:563] Will wait 60s for crictl version
	I0914 18:08:25.475589   62554 ssh_runner.go:195] Run: which crictl
	I0914 18:08:25.479131   62554 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0914 18:08:25.530371   62554 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0914 18:08:25.530461   62554 ssh_runner.go:195] Run: crio --version
	I0914 18:08:25.557035   62554 ssh_runner.go:195] Run: crio --version
	I0914 18:08:25.586883   62554 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0914 18:08:25.588117   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetIP
	I0914 18:08:25.591212   62554 main.go:141] libmachine: (embed-certs-044534) DBG | domain embed-certs-044534 has defined MAC address 52:54:00:f7:d3:8e in network mk-embed-certs-044534
	I0914 18:08:25.591600   62554 main.go:141] libmachine: (embed-certs-044534) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:d3:8e", ip: ""} in network mk-embed-certs-044534: {Iface:virbr3 ExpiryTime:2024-09-14 19:00:16 +0000 UTC Type:0 Mac:52:54:00:f7:d3:8e Iaid: IPaddr:192.168.50.126 Prefix:24 Hostname:embed-certs-044534 Clientid:01:52:54:00:f7:d3:8e}
	I0914 18:08:25.591628   62554 main.go:141] libmachine: (embed-certs-044534) DBG | domain embed-certs-044534 has defined IP address 192.168.50.126 and MAC address 52:54:00:f7:d3:8e in network mk-embed-certs-044534
	I0914 18:08:25.591816   62554 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0914 18:08:25.595706   62554 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0914 18:08:25.608009   62554 kubeadm.go:883] updating cluster {Name:embed-certs-044534 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19643/minikube-v1.34.0-1726281733-19643-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726281268-19643@sha256:cce8be0c1ac4e3d852132008ef1cc1dcf5b79f708d025db83f146ae65db32e8e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-044534 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.126 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0914 18:08:25.608141   62554 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0914 18:08:25.608194   62554 ssh_runner.go:195] Run: sudo crictl images --output json
	I0914 18:08:25.643422   62554 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0914 18:08:25.643515   62554 ssh_runner.go:195] Run: which lz4
	I0914 18:08:25.647471   62554 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0914 18:08:25.651573   62554 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0914 18:08:25.651607   62554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I0914 18:08:26.985357   62554 crio.go:462] duration metric: took 1.337911722s to copy over tarball
	I0914 18:08:26.985437   62554 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0914 18:08:29.111492   62554 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.126022567s)
	I0914 18:08:29.111524   62554 crio.go:469] duration metric: took 2.12613646s to extract the tarball
	I0914 18:08:29.111533   62554 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0914 18:08:29.148426   62554 ssh_runner.go:195] Run: sudo crictl images --output json
	I0914 18:08:29.190595   62554 crio.go:514] all images are preloaded for cri-o runtime.
	I0914 18:08:29.190620   62554 cache_images.go:84] Images are preloaded, skipping loading
	I0914 18:08:29.190628   62554 kubeadm.go:934] updating node { 192.168.50.126 8443 v1.31.1 crio true true} ...
	I0914 18:08:29.190751   62554 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-044534 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.126
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:embed-certs-044534 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0914 18:08:29.190823   62554 ssh_runner.go:195] Run: crio config
	I0914 18:08:29.234785   62554 cni.go:84] Creating CNI manager for ""
	I0914 18:08:29.234808   62554 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0914 18:08:29.234818   62554 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0914 18:08:29.234871   62554 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.126 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-044534 NodeName:embed-certs-044534 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.126"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.126 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0914 18:08:29.234996   62554 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.126
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-044534"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.126
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.126"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0914 18:08:29.235054   62554 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0914 18:08:29.244554   62554 binaries.go:44] Found k8s binaries, skipping transfer
	I0914 18:08:29.244631   62554 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0914 18:08:29.253622   62554 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0914 18:08:29.270046   62554 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0914 18:08:29.285751   62554 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
	I0914 18:08:29.303567   62554 ssh_runner.go:195] Run: grep 192.168.50.126	control-plane.minikube.internal$ /etc/hosts
	I0914 18:08:29.307335   62554 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.126	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0914 18:08:29.319510   62554 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 18:08:29.442649   62554 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0914 18:08:29.459657   62554 certs.go:68] Setting up /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/embed-certs-044534 for IP: 192.168.50.126
	I0914 18:08:29.459687   62554 certs.go:194] generating shared ca certs ...
	I0914 18:08:29.459709   62554 certs.go:226] acquiring lock for ca certs: {Name:mkb663a3180967f5f94f0c355b2cd55067394331 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 18:08:29.459908   62554 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19643-8806/.minikube/ca.key
	I0914 18:08:29.459976   62554 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19643-8806/.minikube/proxy-client-ca.key
	I0914 18:08:29.459995   62554 certs.go:256] generating profile certs ...
	I0914 18:08:29.460166   62554 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/embed-certs-044534/client.key
	I0914 18:08:29.460247   62554 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/embed-certs-044534/apiserver.key.15c978c5
	I0914 18:08:29.460301   62554 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/embed-certs-044534/proxy-client.key
	I0914 18:08:29.460447   62554 certs.go:484] found cert: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/16016.pem (1338 bytes)
	W0914 18:08:29.460491   62554 certs.go:480] ignoring /home/jenkins/minikube-integration/19643-8806/.minikube/certs/16016_empty.pem, impossibly tiny 0 bytes
	I0914 18:08:29.460505   62554 certs.go:484] found cert: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca-key.pem (1679 bytes)
	I0914 18:08:29.460537   62554 certs.go:484] found cert: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca.pem (1082 bytes)
	I0914 18:08:29.460581   62554 certs.go:484] found cert: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/cert.pem (1123 bytes)
	I0914 18:08:29.460605   62554 certs.go:484] found cert: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/key.pem (1675 bytes)
	I0914 18:08:29.460649   62554 certs.go:484] found cert: /home/jenkins/minikube-integration/19643-8806/.minikube/files/etc/ssl/certs/160162.pem (1708 bytes)
	I0914 18:08:29.461415   62554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0914 18:08:29.501260   62554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0914 18:08:29.531940   62554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0914 18:08:29.577959   62554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0914 18:08:29.604067   62554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/embed-certs-044534/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0914 18:08:29.635335   62554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/embed-certs-044534/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0914 18:08:29.658841   62554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/embed-certs-044534/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0914 18:08:29.684149   62554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/embed-certs-044534/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0914 18:08:29.709354   62554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0914 18:08:29.733812   62554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/certs/16016.pem --> /usr/share/ca-certificates/16016.pem (1338 bytes)
	I0914 18:08:29.758427   62554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/files/etc/ssl/certs/160162.pem --> /usr/share/ca-certificates/160162.pem (1708 bytes)
	I0914 18:08:29.783599   62554 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0914 18:08:29.802188   62554 ssh_runner.go:195] Run: openssl version
	I0914 18:08:29.808277   62554 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0914 18:08:29.821167   62554 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0914 18:08:29.825911   62554 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 14 16:44 /usr/share/ca-certificates/minikubeCA.pem
	I0914 18:08:29.825978   62554 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0914 18:08:29.832160   62554 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0914 18:08:29.844395   62554 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16016.pem && ln -fs /usr/share/ca-certificates/16016.pem /etc/ssl/certs/16016.pem"
	I0914 18:08:29.856943   62554 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16016.pem
	I0914 18:08:29.861671   62554 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 14 17:01 /usr/share/ca-certificates/16016.pem
	I0914 18:08:29.861730   62554 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16016.pem
	I0914 18:08:29.867506   62554 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16016.pem /etc/ssl/certs/51391683.0"
	I0914 18:08:29.878004   62554 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/160162.pem && ln -fs /usr/share/ca-certificates/160162.pem /etc/ssl/certs/160162.pem"
	I0914 18:08:29.890322   62554 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/160162.pem
	I0914 18:08:29.894985   62554 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 14 17:01 /usr/share/ca-certificates/160162.pem
	I0914 18:08:29.895053   62554 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/160162.pem
	I0914 18:08:29.900837   62554 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/160162.pem /etc/ssl/certs/3ec20f2e.0"
	I0914 18:08:25.409780   62996 main.go:141] libmachine: (old-k8s-version-556121) Waiting to get IP...
	I0914 18:08:25.410880   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 18:08:25.411287   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | unable to find current IP address of domain old-k8s-version-556121 in network mk-old-k8s-version-556121
	I0914 18:08:25.411359   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | I0914 18:08:25.411268   63916 retry.go:31] will retry after 190.165859ms: waiting for machine to come up
	I0914 18:08:25.602661   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 18:08:25.603210   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | unable to find current IP address of domain old-k8s-version-556121 in network mk-old-k8s-version-556121
	I0914 18:08:25.603235   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | I0914 18:08:25.603161   63916 retry.go:31] will retry after 274.368109ms: waiting for machine to come up
	I0914 18:08:25.879976   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 18:08:25.880476   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | unable to find current IP address of domain old-k8s-version-556121 in network mk-old-k8s-version-556121
	I0914 18:08:25.880509   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | I0914 18:08:25.880412   63916 retry.go:31] will retry after 476.865698ms: waiting for machine to come up
	I0914 18:08:26.359279   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 18:08:26.359815   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | unable to find current IP address of domain old-k8s-version-556121 in network mk-old-k8s-version-556121
	I0914 18:08:26.359845   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | I0914 18:08:26.359775   63916 retry.go:31] will retry after 474.163339ms: waiting for machine to come up
	I0914 18:08:26.835268   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 18:08:26.835953   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | unable to find current IP address of domain old-k8s-version-556121 in network mk-old-k8s-version-556121
	I0914 18:08:26.835983   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | I0914 18:08:26.835914   63916 retry.go:31] will retry after 567.661702ms: waiting for machine to come up
	I0914 18:08:27.404884   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 18:08:27.405341   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | unable to find current IP address of domain old-k8s-version-556121 in network mk-old-k8s-version-556121
	I0914 18:08:27.405370   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | I0914 18:08:27.405297   63916 retry.go:31] will retry after 852.429203ms: waiting for machine to come up
	I0914 18:08:28.259542   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 18:08:28.260217   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | unable to find current IP address of domain old-k8s-version-556121 in network mk-old-k8s-version-556121
	I0914 18:08:28.260243   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | I0914 18:08:28.260154   63916 retry.go:31] will retry after 1.085703288s: waiting for machine to come up
	I0914 18:08:29.347849   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 18:08:29.348268   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | unable to find current IP address of domain old-k8s-version-556121 in network mk-old-k8s-version-556121
	I0914 18:08:29.348289   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | I0914 18:08:29.348235   63916 retry.go:31] will retry after 1.387665735s: waiting for machine to come up
	I0914 18:08:29.911102   62554 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0914 18:08:29.915546   62554 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0914 18:08:29.921470   62554 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0914 18:08:29.927238   62554 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0914 18:08:29.933122   62554 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0914 18:08:29.938829   62554 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0914 18:08:29.944811   62554 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0914 18:08:29.950679   62554 kubeadm.go:392] StartCluster: {Name:embed-certs-044534 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19643/minikube-v1.34.0-1726281733-19643-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726281268-19643@sha256:cce8be0c1ac4e3d852132008ef1cc1dcf5b79f708d025db83f146ae65db32e8e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-044534 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.126 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0914 18:08:29.950762   62554 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0914 18:08:29.950866   62554 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0914 18:08:29.987553   62554 cri.go:89] found id: ""
	I0914 18:08:29.987626   62554 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0914 18:08:29.998690   62554 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0914 18:08:29.998713   62554 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0914 18:08:29.998765   62554 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0914 18:08:30.009411   62554 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0914 18:08:30.010804   62554 kubeconfig.go:125] found "embed-certs-044534" server: "https://192.168.50.126:8443"
	I0914 18:08:30.013635   62554 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0914 18:08:30.023903   62554 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.126
	I0914 18:08:30.023937   62554 kubeadm.go:1160] stopping kube-system containers ...
	I0914 18:08:30.023951   62554 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0914 18:08:30.024017   62554 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0914 18:08:30.067767   62554 cri.go:89] found id: ""
	I0914 18:08:30.067842   62554 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0914 18:08:30.087326   62554 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0914 18:08:30.098162   62554 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0914 18:08:30.098180   62554 kubeadm.go:157] found existing configuration files:
	
	I0914 18:08:30.098218   62554 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0914 18:08:30.108239   62554 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0914 18:08:30.108296   62554 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0914 18:08:30.118913   62554 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0914 18:08:30.129091   62554 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0914 18:08:30.129172   62554 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0914 18:08:30.139658   62554 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0914 18:08:30.148838   62554 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0914 18:08:30.148923   62554 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0914 18:08:30.158386   62554 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0914 18:08:30.167282   62554 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0914 18:08:30.167354   62554 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0914 18:08:30.176443   62554 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0914 18:08:30.185476   62554 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 18:08:30.310603   62554 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 18:08:31.243123   62554 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0914 18:08:31.457657   62554 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 18:08:31.531992   62554 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0914 18:08:31.625580   62554 api_server.go:52] waiting for apiserver process to appear ...
	I0914 18:08:31.625683   62554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:08:32.125744   62554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:08:32.626056   62554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:08:33.126817   62554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:08:33.146478   62554 api_server.go:72] duration metric: took 1.520896575s to wait for apiserver process to appear ...
	I0914 18:08:33.146517   62554 api_server.go:88] waiting for apiserver healthz status ...
	I0914 18:08:33.146543   62554 api_server.go:253] Checking apiserver healthz at https://192.168.50.126:8443/healthz ...
	I0914 18:08:33.147106   62554 api_server.go:269] stopped: https://192.168.50.126:8443/healthz: Get "https://192.168.50.126:8443/healthz": dial tcp 192.168.50.126:8443: connect: connection refused
	I0914 18:08:33.646672   62554 api_server.go:253] Checking apiserver healthz at https://192.168.50.126:8443/healthz ...
	I0914 18:08:30.737338   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 18:08:30.737792   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | unable to find current IP address of domain old-k8s-version-556121 in network mk-old-k8s-version-556121
	I0914 18:08:30.737844   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | I0914 18:08:30.737738   63916 retry.go:31] will retry after 1.803773185s: waiting for machine to come up
	I0914 18:08:32.543684   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 18:08:32.544156   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | unable to find current IP address of domain old-k8s-version-556121 in network mk-old-k8s-version-556121
	I0914 18:08:32.544182   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | I0914 18:08:32.544107   63916 retry.go:31] will retry after 1.828120666s: waiting for machine to come up
	I0914 18:08:34.373701   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 18:08:34.374182   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | unable to find current IP address of domain old-k8s-version-556121 in network mk-old-k8s-version-556121
	I0914 18:08:34.374211   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | I0914 18:08:34.374120   63916 retry.go:31] will retry after 2.720782735s: waiting for machine to come up
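
The irregular "will retry after" intervals above (1.38s, 1.80s, 1.82s, 2.72s, ...) come from a retry helper that grows the delay and adds jitter while waiting for the VM's DHCP lease to appear. A rough sketch of that pattern follows; lookupIP is a hypothetical stand-in for the real libvirt lease query, so this only illustrates the backoff shape, not minikube's actual implementation.

    // retrysketch.go - sketch of a retry loop with growing, jittered delay.
    package main

    import (
    	"errors"
    	"fmt"
    	"math/rand"
    	"time"
    )

    var errNoIP = errors.New("unable to find current IP address of domain")

    // lookupIP is hypothetical; the real code asks libvirt for the DHCP
    // lease matching the domain's MAC address.
    func lookupIP(domain string) (string, error) {
    	return "", errNoIP // pretend the lease has not appeared yet
    }

    func waitForIP(domain string, timeout time.Duration) (string, error) {
    	deadline := time.Now().Add(timeout)
    	delay := time.Second
    	for time.Now().Before(deadline) {
    		if ip, err := lookupIP(domain); err == nil {
    			return ip, nil
    		}
    		// Grow the delay and add jitter, which is why the logged
    		// intervals are irregular rather than a fixed step.
    		sleep := delay + time.Duration(rand.Int63n(int64(delay)))
    		fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
    		time.Sleep(sleep)
    		if delay < 8*time.Second {
    			delay += delay / 2
    		}
    	}
    	return "", fmt.Errorf("timed out waiting for %s to come up", domain)
    }

    func main() {
    	if _, err := waitForIP("old-k8s-version-556121", 3*time.Second); err != nil {
    		fmt.Println(err)
    	}
    }
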
	I0914 18:08:35.687169   62554 api_server.go:279] https://192.168.50.126:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0914 18:08:35.687200   62554 api_server.go:103] status: https://192.168.50.126:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0914 18:08:35.687221   62554 api_server.go:253] Checking apiserver healthz at https://192.168.50.126:8443/healthz ...
	I0914 18:08:35.737352   62554 api_server.go:279] https://192.168.50.126:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0914 18:08:35.737410   62554 api_server.go:103] status: https://192.168.50.126:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0914 18:08:36.146777   62554 api_server.go:253] Checking apiserver healthz at https://192.168.50.126:8443/healthz ...
	I0914 18:08:36.151156   62554 api_server.go:279] https://192.168.50.126:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0914 18:08:36.151185   62554 api_server.go:103] status: https://192.168.50.126:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0914 18:08:36.647380   62554 api_server.go:253] Checking apiserver healthz at https://192.168.50.126:8443/healthz ...
	I0914 18:08:36.655444   62554 api_server.go:279] https://192.168.50.126:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0914 18:08:36.655477   62554 api_server.go:103] status: https://192.168.50.126:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0914 18:08:37.146971   62554 api_server.go:253] Checking apiserver healthz at https://192.168.50.126:8443/healthz ...
	I0914 18:08:37.151233   62554 api_server.go:279] https://192.168.50.126:8443/healthz returned 200:
	ok
	I0914 18:08:37.160642   62554 api_server.go:141] control plane version: v1.31.1
	I0914 18:08:37.160671   62554 api_server.go:131] duration metric: took 4.014146932s to wait for apiserver health ...
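
The health wait above polls /healthz over HTTPS until it returns 200, treating the connection refusal, the anonymous 403s (RBAC bootstrap roles not yet created), and the 500s from pending poststart hooks as "not ready yet". Here is a minimal sketch of that loop; TLS verification is skipped because the probe runs anonymously as the 403 responses suggest, and the address is taken from the log.

    // healthzwait.go - sketch of polling the apiserver /healthz endpoint.
    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    func waitForHealthz(url string, timeout time.Duration) error {
    	client := &http.Client{
    		Timeout:   5 * time.Second,
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		resp, err := client.Get(url)
    		if err != nil {
    			// e.g. connection refused while the apiserver restarts
    			fmt.Println("stopped:", err)
    		} else {
    			body, _ := io.ReadAll(resp.Body)
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				return nil // body is simply "ok"
    			}
    			fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("apiserver did not become healthy within %v", timeout)
    }

    func main() {
    	if err := waitForHealthz("https://192.168.50.126:8443/healthz", 2*time.Minute); err != nil {
    		fmt.Println(err)
    	}
    }
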
	I0914 18:08:37.160679   62554 cni.go:84] Creating CNI manager for ""
	I0914 18:08:37.160686   62554 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0914 18:08:37.162836   62554 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0914 18:08:37.164378   62554 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0914 18:08:37.183377   62554 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0914 18:08:37.210701   62554 system_pods.go:43] waiting for kube-system pods to appear ...
	I0914 18:08:37.222258   62554 system_pods.go:59] 8 kube-system pods found
	I0914 18:08:37.222304   62554 system_pods.go:61] "coredns-7c65d6cfc9-59dm5" [55e67ff8-cf54-41fc-af46-160085787f69] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0914 18:08:37.222316   62554 system_pods.go:61] "etcd-embed-certs-044534" [932ca8e3-a777-4bb3-bdc2-6c1f1d293d4a] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0914 18:08:37.222331   62554 system_pods.go:61] "kube-apiserver-embed-certs-044534" [f71e6720-c32c-426f-8620-b56eadf5e33b] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0914 18:08:37.222351   62554 system_pods.go:61] "kube-controller-manager-embed-certs-044534" [b93c261f-303f-43bb-8b33-4f97dc287809] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0914 18:08:37.222359   62554 system_pods.go:61] "kube-proxy-nkdth" [3762b613-c50f-4ba9-af52-371b139f9b6e] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0914 18:08:37.222368   62554 system_pods.go:61] "kube-scheduler-embed-certs-044534" [65da2ca2-0405-4726-a2dc-dd13519c336a] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0914 18:08:37.222377   62554 system_pods.go:61] "metrics-server-6867b74b74-stwfz" [ccc73057-4710-4e41-b643-d793d9b01175] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 18:08:37.222393   62554 system_pods.go:61] "storage-provisioner" [660fd3e3-ce57-4275-9fe1-bcceba75d8a0] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0914 18:08:37.222405   62554 system_pods.go:74] duration metric: took 11.676128ms to wait for pod list to return data ...
	I0914 18:08:37.222420   62554 node_conditions.go:102] verifying NodePressure condition ...
	I0914 18:08:37.227047   62554 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0914 18:08:37.227087   62554 node_conditions.go:123] node cpu capacity is 2
	I0914 18:08:37.227104   62554 node_conditions.go:105] duration metric: took 4.678826ms to run NodePressure ...
	I0914 18:08:37.227124   62554 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 18:08:37.510868   62554 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0914 18:08:37.515839   62554 kubeadm.go:739] kubelet initialised
	I0914 18:08:37.515863   62554 kubeadm.go:740] duration metric: took 4.967389ms waiting for restarted kubelet to initialise ...
	I0914 18:08:37.515871   62554 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 18:08:37.520412   62554 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-59dm5" in "kube-system" namespace to be "Ready" ...
	I0914 18:08:39.528469   62554 pod_ready.go:103] pod "coredns-7c65d6cfc9-59dm5" in "kube-system" namespace has status "Ready":"False"
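
The pod_ready wait above repeatedly reads each system-critical pod and checks its PodReady condition until it reports True or the 4m0s budget runs out. A minimal client-go sketch of that loop is shown below; the kubeconfig path is illustrative and the pod name is taken from the log.

    // podready.go - sketch of waiting for a kube-system pod to become Ready.
    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func isReady(pod *corev1.Pod) bool {
    	for _, c := range pod.Status.Conditions {
    		if c.Type == corev1.PodReady {
    			return c.Status == corev1.ConditionTrue
    		}
    	}
    	return false
    }

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // illustrative path
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	deadline := time.Now().Add(4 * time.Minute) // matches the 4m0s budget in the log
    	for time.Now().Before(deadline) {
    		pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "coredns-7c65d6cfc9-59dm5", metav1.GetOptions{})
    		if err == nil && isReady(pod) {
    			fmt.Println("pod is Ready")
    			return
    		}
    		fmt.Println(`pod has status "Ready":"False"`)
    		time.Sleep(2 * time.Second)
    	}
    	fmt.Println("timed out waiting for pod to be Ready")
    }
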
	I0914 18:08:37.097976   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 18:08:37.098462   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | unable to find current IP address of domain old-k8s-version-556121 in network mk-old-k8s-version-556121
	I0914 18:08:37.098499   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | I0914 18:08:37.098402   63916 retry.go:31] will retry after 2.748765758s: waiting for machine to come up
	I0914 18:08:39.849058   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 18:08:39.849634   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | unable to find current IP address of domain old-k8s-version-556121 in network mk-old-k8s-version-556121
	I0914 18:08:39.849665   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | I0914 18:08:39.849559   63916 retry.go:31] will retry after 3.687679512s: waiting for machine to come up
	I0914 18:08:42.028017   62554 pod_ready.go:103] pod "coredns-7c65d6cfc9-59dm5" in "kube-system" namespace has status "Ready":"False"
	I0914 18:08:44.526502   62554 pod_ready.go:103] pod "coredns-7c65d6cfc9-59dm5" in "kube-system" namespace has status "Ready":"False"
	I0914 18:08:45.103061   63448 start.go:364] duration metric: took 2m4.701591278s to acquireMachinesLock for "default-k8s-diff-port-243449"
	I0914 18:08:45.103116   63448 start.go:96] Skipping create...Using existing machine configuration
	I0914 18:08:45.103124   63448 fix.go:54] fixHost starting: 
	I0914 18:08:45.103555   63448 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 18:08:45.103626   63448 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 18:08:45.120496   63448 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34337
	I0914 18:08:45.121098   63448 main.go:141] libmachine: () Calling .GetVersion
	I0914 18:08:45.122023   63448 main.go:141] libmachine: Using API Version  1
	I0914 18:08:45.122050   63448 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 18:08:45.122440   63448 main.go:141] libmachine: () Calling .GetMachineName
	I0914 18:08:45.122631   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .DriverName
	I0914 18:08:45.122792   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetState
	I0914 18:08:45.124473   63448 fix.go:112] recreateIfNeeded on default-k8s-diff-port-243449: state=Stopped err=<nil>
	I0914 18:08:45.124500   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .DriverName
	W0914 18:08:45.124633   63448 fix.go:138] unexpected machine state, will restart: <nil>
	I0914 18:08:45.126255   63448 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-243449" ...
	I0914 18:08:45.127296   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .Start
	I0914 18:08:45.127469   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Ensuring networks are active...
	I0914 18:08:45.128415   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Ensuring network default is active
	I0914 18:08:45.128823   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Ensuring network mk-default-k8s-diff-port-243449 is active
	I0914 18:08:45.129257   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Getting domain xml...
	I0914 18:08:45.130055   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Creating domain...
	I0914 18:08:43.541607   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 18:08:43.542188   62996 main.go:141] libmachine: (old-k8s-version-556121) Found IP for machine: 192.168.83.80
	I0914 18:08:43.542220   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has current primary IP address 192.168.83.80 and MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 18:08:43.542230   62996 main.go:141] libmachine: (old-k8s-version-556121) Reserving static IP address...
	I0914 18:08:43.542686   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | found host DHCP lease matching {name: "old-k8s-version-556121", mac: "52:54:00:76:25:ab", ip: "192.168.83.80"} in network mk-old-k8s-version-556121: {Iface:virbr1 ExpiryTime:2024-09-14 19:08:35 +0000 UTC Type:0 Mac:52:54:00:76:25:ab Iaid: IPaddr:192.168.83.80 Prefix:24 Hostname:old-k8s-version-556121 Clientid:01:52:54:00:76:25:ab}
	I0914 18:08:43.542711   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | skip adding static IP to network mk-old-k8s-version-556121 - found existing host DHCP lease matching {name: "old-k8s-version-556121", mac: "52:54:00:76:25:ab", ip: "192.168.83.80"}
	I0914 18:08:43.542728   62996 main.go:141] libmachine: (old-k8s-version-556121) Reserved static IP address: 192.168.83.80
	I0914 18:08:43.542748   62996 main.go:141] libmachine: (old-k8s-version-556121) Waiting for SSH to be available...
	I0914 18:08:43.542770   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | Getting to WaitForSSH function...
	I0914 18:08:43.545361   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 18:08:43.545798   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:25:ab", ip: ""} in network mk-old-k8s-version-556121: {Iface:virbr1 ExpiryTime:2024-09-14 19:08:35 +0000 UTC Type:0 Mac:52:54:00:76:25:ab Iaid: IPaddr:192.168.83.80 Prefix:24 Hostname:old-k8s-version-556121 Clientid:01:52:54:00:76:25:ab}
	I0914 18:08:43.545828   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined IP address 192.168.83.80 and MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 18:08:43.545984   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | Using SSH client type: external
	I0914 18:08:43.546021   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | Using SSH private key: /home/jenkins/minikube-integration/19643-8806/.minikube/machines/old-k8s-version-556121/id_rsa (-rw-------)
	I0914 18:08:43.546067   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.83.80 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19643-8806/.minikube/machines/old-k8s-version-556121/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0914 18:08:43.546091   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | About to run SSH command:
	I0914 18:08:43.546109   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | exit 0
	I0914 18:08:43.686605   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | SSH cmd err, output: <nil>: 
	I0914 18:08:43.687033   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetConfigRaw
	I0914 18:08:43.750102   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetIP
	I0914 18:08:43.753303   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 18:08:43.753653   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:25:ab", ip: ""} in network mk-old-k8s-version-556121: {Iface:virbr1 ExpiryTime:2024-09-14 19:08:35 +0000 UTC Type:0 Mac:52:54:00:76:25:ab Iaid: IPaddr:192.168.83.80 Prefix:24 Hostname:old-k8s-version-556121 Clientid:01:52:54:00:76:25:ab}
	I0914 18:08:43.753696   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined IP address 192.168.83.80 and MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 18:08:43.754107   62996 profile.go:143] Saving config to /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/old-k8s-version-556121/config.json ...
	I0914 18:08:43.802426   62996 machine.go:93] provisionDockerMachine start ...
	I0914 18:08:43.802497   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .DriverName
	I0914 18:08:43.802858   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHHostname
	I0914 18:08:43.805944   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 18:08:43.806303   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:25:ab", ip: ""} in network mk-old-k8s-version-556121: {Iface:virbr1 ExpiryTime:2024-09-14 19:08:35 +0000 UTC Type:0 Mac:52:54:00:76:25:ab Iaid: IPaddr:192.168.83.80 Prefix:24 Hostname:old-k8s-version-556121 Clientid:01:52:54:00:76:25:ab}
	I0914 18:08:43.806346   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined IP address 192.168.83.80 and MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 18:08:43.806722   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHPort
	I0914 18:08:43.806951   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHKeyPath
	I0914 18:08:43.807130   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHKeyPath
	I0914 18:08:43.807298   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHUsername
	I0914 18:08:43.807469   62996 main.go:141] libmachine: Using SSH client type: native
	I0914 18:08:43.807687   62996 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.83.80 22 <nil> <nil>}
	I0914 18:08:43.807700   62996 main.go:141] libmachine: About to run SSH command:
	hostname
	I0914 18:08:43.906427   62996 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0914 18:08:43.906467   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetMachineName
	I0914 18:08:43.906725   62996 buildroot.go:166] provisioning hostname "old-k8s-version-556121"
	I0914 18:08:43.906787   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetMachineName
	I0914 18:08:43.906978   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHHostname
	I0914 18:08:43.909891   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 18:08:43.910262   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:25:ab", ip: ""} in network mk-old-k8s-version-556121: {Iface:virbr1 ExpiryTime:2024-09-14 19:08:35 +0000 UTC Type:0 Mac:52:54:00:76:25:ab Iaid: IPaddr:192.168.83.80 Prefix:24 Hostname:old-k8s-version-556121 Clientid:01:52:54:00:76:25:ab}
	I0914 18:08:43.910295   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined IP address 192.168.83.80 and MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 18:08:43.910545   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHPort
	I0914 18:08:43.910771   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHKeyPath
	I0914 18:08:43.910908   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHKeyPath
	I0914 18:08:43.911062   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHUsername
	I0914 18:08:43.911221   62996 main.go:141] libmachine: Using SSH client type: native
	I0914 18:08:43.911418   62996 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.83.80 22 <nil> <nil>}
	I0914 18:08:43.911430   62996 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-556121 && echo "old-k8s-version-556121" | sudo tee /etc/hostname
	I0914 18:08:44.028748   62996 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-556121
	
	I0914 18:08:44.028774   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHHostname
	I0914 18:08:44.031512   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 18:08:44.031824   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:25:ab", ip: ""} in network mk-old-k8s-version-556121: {Iface:virbr1 ExpiryTime:2024-09-14 19:08:35 +0000 UTC Type:0 Mac:52:54:00:76:25:ab Iaid: IPaddr:192.168.83.80 Prefix:24 Hostname:old-k8s-version-556121 Clientid:01:52:54:00:76:25:ab}
	I0914 18:08:44.031848   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined IP address 192.168.83.80 and MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 18:08:44.032009   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHPort
	I0914 18:08:44.032145   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHKeyPath
	I0914 18:08:44.032311   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHKeyPath
	I0914 18:08:44.032445   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHUsername
	I0914 18:08:44.032583   62996 main.go:141] libmachine: Using SSH client type: native
	I0914 18:08:44.032792   62996 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.83.80 22 <nil> <nil>}
	I0914 18:08:44.032809   62996 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-556121' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-556121/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-556121' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0914 18:08:44.140041   62996 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0914 18:08:44.140068   62996 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19643-8806/.minikube CaCertPath:/home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19643-8806/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19643-8806/.minikube}
	I0914 18:08:44.140094   62996 buildroot.go:174] setting up certificates
	I0914 18:08:44.140103   62996 provision.go:84] configureAuth start
	I0914 18:08:44.140111   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetMachineName
	I0914 18:08:44.140439   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetIP
	I0914 18:08:44.143050   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 18:08:44.143454   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:25:ab", ip: ""} in network mk-old-k8s-version-556121: {Iface:virbr1 ExpiryTime:2024-09-14 19:08:35 +0000 UTC Type:0 Mac:52:54:00:76:25:ab Iaid: IPaddr:192.168.83.80 Prefix:24 Hostname:old-k8s-version-556121 Clientid:01:52:54:00:76:25:ab}
	I0914 18:08:44.143492   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined IP address 192.168.83.80 and MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 18:08:44.143678   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHHostname
	I0914 18:08:44.146487   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 18:08:44.146947   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:25:ab", ip: ""} in network mk-old-k8s-version-556121: {Iface:virbr1 ExpiryTime:2024-09-14 19:08:35 +0000 UTC Type:0 Mac:52:54:00:76:25:ab Iaid: IPaddr:192.168.83.80 Prefix:24 Hostname:old-k8s-version-556121 Clientid:01:52:54:00:76:25:ab}
	I0914 18:08:44.146971   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined IP address 192.168.83.80 and MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 18:08:44.147147   62996 provision.go:143] copyHostCerts
	I0914 18:08:44.147213   62996 exec_runner.go:144] found /home/jenkins/minikube-integration/19643-8806/.minikube/ca.pem, removing ...
	I0914 18:08:44.147224   62996 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19643-8806/.minikube/ca.pem
	I0914 18:08:44.147287   62996 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19643-8806/.minikube/ca.pem (1082 bytes)
	I0914 18:08:44.147440   62996 exec_runner.go:144] found /home/jenkins/minikube-integration/19643-8806/.minikube/cert.pem, removing ...
	I0914 18:08:44.147450   62996 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19643-8806/.minikube/cert.pem
	I0914 18:08:44.147475   62996 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19643-8806/.minikube/cert.pem (1123 bytes)
	I0914 18:08:44.147530   62996 exec_runner.go:144] found /home/jenkins/minikube-integration/19643-8806/.minikube/key.pem, removing ...
	I0914 18:08:44.147538   62996 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19643-8806/.minikube/key.pem
	I0914 18:08:44.147558   62996 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19643-8806/.minikube/key.pem (1675 bytes)
	I0914 18:08:44.147613   62996 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19643-8806/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-556121 san=[127.0.0.1 192.168.83.80 localhost minikube old-k8s-version-556121]
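
The server certificate generated above carries the subject alternative names listed in san=[...] so one certificate is valid for the node IP, localhost, and the machine name. Below is a small Go sketch of producing such a certificate; note the real provisioner signs server.pem with the shared minikube CA, whereas this sketch self-signs for brevity.

    // servercert.go - sketch of generating a server cert with SANs.
    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"log"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	key, err := rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		log.Fatal(err)
    	}
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(1),
    		Subject:      pkix.Name{Organization: []string{"jenkins.old-k8s-version-556121"}},
    		NotBefore:    time.Now().Add(-time.Hour),
    		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		// Subject alternative names, mirroring the san=[...] list in the log.
    		DNSNames:    []string{"localhost", "minikube", "old-k8s-version-556121"},
    		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.83.80")},
    	}
    	// Self-signed (template used as its own parent) for brevity.
    	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
    	if err != nil {
    		log.Fatal(err)
    	}
    	// Emit the certificate in PEM form, as server.pem would be stored.
    	if err := pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der}); err != nil {
    		log.Fatal(err)
    	}
    }
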
	I0914 18:08:44.500305   62996 provision.go:177] copyRemoteCerts
	I0914 18:08:44.500395   62996 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0914 18:08:44.500430   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHHostname
	I0914 18:08:44.503376   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 18:08:44.503790   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:25:ab", ip: ""} in network mk-old-k8s-version-556121: {Iface:virbr1 ExpiryTime:2024-09-14 19:08:35 +0000 UTC Type:0 Mac:52:54:00:76:25:ab Iaid: IPaddr:192.168.83.80 Prefix:24 Hostname:old-k8s-version-556121 Clientid:01:52:54:00:76:25:ab}
	I0914 18:08:44.503828   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined IP address 192.168.83.80 and MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 18:08:44.503972   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHPort
	I0914 18:08:44.504194   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHKeyPath
	I0914 18:08:44.504352   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHUsername
	I0914 18:08:44.504531   62996 sshutil.go:53] new ssh client: &{IP:192.168.83.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/old-k8s-version-556121/id_rsa Username:docker}
	I0914 18:08:44.584362   62996 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0914 18:08:44.607734   62996 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0914 18:08:44.630267   62996 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0914 18:08:44.653997   62996 provision.go:87] duration metric: took 513.857804ms to configureAuth
	I0914 18:08:44.654029   62996 buildroot.go:189] setting minikube options for container-runtime
	I0914 18:08:44.654259   62996 config.go:182] Loaded profile config "old-k8s-version-556121": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0914 18:08:44.654338   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHHostname
	I0914 18:08:44.657020   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 18:08:44.657416   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:25:ab", ip: ""} in network mk-old-k8s-version-556121: {Iface:virbr1 ExpiryTime:2024-09-14 19:08:35 +0000 UTC Type:0 Mac:52:54:00:76:25:ab Iaid: IPaddr:192.168.83.80 Prefix:24 Hostname:old-k8s-version-556121 Clientid:01:52:54:00:76:25:ab}
	I0914 18:08:44.657442   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined IP address 192.168.83.80 and MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 18:08:44.657676   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHPort
	I0914 18:08:44.657884   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHKeyPath
	I0914 18:08:44.658047   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHKeyPath
	I0914 18:08:44.658228   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHUsername
	I0914 18:08:44.658382   62996 main.go:141] libmachine: Using SSH client type: native
	I0914 18:08:44.658584   62996 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.83.80 22 <nil> <nil>}
	I0914 18:08:44.658602   62996 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0914 18:08:44.877074   62996 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0914 18:08:44.877103   62996 machine.go:96] duration metric: took 1.074648772s to provisionDockerMachine
	I0914 18:08:44.877117   62996 start.go:293] postStartSetup for "old-k8s-version-556121" (driver="kvm2")
	I0914 18:08:44.877128   62996 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0914 18:08:44.877155   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .DriverName
	I0914 18:08:44.877491   62996 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0914 18:08:44.877522   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHHostname
	I0914 18:08:44.880792   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 18:08:44.881167   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:25:ab", ip: ""} in network mk-old-k8s-version-556121: {Iface:virbr1 ExpiryTime:2024-09-14 19:08:35 +0000 UTC Type:0 Mac:52:54:00:76:25:ab Iaid: IPaddr:192.168.83.80 Prefix:24 Hostname:old-k8s-version-556121 Clientid:01:52:54:00:76:25:ab}
	I0914 18:08:44.881197   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined IP address 192.168.83.80 and MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 18:08:44.881472   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHPort
	I0914 18:08:44.881693   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHKeyPath
	I0914 18:08:44.881853   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHUsername
	I0914 18:08:44.881984   62996 sshutil.go:53] new ssh client: &{IP:192.168.83.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/old-k8s-version-556121/id_rsa Username:docker}
	I0914 18:08:44.961211   62996 ssh_runner.go:195] Run: cat /etc/os-release
	I0914 18:08:44.965472   62996 info.go:137] Remote host: Buildroot 2023.02.9
	I0914 18:08:44.965507   62996 filesync.go:126] Scanning /home/jenkins/minikube-integration/19643-8806/.minikube/addons for local assets ...
	I0914 18:08:44.965583   62996 filesync.go:126] Scanning /home/jenkins/minikube-integration/19643-8806/.minikube/files for local assets ...
	I0914 18:08:44.965671   62996 filesync.go:149] local asset: /home/jenkins/minikube-integration/19643-8806/.minikube/files/etc/ssl/certs/160162.pem -> 160162.pem in /etc/ssl/certs
	I0914 18:08:44.965765   62996 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0914 18:08:44.975476   62996 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/files/etc/ssl/certs/160162.pem --> /etc/ssl/certs/160162.pem (1708 bytes)
	I0914 18:08:45.000248   62996 start.go:296] duration metric: took 123.115178ms for postStartSetup
	I0914 18:08:45.000299   62996 fix.go:56] duration metric: took 20.85719914s for fixHost
	I0914 18:08:45.000326   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHHostname
	I0914 18:08:45.002894   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 18:08:45.003216   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:25:ab", ip: ""} in network mk-old-k8s-version-556121: {Iface:virbr1 ExpiryTime:2024-09-14 19:08:35 +0000 UTC Type:0 Mac:52:54:00:76:25:ab Iaid: IPaddr:192.168.83.80 Prefix:24 Hostname:old-k8s-version-556121 Clientid:01:52:54:00:76:25:ab}
	I0914 18:08:45.003247   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined IP address 192.168.83.80 and MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 18:08:45.003407   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHPort
	I0914 18:08:45.003585   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHKeyPath
	I0914 18:08:45.003749   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHKeyPath
	I0914 18:08:45.003880   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHUsername
	I0914 18:08:45.004041   62996 main.go:141] libmachine: Using SSH client type: native
	I0914 18:08:45.004211   62996 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.83.80 22 <nil> <nil>}
	I0914 18:08:45.004221   62996 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0914 18:08:45.102905   62996 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726337325.064071007
	
	I0914 18:08:45.102933   62996 fix.go:216] guest clock: 1726337325.064071007
	I0914 18:08:45.102944   62996 fix.go:229] Guest: 2024-09-14 18:08:45.064071007 +0000 UTC Remote: 2024-09-14 18:08:45.000305051 +0000 UTC m=+219.697616364 (delta=63.765956ms)
	I0914 18:08:45.102967   62996 fix.go:200] guest clock delta is within tolerance: 63.765956ms
	I0914 18:08:45.102973   62996 start.go:83] releasing machines lock for "old-k8s-version-556121", held for 20.959903428s
	I0914 18:08:45.102999   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .DriverName
	I0914 18:08:45.103277   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetIP
	I0914 18:08:45.105995   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 18:08:45.106435   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:25:ab", ip: ""} in network mk-old-k8s-version-556121: {Iface:virbr1 ExpiryTime:2024-09-14 19:08:35 +0000 UTC Type:0 Mac:52:54:00:76:25:ab Iaid: IPaddr:192.168.83.80 Prefix:24 Hostname:old-k8s-version-556121 Clientid:01:52:54:00:76:25:ab}
	I0914 18:08:45.106463   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined IP address 192.168.83.80 and MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 18:08:45.106684   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .DriverName
	I0914 18:08:45.107224   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .DriverName
	I0914 18:08:45.107415   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .DriverName
	I0914 18:08:45.107506   62996 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0914 18:08:45.107556   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHHostname
	I0914 18:08:45.107675   62996 ssh_runner.go:195] Run: cat /version.json
	I0914 18:08:45.107699   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHHostname
	I0914 18:08:45.110528   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 18:08:45.110558   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 18:08:45.110917   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:25:ab", ip: ""} in network mk-old-k8s-version-556121: {Iface:virbr1 ExpiryTime:2024-09-14 19:08:35 +0000 UTC Type:0 Mac:52:54:00:76:25:ab Iaid: IPaddr:192.168.83.80 Prefix:24 Hostname:old-k8s-version-556121 Clientid:01:52:54:00:76:25:ab}
	I0914 18:08:45.110947   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:25:ab", ip: ""} in network mk-old-k8s-version-556121: {Iface:virbr1 ExpiryTime:2024-09-14 19:08:35 +0000 UTC Type:0 Mac:52:54:00:76:25:ab Iaid: IPaddr:192.168.83.80 Prefix:24 Hostname:old-k8s-version-556121 Clientid:01:52:54:00:76:25:ab}
	I0914 18:08:45.110969   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined IP address 192.168.83.80 and MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 18:08:45.111062   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined IP address 192.168.83.80 and MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 18:08:45.111157   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHPort
	I0914 18:08:45.111388   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHKeyPath
	I0914 18:08:45.111407   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHPort
	I0914 18:08:45.111564   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHUsername
	I0914 18:08:45.111582   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHKeyPath
	I0914 18:08:45.111716   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHUsername
	I0914 18:08:45.111758   62996 sshutil.go:53] new ssh client: &{IP:192.168.83.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/old-k8s-version-556121/id_rsa Username:docker}
	I0914 18:08:45.111829   62996 sshutil.go:53] new ssh client: &{IP:192.168.83.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/old-k8s-version-556121/id_rsa Username:docker}
	I0914 18:08:45.187315   62996 ssh_runner.go:195] Run: systemctl --version
	I0914 18:08:45.222737   62996 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0914 18:08:45.372449   62996 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0914 18:08:45.378337   62996 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0914 18:08:45.378395   62996 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0914 18:08:45.396041   62996 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0914 18:08:45.396072   62996 start.go:495] detecting cgroup driver to use...
	I0914 18:08:45.396148   62996 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0914 18:08:45.413530   62996 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0914 18:08:45.428876   62996 docker.go:217] disabling cri-docker service (if available) ...
	I0914 18:08:45.428950   62996 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0914 18:08:45.444066   62996 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0914 18:08:45.458976   62996 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0914 18:08:45.591808   62996 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0914 18:08:45.737299   62996 docker.go:233] disabling docker service ...
	I0914 18:08:45.737382   62996 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0914 18:08:45.752471   62996 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0914 18:08:45.770192   62996 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0914 18:08:45.923691   62996 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0914 18:08:46.054919   62996 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0914 18:08:46.068923   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0914 18:08:46.089366   62996 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0914 18:08:46.089441   62996 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 18:08:46.100025   62996 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0914 18:08:46.100100   62996 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 18:08:46.111015   62996 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 18:08:46.123133   62996 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 18:08:46.135582   62996 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0914 18:08:46.146937   62996 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0914 18:08:46.158542   62996 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0914 18:08:46.158618   62996 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0914 18:08:46.178181   62996 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0914 18:08:46.188291   62996 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 18:08:46.316875   62996 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0914 18:08:46.407391   62996 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0914 18:08:46.407470   62996 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0914 18:08:46.412103   62996 start.go:563] Will wait 60s for crictl version
	I0914 18:08:46.412164   62996 ssh_runner.go:195] Run: which crictl
	I0914 18:08:46.415903   62996 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0914 18:08:46.457124   62996 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0914 18:08:46.457224   62996 ssh_runner.go:195] Run: crio --version
	I0914 18:08:46.485380   62996 ssh_runner.go:195] Run: crio --version
	I0914 18:08:46.513525   62996 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0914 18:08:46.027201   62554 pod_ready.go:93] pod "coredns-7c65d6cfc9-59dm5" in "kube-system" namespace has status "Ready":"True"
	I0914 18:08:46.027223   62554 pod_ready.go:82] duration metric: took 8.506784658s for pod "coredns-7c65d6cfc9-59dm5" in "kube-system" namespace to be "Ready" ...
	I0914 18:08:46.027232   62554 pod_ready.go:79] waiting up to 4m0s for pod "etcd-embed-certs-044534" in "kube-system" namespace to be "Ready" ...
	I0914 18:08:47.043468   62554 pod_ready.go:93] pod "etcd-embed-certs-044534" in "kube-system" namespace has status "Ready":"True"
	I0914 18:08:47.043499   62554 pod_ready.go:82] duration metric: took 1.016259668s for pod "etcd-embed-certs-044534" in "kube-system" namespace to be "Ready" ...
	I0914 18:08:47.043513   62554 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-embed-certs-044534" in "kube-system" namespace to be "Ready" ...
	I0914 18:08:47.050825   62554 pod_ready.go:93] pod "kube-apiserver-embed-certs-044534" in "kube-system" namespace has status "Ready":"True"
	I0914 18:08:47.050853   62554 pod_ready.go:82] duration metric: took 7.332421ms for pod "kube-apiserver-embed-certs-044534" in "kube-system" namespace to be "Ready" ...
	I0914 18:08:47.050869   62554 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-044534" in "kube-system" namespace to be "Ready" ...
	I0914 18:08:47.561389   62554 pod_ready.go:93] pod "kube-controller-manager-embed-certs-044534" in "kube-system" namespace has status "Ready":"True"
	I0914 18:08:47.561419   62554 pod_ready.go:82] duration metric: took 510.541663ms for pod "kube-controller-manager-embed-certs-044534" in "kube-system" namespace to be "Ready" ...
	I0914 18:08:47.561434   62554 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-nkdth" in "kube-system" namespace to be "Ready" ...
	I0914 18:08:47.568265   62554 pod_ready.go:93] pod "kube-proxy-nkdth" in "kube-system" namespace has status "Ready":"True"
	I0914 18:08:47.568298   62554 pod_ready.go:82] duration metric: took 6.854878ms for pod "kube-proxy-nkdth" in "kube-system" namespace to be "Ready" ...
	I0914 18:08:47.568312   62554 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-embed-certs-044534" in "kube-system" namespace to be "Ready" ...
	I0914 18:08:48.575898   62554 pod_ready.go:93] pod "kube-scheduler-embed-certs-044534" in "kube-system" namespace has status "Ready":"True"
	I0914 18:08:48.575924   62554 pod_ready.go:82] duration metric: took 1.00760412s for pod "kube-scheduler-embed-certs-044534" in "kube-system" namespace to be "Ready" ...
	I0914 18:08:48.575934   62554 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace to be "Ready" ...
	I0914 18:08:46.464001   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Waiting to get IP...
	I0914 18:08:46.465004   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | domain default-k8s-diff-port-243449 has defined MAC address 52:54:00:6e:0b:a7 in network mk-default-k8s-diff-port-243449
	I0914 18:08:46.465408   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | unable to find current IP address of domain default-k8s-diff-port-243449 in network mk-default-k8s-diff-port-243449
	I0914 18:08:46.465512   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | I0914 18:08:46.465391   64066 retry.go:31] will retry after 283.185405ms: waiting for machine to come up
	I0914 18:08:46.751155   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | domain default-k8s-diff-port-243449 has defined MAC address 52:54:00:6e:0b:a7 in network mk-default-k8s-diff-port-243449
	I0914 18:08:46.751669   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | unable to find current IP address of domain default-k8s-diff-port-243449 in network mk-default-k8s-diff-port-243449
	I0914 18:08:46.751697   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | I0914 18:08:46.751622   64066 retry.go:31] will retry after 307.273139ms: waiting for machine to come up
	I0914 18:08:47.060812   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | domain default-k8s-diff-port-243449 has defined MAC address 52:54:00:6e:0b:a7 in network mk-default-k8s-diff-port-243449
	I0914 18:08:47.061855   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | unable to find current IP address of domain default-k8s-diff-port-243449 in network mk-default-k8s-diff-port-243449
	I0914 18:08:47.061889   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | I0914 18:08:47.061749   64066 retry.go:31] will retry after 420.077307ms: waiting for machine to come up
	I0914 18:08:47.483188   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | domain default-k8s-diff-port-243449 has defined MAC address 52:54:00:6e:0b:a7 in network mk-default-k8s-diff-port-243449
	I0914 18:08:47.483611   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | unable to find current IP address of domain default-k8s-diff-port-243449 in network mk-default-k8s-diff-port-243449
	I0914 18:08:47.483656   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | I0914 18:08:47.483567   64066 retry.go:31] will retry after 562.15435ms: waiting for machine to come up
	I0914 18:08:48.047428   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | domain default-k8s-diff-port-243449 has defined MAC address 52:54:00:6e:0b:a7 in network mk-default-k8s-diff-port-243449
	I0914 18:08:48.047944   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | unable to find current IP address of domain default-k8s-diff-port-243449 in network mk-default-k8s-diff-port-243449
	I0914 18:08:48.047971   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | I0914 18:08:48.047867   64066 retry.go:31] will retry after 744.523152ms: waiting for machine to come up
	I0914 18:08:48.793959   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | domain default-k8s-diff-port-243449 has defined MAC address 52:54:00:6e:0b:a7 in network mk-default-k8s-diff-port-243449
	I0914 18:08:48.794449   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | unable to find current IP address of domain default-k8s-diff-port-243449 in network mk-default-k8s-diff-port-243449
	I0914 18:08:48.794492   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | I0914 18:08:48.794393   64066 retry.go:31] will retry after 813.631617ms: waiting for machine to come up
	I0914 18:08:49.609483   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | domain default-k8s-diff-port-243449 has defined MAC address 52:54:00:6e:0b:a7 in network mk-default-k8s-diff-port-243449
	I0914 18:08:49.609946   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | unable to find current IP address of domain default-k8s-diff-port-243449 in network mk-default-k8s-diff-port-243449
	I0914 18:08:49.609969   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | I0914 18:08:49.609904   64066 retry.go:31] will retry after 941.244861ms: waiting for machine to come up
	I0914 18:08:46.515031   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetIP
	I0914 18:08:46.517851   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 18:08:46.518301   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:25:ab", ip: ""} in network mk-old-k8s-version-556121: {Iface:virbr1 ExpiryTime:2024-09-14 19:08:35 +0000 UTC Type:0 Mac:52:54:00:76:25:ab Iaid: IPaddr:192.168.83.80 Prefix:24 Hostname:old-k8s-version-556121 Clientid:01:52:54:00:76:25:ab}
	I0914 18:08:46.518329   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined IP address 192.168.83.80 and MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 18:08:46.518560   62996 ssh_runner.go:195] Run: grep 192.168.83.1	host.minikube.internal$ /etc/hosts
	I0914 18:08:46.522559   62996 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.83.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0914 18:08:46.536122   62996 kubeadm.go:883] updating cluster {Name:old-k8s-version-556121 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19643/minikube-v1.34.0-1726281733-19643-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726281268-19643@sha256:cce8be0c1ac4e3d852132008ef1cc1dcf5b79f708d025db83f146ae65db32e8e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-556121 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.83.80 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0914 18:08:46.536233   62996 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0914 18:08:46.536272   62996 ssh_runner.go:195] Run: sudo crictl images --output json
	I0914 18:08:46.582326   62996 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0914 18:08:46.582385   62996 ssh_runner.go:195] Run: which lz4
	I0914 18:08:46.586381   62996 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0914 18:08:46.590252   62996 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0914 18:08:46.590302   62996 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0914 18:08:48.262036   62996 crio.go:462] duration metric: took 1.6757003s to copy over tarball
	I0914 18:08:48.262113   62996 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0914 18:08:50.583860   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:08:52.826559   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:08:50.553210   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | domain default-k8s-diff-port-243449 has defined MAC address 52:54:00:6e:0b:a7 in network mk-default-k8s-diff-port-243449
	I0914 18:08:50.553735   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | unable to find current IP address of domain default-k8s-diff-port-243449 in network mk-default-k8s-diff-port-243449
	I0914 18:08:50.553764   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | I0914 18:08:50.553671   64066 retry.go:31] will retry after 1.107692241s: waiting for machine to come up
	I0914 18:08:51.663218   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | domain default-k8s-diff-port-243449 has defined MAC address 52:54:00:6e:0b:a7 in network mk-default-k8s-diff-port-243449
	I0914 18:08:51.663723   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | unable to find current IP address of domain default-k8s-diff-port-243449 in network mk-default-k8s-diff-port-243449
	I0914 18:08:51.663753   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | I0914 18:08:51.663681   64066 retry.go:31] will retry after 1.357435642s: waiting for machine to come up
	I0914 18:08:53.022246   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | domain default-k8s-diff-port-243449 has defined MAC address 52:54:00:6e:0b:a7 in network mk-default-k8s-diff-port-243449
	I0914 18:08:53.022695   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | unable to find current IP address of domain default-k8s-diff-port-243449 in network mk-default-k8s-diff-port-243449
	I0914 18:08:53.022726   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | I0914 18:08:53.022628   64066 retry.go:31] will retry after 2.045434586s: waiting for machine to come up
	I0914 18:08:55.070946   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | domain default-k8s-diff-port-243449 has defined MAC address 52:54:00:6e:0b:a7 in network mk-default-k8s-diff-port-243449
	I0914 18:08:55.071420   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | unable to find current IP address of domain default-k8s-diff-port-243449 in network mk-default-k8s-diff-port-243449
	I0914 18:08:55.071450   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | I0914 18:08:55.071362   64066 retry.go:31] will retry after 2.084823885s: waiting for machine to come up
	I0914 18:08:51.259991   62996 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.997823346s)
	I0914 18:08:51.260027   62996 crio.go:469] duration metric: took 2.997963105s to extract the tarball
	I0914 18:08:51.260037   62996 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0914 18:08:51.303210   62996 ssh_runner.go:195] Run: sudo crictl images --output json
	I0914 18:08:51.337655   62996 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0914 18:08:51.337685   62996 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0914 18:08:51.337793   62996 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0914 18:08:51.337910   62996 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0914 18:08:51.337941   62996 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0914 18:08:51.337950   62996 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0914 18:08:51.337800   62996 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0914 18:08:51.337803   62996 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0914 18:08:51.337791   62996 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 18:08:51.337823   62996 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0914 18:08:51.339846   62996 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0914 18:08:51.339855   62996 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0914 18:08:51.339875   62996 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0914 18:08:51.339865   62996 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 18:08:51.339901   62996 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0914 18:08:51.339935   62996 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0914 18:08:51.339958   62996 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0914 18:08:51.339949   62996 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0914 18:08:51.528665   62996 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0914 18:08:51.570817   62996 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0914 18:08:51.575861   62996 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0914 18:08:51.575917   62996 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0914 18:08:51.575968   62996 ssh_runner.go:195] Run: which crictl
	I0914 18:08:51.576612   62996 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0914 18:08:51.577804   62996 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0914 18:08:51.578496   62996 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0914 18:08:51.581833   62996 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0914 18:08:51.613046   62996 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0914 18:08:51.724554   62996 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0914 18:08:51.724608   62996 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0914 18:08:51.724611   62996 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0914 18:08:51.724713   62996 ssh_runner.go:195] Run: which crictl
	I0914 18:08:51.757578   62996 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0914 18:08:51.757628   62996 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0914 18:08:51.757677   62996 ssh_runner.go:195] Run: which crictl
	I0914 18:08:51.772578   62996 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0914 18:08:51.772597   62996 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0914 18:08:51.772629   62996 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0914 18:08:51.772634   62996 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0914 18:08:51.772659   62996 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0914 18:08:51.772690   62996 ssh_runner.go:195] Run: which crictl
	I0914 18:08:51.772704   62996 ssh_runner.go:195] Run: which crictl
	I0914 18:08:51.772633   62996 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0914 18:08:51.772748   62996 ssh_runner.go:195] Run: which crictl
	I0914 18:08:51.790305   62996 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0914 18:08:51.790442   62996 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0914 18:08:51.790492   62996 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0914 18:08:51.790534   62996 ssh_runner.go:195] Run: which crictl
	I0914 18:08:51.799286   62996 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0914 18:08:51.799338   62996 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0914 18:08:51.799395   62996 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0914 18:08:51.799446   62996 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0914 18:08:51.799486   62996 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0914 18:08:51.937830   62996 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0914 18:08:51.937839   62996 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0914 18:08:51.937918   62996 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0914 18:08:51.940605   62996 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0914 18:08:51.940670   62996 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0914 18:08:51.940723   62996 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0914 18:08:51.962218   62996 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0914 18:08:52.063106   62996 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0914 18:08:52.112424   62996 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0914 18:08:52.112498   62996 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0914 18:08:52.112521   62996 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0914 18:08:52.112602   62996 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19643-8806/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0914 18:08:52.112608   62996 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0914 18:08:52.112737   62996 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0914 18:08:52.149523   62996 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19643-8806/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0914 18:08:52.230998   62996 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0914 18:08:52.231015   62996 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19643-8806/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0914 18:08:52.234715   62996 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19643-8806/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0914 18:08:52.234737   62996 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19643-8806/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0914 18:08:52.234813   62996 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19643-8806/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0914 18:08:52.268145   62996 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19643-8806/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0914 18:08:52.500688   62996 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 18:08:52.641559   62996 cache_images.go:92] duration metric: took 1.303851383s to LoadCachedImages
	W0914 18:08:52.641671   62996 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19643-8806/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0: no such file or directory
	I0914 18:08:52.641690   62996 kubeadm.go:934] updating node { 192.168.83.80 8443 v1.20.0 crio true true} ...
	I0914 18:08:52.641822   62996 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-556121 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.83.80
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-556121 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0914 18:08:52.641918   62996 ssh_runner.go:195] Run: crio config
	I0914 18:08:52.691852   62996 cni.go:84] Creating CNI manager for ""
	I0914 18:08:52.691878   62996 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0914 18:08:52.691888   62996 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0914 18:08:52.691906   62996 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.83.80 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-556121 NodeName:old-k8s-version-556121 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.83.80"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.83.80 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0914 18:08:52.692037   62996 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.83.80
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-556121"
	  kubeletExtraArgs:
	    node-ip: 192.168.83.80
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.83.80"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0914 18:08:52.692122   62996 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0914 18:08:52.701735   62996 binaries.go:44] Found k8s binaries, skipping transfer
	I0914 18:08:52.701810   62996 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0914 18:08:52.711224   62996 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0914 18:08:52.728991   62996 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0914 18:08:52.746689   62996 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
	I0914 18:08:52.765724   62996 ssh_runner.go:195] Run: grep 192.168.83.80	control-plane.minikube.internal$ /etc/hosts
	I0914 18:08:52.769968   62996 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.83.80	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0914 18:08:52.782728   62996 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 18:08:52.910650   62996 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0914 18:08:52.927202   62996 certs.go:68] Setting up /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/old-k8s-version-556121 for IP: 192.168.83.80
	I0914 18:08:52.927226   62996 certs.go:194] generating shared ca certs ...
	I0914 18:08:52.927247   62996 certs.go:226] acquiring lock for ca certs: {Name:mkb663a3180967f5f94f0c355b2cd55067394331 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 18:08:52.927426   62996 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19643-8806/.minikube/ca.key
	I0914 18:08:52.927478   62996 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19643-8806/.minikube/proxy-client-ca.key
	I0914 18:08:52.927488   62996 certs.go:256] generating profile certs ...
	I0914 18:08:52.927584   62996 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/old-k8s-version-556121/client.key
	I0914 18:08:52.927642   62996 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/old-k8s-version-556121/apiserver.key.faf839ab
	I0914 18:08:52.927706   62996 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/old-k8s-version-556121/proxy-client.key
	I0914 18:08:52.927873   62996 certs.go:484] found cert: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/16016.pem (1338 bytes)
	W0914 18:08:52.927906   62996 certs.go:480] ignoring /home/jenkins/minikube-integration/19643-8806/.minikube/certs/16016_empty.pem, impossibly tiny 0 bytes
	I0914 18:08:52.927916   62996 certs.go:484] found cert: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca-key.pem (1679 bytes)
	I0914 18:08:52.927938   62996 certs.go:484] found cert: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca.pem (1082 bytes)
	I0914 18:08:52.927960   62996 certs.go:484] found cert: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/cert.pem (1123 bytes)
	I0914 18:08:52.927982   62996 certs.go:484] found cert: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/key.pem (1675 bytes)
	I0914 18:08:52.928018   62996 certs.go:484] found cert: /home/jenkins/minikube-integration/19643-8806/.minikube/files/etc/ssl/certs/160162.pem (1708 bytes)
	I0914 18:08:52.928623   62996 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0914 18:08:52.991610   62996 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0914 18:08:53.017660   62996 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0914 18:08:53.044552   62996 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0914 18:08:53.073612   62996 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/old-k8s-version-556121/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0914 18:08:53.125813   62996 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/old-k8s-version-556121/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0914 18:08:53.157202   62996 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/old-k8s-version-556121/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0914 18:08:53.201480   62996 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/old-k8s-version-556121/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0914 18:08:53.226725   62996 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/files/etc/ssl/certs/160162.pem --> /usr/share/ca-certificates/160162.pem (1708 bytes)
	I0914 18:08:53.250793   62996 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0914 18:08:53.275519   62996 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/certs/16016.pem --> /usr/share/ca-certificates/16016.pem (1338 bytes)
	I0914 18:08:53.300545   62996 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0914 18:08:53.317709   62996 ssh_runner.go:195] Run: openssl version
	I0914 18:08:53.323602   62996 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/160162.pem && ln -fs /usr/share/ca-certificates/160162.pem /etc/ssl/certs/160162.pem"
	I0914 18:08:53.335011   62996 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/160162.pem
	I0914 18:08:53.339838   62996 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 14 17:01 /usr/share/ca-certificates/160162.pem
	I0914 18:08:53.339909   62996 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/160162.pem
	I0914 18:08:53.346100   62996 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/160162.pem /etc/ssl/certs/3ec20f2e.0"
	I0914 18:08:53.359186   62996 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0914 18:08:53.370507   62996 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0914 18:08:53.375153   62996 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 14 16:44 /usr/share/ca-certificates/minikubeCA.pem
	I0914 18:08:53.375223   62996 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0914 18:08:53.380939   62996 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0914 18:08:53.392163   62996 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16016.pem && ln -fs /usr/share/ca-certificates/16016.pem /etc/ssl/certs/16016.pem"
	I0914 18:08:53.404356   62996 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16016.pem
	I0914 18:08:53.409052   62996 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 14 17:01 /usr/share/ca-certificates/16016.pem
	I0914 18:08:53.409134   62996 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16016.pem
	I0914 18:08:53.415280   62996 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16016.pem /etc/ssl/certs/51391683.0"
	I0914 18:08:53.426864   62996 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0914 18:08:53.431690   62996 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0914 18:08:53.437920   62996 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0914 18:08:53.444244   62996 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0914 18:08:53.450762   62996 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0914 18:08:53.457107   62996 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0914 18:08:53.463041   62996 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0914 18:08:53.469401   62996 kubeadm.go:392] StartCluster: {Name:old-k8s-version-556121 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19643/minikube-v1.34.0-1726281733-19643-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726281268-19643@sha256:cce8be0c1ac4e3d852132008ef1cc1dcf5b79f708d025db83f146ae65db32e8e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-556121 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.83.80 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0914 18:08:53.469509   62996 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0914 18:08:53.469568   62996 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0914 18:08:53.508602   62996 cri.go:89] found id: ""
	I0914 18:08:53.508668   62996 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0914 18:08:53.518645   62996 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0914 18:08:53.518666   62996 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0914 18:08:53.518719   62996 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0914 18:08:53.530459   62996 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0914 18:08:53.531439   62996 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-556121" does not appear in /home/jenkins/minikube-integration/19643-8806/kubeconfig
	I0914 18:08:53.532109   62996 kubeconfig.go:62] /home/jenkins/minikube-integration/19643-8806/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-556121" cluster setting kubeconfig missing "old-k8s-version-556121" context setting]
	I0914 18:08:53.532952   62996 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19643-8806/kubeconfig: {Name:mk850f3e2f3c2ef1e81c795ae72e22e459e27140 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 18:08:53.611765   62996 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0914 18:08:53.622817   62996 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.83.80
	I0914 18:08:53.622854   62996 kubeadm.go:1160] stopping kube-system containers ...
	I0914 18:08:53.622866   62996 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0914 18:08:53.622919   62996 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0914 18:08:53.659041   62996 cri.go:89] found id: ""
	I0914 18:08:53.659191   62996 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0914 18:08:53.680543   62996 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0914 18:08:53.693835   62996 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0914 18:08:53.693854   62996 kubeadm.go:157] found existing configuration files:
	
	I0914 18:08:53.693907   62996 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0914 18:08:53.704221   62996 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0914 18:08:53.704300   62996 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0914 18:08:53.713947   62996 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0914 18:08:53.722981   62996 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0914 18:08:53.723056   62996 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0914 18:08:53.733059   62996 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0914 18:08:53.742233   62996 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0914 18:08:53.742305   62996 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0914 18:08:53.752182   62996 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0914 18:08:53.761890   62996 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0914 18:08:53.761965   62996 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0914 18:08:53.771448   62996 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0914 18:08:53.781385   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 18:08:53.911483   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 18:08:55.084673   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:08:57.582709   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:08:59.583340   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:08:57.158301   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | domain default-k8s-diff-port-243449 has defined MAC address 52:54:00:6e:0b:a7 in network mk-default-k8s-diff-port-243449
	I0914 18:08:57.158679   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | unable to find current IP address of domain default-k8s-diff-port-243449 in network mk-default-k8s-diff-port-243449
	I0914 18:08:57.158705   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | I0914 18:08:57.158640   64066 retry.go:31] will retry after 2.492994369s: waiting for machine to come up
	I0914 18:08:59.654137   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | domain default-k8s-diff-port-243449 has defined MAC address 52:54:00:6e:0b:a7 in network mk-default-k8s-diff-port-243449
	I0914 18:08:59.654550   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | unable to find current IP address of domain default-k8s-diff-port-243449 in network mk-default-k8s-diff-port-243449
	I0914 18:08:59.654585   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | I0914 18:08:59.654496   64066 retry.go:31] will retry after 3.393327124s: waiting for machine to come up
	I0914 18:08:55.409007   62996 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.497486764s)
	I0914 18:08:55.409041   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0914 18:08:55.640260   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 18:08:55.761785   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0914 18:08:55.873260   62996 api_server.go:52] waiting for apiserver process to appear ...
	I0914 18:08:55.873350   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:08:56.373512   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:08:56.874440   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:08:57.374464   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:08:57.874099   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:08:58.374014   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:08:58.873763   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:08:59.373845   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:08:59.873929   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:04.466791   62207 start.go:364] duration metric: took 54.917996405s to acquireMachinesLock for "no-preload-168587"
	I0914 18:09:04.466845   62207 start.go:96] Skipping create...Using existing machine configuration
	I0914 18:09:04.466863   62207 fix.go:54] fixHost starting: 
	I0914 18:09:04.467265   62207 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 18:09:04.467303   62207 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 18:09:04.485295   62207 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41403
	I0914 18:09:04.485680   62207 main.go:141] libmachine: () Calling .GetVersion
	I0914 18:09:04.486195   62207 main.go:141] libmachine: Using API Version  1
	I0914 18:09:04.486221   62207 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 18:09:04.486625   62207 main.go:141] libmachine: () Calling .GetMachineName
	I0914 18:09:04.486825   62207 main.go:141] libmachine: (no-preload-168587) Calling .DriverName
	I0914 18:09:04.486985   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetState
	I0914 18:09:04.488546   62207 fix.go:112] recreateIfNeeded on no-preload-168587: state=Stopped err=<nil>
	I0914 18:09:04.488584   62207 main.go:141] libmachine: (no-preload-168587) Calling .DriverName
	W0914 18:09:04.488749   62207 fix.go:138] unexpected machine state, will restart: <nil>
	I0914 18:09:04.491638   62207 out.go:177] * Restarting existing kvm2 VM for "no-preload-168587" ...
	I0914 18:09:02.082684   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:09:04.582135   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:09:03.051442   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | domain default-k8s-diff-port-243449 has defined MAC address 52:54:00:6e:0b:a7 in network mk-default-k8s-diff-port-243449
	I0914 18:09:03.051882   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | domain default-k8s-diff-port-243449 has current primary IP address 192.168.61.38 and MAC address 52:54:00:6e:0b:a7 in network mk-default-k8s-diff-port-243449
	I0914 18:09:03.051904   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Found IP for machine: 192.168.61.38
	I0914 18:09:03.051946   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Reserving static IP address...
	I0914 18:09:03.052245   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-243449", mac: "52:54:00:6e:0b:a7", ip: "192.168.61.38"} in network mk-default-k8s-diff-port-243449: {Iface:virbr4 ExpiryTime:2024-09-14 19:08:56 +0000 UTC Type:0 Mac:52:54:00:6e:0b:a7 Iaid: IPaddr:192.168.61.38 Prefix:24 Hostname:default-k8s-diff-port-243449 Clientid:01:52:54:00:6e:0b:a7}
	I0914 18:09:03.052269   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | skip adding static IP to network mk-default-k8s-diff-port-243449 - found existing host DHCP lease matching {name: "default-k8s-diff-port-243449", mac: "52:54:00:6e:0b:a7", ip: "192.168.61.38"}
	I0914 18:09:03.052280   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Reserved static IP address: 192.168.61.38
	I0914 18:09:03.052289   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Waiting for SSH to be available...
	I0914 18:09:03.052306   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | Getting to WaitForSSH function...
	I0914 18:09:03.054154   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | domain default-k8s-diff-port-243449 has defined MAC address 52:54:00:6e:0b:a7 in network mk-default-k8s-diff-port-243449
	I0914 18:09:03.054555   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:0b:a7", ip: ""} in network mk-default-k8s-diff-port-243449: {Iface:virbr4 ExpiryTime:2024-09-14 19:08:56 +0000 UTC Type:0 Mac:52:54:00:6e:0b:a7 Iaid: IPaddr:192.168.61.38 Prefix:24 Hostname:default-k8s-diff-port-243449 Clientid:01:52:54:00:6e:0b:a7}
	I0914 18:09:03.054596   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | domain default-k8s-diff-port-243449 has defined IP address 192.168.61.38 and MAC address 52:54:00:6e:0b:a7 in network mk-default-k8s-diff-port-243449
	I0914 18:09:03.054745   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | Using SSH client type: external
	I0914 18:09:03.054777   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | Using SSH private key: /home/jenkins/minikube-integration/19643-8806/.minikube/machines/default-k8s-diff-port-243449/id_rsa (-rw-------)
	I0914 18:09:03.054813   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.38 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19643-8806/.minikube/machines/default-k8s-diff-port-243449/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0914 18:09:03.054828   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | About to run SSH command:
	I0914 18:09:03.054841   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | exit 0
	I0914 18:09:03.178065   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | SSH cmd err, output: <nil>: 
	I0914 18:09:03.178576   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetConfigRaw
	I0914 18:09:03.179198   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetIP
	I0914 18:09:03.181829   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | domain default-k8s-diff-port-243449 has defined MAC address 52:54:00:6e:0b:a7 in network mk-default-k8s-diff-port-243449
	I0914 18:09:03.182220   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:0b:a7", ip: ""} in network mk-default-k8s-diff-port-243449: {Iface:virbr4 ExpiryTime:2024-09-14 19:08:56 +0000 UTC Type:0 Mac:52:54:00:6e:0b:a7 Iaid: IPaddr:192.168.61.38 Prefix:24 Hostname:default-k8s-diff-port-243449 Clientid:01:52:54:00:6e:0b:a7}
	I0914 18:09:03.182242   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | domain default-k8s-diff-port-243449 has defined IP address 192.168.61.38 and MAC address 52:54:00:6e:0b:a7 in network mk-default-k8s-diff-port-243449
	I0914 18:09:03.182541   63448 profile.go:143] Saving config to /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/default-k8s-diff-port-243449/config.json ...
	I0914 18:09:03.182773   63448 machine.go:93] provisionDockerMachine start ...
	I0914 18:09:03.182796   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .DriverName
	I0914 18:09:03.182992   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetSSHHostname
	I0914 18:09:03.185635   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | domain default-k8s-diff-port-243449 has defined MAC address 52:54:00:6e:0b:a7 in network mk-default-k8s-diff-port-243449
	I0914 18:09:03.186027   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:0b:a7", ip: ""} in network mk-default-k8s-diff-port-243449: {Iface:virbr4 ExpiryTime:2024-09-14 19:08:56 +0000 UTC Type:0 Mac:52:54:00:6e:0b:a7 Iaid: IPaddr:192.168.61.38 Prefix:24 Hostname:default-k8s-diff-port-243449 Clientid:01:52:54:00:6e:0b:a7}
	I0914 18:09:03.186056   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | domain default-k8s-diff-port-243449 has defined IP address 192.168.61.38 and MAC address 52:54:00:6e:0b:a7 in network mk-default-k8s-diff-port-243449
	I0914 18:09:03.186213   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetSSHPort
	I0914 18:09:03.186416   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetSSHKeyPath
	I0914 18:09:03.186602   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetSSHKeyPath
	I0914 18:09:03.186756   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetSSHUsername
	I0914 18:09:03.186882   63448 main.go:141] libmachine: Using SSH client type: native
	I0914 18:09:03.187123   63448 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.61.38 22 <nil> <nil>}
	I0914 18:09:03.187137   63448 main.go:141] libmachine: About to run SSH command:
	hostname
	I0914 18:09:03.290288   63448 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0914 18:09:03.290332   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetMachineName
	I0914 18:09:03.290592   63448 buildroot.go:166] provisioning hostname "default-k8s-diff-port-243449"
	I0914 18:09:03.290620   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetMachineName
	I0914 18:09:03.290779   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetSSHHostname
	I0914 18:09:03.293587   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | domain default-k8s-diff-port-243449 has defined MAC address 52:54:00:6e:0b:a7 in network mk-default-k8s-diff-port-243449
	I0914 18:09:03.293981   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:0b:a7", ip: ""} in network mk-default-k8s-diff-port-243449: {Iface:virbr4 ExpiryTime:2024-09-14 19:08:56 +0000 UTC Type:0 Mac:52:54:00:6e:0b:a7 Iaid: IPaddr:192.168.61.38 Prefix:24 Hostname:default-k8s-diff-port-243449 Clientid:01:52:54:00:6e:0b:a7}
	I0914 18:09:03.294012   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | domain default-k8s-diff-port-243449 has defined IP address 192.168.61.38 and MAC address 52:54:00:6e:0b:a7 in network mk-default-k8s-diff-port-243449
	I0914 18:09:03.294120   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetSSHPort
	I0914 18:09:03.294307   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetSSHKeyPath
	I0914 18:09:03.294450   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetSSHKeyPath
	I0914 18:09:03.294576   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetSSHUsername
	I0914 18:09:03.294708   63448 main.go:141] libmachine: Using SSH client type: native
	I0914 18:09:03.294926   63448 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.61.38 22 <nil> <nil>}
	I0914 18:09:03.294944   63448 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-243449 && echo "default-k8s-diff-port-243449" | sudo tee /etc/hostname
	I0914 18:09:03.418148   63448 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-243449
	
	I0914 18:09:03.418198   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetSSHHostname
	I0914 18:09:03.421059   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | domain default-k8s-diff-port-243449 has defined MAC address 52:54:00:6e:0b:a7 in network mk-default-k8s-diff-port-243449
	I0914 18:09:03.421501   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:0b:a7", ip: ""} in network mk-default-k8s-diff-port-243449: {Iface:virbr4 ExpiryTime:2024-09-14 19:08:56 +0000 UTC Type:0 Mac:52:54:00:6e:0b:a7 Iaid: IPaddr:192.168.61.38 Prefix:24 Hostname:default-k8s-diff-port-243449 Clientid:01:52:54:00:6e:0b:a7}
	I0914 18:09:03.421536   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | domain default-k8s-diff-port-243449 has defined IP address 192.168.61.38 and MAC address 52:54:00:6e:0b:a7 in network mk-default-k8s-diff-port-243449
	I0914 18:09:03.421733   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetSSHPort
	I0914 18:09:03.421925   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetSSHKeyPath
	I0914 18:09:03.422075   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetSSHKeyPath
	I0914 18:09:03.422243   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetSSHUsername
	I0914 18:09:03.422394   63448 main.go:141] libmachine: Using SSH client type: native
	I0914 18:09:03.422581   63448 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.61.38 22 <nil> <nil>}
	I0914 18:09:03.422609   63448 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-243449' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-243449/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-243449' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0914 18:09:03.538785   63448 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0914 18:09:03.538812   63448 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19643-8806/.minikube CaCertPath:/home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19643-8806/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19643-8806/.minikube}
	I0914 18:09:03.538851   63448 buildroot.go:174] setting up certificates
	I0914 18:09:03.538866   63448 provision.go:84] configureAuth start
	I0914 18:09:03.538875   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetMachineName
	I0914 18:09:03.539230   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetIP
	I0914 18:09:03.541811   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | domain default-k8s-diff-port-243449 has defined MAC address 52:54:00:6e:0b:a7 in network mk-default-k8s-diff-port-243449
	I0914 18:09:03.542129   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:0b:a7", ip: ""} in network mk-default-k8s-diff-port-243449: {Iface:virbr4 ExpiryTime:2024-09-14 19:08:56 +0000 UTC Type:0 Mac:52:54:00:6e:0b:a7 Iaid: IPaddr:192.168.61.38 Prefix:24 Hostname:default-k8s-diff-port-243449 Clientid:01:52:54:00:6e:0b:a7}
	I0914 18:09:03.542183   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | domain default-k8s-diff-port-243449 has defined IP address 192.168.61.38 and MAC address 52:54:00:6e:0b:a7 in network mk-default-k8s-diff-port-243449
	I0914 18:09:03.542393   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetSSHHostname
	I0914 18:09:03.544635   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | domain default-k8s-diff-port-243449 has defined MAC address 52:54:00:6e:0b:a7 in network mk-default-k8s-diff-port-243449
	I0914 18:09:03.544933   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:0b:a7", ip: ""} in network mk-default-k8s-diff-port-243449: {Iface:virbr4 ExpiryTime:2024-09-14 19:08:56 +0000 UTC Type:0 Mac:52:54:00:6e:0b:a7 Iaid: IPaddr:192.168.61.38 Prefix:24 Hostname:default-k8s-diff-port-243449 Clientid:01:52:54:00:6e:0b:a7}
	I0914 18:09:03.544969   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | domain default-k8s-diff-port-243449 has defined IP address 192.168.61.38 and MAC address 52:54:00:6e:0b:a7 in network mk-default-k8s-diff-port-243449
	I0914 18:09:03.545099   63448 provision.go:143] copyHostCerts
	I0914 18:09:03.545156   63448 exec_runner.go:144] found /home/jenkins/minikube-integration/19643-8806/.minikube/ca.pem, removing ...
	I0914 18:09:03.545167   63448 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19643-8806/.minikube/ca.pem
	I0914 18:09:03.545239   63448 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19643-8806/.minikube/ca.pem (1082 bytes)
	I0914 18:09:03.545362   63448 exec_runner.go:144] found /home/jenkins/minikube-integration/19643-8806/.minikube/cert.pem, removing ...
	I0914 18:09:03.545374   63448 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19643-8806/.minikube/cert.pem
	I0914 18:09:03.545410   63448 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19643-8806/.minikube/cert.pem (1123 bytes)
	I0914 18:09:03.545489   63448 exec_runner.go:144] found /home/jenkins/minikube-integration/19643-8806/.minikube/key.pem, removing ...
	I0914 18:09:03.545498   63448 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19643-8806/.minikube/key.pem
	I0914 18:09:03.545533   63448 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19643-8806/.minikube/key.pem (1675 bytes)
	I0914 18:09:03.545619   63448 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19643-8806/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-243449 san=[127.0.0.1 192.168.61.38 default-k8s-diff-port-243449 localhost minikube]
	I0914 18:09:03.858341   63448 provision.go:177] copyRemoteCerts
	I0914 18:09:03.858415   63448 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0914 18:09:03.858453   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetSSHHostname
	I0914 18:09:03.861376   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | domain default-k8s-diff-port-243449 has defined MAC address 52:54:00:6e:0b:a7 in network mk-default-k8s-diff-port-243449
	I0914 18:09:03.861659   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:0b:a7", ip: ""} in network mk-default-k8s-diff-port-243449: {Iface:virbr4 ExpiryTime:2024-09-14 19:08:56 +0000 UTC Type:0 Mac:52:54:00:6e:0b:a7 Iaid: IPaddr:192.168.61.38 Prefix:24 Hostname:default-k8s-diff-port-243449 Clientid:01:52:54:00:6e:0b:a7}
	I0914 18:09:03.861687   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | domain default-k8s-diff-port-243449 has defined IP address 192.168.61.38 and MAC address 52:54:00:6e:0b:a7 in network mk-default-k8s-diff-port-243449
	I0914 18:09:03.861863   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetSSHPort
	I0914 18:09:03.862062   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetSSHKeyPath
	I0914 18:09:03.862231   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetSSHUsername
	I0914 18:09:03.862359   63448 sshutil.go:53] new ssh client: &{IP:192.168.61.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/default-k8s-diff-port-243449/id_rsa Username:docker}
	I0914 18:09:03.944043   63448 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0914 18:09:03.968175   63448 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0914 18:09:03.990621   63448 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0914 18:09:04.012163   63448 provision.go:87] duration metric: took 473.28607ms to configureAuth
	I0914 18:09:04.012190   63448 buildroot.go:189] setting minikube options for container-runtime
	I0914 18:09:04.012364   63448 config.go:182] Loaded profile config "default-k8s-diff-port-243449": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0914 18:09:04.012431   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetSSHHostname
	I0914 18:09:04.015021   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | domain default-k8s-diff-port-243449 has defined MAC address 52:54:00:6e:0b:a7 in network mk-default-k8s-diff-port-243449
	I0914 18:09:04.015505   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:0b:a7", ip: ""} in network mk-default-k8s-diff-port-243449: {Iface:virbr4 ExpiryTime:2024-09-14 19:08:56 +0000 UTC Type:0 Mac:52:54:00:6e:0b:a7 Iaid: IPaddr:192.168.61.38 Prefix:24 Hostname:default-k8s-diff-port-243449 Clientid:01:52:54:00:6e:0b:a7}
	I0914 18:09:04.015553   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | domain default-k8s-diff-port-243449 has defined IP address 192.168.61.38 and MAC address 52:54:00:6e:0b:a7 in network mk-default-k8s-diff-port-243449
	I0914 18:09:04.015693   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetSSHPort
	I0914 18:09:04.015866   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetSSHKeyPath
	I0914 18:09:04.016035   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetSSHKeyPath
	I0914 18:09:04.016157   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetSSHUsername
	I0914 18:09:04.016277   63448 main.go:141] libmachine: Using SSH client type: native
	I0914 18:09:04.016479   63448 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.61.38 22 <nil> <nil>}
	I0914 18:09:04.016511   63448 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0914 18:09:04.234672   63448 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0914 18:09:04.234697   63448 machine.go:96] duration metric: took 1.051909541s to provisionDockerMachine
	I0914 18:09:04.234710   63448 start.go:293] postStartSetup for "default-k8s-diff-port-243449" (driver="kvm2")
	I0914 18:09:04.234721   63448 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0914 18:09:04.234766   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .DriverName
	I0914 18:09:04.235108   63448 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0914 18:09:04.235139   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetSSHHostname
	I0914 18:09:04.237583   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | domain default-k8s-diff-port-243449 has defined MAC address 52:54:00:6e:0b:a7 in network mk-default-k8s-diff-port-243449
	I0914 18:09:04.237964   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:0b:a7", ip: ""} in network mk-default-k8s-diff-port-243449: {Iface:virbr4 ExpiryTime:2024-09-14 19:08:56 +0000 UTC Type:0 Mac:52:54:00:6e:0b:a7 Iaid: IPaddr:192.168.61.38 Prefix:24 Hostname:default-k8s-diff-port-243449 Clientid:01:52:54:00:6e:0b:a7}
	I0914 18:09:04.237997   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | domain default-k8s-diff-port-243449 has defined IP address 192.168.61.38 and MAC address 52:54:00:6e:0b:a7 in network mk-default-k8s-diff-port-243449
	I0914 18:09:04.238237   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetSSHPort
	I0914 18:09:04.238491   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetSSHKeyPath
	I0914 18:09:04.238667   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetSSHUsername
	I0914 18:09:04.238798   63448 sshutil.go:53] new ssh client: &{IP:192.168.61.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/default-k8s-diff-port-243449/id_rsa Username:docker}
	I0914 18:09:04.320785   63448 ssh_runner.go:195] Run: cat /etc/os-release
	I0914 18:09:04.324837   63448 info.go:137] Remote host: Buildroot 2023.02.9
	I0914 18:09:04.324863   63448 filesync.go:126] Scanning /home/jenkins/minikube-integration/19643-8806/.minikube/addons for local assets ...
	I0914 18:09:04.324920   63448 filesync.go:126] Scanning /home/jenkins/minikube-integration/19643-8806/.minikube/files for local assets ...
	I0914 18:09:04.325001   63448 filesync.go:149] local asset: /home/jenkins/minikube-integration/19643-8806/.minikube/files/etc/ssl/certs/160162.pem -> 160162.pem in /etc/ssl/certs
	I0914 18:09:04.325091   63448 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0914 18:09:04.334235   63448 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/files/etc/ssl/certs/160162.pem --> /etc/ssl/certs/160162.pem (1708 bytes)
	I0914 18:09:04.357310   63448 start.go:296] duration metric: took 122.582935ms for postStartSetup
	I0914 18:09:04.357352   63448 fix.go:56] duration metric: took 19.25422843s for fixHost
	I0914 18:09:04.357373   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetSSHHostname
	I0914 18:09:04.360190   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | domain default-k8s-diff-port-243449 has defined MAC address 52:54:00:6e:0b:a7 in network mk-default-k8s-diff-port-243449
	I0914 18:09:04.360574   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:0b:a7", ip: ""} in network mk-default-k8s-diff-port-243449: {Iface:virbr4 ExpiryTime:2024-09-14 19:08:56 +0000 UTC Type:0 Mac:52:54:00:6e:0b:a7 Iaid: IPaddr:192.168.61.38 Prefix:24 Hostname:default-k8s-diff-port-243449 Clientid:01:52:54:00:6e:0b:a7}
	I0914 18:09:04.360601   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | domain default-k8s-diff-port-243449 has defined IP address 192.168.61.38 and MAC address 52:54:00:6e:0b:a7 in network mk-default-k8s-diff-port-243449
	I0914 18:09:04.360774   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetSSHPort
	I0914 18:09:04.360973   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetSSHKeyPath
	I0914 18:09:04.361163   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetSSHKeyPath
	I0914 18:09:04.361291   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetSSHUsername
	I0914 18:09:04.361479   63448 main.go:141] libmachine: Using SSH client type: native
	I0914 18:09:04.361658   63448 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.61.38 22 <nil> <nil>}
	I0914 18:09:04.361667   63448 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0914 18:09:04.466610   63448 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726337344.436836920
	
	I0914 18:09:04.466654   63448 fix.go:216] guest clock: 1726337344.436836920
	I0914 18:09:04.466665   63448 fix.go:229] Guest: 2024-09-14 18:09:04.43683692 +0000 UTC Remote: 2024-09-14 18:09:04.357356624 +0000 UTC m=+144.091633354 (delta=79.480296ms)
	I0914 18:09:04.466691   63448 fix.go:200] guest clock delta is within tolerance: 79.480296ms
	I0914 18:09:04.466702   63448 start.go:83] releasing machines lock for "default-k8s-diff-port-243449", held for 19.363604776s
	I0914 18:09:04.466737   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .DriverName
	I0914 18:09:04.466992   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetIP
	I0914 18:09:04.469873   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | domain default-k8s-diff-port-243449 has defined MAC address 52:54:00:6e:0b:a7 in network mk-default-k8s-diff-port-243449
	I0914 18:09:04.470148   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:0b:a7", ip: ""} in network mk-default-k8s-diff-port-243449: {Iface:virbr4 ExpiryTime:2024-09-14 19:08:56 +0000 UTC Type:0 Mac:52:54:00:6e:0b:a7 Iaid: IPaddr:192.168.61.38 Prefix:24 Hostname:default-k8s-diff-port-243449 Clientid:01:52:54:00:6e:0b:a7}
	I0914 18:09:04.470198   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | domain default-k8s-diff-port-243449 has defined IP address 192.168.61.38 and MAC address 52:54:00:6e:0b:a7 in network mk-default-k8s-diff-port-243449
	I0914 18:09:04.470364   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .DriverName
	I0914 18:09:04.470877   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .DriverName
	I0914 18:09:04.471098   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .DriverName
	I0914 18:09:04.471215   63448 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0914 18:09:04.471270   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetSSHHostname
	I0914 18:09:04.471322   63448 ssh_runner.go:195] Run: cat /version.json
	I0914 18:09:04.471346   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetSSHHostname
	I0914 18:09:04.474023   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | domain default-k8s-diff-port-243449 has defined MAC address 52:54:00:6e:0b:a7 in network mk-default-k8s-diff-port-243449
	I0914 18:09:04.474144   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | domain default-k8s-diff-port-243449 has defined MAC address 52:54:00:6e:0b:a7 in network mk-default-k8s-diff-port-243449
	I0914 18:09:04.474345   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:0b:a7", ip: ""} in network mk-default-k8s-diff-port-243449: {Iface:virbr4 ExpiryTime:2024-09-14 19:08:56 +0000 UTC Type:0 Mac:52:54:00:6e:0b:a7 Iaid: IPaddr:192.168.61.38 Prefix:24 Hostname:default-k8s-diff-port-243449 Clientid:01:52:54:00:6e:0b:a7}
	I0914 18:09:04.474374   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | domain default-k8s-diff-port-243449 has defined IP address 192.168.61.38 and MAC address 52:54:00:6e:0b:a7 in network mk-default-k8s-diff-port-243449
	I0914 18:09:04.474471   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetSSHPort
	I0914 18:09:04.474616   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:0b:a7", ip: ""} in network mk-default-k8s-diff-port-243449: {Iface:virbr4 ExpiryTime:2024-09-14 19:08:56 +0000 UTC Type:0 Mac:52:54:00:6e:0b:a7 Iaid: IPaddr:192.168.61.38 Prefix:24 Hostname:default-k8s-diff-port-243449 Clientid:01:52:54:00:6e:0b:a7}
	I0914 18:09:04.474631   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetSSHKeyPath
	I0914 18:09:04.474637   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | domain default-k8s-diff-port-243449 has defined IP address 192.168.61.38 and MAC address 52:54:00:6e:0b:a7 in network mk-default-k8s-diff-port-243449
	I0914 18:09:04.474812   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetSSHUsername
	I0914 18:09:04.474816   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetSSHPort
	I0914 18:09:04.474996   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetSSHKeyPath
	I0914 18:09:04.474987   63448 sshutil.go:53] new ssh client: &{IP:192.168.61.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/default-k8s-diff-port-243449/id_rsa Username:docker}
	I0914 18:09:04.475136   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetSSHUsername
	I0914 18:09:04.475274   63448 sshutil.go:53] new ssh client: &{IP:192.168.61.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/default-k8s-diff-port-243449/id_rsa Username:docker}
	I0914 18:09:04.587233   63448 ssh_runner.go:195] Run: systemctl --version
	I0914 18:09:04.593065   63448 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0914 18:09:04.738721   63448 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0914 18:09:04.745472   63448 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0914 18:09:04.745539   63448 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0914 18:09:04.765742   63448 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0914 18:09:04.765806   63448 start.go:495] detecting cgroup driver to use...
	I0914 18:09:04.765909   63448 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0914 18:09:04.782234   63448 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0914 18:09:04.797259   63448 docker.go:217] disabling cri-docker service (if available) ...
	I0914 18:09:04.797322   63448 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0914 18:09:04.811794   63448 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0914 18:09:04.826487   63448 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0914 18:09:04.953417   63448 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0914 18:09:05.102410   63448 docker.go:233] disabling docker service ...
	I0914 18:09:05.102491   63448 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0914 18:09:05.117443   63448 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0914 18:09:05.131147   63448 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0914 18:09:05.278483   63448 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0914 18:09:00.373968   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:00.874316   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:01.373792   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:01.873684   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:02.373524   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:02.874399   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:03.373728   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:03.874267   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:04.374018   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:04.873685   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:05.401195   63448 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0914 18:09:05.415794   63448 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0914 18:09:05.434594   63448 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0914 18:09:05.434660   63448 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 18:09:05.445566   63448 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0914 18:09:05.445643   63448 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 18:09:05.456690   63448 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 18:09:05.468044   63448 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 18:09:05.479719   63448 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0914 18:09:05.491019   63448 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 18:09:05.501739   63448 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 18:09:05.520582   63448 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 18:09:05.531469   63448 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0914 18:09:05.541741   63448 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0914 18:09:05.541809   63448 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0914 18:09:05.561648   63448 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0914 18:09:05.571882   63448 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 18:09:05.706592   63448 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0914 18:09:05.811522   63448 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0914 18:09:05.811599   63448 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0914 18:09:05.816676   63448 start.go:563] Will wait 60s for crictl version
	I0914 18:09:05.816745   63448 ssh_runner.go:195] Run: which crictl
	I0914 18:09:05.820367   63448 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0914 18:09:05.862564   63448 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0914 18:09:05.862637   63448 ssh_runner.go:195] Run: crio --version
	I0914 18:09:05.893106   63448 ssh_runner.go:195] Run: crio --version
	I0914 18:09:05.927136   63448 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0914 18:09:04.492847   62207 main.go:141] libmachine: (no-preload-168587) Calling .Start
	I0914 18:09:04.493070   62207 main.go:141] libmachine: (no-preload-168587) Ensuring networks are active...
	I0914 18:09:04.493844   62207 main.go:141] libmachine: (no-preload-168587) Ensuring network default is active
	I0914 18:09:04.494193   62207 main.go:141] libmachine: (no-preload-168587) Ensuring network mk-no-preload-168587 is active
	I0914 18:09:04.494614   62207 main.go:141] libmachine: (no-preload-168587) Getting domain xml...
	I0914 18:09:04.495434   62207 main.go:141] libmachine: (no-preload-168587) Creating domain...
	I0914 18:09:05.801470   62207 main.go:141] libmachine: (no-preload-168587) Waiting to get IP...
	I0914 18:09:05.802621   62207 main.go:141] libmachine: (no-preload-168587) DBG | domain no-preload-168587 has defined MAC address 52:54:00:4c:40:8a in network mk-no-preload-168587
	I0914 18:09:05.803215   62207 main.go:141] libmachine: (no-preload-168587) DBG | unable to find current IP address of domain no-preload-168587 in network mk-no-preload-168587
	I0914 18:09:05.803351   62207 main.go:141] libmachine: (no-preload-168587) DBG | I0914 18:09:05.803229   64231 retry.go:31] will retry after 206.528002ms: waiting for machine to come up
	I0914 18:09:06.011556   62207 main.go:141] libmachine: (no-preload-168587) DBG | domain no-preload-168587 has defined MAC address 52:54:00:4c:40:8a in network mk-no-preload-168587
	I0914 18:09:06.012027   62207 main.go:141] libmachine: (no-preload-168587) DBG | unable to find current IP address of domain no-preload-168587 in network mk-no-preload-168587
	I0914 18:09:06.012063   62207 main.go:141] libmachine: (no-preload-168587) DBG | I0914 18:09:06.011977   64231 retry.go:31] will retry after 252.283679ms: waiting for machine to come up
	I0914 18:09:06.266621   62207 main.go:141] libmachine: (no-preload-168587) DBG | domain no-preload-168587 has defined MAC address 52:54:00:4c:40:8a in network mk-no-preload-168587
	I0914 18:09:06.267145   62207 main.go:141] libmachine: (no-preload-168587) DBG | unable to find current IP address of domain no-preload-168587 in network mk-no-preload-168587
	I0914 18:09:06.267178   62207 main.go:141] libmachine: (no-preload-168587) DBG | I0914 18:09:06.267093   64231 retry.go:31] will retry after 376.426781ms: waiting for machine to come up
	I0914 18:09:06.644639   62207 main.go:141] libmachine: (no-preload-168587) DBG | domain no-preload-168587 has defined MAC address 52:54:00:4c:40:8a in network mk-no-preload-168587
	I0914 18:09:06.645212   62207 main.go:141] libmachine: (no-preload-168587) DBG | unable to find current IP address of domain no-preload-168587 in network mk-no-preload-168587
	I0914 18:09:06.645245   62207 main.go:141] libmachine: (no-preload-168587) DBG | I0914 18:09:06.645161   64231 retry.go:31] will retry after 518.904946ms: waiting for machine to come up
	I0914 18:09:06.584604   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:09:09.085179   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:09:05.928171   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetIP
	I0914 18:09:05.931131   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | domain default-k8s-diff-port-243449 has defined MAC address 52:54:00:6e:0b:a7 in network mk-default-k8s-diff-port-243449
	I0914 18:09:05.931584   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:0b:a7", ip: ""} in network mk-default-k8s-diff-port-243449: {Iface:virbr4 ExpiryTime:2024-09-14 19:08:56 +0000 UTC Type:0 Mac:52:54:00:6e:0b:a7 Iaid: IPaddr:192.168.61.38 Prefix:24 Hostname:default-k8s-diff-port-243449 Clientid:01:52:54:00:6e:0b:a7}
	I0914 18:09:05.931662   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | domain default-k8s-diff-port-243449 has defined IP address 192.168.61.38 and MAC address 52:54:00:6e:0b:a7 in network mk-default-k8s-diff-port-243449
	I0914 18:09:05.931826   63448 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0914 18:09:05.935729   63448 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0914 18:09:05.947741   63448 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-243449 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19643/minikube-v1.34.0-1726281733-19643-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726281268-19643@sha256:cce8be0c1ac4e3d852132008ef1cc1dcf5b79f708d025db83f146ae65db32e8e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-243449 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.38 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0914 18:09:05.947872   63448 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0914 18:09:05.947935   63448 ssh_runner.go:195] Run: sudo crictl images --output json
	I0914 18:09:05.984371   63448 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0914 18:09:05.984473   63448 ssh_runner.go:195] Run: which lz4
	I0914 18:09:05.988311   63448 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0914 18:09:05.992088   63448 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0914 18:09:05.992123   63448 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I0914 18:09:07.311157   63448 crio.go:462] duration metric: took 1.322885925s to copy over tarball
	I0914 18:09:07.311297   63448 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0914 18:09:09.472639   63448 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.161311106s)
	I0914 18:09:09.472663   63448 crio.go:469] duration metric: took 2.161473132s to extract the tarball
	I0914 18:09:09.472670   63448 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0914 18:09:09.508740   63448 ssh_runner.go:195] Run: sudo crictl images --output json
	I0914 18:09:09.554508   63448 crio.go:514] all images are preloaded for cri-o runtime.
	I0914 18:09:09.554533   63448 cache_images.go:84] Images are preloaded, skipping loading
	I0914 18:09:09.554548   63448 kubeadm.go:934] updating node { 192.168.61.38 8444 v1.31.1 crio true true} ...
	I0914 18:09:09.554657   63448 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-243449 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.38
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-243449 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0914 18:09:09.554722   63448 ssh_runner.go:195] Run: crio config
	I0914 18:09:09.603693   63448 cni.go:84] Creating CNI manager for ""
	I0914 18:09:09.603715   63448 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0914 18:09:09.603727   63448 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0914 18:09:09.603745   63448 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.38 APIServerPort:8444 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-243449 NodeName:default-k8s-diff-port-243449 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.38"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.38 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0914 18:09:09.603879   63448 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.38
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-243449"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.38
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.38"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0914 18:09:09.603935   63448 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0914 18:09:09.613786   63448 binaries.go:44] Found k8s binaries, skipping transfer
	I0914 18:09:09.613863   63448 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0914 18:09:09.623172   63448 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (327 bytes)
	I0914 18:09:09.641437   63448 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0914 18:09:09.657677   63448 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2169 bytes)
	I0914 18:09:09.675042   63448 ssh_runner.go:195] Run: grep 192.168.61.38	control-plane.minikube.internal$ /etc/hosts
	I0914 18:09:09.678885   63448 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.38	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0914 18:09:09.694466   63448 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 18:09:09.823504   63448 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0914 18:09:09.840638   63448 certs.go:68] Setting up /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/default-k8s-diff-port-243449 for IP: 192.168.61.38
	I0914 18:09:09.840658   63448 certs.go:194] generating shared ca certs ...
	I0914 18:09:09.840677   63448 certs.go:226] acquiring lock for ca certs: {Name:mkb663a3180967f5f94f0c355b2cd55067394331 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 18:09:09.840827   63448 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19643-8806/.minikube/ca.key
	I0914 18:09:09.840869   63448 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19643-8806/.minikube/proxy-client-ca.key
	I0914 18:09:09.840879   63448 certs.go:256] generating profile certs ...
	I0914 18:09:09.841046   63448 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/default-k8s-diff-port-243449/client.key
	I0914 18:09:09.841147   63448 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/default-k8s-diff-port-243449/apiserver.key.68770133
	I0914 18:09:09.841231   63448 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/default-k8s-diff-port-243449/proxy-client.key
	I0914 18:09:09.841342   63448 certs.go:484] found cert: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/16016.pem (1338 bytes)
	W0914 18:09:09.841370   63448 certs.go:480] ignoring /home/jenkins/minikube-integration/19643-8806/.minikube/certs/16016_empty.pem, impossibly tiny 0 bytes
	I0914 18:09:09.841377   63448 certs.go:484] found cert: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca-key.pem (1679 bytes)
	I0914 18:09:09.841398   63448 certs.go:484] found cert: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca.pem (1082 bytes)
	I0914 18:09:09.841425   63448 certs.go:484] found cert: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/cert.pem (1123 bytes)
	I0914 18:09:09.841447   63448 certs.go:484] found cert: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/key.pem (1675 bytes)
	I0914 18:09:09.841499   63448 certs.go:484] found cert: /home/jenkins/minikube-integration/19643-8806/.minikube/files/etc/ssl/certs/160162.pem (1708 bytes)
	I0914 18:09:09.842211   63448 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0914 18:09:09.883406   63448 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0914 18:09:09.914134   63448 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0914 18:09:09.941343   63448 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0914 18:09:09.990870   63448 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/default-k8s-diff-port-243449/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0914 18:09:10.040821   63448 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/default-k8s-diff-port-243449/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0914 18:09:10.065238   63448 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/default-k8s-diff-port-243449/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0914 18:09:10.089901   63448 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/default-k8s-diff-port-243449/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0914 18:09:10.114440   63448 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/files/etc/ssl/certs/160162.pem --> /usr/share/ca-certificates/160162.pem (1708 bytes)
	I0914 18:09:10.138963   63448 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0914 18:09:10.162828   63448 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/certs/16016.pem --> /usr/share/ca-certificates/16016.pem (1338 bytes)
	I0914 18:09:10.185702   63448 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0914 18:09:10.201251   63448 ssh_runner.go:195] Run: openssl version
	I0914 18:09:10.206904   63448 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/160162.pem && ln -fs /usr/share/ca-certificates/160162.pem /etc/ssl/certs/160162.pem"
	I0914 18:09:10.216966   63448 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/160162.pem
	I0914 18:09:10.221437   63448 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 14 17:01 /usr/share/ca-certificates/160162.pem
	I0914 18:09:10.221506   63448 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/160162.pem
	I0914 18:09:10.227033   63448 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/160162.pem /etc/ssl/certs/3ec20f2e.0"
	I0914 18:09:10.237039   63448 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0914 18:09:10.247244   63448 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0914 18:09:10.251434   63448 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 14 16:44 /usr/share/ca-certificates/minikubeCA.pem
	I0914 18:09:10.251494   63448 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0914 18:09:10.257187   63448 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0914 18:09:10.267490   63448 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16016.pem && ln -fs /usr/share/ca-certificates/16016.pem /etc/ssl/certs/16016.pem"
	I0914 18:09:10.277622   63448 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16016.pem
	I0914 18:09:10.281705   63448 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 14 17:01 /usr/share/ca-certificates/16016.pem
	I0914 18:09:10.281789   63448 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16016.pem
	I0914 18:09:10.287013   63448 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16016.pem /etc/ssl/certs/51391683.0"
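The repeated pattern above (test for the PEM, hash it with openssl, then link it as /etc/ssl/certs/<hash>.0) is how the extra CA certificates get published into the guest's OpenSSL trust store. A minimal Go sketch of that pattern, assuming openssl is on PATH; the helper name is invented and this is not minikube's actual implementation:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkCert computes the OpenSSL subject hash of certPath and creates the
// /etc/ssl/certs/<hash>.0 symlink that OpenSSL uses for CA lookup.
func linkCert(certPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", certPath, err)
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	// Replace any stale link, mirroring the `ln -fs` commands in the log above.
	_ = os.Remove(link)
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkCert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}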
	I0914 18:09:10.296942   63448 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0914 18:09:05.374034   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:05.873992   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:06.374407   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:06.873737   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:07.373665   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:07.874486   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:08.374017   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:08.874365   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:09.374221   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:09.874108   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:07.165576   62207 main.go:141] libmachine: (no-preload-168587) DBG | domain no-preload-168587 has defined MAC address 52:54:00:4c:40:8a in network mk-no-preload-168587
	I0914 18:09:07.166187   62207 main.go:141] libmachine: (no-preload-168587) DBG | unable to find current IP address of domain no-preload-168587 in network mk-no-preload-168587
	I0914 18:09:07.166219   62207 main.go:141] libmachine: (no-preload-168587) DBG | I0914 18:09:07.166125   64231 retry.go:31] will retry after 631.376012ms: waiting for machine to come up
	I0914 18:09:07.798978   62207 main.go:141] libmachine: (no-preload-168587) DBG | domain no-preload-168587 has defined MAC address 52:54:00:4c:40:8a in network mk-no-preload-168587
	I0914 18:09:07.799450   62207 main.go:141] libmachine: (no-preload-168587) DBG | unable to find current IP address of domain no-preload-168587 in network mk-no-preload-168587
	I0914 18:09:07.799478   62207 main.go:141] libmachine: (no-preload-168587) DBG | I0914 18:09:07.799404   64231 retry.go:31] will retry after 668.764795ms: waiting for machine to come up
	I0914 18:09:08.470207   62207 main.go:141] libmachine: (no-preload-168587) DBG | domain no-preload-168587 has defined MAC address 52:54:00:4c:40:8a in network mk-no-preload-168587
	I0914 18:09:08.470613   62207 main.go:141] libmachine: (no-preload-168587) DBG | unable to find current IP address of domain no-preload-168587 in network mk-no-preload-168587
	I0914 18:09:08.470640   62207 main.go:141] libmachine: (no-preload-168587) DBG | I0914 18:09:08.470559   64231 retry.go:31] will retry after 943.595216ms: waiting for machine to come up
	I0914 18:09:09.415274   62207 main.go:141] libmachine: (no-preload-168587) DBG | domain no-preload-168587 has defined MAC address 52:54:00:4c:40:8a in network mk-no-preload-168587
	I0914 18:09:09.415721   62207 main.go:141] libmachine: (no-preload-168587) DBG | unable to find current IP address of domain no-preload-168587 in network mk-no-preload-168587
	I0914 18:09:09.415751   62207 main.go:141] libmachine: (no-preload-168587) DBG | I0914 18:09:09.415675   64231 retry.go:31] will retry after 956.638818ms: waiting for machine to come up
	I0914 18:09:10.374297   62207 main.go:141] libmachine: (no-preload-168587) DBG | domain no-preload-168587 has defined MAC address 52:54:00:4c:40:8a in network mk-no-preload-168587
	I0914 18:09:10.374875   62207 main.go:141] libmachine: (no-preload-168587) DBG | unable to find current IP address of domain no-preload-168587 in network mk-no-preload-168587
	I0914 18:09:10.374902   62207 main.go:141] libmachine: (no-preload-168587) DBG | I0914 18:09:10.374822   64231 retry.go:31] will retry after 1.703915418s: waiting for machine to come up
	I0914 18:09:11.583370   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:09:14.082919   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:09:10.301352   63448 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0914 18:09:10.307276   63448 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0914 18:09:10.313391   63448 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0914 18:09:10.319883   63448 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0914 18:09:10.325671   63448 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0914 18:09:10.331445   63448 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
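The `openssl x509 -checkend 86400` runs above only verify that each control-plane certificate stays valid for at least another 24 hours. The same check expressed in Go, as a rough standard-library sketch (the helper name is invented):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the first certificate in the PEM file at path
// expires within d; `openssl x509 -checkend <seconds>` exits non-zero in the
// same situation.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	fmt.Println("expires within 24h:", soon)
}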
	I0914 18:09:10.336855   63448 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-243449 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19643/minikube-v1.34.0-1726281733-19643-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726281268-19643@sha256:cce8be0c1ac4e3d852132008ef1cc1dcf5b79f708d025db83f146ae65db32e8e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-243449 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.38 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0914 18:09:10.336953   63448 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0914 18:09:10.337019   63448 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0914 18:09:10.372899   63448 cri.go:89] found id: ""
	I0914 18:09:10.372988   63448 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0914 18:09:10.386897   63448 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0914 18:09:10.386920   63448 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0914 18:09:10.386978   63448 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0914 18:09:10.399165   63448 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0914 18:09:10.400212   63448 kubeconfig.go:125] found "default-k8s-diff-port-243449" server: "https://192.168.61.38:8444"
	I0914 18:09:10.402449   63448 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0914 18:09:10.414129   63448 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.38
	I0914 18:09:10.414192   63448 kubeadm.go:1160] stopping kube-system containers ...
	I0914 18:09:10.414207   63448 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0914 18:09:10.414276   63448 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0914 18:09:10.454549   63448 cri.go:89] found id: ""
	I0914 18:09:10.454627   63448 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0914 18:09:10.472261   63448 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0914 18:09:10.481693   63448 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0914 18:09:10.481724   63448 kubeadm.go:157] found existing configuration files:
	
	I0914 18:09:10.481772   63448 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0914 18:09:10.492205   63448 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0914 18:09:10.492283   63448 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0914 18:09:10.502923   63448 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0914 18:09:10.511620   63448 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0914 18:09:10.511688   63448 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0914 18:09:10.520978   63448 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0914 18:09:10.529590   63448 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0914 18:09:10.529652   63448 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0914 18:09:10.538602   63448 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0914 18:09:10.546968   63448 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0914 18:09:10.547037   63448 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0914 18:09:10.556280   63448 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0914 18:09:10.565471   63448 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 18:09:10.670297   63448 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 18:09:11.611646   63448 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0914 18:09:11.858308   63448 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 18:09:11.942761   63448 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
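For the restart path, the individual kubeadm init phases above are re-run against the existing /var/tmp/minikube/kubeadm.yaml instead of doing a full init. A rough Go sketch of driving that phase sequence directly (illustrative only; in the log these commands are piped through the ssh runner):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// Same order as the log: certs, kubeconfig, kubelet-start, control-plane, etcd.
	phases := [][]string{
		{"init", "phase", "certs", "all"},
		{"init", "phase", "kubeconfig", "all"},
		{"init", "phase", "kubelet-start"},
		{"init", "phase", "control-plane", "all"},
		{"init", "phase", "etcd", "local"},
	}
	for _, phase := range phases {
		args := append(phase, "--config", "/var/tmp/minikube/kubeadm.yaml")
		cmd := exec.Command("kubeadm", args...)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			fmt.Fprintf(os.Stderr, "kubeadm %v failed: %v\n", phase, err)
			os.Exit(1)
		}
	}
}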
	I0914 18:09:12.018144   63448 api_server.go:52] waiting for apiserver process to appear ...
	I0914 18:09:12.018251   63448 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:12.518933   63448 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:13.019098   63448 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:13.518297   63448 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:14.018327   63448 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:14.033874   63448 api_server.go:72] duration metric: took 2.015718891s to wait for apiserver process to appear ...
	I0914 18:09:14.033902   63448 api_server.go:88] waiting for apiserver healthz status ...
	I0914 18:09:14.033926   63448 api_server.go:253] Checking apiserver healthz at https://192.168.61.38:8444/healthz ...
	I0914 18:09:14.034534   63448 api_server.go:269] stopped: https://192.168.61.38:8444/healthz: Get "https://192.168.61.38:8444/healthz": dial tcp 192.168.61.38:8444: connect: connection refused
	I0914 18:09:14.534065   63448 api_server.go:253] Checking apiserver healthz at https://192.168.61.38:8444/healthz ...
	I0914 18:09:10.373394   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:10.873498   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:11.373841   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:11.873492   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:12.374179   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:12.873586   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:13.374405   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:13.873518   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:14.374018   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:14.873905   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:12.080547   62207 main.go:141] libmachine: (no-preload-168587) DBG | domain no-preload-168587 has defined MAC address 52:54:00:4c:40:8a in network mk-no-preload-168587
	I0914 18:09:12.081149   62207 main.go:141] libmachine: (no-preload-168587) DBG | unable to find current IP address of domain no-preload-168587 in network mk-no-preload-168587
	I0914 18:09:12.081174   62207 main.go:141] libmachine: (no-preload-168587) DBG | I0914 18:09:12.081095   64231 retry.go:31] will retry after 1.634645735s: waiting for machine to come up
	I0914 18:09:13.717239   62207 main.go:141] libmachine: (no-preload-168587) DBG | domain no-preload-168587 has defined MAC address 52:54:00:4c:40:8a in network mk-no-preload-168587
	I0914 18:09:13.717787   62207 main.go:141] libmachine: (no-preload-168587) DBG | unable to find current IP address of domain no-preload-168587 in network mk-no-preload-168587
	I0914 18:09:13.717821   62207 main.go:141] libmachine: (no-preload-168587) DBG | I0914 18:09:13.717731   64231 retry.go:31] will retry after 2.524549426s: waiting for machine to come up
	I0914 18:09:16.244729   62207 main.go:141] libmachine: (no-preload-168587) DBG | domain no-preload-168587 has defined MAC address 52:54:00:4c:40:8a in network mk-no-preload-168587
	I0914 18:09:16.245132   62207 main.go:141] libmachine: (no-preload-168587) DBG | unable to find current IP address of domain no-preload-168587 in network mk-no-preload-168587
	I0914 18:09:16.245162   62207 main.go:141] libmachine: (no-preload-168587) DBG | I0914 18:09:16.245072   64231 retry.go:31] will retry after 2.539965892s: waiting for machine to come up
	I0914 18:09:16.083603   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:09:18.581965   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:09:16.427071   63448 api_server.go:279] https://192.168.61.38:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0914 18:09:16.427109   63448 api_server.go:103] status: https://192.168.61.38:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0914 18:09:16.427156   63448 api_server.go:253] Checking apiserver healthz at https://192.168.61.38:8444/healthz ...
	I0914 18:09:16.440812   63448 api_server.go:279] https://192.168.61.38:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0914 18:09:16.440848   63448 api_server.go:103] status: https://192.168.61.38:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0914 18:09:16.534060   63448 api_server.go:253] Checking apiserver healthz at https://192.168.61.38:8444/healthz ...
	I0914 18:09:16.593356   63448 api_server.go:279] https://192.168.61.38:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0914 18:09:16.593412   63448 api_server.go:103] status: https://192.168.61.38:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0914 18:09:17.034545   63448 api_server.go:253] Checking apiserver healthz at https://192.168.61.38:8444/healthz ...
	I0914 18:09:17.039094   63448 api_server.go:279] https://192.168.61.38:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0914 18:09:17.039131   63448 api_server.go:103] status: https://192.168.61.38:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0914 18:09:17.534668   63448 api_server.go:253] Checking apiserver healthz at https://192.168.61.38:8444/healthz ...
	I0914 18:09:17.543018   63448 api_server.go:279] https://192.168.61.38:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0914 18:09:17.543053   63448 api_server.go:103] status: https://192.168.61.38:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0914 18:09:18.034612   63448 api_server.go:253] Checking apiserver healthz at https://192.168.61.38:8444/healthz ...
	I0914 18:09:18.039042   63448 api_server.go:279] https://192.168.61.38:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0914 18:09:18.039071   63448 api_server.go:103] status: https://192.168.61.38:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0914 18:09:18.534675   63448 api_server.go:253] Checking apiserver healthz at https://192.168.61.38:8444/healthz ...
	I0914 18:09:18.540612   63448 api_server.go:279] https://192.168.61.38:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0914 18:09:18.540637   63448 api_server.go:103] status: https://192.168.61.38:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0914 18:09:19.034196   63448 api_server.go:253] Checking apiserver healthz at https://192.168.61.38:8444/healthz ...
	I0914 18:09:19.040397   63448 api_server.go:279] https://192.168.61.38:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0914 18:09:19.040429   63448 api_server.go:103] status: https://192.168.61.38:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0914 18:09:19.535035   63448 api_server.go:253] Checking apiserver healthz at https://192.168.61.38:8444/healthz ...
	I0914 18:09:19.540910   63448 api_server.go:279] https://192.168.61.38:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0914 18:09:19.540940   63448 api_server.go:103] status: https://192.168.61.38:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0914 18:09:20.034275   63448 api_server.go:253] Checking apiserver healthz at https://192.168.61.38:8444/healthz ...
	I0914 18:09:20.038541   63448 api_server.go:279] https://192.168.61.38:8444/healthz returned 200:
	ok
	I0914 18:09:20.044704   63448 api_server.go:141] control plane version: v1.31.1
	I0914 18:09:20.044734   63448 api_server.go:131] duration metric: took 6.010822563s to wait for apiserver health ...
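The 403, then 500, then 200 progression above is the expected sequence while the restarted apiserver finishes its post-start hooks; the wait is simply a poll of /healthz until it returns 200 or a deadline passes. A minimal sketch of such a loop, reusing the endpoint from the log and skipping TLS verification only because this is an illustration:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls url until it returns HTTP 200 or the timeout expires.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 2 * time.Second,
		// Illustration only: real code would authenticate with the cluster CA
		// instead of disabling verification.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // healthz reported "ok"
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver not healthy after %s", timeout)
}

func main() {
	fmt.Println(waitForHealthz("https://192.168.61.38:8444/healthz", 2*time.Minute))
}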
	I0914 18:09:20.044744   63448 cni.go:84] Creating CNI manager for ""
	I0914 18:09:20.044752   63448 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0914 18:09:20.046616   63448 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0914 18:09:20.047724   63448 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0914 18:09:20.058152   63448 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
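The 496-byte file copied above is the bridge CNI configuration; its exact contents are not shown in the log. Purely as an illustration of the conflist format, a generic bridge configuration in that style could be written like this (all values are placeholders, not the actual file):

package main

import "os"

// A generic CNI bridge configuration in the conflist format. The subnet and
// plugin options are placeholders, not the file minikube writes.
const exampleConflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/16"
      }
    },
    {
      "type": "portmap",
      "capabilities": { "portMappings": true }
    }
  ]
}`

func main() {
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(exampleConflist), 0o644); err != nil {
		panic(err)
	}
}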
	I0914 18:09:20.077880   63448 system_pods.go:43] waiting for kube-system pods to appear ...
	I0914 18:09:20.090089   63448 system_pods.go:59] 8 kube-system pods found
	I0914 18:09:20.090135   63448 system_pods.go:61] "coredns-7c65d6cfc9-8v8s7" [896b4fde-d17e-43a3-b7c8-b710e2e70e2c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0914 18:09:20.090148   63448 system_pods.go:61] "etcd-default-k8s-diff-port-243449" [9201493d-45db-44f4-948d-34e1d1ddee8f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0914 18:09:20.090178   63448 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-243449" [052b85bf-0f3b-4ace-9301-99e53f91cfcf] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0914 18:09:20.090192   63448 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-243449" [51214b37-f5e8-4037-83ff-1fd09b93e008] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0914 18:09:20.090199   63448 system_pods.go:61] "kube-proxy-gbkqm" [4308aacf-ea0a-4bba-8598-85ffaf959b7e] Running
	I0914 18:09:20.090210   63448 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-243449" [b6eacf30-47ec-4f8d-968c-195661a2a732] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0914 18:09:20.090219   63448 system_pods.go:61] "metrics-server-6867b74b74-7v8dr" [90be95af-c779-4b31-b261-2c4020a34280] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 18:09:20.090226   63448 system_pods.go:61] "storage-provisioner" [2e814601-a19a-4848-bed5-d9a29ffb3b5d] Running
	I0914 18:09:20.090236   63448 system_pods.go:74] duration metric: took 12.327834ms to wait for pod list to return data ...
	I0914 18:09:20.090248   63448 node_conditions.go:102] verifying NodePressure condition ...
	I0914 18:09:20.094429   63448 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0914 18:09:20.094455   63448 node_conditions.go:123] node cpu capacity is 2
	I0914 18:09:20.094468   63448 node_conditions.go:105] duration metric: took 4.21448ms to run NodePressure ...
	I0914 18:09:20.094486   63448 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 18:09:15.374447   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:15.873830   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:16.373497   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:16.874326   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:17.373994   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:17.873394   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:18.373596   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:18.874350   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:19.374434   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:19.873774   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:20.357111   63448 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0914 18:09:20.361447   63448 kubeadm.go:739] kubelet initialised
	I0914 18:09:20.361469   63448 kubeadm.go:740] duration metric: took 4.331134ms waiting for restarted kubelet to initialise ...
	I0914 18:09:20.361479   63448 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 18:09:20.367027   63448 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-8v8s7" in "kube-system" namespace to be "Ready" ...
	I0914 18:09:20.371669   63448 pod_ready.go:98] node "default-k8s-diff-port-243449" hosting pod "coredns-7c65d6cfc9-8v8s7" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-243449" has status "Ready":"False"
	I0914 18:09:20.371697   63448 pod_ready.go:82] duration metric: took 4.644689ms for pod "coredns-7c65d6cfc9-8v8s7" in "kube-system" namespace to be "Ready" ...
	E0914 18:09:20.371706   63448 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-243449" hosting pod "coredns-7c65d6cfc9-8v8s7" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-243449" has status "Ready":"False"
	I0914 18:09:20.371714   63448 pod_ready.go:79] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-243449" in "kube-system" namespace to be "Ready" ...
	I0914 18:09:20.376461   63448 pod_ready.go:98] node "default-k8s-diff-port-243449" hosting pod "etcd-default-k8s-diff-port-243449" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-243449" has status "Ready":"False"
	I0914 18:09:20.376486   63448 pod_ready.go:82] duration metric: took 4.764316ms for pod "etcd-default-k8s-diff-port-243449" in "kube-system" namespace to be "Ready" ...
	E0914 18:09:20.376497   63448 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-243449" hosting pod "etcd-default-k8s-diff-port-243449" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-243449" has status "Ready":"False"
	I0914 18:09:20.376506   63448 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-243449" in "kube-system" namespace to be "Ready" ...
	I0914 18:09:20.380607   63448 pod_ready.go:98] node "default-k8s-diff-port-243449" hosting pod "kube-apiserver-default-k8s-diff-port-243449" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-243449" has status "Ready":"False"
	I0914 18:09:20.380632   63448 pod_ready.go:82] duration metric: took 4.117696ms for pod "kube-apiserver-default-k8s-diff-port-243449" in "kube-system" namespace to be "Ready" ...
	E0914 18:09:20.380642   63448 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-243449" hosting pod "kube-apiserver-default-k8s-diff-port-243449" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-243449" has status "Ready":"False"
	I0914 18:09:20.380649   63448 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-243449" in "kube-system" namespace to be "Ready" ...
	I0914 18:09:20.481883   63448 pod_ready.go:98] node "default-k8s-diff-port-243449" hosting pod "kube-controller-manager-default-k8s-diff-port-243449" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-243449" has status "Ready":"False"
	I0914 18:09:20.481920   63448 pod_ready.go:82] duration metric: took 101.262101ms for pod "kube-controller-manager-default-k8s-diff-port-243449" in "kube-system" namespace to be "Ready" ...
	E0914 18:09:20.481935   63448 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-243449" hosting pod "kube-controller-manager-default-k8s-diff-port-243449" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-243449" has status "Ready":"False"
	I0914 18:09:20.481965   63448 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-gbkqm" in "kube-system" namespace to be "Ready" ...
	I0914 18:09:20.881501   63448 pod_ready.go:98] node "default-k8s-diff-port-243449" hosting pod "kube-proxy-gbkqm" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-243449" has status "Ready":"False"
	I0914 18:09:20.881541   63448 pod_ready.go:82] duration metric: took 399.559576ms for pod "kube-proxy-gbkqm" in "kube-system" namespace to be "Ready" ...
	E0914 18:09:20.881556   63448 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-243449" hosting pod "kube-proxy-gbkqm" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-243449" has status "Ready":"False"
	I0914 18:09:20.881566   63448 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-243449" in "kube-system" namespace to be "Ready" ...
	I0914 18:09:21.282414   63448 pod_ready.go:98] node "default-k8s-diff-port-243449" hosting pod "kube-scheduler-default-k8s-diff-port-243449" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-243449" has status "Ready":"False"
	I0914 18:09:21.282446   63448 pod_ready.go:82] duration metric: took 400.860884ms for pod "kube-scheduler-default-k8s-diff-port-243449" in "kube-system" namespace to be "Ready" ...
	E0914 18:09:21.282463   63448 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-243449" hosting pod "kube-scheduler-default-k8s-diff-port-243449" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-243449" has status "Ready":"False"
	I0914 18:09:21.282472   63448 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace to be "Ready" ...
	I0914 18:09:21.681717   63448 pod_ready.go:98] node "default-k8s-diff-port-243449" hosting pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-243449" has status "Ready":"False"
	I0914 18:09:21.681757   63448 pod_ready.go:82] duration metric: took 399.273892ms for pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace to be "Ready" ...
	E0914 18:09:21.681773   63448 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-243449" hosting pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-243449" has status "Ready":"False"
	I0914 18:09:21.681783   63448 pod_ready.go:39] duration metric: took 1.320292845s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 18:09:21.681825   63448 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0914 18:09:21.693644   63448 ops.go:34] apiserver oom_adj: -16
	I0914 18:09:21.693682   63448 kubeadm.go:597] duration metric: took 11.306754096s to restartPrimaryControlPlane
	I0914 18:09:21.693696   63448 kubeadm.go:394] duration metric: took 11.356851178s to StartCluster
	I0914 18:09:21.693719   63448 settings.go:142] acquiring lock: {Name:mkee31cd57c3a91eff581c48ba7961460fb3e0b1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 18:09:21.693820   63448 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19643-8806/kubeconfig
	I0914 18:09:21.695521   63448 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19643-8806/kubeconfig: {Name:mk850f3e2f3c2ef1e81c795ae72e22e459e27140 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 18:09:21.695793   63448 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.38 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0914 18:09:21.695903   63448 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0914 18:09:21.695982   63448 config.go:182] Loaded profile config "default-k8s-diff-port-243449": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0914 18:09:21.696003   63448 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-243449"
	I0914 18:09:21.696021   63448 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-243449"
	I0914 18:09:21.696029   63448 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-243449"
	W0914 18:09:21.696041   63448 addons.go:243] addon storage-provisioner should already be in state true
	I0914 18:09:21.696044   63448 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-243449"
	I0914 18:09:21.696063   63448 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-243449"
	I0914 18:09:21.696094   63448 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-243449"
	W0914 18:09:21.696108   63448 addons.go:243] addon metrics-server should already be in state true
	I0914 18:09:21.696149   63448 host.go:66] Checking if "default-k8s-diff-port-243449" exists ...
	I0914 18:09:21.696074   63448 host.go:66] Checking if "default-k8s-diff-port-243449" exists ...
	I0914 18:09:21.696411   63448 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 18:09:21.696455   63448 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 18:09:21.696543   63448 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 18:09:21.696562   63448 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 18:09:21.696693   63448 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 18:09:21.696735   63448 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 18:09:21.697719   63448 out.go:177] * Verifying Kubernetes components...
	I0914 18:09:21.699171   63448 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 18:09:21.712479   63448 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36733
	I0914 18:09:21.712563   63448 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40265
	I0914 18:09:21.713050   63448 main.go:141] libmachine: () Calling .GetVersion
	I0914 18:09:21.713065   63448 main.go:141] libmachine: () Calling .GetVersion
	I0914 18:09:21.713585   63448 main.go:141] libmachine: Using API Version  1
	I0914 18:09:21.713601   63448 main.go:141] libmachine: Using API Version  1
	I0914 18:09:21.713613   63448 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 18:09:21.713633   63448 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 18:09:21.713940   63448 main.go:141] libmachine: () Calling .GetMachineName
	I0914 18:09:21.714122   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetState
	I0914 18:09:21.714135   63448 main.go:141] libmachine: () Calling .GetMachineName
	I0914 18:09:21.714737   63448 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 18:09:21.714789   63448 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 18:09:21.716503   63448 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33627
	I0914 18:09:21.716977   63448 main.go:141] libmachine: () Calling .GetVersion
	I0914 18:09:21.717490   63448 main.go:141] libmachine: Using API Version  1
	I0914 18:09:21.717514   63448 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 18:09:21.717872   63448 main.go:141] libmachine: () Calling .GetMachineName
	I0914 18:09:21.718055   63448 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-243449"
	W0914 18:09:21.718075   63448 addons.go:243] addon default-storageclass should already be in state true
	I0914 18:09:21.718105   63448 host.go:66] Checking if "default-k8s-diff-port-243449" exists ...
	I0914 18:09:21.718432   63448 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 18:09:21.718484   63448 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 18:09:21.718491   63448 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 18:09:21.718527   63448 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 18:09:21.737248   63448 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45909
	I0914 18:09:21.738874   63448 main.go:141] libmachine: () Calling .GetVersion
	I0914 18:09:21.739437   63448 main.go:141] libmachine: Using API Version  1
	I0914 18:09:21.739460   63448 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 18:09:21.739865   63448 main.go:141] libmachine: () Calling .GetMachineName
	I0914 18:09:21.740121   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetState
	I0914 18:09:21.742251   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .DriverName
	I0914 18:09:21.744281   63448 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 18:09:21.745631   63448 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0914 18:09:21.745656   63448 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0914 18:09:21.745682   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetSSHHostname
	I0914 18:09:21.749856   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | domain default-k8s-diff-port-243449 has defined MAC address 52:54:00:6e:0b:a7 in network mk-default-k8s-diff-port-243449
	I0914 18:09:21.750398   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:0b:a7", ip: ""} in network mk-default-k8s-diff-port-243449: {Iface:virbr4 ExpiryTime:2024-09-14 19:08:56 +0000 UTC Type:0 Mac:52:54:00:6e:0b:a7 Iaid: IPaddr:192.168.61.38 Prefix:24 Hostname:default-k8s-diff-port-243449 Clientid:01:52:54:00:6e:0b:a7}
	I0914 18:09:21.750424   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | domain default-k8s-diff-port-243449 has defined IP address 192.168.61.38 and MAC address 52:54:00:6e:0b:a7 in network mk-default-k8s-diff-port-243449
	I0914 18:09:21.750659   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetSSHPort
	I0914 18:09:21.750886   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetSSHKeyPath
	I0914 18:09:21.751029   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetSSHUsername
	I0914 18:09:21.751187   63448 sshutil.go:53] new ssh client: &{IP:192.168.61.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/default-k8s-diff-port-243449/id_rsa Username:docker}
	I0914 18:09:21.756605   63448 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33055
	I0914 18:09:21.756825   63448 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39257
	I0914 18:09:21.757040   63448 main.go:141] libmachine: () Calling .GetVersion
	I0914 18:09:21.757293   63448 main.go:141] libmachine: () Calling .GetVersion
	I0914 18:09:21.757562   63448 main.go:141] libmachine: Using API Version  1
	I0914 18:09:21.757588   63448 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 18:09:21.758058   63448 main.go:141] libmachine: () Calling .GetMachineName
	I0914 18:09:21.758301   63448 main.go:141] libmachine: Using API Version  1
	I0914 18:09:21.758322   63448 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 18:09:21.758325   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetState
	I0914 18:09:21.758717   63448 main.go:141] libmachine: () Calling .GetMachineName
	I0914 18:09:21.759300   63448 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 18:09:21.759342   63448 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 18:09:21.760557   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .DriverName
	I0914 18:09:21.762845   63448 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0914 18:09:18.787883   62207 main.go:141] libmachine: (no-preload-168587) DBG | domain no-preload-168587 has defined MAC address 52:54:00:4c:40:8a in network mk-no-preload-168587
	I0914 18:09:18.788270   62207 main.go:141] libmachine: (no-preload-168587) DBG | unable to find current IP address of domain no-preload-168587 in network mk-no-preload-168587
	I0914 18:09:18.788298   62207 main.go:141] libmachine: (no-preload-168587) DBG | I0914 18:09:18.788225   64231 retry.go:31] will retry after 4.53698887s: waiting for machine to come up
	I0914 18:09:21.764071   63448 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0914 18:09:21.764092   63448 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0914 18:09:21.764116   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetSSHHostname
	I0914 18:09:21.767725   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | domain default-k8s-diff-port-243449 has defined MAC address 52:54:00:6e:0b:a7 in network mk-default-k8s-diff-port-243449
	I0914 18:09:21.768255   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:0b:a7", ip: ""} in network mk-default-k8s-diff-port-243449: {Iface:virbr4 ExpiryTime:2024-09-14 19:08:56 +0000 UTC Type:0 Mac:52:54:00:6e:0b:a7 Iaid: IPaddr:192.168.61.38 Prefix:24 Hostname:default-k8s-diff-port-243449 Clientid:01:52:54:00:6e:0b:a7}
	I0914 18:09:21.768367   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | domain default-k8s-diff-port-243449 has defined IP address 192.168.61.38 and MAC address 52:54:00:6e:0b:a7 in network mk-default-k8s-diff-port-243449
	I0914 18:09:21.768503   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetSSHPort
	I0914 18:09:21.768681   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetSSHKeyPath
	I0914 18:09:21.768856   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetSSHUsername
	I0914 18:09:21.769030   63448 sshutil.go:53] new ssh client: &{IP:192.168.61.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/default-k8s-diff-port-243449/id_rsa Username:docker}
	I0914 18:09:21.776783   63448 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38967
	I0914 18:09:21.777226   63448 main.go:141] libmachine: () Calling .GetVersion
	I0914 18:09:21.777736   63448 main.go:141] libmachine: Using API Version  1
	I0914 18:09:21.777754   63448 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 18:09:21.778113   63448 main.go:141] libmachine: () Calling .GetMachineName
	I0914 18:09:21.778345   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetState
	I0914 18:09:21.780215   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .DriverName
	I0914 18:09:21.780421   63448 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0914 18:09:21.780436   63448 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0914 18:09:21.780458   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetSSHHostname
	I0914 18:09:21.783243   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | domain default-k8s-diff-port-243449 has defined MAC address 52:54:00:6e:0b:a7 in network mk-default-k8s-diff-port-243449
	I0914 18:09:21.783671   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:0b:a7", ip: ""} in network mk-default-k8s-diff-port-243449: {Iface:virbr4 ExpiryTime:2024-09-14 19:08:56 +0000 UTC Type:0 Mac:52:54:00:6e:0b:a7 Iaid: IPaddr:192.168.61.38 Prefix:24 Hostname:default-k8s-diff-port-243449 Clientid:01:52:54:00:6e:0b:a7}
	I0914 18:09:21.783698   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | domain default-k8s-diff-port-243449 has defined IP address 192.168.61.38 and MAC address 52:54:00:6e:0b:a7 in network mk-default-k8s-diff-port-243449
	I0914 18:09:21.783857   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetSSHPort
	I0914 18:09:21.784023   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetSSHKeyPath
	I0914 18:09:21.784138   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetSSHUsername
	I0914 18:09:21.784256   63448 sshutil.go:53] new ssh client: &{IP:192.168.61.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/default-k8s-diff-port-243449/id_rsa Username:docker}
	I0914 18:09:21.919649   63448 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0914 18:09:21.945515   63448 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-243449" to be "Ready" ...
	I0914 18:09:22.020487   63448 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0914 18:09:22.020509   63448 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0914 18:09:22.041265   63448 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0914 18:09:22.072169   63448 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0914 18:09:22.072199   63448 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0914 18:09:22.112117   63448 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0914 18:09:22.112148   63448 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0914 18:09:22.146636   63448 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0914 18:09:22.162248   63448 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0914 18:09:22.520416   63448 main.go:141] libmachine: Making call to close driver server
	I0914 18:09:22.520448   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .Close
	I0914 18:09:22.520793   63448 main.go:141] libmachine: Successfully made call to close driver server
	I0914 18:09:22.520815   63448 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 18:09:22.520831   63448 main.go:141] libmachine: Making call to close driver server
	I0914 18:09:22.520833   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | Closing plugin on server side
	I0914 18:09:22.520840   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .Close
	I0914 18:09:22.521074   63448 main.go:141] libmachine: Successfully made call to close driver server
	I0914 18:09:22.521119   63448 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 18:09:22.527992   63448 main.go:141] libmachine: Making call to close driver server
	I0914 18:09:22.528030   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .Close
	I0914 18:09:22.528578   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | Closing plugin on server side
	I0914 18:09:22.528581   63448 main.go:141] libmachine: Successfully made call to close driver server
	I0914 18:09:22.528605   63448 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 18:09:23.246463   63448 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.084175525s)
	I0914 18:09:23.246520   63448 main.go:141] libmachine: Making call to close driver server
	I0914 18:09:23.246535   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .Close
	I0914 18:09:23.246564   63448 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.099889297s)
	I0914 18:09:23.246609   63448 main.go:141] libmachine: Making call to close driver server
	I0914 18:09:23.246621   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .Close
	I0914 18:09:23.246835   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | Closing plugin on server side
	I0914 18:09:23.246876   63448 main.go:141] libmachine: Successfully made call to close driver server
	I0914 18:09:23.246888   63448 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 18:09:23.246897   63448 main.go:141] libmachine: Making call to close driver server
	I0914 18:09:23.246910   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .Close
	I0914 18:09:23.246944   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | Closing plugin on server side
	I0914 18:09:23.246958   63448 main.go:141] libmachine: Successfully made call to close driver server
	I0914 18:09:23.247002   63448 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 18:09:23.247021   63448 main.go:141] libmachine: Making call to close driver server
	I0914 18:09:23.247046   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .Close
	I0914 18:09:23.247156   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | Closing plugin on server side
	I0914 18:09:23.247192   63448 main.go:141] libmachine: Successfully made call to close driver server
	I0914 18:09:23.247227   63448 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 18:09:23.247241   63448 main.go:141] libmachine: Successfully made call to close driver server
	I0914 18:09:23.247260   63448 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 18:09:23.247241   63448 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-243449"
	I0914 18:09:23.250385   63448 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0914 18:09:20.583600   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:09:23.083187   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:09:23.251609   63448 addons.go:510] duration metric: took 1.555716144s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I0914 18:09:23.949715   63448 node_ready.go:53] node "default-k8s-diff-port-243449" has status "Ready":"False"
	I0914 18:09:20.374108   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:20.874167   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:21.374108   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:21.873539   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:22.374451   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:22.874481   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:23.374533   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:23.873433   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:24.374284   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:24.873466   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:23.327287   62207 main.go:141] libmachine: (no-preload-168587) DBG | domain no-preload-168587 has defined MAC address 52:54:00:4c:40:8a in network mk-no-preload-168587
	I0914 18:09:23.327775   62207 main.go:141] libmachine: (no-preload-168587) DBG | domain no-preload-168587 has current primary IP address 192.168.39.38 and MAC address 52:54:00:4c:40:8a in network mk-no-preload-168587
	I0914 18:09:23.327803   62207 main.go:141] libmachine: (no-preload-168587) Found IP for machine: 192.168.39.38
	I0914 18:09:23.327822   62207 main.go:141] libmachine: (no-preload-168587) Reserving static IP address...
	I0914 18:09:23.328197   62207 main.go:141] libmachine: (no-preload-168587) DBG | found host DHCP lease matching {name: "no-preload-168587", mac: "52:54:00:4c:40:8a", ip: "192.168.39.38"} in network mk-no-preload-168587: {Iface:virbr2 ExpiryTime:2024-09-14 18:59:50 +0000 UTC Type:0 Mac:52:54:00:4c:40:8a Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:no-preload-168587 Clientid:01:52:54:00:4c:40:8a}
	I0914 18:09:23.328221   62207 main.go:141] libmachine: (no-preload-168587) Reserved static IP address: 192.168.39.38
	I0914 18:09:23.328264   62207 main.go:141] libmachine: (no-preload-168587) DBG | skip adding static IP to network mk-no-preload-168587 - found existing host DHCP lease matching {name: "no-preload-168587", mac: "52:54:00:4c:40:8a", ip: "192.168.39.38"}
	I0914 18:09:23.328283   62207 main.go:141] libmachine: (no-preload-168587) DBG | Getting to WaitForSSH function...
	I0914 18:09:23.328295   62207 main.go:141] libmachine: (no-preload-168587) Waiting for SSH to be available...
	I0914 18:09:23.330598   62207 main.go:141] libmachine: (no-preload-168587) DBG | domain no-preload-168587 has defined MAC address 52:54:00:4c:40:8a in network mk-no-preload-168587
	I0914 18:09:23.330954   62207 main.go:141] libmachine: (no-preload-168587) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:40:8a", ip: ""} in network mk-no-preload-168587: {Iface:virbr2 ExpiryTime:2024-09-14 18:59:50 +0000 UTC Type:0 Mac:52:54:00:4c:40:8a Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:no-preload-168587 Clientid:01:52:54:00:4c:40:8a}
	I0914 18:09:23.330985   62207 main.go:141] libmachine: (no-preload-168587) DBG | domain no-preload-168587 has defined IP address 192.168.39.38 and MAC address 52:54:00:4c:40:8a in network mk-no-preload-168587
	I0914 18:09:23.331105   62207 main.go:141] libmachine: (no-preload-168587) DBG | Using SSH client type: external
	I0914 18:09:23.331132   62207 main.go:141] libmachine: (no-preload-168587) DBG | Using SSH private key: /home/jenkins/minikube-integration/19643-8806/.minikube/machines/no-preload-168587/id_rsa (-rw-------)
	I0914 18:09:23.331184   62207 main.go:141] libmachine: (no-preload-168587) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.38 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19643-8806/.minikube/machines/no-preload-168587/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0914 18:09:23.331196   62207 main.go:141] libmachine: (no-preload-168587) DBG | About to run SSH command:
	I0914 18:09:23.331208   62207 main.go:141] libmachine: (no-preload-168587) DBG | exit 0
	I0914 18:09:23.454525   62207 main.go:141] libmachine: (no-preload-168587) DBG | SSH cmd err, output: <nil>: 
	I0914 18:09:23.454883   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetConfigRaw
	I0914 18:09:23.455505   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetIP
	I0914 18:09:23.457696   62207 main.go:141] libmachine: (no-preload-168587) DBG | domain no-preload-168587 has defined MAC address 52:54:00:4c:40:8a in network mk-no-preload-168587
	I0914 18:09:23.458030   62207 main.go:141] libmachine: (no-preload-168587) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:40:8a", ip: ""} in network mk-no-preload-168587: {Iface:virbr2 ExpiryTime:2024-09-14 18:59:50 +0000 UTC Type:0 Mac:52:54:00:4c:40:8a Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:no-preload-168587 Clientid:01:52:54:00:4c:40:8a}
	I0914 18:09:23.458069   62207 main.go:141] libmachine: (no-preload-168587) DBG | domain no-preload-168587 has defined IP address 192.168.39.38 and MAC address 52:54:00:4c:40:8a in network mk-no-preload-168587
	I0914 18:09:23.458372   62207 profile.go:143] Saving config to /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/no-preload-168587/config.json ...
	I0914 18:09:23.458611   62207 machine.go:93] provisionDockerMachine start ...
	I0914 18:09:23.458633   62207 main.go:141] libmachine: (no-preload-168587) Calling .DriverName
	I0914 18:09:23.458828   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetSSHHostname
	I0914 18:09:23.461199   62207 main.go:141] libmachine: (no-preload-168587) DBG | domain no-preload-168587 has defined MAC address 52:54:00:4c:40:8a in network mk-no-preload-168587
	I0914 18:09:23.461540   62207 main.go:141] libmachine: (no-preload-168587) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:40:8a", ip: ""} in network mk-no-preload-168587: {Iface:virbr2 ExpiryTime:2024-09-14 18:59:50 +0000 UTC Type:0 Mac:52:54:00:4c:40:8a Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:no-preload-168587 Clientid:01:52:54:00:4c:40:8a}
	I0914 18:09:23.461576   62207 main.go:141] libmachine: (no-preload-168587) DBG | domain no-preload-168587 has defined IP address 192.168.39.38 and MAC address 52:54:00:4c:40:8a in network mk-no-preload-168587
	I0914 18:09:23.461705   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetSSHPort
	I0914 18:09:23.461895   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetSSHKeyPath
	I0914 18:09:23.462006   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetSSHKeyPath
	I0914 18:09:23.462153   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetSSHUsername
	I0914 18:09:23.462314   62207 main.go:141] libmachine: Using SSH client type: native
	I0914 18:09:23.462477   62207 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.38 22 <nil> <nil>}
	I0914 18:09:23.462488   62207 main.go:141] libmachine: About to run SSH command:
	hostname
	I0914 18:09:23.566278   62207 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0914 18:09:23.566310   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetMachineName
	I0914 18:09:23.566559   62207 buildroot.go:166] provisioning hostname "no-preload-168587"
	I0914 18:09:23.566581   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetMachineName
	I0914 18:09:23.566742   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetSSHHostname
	I0914 18:09:23.569254   62207 main.go:141] libmachine: (no-preload-168587) DBG | domain no-preload-168587 has defined MAC address 52:54:00:4c:40:8a in network mk-no-preload-168587
	I0914 18:09:23.569590   62207 main.go:141] libmachine: (no-preload-168587) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:40:8a", ip: ""} in network mk-no-preload-168587: {Iface:virbr2 ExpiryTime:2024-09-14 18:59:50 +0000 UTC Type:0 Mac:52:54:00:4c:40:8a Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:no-preload-168587 Clientid:01:52:54:00:4c:40:8a}
	I0914 18:09:23.569617   62207 main.go:141] libmachine: (no-preload-168587) DBG | domain no-preload-168587 has defined IP address 192.168.39.38 and MAC address 52:54:00:4c:40:8a in network mk-no-preload-168587
	I0914 18:09:23.569713   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetSSHPort
	I0914 18:09:23.569888   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetSSHKeyPath
	I0914 18:09:23.570009   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetSSHKeyPath
	I0914 18:09:23.570174   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetSSHUsername
	I0914 18:09:23.570344   62207 main.go:141] libmachine: Using SSH client type: native
	I0914 18:09:23.570556   62207 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.38 22 <nil> <nil>}
	I0914 18:09:23.570575   62207 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-168587 && echo "no-preload-168587" | sudo tee /etc/hostname
	I0914 18:09:23.687805   62207 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-168587
	
	I0914 18:09:23.687848   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetSSHHostname
	I0914 18:09:23.690447   62207 main.go:141] libmachine: (no-preload-168587) DBG | domain no-preload-168587 has defined MAC address 52:54:00:4c:40:8a in network mk-no-preload-168587
	I0914 18:09:23.690787   62207 main.go:141] libmachine: (no-preload-168587) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:40:8a", ip: ""} in network mk-no-preload-168587: {Iface:virbr2 ExpiryTime:2024-09-14 18:59:50 +0000 UTC Type:0 Mac:52:54:00:4c:40:8a Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:no-preload-168587 Clientid:01:52:54:00:4c:40:8a}
	I0914 18:09:23.690824   62207 main.go:141] libmachine: (no-preload-168587) DBG | domain no-preload-168587 has defined IP address 192.168.39.38 and MAC address 52:54:00:4c:40:8a in network mk-no-preload-168587
	I0914 18:09:23.690955   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetSSHPort
	I0914 18:09:23.691135   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetSSHKeyPath
	I0914 18:09:23.691279   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetSSHKeyPath
	I0914 18:09:23.691427   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetSSHUsername
	I0914 18:09:23.691590   62207 main.go:141] libmachine: Using SSH client type: native
	I0914 18:09:23.691768   62207 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.38 22 <nil> <nil>}
	I0914 18:09:23.691790   62207 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-168587' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-168587/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-168587' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0914 18:09:23.805502   62207 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0914 18:09:23.805527   62207 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19643-8806/.minikube CaCertPath:/home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19643-8806/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19643-8806/.minikube}
	I0914 18:09:23.805545   62207 buildroot.go:174] setting up certificates
	I0914 18:09:23.805553   62207 provision.go:84] configureAuth start
	I0914 18:09:23.805561   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetMachineName
	I0914 18:09:23.805798   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetIP
	I0914 18:09:23.808306   62207 main.go:141] libmachine: (no-preload-168587) DBG | domain no-preload-168587 has defined MAC address 52:54:00:4c:40:8a in network mk-no-preload-168587
	I0914 18:09:23.808643   62207 main.go:141] libmachine: (no-preload-168587) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:40:8a", ip: ""} in network mk-no-preload-168587: {Iface:virbr2 ExpiryTime:2024-09-14 18:59:50 +0000 UTC Type:0 Mac:52:54:00:4c:40:8a Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:no-preload-168587 Clientid:01:52:54:00:4c:40:8a}
	I0914 18:09:23.808668   62207 main.go:141] libmachine: (no-preload-168587) DBG | domain no-preload-168587 has defined IP address 192.168.39.38 and MAC address 52:54:00:4c:40:8a in network mk-no-preload-168587
	I0914 18:09:23.808819   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetSSHHostname
	I0914 18:09:23.811055   62207 main.go:141] libmachine: (no-preload-168587) DBG | domain no-preload-168587 has defined MAC address 52:54:00:4c:40:8a in network mk-no-preload-168587
	I0914 18:09:23.811374   62207 main.go:141] libmachine: (no-preload-168587) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:40:8a", ip: ""} in network mk-no-preload-168587: {Iface:virbr2 ExpiryTime:2024-09-14 18:59:50 +0000 UTC Type:0 Mac:52:54:00:4c:40:8a Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:no-preload-168587 Clientid:01:52:54:00:4c:40:8a}
	I0914 18:09:23.811401   62207 main.go:141] libmachine: (no-preload-168587) DBG | domain no-preload-168587 has defined IP address 192.168.39.38 and MAC address 52:54:00:4c:40:8a in network mk-no-preload-168587
	I0914 18:09:23.811586   62207 provision.go:143] copyHostCerts
	I0914 18:09:23.811647   62207 exec_runner.go:144] found /home/jenkins/minikube-integration/19643-8806/.minikube/ca.pem, removing ...
	I0914 18:09:23.811657   62207 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19643-8806/.minikube/ca.pem
	I0914 18:09:23.811712   62207 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19643-8806/.minikube/ca.pem (1082 bytes)
	I0914 18:09:23.811800   62207 exec_runner.go:144] found /home/jenkins/minikube-integration/19643-8806/.minikube/cert.pem, removing ...
	I0914 18:09:23.811808   62207 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19643-8806/.minikube/cert.pem
	I0914 18:09:23.811829   62207 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19643-8806/.minikube/cert.pem (1123 bytes)
	I0914 18:09:23.811880   62207 exec_runner.go:144] found /home/jenkins/minikube-integration/19643-8806/.minikube/key.pem, removing ...
	I0914 18:09:23.811887   62207 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19643-8806/.minikube/key.pem
	I0914 18:09:23.811908   62207 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19643-8806/.minikube/key.pem (1675 bytes)
	I0914 18:09:23.811956   62207 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19643-8806/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca-key.pem org=jenkins.no-preload-168587 san=[127.0.0.1 192.168.39.38 localhost minikube no-preload-168587]
	I0914 18:09:24.051868   62207 provision.go:177] copyRemoteCerts
	I0914 18:09:24.051936   62207 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0914 18:09:24.051958   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetSSHHostname
	I0914 18:09:24.054842   62207 main.go:141] libmachine: (no-preload-168587) DBG | domain no-preload-168587 has defined MAC address 52:54:00:4c:40:8a in network mk-no-preload-168587
	I0914 18:09:24.055107   62207 main.go:141] libmachine: (no-preload-168587) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:40:8a", ip: ""} in network mk-no-preload-168587: {Iface:virbr2 ExpiryTime:2024-09-14 18:59:50 +0000 UTC Type:0 Mac:52:54:00:4c:40:8a Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:no-preload-168587 Clientid:01:52:54:00:4c:40:8a}
	I0914 18:09:24.055138   62207 main.go:141] libmachine: (no-preload-168587) DBG | domain no-preload-168587 has defined IP address 192.168.39.38 and MAC address 52:54:00:4c:40:8a in network mk-no-preload-168587
	I0914 18:09:24.055321   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetSSHPort
	I0914 18:09:24.055514   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetSSHKeyPath
	I0914 18:09:24.055664   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetSSHUsername
	I0914 18:09:24.055804   62207 sshutil.go:53] new ssh client: &{IP:192.168.39.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/no-preload-168587/id_rsa Username:docker}
	I0914 18:09:24.140378   62207 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0914 18:09:24.168422   62207 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0914 18:09:24.194540   62207 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0914 18:09:24.217910   62207 provision.go:87] duration metric: took 412.343545ms to configureAuth
	I0914 18:09:24.217942   62207 buildroot.go:189] setting minikube options for container-runtime
	I0914 18:09:24.218180   62207 config.go:182] Loaded profile config "no-preload-168587": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0914 18:09:24.218255   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetSSHHostname
	I0914 18:09:24.220788   62207 main.go:141] libmachine: (no-preload-168587) DBG | domain no-preload-168587 has defined MAC address 52:54:00:4c:40:8a in network mk-no-preload-168587
	I0914 18:09:24.221216   62207 main.go:141] libmachine: (no-preload-168587) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:40:8a", ip: ""} in network mk-no-preload-168587: {Iface:virbr2 ExpiryTime:2024-09-14 18:59:50 +0000 UTC Type:0 Mac:52:54:00:4c:40:8a Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:no-preload-168587 Clientid:01:52:54:00:4c:40:8a}
	I0914 18:09:24.221259   62207 main.go:141] libmachine: (no-preload-168587) DBG | domain no-preload-168587 has defined IP address 192.168.39.38 and MAC address 52:54:00:4c:40:8a in network mk-no-preload-168587
	I0914 18:09:24.221408   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetSSHPort
	I0914 18:09:24.221678   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetSSHKeyPath
	I0914 18:09:24.221842   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetSSHKeyPath
	I0914 18:09:24.222033   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetSSHUsername
	I0914 18:09:24.222218   62207 main.go:141] libmachine: Using SSH client type: native
	I0914 18:09:24.222399   62207 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.38 22 <nil> <nil>}
	I0914 18:09:24.222417   62207 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0914 18:09:24.433203   62207 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0914 18:09:24.433230   62207 machine.go:96] duration metric: took 974.605605ms to provisionDockerMachine
	I0914 18:09:24.433241   62207 start.go:293] postStartSetup for "no-preload-168587" (driver="kvm2")
	I0914 18:09:24.433253   62207 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0914 18:09:24.433282   62207 main.go:141] libmachine: (no-preload-168587) Calling .DriverName
	I0914 18:09:24.433595   62207 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0914 18:09:24.433625   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetSSHHostname
	I0914 18:09:24.436247   62207 main.go:141] libmachine: (no-preload-168587) DBG | domain no-preload-168587 has defined MAC address 52:54:00:4c:40:8a in network mk-no-preload-168587
	I0914 18:09:24.436710   62207 main.go:141] libmachine: (no-preload-168587) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:40:8a", ip: ""} in network mk-no-preload-168587: {Iface:virbr2 ExpiryTime:2024-09-14 18:59:50 +0000 UTC Type:0 Mac:52:54:00:4c:40:8a Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:no-preload-168587 Clientid:01:52:54:00:4c:40:8a}
	I0914 18:09:24.436746   62207 main.go:141] libmachine: (no-preload-168587) DBG | domain no-preload-168587 has defined IP address 192.168.39.38 and MAC address 52:54:00:4c:40:8a in network mk-no-preload-168587
	I0914 18:09:24.436855   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetSSHPort
	I0914 18:09:24.437015   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetSSHKeyPath
	I0914 18:09:24.437189   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetSSHUsername
	I0914 18:09:24.437305   62207 sshutil.go:53] new ssh client: &{IP:192.168.39.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/no-preload-168587/id_rsa Username:docker}
	I0914 18:09:24.516493   62207 ssh_runner.go:195] Run: cat /etc/os-release
	I0914 18:09:24.520486   62207 info.go:137] Remote host: Buildroot 2023.02.9
	I0914 18:09:24.520518   62207 filesync.go:126] Scanning /home/jenkins/minikube-integration/19643-8806/.minikube/addons for local assets ...
	I0914 18:09:24.520612   62207 filesync.go:126] Scanning /home/jenkins/minikube-integration/19643-8806/.minikube/files for local assets ...
	I0914 18:09:24.520687   62207 filesync.go:149] local asset: /home/jenkins/minikube-integration/19643-8806/.minikube/files/etc/ssl/certs/160162.pem -> 160162.pem in /etc/ssl/certs
	I0914 18:09:24.520775   62207 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0914 18:09:24.530274   62207 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/files/etc/ssl/certs/160162.pem --> /etc/ssl/certs/160162.pem (1708 bytes)
	I0914 18:09:24.553381   62207 start.go:296] duration metric: took 120.123302ms for postStartSetup
	I0914 18:09:24.553422   62207 fix.go:56] duration metric: took 20.086564499s for fixHost
	I0914 18:09:24.553445   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetSSHHostname
	I0914 18:09:24.555832   62207 main.go:141] libmachine: (no-preload-168587) DBG | domain no-preload-168587 has defined MAC address 52:54:00:4c:40:8a in network mk-no-preload-168587
	I0914 18:09:24.556100   62207 main.go:141] libmachine: (no-preload-168587) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:40:8a", ip: ""} in network mk-no-preload-168587: {Iface:virbr2 ExpiryTime:2024-09-14 18:59:50 +0000 UTC Type:0 Mac:52:54:00:4c:40:8a Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:no-preload-168587 Clientid:01:52:54:00:4c:40:8a}
	I0914 18:09:24.556133   62207 main.go:141] libmachine: (no-preload-168587) DBG | domain no-preload-168587 has defined IP address 192.168.39.38 and MAC address 52:54:00:4c:40:8a in network mk-no-preload-168587
	I0914 18:09:24.556376   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetSSHPort
	I0914 18:09:24.556605   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetSSHKeyPath
	I0914 18:09:24.556772   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetSSHKeyPath
	I0914 18:09:24.556922   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetSSHUsername
	I0914 18:09:24.557062   62207 main.go:141] libmachine: Using SSH client type: native
	I0914 18:09:24.557275   62207 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.38 22 <nil> <nil>}
	I0914 18:09:24.557285   62207 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0914 18:09:24.659101   62207 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726337364.632455119
	
	I0914 18:09:24.659128   62207 fix.go:216] guest clock: 1726337364.632455119
	I0914 18:09:24.659139   62207 fix.go:229] Guest: 2024-09-14 18:09:24.632455119 +0000 UTC Remote: 2024-09-14 18:09:24.553426386 +0000 UTC m=+357.567907862 (delta=79.028733ms)
	I0914 18:09:24.659165   62207 fix.go:200] guest clock delta is within tolerance: 79.028733ms
	I0914 18:09:24.659171   62207 start.go:83] releasing machines lock for "no-preload-168587", held for 20.192350446s
	I0914 18:09:24.659209   62207 main.go:141] libmachine: (no-preload-168587) Calling .DriverName
	I0914 18:09:24.659445   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetIP
	I0914 18:09:24.662626   62207 main.go:141] libmachine: (no-preload-168587) DBG | domain no-preload-168587 has defined MAC address 52:54:00:4c:40:8a in network mk-no-preload-168587
	I0914 18:09:24.663051   62207 main.go:141] libmachine: (no-preload-168587) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:40:8a", ip: ""} in network mk-no-preload-168587: {Iface:virbr2 ExpiryTime:2024-09-14 18:59:50 +0000 UTC Type:0 Mac:52:54:00:4c:40:8a Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:no-preload-168587 Clientid:01:52:54:00:4c:40:8a}
	I0914 18:09:24.663082   62207 main.go:141] libmachine: (no-preload-168587) DBG | domain no-preload-168587 has defined IP address 192.168.39.38 and MAC address 52:54:00:4c:40:8a in network mk-no-preload-168587
	I0914 18:09:24.663225   62207 main.go:141] libmachine: (no-preload-168587) Calling .DriverName
	I0914 18:09:24.663802   62207 main.go:141] libmachine: (no-preload-168587) Calling .DriverName
	I0914 18:09:24.663972   62207 main.go:141] libmachine: (no-preload-168587) Calling .DriverName
	I0914 18:09:24.664063   62207 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0914 18:09:24.664114   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetSSHHostname
	I0914 18:09:24.664195   62207 ssh_runner.go:195] Run: cat /version.json
	I0914 18:09:24.664221   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetSSHHostname
	I0914 18:09:24.666971   62207 main.go:141] libmachine: (no-preload-168587) DBG | domain no-preload-168587 has defined MAC address 52:54:00:4c:40:8a in network mk-no-preload-168587
	I0914 18:09:24.667255   62207 main.go:141] libmachine: (no-preload-168587) DBG | domain no-preload-168587 has defined MAC address 52:54:00:4c:40:8a in network mk-no-preload-168587
	I0914 18:09:24.667398   62207 main.go:141] libmachine: (no-preload-168587) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:40:8a", ip: ""} in network mk-no-preload-168587: {Iface:virbr2 ExpiryTime:2024-09-14 18:59:50 +0000 UTC Type:0 Mac:52:54:00:4c:40:8a Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:no-preload-168587 Clientid:01:52:54:00:4c:40:8a}
	I0914 18:09:24.667433   62207 main.go:141] libmachine: (no-preload-168587) DBG | domain no-preload-168587 has defined IP address 192.168.39.38 and MAC address 52:54:00:4c:40:8a in network mk-no-preload-168587
	I0914 18:09:24.667555   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetSSHPort
	I0914 18:09:24.667753   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetSSHKeyPath
	I0914 18:09:24.667787   62207 main.go:141] libmachine: (no-preload-168587) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:40:8a", ip: ""} in network mk-no-preload-168587: {Iface:virbr2 ExpiryTime:2024-09-14 18:59:50 +0000 UTC Type:0 Mac:52:54:00:4c:40:8a Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:no-preload-168587 Clientid:01:52:54:00:4c:40:8a}
	I0914 18:09:24.667816   62207 main.go:141] libmachine: (no-preload-168587) DBG | domain no-preload-168587 has defined IP address 192.168.39.38 and MAC address 52:54:00:4c:40:8a in network mk-no-preload-168587
	I0914 18:09:24.667913   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetSSHUsername
	I0914 18:09:24.667988   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetSSHPort
	I0914 18:09:24.668058   62207 sshutil.go:53] new ssh client: &{IP:192.168.39.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/no-preload-168587/id_rsa Username:docker}
	I0914 18:09:24.668109   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetSSHKeyPath
	I0914 18:09:24.668236   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetSSHUsername
	I0914 18:09:24.668356   62207 sshutil.go:53] new ssh client: &{IP:192.168.39.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/no-preload-168587/id_rsa Username:docker}
	I0914 18:09:24.743805   62207 ssh_runner.go:195] Run: systemctl --version
	I0914 18:09:24.776583   62207 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0914 18:09:24.924635   62207 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0914 18:09:24.930891   62207 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0914 18:09:24.930979   62207 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0914 18:09:24.952228   62207 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0914 18:09:24.952258   62207 start.go:495] detecting cgroup driver to use...
	I0914 18:09:24.952344   62207 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0914 18:09:24.967770   62207 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0914 18:09:24.983218   62207 docker.go:217] disabling cri-docker service (if available) ...
	I0914 18:09:24.983280   62207 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0914 18:09:24.997311   62207 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0914 18:09:25.011736   62207 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0914 18:09:25.135920   62207 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0914 18:09:25.323727   62207 docker.go:233] disabling docker service ...
	I0914 18:09:25.323793   62207 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0914 18:09:25.341243   62207 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0914 18:09:25.358703   62207 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0914 18:09:25.495826   62207 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0914 18:09:25.621684   62207 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0914 18:09:25.637386   62207 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0914 18:09:25.655826   62207 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0914 18:09:25.655947   62207 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 18:09:25.669204   62207 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0914 18:09:25.669266   62207 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 18:09:25.680265   62207 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 18:09:25.690860   62207 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 18:09:25.702002   62207 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0914 18:09:25.713256   62207 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 18:09:25.724125   62207 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 18:09:25.742195   62207 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 18:09:25.752680   62207 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0914 18:09:25.762842   62207 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0914 18:09:25.762920   62207 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0914 18:09:25.775680   62207 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0914 18:09:25.785190   62207 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 18:09:25.907175   62207 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0914 18:09:25.995654   62207 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0914 18:09:25.995731   62207 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0914 18:09:26.000829   62207 start.go:563] Will wait 60s for crictl version
	I0914 18:09:26.000896   62207 ssh_runner.go:195] Run: which crictl
	I0914 18:09:26.004522   62207 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0914 18:09:26.041674   62207 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0914 18:09:26.041745   62207 ssh_runner.go:195] Run: crio --version
	I0914 18:09:26.069091   62207 ssh_runner.go:195] Run: crio --version
	I0914 18:09:26.107475   62207 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0914 18:09:26.108650   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetIP
	I0914 18:09:26.111782   62207 main.go:141] libmachine: (no-preload-168587) DBG | domain no-preload-168587 has defined MAC address 52:54:00:4c:40:8a in network mk-no-preload-168587
	I0914 18:09:26.112110   62207 main.go:141] libmachine: (no-preload-168587) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:40:8a", ip: ""} in network mk-no-preload-168587: {Iface:virbr2 ExpiryTime:2024-09-14 18:59:50 +0000 UTC Type:0 Mac:52:54:00:4c:40:8a Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:no-preload-168587 Clientid:01:52:54:00:4c:40:8a}
	I0914 18:09:26.112139   62207 main.go:141] libmachine: (no-preload-168587) DBG | domain no-preload-168587 has defined IP address 192.168.39.38 and MAC address 52:54:00:4c:40:8a in network mk-no-preload-168587
	I0914 18:09:26.112279   62207 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0914 18:09:26.116339   62207 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0914 18:09:26.128616   62207 kubeadm.go:883] updating cluster {Name:no-preload-168587 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19643/minikube-v1.34.0-1726281733-19643-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726281268-19643@sha256:cce8be0c1ac4e3d852132008ef1cc1dcf5b79f708d025db83f146ae65db32e8e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-168587 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.38 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0914 18:09:26.128755   62207 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0914 18:09:26.128796   62207 ssh_runner.go:195] Run: sudo crictl images --output json
	I0914 18:09:26.165175   62207 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0914 18:09:26.165197   62207 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.1 registry.k8s.io/kube-controller-manager:v1.31.1 registry.k8s.io/kube-scheduler:v1.31.1 registry.k8s.io/kube-proxy:v1.31.1 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.15-0 registry.k8s.io/coredns/coredns:v1.11.3 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0914 18:09:26.165282   62207 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.1
	I0914 18:09:26.165301   62207 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I0914 18:09:26.165302   62207 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I0914 18:09:26.165276   62207 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 18:09:26.165346   62207 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.1
	I0914 18:09:26.165309   62207 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.3
	I0914 18:09:26.165443   62207 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.1
	I0914 18:09:26.165451   62207 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.1
	I0914 18:09:26.166853   62207 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0914 18:09:26.166858   62207 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.1
	I0914 18:09:26.166864   62207 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.1
	I0914 18:09:26.166873   62207 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I0914 18:09:26.166852   62207 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 18:09:26.166852   62207 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.3: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.3
	I0914 18:09:26.166911   62207 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.1
	I0914 18:09:26.166928   62207 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.1
	I0914 18:09:26.366393   62207 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.1
	I0914 18:09:26.398127   62207 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I0914 18:09:26.401173   62207 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.1
	I0914 18:09:26.405861   62207 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.1" does not exist at hash "175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1" in container runtime
	I0914 18:09:26.405910   62207 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.1
	I0914 18:09:26.405982   62207 ssh_runner.go:195] Run: which crictl
	I0914 18:09:26.410513   62207 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.15-0
	I0914 18:09:26.411414   62207 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.3
	I0914 18:09:26.416692   62207 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.1
	I0914 18:09:26.417710   62207 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.1
	I0914 18:09:26.643066   62207 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.1" does not exist at hash "9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b" in container runtime
	I0914 18:09:26.643114   62207 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.1
	I0914 18:09:26.643177   62207 ssh_runner.go:195] Run: which crictl
	I0914 18:09:26.643195   62207 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I0914 18:09:26.643242   62207 cache_images.go:116] "registry.k8s.io/etcd:3.5.15-0" needs transfer: "registry.k8s.io/etcd:3.5.15-0" does not exist at hash "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" in container runtime
	I0914 18:09:26.643278   62207 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.3" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.3" does not exist at hash "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6" in container runtime
	I0914 18:09:26.643293   62207 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.1" does not exist at hash "6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee" in container runtime
	I0914 18:09:26.643282   62207 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.15-0
	I0914 18:09:26.643307   62207 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.3
	I0914 18:09:26.643323   62207 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.1
	I0914 18:09:26.643328   62207 ssh_runner.go:195] Run: which crictl
	I0914 18:09:26.643351   62207 ssh_runner.go:195] Run: which crictl
	I0914 18:09:26.643366   62207 ssh_runner.go:195] Run: which crictl
	I0914 18:09:26.643386   62207 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.1" needs transfer: "registry.k8s.io/kube-proxy:v1.31.1" does not exist at hash "60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561" in container runtime
	I0914 18:09:26.643412   62207 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.1
	I0914 18:09:26.643436   62207 ssh_runner.go:195] Run: which crictl
	I0914 18:09:26.654984   62207 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I0914 18:09:26.655016   62207 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I0914 18:09:26.655035   62207 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0914 18:09:26.655016   62207 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I0914 18:09:26.733881   62207 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I0914 18:09:26.733967   62207 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I0914 18:09:26.769624   62207 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I0914 18:09:26.778708   62207 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0914 18:09:26.778836   62207 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I0914 18:09:26.778855   62207 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I0914 18:09:26.821344   62207 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I0914 18:09:26.821358   62207 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I0914 18:09:26.899012   62207 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I0914 18:09:26.906693   62207 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0914 18:09:26.909875   62207 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I0914 18:09:26.916458   62207 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I0914 18:09:26.944355   62207 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I0914 18:09:26.949250   62207 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19643-8806/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.1
	I0914 18:09:26.949400   62207 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I0914 18:09:25.582447   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:09:28.084142   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:09:25.949851   63448 node_ready.go:53] node "default-k8s-diff-port-243449" has status "Ready":"False"
	I0914 18:09:26.950390   63448 node_ready.go:49] node "default-k8s-diff-port-243449" has status "Ready":"True"
	I0914 18:09:26.950418   63448 node_ready.go:38] duration metric: took 5.004868966s for node "default-k8s-diff-port-243449" to be "Ready" ...
	I0914 18:09:26.950430   63448 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 18:09:26.956875   63448 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-8v8s7" in "kube-system" namespace to be "Ready" ...
	I0914 18:09:26.963909   63448 pod_ready.go:93] pod "coredns-7c65d6cfc9-8v8s7" in "kube-system" namespace has status "Ready":"True"
	I0914 18:09:26.963935   63448 pod_ready.go:82] duration metric: took 7.027533ms for pod "coredns-7c65d6cfc9-8v8s7" in "kube-system" namespace to be "Ready" ...
	I0914 18:09:26.963945   63448 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-243449" in "kube-system" namespace to be "Ready" ...
	I0914 18:09:28.971297   63448 pod_ready.go:93] pod "etcd-default-k8s-diff-port-243449" in "kube-system" namespace has status "Ready":"True"
	I0914 18:09:28.971327   63448 pod_ready.go:82] duration metric: took 2.007374825s for pod "etcd-default-k8s-diff-port-243449" in "kube-system" namespace to be "Ready" ...
	I0914 18:09:28.971340   63448 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-243449" in "kube-system" namespace to be "Ready" ...
	I0914 18:09:28.977510   63448 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-243449" in "kube-system" namespace has status "Ready":"True"
	I0914 18:09:28.977535   63448 pod_ready.go:82] duration metric: took 6.18573ms for pod "kube-apiserver-default-k8s-diff-port-243449" in "kube-system" namespace to be "Ready" ...
	I0914 18:09:28.977557   63448 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-243449" in "kube-system" namespace to be "Ready" ...
	I0914 18:09:25.374144   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:25.874109   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:26.374422   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:26.873444   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:27.373615   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:27.873395   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:28.373886   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:28.873510   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:29.374027   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:29.873502   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:27.035840   62207 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19643-8806/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.1
	I0914 18:09:27.035956   62207 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.1
	I0914 18:09:27.040828   62207 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19643-8806/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3
	I0914 18:09:27.040939   62207 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19643-8806/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I0914 18:09:27.040941   62207 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.3
	I0914 18:09:27.041026   62207 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0
	I0914 18:09:27.048278   62207 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19643-8806/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.1
	I0914 18:09:27.048345   62207 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19643-8806/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.1
	I0914 18:09:27.048388   62207 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.1
	I0914 18:09:27.048390   62207 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.1 (exists)
	I0914 18:09:27.048446   62207 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I0914 18:09:27.048423   62207 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.1 (exists)
	I0914 18:09:27.048482   62207 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I0914 18:09:27.048431   62207 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.1
	I0914 18:09:27.052221   62207 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.15-0 (exists)
	I0914 18:09:27.052401   62207 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.1 (exists)
	I0914 18:09:27.052585   62207 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.3 (exists)
	I0914 18:09:27.330779   62207 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 18:09:29.721998   62207 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.1: (2.673483443s)
	I0914 18:09:29.722035   62207 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19643-8806/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.1 from cache
	I0914 18:09:29.722064   62207 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.1
	I0914 18:09:29.722076   62207 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.1: (2.673496811s)
	I0914 18:09:29.722112   62207 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.1 (exists)
	I0914 18:09:29.722112   62207 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.1
	I0914 18:09:29.722194   62207 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (2.391387893s)
	I0914 18:09:29.722236   62207 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0914 18:09:29.722257   62207 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 18:09:29.722297   62207 ssh_runner.go:195] Run: which crictl
	I0914 18:09:31.485714   62207 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.1: (1.76356866s)
	I0914 18:09:31.485744   62207 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19643-8806/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.1 from cache
	I0914 18:09:31.485764   62207 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.15-0
	I0914 18:09:31.485817   62207 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0
	I0914 18:09:31.485820   62207 ssh_runner.go:235] Completed: which crictl: (1.763506603s)
	I0914 18:09:31.485862   62207 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 18:09:30.583013   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:09:33.083597   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:09:30.985230   63448 pod_ready.go:103] pod "kube-controller-manager-default-k8s-diff-port-243449" in "kube-system" namespace has status "Ready":"False"
	I0914 18:09:31.984182   63448 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-243449" in "kube-system" namespace has status "Ready":"True"
	I0914 18:09:31.984203   63448 pod_ready.go:82] duration metric: took 3.006637599s for pod "kube-controller-manager-default-k8s-diff-port-243449" in "kube-system" namespace to be "Ready" ...
	I0914 18:09:31.984212   63448 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-gbkqm" in "kube-system" namespace to be "Ready" ...
	I0914 18:09:31.989786   63448 pod_ready.go:93] pod "kube-proxy-gbkqm" in "kube-system" namespace has status "Ready":"True"
	I0914 18:09:31.989812   63448 pod_ready.go:82] duration metric: took 5.592466ms for pod "kube-proxy-gbkqm" in "kube-system" namespace to be "Ready" ...
	I0914 18:09:31.989823   63448 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-243449" in "kube-system" namespace to be "Ready" ...
	I0914 18:09:31.994224   63448 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-243449" in "kube-system" namespace has status "Ready":"True"
	I0914 18:09:31.994246   63448 pod_ready.go:82] duration metric: took 4.414059ms for pod "kube-scheduler-default-k8s-diff-port-243449" in "kube-system" namespace to be "Ready" ...
	I0914 18:09:31.994258   63448 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace to be "Ready" ...
	I0914 18:09:34.001035   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:09:30.373878   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:30.874351   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:31.373651   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:31.873914   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:32.373522   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:32.874439   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:33.373991   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:33.874056   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:34.373566   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:34.874140   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:34.781678   62207 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (3.295763296s)
	I0914 18:09:34.781783   62207 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 18:09:34.781814   62207 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0: (3.295968995s)
	I0914 18:09:34.781840   62207 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19643-8806/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 from cache
	I0914 18:09:34.781868   62207 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.1
	I0914 18:09:34.781900   62207 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.1
	I0914 18:09:36.744459   62207 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.962646981s)
	I0914 18:09:36.744514   62207 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.1: (1.962587733s)
	I0914 18:09:36.744551   62207 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19643-8806/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.1 from cache
	I0914 18:09:36.744576   62207 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 18:09:36.744590   62207 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.3
	I0914 18:09:36.744658   62207 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3
	I0914 18:09:35.582596   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:09:38.083260   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:09:36.002284   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:09:38.002962   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:09:35.374151   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:35.873725   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:36.373500   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:36.873617   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:37.373826   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:37.874068   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:38.373459   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:38.873666   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:39.373936   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:39.873551   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:38.848091   62207 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3: (2.103407014s)
	I0914 18:09:38.848126   62207 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19643-8806/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 from cache
	I0914 18:09:38.848152   62207 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.1
	I0914 18:09:38.848217   62207 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.1
	I0914 18:09:38.848153   62207 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.103554199s)
	I0914 18:09:38.848283   62207 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19643-8806/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0914 18:09:38.848368   62207 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0914 18:09:40.307247   62207 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.1: (1.459002378s)
	I0914 18:09:40.307287   62207 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19643-8806/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.1 from cache
	I0914 18:09:40.307269   62207 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.458886581s)
	I0914 18:09:40.307327   62207 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0914 18:09:40.307334   62207 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0914 18:09:40.307382   62207 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0914 18:09:40.958177   62207 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19643-8806/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0914 18:09:40.958222   62207 cache_images.go:123] Successfully loaded all cached images
	I0914 18:09:40.958228   62207 cache_images.go:92] duration metric: took 14.793018447s to LoadCachedImages
	I0914 18:09:40.958241   62207 kubeadm.go:934] updating node { 192.168.39.38 8443 v1.31.1 crio true true} ...
	I0914 18:09:40.958347   62207 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-168587 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.38
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:no-preload-168587 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0914 18:09:40.958415   62207 ssh_runner.go:195] Run: crio config
	I0914 18:09:41.003620   62207 cni.go:84] Creating CNI manager for ""
	I0914 18:09:41.003643   62207 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0914 18:09:41.003653   62207 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0914 18:09:41.003674   62207 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.38 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-168587 NodeName:no-preload-168587 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.38"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.38 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0914 18:09:41.003850   62207 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.38
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-168587"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.38
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.38"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0914 18:09:41.003920   62207 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0914 18:09:41.014462   62207 binaries.go:44] Found k8s binaries, skipping transfer
	I0914 18:09:41.014541   62207 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0914 18:09:41.023964   62207 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0914 18:09:41.040206   62207 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0914 18:09:41.055630   62207 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2158 bytes)
	I0914 18:09:41.072881   62207 ssh_runner.go:195] Run: grep 192.168.39.38	control-plane.minikube.internal$ /etc/hosts
	I0914 18:09:41.076449   62207 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.38	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0914 18:09:41.090075   62207 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 18:09:41.210405   62207 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0914 18:09:41.228173   62207 certs.go:68] Setting up /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/no-preload-168587 for IP: 192.168.39.38
	I0914 18:09:41.228197   62207 certs.go:194] generating shared ca certs ...
	I0914 18:09:41.228213   62207 certs.go:226] acquiring lock for ca certs: {Name:mkb663a3180967f5f94f0c355b2cd55067394331 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 18:09:41.228383   62207 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19643-8806/.minikube/ca.key
	I0914 18:09:41.228443   62207 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19643-8806/.minikube/proxy-client-ca.key
	I0914 18:09:41.228457   62207 certs.go:256] generating profile certs ...
	I0914 18:09:41.228586   62207 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/no-preload-168587/client.key
	I0914 18:09:41.228667   62207 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/no-preload-168587/apiserver.key.d11ec263
	I0914 18:09:41.228731   62207 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/no-preload-168587/proxy-client.key
	I0914 18:09:41.228889   62207 certs.go:484] found cert: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/16016.pem (1338 bytes)
	W0914 18:09:41.228932   62207 certs.go:480] ignoring /home/jenkins/minikube-integration/19643-8806/.minikube/certs/16016_empty.pem, impossibly tiny 0 bytes
	I0914 18:09:41.228944   62207 certs.go:484] found cert: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca-key.pem (1679 bytes)
	I0914 18:09:41.228976   62207 certs.go:484] found cert: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca.pem (1082 bytes)
	I0914 18:09:41.229008   62207 certs.go:484] found cert: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/cert.pem (1123 bytes)
	I0914 18:09:41.229045   62207 certs.go:484] found cert: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/key.pem (1675 bytes)
	I0914 18:09:41.229102   62207 certs.go:484] found cert: /home/jenkins/minikube-integration/19643-8806/.minikube/files/etc/ssl/certs/160162.pem (1708 bytes)
	I0914 18:09:41.229913   62207 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0914 18:09:41.259871   62207 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0914 18:09:41.286359   62207 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0914 18:09:41.315410   62207 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0914 18:09:41.345541   62207 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/no-preload-168587/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0914 18:09:41.380128   62207 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/no-preload-168587/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0914 18:09:41.411130   62207 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/no-preload-168587/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0914 18:09:41.442136   62207 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/no-preload-168587/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0914 18:09:41.464823   62207 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0914 18:09:41.488153   62207 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/certs/16016.pem --> /usr/share/ca-certificates/16016.pem (1338 bytes)
	I0914 18:09:41.513788   62207 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/files/etc/ssl/certs/160162.pem --> /usr/share/ca-certificates/160162.pem (1708 bytes)
	I0914 18:09:41.537256   62207 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0914 18:09:41.553550   62207 ssh_runner.go:195] Run: openssl version
	I0914 18:09:41.559366   62207 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/160162.pem && ln -fs /usr/share/ca-certificates/160162.pem /etc/ssl/certs/160162.pem"
	I0914 18:09:41.571498   62207 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/160162.pem
	I0914 18:09:41.576889   62207 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 14 17:01 /usr/share/ca-certificates/160162.pem
	I0914 18:09:41.576947   62207 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/160162.pem
	I0914 18:09:41.583651   62207 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/160162.pem /etc/ssl/certs/3ec20f2e.0"
	I0914 18:09:41.594743   62207 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0914 18:09:41.605811   62207 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0914 18:09:41.610034   62207 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 14 16:44 /usr/share/ca-certificates/minikubeCA.pem
	I0914 18:09:41.610103   62207 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0914 18:09:41.615810   62207 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0914 18:09:41.627145   62207 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16016.pem && ln -fs /usr/share/ca-certificates/16016.pem /etc/ssl/certs/16016.pem"
	I0914 18:09:41.639956   62207 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16016.pem
	I0914 18:09:41.644647   62207 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 14 17:01 /usr/share/ca-certificates/16016.pem
	I0914 18:09:41.644705   62207 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16016.pem
	I0914 18:09:41.650281   62207 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16016.pem /etc/ssl/certs/51391683.0"
	I0914 18:09:41.662354   62207 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0914 18:09:41.667150   62207 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0914 18:09:41.673263   62207 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0914 18:09:41.680660   62207 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0914 18:09:41.687283   62207 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0914 18:09:41.693256   62207 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0914 18:09:41.698969   62207 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0914 18:09:41.704543   62207 kubeadm.go:392] StartCluster: {Name:no-preload-168587 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19643/minikube-v1.34.0-1726281733-19643-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726281268-19643@sha256:cce8be0c1ac4e3d852132008ef1cc1dcf5b79f708d025db83f146ae65db32e8e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-168587 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.38 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0914 18:09:41.704671   62207 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0914 18:09:41.704750   62207 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0914 18:09:41.741255   62207 cri.go:89] found id: ""
	I0914 18:09:41.741354   62207 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0914 18:09:41.751360   62207 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0914 18:09:41.751377   62207 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0914 18:09:41.751417   62207 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0914 18:09:41.761492   62207 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0914 18:09:41.762591   62207 kubeconfig.go:125] found "no-preload-168587" server: "https://192.168.39.38:8443"
	I0914 18:09:41.764876   62207 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0914 18:09:41.774868   62207 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.38
	I0914 18:09:41.774901   62207 kubeadm.go:1160] stopping kube-system containers ...
	I0914 18:09:41.774913   62207 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0914 18:09:41.774969   62207 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0914 18:09:41.810189   62207 cri.go:89] found id: ""
	I0914 18:09:41.810248   62207 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0914 18:09:41.827903   62207 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0914 18:09:41.837504   62207 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0914 18:09:41.837532   62207 kubeadm.go:157] found existing configuration files:
	
	I0914 18:09:41.837585   62207 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0914 18:09:41.846260   62207 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0914 18:09:41.846322   62207 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0914 18:09:41.855350   62207 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0914 18:09:41.864096   62207 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0914 18:09:41.864153   62207 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0914 18:09:41.874772   62207 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0914 18:09:41.885427   62207 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0914 18:09:41.885502   62207 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0914 18:09:41.897121   62207 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0914 18:09:41.906955   62207 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0914 18:09:41.907020   62207 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0914 18:09:41.918253   62207 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0914 18:09:41.930134   62207 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 18:09:40.084800   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:09:42.581757   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:09:44.583611   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:09:40.502272   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:09:43.001471   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:09:40.374231   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:40.873955   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:41.374306   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:41.873511   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:42.373419   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:42.874077   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:43.374329   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:43.873782   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:44.373478   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:44.874120   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:42.054830   62207 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 18:09:42.754174   62207 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0914 18:09:42.973037   62207 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 18:09:43.043041   62207 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0914 18:09:43.119704   62207 api_server.go:52] waiting for apiserver process to appear ...
	I0914 18:09:43.119805   62207 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:43.620541   62207 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:44.120849   62207 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:44.139382   62207 api_server.go:72] duration metric: took 1.019679094s to wait for apiserver process to appear ...
	I0914 18:09:44.139406   62207 api_server.go:88] waiting for apiserver healthz status ...
	I0914 18:09:44.139424   62207 api_server.go:253] Checking apiserver healthz at https://192.168.39.38:8443/healthz ...
	I0914 18:09:44.139876   62207 api_server.go:269] stopped: https://192.168.39.38:8443/healthz: Get "https://192.168.39.38:8443/healthz": dial tcp 192.168.39.38:8443: connect: connection refused
	I0914 18:09:44.639981   62207 api_server.go:253] Checking apiserver healthz at https://192.168.39.38:8443/healthz ...
	I0914 18:09:47.262096   62207 api_server.go:279] https://192.168.39.38:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0914 18:09:47.262132   62207 api_server.go:103] status: https://192.168.39.38:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0914 18:09:47.262151   62207 api_server.go:253] Checking apiserver healthz at https://192.168.39.38:8443/healthz ...
	I0914 18:09:47.280626   62207 api_server.go:279] https://192.168.39.38:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0914 18:09:47.280652   62207 api_server.go:103] status: https://192.168.39.38:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0914 18:09:47.640152   62207 api_server.go:253] Checking apiserver healthz at https://192.168.39.38:8443/healthz ...
	I0914 18:09:47.646640   62207 api_server.go:279] https://192.168.39.38:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0914 18:09:47.646676   62207 api_server.go:103] status: https://192.168.39.38:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0914 18:09:48.140256   62207 api_server.go:253] Checking apiserver healthz at https://192.168.39.38:8443/healthz ...
	I0914 18:09:48.145520   62207 api_server.go:279] https://192.168.39.38:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0914 18:09:48.145557   62207 api_server.go:103] status: https://192.168.39.38:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0914 18:09:48.640147   62207 api_server.go:253] Checking apiserver healthz at https://192.168.39.38:8443/healthz ...
	I0914 18:09:48.645032   62207 api_server.go:279] https://192.168.39.38:8443/healthz returned 200:
	ok
	I0914 18:09:48.654567   62207 api_server.go:141] control plane version: v1.31.1
	I0914 18:09:48.654600   62207 api_server.go:131] duration metric: took 4.515188826s to wait for apiserver health ...
	I0914 18:09:48.654609   62207 cni.go:84] Creating CNI manager for ""
	I0914 18:09:48.654615   62207 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0914 18:09:48.656828   62207 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
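
The healthz sequence above (pid 62207) follows the expected progression: 403 while only anonymous access is possible and the RBAC bootstrap roles are not yet in place, then 500 while the rbac/bootstrap-roles and scheduling/bootstrap-system-priority-classes post-start hooks are still failing, then 200 roughly 4.5s after the apiserver process appeared. A minimal sketch of the same probe run by hand from the test host follows; the -k flag is an assumption, needed only because the apiserver certificate is signed by minikube's own CA rather than one in the host trust store.

    # Reproduce the health probe from api_server.go:253 in the log above.
    curl -k "https://192.168.39.38:8443/healthz?verbose"
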
	I0914 18:09:47.082431   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:09:49.582001   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:09:45.500938   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:09:48.002332   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:09:45.374173   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:45.873537   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:46.373462   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:46.874196   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:47.374297   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:47.874112   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:48.373627   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:48.873473   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:49.374289   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:49.873411   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:48.658151   62207 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0914 18:09:48.692232   62207 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0914 18:09:48.734461   62207 system_pods.go:43] waiting for kube-system pods to appear ...
	I0914 18:09:48.746689   62207 system_pods.go:59] 8 kube-system pods found
	I0914 18:09:48.746723   62207 system_pods.go:61] "coredns-7c65d6cfc9-mwhvh" [38800077-a7ff-4c8c-8375-4efac2ae40b8] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0914 18:09:48.746733   62207 system_pods.go:61] "etcd-no-preload-168587" [bdb166bb-8c07-448c-a97c-2146e84f139b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0914 18:09:48.746744   62207 system_pods.go:61] "kube-apiserver-no-preload-168587" [8ad59d56-cb86-4028-bf16-3733eb32ad8d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0914 18:09:48.746752   62207 system_pods.go:61] "kube-controller-manager-no-preload-168587" [fd66d0aa-cc35-4330-aa6b-571dbeaa6490] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0914 18:09:48.746761   62207 system_pods.go:61] "kube-proxy-lvp9h" [75c154d8-c76d-49eb-9497-dd17199e9d20] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0914 18:09:48.746771   62207 system_pods.go:61] "kube-scheduler-no-preload-168587" [858c948b-9025-48ab-907a-5b69aefbb24c] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0914 18:09:48.746782   62207 system_pods.go:61] "metrics-server-6867b74b74-n276z" [69e25ed4-dc8e-4c68-955e-e7226d066ac4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 18:09:48.746790   62207 system_pods.go:61] "storage-provisioner" [41c92694-2d3a-4025-8e28-ddea7b9b9c5b] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0914 18:09:48.746801   62207 system_pods.go:74] duration metric: took 12.315296ms to wait for pod list to return data ...
	I0914 18:09:48.746811   62207 node_conditions.go:102] verifying NodePressure condition ...
	I0914 18:09:48.751399   62207 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0914 18:09:48.751428   62207 node_conditions.go:123] node cpu capacity is 2
	I0914 18:09:48.751440   62207 node_conditions.go:105] duration metric: took 4.625335ms to run NodePressure ...
	I0914 18:09:48.751461   62207 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 18:09:49.051211   62207 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0914 18:09:49.057333   62207 kubeadm.go:739] kubelet initialised
	I0914 18:09:49.057366   62207 kubeadm.go:740] duration metric: took 6.124032ms waiting for restarted kubelet to initialise ...
	I0914 18:09:49.057379   62207 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 18:09:49.062570   62207 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-mwhvh" in "kube-system" namespace to be "Ready" ...
	I0914 18:09:51.069219   62207 pod_ready.go:103] pod "coredns-7c65d6cfc9-mwhvh" in "kube-system" namespace has status "Ready":"False"
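
The 1-k8s.conflist written at 18:09:48 above is only recorded by size (496 bytes); its contents are not in the log. Purely for orientation, a generic bridge-plugin conflist of the kind the "Configuring bridge CNI" message refers to looks roughly like the sketch below; the subnet, the plugin options, and installing it with tee are illustrative assumptions, not the actual file minikube wrote.

    # Illustrative only: a typical bridge CNI conflist; NOT the 496-byte file from the log above.
    sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF
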
	I0914 18:09:51.588043   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:09:54.082931   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:09:50.499759   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:09:52.502450   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:09:55.000767   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:09:50.374229   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:50.873429   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:51.373547   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:51.874090   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:52.373513   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:52.874222   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:53.374123   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:53.873893   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:54.373451   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:54.873583   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:53.069338   62207 pod_ready.go:103] pod "coredns-7c65d6cfc9-mwhvh" in "kube-system" namespace has status "Ready":"False"
	I0914 18:09:53.570290   62207 pod_ready.go:93] pod "coredns-7c65d6cfc9-mwhvh" in "kube-system" namespace has status "Ready":"True"
	I0914 18:09:53.570323   62207 pod_ready.go:82] duration metric: took 4.507716999s for pod "coredns-7c65d6cfc9-mwhvh" in "kube-system" namespace to be "Ready" ...
	I0914 18:09:53.570333   62207 pod_ready.go:79] waiting up to 4m0s for pod "etcd-no-preload-168587" in "kube-system" namespace to be "Ready" ...
	I0914 18:09:55.577317   62207 pod_ready.go:103] pod "etcd-no-preload-168587" in "kube-system" namespace has status "Ready":"False"
	I0914 18:09:56.581937   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:09:58.583632   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:09:57.000913   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:09:59.001429   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:09:55.374078   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:55.873810   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:09:55.873965   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:09:55.913981   62996 cri.go:89] found id: ""
	I0914 18:09:55.914011   62996 logs.go:276] 0 containers: []
	W0914 18:09:55.914023   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:09:55.914030   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:09:55.914091   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:09:55.948423   62996 cri.go:89] found id: ""
	I0914 18:09:55.948459   62996 logs.go:276] 0 containers: []
	W0914 18:09:55.948467   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:09:55.948472   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:09:55.948530   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:09:55.986470   62996 cri.go:89] found id: ""
	I0914 18:09:55.986507   62996 logs.go:276] 0 containers: []
	W0914 18:09:55.986520   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:09:55.986530   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:09:55.986598   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:09:56.022172   62996 cri.go:89] found id: ""
	I0914 18:09:56.022200   62996 logs.go:276] 0 containers: []
	W0914 18:09:56.022214   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:09:56.022220   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:09:56.022267   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:09:56.065503   62996 cri.go:89] found id: ""
	I0914 18:09:56.065552   62996 logs.go:276] 0 containers: []
	W0914 18:09:56.065564   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:09:56.065572   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:09:56.065632   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:09:56.101043   62996 cri.go:89] found id: ""
	I0914 18:09:56.101072   62996 logs.go:276] 0 containers: []
	W0914 18:09:56.101082   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:09:56.101089   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:09:56.101156   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:09:56.133820   62996 cri.go:89] found id: ""
	I0914 18:09:56.133852   62996 logs.go:276] 0 containers: []
	W0914 18:09:56.133864   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:09:56.133872   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:09:56.133925   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:09:56.172334   62996 cri.go:89] found id: ""
	I0914 18:09:56.172358   62996 logs.go:276] 0 containers: []
	W0914 18:09:56.172369   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:09:56.172380   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:09:56.172398   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:09:56.186476   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:09:56.186513   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:09:56.308336   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:09:56.308366   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:09:56.308388   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:09:56.386374   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:09:56.386410   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:09:56.426333   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:09:56.426360   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:09:58.978306   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:58.991093   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:09:58.991175   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:09:59.029861   62996 cri.go:89] found id: ""
	I0914 18:09:59.029890   62996 logs.go:276] 0 containers: []
	W0914 18:09:59.029899   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:09:59.029905   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:09:59.029962   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:09:59.067744   62996 cri.go:89] found id: ""
	I0914 18:09:59.067772   62996 logs.go:276] 0 containers: []
	W0914 18:09:59.067783   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:09:59.067791   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:09:59.067973   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:09:59.105666   62996 cri.go:89] found id: ""
	I0914 18:09:59.105695   62996 logs.go:276] 0 containers: []
	W0914 18:09:59.105707   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:09:59.105714   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:09:59.105796   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:09:59.153884   62996 cri.go:89] found id: ""
	I0914 18:09:59.153916   62996 logs.go:276] 0 containers: []
	W0914 18:09:59.153929   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:09:59.153937   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:09:59.154007   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:09:59.191462   62996 cri.go:89] found id: ""
	I0914 18:09:59.191492   62996 logs.go:276] 0 containers: []
	W0914 18:09:59.191503   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:09:59.191509   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:09:59.191574   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:09:59.246299   62996 cri.go:89] found id: ""
	I0914 18:09:59.246326   62996 logs.go:276] 0 containers: []
	W0914 18:09:59.246336   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:09:59.246357   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:09:59.246413   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:09:59.292821   62996 cri.go:89] found id: ""
	I0914 18:09:59.292847   62996 logs.go:276] 0 containers: []
	W0914 18:09:59.292856   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:09:59.292862   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:09:59.292918   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:09:59.334130   62996 cri.go:89] found id: ""
	I0914 18:09:59.334176   62996 logs.go:276] 0 containers: []
	W0914 18:09:59.334187   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:09:59.334198   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:09:59.334211   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:09:59.386847   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:09:59.386884   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:09:59.400163   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:09:59.400193   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:09:59.476375   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:09:59.476400   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:09:59.476416   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:09:59.554564   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:09:59.554599   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
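
Because pid 62996 never finds a running kube-apiserver process or any control-plane containers, it falls back to its log-gathering pass (kubelet, dmesg, describe nodes, CRI-O, container status) on every retry, and "describe nodes" keeps failing with connection refused since nothing is listening on localhost:8443. The commands below are the same ones visible in the log, bundled so the diagnostics can be collected by hand on the node; only the grouping into one snippet is new.

    # Collect the diagnostics minikube gathers when the control plane is down (commands from the log).
    sudo journalctl -u kubelet -n 400
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig
    sudo journalctl -u crio -n 400
    sudo "$(which crictl || echo crictl)" ps -a || sudo docker ps -a
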
	I0914 18:09:57.578803   62207 pod_ready.go:103] pod "etcd-no-preload-168587" in "kube-system" namespace has status "Ready":"False"
	I0914 18:09:59.576525   62207 pod_ready.go:93] pod "etcd-no-preload-168587" in "kube-system" namespace has status "Ready":"True"
	I0914 18:09:59.576547   62207 pod_ready.go:82] duration metric: took 6.006207927s for pod "etcd-no-preload-168587" in "kube-system" namespace to be "Ready" ...
	I0914 18:09:59.576556   62207 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-no-preload-168587" in "kube-system" namespace to be "Ready" ...
	I0914 18:10:00.084027   62207 pod_ready.go:93] pod "kube-apiserver-no-preload-168587" in "kube-system" namespace has status "Ready":"True"
	I0914 18:10:00.084054   62207 pod_ready.go:82] duration metric: took 507.490867ms for pod "kube-apiserver-no-preload-168587" in "kube-system" namespace to be "Ready" ...
	I0914 18:10:00.084067   62207 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-no-preload-168587" in "kube-system" namespace to be "Ready" ...
	I0914 18:10:00.089044   62207 pod_ready.go:93] pod "kube-controller-manager-no-preload-168587" in "kube-system" namespace has status "Ready":"True"
	I0914 18:10:00.089068   62207 pod_ready.go:82] duration metric: took 4.991847ms for pod "kube-controller-manager-no-preload-168587" in "kube-system" namespace to be "Ready" ...
	I0914 18:10:00.089079   62207 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-lvp9h" in "kube-system" namespace to be "Ready" ...
	I0914 18:10:00.093160   62207 pod_ready.go:93] pod "kube-proxy-lvp9h" in "kube-system" namespace has status "Ready":"True"
	I0914 18:10:00.093179   62207 pod_ready.go:82] duration metric: took 4.093257ms for pod "kube-proxy-lvp9h" in "kube-system" namespace to be "Ready" ...
	I0914 18:10:00.093198   62207 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-no-preload-168587" in "kube-system" namespace to be "Ready" ...
	I0914 18:10:00.096786   62207 pod_ready.go:93] pod "kube-scheduler-no-preload-168587" in "kube-system" namespace has status "Ready":"True"
	I0914 18:10:00.096800   62207 pod_ready.go:82] duration metric: took 3.594996ms for pod "kube-scheduler-no-preload-168587" in "kube-system" namespace to be "Ready" ...
	I0914 18:10:00.096807   62207 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace to be "Ready" ...
	I0914 18:10:01.082601   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:03.581290   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:01.502864   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:04.001645   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:02.095079   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:10:02.108933   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:10:02.109003   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:10:02.141838   62996 cri.go:89] found id: ""
	I0914 18:10:02.141861   62996 logs.go:276] 0 containers: []
	W0914 18:10:02.141869   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:10:02.141875   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:10:02.141934   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:10:02.176437   62996 cri.go:89] found id: ""
	I0914 18:10:02.176460   62996 logs.go:276] 0 containers: []
	W0914 18:10:02.176467   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:10:02.176472   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:10:02.176516   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:10:02.210341   62996 cri.go:89] found id: ""
	I0914 18:10:02.210369   62996 logs.go:276] 0 containers: []
	W0914 18:10:02.210381   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:10:02.210388   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:10:02.210434   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:10:02.243343   62996 cri.go:89] found id: ""
	I0914 18:10:02.243373   62996 logs.go:276] 0 containers: []
	W0914 18:10:02.243384   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:10:02.243391   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:10:02.243461   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:10:02.276630   62996 cri.go:89] found id: ""
	I0914 18:10:02.276657   62996 logs.go:276] 0 containers: []
	W0914 18:10:02.276668   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:10:02.276675   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:10:02.276736   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:10:02.311626   62996 cri.go:89] found id: ""
	I0914 18:10:02.311659   62996 logs.go:276] 0 containers: []
	W0914 18:10:02.311674   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:10:02.311682   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:10:02.311748   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:10:02.345868   62996 cri.go:89] found id: ""
	I0914 18:10:02.345892   62996 logs.go:276] 0 containers: []
	W0914 18:10:02.345901   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:10:02.345908   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:10:02.345966   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:10:02.380111   62996 cri.go:89] found id: ""
	I0914 18:10:02.380139   62996 logs.go:276] 0 containers: []
	W0914 18:10:02.380147   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:10:02.380156   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:10:02.380167   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:10:02.421061   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:10:02.421094   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:10:02.474596   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:10:02.474633   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:10:02.487460   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:10:02.487491   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:10:02.554178   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:10:02.554206   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:10:02.554218   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:10:05.138863   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:10:05.152233   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:10:05.152299   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:10:05.187891   62996 cri.go:89] found id: ""
	I0914 18:10:05.187918   62996 logs.go:276] 0 containers: []
	W0914 18:10:05.187929   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:10:05.187936   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:10:05.188000   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:10:05.231634   62996 cri.go:89] found id: ""
	I0914 18:10:05.231667   62996 logs.go:276] 0 containers: []
	W0914 18:10:05.231679   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:10:05.231686   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:10:05.231748   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:10:05.273445   62996 cri.go:89] found id: ""
	I0914 18:10:05.273469   62996 logs.go:276] 0 containers: []
	W0914 18:10:05.273478   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:10:05.273492   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:10:05.273551   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:10:05.308168   62996 cri.go:89] found id: ""
	I0914 18:10:05.308205   62996 logs.go:276] 0 containers: []
	W0914 18:10:05.308216   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:10:05.308224   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:10:05.308285   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:10:02.103118   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:04.103451   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:06.603049   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:05.582900   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:08.083020   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:06.500670   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:08.500752   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:05.343292   62996 cri.go:89] found id: ""
	I0914 18:10:05.343325   62996 logs.go:276] 0 containers: []
	W0914 18:10:05.343336   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:10:05.343343   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:10:05.343404   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:10:05.380420   62996 cri.go:89] found id: ""
	I0914 18:10:05.380445   62996 logs.go:276] 0 containers: []
	W0914 18:10:05.380452   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:10:05.380458   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:10:05.380503   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:10:05.415585   62996 cri.go:89] found id: ""
	I0914 18:10:05.415609   62996 logs.go:276] 0 containers: []
	W0914 18:10:05.415617   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:10:05.415623   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:10:05.415687   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:10:05.457170   62996 cri.go:89] found id: ""
	I0914 18:10:05.457198   62996 logs.go:276] 0 containers: []
	W0914 18:10:05.457208   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:10:05.457219   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:10:05.457234   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:10:05.495647   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:10:05.495681   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:10:05.543775   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:10:05.543813   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:10:05.556717   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:10:05.556750   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:10:05.624690   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:10:05.624713   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:10:05.624728   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:10:08.205292   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:10:08.217720   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:10:08.217786   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:10:08.250560   62996 cri.go:89] found id: ""
	I0914 18:10:08.250590   62996 logs.go:276] 0 containers: []
	W0914 18:10:08.250598   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:10:08.250604   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:10:08.250669   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:10:08.282085   62996 cri.go:89] found id: ""
	I0914 18:10:08.282115   62996 logs.go:276] 0 containers: []
	W0914 18:10:08.282123   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:10:08.282129   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:10:08.282202   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:10:08.314350   62996 cri.go:89] found id: ""
	I0914 18:10:08.314379   62996 logs.go:276] 0 containers: []
	W0914 18:10:08.314391   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:10:08.314398   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:10:08.314461   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:10:08.347672   62996 cri.go:89] found id: ""
	I0914 18:10:08.347703   62996 logs.go:276] 0 containers: []
	W0914 18:10:08.347714   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:10:08.347721   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:10:08.347780   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:10:08.385583   62996 cri.go:89] found id: ""
	I0914 18:10:08.385616   62996 logs.go:276] 0 containers: []
	W0914 18:10:08.385628   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:10:08.385636   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:10:08.385717   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:10:08.421135   62996 cri.go:89] found id: ""
	I0914 18:10:08.421166   62996 logs.go:276] 0 containers: []
	W0914 18:10:08.421176   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:10:08.421184   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:10:08.421242   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:10:08.456784   62996 cri.go:89] found id: ""
	I0914 18:10:08.456811   62996 logs.go:276] 0 containers: []
	W0914 18:10:08.456821   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:10:08.456828   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:10:08.456890   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:10:08.491658   62996 cri.go:89] found id: ""
	I0914 18:10:08.491690   62996 logs.go:276] 0 containers: []
	W0914 18:10:08.491698   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:10:08.491707   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:10:08.491718   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:10:08.544008   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:10:08.544045   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:10:08.557780   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:10:08.557813   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:10:08.631319   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:10:08.631354   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:10:08.631371   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:10:08.709845   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:10:08.709882   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:10:08.604603   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:11.103035   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:10.581739   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:12.582523   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:14.582676   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:11.000857   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:13.000915   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:15.001474   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:11.248034   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:10:11.261403   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:10:11.261471   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:10:11.294260   62996 cri.go:89] found id: ""
	I0914 18:10:11.294287   62996 logs.go:276] 0 containers: []
	W0914 18:10:11.294298   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:10:11.294305   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:10:11.294376   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:10:11.326784   62996 cri.go:89] found id: ""
	I0914 18:10:11.326811   62996 logs.go:276] 0 containers: []
	W0914 18:10:11.326822   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:10:11.326829   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:10:11.326878   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:10:11.359209   62996 cri.go:89] found id: ""
	I0914 18:10:11.359234   62996 logs.go:276] 0 containers: []
	W0914 18:10:11.359242   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:10:11.359247   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:10:11.359316   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:10:11.393800   62996 cri.go:89] found id: ""
	I0914 18:10:11.393828   62996 logs.go:276] 0 containers: []
	W0914 18:10:11.393836   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:10:11.393842   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:10:11.393889   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:10:11.425772   62996 cri.go:89] found id: ""
	I0914 18:10:11.425798   62996 logs.go:276] 0 containers: []
	W0914 18:10:11.425808   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:10:11.425815   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:10:11.425877   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:10:11.464139   62996 cri.go:89] found id: ""
	I0914 18:10:11.464165   62996 logs.go:276] 0 containers: []
	W0914 18:10:11.464174   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:10:11.464180   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:10:11.464230   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:10:11.498822   62996 cri.go:89] found id: ""
	I0914 18:10:11.498848   62996 logs.go:276] 0 containers: []
	W0914 18:10:11.498859   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:10:11.498869   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:10:11.498925   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:10:11.532591   62996 cri.go:89] found id: ""
	I0914 18:10:11.532623   62996 logs.go:276] 0 containers: []
	W0914 18:10:11.532634   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:10:11.532646   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:10:11.532660   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:10:11.608873   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:10:11.608892   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:10:11.608903   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:10:11.684622   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:10:11.684663   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:10:11.726639   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:10:11.726667   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:10:11.780380   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:10:11.780415   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:10:14.294514   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:10:14.308716   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:10:14.308779   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:10:14.348399   62996 cri.go:89] found id: ""
	I0914 18:10:14.348423   62996 logs.go:276] 0 containers: []
	W0914 18:10:14.348431   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:10:14.348437   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:10:14.348485   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:10:14.387040   62996 cri.go:89] found id: ""
	I0914 18:10:14.387071   62996 logs.go:276] 0 containers: []
	W0914 18:10:14.387082   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:10:14.387088   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:10:14.387144   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:10:14.424704   62996 cri.go:89] found id: ""
	I0914 18:10:14.424733   62996 logs.go:276] 0 containers: []
	W0914 18:10:14.424741   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:10:14.424746   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:10:14.424793   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:10:14.464395   62996 cri.go:89] found id: ""
	I0914 18:10:14.464431   62996 logs.go:276] 0 containers: []
	W0914 18:10:14.464442   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:10:14.464450   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:10:14.464511   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:10:14.495895   62996 cri.go:89] found id: ""
	I0914 18:10:14.495921   62996 logs.go:276] 0 containers: []
	W0914 18:10:14.495931   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:10:14.495938   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:10:14.496001   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:10:14.532877   62996 cri.go:89] found id: ""
	I0914 18:10:14.532904   62996 logs.go:276] 0 containers: []
	W0914 18:10:14.532914   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:10:14.532921   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:10:14.532987   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:10:14.568381   62996 cri.go:89] found id: ""
	I0914 18:10:14.568408   62996 logs.go:276] 0 containers: []
	W0914 18:10:14.568423   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:10:14.568430   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:10:14.568491   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:10:14.603867   62996 cri.go:89] found id: ""
	I0914 18:10:14.603897   62996 logs.go:276] 0 containers: []
	W0914 18:10:14.603908   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:10:14.603917   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:10:14.603933   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:10:14.616681   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:10:14.616705   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:10:14.687817   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:10:14.687852   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:10:14.687866   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:10:14.761672   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:10:14.761714   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:10:14.802676   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:10:14.802705   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
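
The cycle above is minikube's log collector for the v1.20.0 cluster under test: it probes every expected control-plane component with crictl, finds no containers at all, and then falls back to dumping kubelet, dmesg, "describe nodes", CRI-O, and container status. A minimal hand-run sketch of the same probe (component names taken from the log; run it on the node, e.g. inside "minikube ssh"):

    # Probe each control-plane component the way the collector above does;
    # empty output means no container (running or exited) exists for that name.
    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager; do
      echo "== ${name} =="
      sudo crictl ps -a --quiet --name="${name}"
    done
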
	I0914 18:10:13.103818   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:15.602921   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:17.082737   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:19.082771   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:17.501947   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:20.000464   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:17.353218   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:10:17.366139   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:10:17.366224   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:10:17.404478   62996 cri.go:89] found id: ""
	I0914 18:10:17.404511   62996 logs.go:276] 0 containers: []
	W0914 18:10:17.404522   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:10:17.404530   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:10:17.404608   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:10:17.437553   62996 cri.go:89] found id: ""
	I0914 18:10:17.437579   62996 logs.go:276] 0 containers: []
	W0914 18:10:17.437588   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:10:17.437593   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:10:17.437648   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:10:17.473815   62996 cri.go:89] found id: ""
	I0914 18:10:17.473842   62996 logs.go:276] 0 containers: []
	W0914 18:10:17.473850   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:10:17.473855   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:10:17.473919   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:10:17.518593   62996 cri.go:89] found id: ""
	I0914 18:10:17.518617   62996 logs.go:276] 0 containers: []
	W0914 18:10:17.518625   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:10:17.518631   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:10:17.518679   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:10:17.554631   62996 cri.go:89] found id: ""
	I0914 18:10:17.554663   62996 logs.go:276] 0 containers: []
	W0914 18:10:17.554675   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:10:17.554682   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:10:17.554742   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:10:17.591485   62996 cri.go:89] found id: ""
	I0914 18:10:17.591512   62996 logs.go:276] 0 containers: []
	W0914 18:10:17.591520   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:10:17.591525   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:10:17.591582   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:10:17.629883   62996 cri.go:89] found id: ""
	I0914 18:10:17.629910   62996 logs.go:276] 0 containers: []
	W0914 18:10:17.629918   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:10:17.629925   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:10:17.629973   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:10:17.670639   62996 cri.go:89] found id: ""
	I0914 18:10:17.670666   62996 logs.go:276] 0 containers: []
	W0914 18:10:17.670677   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:10:17.670688   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:10:17.670700   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:10:17.725056   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:10:17.725095   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:10:17.738236   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:10:17.738267   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:10:17.812931   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:10:17.812963   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:10:17.812978   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:10:17.896394   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:10:17.896426   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:10:18.102598   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:20.104053   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:21.085272   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:23.583185   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:22.001396   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:24.500424   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:20.434465   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:10:20.448801   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:10:20.448878   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:10:20.482909   62996 cri.go:89] found id: ""
	I0914 18:10:20.482937   62996 logs.go:276] 0 containers: []
	W0914 18:10:20.482949   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:10:20.482956   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:10:20.483017   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:10:20.516865   62996 cri.go:89] found id: ""
	I0914 18:10:20.516888   62996 logs.go:276] 0 containers: []
	W0914 18:10:20.516896   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:10:20.516902   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:10:20.516961   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:10:20.556131   62996 cri.go:89] found id: ""
	I0914 18:10:20.556164   62996 logs.go:276] 0 containers: []
	W0914 18:10:20.556174   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:10:20.556182   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:10:20.556246   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:10:20.594755   62996 cri.go:89] found id: ""
	I0914 18:10:20.594779   62996 logs.go:276] 0 containers: []
	W0914 18:10:20.594787   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:10:20.594795   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:10:20.594841   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:10:20.630259   62996 cri.go:89] found id: ""
	I0914 18:10:20.630290   62996 logs.go:276] 0 containers: []
	W0914 18:10:20.630300   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:10:20.630307   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:10:20.630379   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:10:20.667721   62996 cri.go:89] found id: ""
	I0914 18:10:20.667754   62996 logs.go:276] 0 containers: []
	W0914 18:10:20.667763   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:10:20.667769   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:10:20.667826   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:10:20.706358   62996 cri.go:89] found id: ""
	I0914 18:10:20.706387   62996 logs.go:276] 0 containers: []
	W0914 18:10:20.706396   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:10:20.706401   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:10:20.706462   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:10:20.738514   62996 cri.go:89] found id: ""
	I0914 18:10:20.738541   62996 logs.go:276] 0 containers: []
	W0914 18:10:20.738549   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:10:20.738557   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:10:20.738576   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:10:20.775075   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:10:20.775105   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:10:20.825988   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:10:20.826026   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:10:20.839157   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:10:20.839194   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:10:20.915730   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:10:20.915750   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:10:20.915762   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:10:23.497427   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:10:23.511559   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:10:23.511633   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:10:23.546913   62996 cri.go:89] found id: ""
	I0914 18:10:23.546945   62996 logs.go:276] 0 containers: []
	W0914 18:10:23.546958   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:10:23.546969   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:10:23.547034   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:10:23.584438   62996 cri.go:89] found id: ""
	I0914 18:10:23.584457   62996 logs.go:276] 0 containers: []
	W0914 18:10:23.584463   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:10:23.584469   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:10:23.584517   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:10:23.618777   62996 cri.go:89] found id: ""
	I0914 18:10:23.618804   62996 logs.go:276] 0 containers: []
	W0914 18:10:23.618812   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:10:23.618817   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:10:23.618876   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:10:23.652197   62996 cri.go:89] found id: ""
	I0914 18:10:23.652225   62996 logs.go:276] 0 containers: []
	W0914 18:10:23.652236   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:10:23.652244   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:10:23.652304   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:10:23.687678   62996 cri.go:89] found id: ""
	I0914 18:10:23.687712   62996 logs.go:276] 0 containers: []
	W0914 18:10:23.687725   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:10:23.687733   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:10:23.687790   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:10:23.720884   62996 cri.go:89] found id: ""
	I0914 18:10:23.720918   62996 logs.go:276] 0 containers: []
	W0914 18:10:23.720929   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:10:23.720936   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:10:23.721004   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:10:23.753335   62996 cri.go:89] found id: ""
	I0914 18:10:23.753365   62996 logs.go:276] 0 containers: []
	W0914 18:10:23.753376   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:10:23.753384   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:10:23.753431   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:10:23.787177   62996 cri.go:89] found id: ""
	I0914 18:10:23.787209   62996 logs.go:276] 0 containers: []
	W0914 18:10:23.787230   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:10:23.787241   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:10:23.787254   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:10:23.864763   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:10:23.864802   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:10:23.903394   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:10:23.903424   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:10:23.952696   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:10:23.952734   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:10:23.967115   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:10:23.967142   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:10:24.035394   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
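
Every "describe nodes" attempt in this stretch fails the same way: kubectl cannot reach the apiserver on localhost:8443, which is consistent with the empty crictl listings above (no kube-apiserver container ever started). Both probes below are the collector's own commands, repeated here as a hand-run sketch; a non-zero exit from each confirms the control plane is down rather than merely slow:

    # No apiserver process on the node ...
    sudo pgrep -xnf 'kube-apiserver.*minikube.*' || echo "no kube-apiserver process"
    # ... so the bundled kubectl gets "connection refused" on localhost:8443.
    sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes \
      --kubeconfig=/var/lib/minikube/kubeconfig || echo "apiserver unreachable"
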
	I0914 18:10:22.602815   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:24.603230   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:26.604416   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:26.082291   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:28.582007   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:26.501088   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:29.001400   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:26.536361   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:10:26.550666   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:10:26.550746   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:10:26.588940   62996 cri.go:89] found id: ""
	I0914 18:10:26.588974   62996 logs.go:276] 0 containers: []
	W0914 18:10:26.588988   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:10:26.588997   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:10:26.589064   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:10:26.627475   62996 cri.go:89] found id: ""
	I0914 18:10:26.627523   62996 logs.go:276] 0 containers: []
	W0914 18:10:26.627537   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:10:26.627546   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:10:26.627619   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:10:26.664995   62996 cri.go:89] found id: ""
	I0914 18:10:26.665021   62996 logs.go:276] 0 containers: []
	W0914 18:10:26.665029   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:10:26.665034   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:10:26.665087   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:10:26.699195   62996 cri.go:89] found id: ""
	I0914 18:10:26.699223   62996 logs.go:276] 0 containers: []
	W0914 18:10:26.699234   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:10:26.699241   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:10:26.699300   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:10:26.735746   62996 cri.go:89] found id: ""
	I0914 18:10:26.735781   62996 logs.go:276] 0 containers: []
	W0914 18:10:26.735790   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:10:26.735795   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:10:26.735857   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:10:26.772220   62996 cri.go:89] found id: ""
	I0914 18:10:26.772251   62996 logs.go:276] 0 containers: []
	W0914 18:10:26.772261   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:10:26.772270   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:10:26.772331   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:10:26.808301   62996 cri.go:89] found id: ""
	I0914 18:10:26.808330   62996 logs.go:276] 0 containers: []
	W0914 18:10:26.808339   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:10:26.808346   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:10:26.808412   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:10:26.844824   62996 cri.go:89] found id: ""
	I0914 18:10:26.844858   62996 logs.go:276] 0 containers: []
	W0914 18:10:26.844870   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:10:26.844880   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:10:26.844916   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:10:26.899960   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:10:26.899999   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:10:26.914413   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:10:26.914438   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:10:26.990599   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:10:26.990620   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:10:26.990632   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:10:27.067822   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:10:27.067872   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:10:29.610959   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:10:29.625456   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:10:29.625517   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:10:29.662963   62996 cri.go:89] found id: ""
	I0914 18:10:29.662990   62996 logs.go:276] 0 containers: []
	W0914 18:10:29.663002   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:10:29.663009   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:10:29.663078   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:10:29.702141   62996 cri.go:89] found id: ""
	I0914 18:10:29.702189   62996 logs.go:276] 0 containers: []
	W0914 18:10:29.702201   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:10:29.702208   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:10:29.702265   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:10:29.737559   62996 cri.go:89] found id: ""
	I0914 18:10:29.737584   62996 logs.go:276] 0 containers: []
	W0914 18:10:29.737592   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:10:29.737598   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:10:29.737644   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:10:29.773544   62996 cri.go:89] found id: ""
	I0914 18:10:29.773570   62996 logs.go:276] 0 containers: []
	W0914 18:10:29.773578   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:10:29.773586   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:10:29.773639   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:10:29.815355   62996 cri.go:89] found id: ""
	I0914 18:10:29.815401   62996 logs.go:276] 0 containers: []
	W0914 18:10:29.815414   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:10:29.815422   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:10:29.815490   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:10:29.855729   62996 cri.go:89] found id: ""
	I0914 18:10:29.855755   62996 logs.go:276] 0 containers: []
	W0914 18:10:29.855765   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:10:29.855772   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:10:29.855835   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:10:29.894023   62996 cri.go:89] found id: ""
	I0914 18:10:29.894048   62996 logs.go:276] 0 containers: []
	W0914 18:10:29.894056   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:10:29.894063   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:10:29.894120   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:10:29.928873   62996 cri.go:89] found id: ""
	I0914 18:10:29.928900   62996 logs.go:276] 0 containers: []
	W0914 18:10:29.928910   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:10:29.928921   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:10:29.928937   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:10:30.005879   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:10:30.005904   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:10:30.005917   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:10:30.087160   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:10:30.087196   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:10:30.126027   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:10:30.126058   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:10:30.178901   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:10:30.178941   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:10:28.604725   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:31.103833   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:30.582800   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:33.082884   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:31.001447   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:33.501525   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:32.692789   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:10:32.708884   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:10:32.708942   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:10:32.744684   62996 cri.go:89] found id: ""
	I0914 18:10:32.744711   62996 logs.go:276] 0 containers: []
	W0914 18:10:32.744722   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:10:32.744729   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:10:32.744789   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:10:32.778311   62996 cri.go:89] found id: ""
	I0914 18:10:32.778345   62996 logs.go:276] 0 containers: []
	W0914 18:10:32.778355   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:10:32.778362   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:10:32.778421   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:10:32.820122   62996 cri.go:89] found id: ""
	I0914 18:10:32.820150   62996 logs.go:276] 0 containers: []
	W0914 18:10:32.820158   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:10:32.820163   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:10:32.820213   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:10:32.856507   62996 cri.go:89] found id: ""
	I0914 18:10:32.856541   62996 logs.go:276] 0 containers: []
	W0914 18:10:32.856552   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:10:32.856559   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:10:32.856622   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:10:32.891891   62996 cri.go:89] found id: ""
	I0914 18:10:32.891922   62996 logs.go:276] 0 containers: []
	W0914 18:10:32.891934   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:10:32.891942   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:10:32.892001   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:10:32.936666   62996 cri.go:89] found id: ""
	I0914 18:10:32.936696   62996 logs.go:276] 0 containers: []
	W0914 18:10:32.936708   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:10:32.936715   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:10:32.936783   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:10:32.972287   62996 cri.go:89] found id: ""
	I0914 18:10:32.972321   62996 logs.go:276] 0 containers: []
	W0914 18:10:32.972333   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:10:32.972341   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:10:32.972406   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:10:33.028398   62996 cri.go:89] found id: ""
	I0914 18:10:33.028423   62996 logs.go:276] 0 containers: []
	W0914 18:10:33.028430   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:10:33.028438   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:10:33.028447   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:10:33.041604   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:10:33.041631   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:10:33.116278   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:10:33.116310   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:10:33.116325   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:10:33.194720   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:10:33.194755   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:10:33.235741   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:10:33.235778   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:10:33.603121   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:35.604573   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:35.083689   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:37.583721   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:36.000829   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:38.001022   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:40.002742   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:35.787601   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:10:35.801819   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:10:35.801895   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:10:35.837381   62996 cri.go:89] found id: ""
	I0914 18:10:35.837409   62996 logs.go:276] 0 containers: []
	W0914 18:10:35.837417   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:10:35.837423   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:10:35.837473   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:10:35.872876   62996 cri.go:89] found id: ""
	I0914 18:10:35.872907   62996 logs.go:276] 0 containers: []
	W0914 18:10:35.872915   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:10:35.872921   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:10:35.872972   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:10:35.908885   62996 cri.go:89] found id: ""
	I0914 18:10:35.908912   62996 logs.go:276] 0 containers: []
	W0914 18:10:35.908927   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:10:35.908932   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:10:35.908991   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:10:35.943358   62996 cri.go:89] found id: ""
	I0914 18:10:35.943386   62996 logs.go:276] 0 containers: []
	W0914 18:10:35.943395   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:10:35.943400   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:10:35.943450   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:10:35.978387   62996 cri.go:89] found id: ""
	I0914 18:10:35.978416   62996 logs.go:276] 0 containers: []
	W0914 18:10:35.978427   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:10:35.978434   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:10:35.978486   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:10:36.012836   62996 cri.go:89] found id: ""
	I0914 18:10:36.012863   62996 logs.go:276] 0 containers: []
	W0914 18:10:36.012874   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:10:36.012881   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:10:36.012931   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:10:36.048243   62996 cri.go:89] found id: ""
	I0914 18:10:36.048272   62996 logs.go:276] 0 containers: []
	W0914 18:10:36.048283   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:10:36.048290   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:10:36.048378   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:10:36.089415   62996 cri.go:89] found id: ""
	I0914 18:10:36.089449   62996 logs.go:276] 0 containers: []
	W0914 18:10:36.089460   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:10:36.089471   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:10:36.089484   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:10:36.141287   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:10:36.141324   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:10:36.154418   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:10:36.154444   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:10:36.228454   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:10:36.228483   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:10:36.228500   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:10:36.302020   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:10:36.302063   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:10:38.841946   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:10:38.855010   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:10:38.855072   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:10:38.890835   62996 cri.go:89] found id: ""
	I0914 18:10:38.890867   62996 logs.go:276] 0 containers: []
	W0914 18:10:38.890878   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:10:38.890886   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:10:38.890945   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:10:38.924675   62996 cri.go:89] found id: ""
	I0914 18:10:38.924700   62996 logs.go:276] 0 containers: []
	W0914 18:10:38.924708   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:10:38.924713   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:10:38.924761   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:10:38.959999   62996 cri.go:89] found id: ""
	I0914 18:10:38.960024   62996 logs.go:276] 0 containers: []
	W0914 18:10:38.960032   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:10:38.960038   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:10:38.960097   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:10:38.995718   62996 cri.go:89] found id: ""
	I0914 18:10:38.995747   62996 logs.go:276] 0 containers: []
	W0914 18:10:38.995755   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:10:38.995761   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:10:38.995810   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:10:39.031178   62996 cri.go:89] found id: ""
	I0914 18:10:39.031208   62996 logs.go:276] 0 containers: []
	W0914 18:10:39.031224   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:10:39.031232   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:10:39.031292   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:10:39.065511   62996 cri.go:89] found id: ""
	I0914 18:10:39.065540   62996 logs.go:276] 0 containers: []
	W0914 18:10:39.065560   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:10:39.065569   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:10:39.065628   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:10:39.103625   62996 cri.go:89] found id: ""
	I0914 18:10:39.103655   62996 logs.go:276] 0 containers: []
	W0914 18:10:39.103671   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:10:39.103678   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:10:39.103772   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:10:39.140140   62996 cri.go:89] found id: ""
	I0914 18:10:39.140169   62996 logs.go:276] 0 containers: []
	W0914 18:10:39.140179   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:10:39.140189   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:10:39.140205   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:10:39.154953   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:10:39.154980   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:10:39.226745   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:10:39.226778   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:10:39.226794   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:10:39.305268   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:10:39.305310   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:10:39.345363   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:10:39.345389   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:10:38.102910   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:40.103826   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:40.082907   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:42.083587   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:44.582457   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:42.500851   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:45.001069   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
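
The interleaved pod_ready lines come from three other test processes (62207, 62554, 63448), each polling a metrics-server pod in its own cluster that never reports Ready. A rough manual equivalent of that poll, using one pod name from the log (the --context value is a placeholder for whichever profile is being checked):

    # Inspect the pod once, then block up to 60s for the Ready condition.
    kubectl --context <profile> -n kube-system get pod metrics-server-6867b74b74-n276z -o wide
    kubectl --context <profile> -n kube-system wait --for=condition=Ready \
      pod/metrics-server-6867b74b74-n276z --timeout=60s
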
	I0914 18:10:41.897635   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:10:41.910895   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:10:41.910962   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:10:41.946302   62996 cri.go:89] found id: ""
	I0914 18:10:41.946327   62996 logs.go:276] 0 containers: []
	W0914 18:10:41.946338   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:10:41.946345   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:10:41.946405   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:10:41.983180   62996 cri.go:89] found id: ""
	I0914 18:10:41.983210   62996 logs.go:276] 0 containers: []
	W0914 18:10:41.983221   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:10:41.983231   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:10:41.983294   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:10:42.017923   62996 cri.go:89] found id: ""
	I0914 18:10:42.017946   62996 logs.go:276] 0 containers: []
	W0914 18:10:42.017954   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:10:42.017959   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:10:42.018006   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:10:42.052086   62996 cri.go:89] found id: ""
	I0914 18:10:42.052122   62996 logs.go:276] 0 containers: []
	W0914 18:10:42.052133   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:10:42.052140   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:10:42.052206   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:10:42.092000   62996 cri.go:89] found id: ""
	I0914 18:10:42.092029   62996 logs.go:276] 0 containers: []
	W0914 18:10:42.092040   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:10:42.092048   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:10:42.092114   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:10:42.130402   62996 cri.go:89] found id: ""
	I0914 18:10:42.130436   62996 logs.go:276] 0 containers: []
	W0914 18:10:42.130447   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:10:42.130455   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:10:42.130505   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:10:42.166614   62996 cri.go:89] found id: ""
	I0914 18:10:42.166639   62996 logs.go:276] 0 containers: []
	W0914 18:10:42.166647   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:10:42.166653   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:10:42.166704   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:10:42.199763   62996 cri.go:89] found id: ""
	I0914 18:10:42.199795   62996 logs.go:276] 0 containers: []
	W0914 18:10:42.199808   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:10:42.199820   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:10:42.199835   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:10:42.251564   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:10:42.251597   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:10:42.264771   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:10:42.264806   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:10:42.335441   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:10:42.335465   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:10:42.335489   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:10:42.417678   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:10:42.417715   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:10:44.956372   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:10:44.970643   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:10:44.970717   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:10:45.011625   62996 cri.go:89] found id: ""
	I0914 18:10:45.011659   62996 logs.go:276] 0 containers: []
	W0914 18:10:45.011671   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:10:45.011678   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:10:45.011738   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:10:45.047489   62996 cri.go:89] found id: ""
	I0914 18:10:45.047515   62996 logs.go:276] 0 containers: []
	W0914 18:10:45.047526   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:10:45.047541   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:10:45.047610   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:10:45.084909   62996 cri.go:89] found id: ""
	I0914 18:10:45.084935   62996 logs.go:276] 0 containers: []
	W0914 18:10:45.084957   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:10:45.084964   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:10:45.085035   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:10:45.120074   62996 cri.go:89] found id: ""
	I0914 18:10:45.120104   62996 logs.go:276] 0 containers: []
	W0914 18:10:45.120115   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:10:45.120123   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:10:45.120181   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:10:45.164010   62996 cri.go:89] found id: ""
	I0914 18:10:45.164039   62996 logs.go:276] 0 containers: []
	W0914 18:10:45.164050   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:10:45.164058   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:10:45.164128   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:10:45.209565   62996 cri.go:89] found id: ""
	I0914 18:10:45.209590   62996 logs.go:276] 0 containers: []
	W0914 18:10:45.209598   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:10:45.209604   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:10:45.209651   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:10:45.265484   62996 cri.go:89] found id: ""
	I0914 18:10:45.265513   62996 logs.go:276] 0 containers: []
	W0914 18:10:45.265521   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:10:45.265527   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:10:45.265593   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:10:45.300671   62996 cri.go:89] found id: ""
	I0914 18:10:45.300700   62996 logs.go:276] 0 containers: []
	W0914 18:10:45.300711   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:10:45.300722   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:10:45.300739   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:10:42.603017   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:45.104603   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:47.082010   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:49.082648   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:47.500917   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:50.001192   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:45.352657   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:10:45.352699   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:10:45.366347   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:10:45.366381   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:10:45.442993   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:10:45.443013   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:10:45.443024   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:10:45.523475   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:10:45.523522   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:10:48.062222   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:10:48.075764   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:10:48.075832   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:10:48.111836   62996 cri.go:89] found id: ""
	I0914 18:10:48.111864   62996 logs.go:276] 0 containers: []
	W0914 18:10:48.111876   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:10:48.111884   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:10:48.111942   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:10:48.144440   62996 cri.go:89] found id: ""
	I0914 18:10:48.144471   62996 logs.go:276] 0 containers: []
	W0914 18:10:48.144483   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:10:48.144490   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:10:48.144553   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:10:48.179694   62996 cri.go:89] found id: ""
	I0914 18:10:48.179724   62996 logs.go:276] 0 containers: []
	W0914 18:10:48.179732   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:10:48.179738   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:10:48.179799   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:10:48.217290   62996 cri.go:89] found id: ""
	I0914 18:10:48.217320   62996 logs.go:276] 0 containers: []
	W0914 18:10:48.217331   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:10:48.217337   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:10:48.217384   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:10:48.252071   62996 cri.go:89] found id: ""
	I0914 18:10:48.252098   62996 logs.go:276] 0 containers: []
	W0914 18:10:48.252105   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:10:48.252111   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:10:48.252172   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:10:48.285372   62996 cri.go:89] found id: ""
	I0914 18:10:48.285399   62996 logs.go:276] 0 containers: []
	W0914 18:10:48.285407   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:10:48.285414   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:10:48.285461   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:10:48.318015   62996 cri.go:89] found id: ""
	I0914 18:10:48.318040   62996 logs.go:276] 0 containers: []
	W0914 18:10:48.318048   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:10:48.318054   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:10:48.318099   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:10:48.350976   62996 cri.go:89] found id: ""
	I0914 18:10:48.351006   62996 logs.go:276] 0 containers: []
	W0914 18:10:48.351018   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:10:48.351027   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:10:48.351040   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:10:48.364707   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:10:48.364731   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:10:48.436438   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:10:48.436472   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:10:48.436488   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:10:48.517132   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:10:48.517165   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:10:48.555153   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:10:48.555182   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:10:47.603610   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:50.104612   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:51.083246   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:53.582120   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:52.001273   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:54.001308   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:51.108066   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:10:51.121176   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:10:51.121254   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:10:51.155641   62996 cri.go:89] found id: ""
	I0914 18:10:51.155675   62996 logs.go:276] 0 containers: []
	W0914 18:10:51.155687   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:10:51.155693   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:10:51.155744   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:10:51.189642   62996 cri.go:89] found id: ""
	I0914 18:10:51.189677   62996 logs.go:276] 0 containers: []
	W0914 18:10:51.189691   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:10:51.189698   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:10:51.189763   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:10:51.223337   62996 cri.go:89] found id: ""
	I0914 18:10:51.223365   62996 logs.go:276] 0 containers: []
	W0914 18:10:51.223375   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:10:51.223383   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:10:51.223446   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:10:51.259524   62996 cri.go:89] found id: ""
	I0914 18:10:51.259549   62996 logs.go:276] 0 containers: []
	W0914 18:10:51.259557   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:10:51.259568   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:10:51.259625   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:10:51.295307   62996 cri.go:89] found id: ""
	I0914 18:10:51.295336   62996 logs.go:276] 0 containers: []
	W0914 18:10:51.295347   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:10:51.295354   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:10:51.295419   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:10:51.330619   62996 cri.go:89] found id: ""
	I0914 18:10:51.330658   62996 logs.go:276] 0 containers: []
	W0914 18:10:51.330670   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:10:51.330677   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:10:51.330741   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:10:51.365146   62996 cri.go:89] found id: ""
	I0914 18:10:51.365178   62996 logs.go:276] 0 containers: []
	W0914 18:10:51.365191   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:10:51.365200   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:10:51.365263   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:10:51.403295   62996 cri.go:89] found id: ""
	I0914 18:10:51.403330   62996 logs.go:276] 0 containers: []
	W0914 18:10:51.403342   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:10:51.403353   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:10:51.403369   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:10:51.467426   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:10:51.467452   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:10:51.467471   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:10:51.552003   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:10:51.552037   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:10:51.591888   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:10:51.591921   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:10:51.645437   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:10:51.645472   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:10:54.160542   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:10:54.173965   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:10:54.174040   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:10:54.209242   62996 cri.go:89] found id: ""
	I0914 18:10:54.209270   62996 logs.go:276] 0 containers: []
	W0914 18:10:54.209281   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:10:54.209288   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:10:54.209365   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:10:54.242345   62996 cri.go:89] found id: ""
	I0914 18:10:54.242374   62996 logs.go:276] 0 containers: []
	W0914 18:10:54.242384   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:10:54.242392   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:10:54.242453   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:10:54.278677   62996 cri.go:89] found id: ""
	I0914 18:10:54.278707   62996 logs.go:276] 0 containers: []
	W0914 18:10:54.278718   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:10:54.278725   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:10:54.278793   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:10:54.314802   62996 cri.go:89] found id: ""
	I0914 18:10:54.314831   62996 logs.go:276] 0 containers: []
	W0914 18:10:54.314842   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:10:54.314849   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:10:54.314920   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:10:54.349075   62996 cri.go:89] found id: ""
	I0914 18:10:54.349100   62996 logs.go:276] 0 containers: []
	W0914 18:10:54.349120   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:10:54.349127   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:10:54.349189   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:10:54.382337   62996 cri.go:89] found id: ""
	I0914 18:10:54.382363   62996 logs.go:276] 0 containers: []
	W0914 18:10:54.382371   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:10:54.382376   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:10:54.382423   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:10:54.416613   62996 cri.go:89] found id: ""
	I0914 18:10:54.416640   62996 logs.go:276] 0 containers: []
	W0914 18:10:54.416649   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:10:54.416654   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:10:54.416701   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:10:54.449563   62996 cri.go:89] found id: ""
	I0914 18:10:54.449596   62996 logs.go:276] 0 containers: []
	W0914 18:10:54.449606   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:10:54.449617   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:10:54.449631   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:10:54.487454   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:10:54.487489   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:10:54.541679   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:10:54.541720   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:10:54.555267   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:10:54.555299   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:10:54.630280   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:10:54.630313   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:10:54.630323   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:10:52.603604   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:55.104734   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:55.582258   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:58.081905   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:56.002210   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:58.499961   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:57.215606   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:10:57.228469   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:10:57.228550   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:10:57.260643   62996 cri.go:89] found id: ""
	I0914 18:10:57.260675   62996 logs.go:276] 0 containers: []
	W0914 18:10:57.260684   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:10:57.260690   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:10:57.260750   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:10:57.294125   62996 cri.go:89] found id: ""
	I0914 18:10:57.294174   62996 logs.go:276] 0 containers: []
	W0914 18:10:57.294186   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:10:57.294196   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:10:57.294259   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:10:57.328078   62996 cri.go:89] found id: ""
	I0914 18:10:57.328101   62996 logs.go:276] 0 containers: []
	W0914 18:10:57.328108   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:10:57.328114   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:10:57.328173   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:10:57.362451   62996 cri.go:89] found id: ""
	I0914 18:10:57.362476   62996 logs.go:276] 0 containers: []
	W0914 18:10:57.362483   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:10:57.362489   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:10:57.362556   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:10:57.398273   62996 cri.go:89] found id: ""
	I0914 18:10:57.398298   62996 logs.go:276] 0 containers: []
	W0914 18:10:57.398306   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:10:57.398311   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:10:57.398374   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:10:57.431112   62996 cri.go:89] found id: ""
	I0914 18:10:57.431137   62996 logs.go:276] 0 containers: []
	W0914 18:10:57.431145   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:10:57.431151   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:10:57.431197   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:10:57.464930   62996 cri.go:89] found id: ""
	I0914 18:10:57.464956   62996 logs.go:276] 0 containers: []
	W0914 18:10:57.464966   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:10:57.464973   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:10:57.465033   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:10:57.501233   62996 cri.go:89] found id: ""
	I0914 18:10:57.501263   62996 logs.go:276] 0 containers: []
	W0914 18:10:57.501276   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:10:57.501287   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:10:57.501302   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:10:57.550798   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:10:57.550836   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:10:57.564238   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:10:57.564263   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:10:57.634387   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:10:57.634414   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:10:57.634424   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:10:57.714218   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:10:57.714253   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:11:00.251944   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:11:00.264817   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:11:00.264881   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:11:00.306613   62996 cri.go:89] found id: ""
	I0914 18:11:00.306641   62996 logs.go:276] 0 containers: []
	W0914 18:11:00.306651   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:11:00.306658   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:11:00.306717   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:11:00.340297   62996 cri.go:89] found id: ""
	I0914 18:11:00.340327   62996 logs.go:276] 0 containers: []
	W0914 18:11:00.340338   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:11:00.340346   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:11:00.340404   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:10:57.604025   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:00.104193   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:00.083208   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:02.582299   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:04.583803   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:00.500596   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:02.501405   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:04.501527   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:00.373553   62996 cri.go:89] found id: ""
	I0914 18:11:00.373594   62996 logs.go:276] 0 containers: []
	W0914 18:11:00.373603   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:11:00.373609   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:11:00.373657   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:11:00.407351   62996 cri.go:89] found id: ""
	I0914 18:11:00.407381   62996 logs.go:276] 0 containers: []
	W0914 18:11:00.407392   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:11:00.407400   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:11:00.407461   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:11:00.440976   62996 cri.go:89] found id: ""
	I0914 18:11:00.441005   62996 logs.go:276] 0 containers: []
	W0914 18:11:00.441016   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:11:00.441024   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:11:00.441085   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:11:00.478138   62996 cri.go:89] found id: ""
	I0914 18:11:00.478180   62996 logs.go:276] 0 containers: []
	W0914 18:11:00.478193   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:11:00.478201   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:11:00.478264   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:11:00.513861   62996 cri.go:89] found id: ""
	I0914 18:11:00.513885   62996 logs.go:276] 0 containers: []
	W0914 18:11:00.513897   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:11:00.513905   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:11:00.513955   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:11:00.547295   62996 cri.go:89] found id: ""
	I0914 18:11:00.547338   62996 logs.go:276] 0 containers: []
	W0914 18:11:00.547348   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:11:00.547357   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:11:00.547367   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:11:00.598108   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:11:00.598146   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:11:00.611751   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:11:00.611778   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:11:00.688767   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:11:00.688788   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:11:00.688803   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:11:00.771892   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:11:00.771929   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:11:03.310816   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:11:03.323773   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:11:03.323838   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:11:03.357873   62996 cri.go:89] found id: ""
	I0914 18:11:03.357910   62996 logs.go:276] 0 containers: []
	W0914 18:11:03.357922   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:11:03.357934   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:11:03.357995   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:11:03.394978   62996 cri.go:89] found id: ""
	I0914 18:11:03.395012   62996 logs.go:276] 0 containers: []
	W0914 18:11:03.395024   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:11:03.395032   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:11:03.395098   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:11:03.429699   62996 cri.go:89] found id: ""
	I0914 18:11:03.429725   62996 logs.go:276] 0 containers: []
	W0914 18:11:03.429734   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:11:03.429740   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:11:03.429794   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:11:03.462616   62996 cri.go:89] found id: ""
	I0914 18:11:03.462648   62996 logs.go:276] 0 containers: []
	W0914 18:11:03.462660   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:11:03.462692   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:11:03.462759   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:11:03.496464   62996 cri.go:89] found id: ""
	I0914 18:11:03.496495   62996 logs.go:276] 0 containers: []
	W0914 18:11:03.496506   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:11:03.496513   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:11:03.496573   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:11:03.529655   62996 cri.go:89] found id: ""
	I0914 18:11:03.529687   62996 logs.go:276] 0 containers: []
	W0914 18:11:03.529697   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:11:03.529704   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:11:03.529767   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:11:03.563025   62996 cri.go:89] found id: ""
	I0914 18:11:03.563055   62996 logs.go:276] 0 containers: []
	W0914 18:11:03.563064   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:11:03.563069   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:11:03.563123   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:11:03.604066   62996 cri.go:89] found id: ""
	I0914 18:11:03.604088   62996 logs.go:276] 0 containers: []
	W0914 18:11:03.604095   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:11:03.604103   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:11:03.604114   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:11:03.656607   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:11:03.656647   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:11:03.669974   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:11:03.670004   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:11:03.742295   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:11:03.742324   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:11:03.742343   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:11:03.817527   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:11:03.817566   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:11:02.602818   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:05.103061   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:07.083161   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:09.585702   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:06.999885   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:09.001611   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:06.355023   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:11:06.368376   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:11:06.368445   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:11:06.403876   62996 cri.go:89] found id: ""
	I0914 18:11:06.403904   62996 logs.go:276] 0 containers: []
	W0914 18:11:06.403916   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:11:06.403924   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:11:06.403997   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:11:06.438187   62996 cri.go:89] found id: ""
	I0914 18:11:06.438217   62996 logs.go:276] 0 containers: []
	W0914 18:11:06.438229   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:11:06.438236   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:11:06.438302   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:11:06.477599   62996 cri.go:89] found id: ""
	I0914 18:11:06.477628   62996 logs.go:276] 0 containers: []
	W0914 18:11:06.477639   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:11:06.477646   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:11:06.477718   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:11:06.514878   62996 cri.go:89] found id: ""
	I0914 18:11:06.514905   62996 logs.go:276] 0 containers: []
	W0914 18:11:06.514914   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:11:06.514920   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:11:06.514979   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:11:06.552228   62996 cri.go:89] found id: ""
	I0914 18:11:06.552260   62996 logs.go:276] 0 containers: []
	W0914 18:11:06.552272   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:11:06.552279   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:11:06.552346   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:11:06.594600   62996 cri.go:89] found id: ""
	I0914 18:11:06.594630   62996 logs.go:276] 0 containers: []
	W0914 18:11:06.594641   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:11:06.594649   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:11:06.594713   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:11:06.630977   62996 cri.go:89] found id: ""
	I0914 18:11:06.631017   62996 logs.go:276] 0 containers: []
	W0914 18:11:06.631029   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:11:06.631036   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:11:06.631095   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:11:06.666717   62996 cri.go:89] found id: ""
	I0914 18:11:06.666749   62996 logs.go:276] 0 containers: []
	W0914 18:11:06.666760   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:11:06.666771   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:11:06.666784   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:11:06.720438   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:11:06.720474   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:11:06.734264   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:11:06.734299   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:11:06.802999   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:11:06.803020   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:11:06.803039   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:11:06.881422   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:11:06.881462   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:11:09.420948   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:11:09.435498   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:11:09.435582   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:11:09.470441   62996 cri.go:89] found id: ""
	I0914 18:11:09.470473   62996 logs.go:276] 0 containers: []
	W0914 18:11:09.470485   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:11:09.470493   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:11:09.470568   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:11:09.506101   62996 cri.go:89] found id: ""
	I0914 18:11:09.506124   62996 logs.go:276] 0 containers: []
	W0914 18:11:09.506142   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:11:09.506147   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:11:09.506227   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:11:09.541518   62996 cri.go:89] found id: ""
	I0914 18:11:09.541545   62996 logs.go:276] 0 containers: []
	W0914 18:11:09.541553   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:11:09.541558   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:11:09.541618   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:11:09.582697   62996 cri.go:89] found id: ""
	I0914 18:11:09.582725   62996 logs.go:276] 0 containers: []
	W0914 18:11:09.582735   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:11:09.582743   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:11:09.582805   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:11:09.621060   62996 cri.go:89] found id: ""
	I0914 18:11:09.621088   62996 logs.go:276] 0 containers: []
	W0914 18:11:09.621097   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:11:09.621102   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:11:09.621161   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:11:09.657967   62996 cri.go:89] found id: ""
	I0914 18:11:09.657994   62996 logs.go:276] 0 containers: []
	W0914 18:11:09.658003   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:11:09.658008   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:11:09.658060   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:11:09.693397   62996 cri.go:89] found id: ""
	I0914 18:11:09.693432   62996 logs.go:276] 0 containers: []
	W0914 18:11:09.693444   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:11:09.693451   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:11:09.693505   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:11:09.730819   62996 cri.go:89] found id: ""
	I0914 18:11:09.730850   62996 logs.go:276] 0 containers: []
	W0914 18:11:09.730860   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:11:09.730871   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:11:09.730887   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:11:09.745106   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:11:09.745146   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:11:09.817032   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:11:09.817059   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:11:09.817085   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:11:09.897335   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:11:09.897383   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:11:09.939036   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:11:09.939081   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:11:07.603634   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:09.605513   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:12.082145   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:14.082616   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:11.500951   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:14.001238   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:12.493075   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:11:12.506832   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:11:12.506889   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:11:12.545417   62996 cri.go:89] found id: ""
	I0914 18:11:12.545448   62996 logs.go:276] 0 containers: []
	W0914 18:11:12.545456   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:11:12.545464   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:11:12.545516   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:11:12.580346   62996 cri.go:89] found id: ""
	I0914 18:11:12.580379   62996 logs.go:276] 0 containers: []
	W0914 18:11:12.580389   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:11:12.580397   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:11:12.580457   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:11:12.616540   62996 cri.go:89] found id: ""
	I0914 18:11:12.616570   62996 logs.go:276] 0 containers: []
	W0914 18:11:12.616577   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:11:12.616586   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:11:12.616644   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:11:12.649673   62996 cri.go:89] found id: ""
	I0914 18:11:12.649700   62996 logs.go:276] 0 containers: []
	W0914 18:11:12.649709   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:11:12.649714   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:11:12.649767   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:11:12.683840   62996 cri.go:89] found id: ""
	I0914 18:11:12.683868   62996 logs.go:276] 0 containers: []
	W0914 18:11:12.683879   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:11:12.683886   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:11:12.683946   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:11:12.716862   62996 cri.go:89] found id: ""
	I0914 18:11:12.716889   62996 logs.go:276] 0 containers: []
	W0914 18:11:12.716897   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:11:12.716903   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:11:12.716952   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:11:12.751364   62996 cri.go:89] found id: ""
	I0914 18:11:12.751395   62996 logs.go:276] 0 containers: []
	W0914 18:11:12.751406   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:11:12.751414   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:11:12.751471   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:11:12.786425   62996 cri.go:89] found id: ""
	I0914 18:11:12.786457   62996 logs.go:276] 0 containers: []
	W0914 18:11:12.786468   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:11:12.786477   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:11:12.786487   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:11:12.853890   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:11:12.853920   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:11:12.853936   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:11:12.938058   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:11:12.938107   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:11:12.985406   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:11:12.985441   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:11:13.039040   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:11:13.039077   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:11:12.103165   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:14.103338   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:16.103440   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:16.083173   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:18.582225   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:16.001344   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:18.501001   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:15.554110   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:11:15.567977   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:11:15.568051   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:11:15.604851   62996 cri.go:89] found id: ""
	I0914 18:11:15.604879   62996 logs.go:276] 0 containers: []
	W0914 18:11:15.604887   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:11:15.604892   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:11:15.604945   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:11:15.641180   62996 cri.go:89] found id: ""
	I0914 18:11:15.641209   62996 logs.go:276] 0 containers: []
	W0914 18:11:15.641221   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:11:15.641229   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:11:15.641324   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:11:15.680284   62996 cri.go:89] found id: ""
	I0914 18:11:15.680310   62996 logs.go:276] 0 containers: []
	W0914 18:11:15.680327   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:11:15.680334   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:11:15.680395   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:11:15.718118   62996 cri.go:89] found id: ""
	I0914 18:11:15.718152   62996 logs.go:276] 0 containers: []
	W0914 18:11:15.718173   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:11:15.718181   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:11:15.718237   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:11:15.753998   62996 cri.go:89] found id: ""
	I0914 18:11:15.754020   62996 logs.go:276] 0 containers: []
	W0914 18:11:15.754028   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:11:15.754033   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:11:15.754081   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:11:15.790026   62996 cri.go:89] found id: ""
	I0914 18:11:15.790066   62996 logs.go:276] 0 containers: []
	W0914 18:11:15.790084   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:11:15.790093   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:11:15.790179   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:11:15.828050   62996 cri.go:89] found id: ""
	I0914 18:11:15.828078   62996 logs.go:276] 0 containers: []
	W0914 18:11:15.828086   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:11:15.828094   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:11:15.828162   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:11:15.861289   62996 cri.go:89] found id: ""
	I0914 18:11:15.861322   62996 logs.go:276] 0 containers: []
	W0914 18:11:15.861330   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:11:15.861338   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:11:15.861348   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:11:15.875023   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:11:15.875054   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:11:15.943002   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:11:15.943025   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:11:15.943038   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:11:16.027747   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:11:16.027785   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:11:16.067097   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:11:16.067133   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:11:18.621376   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:11:18.634005   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:11:18.634093   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:11:18.667089   62996 cri.go:89] found id: ""
	I0914 18:11:18.667118   62996 logs.go:276] 0 containers: []
	W0914 18:11:18.667127   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:11:18.667132   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:11:18.667184   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:11:18.700518   62996 cri.go:89] found id: ""
	I0914 18:11:18.700547   62996 logs.go:276] 0 containers: []
	W0914 18:11:18.700563   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:11:18.700571   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:11:18.700643   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:11:18.733724   62996 cri.go:89] found id: ""
	I0914 18:11:18.733755   62996 logs.go:276] 0 containers: []
	W0914 18:11:18.733767   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:11:18.733778   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:11:18.733840   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:11:18.768696   62996 cri.go:89] found id: ""
	I0914 18:11:18.768739   62996 logs.go:276] 0 containers: []
	W0914 18:11:18.768750   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:11:18.768757   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:11:18.768816   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:11:18.803603   62996 cri.go:89] found id: ""
	I0914 18:11:18.803636   62996 logs.go:276] 0 containers: []
	W0914 18:11:18.803647   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:11:18.803653   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:11:18.803707   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:11:18.837019   62996 cri.go:89] found id: ""
	I0914 18:11:18.837044   62996 logs.go:276] 0 containers: []
	W0914 18:11:18.837052   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:11:18.837058   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:11:18.837107   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:11:18.871470   62996 cri.go:89] found id: ""
	I0914 18:11:18.871496   62996 logs.go:276] 0 containers: []
	W0914 18:11:18.871504   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:11:18.871515   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:11:18.871573   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:11:18.904439   62996 cri.go:89] found id: ""
	I0914 18:11:18.904474   62996 logs.go:276] 0 containers: []
	W0914 18:11:18.904485   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:11:18.904494   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:11:18.904504   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:11:18.978025   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:11:18.978065   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:11:19.031667   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:11:19.031709   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:11:19.083360   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:11:19.083398   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:11:19.097770   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:11:19.097796   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:11:19.167712   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:11:18.603529   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:20.607347   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:20.583176   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:23.082414   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:20.501464   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:23.000161   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:25.000597   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:21.668470   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:11:21.681917   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:11:21.681994   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:11:21.717243   62996 cri.go:89] found id: ""
	I0914 18:11:21.717272   62996 logs.go:276] 0 containers: []
	W0914 18:11:21.717281   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:11:21.717286   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:11:21.717341   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:11:21.748801   62996 cri.go:89] found id: ""
	I0914 18:11:21.748853   62996 logs.go:276] 0 containers: []
	W0914 18:11:21.748863   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:11:21.748871   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:11:21.748930   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:11:21.785146   62996 cri.go:89] found id: ""
	I0914 18:11:21.785171   62996 logs.go:276] 0 containers: []
	W0914 18:11:21.785180   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:11:21.785185   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:11:21.785242   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:11:21.819949   62996 cri.go:89] found id: ""
	I0914 18:11:21.819977   62996 logs.go:276] 0 containers: []
	W0914 18:11:21.819984   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:11:21.819990   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:11:21.820039   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:11:21.852418   62996 cri.go:89] found id: ""
	I0914 18:11:21.852451   62996 logs.go:276] 0 containers: []
	W0914 18:11:21.852461   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:11:21.852468   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:11:21.852535   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:11:21.890170   62996 cri.go:89] found id: ""
	I0914 18:11:21.890205   62996 logs.go:276] 0 containers: []
	W0914 18:11:21.890216   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:11:21.890223   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:11:21.890283   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:11:21.924386   62996 cri.go:89] found id: ""
	I0914 18:11:21.924420   62996 logs.go:276] 0 containers: []
	W0914 18:11:21.924432   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:11:21.924439   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:11:21.924505   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:11:21.960302   62996 cri.go:89] found id: ""
	I0914 18:11:21.960328   62996 logs.go:276] 0 containers: []
	W0914 18:11:21.960337   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:11:21.960346   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:11:21.960360   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:11:22.038804   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:11:22.038839   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:11:22.082411   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:11:22.082444   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:11:22.134306   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:11:22.134339   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:11:22.147891   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:11:22.147919   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:11:22.216582   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:11:24.716879   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:11:24.729436   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:11:24.729506   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:11:24.782796   62996 cri.go:89] found id: ""
	I0914 18:11:24.782822   62996 logs.go:276] 0 containers: []
	W0914 18:11:24.782833   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:11:24.782842   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:11:24.782897   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:11:24.819075   62996 cri.go:89] found id: ""
	I0914 18:11:24.819101   62996 logs.go:276] 0 containers: []
	W0914 18:11:24.819108   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:11:24.819113   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:11:24.819157   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:11:24.852976   62996 cri.go:89] found id: ""
	I0914 18:11:24.853003   62996 logs.go:276] 0 containers: []
	W0914 18:11:24.853013   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:11:24.853020   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:11:24.853083   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:11:24.888010   62996 cri.go:89] found id: ""
	I0914 18:11:24.888041   62996 logs.go:276] 0 containers: []
	W0914 18:11:24.888053   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:11:24.888061   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:11:24.888140   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:11:24.923467   62996 cri.go:89] found id: ""
	I0914 18:11:24.923500   62996 logs.go:276] 0 containers: []
	W0914 18:11:24.923514   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:11:24.923522   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:11:24.923575   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:11:24.961976   62996 cri.go:89] found id: ""
	I0914 18:11:24.962003   62996 logs.go:276] 0 containers: []
	W0914 18:11:24.962011   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:11:24.962018   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:11:24.962079   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:11:24.995831   62996 cri.go:89] found id: ""
	I0914 18:11:24.995854   62996 logs.go:276] 0 containers: []
	W0914 18:11:24.995862   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:11:24.995868   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:11:24.995929   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:11:25.034793   62996 cri.go:89] found id: ""
	I0914 18:11:25.034822   62996 logs.go:276] 0 containers: []
	W0914 18:11:25.034832   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:11:25.034840   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:11:25.034855   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:11:25.048500   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:11:25.048531   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:11:25.120313   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:11:25.120346   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:11:25.120361   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:11:25.200361   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:11:25.200395   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:11:25.238898   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:11:25.238928   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:11:23.103266   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:25.104091   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:25.082804   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:27.582345   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:29.582482   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:27.001813   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:29.500751   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:27.791098   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:11:27.803729   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:11:27.803785   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:11:27.840688   62996 cri.go:89] found id: ""
	I0914 18:11:27.840711   62996 logs.go:276] 0 containers: []
	W0914 18:11:27.840719   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:11:27.840725   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:11:27.840775   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:11:27.874108   62996 cri.go:89] found id: ""
	I0914 18:11:27.874140   62996 logs.go:276] 0 containers: []
	W0914 18:11:27.874151   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:11:27.874176   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:11:27.874241   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:11:27.909352   62996 cri.go:89] found id: ""
	I0914 18:11:27.909392   62996 logs.go:276] 0 containers: []
	W0914 18:11:27.909403   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:11:27.909410   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:11:27.909460   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:11:27.942751   62996 cri.go:89] found id: ""
	I0914 18:11:27.942777   62996 logs.go:276] 0 containers: []
	W0914 18:11:27.942786   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:11:27.942791   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:11:27.942852   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:11:27.977714   62996 cri.go:89] found id: ""
	I0914 18:11:27.977745   62996 logs.go:276] 0 containers: []
	W0914 18:11:27.977756   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:11:27.977764   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:11:27.977830   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:11:28.013681   62996 cri.go:89] found id: ""
	I0914 18:11:28.013711   62996 logs.go:276] 0 containers: []
	W0914 18:11:28.013722   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:11:28.013730   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:11:28.013791   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:11:28.047112   62996 cri.go:89] found id: ""
	I0914 18:11:28.047138   62996 logs.go:276] 0 containers: []
	W0914 18:11:28.047146   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:11:28.047152   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:11:28.047199   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:11:28.084290   62996 cri.go:89] found id: ""
	I0914 18:11:28.084317   62996 logs.go:276] 0 containers: []
	W0914 18:11:28.084331   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:11:28.084340   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:11:28.084351   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:11:28.097720   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:11:28.097756   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:11:28.172054   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:11:28.172074   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:11:28.172085   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:11:28.253611   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:11:28.253644   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:11:28.289904   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:11:28.289938   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:11:27.105655   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:29.602893   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:32.082229   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:34.082649   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:31.501202   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:34.001997   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:30.839215   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:11:30.851580   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:11:30.851654   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:11:30.891232   62996 cri.go:89] found id: ""
	I0914 18:11:30.891261   62996 logs.go:276] 0 containers: []
	W0914 18:11:30.891272   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:11:30.891279   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:11:30.891346   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:11:30.930144   62996 cri.go:89] found id: ""
	I0914 18:11:30.930187   62996 logs.go:276] 0 containers: []
	W0914 18:11:30.930197   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:11:30.930204   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:11:30.930265   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:11:30.965034   62996 cri.go:89] found id: ""
	I0914 18:11:30.965068   62996 logs.go:276] 0 containers: []
	W0914 18:11:30.965080   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:11:30.965087   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:11:30.965150   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:11:30.998927   62996 cri.go:89] found id: ""
	I0914 18:11:30.998955   62996 logs.go:276] 0 containers: []
	W0914 18:11:30.998966   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:11:30.998974   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:11:30.999039   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:11:31.033789   62996 cri.go:89] found id: ""
	I0914 18:11:31.033820   62996 logs.go:276] 0 containers: []
	W0914 18:11:31.033830   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:11:31.033838   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:11:31.033892   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:11:31.068988   62996 cri.go:89] found id: ""
	I0914 18:11:31.069020   62996 logs.go:276] 0 containers: []
	W0914 18:11:31.069029   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:11:31.069035   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:11:31.069085   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:11:31.105904   62996 cri.go:89] found id: ""
	I0914 18:11:31.105932   62996 logs.go:276] 0 containers: []
	W0914 18:11:31.105944   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:11:31.105951   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:11:31.106018   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:11:31.147560   62996 cri.go:89] found id: ""
	I0914 18:11:31.147593   62996 logs.go:276] 0 containers: []
	W0914 18:11:31.147606   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:11:31.147618   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:11:31.147633   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:11:31.237347   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:11:31.237373   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:11:31.237389   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:11:31.322978   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:11:31.323012   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:11:31.361464   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:11:31.361495   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:11:31.417255   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:11:31.417299   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:11:33.930962   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:11:33.944431   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:11:33.944514   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:11:33.979727   62996 cri.go:89] found id: ""
	I0914 18:11:33.979761   62996 logs.go:276] 0 containers: []
	W0914 18:11:33.979772   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:11:33.979779   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:11:33.979840   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:11:34.015069   62996 cri.go:89] found id: ""
	I0914 18:11:34.015100   62996 logs.go:276] 0 containers: []
	W0914 18:11:34.015111   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:11:34.015117   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:11:34.015168   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:11:34.049230   62996 cri.go:89] found id: ""
	I0914 18:11:34.049262   62996 logs.go:276] 0 containers: []
	W0914 18:11:34.049274   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:11:34.049282   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:11:34.049345   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:11:34.086175   62996 cri.go:89] found id: ""
	I0914 18:11:34.086205   62996 logs.go:276] 0 containers: []
	W0914 18:11:34.086216   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:11:34.086225   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:11:34.086286   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:11:34.123534   62996 cri.go:89] found id: ""
	I0914 18:11:34.123563   62996 logs.go:276] 0 containers: []
	W0914 18:11:34.123573   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:11:34.123581   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:11:34.123645   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:11:34.160782   62996 cri.go:89] found id: ""
	I0914 18:11:34.160812   62996 logs.go:276] 0 containers: []
	W0914 18:11:34.160822   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:11:34.160830   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:11:34.160891   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:11:34.193240   62996 cri.go:89] found id: ""
	I0914 18:11:34.193264   62996 logs.go:276] 0 containers: []
	W0914 18:11:34.193272   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:11:34.193278   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:11:34.193336   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:11:34.232788   62996 cri.go:89] found id: ""
	I0914 18:11:34.232816   62996 logs.go:276] 0 containers: []
	W0914 18:11:34.232827   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:11:34.232838   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:11:34.232851   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:11:34.284953   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:11:34.284993   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:11:34.299462   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:11:34.299491   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:11:34.370596   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:11:34.370623   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:11:34.370638   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:11:34.450082   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:11:34.450118   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:11:32.103194   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:34.103615   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:36.603139   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:36.083120   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:38.582197   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:36.500663   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:38.501005   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:36.991625   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:11:37.009170   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:11:37.009229   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:11:37.044035   62996 cri.go:89] found id: ""
	I0914 18:11:37.044058   62996 logs.go:276] 0 containers: []
	W0914 18:11:37.044066   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:11:37.044072   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:11:37.044126   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:11:37.076288   62996 cri.go:89] found id: ""
	I0914 18:11:37.076318   62996 logs.go:276] 0 containers: []
	W0914 18:11:37.076328   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:11:37.076336   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:11:37.076399   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:11:37.110509   62996 cri.go:89] found id: ""
	I0914 18:11:37.110533   62996 logs.go:276] 0 containers: []
	W0914 18:11:37.110541   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:11:37.110553   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:11:37.110603   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:11:37.143688   62996 cri.go:89] found id: ""
	I0914 18:11:37.143713   62996 logs.go:276] 0 containers: []
	W0914 18:11:37.143721   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:11:37.143726   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:11:37.143781   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:11:37.180802   62996 cri.go:89] found id: ""
	I0914 18:11:37.180828   62996 logs.go:276] 0 containers: []
	W0914 18:11:37.180839   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:11:37.180846   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:11:37.180907   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:11:37.214590   62996 cri.go:89] found id: ""
	I0914 18:11:37.214615   62996 logs.go:276] 0 containers: []
	W0914 18:11:37.214623   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:11:37.214628   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:11:37.214674   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:11:37.246039   62996 cri.go:89] found id: ""
	I0914 18:11:37.246067   62996 logs.go:276] 0 containers: []
	W0914 18:11:37.246078   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:11:37.246085   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:11:37.246152   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:11:37.278258   62996 cri.go:89] found id: ""
	I0914 18:11:37.278299   62996 logs.go:276] 0 containers: []
	W0914 18:11:37.278307   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:11:37.278315   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:11:37.278325   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:11:37.315788   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:11:37.315817   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:11:37.367286   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:11:37.367322   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:11:37.380863   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:11:37.380894   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:11:37.447925   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:11:37.447948   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:11:37.447959   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:11:40.025419   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:11:40.038279   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:11:40.038361   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:11:40.072986   62996 cri.go:89] found id: ""
	I0914 18:11:40.073021   62996 logs.go:276] 0 containers: []
	W0914 18:11:40.073033   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:11:40.073041   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:11:40.073102   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:11:40.107636   62996 cri.go:89] found id: ""
	I0914 18:11:40.107657   62996 logs.go:276] 0 containers: []
	W0914 18:11:40.107665   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:11:40.107670   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:11:40.107723   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:11:40.145308   62996 cri.go:89] found id: ""
	I0914 18:11:40.145347   62996 logs.go:276] 0 containers: []
	W0914 18:11:40.145359   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:11:40.145366   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:11:40.145412   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:11:40.182409   62996 cri.go:89] found id: ""
	I0914 18:11:40.182439   62996 logs.go:276] 0 containers: []
	W0914 18:11:40.182449   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:11:40.182457   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:11:40.182522   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:11:40.217621   62996 cri.go:89] found id: ""
	I0914 18:11:40.217655   62996 logs.go:276] 0 containers: []
	W0914 18:11:40.217667   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:11:40.217675   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:11:40.217738   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:11:40.253159   62996 cri.go:89] found id: ""
	I0914 18:11:40.253186   62996 logs.go:276] 0 containers: []
	W0914 18:11:40.253197   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:11:40.253205   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:11:40.253263   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:11:40.286808   62996 cri.go:89] found id: ""
	I0914 18:11:40.286838   62996 logs.go:276] 0 containers: []
	W0914 18:11:40.286847   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:11:40.286855   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:11:40.286910   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:11:40.324265   62996 cri.go:89] found id: ""
	I0914 18:11:40.324292   62996 logs.go:276] 0 containers: []
	W0914 18:11:40.324299   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:11:40.324307   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:11:40.324318   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:11:38.603823   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:41.102313   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:40.583132   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:43.082387   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:40.501996   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:43.000447   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:40.376962   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:11:40.376996   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:11:40.390564   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:11:40.390594   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:11:40.460934   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:11:40.460956   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:11:40.460967   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:11:40.537058   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:11:40.537099   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:11:43.075401   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:11:43.088488   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:11:43.088559   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:11:43.122777   62996 cri.go:89] found id: ""
	I0914 18:11:43.122802   62996 logs.go:276] 0 containers: []
	W0914 18:11:43.122811   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:11:43.122818   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:11:43.122878   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:11:43.155343   62996 cri.go:89] found id: ""
	I0914 18:11:43.155369   62996 logs.go:276] 0 containers: []
	W0914 18:11:43.155378   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:11:43.155383   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:11:43.155443   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:11:43.190350   62996 cri.go:89] found id: ""
	I0914 18:11:43.190379   62996 logs.go:276] 0 containers: []
	W0914 18:11:43.190390   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:11:43.190398   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:11:43.190460   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:11:43.222930   62996 cri.go:89] found id: ""
	I0914 18:11:43.222961   62996 logs.go:276] 0 containers: []
	W0914 18:11:43.222972   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:11:43.222979   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:11:43.223042   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:11:43.256931   62996 cri.go:89] found id: ""
	I0914 18:11:43.256959   62996 logs.go:276] 0 containers: []
	W0914 18:11:43.256971   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:11:43.256977   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:11:43.257044   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:11:43.287691   62996 cri.go:89] found id: ""
	I0914 18:11:43.287720   62996 logs.go:276] 0 containers: []
	W0914 18:11:43.287729   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:11:43.287734   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:11:43.287790   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:11:43.320633   62996 cri.go:89] found id: ""
	I0914 18:11:43.320658   62996 logs.go:276] 0 containers: []
	W0914 18:11:43.320666   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:11:43.320677   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:11:43.320738   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:11:43.354230   62996 cri.go:89] found id: ""
	I0914 18:11:43.354269   62996 logs.go:276] 0 containers: []
	W0914 18:11:43.354280   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:11:43.354291   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:11:43.354304   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:11:43.429256   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:11:43.429293   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:11:43.467929   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:11:43.467957   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:11:43.521266   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:11:43.521305   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:11:43.536471   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:11:43.536511   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:11:43.607588   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:11:43.103756   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:45.602971   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:45.082762   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:47.582353   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:49.584026   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:45.500451   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:47.501831   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:50.001778   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:46.108756   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:11:46.121231   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:11:46.121314   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:11:46.156499   62996 cri.go:89] found id: ""
	I0914 18:11:46.156528   62996 logs.go:276] 0 containers: []
	W0914 18:11:46.156537   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:11:46.156543   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:11:46.156591   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:11:46.192161   62996 cri.go:89] found id: ""
	I0914 18:11:46.192188   62996 logs.go:276] 0 containers: []
	W0914 18:11:46.192197   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:11:46.192203   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:11:46.192263   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:11:46.222784   62996 cri.go:89] found id: ""
	I0914 18:11:46.222816   62996 logs.go:276] 0 containers: []
	W0914 18:11:46.222826   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:11:46.222834   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:11:46.222894   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:11:46.261551   62996 cri.go:89] found id: ""
	I0914 18:11:46.261577   62996 logs.go:276] 0 containers: []
	W0914 18:11:46.261587   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:11:46.261594   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:11:46.261659   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:11:46.298263   62996 cri.go:89] found id: ""
	I0914 18:11:46.298293   62996 logs.go:276] 0 containers: []
	W0914 18:11:46.298303   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:11:46.298311   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:11:46.298387   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:11:46.333477   62996 cri.go:89] found id: ""
	I0914 18:11:46.333502   62996 logs.go:276] 0 containers: []
	W0914 18:11:46.333510   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:11:46.333516   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:11:46.333581   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:11:46.367975   62996 cri.go:89] found id: ""
	I0914 18:11:46.367998   62996 logs.go:276] 0 containers: []
	W0914 18:11:46.368005   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:11:46.368011   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:11:46.368063   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:11:46.402252   62996 cri.go:89] found id: ""
	I0914 18:11:46.402281   62996 logs.go:276] 0 containers: []
	W0914 18:11:46.402293   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:11:46.402310   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:11:46.402329   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:11:46.477212   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:11:46.477252   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:11:46.515542   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:11:46.515568   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:11:46.570108   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:11:46.570146   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:11:46.585989   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:11:46.586019   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:11:46.658769   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:11:49.159920   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:11:49.172748   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:11:49.172810   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:11:49.213555   62996 cri.go:89] found id: ""
	I0914 18:11:49.213585   62996 logs.go:276] 0 containers: []
	W0914 18:11:49.213595   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:11:49.213601   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:11:49.213660   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:11:49.246022   62996 cri.go:89] found id: ""
	I0914 18:11:49.246050   62996 logs.go:276] 0 containers: []
	W0914 18:11:49.246061   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:11:49.246068   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:11:49.246132   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:11:49.279131   62996 cri.go:89] found id: ""
	I0914 18:11:49.279157   62996 logs.go:276] 0 containers: []
	W0914 18:11:49.279167   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:11:49.279175   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:11:49.279236   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:11:49.313159   62996 cri.go:89] found id: ""
	I0914 18:11:49.313187   62996 logs.go:276] 0 containers: []
	W0914 18:11:49.313199   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:11:49.313207   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:11:49.313272   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:11:49.347837   62996 cri.go:89] found id: ""
	I0914 18:11:49.347861   62996 logs.go:276] 0 containers: []
	W0914 18:11:49.347870   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:11:49.347875   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:11:49.347932   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:11:49.381478   62996 cri.go:89] found id: ""
	I0914 18:11:49.381507   62996 logs.go:276] 0 containers: []
	W0914 18:11:49.381516   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:11:49.381522   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:11:49.381577   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:11:49.417197   62996 cri.go:89] found id: ""
	I0914 18:11:49.417224   62996 logs.go:276] 0 containers: []
	W0914 18:11:49.417238   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:11:49.417244   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:11:49.417313   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:11:49.450806   62996 cri.go:89] found id: ""
	I0914 18:11:49.450843   62996 logs.go:276] 0 containers: []
	W0914 18:11:49.450857   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:11:49.450870   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:11:49.450889   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:11:49.519573   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:11:49.519620   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:11:49.519639   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:11:49.595525   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:11:49.595565   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:11:49.633229   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:11:49.633259   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:11:49.688667   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:11:49.688710   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:11:47.605117   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:50.103023   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:52.082751   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:54.582016   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:52.501977   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:55.000564   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:52.206555   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:11:52.218920   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:11:52.218996   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:11:52.253986   62996 cri.go:89] found id: ""
	I0914 18:11:52.254010   62996 logs.go:276] 0 containers: []
	W0914 18:11:52.254018   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:11:52.254023   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:11:52.254070   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:11:52.286590   62996 cri.go:89] found id: ""
	I0914 18:11:52.286618   62996 logs.go:276] 0 containers: []
	W0914 18:11:52.286629   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:11:52.286636   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:11:52.286698   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:11:52.325419   62996 cri.go:89] found id: ""
	I0914 18:11:52.325454   62996 logs.go:276] 0 containers: []
	W0914 18:11:52.325464   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:11:52.325471   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:11:52.325533   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:11:52.363050   62996 cri.go:89] found id: ""
	I0914 18:11:52.363079   62996 logs.go:276] 0 containers: []
	W0914 18:11:52.363091   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:11:52.363098   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:11:52.363160   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:11:52.400107   62996 cri.go:89] found id: ""
	I0914 18:11:52.400142   62996 logs.go:276] 0 containers: []
	W0914 18:11:52.400153   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:11:52.400162   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:11:52.400229   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:11:52.435711   62996 cri.go:89] found id: ""
	I0914 18:11:52.435735   62996 logs.go:276] 0 containers: []
	W0914 18:11:52.435744   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:11:52.435752   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:11:52.435806   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:11:52.470761   62996 cri.go:89] found id: ""
	I0914 18:11:52.470789   62996 logs.go:276] 0 containers: []
	W0914 18:11:52.470800   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:11:52.470808   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:11:52.470875   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:11:52.505680   62996 cri.go:89] found id: ""
	I0914 18:11:52.505705   62996 logs.go:276] 0 containers: []
	W0914 18:11:52.505714   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:11:52.505725   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:11:52.505745   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:11:52.557577   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:11:52.557616   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:11:52.571785   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:11:52.571817   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:11:52.639759   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:11:52.639790   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:11:52.639805   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:11:52.727022   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:11:52.727072   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:11:55.266381   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:11:55.279300   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:11:55.279376   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:11:55.315414   62996 cri.go:89] found id: ""
	I0914 18:11:55.315455   62996 logs.go:276] 0 containers: []
	W0914 18:11:55.315463   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:11:55.315472   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:11:55.315539   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:11:52.603110   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:54.603267   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:56.582121   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:58.583277   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:57.001624   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:59.501328   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:55.350153   62996 cri.go:89] found id: ""
	I0914 18:11:55.350203   62996 logs.go:276] 0 containers: []
	W0914 18:11:55.350213   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:11:55.350218   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:11:55.350296   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:11:55.387403   62996 cri.go:89] found id: ""
	I0914 18:11:55.387437   62996 logs.go:276] 0 containers: []
	W0914 18:11:55.387459   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:11:55.387467   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:11:55.387522   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:11:55.424532   62996 cri.go:89] found id: ""
	I0914 18:11:55.424558   62996 logs.go:276] 0 containers: []
	W0914 18:11:55.424566   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:11:55.424575   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:11:55.424664   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:11:55.462423   62996 cri.go:89] found id: ""
	I0914 18:11:55.462458   62996 logs.go:276] 0 containers: []
	W0914 18:11:55.462468   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:11:55.462475   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:11:55.462536   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:11:55.496865   62996 cri.go:89] found id: ""
	I0914 18:11:55.496900   62996 logs.go:276] 0 containers: []
	W0914 18:11:55.496911   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:11:55.496921   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:11:55.496986   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:11:55.531524   62996 cri.go:89] found id: ""
	I0914 18:11:55.531566   62996 logs.go:276] 0 containers: []
	W0914 18:11:55.531577   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:11:55.531598   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:11:55.531663   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:11:55.566579   62996 cri.go:89] found id: ""
	I0914 18:11:55.566606   62996 logs.go:276] 0 containers: []
	W0914 18:11:55.566615   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:11:55.566623   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:11:55.566635   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:11:55.621074   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:11:55.621122   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:11:55.635805   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:11:55.635832   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:11:55.702346   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:11:55.702373   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:11:55.702387   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:11:55.778589   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:11:55.778639   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:11:58.317118   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:11:58.330312   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:11:58.330382   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:11:58.363550   62996 cri.go:89] found id: ""
	I0914 18:11:58.363587   62996 logs.go:276] 0 containers: []
	W0914 18:11:58.363598   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:11:58.363606   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:11:58.363669   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:11:58.397152   62996 cri.go:89] found id: ""
	I0914 18:11:58.397183   62996 logs.go:276] 0 containers: []
	W0914 18:11:58.397194   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:11:58.397201   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:11:58.397259   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:11:58.435076   62996 cri.go:89] found id: ""
	I0914 18:11:58.435102   62996 logs.go:276] 0 containers: []
	W0914 18:11:58.435111   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:11:58.435116   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:11:58.435184   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:11:58.471455   62996 cri.go:89] found id: ""
	I0914 18:11:58.471479   62996 logs.go:276] 0 containers: []
	W0914 18:11:58.471487   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:11:58.471493   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:11:58.471551   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:11:58.504545   62996 cri.go:89] found id: ""
	I0914 18:11:58.504586   62996 logs.go:276] 0 containers: []
	W0914 18:11:58.504596   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:11:58.504603   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:11:58.504662   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:11:58.539335   62996 cri.go:89] found id: ""
	I0914 18:11:58.539362   62996 logs.go:276] 0 containers: []
	W0914 18:11:58.539376   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:11:58.539383   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:11:58.539431   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:11:58.579707   62996 cri.go:89] found id: ""
	I0914 18:11:58.579737   62996 logs.go:276] 0 containers: []
	W0914 18:11:58.579747   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:11:58.579755   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:11:58.579814   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:11:58.614227   62996 cri.go:89] found id: ""
	I0914 18:11:58.614250   62996 logs.go:276] 0 containers: []
	W0914 18:11:58.614259   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:11:58.614266   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:11:58.614279   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:11:58.699846   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:11:58.699888   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:11:58.738513   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:11:58.738542   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:11:58.787858   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:11:58.787895   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:11:58.801103   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:11:58.801137   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:11:58.868291   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:11:57.102934   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:59.103345   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:01.604125   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:01.083045   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:03.582885   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:01.501890   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:04.001023   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:01.368810   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:12:01.381287   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:12:01.381359   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:12:01.414556   62996 cri.go:89] found id: ""
	I0914 18:12:01.414587   62996 logs.go:276] 0 containers: []
	W0914 18:12:01.414599   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:12:01.414611   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:12:01.414661   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:12:01.447765   62996 cri.go:89] found id: ""
	I0914 18:12:01.447795   62996 logs.go:276] 0 containers: []
	W0914 18:12:01.447806   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:12:01.447813   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:12:01.447875   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:12:01.481012   62996 cri.go:89] found id: ""
	I0914 18:12:01.481045   62996 logs.go:276] 0 containers: []
	W0914 18:12:01.481057   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:12:01.481065   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:12:01.481126   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:12:01.516999   62996 cri.go:89] found id: ""
	I0914 18:12:01.517024   62996 logs.go:276] 0 containers: []
	W0914 18:12:01.517031   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:12:01.517037   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:12:01.517088   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:12:01.555520   62996 cri.go:89] found id: ""
	I0914 18:12:01.555548   62996 logs.go:276] 0 containers: []
	W0914 18:12:01.555559   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:12:01.555566   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:12:01.555642   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:12:01.589581   62996 cri.go:89] found id: ""
	I0914 18:12:01.589606   62996 logs.go:276] 0 containers: []
	W0914 18:12:01.589616   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:12:01.589624   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:12:01.589691   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:12:01.623955   62996 cri.go:89] found id: ""
	I0914 18:12:01.623983   62996 logs.go:276] 0 containers: []
	W0914 18:12:01.623995   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:12:01.624002   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:12:01.624067   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:12:01.659136   62996 cri.go:89] found id: ""
	I0914 18:12:01.659166   62996 logs.go:276] 0 containers: []
	W0914 18:12:01.659177   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:12:01.659187   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:12:01.659206   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:12:01.711812   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:12:01.711849   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:12:01.724934   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:12:01.724968   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:12:01.793052   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:12:01.793079   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:12:01.793091   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:12:01.866761   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:12:01.866799   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:12:04.406435   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:12:04.419756   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:12:04.419818   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:12:04.456593   62996 cri.go:89] found id: ""
	I0914 18:12:04.456621   62996 logs.go:276] 0 containers: []
	W0914 18:12:04.456632   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:12:04.456639   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:12:04.456689   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:12:04.489281   62996 cri.go:89] found id: ""
	I0914 18:12:04.489314   62996 logs.go:276] 0 containers: []
	W0914 18:12:04.489326   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:12:04.489333   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:12:04.489399   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:12:04.525353   62996 cri.go:89] found id: ""
	I0914 18:12:04.525381   62996 logs.go:276] 0 containers: []
	W0914 18:12:04.525391   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:12:04.525398   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:12:04.525464   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:12:04.558495   62996 cri.go:89] found id: ""
	I0914 18:12:04.558520   62996 logs.go:276] 0 containers: []
	W0914 18:12:04.558531   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:12:04.558539   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:12:04.558598   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:12:04.594815   62996 cri.go:89] found id: ""
	I0914 18:12:04.594837   62996 logs.go:276] 0 containers: []
	W0914 18:12:04.594845   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:12:04.594851   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:12:04.594899   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:12:04.630198   62996 cri.go:89] found id: ""
	I0914 18:12:04.630224   62996 logs.go:276] 0 containers: []
	W0914 18:12:04.630232   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:12:04.630238   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:12:04.630294   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:12:04.665328   62996 cri.go:89] found id: ""
	I0914 18:12:04.665358   62996 logs.go:276] 0 containers: []
	W0914 18:12:04.665368   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:12:04.665373   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:12:04.665432   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:12:04.699778   62996 cri.go:89] found id: ""
	I0914 18:12:04.699801   62996 logs.go:276] 0 containers: []
	W0914 18:12:04.699809   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:12:04.699816   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:12:04.699877   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:12:04.750978   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:12:04.751022   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:12:04.764968   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:12:04.764998   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:12:04.839464   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:12:04.839494   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:12:04.839509   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:12:04.917939   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:12:04.917979   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:12:04.103388   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:06.103725   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:06.083003   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:08.581415   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:06.002052   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:08.500393   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:07.459389   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:12:07.472630   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:12:07.472691   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:12:07.507993   62996 cri.go:89] found id: ""
	I0914 18:12:07.508029   62996 logs.go:276] 0 containers: []
	W0914 18:12:07.508040   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:12:07.508047   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:12:07.508110   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:12:07.541083   62996 cri.go:89] found id: ""
	I0914 18:12:07.541108   62996 logs.go:276] 0 containers: []
	W0914 18:12:07.541116   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:12:07.541121   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:12:07.541184   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:12:07.574973   62996 cri.go:89] found id: ""
	I0914 18:12:07.574995   62996 logs.go:276] 0 containers: []
	W0914 18:12:07.575003   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:12:07.575008   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:12:07.575052   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:12:07.610166   62996 cri.go:89] found id: ""
	I0914 18:12:07.610189   62996 logs.go:276] 0 containers: []
	W0914 18:12:07.610196   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:12:07.610202   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:12:07.610247   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:12:07.643090   62996 cri.go:89] found id: ""
	I0914 18:12:07.643118   62996 logs.go:276] 0 containers: []
	W0914 18:12:07.643129   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:12:07.643140   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:12:07.643201   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:12:07.676788   62996 cri.go:89] found id: ""
	I0914 18:12:07.676814   62996 logs.go:276] 0 containers: []
	W0914 18:12:07.676825   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:12:07.676832   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:12:07.676895   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:12:07.714122   62996 cri.go:89] found id: ""
	I0914 18:12:07.714147   62996 logs.go:276] 0 containers: []
	W0914 18:12:07.714173   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:12:07.714179   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:12:07.714226   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:12:07.748168   62996 cri.go:89] found id: ""
	I0914 18:12:07.748193   62996 logs.go:276] 0 containers: []
	W0914 18:12:07.748204   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:12:07.748214   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:12:07.748230   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:12:07.784739   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:12:07.784766   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:12:07.833431   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:12:07.833467   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:12:07.846072   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:12:07.846100   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:12:07.912540   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:12:07.912560   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:12:07.912584   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:12:08.602880   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:10.604231   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:10.582647   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:13.082818   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:10.500953   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:13.001310   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:10.488543   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:12:10.502119   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:12:10.502203   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:12:10.535390   62996 cri.go:89] found id: ""
	I0914 18:12:10.535420   62996 logs.go:276] 0 containers: []
	W0914 18:12:10.535429   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:12:10.535435   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:12:10.535487   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:12:10.572013   62996 cri.go:89] found id: ""
	I0914 18:12:10.572044   62996 logs.go:276] 0 containers: []
	W0914 18:12:10.572052   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:12:10.572057   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:12:10.572105   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:12:10.613597   62996 cri.go:89] found id: ""
	I0914 18:12:10.613621   62996 logs.go:276] 0 containers: []
	W0914 18:12:10.613628   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:12:10.613634   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:12:10.613693   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:12:10.646086   62996 cri.go:89] found id: ""
	I0914 18:12:10.646116   62996 logs.go:276] 0 containers: []
	W0914 18:12:10.646127   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:12:10.646134   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:12:10.646219   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:12:10.679228   62996 cri.go:89] found id: ""
	I0914 18:12:10.679261   62996 logs.go:276] 0 containers: []
	W0914 18:12:10.679273   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:12:10.679281   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:12:10.679340   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:12:10.713321   62996 cri.go:89] found id: ""
	I0914 18:12:10.713350   62996 logs.go:276] 0 containers: []
	W0914 18:12:10.713359   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:12:10.713365   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:12:10.713413   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:12:10.757767   62996 cri.go:89] found id: ""
	I0914 18:12:10.757794   62996 logs.go:276] 0 containers: []
	W0914 18:12:10.757802   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:12:10.757809   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:12:10.757854   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:12:10.797709   62996 cri.go:89] found id: ""
	I0914 18:12:10.797731   62996 logs.go:276] 0 containers: []
	W0914 18:12:10.797739   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:12:10.797747   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:12:10.797757   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:12:10.848431   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:12:10.848474   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:12:10.862205   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:12:10.862239   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:12:10.935215   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:12:10.935242   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:12:10.935260   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:12:11.019021   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:12:11.019056   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:12:13.560773   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:12:13.574835   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:12:13.574899   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:12:13.613543   62996 cri.go:89] found id: ""
	I0914 18:12:13.613569   62996 logs.go:276] 0 containers: []
	W0914 18:12:13.613582   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:12:13.613587   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:12:13.613646   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:12:13.650721   62996 cri.go:89] found id: ""
	I0914 18:12:13.650755   62996 logs.go:276] 0 containers: []
	W0914 18:12:13.650767   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:12:13.650775   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:12:13.650836   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:12:13.684269   62996 cri.go:89] found id: ""
	I0914 18:12:13.684299   62996 logs.go:276] 0 containers: []
	W0914 18:12:13.684310   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:12:13.684317   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:12:13.684376   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:12:13.726440   62996 cri.go:89] found id: ""
	I0914 18:12:13.726474   62996 logs.go:276] 0 containers: []
	W0914 18:12:13.726486   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:12:13.726503   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:12:13.726567   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:12:13.760835   62996 cri.go:89] found id: ""
	I0914 18:12:13.760865   62996 logs.go:276] 0 containers: []
	W0914 18:12:13.760876   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:12:13.760884   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:12:13.760957   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:12:13.801341   62996 cri.go:89] found id: ""
	I0914 18:12:13.801375   62996 logs.go:276] 0 containers: []
	W0914 18:12:13.801386   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:12:13.801394   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:12:13.801456   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:12:13.834307   62996 cri.go:89] found id: ""
	I0914 18:12:13.834332   62996 logs.go:276] 0 containers: []
	W0914 18:12:13.834350   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:12:13.834357   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:12:13.834439   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:12:13.868838   62996 cri.go:89] found id: ""
	I0914 18:12:13.868871   62996 logs.go:276] 0 containers: []
	W0914 18:12:13.868880   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:12:13.868889   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:12:13.868900   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:12:13.919867   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:12:13.919906   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:12:13.933383   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:12:13.933423   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:12:14.010559   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:12:14.010592   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:12:14.010606   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:12:14.087876   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:12:14.087913   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:12:13.103254   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:15.103641   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:15.083238   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:17.582387   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:15.501029   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:17.505028   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:20.001929   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:16.630473   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:12:16.643114   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:12:16.643196   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:12:16.680922   62996 cri.go:89] found id: ""
	I0914 18:12:16.680954   62996 logs.go:276] 0 containers: []
	W0914 18:12:16.680962   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:12:16.680968   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:12:16.681015   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:12:16.715549   62996 cri.go:89] found id: ""
	I0914 18:12:16.715582   62996 logs.go:276] 0 containers: []
	W0914 18:12:16.715592   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:12:16.715598   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:12:16.715666   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:12:16.753928   62996 cri.go:89] found id: ""
	I0914 18:12:16.753951   62996 logs.go:276] 0 containers: []
	W0914 18:12:16.753962   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:12:16.753969   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:12:16.754033   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:12:16.787677   62996 cri.go:89] found id: ""
	I0914 18:12:16.787705   62996 logs.go:276] 0 containers: []
	W0914 18:12:16.787716   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:12:16.787723   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:12:16.787776   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:12:16.823638   62996 cri.go:89] found id: ""
	I0914 18:12:16.823667   62996 logs.go:276] 0 containers: []
	W0914 18:12:16.823678   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:12:16.823686   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:12:16.823748   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:12:16.860204   62996 cri.go:89] found id: ""
	I0914 18:12:16.860238   62996 logs.go:276] 0 containers: []
	W0914 18:12:16.860249   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:12:16.860257   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:12:16.860329   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:12:16.898802   62996 cri.go:89] found id: ""
	I0914 18:12:16.898827   62996 logs.go:276] 0 containers: []
	W0914 18:12:16.898837   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:12:16.898854   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:12:16.898941   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:12:16.932719   62996 cri.go:89] found id: ""
	I0914 18:12:16.932745   62996 logs.go:276] 0 containers: []
	W0914 18:12:16.932753   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:12:16.932762   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:12:16.932779   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:12:16.986217   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:12:16.986257   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:12:17.003243   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:12:17.003278   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:12:17.071374   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:12:17.071397   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:12:17.071409   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:12:17.152058   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:12:17.152112   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:12:19.717782   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:12:19.731122   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:12:19.731199   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:12:19.769042   62996 cri.go:89] found id: ""
	I0914 18:12:19.769070   62996 logs.go:276] 0 containers: []
	W0914 18:12:19.769079   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:12:19.769084   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:12:19.769154   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:12:19.804666   62996 cri.go:89] found id: ""
	I0914 18:12:19.804691   62996 logs.go:276] 0 containers: []
	W0914 18:12:19.804698   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:12:19.804704   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:12:19.804761   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:12:19.838705   62996 cri.go:89] found id: ""
	I0914 18:12:19.838729   62996 logs.go:276] 0 containers: []
	W0914 18:12:19.838738   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:12:19.838744   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:12:19.838790   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:12:19.873412   62996 cri.go:89] found id: ""
	I0914 18:12:19.873441   62996 logs.go:276] 0 containers: []
	W0914 18:12:19.873449   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:12:19.873455   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:12:19.873535   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:12:19.917706   62996 cri.go:89] found id: ""
	I0914 18:12:19.917734   62996 logs.go:276] 0 containers: []
	W0914 18:12:19.917746   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:12:19.917754   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:12:19.917813   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:12:19.956149   62996 cri.go:89] found id: ""
	I0914 18:12:19.956177   62996 logs.go:276] 0 containers: []
	W0914 18:12:19.956188   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:12:19.956196   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:12:19.956255   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:12:19.988903   62996 cri.go:89] found id: ""
	I0914 18:12:19.988926   62996 logs.go:276] 0 containers: []
	W0914 18:12:19.988934   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:12:19.988939   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:12:19.988988   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:12:20.023785   62996 cri.go:89] found id: ""
	I0914 18:12:20.023814   62996 logs.go:276] 0 containers: []
	W0914 18:12:20.023823   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:12:20.023833   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:12:20.023846   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:12:20.036891   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:12:20.036918   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:12:20.112397   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:12:20.112422   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:12:20.112437   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:12:20.195767   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:12:20.195801   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:12:20.235439   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:12:20.235467   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:12:17.103996   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:19.603109   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:21.603150   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:20.083547   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:22.586009   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:22.002367   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:24.500394   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:22.784765   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:12:22.799193   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:12:22.799267   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:12:22.840939   62996 cri.go:89] found id: ""
	I0914 18:12:22.840974   62996 logs.go:276] 0 containers: []
	W0914 18:12:22.840983   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:12:22.840990   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:12:22.841051   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:12:22.878920   62996 cri.go:89] found id: ""
	I0914 18:12:22.878951   62996 logs.go:276] 0 containers: []
	W0914 18:12:22.878962   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:12:22.878970   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:12:22.879021   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:12:22.926127   62996 cri.go:89] found id: ""
	I0914 18:12:22.926175   62996 logs.go:276] 0 containers: []
	W0914 18:12:22.926187   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:12:22.926195   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:12:22.926250   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:12:22.972041   62996 cri.go:89] found id: ""
	I0914 18:12:22.972068   62996 logs.go:276] 0 containers: []
	W0914 18:12:22.972076   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:12:22.972082   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:12:22.972137   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:12:23.012662   62996 cri.go:89] found id: ""
	I0914 18:12:23.012694   62996 logs.go:276] 0 containers: []
	W0914 18:12:23.012705   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:12:23.012712   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:12:23.012772   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:12:23.058923   62996 cri.go:89] found id: ""
	I0914 18:12:23.058950   62996 logs.go:276] 0 containers: []
	W0914 18:12:23.058958   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:12:23.058963   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:12:23.059011   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:12:23.098275   62996 cri.go:89] found id: ""
	I0914 18:12:23.098308   62996 logs.go:276] 0 containers: []
	W0914 18:12:23.098320   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:12:23.098327   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:12:23.098380   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:12:23.133498   62996 cri.go:89] found id: ""
	I0914 18:12:23.133525   62996 logs.go:276] 0 containers: []
	W0914 18:12:23.133534   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:12:23.133542   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:12:23.133554   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:12:23.201430   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:12:23.201456   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:12:23.201470   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:12:23.282388   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:12:23.282424   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:12:23.319896   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:12:23.319924   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:12:23.373629   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:12:23.373664   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:12:23.603351   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:26.103668   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:25.082824   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:27.582534   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:27.001617   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:29.002224   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:25.887183   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:12:25.901089   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:12:25.901168   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:12:25.934112   62996 cri.go:89] found id: ""
	I0914 18:12:25.934138   62996 logs.go:276] 0 containers: []
	W0914 18:12:25.934147   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:12:25.934153   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:12:25.934210   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:12:25.969202   62996 cri.go:89] found id: ""
	I0914 18:12:25.969228   62996 logs.go:276] 0 containers: []
	W0914 18:12:25.969236   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:12:25.969242   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:12:25.969300   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:12:26.005516   62996 cri.go:89] found id: ""
	I0914 18:12:26.005537   62996 logs.go:276] 0 containers: []
	W0914 18:12:26.005545   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:12:26.005551   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:12:26.005622   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:12:26.039162   62996 cri.go:89] found id: ""
	I0914 18:12:26.039189   62996 logs.go:276] 0 containers: []
	W0914 18:12:26.039199   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:12:26.039206   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:12:26.039266   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:12:26.073626   62996 cri.go:89] found id: ""
	I0914 18:12:26.073660   62996 logs.go:276] 0 containers: []
	W0914 18:12:26.073674   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:12:26.073682   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:12:26.073752   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:12:26.112057   62996 cri.go:89] found id: ""
	I0914 18:12:26.112086   62996 logs.go:276] 0 containers: []
	W0914 18:12:26.112097   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:12:26.112104   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:12:26.112168   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:12:26.145874   62996 cri.go:89] found id: ""
	I0914 18:12:26.145903   62996 logs.go:276] 0 containers: []
	W0914 18:12:26.145915   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:12:26.145923   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:12:26.145978   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:12:26.178959   62996 cri.go:89] found id: ""
	I0914 18:12:26.178989   62996 logs.go:276] 0 containers: []
	W0914 18:12:26.178997   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:12:26.179005   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:12:26.179018   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:12:26.251132   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:12:26.251156   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:12:26.251174   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:12:26.327488   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:12:26.327528   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:12:26.368444   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:12:26.368471   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:12:26.422676   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:12:26.422715   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:12:28.936784   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:12:28.960435   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:12:28.960515   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:12:29.012679   62996 cri.go:89] found id: ""
	I0914 18:12:29.012710   62996 logs.go:276] 0 containers: []
	W0914 18:12:29.012721   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:12:29.012729   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:12:29.012786   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:12:29.045058   62996 cri.go:89] found id: ""
	I0914 18:12:29.045091   62996 logs.go:276] 0 containers: []
	W0914 18:12:29.045102   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:12:29.045115   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:12:29.045180   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:12:29.079176   62996 cri.go:89] found id: ""
	I0914 18:12:29.079202   62996 logs.go:276] 0 containers: []
	W0914 18:12:29.079209   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:12:29.079216   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:12:29.079279   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:12:29.114288   62996 cri.go:89] found id: ""
	I0914 18:12:29.114317   62996 logs.go:276] 0 containers: []
	W0914 18:12:29.114337   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:12:29.114344   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:12:29.114404   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:12:29.147554   62996 cri.go:89] found id: ""
	I0914 18:12:29.147578   62996 logs.go:276] 0 containers: []
	W0914 18:12:29.147586   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:12:29.147592   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:12:29.147653   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:12:29.181739   62996 cri.go:89] found id: ""
	I0914 18:12:29.181767   62996 logs.go:276] 0 containers: []
	W0914 18:12:29.181775   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:12:29.181781   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:12:29.181825   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:12:29.220328   62996 cri.go:89] found id: ""
	I0914 18:12:29.220356   62996 logs.go:276] 0 containers: []
	W0914 18:12:29.220364   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:12:29.220373   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:12:29.220429   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:12:29.250900   62996 cri.go:89] found id: ""
	I0914 18:12:29.250929   62996 logs.go:276] 0 containers: []
	W0914 18:12:29.250941   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:12:29.250951   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:12:29.250966   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:12:29.287790   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:12:29.287820   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:12:29.338153   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:12:29.338194   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:12:29.351520   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:12:29.351547   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:12:29.421429   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:12:29.421457   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:12:29.421471   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:12:28.104044   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:30.602717   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:30.083027   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:32.083454   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:34.582698   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:31.002459   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:33.500924   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:31.997578   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:12:32.011256   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:12:32.011331   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:12:32.043761   62996 cri.go:89] found id: ""
	I0914 18:12:32.043793   62996 logs.go:276] 0 containers: []
	W0914 18:12:32.043801   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:12:32.043806   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:12:32.043859   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:12:32.076497   62996 cri.go:89] found id: ""
	I0914 18:12:32.076526   62996 logs.go:276] 0 containers: []
	W0914 18:12:32.076536   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:12:32.076543   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:12:32.076609   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:12:32.115059   62996 cri.go:89] found id: ""
	I0914 18:12:32.115084   62996 logs.go:276] 0 containers: []
	W0914 18:12:32.115094   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:12:32.115100   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:12:32.115159   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:12:32.153078   62996 cri.go:89] found id: ""
	I0914 18:12:32.153109   62996 logs.go:276] 0 containers: []
	W0914 18:12:32.153124   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:12:32.153130   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:12:32.153179   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:12:32.190539   62996 cri.go:89] found id: ""
	I0914 18:12:32.190621   62996 logs.go:276] 0 containers: []
	W0914 18:12:32.190638   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:12:32.190647   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:12:32.190700   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:12:32.231917   62996 cri.go:89] found id: ""
	I0914 18:12:32.231941   62996 logs.go:276] 0 containers: []
	W0914 18:12:32.231949   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:12:32.231955   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:12:32.232013   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:12:32.266197   62996 cri.go:89] found id: ""
	I0914 18:12:32.266227   62996 logs.go:276] 0 containers: []
	W0914 18:12:32.266238   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:12:32.266245   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:12:32.266312   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:12:32.299357   62996 cri.go:89] found id: ""
	I0914 18:12:32.299387   62996 logs.go:276] 0 containers: []
	W0914 18:12:32.299398   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:12:32.299409   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:12:32.299424   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:12:32.353225   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:12:32.353268   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:12:32.368228   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:12:32.368280   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:12:32.447802   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:12:32.447829   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:12:32.447847   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:12:32.523749   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:12:32.523788   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:12:35.063750   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:12:35.078487   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:12:35.078565   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:12:35.112949   62996 cri.go:89] found id: ""
	I0914 18:12:35.112994   62996 logs.go:276] 0 containers: []
	W0914 18:12:35.113008   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:12:35.113015   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:12:35.113068   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:12:35.146890   62996 cri.go:89] found id: ""
	I0914 18:12:35.146921   62996 logs.go:276] 0 containers: []
	W0914 18:12:35.146933   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:12:35.146941   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:12:35.147019   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:12:35.181077   62996 cri.go:89] found id: ""
	I0914 18:12:35.181106   62996 logs.go:276] 0 containers: []
	W0914 18:12:35.181116   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:12:35.181123   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:12:35.181194   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:12:35.214142   62996 cri.go:89] found id: ""
	I0914 18:12:35.214191   62996 logs.go:276] 0 containers: []
	W0914 18:12:35.214203   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:12:35.214215   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:12:35.214275   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:12:35.246615   62996 cri.go:89] found id: ""
	I0914 18:12:35.246644   62996 logs.go:276] 0 containers: []
	W0914 18:12:35.246655   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:12:35.246662   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:12:35.246722   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:12:35.278996   62996 cri.go:89] found id: ""
	I0914 18:12:35.279027   62996 logs.go:276] 0 containers: []
	W0914 18:12:35.279038   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:12:35.279047   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:12:35.279104   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:12:35.312612   62996 cri.go:89] found id: ""
	I0914 18:12:35.312641   62996 logs.go:276] 0 containers: []
	W0914 18:12:35.312650   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:12:35.312655   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:12:35.312711   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:12:32.603673   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:35.103528   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:37.081632   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:39.082269   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:35.501391   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:38.000592   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:40.001479   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:35.347717   62996 cri.go:89] found id: ""
	I0914 18:12:35.347741   62996 logs.go:276] 0 containers: []
	W0914 18:12:35.347749   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:12:35.347757   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:12:35.347767   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:12:35.389062   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:12:35.389090   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:12:35.437235   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:12:35.437277   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:12:35.452236   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:12:35.452275   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:12:35.523334   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:12:35.523371   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:12:35.523396   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:12:38.105613   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:12:38.119147   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:12:38.119214   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:12:38.158373   62996 cri.go:89] found id: ""
	I0914 18:12:38.158397   62996 logs.go:276] 0 containers: []
	W0914 18:12:38.158404   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:12:38.158410   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:12:38.158467   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:12:38.192376   62996 cri.go:89] found id: ""
	I0914 18:12:38.192409   62996 logs.go:276] 0 containers: []
	W0914 18:12:38.192421   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:12:38.192429   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:12:38.192490   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:12:38.230390   62996 cri.go:89] found id: ""
	I0914 18:12:38.230413   62996 logs.go:276] 0 containers: []
	W0914 18:12:38.230422   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:12:38.230427   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:12:38.230476   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:12:38.266608   62996 cri.go:89] found id: ""
	I0914 18:12:38.266634   62996 logs.go:276] 0 containers: []
	W0914 18:12:38.266642   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:12:38.266648   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:12:38.266704   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:12:38.299437   62996 cri.go:89] found id: ""
	I0914 18:12:38.299462   62996 logs.go:276] 0 containers: []
	W0914 18:12:38.299471   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:12:38.299477   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:12:38.299548   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:12:38.331092   62996 cri.go:89] found id: ""
	I0914 18:12:38.331119   62996 logs.go:276] 0 containers: []
	W0914 18:12:38.331128   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:12:38.331135   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:12:38.331194   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:12:38.364447   62996 cri.go:89] found id: ""
	I0914 18:12:38.364475   62996 logs.go:276] 0 containers: []
	W0914 18:12:38.364485   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:12:38.364491   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:12:38.364564   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:12:38.396977   62996 cri.go:89] found id: ""
	I0914 18:12:38.397001   62996 logs.go:276] 0 containers: []
	W0914 18:12:38.397011   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:12:38.397022   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:12:38.397036   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:12:38.477413   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:12:38.477449   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:12:38.515003   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:12:38.515031   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:12:38.567177   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:12:38.567222   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:12:38.580840   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:12:38.580876   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:12:38.654520   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:12:37.602537   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:39.603422   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:41.082861   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:43.583680   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:42.002259   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:44.500927   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:41.154728   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:12:41.167501   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:12:41.167578   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:12:41.200209   62996 cri.go:89] found id: ""
	I0914 18:12:41.200243   62996 logs.go:276] 0 containers: []
	W0914 18:12:41.200254   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:12:41.200260   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:12:41.200309   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:12:41.232386   62996 cri.go:89] found id: ""
	I0914 18:12:41.232415   62996 logs.go:276] 0 containers: []
	W0914 18:12:41.232425   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:12:41.232432   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:12:41.232515   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:12:41.268259   62996 cri.go:89] found id: ""
	I0914 18:12:41.268285   62996 logs.go:276] 0 containers: []
	W0914 18:12:41.268295   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:12:41.268303   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:12:41.268374   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:12:41.299952   62996 cri.go:89] found id: ""
	I0914 18:12:41.299984   62996 logs.go:276] 0 containers: []
	W0914 18:12:41.299992   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:12:41.299998   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:12:41.300055   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:12:41.331851   62996 cri.go:89] found id: ""
	I0914 18:12:41.331877   62996 logs.go:276] 0 containers: []
	W0914 18:12:41.331886   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:12:41.331892   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:12:41.331941   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:12:41.373747   62996 cri.go:89] found id: ""
	I0914 18:12:41.373778   62996 logs.go:276] 0 containers: []
	W0914 18:12:41.373789   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:12:41.373797   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:12:41.373847   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:12:41.410186   62996 cri.go:89] found id: ""
	I0914 18:12:41.410217   62996 logs.go:276] 0 containers: []
	W0914 18:12:41.410228   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:12:41.410235   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:12:41.410296   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:12:41.443926   62996 cri.go:89] found id: ""
	I0914 18:12:41.443961   62996 logs.go:276] 0 containers: []
	W0914 18:12:41.443972   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:12:41.443983   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:12:41.443998   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:12:41.457188   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:12:41.457226   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:12:41.525140   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:12:41.525165   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:12:41.525179   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:12:41.603829   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:12:41.603858   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:12:41.641462   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:12:41.641495   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:12:44.194009   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:12:44.207043   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:12:44.207112   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:12:44.240082   62996 cri.go:89] found id: ""
	I0914 18:12:44.240104   62996 logs.go:276] 0 containers: []
	W0914 18:12:44.240112   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:12:44.240117   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:12:44.240177   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:12:44.271608   62996 cri.go:89] found id: ""
	I0914 18:12:44.271642   62996 logs.go:276] 0 containers: []
	W0914 18:12:44.271653   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:12:44.271660   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:12:44.271721   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:12:44.308447   62996 cri.go:89] found id: ""
	I0914 18:12:44.308475   62996 logs.go:276] 0 containers: []
	W0914 18:12:44.308484   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:12:44.308490   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:12:44.308552   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:12:44.340399   62996 cri.go:89] found id: ""
	I0914 18:12:44.340430   62996 logs.go:276] 0 containers: []
	W0914 18:12:44.340440   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:12:44.340446   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:12:44.340502   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:12:44.374078   62996 cri.go:89] found id: ""
	I0914 18:12:44.374104   62996 logs.go:276] 0 containers: []
	W0914 18:12:44.374112   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:12:44.374118   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:12:44.374190   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:12:44.408933   62996 cri.go:89] found id: ""
	I0914 18:12:44.408963   62996 logs.go:276] 0 containers: []
	W0914 18:12:44.408974   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:12:44.408982   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:12:44.409040   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:12:44.444019   62996 cri.go:89] found id: ""
	I0914 18:12:44.444046   62996 logs.go:276] 0 containers: []
	W0914 18:12:44.444063   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:12:44.444070   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:12:44.444126   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:12:44.477033   62996 cri.go:89] found id: ""
	I0914 18:12:44.477058   62996 logs.go:276] 0 containers: []
	W0914 18:12:44.477066   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:12:44.477075   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:12:44.477086   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:12:44.530118   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:12:44.530151   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:12:44.543295   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:12:44.543327   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:12:44.614448   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:12:44.614474   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:12:44.614488   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:12:44.690708   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:12:44.690744   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:12:42.103521   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:44.603744   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:46.082955   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:48.576914   62554 pod_ready.go:82] duration metric: took 4m0.000963266s for pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace to be "Ready" ...
	E0914 18:12:48.576953   62554 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace to be "Ready" (will not retry!)
	I0914 18:12:48.576972   62554 pod_ready.go:39] duration metric: took 4m11.061091965s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 18:12:48.576996   62554 kubeadm.go:597] duration metric: took 4m18.578277603s to restartPrimaryControlPlane
	W0914 18:12:48.577052   62554 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0914 18:12:48.577082   62554 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0914 18:12:46.501278   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:49.001649   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:47.229658   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:12:47.242715   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:12:47.242785   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:12:47.278275   62996 cri.go:89] found id: ""
	I0914 18:12:47.278298   62996 logs.go:276] 0 containers: []
	W0914 18:12:47.278305   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:12:47.278311   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:12:47.278365   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:12:47.313954   62996 cri.go:89] found id: ""
	I0914 18:12:47.313977   62996 logs.go:276] 0 containers: []
	W0914 18:12:47.313985   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:12:47.313991   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:12:47.314045   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:12:47.350944   62996 cri.go:89] found id: ""
	I0914 18:12:47.350972   62996 logs.go:276] 0 containers: []
	W0914 18:12:47.350983   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:12:47.350990   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:12:47.351052   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:12:47.384810   62996 cri.go:89] found id: ""
	I0914 18:12:47.384838   62996 logs.go:276] 0 containers: []
	W0914 18:12:47.384850   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:12:47.384857   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:12:47.384918   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:12:47.420380   62996 cri.go:89] found id: ""
	I0914 18:12:47.420406   62996 logs.go:276] 0 containers: []
	W0914 18:12:47.420419   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:12:47.420425   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:12:47.420476   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:12:47.453967   62996 cri.go:89] found id: ""
	I0914 18:12:47.453995   62996 logs.go:276] 0 containers: []
	W0914 18:12:47.454003   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:12:47.454009   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:12:47.454060   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:12:47.488588   62996 cri.go:89] found id: ""
	I0914 18:12:47.488616   62996 logs.go:276] 0 containers: []
	W0914 18:12:47.488627   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:12:47.488633   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:12:47.488696   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:12:47.522970   62996 cri.go:89] found id: ""
	I0914 18:12:47.523004   62996 logs.go:276] 0 containers: []
	W0914 18:12:47.523015   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:12:47.523025   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:12:47.523039   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:12:47.575977   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:12:47.576026   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:12:47.590854   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:12:47.590884   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:12:47.662149   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:12:47.662200   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:12:47.662215   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:12:47.740447   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:12:47.740482   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:12:50.279512   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:12:50.292294   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:12:50.292377   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:12:50.330928   62996 cri.go:89] found id: ""
	I0914 18:12:50.330960   62996 logs.go:276] 0 containers: []
	W0914 18:12:50.330972   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:12:50.330980   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:12:50.331036   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:12:47.103834   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:49.104052   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:51.603479   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:51.500469   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:53.500885   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:50.363656   62996 cri.go:89] found id: ""
	I0914 18:12:50.363687   62996 logs.go:276] 0 containers: []
	W0914 18:12:50.363696   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:12:50.363702   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:12:50.363756   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:12:50.395071   62996 cri.go:89] found id: ""
	I0914 18:12:50.395096   62996 logs.go:276] 0 containers: []
	W0914 18:12:50.395107   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:12:50.395113   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:12:50.395172   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:12:50.428461   62996 cri.go:89] found id: ""
	I0914 18:12:50.428487   62996 logs.go:276] 0 containers: []
	W0914 18:12:50.428495   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:12:50.428502   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:12:50.428549   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:12:50.461059   62996 cri.go:89] found id: ""
	I0914 18:12:50.461089   62996 logs.go:276] 0 containers: []
	W0914 18:12:50.461098   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:12:50.461105   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:12:50.461155   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:12:50.495447   62996 cri.go:89] found id: ""
	I0914 18:12:50.495481   62996 logs.go:276] 0 containers: []
	W0914 18:12:50.495492   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:12:50.495500   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:12:50.495574   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:12:50.529535   62996 cri.go:89] found id: ""
	I0914 18:12:50.529563   62996 logs.go:276] 0 containers: []
	W0914 18:12:50.529573   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:12:50.529580   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:12:50.529640   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:12:50.564648   62996 cri.go:89] found id: ""
	I0914 18:12:50.564679   62996 logs.go:276] 0 containers: []
	W0914 18:12:50.564689   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:12:50.564699   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:12:50.564710   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:12:50.639039   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:12:50.639066   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:12:50.639081   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:12:50.715636   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:12:50.715675   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:12:50.752973   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:12:50.753002   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:12:50.804654   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:12:50.804692   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:12:53.319420   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:12:53.332322   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:12:53.332414   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:12:53.370250   62996 cri.go:89] found id: ""
	I0914 18:12:53.370287   62996 logs.go:276] 0 containers: []
	W0914 18:12:53.370298   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:12:53.370306   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:12:53.370359   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:12:53.405394   62996 cri.go:89] found id: ""
	I0914 18:12:53.405422   62996 logs.go:276] 0 containers: []
	W0914 18:12:53.405434   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:12:53.405442   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:12:53.405501   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:12:53.439653   62996 cri.go:89] found id: ""
	I0914 18:12:53.439684   62996 logs.go:276] 0 containers: []
	W0914 18:12:53.439693   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:12:53.439699   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:12:53.439747   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:12:53.472491   62996 cri.go:89] found id: ""
	I0914 18:12:53.472520   62996 logs.go:276] 0 containers: []
	W0914 18:12:53.472531   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:12:53.472537   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:12:53.472598   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:12:53.506837   62996 cri.go:89] found id: ""
	I0914 18:12:53.506862   62996 logs.go:276] 0 containers: []
	W0914 18:12:53.506870   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:12:53.506877   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:12:53.506940   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:12:53.538229   62996 cri.go:89] found id: ""
	I0914 18:12:53.538256   62996 logs.go:276] 0 containers: []
	W0914 18:12:53.538267   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:12:53.538274   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:12:53.538340   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:12:53.570628   62996 cri.go:89] found id: ""
	I0914 18:12:53.570654   62996 logs.go:276] 0 containers: []
	W0914 18:12:53.570665   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:12:53.570672   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:12:53.570736   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:12:53.606147   62996 cri.go:89] found id: ""
	I0914 18:12:53.606188   62996 logs.go:276] 0 containers: []
	W0914 18:12:53.606199   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:12:53.606210   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:12:53.606236   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:12:53.675807   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:12:53.675829   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:12:53.675844   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:12:53.758491   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:12:53.758530   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:12:53.796006   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:12:53.796038   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:12:53.844935   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:12:53.844972   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:12:53.604109   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:56.104639   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:56.360696   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:12:56.374916   62996 kubeadm.go:597] duration metric: took 4m2.856242026s to restartPrimaryControlPlane
	W0914 18:12:56.374982   62996 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0914 18:12:56.375003   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0914 18:12:57.043509   62996 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 18:12:57.059022   62996 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0914 18:12:57.070295   62996 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0914 18:12:57.080854   62996 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0914 18:12:57.080875   62996 kubeadm.go:157] found existing configuration files:
	
	I0914 18:12:57.080917   62996 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0914 18:12:57.091221   62996 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0914 18:12:57.091320   62996 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0914 18:12:57.102011   62996 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0914 18:12:57.111389   62996 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0914 18:12:57.111451   62996 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0914 18:12:57.120508   62996 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0914 18:12:57.129086   62996 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0914 18:12:57.129162   62996 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0914 18:12:57.138193   62996 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0914 18:12:57.146637   62996 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0914 18:12:57.146694   62996 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
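	The grep/rm pairs above (the v1.31.1 profile repeats the identical pass later in this log) are minikube's stale-kubeconfig cleanup: any of the four kubeconfigs under /etc/kubernetes that does not reference https://control-plane.minikube.internal:8443 is deleted so the upcoming kubeadm init can regenerate it. A rough shell equivalent of the per-file logic being run:

	    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	        sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f" \
	            || sudo rm -f "/etc/kubernetes/$f"
	    done

	In this run the files are already missing (grep exits with status 2), so every rm is a no-op.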
	I0914 18:12:57.155659   62996 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0914 18:12:57.230872   62996 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0914 18:12:57.230955   62996 kubeadm.go:310] [preflight] Running pre-flight checks
	I0914 18:12:57.369118   62996 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0914 18:12:57.369267   62996 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0914 18:12:57.369422   62996 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0914 18:12:57.560020   62996 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0914 18:12:57.561972   62996 out.go:235]   - Generating certificates and keys ...
	I0914 18:12:57.562086   62996 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0914 18:12:57.562180   62996 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0914 18:12:57.562311   62996 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0914 18:12:57.562370   62996 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0914 18:12:57.562426   62996 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0914 18:12:57.562473   62996 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0914 18:12:57.562562   62996 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0914 18:12:57.562654   62996 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0914 18:12:57.563036   62996 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0914 18:12:57.563429   62996 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0914 18:12:57.563514   62996 kubeadm.go:310] [certs] Using the existing "sa" key
	I0914 18:12:57.563592   62996 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0914 18:12:57.677534   62996 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0914 18:12:57.910852   62996 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0914 18:12:58.037495   62996 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0914 18:12:58.325552   62996 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0914 18:12:58.339574   62996 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0914 18:12:58.340671   62996 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0914 18:12:58.340740   62996 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0914 18:12:58.485582   62996 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0914 18:12:55.501202   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:57.501413   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:13:00.000020   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:58.488706   62996 out.go:235]   - Booting up control plane ...
	I0914 18:12:58.488863   62996 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0914 18:12:58.496924   62996 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0914 18:12:58.499125   62996 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0914 18:12:58.500762   62996 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0914 18:12:58.504049   62996 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
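	For this v1.20.0 kubeadm the wait-control-plane phase reports little incremental progress (contrast the explicit kubelet-check and api-check lines the v1.31.1 run prints further down), so during the up-to-4m0s wait the only visibility is from the node itself, for example:

	    sudo journalctl -u kubelet -f             # follow the kubelet while it tries to start the static pods
	    sudo crictl ps -a --name=kube-apiserver   # see whether an apiserver container has been created yet

	Both commands are ones minikube itself runs elsewhere in this log; only the -f (follow) flag is added here.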
	I0914 18:12:58.604461   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:13:01.102988   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:13:02.001195   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:13:04.001938   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:13:03.603700   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:13:06.103294   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:13:06.501564   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:13:09.002049   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:13:08.604408   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:13:11.103401   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:13:14.788734   62554 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.2116254s)
	I0914 18:13:14.788816   62554 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 18:13:14.810488   62554 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0914 18:13:14.827773   62554 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0914 18:13:14.846933   62554 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0914 18:13:14.846958   62554 kubeadm.go:157] found existing configuration files:
	
	I0914 18:13:14.847011   62554 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0914 18:13:14.859886   62554 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0914 18:13:14.859954   62554 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0914 18:13:14.882400   62554 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0914 18:13:14.896700   62554 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0914 18:13:14.896779   62554 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0914 18:13:14.908567   62554 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0914 18:13:14.920718   62554 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0914 18:13:14.920791   62554 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0914 18:13:14.930849   62554 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0914 18:13:14.940757   62554 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0914 18:13:14.940829   62554 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0914 18:13:14.950828   62554 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0914 18:13:15.000219   62554 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0914 18:13:15.000292   62554 kubeadm.go:310] [preflight] Running pre-flight checks
	I0914 18:13:15.116662   62554 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0914 18:13:15.116830   62554 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0914 18:13:15.116937   62554 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0914 18:13:15.128493   62554 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0914 18:13:11.002219   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:13:13.500397   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:13:15.130231   62554 out.go:235]   - Generating certificates and keys ...
	I0914 18:13:15.130322   62554 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0914 18:13:15.130412   62554 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0914 18:13:15.130513   62554 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0914 18:13:15.130642   62554 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0914 18:13:15.130762   62554 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0914 18:13:15.130842   62554 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0914 18:13:15.130927   62554 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0914 18:13:15.131020   62554 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0914 18:13:15.131131   62554 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0914 18:13:15.131235   62554 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0914 18:13:15.131325   62554 kubeadm.go:310] [certs] Using the existing "sa" key
	I0914 18:13:15.131417   62554 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0914 18:13:15.454691   62554 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0914 18:13:15.653046   62554 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0914 18:13:15.704029   62554 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0914 18:13:15.846280   62554 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0914 18:13:15.926881   62554 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0914 18:13:15.927633   62554 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0914 18:13:15.932596   62554 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0914 18:13:13.602971   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:13:15.603335   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:13:15.934499   62554 out.go:235]   - Booting up control plane ...
	I0914 18:13:15.934626   62554 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0914 18:13:15.934761   62554 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0914 18:13:15.934913   62554 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0914 18:13:15.952982   62554 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0914 18:13:15.961449   62554 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0914 18:13:15.961526   62554 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0914 18:13:16.102126   62554 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0914 18:13:16.102335   62554 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0914 18:13:16.604217   62554 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.082287ms
	I0914 18:13:16.604330   62554 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
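	Both health gates kubeadm reports here can be probed by hand on the node. The endpoints below are the ones the messages above refer to; the API-server URL assumes the control-plane.minikube.internal:8443 address used throughout this run and anonymous access to /healthz, which the default RBAC bootstrap allows:

	    curl -s  http://127.0.0.1:10248/healthz                          # kubelet health; expect "ok"
	    curl -sk https://control-plane.minikube.internal:8443/healthz    # API server health once it starts serving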
	I0914 18:13:15.501231   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:13:17.501427   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:13:19.501641   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:13:21.609408   62554 kubeadm.go:310] [api-check] The API server is healthy after 5.002255971s
	I0914 18:13:21.622798   62554 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0914 18:13:21.637103   62554 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0914 18:13:21.676498   62554 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0914 18:13:21.676739   62554 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-044534 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0914 18:13:21.697522   62554 kubeadm.go:310] [bootstrap-token] Using token: oo4rrp.xx4py1wjxiu1i6la
	I0914 18:13:17.604060   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:13:20.103115   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:13:21.699311   62554 out.go:235]   - Configuring RBAC rules ...
	I0914 18:13:21.699462   62554 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0914 18:13:21.711614   62554 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0914 18:13:21.721449   62554 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0914 18:13:21.727812   62554 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0914 18:13:21.733486   62554 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0914 18:13:21.747521   62554 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0914 18:13:22.014670   62554 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0914 18:13:22.463865   62554 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0914 18:13:23.016165   62554 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0914 18:13:23.016195   62554 kubeadm.go:310] 
	I0914 18:13:23.016257   62554 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0914 18:13:23.016265   62554 kubeadm.go:310] 
	I0914 18:13:23.016385   62554 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0914 18:13:23.016415   62554 kubeadm.go:310] 
	I0914 18:13:23.016456   62554 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0914 18:13:23.016542   62554 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0914 18:13:23.016627   62554 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0914 18:13:23.016637   62554 kubeadm.go:310] 
	I0914 18:13:23.016753   62554 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0914 18:13:23.016778   62554 kubeadm.go:310] 
	I0914 18:13:23.016850   62554 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0914 18:13:23.016860   62554 kubeadm.go:310] 
	I0914 18:13:23.016937   62554 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0914 18:13:23.017051   62554 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0914 18:13:23.017142   62554 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0914 18:13:23.017156   62554 kubeadm.go:310] 
	I0914 18:13:23.017284   62554 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0914 18:13:23.017403   62554 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0914 18:13:23.017419   62554 kubeadm.go:310] 
	I0914 18:13:23.017533   62554 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token oo4rrp.xx4py1wjxiu1i6la \
	I0914 18:13:23.017664   62554 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:30a8b6e98108730ec92a246ad3f4e746211e79f910581e46cd69c4cd78384600 \
	I0914 18:13:23.017700   62554 kubeadm.go:310] 	--control-plane 
	I0914 18:13:23.017710   62554 kubeadm.go:310] 
	I0914 18:13:23.017821   62554 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0914 18:13:23.017832   62554 kubeadm.go:310] 
	I0914 18:13:23.017944   62554 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token oo4rrp.xx4py1wjxiu1i6la \
	I0914 18:13:23.018104   62554 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:30a8b6e98108730ec92a246ad3f4e746211e79f910581e46cd69c4cd78384600 
	I0914 18:13:23.019098   62554 kubeadm.go:310] W0914 18:13:14.968906    2543 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0914 18:13:23.019512   62554 kubeadm.go:310] W0914 18:13:14.970621    2543 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0914 18:13:23.019672   62554 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
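	The first two warnings are because minikube still feeds kubeadm v1.31.1 a kubeadm.k8s.io/v1beta3 config; the remedy kubeadm itself suggests is to migrate the file, for example (the output filename is illustrative, not something this run produces):

	    sudo kubeadm config migrate --old-config /var/tmp/minikube/kubeadm.yaml --new-config /var/tmp/minikube/kubeadm-migrated.yaml

	This only matters when reproducing the init by hand, since minikube rewrites /var/tmp/minikube/kubeadm.yaml from kubeadm.yaml.new on every start, as seen earlier in this log.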
	I0914 18:13:23.019690   62554 cni.go:84] Creating CNI manager for ""
	I0914 18:13:23.019704   62554 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0914 18:13:23.021459   62554 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0914 18:13:23.022517   62554 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0914 18:13:23.037352   62554 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0914 18:13:23.062037   62554 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0914 18:13:23.062132   62554 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 18:13:23.062202   62554 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-044534 minikube.k8s.io/updated_at=2024_09_14T18_13_23_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=fbeeb744274463b05401c917e5ab21bbaf5ef95a minikube.k8s.io/name=embed-certs-044534 minikube.k8s.io/primary=true
	I0914 18:13:23.089789   62554 ops.go:34] apiserver oom_adj: -16
	I0914 18:13:23.246478   62554 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 18:13:23.747419   62554 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 18:13:24.247388   62554 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 18:13:24.746913   62554 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 18:13:21.502222   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:13:24.001757   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:13:25.247445   62554 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 18:13:25.747417   62554 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 18:13:26.247440   62554 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 18:13:26.747262   62554 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 18:13:26.847454   62554 kubeadm.go:1113] duration metric: took 3.78538549s to wait for elevateKubeSystemPrivileges
	I0914 18:13:26.847496   62554 kubeadm.go:394] duration metric: took 4m56.896825398s to StartCluster
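	The burst of identical "get sa default" calls just above is minikube polling until kube-controller-manager has created the default ServiceAccount, the last step of its elevateKubeSystemPrivileges phase; only once that succeeds does StartCluster finish. Assuming the embed-certs-044534 kubeconfig context that minikube writes, the same check from outside the node is simply:

	    kubectl --context embed-certs-044534 -n default get serviceaccount default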
	I0914 18:13:26.847521   62554 settings.go:142] acquiring lock: {Name:mkee31cd57c3a91eff581c48ba7961460fb3e0b1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 18:13:26.847618   62554 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19643-8806/kubeconfig
	I0914 18:13:26.850148   62554 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19643-8806/kubeconfig: {Name:mk850f3e2f3c2ef1e81c795ae72e22e459e27140 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 18:13:26.850488   62554 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.126 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0914 18:13:26.850562   62554 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0914 18:13:26.850672   62554 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-044534"
	I0914 18:13:26.850690   62554 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-044534"
	W0914 18:13:26.850703   62554 addons.go:243] addon storage-provisioner should already be in state true
	I0914 18:13:26.850715   62554 addons.go:69] Setting default-storageclass=true in profile "embed-certs-044534"
	I0914 18:13:26.850734   62554 host.go:66] Checking if "embed-certs-044534" exists ...
	I0914 18:13:26.850753   62554 config.go:182] Loaded profile config "embed-certs-044534": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0914 18:13:26.850752   62554 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-044534"
	I0914 18:13:26.850716   62554 addons.go:69] Setting metrics-server=true in profile "embed-certs-044534"
	I0914 18:13:26.850844   62554 addons.go:234] Setting addon metrics-server=true in "embed-certs-044534"
	W0914 18:13:26.850860   62554 addons.go:243] addon metrics-server should already be in state true
	I0914 18:13:26.850898   62554 host.go:66] Checking if "embed-certs-044534" exists ...
	I0914 18:13:26.851174   62554 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 18:13:26.851204   62554 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 18:13:26.851214   62554 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 18:13:26.851235   62554 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 18:13:26.851250   62554 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 18:13:26.851273   62554 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 18:13:26.852030   62554 out.go:177] * Verifying Kubernetes components...
	I0914 18:13:26.853580   62554 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 18:13:26.868084   62554 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33879
	I0914 18:13:26.868135   62554 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44049
	I0914 18:13:26.868700   62554 main.go:141] libmachine: () Calling .GetVersion
	I0914 18:13:26.868787   62554 main.go:141] libmachine: () Calling .GetVersion
	I0914 18:13:26.869251   62554 main.go:141] libmachine: Using API Version  1
	I0914 18:13:26.869282   62554 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 18:13:26.869637   62554 main.go:141] libmachine: () Calling .GetMachineName
	I0914 18:13:26.869650   62554 main.go:141] libmachine: Using API Version  1
	I0914 18:13:26.869714   62554 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 18:13:26.870039   62554 main.go:141] libmachine: () Calling .GetMachineName
	I0914 18:13:26.870232   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetState
	I0914 18:13:26.870396   62554 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 18:13:26.870454   62554 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 18:13:26.871718   62554 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33295
	I0914 18:13:26.872337   62554 main.go:141] libmachine: () Calling .GetVersion
	I0914 18:13:26.872842   62554 main.go:141] libmachine: Using API Version  1
	I0914 18:13:26.872870   62554 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 18:13:26.873227   62554 main.go:141] libmachine: () Calling .GetMachineName
	I0914 18:13:26.873942   62554 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 18:13:26.873989   62554 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 18:13:26.874235   62554 addons.go:234] Setting addon default-storageclass=true in "embed-certs-044534"
	W0914 18:13:26.874257   62554 addons.go:243] addon default-storageclass should already be in state true
	I0914 18:13:26.874287   62554 host.go:66] Checking if "embed-certs-044534" exists ...
	I0914 18:13:26.874674   62554 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 18:13:26.874721   62554 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 18:13:26.887685   62554 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38275
	I0914 18:13:26.888211   62554 main.go:141] libmachine: () Calling .GetVersion
	I0914 18:13:26.888735   62554 main.go:141] libmachine: Using API Version  1
	I0914 18:13:26.888753   62554 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 18:13:26.889060   62554 main.go:141] libmachine: () Calling .GetMachineName
	I0914 18:13:26.889233   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetState
	I0914 18:13:26.891040   62554 main.go:141] libmachine: (embed-certs-044534) Calling .DriverName
	I0914 18:13:26.892012   62554 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41983
	I0914 18:13:26.892352   62554 main.go:141] libmachine: () Calling .GetVersion
	I0914 18:13:26.892798   62554 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 18:13:26.892812   62554 main.go:141] libmachine: Using API Version  1
	I0914 18:13:26.892845   62554 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 18:13:26.893321   62554 main.go:141] libmachine: () Calling .GetMachineName
	I0914 18:13:26.893987   62554 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 18:13:26.894040   62554 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 18:13:26.894059   62554 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0914 18:13:26.894078   62554 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0914 18:13:26.894102   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetSSHHostname
	I0914 18:13:26.897218   62554 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39143
	I0914 18:13:26.897776   62554 main.go:141] libmachine: () Calling .GetVersion
	I0914 18:13:26.897932   62554 main.go:141] libmachine: (embed-certs-044534) DBG | domain embed-certs-044534 has defined MAC address 52:54:00:f7:d3:8e in network mk-embed-certs-044534
	I0914 18:13:26.898631   62554 main.go:141] libmachine: (embed-certs-044534) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:d3:8e", ip: ""} in network mk-embed-certs-044534: {Iface:virbr3 ExpiryTime:2024-09-14 19:00:16 +0000 UTC Type:0 Mac:52:54:00:f7:d3:8e Iaid: IPaddr:192.168.50.126 Prefix:24 Hostname:embed-certs-044534 Clientid:01:52:54:00:f7:d3:8e}
	I0914 18:13:26.898669   62554 main.go:141] libmachine: (embed-certs-044534) DBG | domain embed-certs-044534 has defined IP address 192.168.50.126 and MAC address 52:54:00:f7:d3:8e in network mk-embed-certs-044534
	I0914 18:13:26.899315   62554 main.go:141] libmachine: Using API Version  1
	I0914 18:13:26.899382   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetSSHPort
	I0914 18:13:26.899395   62554 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 18:13:26.899557   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetSSHKeyPath
	I0914 18:13:26.899698   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetSSHUsername
	I0914 18:13:26.899873   62554 sshutil.go:53] new ssh client: &{IP:192.168.50.126 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/embed-certs-044534/id_rsa Username:docker}
	I0914 18:13:26.900433   62554 main.go:141] libmachine: () Calling .GetMachineName
	I0914 18:13:26.900668   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetState
	I0914 18:13:26.902863   62554 main.go:141] libmachine: (embed-certs-044534) Calling .DriverName
	I0914 18:13:26.904569   62554 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0914 18:13:22.104620   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:13:24.603793   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:13:26.604247   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:13:26.905708   62554 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0914 18:13:26.905729   62554 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0914 18:13:26.905755   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetSSHHostname
	I0914 18:13:26.910848   62554 main.go:141] libmachine: (embed-certs-044534) DBG | domain embed-certs-044534 has defined MAC address 52:54:00:f7:d3:8e in network mk-embed-certs-044534
	I0914 18:13:26.911333   62554 main.go:141] libmachine: (embed-certs-044534) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:d3:8e", ip: ""} in network mk-embed-certs-044534: {Iface:virbr3 ExpiryTime:2024-09-14 19:00:16 +0000 UTC Type:0 Mac:52:54:00:f7:d3:8e Iaid: IPaddr:192.168.50.126 Prefix:24 Hostname:embed-certs-044534 Clientid:01:52:54:00:f7:d3:8e}
	I0914 18:13:26.911430   62554 main.go:141] libmachine: (embed-certs-044534) DBG | domain embed-certs-044534 has defined IP address 192.168.50.126 and MAC address 52:54:00:f7:d3:8e in network mk-embed-certs-044534
	I0914 18:13:26.911568   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetSSHPort
	I0914 18:13:26.911840   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetSSHKeyPath
	I0914 18:13:26.912025   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetSSHUsername
	I0914 18:13:26.912238   62554 sshutil.go:53] new ssh client: &{IP:192.168.50.126 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/embed-certs-044534/id_rsa Username:docker}
	I0914 18:13:26.912625   62554 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33289
	I0914 18:13:26.913014   62554 main.go:141] libmachine: () Calling .GetVersion
	I0914 18:13:26.913653   62554 main.go:141] libmachine: Using API Version  1
	I0914 18:13:26.913668   62554 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 18:13:26.914116   62554 main.go:141] libmachine: () Calling .GetMachineName
	I0914 18:13:26.914342   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetState
	I0914 18:13:26.916119   62554 main.go:141] libmachine: (embed-certs-044534) Calling .DriverName
	I0914 18:13:26.916332   62554 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0914 18:13:26.916350   62554 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0914 18:13:26.916369   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetSSHHostname
	I0914 18:13:26.920129   62554 main.go:141] libmachine: (embed-certs-044534) DBG | domain embed-certs-044534 has defined MAC address 52:54:00:f7:d3:8e in network mk-embed-certs-044534
	I0914 18:13:26.920769   62554 main.go:141] libmachine: (embed-certs-044534) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:d3:8e", ip: ""} in network mk-embed-certs-044534: {Iface:virbr3 ExpiryTime:2024-09-14 19:00:16 +0000 UTC Type:0 Mac:52:54:00:f7:d3:8e Iaid: IPaddr:192.168.50.126 Prefix:24 Hostname:embed-certs-044534 Clientid:01:52:54:00:f7:d3:8e}
	I0914 18:13:26.920791   62554 main.go:141] libmachine: (embed-certs-044534) DBG | domain embed-certs-044534 has defined IP address 192.168.50.126 and MAC address 52:54:00:f7:d3:8e in network mk-embed-certs-044534
	I0914 18:13:26.920971   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetSSHPort
	I0914 18:13:26.921170   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetSSHKeyPath
	I0914 18:13:26.921291   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetSSHUsername
	I0914 18:13:26.921413   62554 sshutil.go:53] new ssh client: &{IP:192.168.50.126 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/embed-certs-044534/id_rsa Username:docker}
	I0914 18:13:27.055184   62554 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0914 18:13:27.072683   62554 node_ready.go:35] waiting up to 6m0s for node "embed-certs-044534" to be "Ready" ...
	I0914 18:13:27.084289   62554 node_ready.go:49] node "embed-certs-044534" has status "Ready":"True"
	I0914 18:13:27.084317   62554 node_ready.go:38] duration metric: took 11.599354ms for node "embed-certs-044534" to be "Ready" ...
	I0914 18:13:27.084326   62554 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 18:13:27.090428   62554 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-044534" in "kube-system" namespace to be "Ready" ...
	I0914 18:13:27.258854   62554 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0914 18:13:27.260576   62554 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0914 18:13:27.261092   62554 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0914 18:13:27.261115   62554 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0914 18:13:27.332882   62554 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0914 18:13:27.332914   62554 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0914 18:13:27.400159   62554 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0914 18:13:27.400193   62554 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0914 18:13:27.486731   62554 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0914 18:13:28.164139   62554 main.go:141] libmachine: Making call to close driver server
	I0914 18:13:28.164171   62554 main.go:141] libmachine: (embed-certs-044534) Calling .Close
	I0914 18:13:28.164215   62554 main.go:141] libmachine: Making call to close driver server
	I0914 18:13:28.164242   62554 main.go:141] libmachine: (embed-certs-044534) Calling .Close
	I0914 18:13:28.164581   62554 main.go:141] libmachine: Successfully made call to close driver server
	I0914 18:13:28.164593   62554 main.go:141] libmachine: Successfully made call to close driver server
	I0914 18:13:28.164596   62554 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 18:13:28.164597   62554 main.go:141] libmachine: (embed-certs-044534) DBG | Closing plugin on server side
	I0914 18:13:28.164608   62554 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 18:13:28.164569   62554 main.go:141] libmachine: (embed-certs-044534) DBG | Closing plugin on server side
	I0914 18:13:28.164619   62554 main.go:141] libmachine: Making call to close driver server
	I0914 18:13:28.164621   62554 main.go:141] libmachine: Making call to close driver server
	I0914 18:13:28.164627   62554 main.go:141] libmachine: (embed-certs-044534) Calling .Close
	I0914 18:13:28.164629   62554 main.go:141] libmachine: (embed-certs-044534) Calling .Close
	I0914 18:13:28.164874   62554 main.go:141] libmachine: Successfully made call to close driver server
	I0914 18:13:28.164897   62554 main.go:141] libmachine: (embed-certs-044534) DBG | Closing plugin on server side
	I0914 18:13:28.164911   62554 main.go:141] libmachine: (embed-certs-044534) DBG | Closing plugin on server side
	I0914 18:13:28.164902   62554 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 18:13:28.164929   62554 main.go:141] libmachine: Successfully made call to close driver server
	I0914 18:13:28.164941   62554 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 18:13:28.196171   62554 main.go:141] libmachine: Making call to close driver server
	I0914 18:13:28.196197   62554 main.go:141] libmachine: (embed-certs-044534) Calling .Close
	I0914 18:13:28.196530   62554 main.go:141] libmachine: Successfully made call to close driver server
	I0914 18:13:28.196590   62554 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 18:13:28.196634   62554 main.go:141] libmachine: (embed-certs-044534) DBG | Closing plugin on server side
	I0914 18:13:28.509915   62554 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.023114908s)
	I0914 18:13:28.509973   62554 main.go:141] libmachine: Making call to close driver server
	I0914 18:13:28.509989   62554 main.go:141] libmachine: (embed-certs-044534) Calling .Close
	I0914 18:13:28.510276   62554 main.go:141] libmachine: Successfully made call to close driver server
	I0914 18:13:28.510329   62554 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 18:13:28.510348   62554 main.go:141] libmachine: (embed-certs-044534) DBG | Closing plugin on server side
	I0914 18:13:28.510365   62554 main.go:141] libmachine: Making call to close driver server
	I0914 18:13:28.510374   62554 main.go:141] libmachine: (embed-certs-044534) Calling .Close
	I0914 18:13:28.510614   62554 main.go:141] libmachine: (embed-certs-044534) DBG | Closing plugin on server side
	I0914 18:13:28.510653   62554 main.go:141] libmachine: Successfully made call to close driver server
	I0914 18:13:28.510665   62554 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 18:13:28.510678   62554 addons.go:475] Verifying addon metrics-server=true in "embed-certs-044534"
	I0914 18:13:28.512283   62554 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0914 18:13:28.513593   62554 addons.go:510] duration metric: took 1.663035459s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0914 18:13:29.103964   62554 pod_ready.go:103] pod "etcd-embed-certs-044534" in "kube-system" namespace has status "Ready":"False"
	I0914 18:13:26.501135   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:13:28.502181   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:13:28.605176   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:13:31.102817   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:13:31.596452   62554 pod_ready.go:103] pod "etcd-embed-certs-044534" in "kube-system" namespace has status "Ready":"False"
	I0914 18:13:33.596699   62554 pod_ready.go:103] pod "etcd-embed-certs-044534" in "kube-system" namespace has status "Ready":"False"
	I0914 18:13:31.001070   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:13:32.001946   63448 pod_ready.go:82] duration metric: took 4m0.00767403s for pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace to be "Ready" ...
	E0914 18:13:32.001975   63448 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0914 18:13:32.001987   63448 pod_ready.go:39] duration metric: took 4m5.051544016s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 18:13:32.002004   63448 api_server.go:52] waiting for apiserver process to appear ...
	I0914 18:13:32.002037   63448 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:13:32.002093   63448 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:13:32.053241   63448 cri.go:89] found id: "6c532e45713d01c37f899bd4c9308e6c7fceb95accae13da1549ffb195e44bc4"
	I0914 18:13:32.053276   63448 cri.go:89] found id: ""
	I0914 18:13:32.053287   63448 logs.go:276] 1 containers: [6c532e45713d01c37f899bd4c9308e6c7fceb95accae13da1549ffb195e44bc4]
	I0914 18:13:32.053349   63448 ssh_runner.go:195] Run: which crictl
	I0914 18:13:32.057854   63448 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:13:32.057921   63448 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:13:32.099294   63448 cri.go:89] found id: "7fb6567a7b9f36f21430774a159db944e9060eac3c8fdf9abc50b9c56a2b0377"
	I0914 18:13:32.099318   63448 cri.go:89] found id: ""
	I0914 18:13:32.099328   63448 logs.go:276] 1 containers: [7fb6567a7b9f36f21430774a159db944e9060eac3c8fdf9abc50b9c56a2b0377]
	I0914 18:13:32.099375   63448 ssh_runner.go:195] Run: which crictl
	I0914 18:13:32.103674   63448 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:13:32.103745   63448 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:13:32.144190   63448 cri.go:89] found id: "02a31bf75666cfcbe85dcb42259279071420c3b26de20d6d667bcd8ffef77d86"
	I0914 18:13:32.144219   63448 cri.go:89] found id: ""
	I0914 18:13:32.144228   63448 logs.go:276] 1 containers: [02a31bf75666cfcbe85dcb42259279071420c3b26de20d6d667bcd8ffef77d86]
	I0914 18:13:32.144275   63448 ssh_runner.go:195] Run: which crictl
	I0914 18:13:32.148382   63448 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:13:32.148443   63448 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:13:32.185779   63448 cri.go:89] found id: "a390e6c0153550b206976ab4e172cf3236aac7e9e5671c332d8c0f8c4e567e0b"
	I0914 18:13:32.185807   63448 cri.go:89] found id: ""
	I0914 18:13:32.185814   63448 logs.go:276] 1 containers: [a390e6c0153550b206976ab4e172cf3236aac7e9e5671c332d8c0f8c4e567e0b]
	I0914 18:13:32.185864   63448 ssh_runner.go:195] Run: which crictl
	I0914 18:13:32.189478   63448 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:13:32.189545   63448 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:13:32.224657   63448 cri.go:89] found id: "a5c3b65e96ba854d6ceb4a824ba5076d43cff0b7a1978617b69271d5b88cca4d"
	I0914 18:13:32.224681   63448 cri.go:89] found id: ""
	I0914 18:13:32.224690   63448 logs.go:276] 1 containers: [a5c3b65e96ba854d6ceb4a824ba5076d43cff0b7a1978617b69271d5b88cca4d]
	I0914 18:13:32.224745   63448 ssh_runner.go:195] Run: which crictl
	I0914 18:13:32.228421   63448 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:13:32.228494   63448 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:13:32.262491   63448 cri.go:89] found id: "09627c963da76d1a34f891fa3edf6c8c82f652f7308541d6f9083f5888f4bf94"
	I0914 18:13:32.262513   63448 cri.go:89] found id: ""
	I0914 18:13:32.262519   63448 logs.go:276] 1 containers: [09627c963da76d1a34f891fa3edf6c8c82f652f7308541d6f9083f5888f4bf94]
	I0914 18:13:32.262579   63448 ssh_runner.go:195] Run: which crictl
	I0914 18:13:32.266135   63448 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:13:32.266213   63448 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:13:32.300085   63448 cri.go:89] found id: ""
	I0914 18:13:32.300111   63448 logs.go:276] 0 containers: []
	W0914 18:13:32.300119   63448 logs.go:278] No container was found matching "kindnet"
	I0914 18:13:32.300124   63448 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0914 18:13:32.300181   63448 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0914 18:13:32.335359   63448 cri.go:89] found id: "be0aa9c1761410495159b478552996da9670b728a9efd2985ec5cad6759e8277"
	I0914 18:13:32.335379   63448 cri.go:89] found id: "b33f92ef722c8a6bde80bdbd3ff62a9b4b31f0bf548eec9aaadd4593a101017e"
	I0914 18:13:32.335387   63448 cri.go:89] found id: ""
	I0914 18:13:32.335393   63448 logs.go:276] 2 containers: [be0aa9c1761410495159b478552996da9670b728a9efd2985ec5cad6759e8277 b33f92ef722c8a6bde80bdbd3ff62a9b4b31f0bf548eec9aaadd4593a101017e]
	I0914 18:13:32.335451   63448 ssh_runner.go:195] Run: which crictl
	I0914 18:13:32.339404   63448 ssh_runner.go:195] Run: which crictl
	I0914 18:13:32.343173   63448 logs.go:123] Gathering logs for kube-proxy [a5c3b65e96ba854d6ceb4a824ba5076d43cff0b7a1978617b69271d5b88cca4d] ...
	I0914 18:13:32.343203   63448 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a5c3b65e96ba854d6ceb4a824ba5076d43cff0b7a1978617b69271d5b88cca4d"
	I0914 18:13:32.378987   63448 logs.go:123] Gathering logs for storage-provisioner [b33f92ef722c8a6bde80bdbd3ff62a9b4b31f0bf548eec9aaadd4593a101017e] ...
	I0914 18:13:32.379016   63448 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b33f92ef722c8a6bde80bdbd3ff62a9b4b31f0bf548eec9aaadd4593a101017e"
	I0914 18:13:32.418829   63448 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:13:32.418855   63448 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:13:32.941046   63448 logs.go:123] Gathering logs for kube-apiserver [6c532e45713d01c37f899bd4c9308e6c7fceb95accae13da1549ffb195e44bc4] ...
	I0914 18:13:32.941102   63448 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6c532e45713d01c37f899bd4c9308e6c7fceb95accae13da1549ffb195e44bc4"
	I0914 18:13:32.998148   63448 logs.go:123] Gathering logs for coredns [02a31bf75666cfcbe85dcb42259279071420c3b26de20d6d667bcd8ffef77d86] ...
	I0914 18:13:32.998209   63448 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 02a31bf75666cfcbe85dcb42259279071420c3b26de20d6d667bcd8ffef77d86"
	I0914 18:13:33.041208   63448 logs.go:123] Gathering logs for kube-scheduler [a390e6c0153550b206976ab4e172cf3236aac7e9e5671c332d8c0f8c4e567e0b] ...
	I0914 18:13:33.041241   63448 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a390e6c0153550b206976ab4e172cf3236aac7e9e5671c332d8c0f8c4e567e0b"
	I0914 18:13:33.080774   63448 logs.go:123] Gathering logs for etcd [7fb6567a7b9f36f21430774a159db944e9060eac3c8fdf9abc50b9c56a2b0377] ...
	I0914 18:13:33.080806   63448 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7fb6567a7b9f36f21430774a159db944e9060eac3c8fdf9abc50b9c56a2b0377"
	I0914 18:13:33.130519   63448 logs.go:123] Gathering logs for kube-controller-manager [09627c963da76d1a34f891fa3edf6c8c82f652f7308541d6f9083f5888f4bf94] ...
	I0914 18:13:33.130552   63448 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 09627c963da76d1a34f891fa3edf6c8c82f652f7308541d6f9083f5888f4bf94"
	I0914 18:13:33.182751   63448 logs.go:123] Gathering logs for storage-provisioner [be0aa9c1761410495159b478552996da9670b728a9efd2985ec5cad6759e8277] ...
	I0914 18:13:33.182788   63448 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 be0aa9c1761410495159b478552996da9670b728a9efd2985ec5cad6759e8277"
	I0914 18:13:33.222008   63448 logs.go:123] Gathering logs for container status ...
	I0914 18:13:33.222053   63448 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:13:33.263100   63448 logs.go:123] Gathering logs for kubelet ...
	I0914 18:13:33.263137   63448 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:13:33.330307   63448 logs.go:123] Gathering logs for dmesg ...
	I0914 18:13:33.330343   63448 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:13:33.344658   63448 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:13:33.344687   63448 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 18:13:35.597157   62554 pod_ready.go:93] pod "etcd-embed-certs-044534" in "kube-system" namespace has status "Ready":"True"
	I0914 18:13:35.597179   62554 pod_ready.go:82] duration metric: took 8.50672651s for pod "etcd-embed-certs-044534" in "kube-system" namespace to be "Ready" ...
	I0914 18:13:35.597189   62554 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-044534" in "kube-system" namespace to be "Ready" ...
	I0914 18:13:36.604147   62554 pod_ready.go:93] pod "kube-apiserver-embed-certs-044534" in "kube-system" namespace has status "Ready":"True"
	I0914 18:13:36.604179   62554 pod_ready.go:82] duration metric: took 1.006982094s for pod "kube-apiserver-embed-certs-044534" in "kube-system" namespace to be "Ready" ...
	I0914 18:13:36.604192   62554 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-044534" in "kube-system" namespace to be "Ready" ...
	I0914 18:13:36.610278   62554 pod_ready.go:93] pod "kube-controller-manager-embed-certs-044534" in "kube-system" namespace has status "Ready":"True"
	I0914 18:13:36.610302   62554 pod_ready.go:82] duration metric: took 6.101843ms for pod "kube-controller-manager-embed-certs-044534" in "kube-system" namespace to be "Ready" ...
	I0914 18:13:36.610315   62554 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-044534" in "kube-system" namespace to be "Ready" ...
	I0914 18:13:36.615527   62554 pod_ready.go:93] pod "kube-scheduler-embed-certs-044534" in "kube-system" namespace has status "Ready":"True"
	I0914 18:13:36.615549   62554 pod_ready.go:82] duration metric: took 5.226206ms for pod "kube-scheduler-embed-certs-044534" in "kube-system" namespace to be "Ready" ...
	I0914 18:13:36.615559   62554 pod_ready.go:39] duration metric: took 9.531222215s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 18:13:36.615587   62554 api_server.go:52] waiting for apiserver process to appear ...
	I0914 18:13:36.615642   62554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:13:36.630381   62554 api_server.go:72] duration metric: took 9.779851335s to wait for apiserver process to appear ...
	I0914 18:13:36.630414   62554 api_server.go:88] waiting for apiserver healthz status ...
	I0914 18:13:36.630438   62554 api_server.go:253] Checking apiserver healthz at https://192.168.50.126:8443/healthz ...
	I0914 18:13:36.637559   62554 api_server.go:279] https://192.168.50.126:8443/healthz returned 200:
	ok
	I0914 18:13:36.639973   62554 api_server.go:141] control plane version: v1.31.1
	I0914 18:13:36.639999   62554 api_server.go:131] duration metric: took 9.577574ms to wait for apiserver health ...
	I0914 18:13:36.640006   62554 system_pods.go:43] waiting for kube-system pods to appear ...
	I0914 18:13:36.647412   62554 system_pods.go:59] 9 kube-system pods found
	I0914 18:13:36.647443   62554 system_pods.go:61] "coredns-7c65d6cfc9-67dsl" [480fe6ea-838d-4048-9893-947d43e7b5c9] Running
	I0914 18:13:36.647448   62554 system_pods.go:61] "coredns-7c65d6cfc9-9j6sv" [87c28a4b-015e-46b8-a462-9dc6ed06d914] Running
	I0914 18:13:36.647452   62554 system_pods.go:61] "etcd-embed-certs-044534" [a9533b38-298e-4435-aaf2-262bdf629832] Running
	I0914 18:13:36.647456   62554 system_pods.go:61] "kube-apiserver-embed-certs-044534" [28ee0ec6-8ede-447a-b4eb-1db9eaeb76fb] Running
	I0914 18:13:36.647459   62554 system_pods.go:61] "kube-controller-manager-embed-certs-044534" [0fd6fe49-8994-48de-8e57-a2420f12b47d] Running
	I0914 18:13:36.647463   62554 system_pods.go:61] "kube-proxy-26fx6" [1cb48201-6caf-4787-9e27-a55885a8ae2a] Running
	I0914 18:13:36.647465   62554 system_pods.go:61] "kube-scheduler-embed-certs-044534" [6c1535f8-267b-401c-a93e-0ac057c75047] Running
	I0914 18:13:36.647471   62554 system_pods.go:61] "metrics-server-6867b74b74-rrfnt" [a1deacaa-9b90-49ac-8b8b-0bd909b5f6e0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 18:13:36.647475   62554 system_pods.go:61] "storage-provisioner" [dec7a14c-b6f7-464f-86b3-5f7d8063d8e0] Running
	I0914 18:13:36.647483   62554 system_pods.go:74] duration metric: took 7.47115ms to wait for pod list to return data ...
	I0914 18:13:36.647490   62554 default_sa.go:34] waiting for default service account to be created ...
	I0914 18:13:36.650678   62554 default_sa.go:45] found service account: "default"
	I0914 18:13:36.650722   62554 default_sa.go:55] duration metric: took 3.225438ms for default service account to be created ...
	I0914 18:13:36.650733   62554 system_pods.go:116] waiting for k8s-apps to be running ...
	I0914 18:13:36.656461   62554 system_pods.go:86] 9 kube-system pods found
	I0914 18:13:36.656489   62554 system_pods.go:89] "coredns-7c65d6cfc9-67dsl" [480fe6ea-838d-4048-9893-947d43e7b5c9] Running
	I0914 18:13:36.656495   62554 system_pods.go:89] "coredns-7c65d6cfc9-9j6sv" [87c28a4b-015e-46b8-a462-9dc6ed06d914] Running
	I0914 18:13:36.656499   62554 system_pods.go:89] "etcd-embed-certs-044534" [a9533b38-298e-4435-aaf2-262bdf629832] Running
	I0914 18:13:36.656503   62554 system_pods.go:89] "kube-apiserver-embed-certs-044534" [28ee0ec6-8ede-447a-b4eb-1db9eaeb76fb] Running
	I0914 18:13:36.656507   62554 system_pods.go:89] "kube-controller-manager-embed-certs-044534" [0fd6fe49-8994-48de-8e57-a2420f12b47d] Running
	I0914 18:13:36.656512   62554 system_pods.go:89] "kube-proxy-26fx6" [1cb48201-6caf-4787-9e27-a55885a8ae2a] Running
	I0914 18:13:36.656516   62554 system_pods.go:89] "kube-scheduler-embed-certs-044534" [6c1535f8-267b-401c-a93e-0ac057c75047] Running
	I0914 18:13:36.656522   62554 system_pods.go:89] "metrics-server-6867b74b74-rrfnt" [a1deacaa-9b90-49ac-8b8b-0bd909b5f6e0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 18:13:36.656525   62554 system_pods.go:89] "storage-provisioner" [dec7a14c-b6f7-464f-86b3-5f7d8063d8e0] Running
	I0914 18:13:36.656534   62554 system_pods.go:126] duration metric: took 5.795433ms to wait for k8s-apps to be running ...
	I0914 18:13:36.656541   62554 system_svc.go:44] waiting for kubelet service to be running ....
	I0914 18:13:36.656586   62554 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 18:13:36.673166   62554 system_svc.go:56] duration metric: took 16.609444ms WaitForService to wait for kubelet
	I0914 18:13:36.673205   62554 kubeadm.go:582] duration metric: took 9.822681909s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0914 18:13:36.673227   62554 node_conditions.go:102] verifying NodePressure condition ...
	I0914 18:13:36.794984   62554 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0914 18:13:36.795013   62554 node_conditions.go:123] node cpu capacity is 2
	I0914 18:13:36.795024   62554 node_conditions.go:105] duration metric: took 121.79122ms to run NodePressure ...
	I0914 18:13:36.795038   62554 start.go:241] waiting for startup goroutines ...
	I0914 18:13:36.795047   62554 start.go:246] waiting for cluster config update ...
	I0914 18:13:36.795060   62554 start.go:255] writing updated cluster config ...
	I0914 18:13:36.795406   62554 ssh_runner.go:195] Run: rm -f paused
	I0914 18:13:36.847454   62554 start.go:600] kubectl: 1.31.0, cluster: 1.31.1 (minor skew: 0)
	I0914 18:13:36.849605   62554 out.go:177] * Done! kubectl is now configured to use "embed-certs-044534" cluster and "default" namespace by default
	I0914 18:13:33.105197   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:13:35.604458   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:13:35.989800   63448 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:13:36.006371   63448 api_server.go:72] duration metric: took 4m14.310539233s to wait for apiserver process to appear ...
	I0914 18:13:36.006405   63448 api_server.go:88] waiting for apiserver healthz status ...
	I0914 18:13:36.006446   63448 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:13:36.006508   63448 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:13:36.044973   63448 cri.go:89] found id: "6c532e45713d01c37f899bd4c9308e6c7fceb95accae13da1549ffb195e44bc4"
	I0914 18:13:36.044992   63448 cri.go:89] found id: ""
	I0914 18:13:36.045000   63448 logs.go:276] 1 containers: [6c532e45713d01c37f899bd4c9308e6c7fceb95accae13da1549ffb195e44bc4]
	I0914 18:13:36.045055   63448 ssh_runner.go:195] Run: which crictl
	I0914 18:13:36.049371   63448 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:13:36.049449   63448 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:13:36.097114   63448 cri.go:89] found id: "7fb6567a7b9f36f21430774a159db944e9060eac3c8fdf9abc50b9c56a2b0377"
	I0914 18:13:36.097139   63448 cri.go:89] found id: ""
	I0914 18:13:36.097148   63448 logs.go:276] 1 containers: [7fb6567a7b9f36f21430774a159db944e9060eac3c8fdf9abc50b9c56a2b0377]
	I0914 18:13:36.097212   63448 ssh_runner.go:195] Run: which crictl
	I0914 18:13:36.102084   63448 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:13:36.102153   63448 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:13:36.140640   63448 cri.go:89] found id: "02a31bf75666cfcbe85dcb42259279071420c3b26de20d6d667bcd8ffef77d86"
	I0914 18:13:36.140662   63448 cri.go:89] found id: ""
	I0914 18:13:36.140671   63448 logs.go:276] 1 containers: [02a31bf75666cfcbe85dcb42259279071420c3b26de20d6d667bcd8ffef77d86]
	I0914 18:13:36.140728   63448 ssh_runner.go:195] Run: which crictl
	I0914 18:13:36.144624   63448 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:13:36.144696   63448 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:13:36.179135   63448 cri.go:89] found id: "a390e6c0153550b206976ab4e172cf3236aac7e9e5671c332d8c0f8c4e567e0b"
	I0914 18:13:36.179156   63448 cri.go:89] found id: ""
	I0914 18:13:36.179163   63448 logs.go:276] 1 containers: [a390e6c0153550b206976ab4e172cf3236aac7e9e5671c332d8c0f8c4e567e0b]
	I0914 18:13:36.179216   63448 ssh_runner.go:195] Run: which crictl
	I0914 18:13:36.183050   63448 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:13:36.183110   63448 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:13:36.222739   63448 cri.go:89] found id: "a5c3b65e96ba854d6ceb4a824ba5076d43cff0b7a1978617b69271d5b88cca4d"
	I0914 18:13:36.222758   63448 cri.go:89] found id: ""
	I0914 18:13:36.222765   63448 logs.go:276] 1 containers: [a5c3b65e96ba854d6ceb4a824ba5076d43cff0b7a1978617b69271d5b88cca4d]
	I0914 18:13:36.222812   63448 ssh_runner.go:195] Run: which crictl
	I0914 18:13:36.226715   63448 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:13:36.226782   63448 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:13:36.261587   63448 cri.go:89] found id: "09627c963da76d1a34f891fa3edf6c8c82f652f7308541d6f9083f5888f4bf94"
	I0914 18:13:36.261610   63448 cri.go:89] found id: ""
	I0914 18:13:36.261617   63448 logs.go:276] 1 containers: [09627c963da76d1a34f891fa3edf6c8c82f652f7308541d6f9083f5888f4bf94]
	I0914 18:13:36.261664   63448 ssh_runner.go:195] Run: which crictl
	I0914 18:13:36.265541   63448 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:13:36.265614   63448 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:13:36.301521   63448 cri.go:89] found id: ""
	I0914 18:13:36.301546   63448 logs.go:276] 0 containers: []
	W0914 18:13:36.301554   63448 logs.go:278] No container was found matching "kindnet"
	I0914 18:13:36.301560   63448 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0914 18:13:36.301622   63448 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0914 18:13:36.335332   63448 cri.go:89] found id: "be0aa9c1761410495159b478552996da9670b728a9efd2985ec5cad6759e8277"
	I0914 18:13:36.335355   63448 cri.go:89] found id: "b33f92ef722c8a6bde80bdbd3ff62a9b4b31f0bf548eec9aaadd4593a101017e"
	I0914 18:13:36.335358   63448 cri.go:89] found id: ""
	I0914 18:13:36.335365   63448 logs.go:276] 2 containers: [be0aa9c1761410495159b478552996da9670b728a9efd2985ec5cad6759e8277 b33f92ef722c8a6bde80bdbd3ff62a9b4b31f0bf548eec9aaadd4593a101017e]
	I0914 18:13:36.335415   63448 ssh_runner.go:195] Run: which crictl
	I0914 18:13:36.339542   63448 ssh_runner.go:195] Run: which crictl
	I0914 18:13:36.343543   63448 logs.go:123] Gathering logs for etcd [7fb6567a7b9f36f21430774a159db944e9060eac3c8fdf9abc50b9c56a2b0377] ...
	I0914 18:13:36.343570   63448 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7fb6567a7b9f36f21430774a159db944e9060eac3c8fdf9abc50b9c56a2b0377"
	I0914 18:13:36.384224   63448 logs.go:123] Gathering logs for coredns [02a31bf75666cfcbe85dcb42259279071420c3b26de20d6d667bcd8ffef77d86] ...
	I0914 18:13:36.384259   63448 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 02a31bf75666cfcbe85dcb42259279071420c3b26de20d6d667bcd8ffef77d86"
	I0914 18:13:36.428010   63448 logs.go:123] Gathering logs for kube-scheduler [a390e6c0153550b206976ab4e172cf3236aac7e9e5671c332d8c0f8c4e567e0b] ...
	I0914 18:13:36.428041   63448 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a390e6c0153550b206976ab4e172cf3236aac7e9e5671c332d8c0f8c4e567e0b"
	I0914 18:13:36.469679   63448 logs.go:123] Gathering logs for storage-provisioner [be0aa9c1761410495159b478552996da9670b728a9efd2985ec5cad6759e8277] ...
	I0914 18:13:36.469708   63448 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 be0aa9c1761410495159b478552996da9670b728a9efd2985ec5cad6759e8277"
	I0914 18:13:36.507570   63448 logs.go:123] Gathering logs for storage-provisioner [b33f92ef722c8a6bde80bdbd3ff62a9b4b31f0bf548eec9aaadd4593a101017e] ...
	I0914 18:13:36.507597   63448 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b33f92ef722c8a6bde80bdbd3ff62a9b4b31f0bf548eec9aaadd4593a101017e"
	I0914 18:13:36.543300   63448 logs.go:123] Gathering logs for kubelet ...
	I0914 18:13:36.543335   63448 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:13:36.619060   63448 logs.go:123] Gathering logs for dmesg ...
	I0914 18:13:36.619084   63448 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:13:36.633542   63448 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:13:36.633572   63448 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 18:13:36.741334   63448 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:13:36.741370   63448 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:13:37.231208   63448 logs.go:123] Gathering logs for container status ...
	I0914 18:13:37.231255   63448 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:13:37.278835   63448 logs.go:123] Gathering logs for kube-apiserver [6c532e45713d01c37f899bd4c9308e6c7fceb95accae13da1549ffb195e44bc4] ...
	I0914 18:13:37.278863   63448 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6c532e45713d01c37f899bd4c9308e6c7fceb95accae13da1549ffb195e44bc4"
	I0914 18:13:37.320359   63448 logs.go:123] Gathering logs for kube-proxy [a5c3b65e96ba854d6ceb4a824ba5076d43cff0b7a1978617b69271d5b88cca4d] ...
	I0914 18:13:37.320399   63448 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a5c3b65e96ba854d6ceb4a824ba5076d43cff0b7a1978617b69271d5b88cca4d"
	I0914 18:13:37.357940   63448 logs.go:123] Gathering logs for kube-controller-manager [09627c963da76d1a34f891fa3edf6c8c82f652f7308541d6f9083f5888f4bf94] ...
	I0914 18:13:37.357974   63448 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 09627c963da76d1a34f891fa3edf6c8c82f652f7308541d6f9083f5888f4bf94"
	I0914 18:13:39.913586   63448 api_server.go:253] Checking apiserver healthz at https://192.168.61.38:8444/healthz ...
	I0914 18:13:39.917590   63448 api_server.go:279] https://192.168.61.38:8444/healthz returned 200:
	ok
	I0914 18:13:39.918633   63448 api_server.go:141] control plane version: v1.31.1
	I0914 18:13:39.918653   63448 api_server.go:131] duration metric: took 3.912241678s to wait for apiserver health ...
	I0914 18:13:39.918660   63448 system_pods.go:43] waiting for kube-system pods to appear ...
	I0914 18:13:39.918682   63448 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:13:39.918727   63448 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:13:39.961919   63448 cri.go:89] found id: "6c532e45713d01c37f899bd4c9308e6c7fceb95accae13da1549ffb195e44bc4"
	I0914 18:13:39.961947   63448 cri.go:89] found id: ""
	I0914 18:13:39.961956   63448 logs.go:276] 1 containers: [6c532e45713d01c37f899bd4c9308e6c7fceb95accae13da1549ffb195e44bc4]
	I0914 18:13:39.962012   63448 ssh_runner.go:195] Run: which crictl
	I0914 18:13:39.965756   63448 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:13:39.965838   63448 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:13:40.008044   63448 cri.go:89] found id: "7fb6567a7b9f36f21430774a159db944e9060eac3c8fdf9abc50b9c56a2b0377"
	I0914 18:13:40.008066   63448 cri.go:89] found id: ""
	I0914 18:13:40.008074   63448 logs.go:276] 1 containers: [7fb6567a7b9f36f21430774a159db944e9060eac3c8fdf9abc50b9c56a2b0377]
	I0914 18:13:40.008117   63448 ssh_runner.go:195] Run: which crictl
	I0914 18:13:40.012505   63448 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:13:40.012569   63448 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:13:40.059166   63448 cri.go:89] found id: "02a31bf75666cfcbe85dcb42259279071420c3b26de20d6d667bcd8ffef77d86"
	I0914 18:13:40.059194   63448 cri.go:89] found id: ""
	I0914 18:13:40.059204   63448 logs.go:276] 1 containers: [02a31bf75666cfcbe85dcb42259279071420c3b26de20d6d667bcd8ffef77d86]
	I0914 18:13:40.059267   63448 ssh_runner.go:195] Run: which crictl
	I0914 18:13:40.063135   63448 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:13:40.063197   63448 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:13:40.105220   63448 cri.go:89] found id: "a390e6c0153550b206976ab4e172cf3236aac7e9e5671c332d8c0f8c4e567e0b"
	I0914 18:13:40.105245   63448 cri.go:89] found id: ""
	I0914 18:13:40.105255   63448 logs.go:276] 1 containers: [a390e6c0153550b206976ab4e172cf3236aac7e9e5671c332d8c0f8c4e567e0b]
	I0914 18:13:40.105308   63448 ssh_runner.go:195] Run: which crictl
	I0914 18:13:40.109907   63448 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:13:40.109978   63448 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:13:40.146307   63448 cri.go:89] found id: "a5c3b65e96ba854d6ceb4a824ba5076d43cff0b7a1978617b69271d5b88cca4d"
	I0914 18:13:40.146337   63448 cri.go:89] found id: ""
	I0914 18:13:40.146349   63448 logs.go:276] 1 containers: [a5c3b65e96ba854d6ceb4a824ba5076d43cff0b7a1978617b69271d5b88cca4d]
	I0914 18:13:40.146396   63448 ssh_runner.go:195] Run: which crictl
	I0914 18:13:40.150369   63448 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:13:40.150436   63448 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:13:40.185274   63448 cri.go:89] found id: "09627c963da76d1a34f891fa3edf6c8c82f652f7308541d6f9083f5888f4bf94"
	I0914 18:13:40.185301   63448 cri.go:89] found id: ""
	I0914 18:13:40.185312   63448 logs.go:276] 1 containers: [09627c963da76d1a34f891fa3edf6c8c82f652f7308541d6f9083f5888f4bf94]
	I0914 18:13:40.185374   63448 ssh_runner.go:195] Run: which crictl
	I0914 18:13:40.189425   63448 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:13:40.189499   63448 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:13:40.223289   63448 cri.go:89] found id: ""
	I0914 18:13:40.223311   63448 logs.go:276] 0 containers: []
	W0914 18:13:40.223319   63448 logs.go:278] No container was found matching "kindnet"
	I0914 18:13:40.223324   63448 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0914 18:13:40.223369   63448 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0914 18:13:40.257779   63448 cri.go:89] found id: "be0aa9c1761410495159b478552996da9670b728a9efd2985ec5cad6759e8277"
	I0914 18:13:40.257805   63448 cri.go:89] found id: "b33f92ef722c8a6bde80bdbd3ff62a9b4b31f0bf548eec9aaadd4593a101017e"
	I0914 18:13:40.257811   63448 cri.go:89] found id: ""
	I0914 18:13:40.257820   63448 logs.go:276] 2 containers: [be0aa9c1761410495159b478552996da9670b728a9efd2985ec5cad6759e8277 b33f92ef722c8a6bde80bdbd3ff62a9b4b31f0bf548eec9aaadd4593a101017e]
	I0914 18:13:40.257880   63448 ssh_runner.go:195] Run: which crictl
	I0914 18:13:40.262388   63448 ssh_runner.go:195] Run: which crictl
	I0914 18:13:40.266233   63448 logs.go:123] Gathering logs for kube-apiserver [6c532e45713d01c37f899bd4c9308e6c7fceb95accae13da1549ffb195e44bc4] ...
	I0914 18:13:40.266258   63448 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6c532e45713d01c37f899bd4c9308e6c7fceb95accae13da1549ffb195e44bc4"
	I0914 18:13:38.505090   62996 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0914 18:13:38.505605   62996 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0914 18:13:38.505837   62996 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0914 18:13:38.105234   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:13:40.604049   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:13:40.310145   63448 logs.go:123] Gathering logs for etcd [7fb6567a7b9f36f21430774a159db944e9060eac3c8fdf9abc50b9c56a2b0377] ...
	I0914 18:13:40.310188   63448 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7fb6567a7b9f36f21430774a159db944e9060eac3c8fdf9abc50b9c56a2b0377"
	I0914 18:13:40.358651   63448 logs.go:123] Gathering logs for coredns [02a31bf75666cfcbe85dcb42259279071420c3b26de20d6d667bcd8ffef77d86] ...
	I0914 18:13:40.358686   63448 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 02a31bf75666cfcbe85dcb42259279071420c3b26de20d6d667bcd8ffef77d86"
	I0914 18:13:40.398107   63448 logs.go:123] Gathering logs for kube-scheduler [a390e6c0153550b206976ab4e172cf3236aac7e9e5671c332d8c0f8c4e567e0b] ...
	I0914 18:13:40.398144   63448 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a390e6c0153550b206976ab4e172cf3236aac7e9e5671c332d8c0f8c4e567e0b"
	I0914 18:13:40.450540   63448 logs.go:123] Gathering logs for dmesg ...
	I0914 18:13:40.450573   63448 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:13:40.465987   63448 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:13:40.466013   63448 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 18:13:40.573299   63448 logs.go:123] Gathering logs for kube-proxy [a5c3b65e96ba854d6ceb4a824ba5076d43cff0b7a1978617b69271d5b88cca4d] ...
	I0914 18:13:40.573333   63448 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a5c3b65e96ba854d6ceb4a824ba5076d43cff0b7a1978617b69271d5b88cca4d"
	I0914 18:13:40.618201   63448 logs.go:123] Gathering logs for kube-controller-manager [09627c963da76d1a34f891fa3edf6c8c82f652f7308541d6f9083f5888f4bf94] ...
	I0914 18:13:40.618247   63448 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 09627c963da76d1a34f891fa3edf6c8c82f652f7308541d6f9083f5888f4bf94"
	I0914 18:13:40.671259   63448 logs.go:123] Gathering logs for storage-provisioner [be0aa9c1761410495159b478552996da9670b728a9efd2985ec5cad6759e8277] ...
	I0914 18:13:40.671304   63448 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 be0aa9c1761410495159b478552996da9670b728a9efd2985ec5cad6759e8277"
	I0914 18:13:40.708455   63448 logs.go:123] Gathering logs for storage-provisioner [b33f92ef722c8a6bde80bdbd3ff62a9b4b31f0bf548eec9aaadd4593a101017e] ...
	I0914 18:13:40.708488   63448 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b33f92ef722c8a6bde80bdbd3ff62a9b4b31f0bf548eec9aaadd4593a101017e"
	I0914 18:13:40.746662   63448 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:13:40.746696   63448 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:13:41.108968   63448 logs.go:123] Gathering logs for container status ...
	I0914 18:13:41.109017   63448 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:13:41.150925   63448 logs.go:123] Gathering logs for kubelet ...
	I0914 18:13:41.150968   63448 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:13:43.725606   63448 system_pods.go:59] 8 kube-system pods found
	I0914 18:13:43.725642   63448 system_pods.go:61] "coredns-7c65d6cfc9-8v8s7" [896b4fde-d17e-43a3-b7c8-b710e2e70e2c] Running
	I0914 18:13:43.725650   63448 system_pods.go:61] "etcd-default-k8s-diff-port-243449" [9201493d-45db-44f4-948d-34e1d1ddee8f] Running
	I0914 18:13:43.725656   63448 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-243449" [052b85bf-0f3b-4ace-9301-99e53f91cfcf] Running
	I0914 18:13:43.725661   63448 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-243449" [51214b37-f5e8-4037-83ff-1fd09b93e008] Running
	I0914 18:13:43.725665   63448 system_pods.go:61] "kube-proxy-gbkqm" [4308aacf-ea0a-4bba-8598-85ffaf959b7e] Running
	I0914 18:13:43.725670   63448 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-243449" [b6eacf30-47ec-4f8d-968c-195661a2a732] Running
	I0914 18:13:43.725680   63448 system_pods.go:61] "metrics-server-6867b74b74-7v8dr" [90be95af-c779-4b31-b261-2c4020a34280] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 18:13:43.725687   63448 system_pods.go:61] "storage-provisioner" [2e814601-a19a-4848-bed5-d9a29ffb3b5d] Running
	I0914 18:13:43.725699   63448 system_pods.go:74] duration metric: took 3.807031642s to wait for pod list to return data ...
	I0914 18:13:43.725710   63448 default_sa.go:34] waiting for default service account to be created ...
	I0914 18:13:43.728384   63448 default_sa.go:45] found service account: "default"
	I0914 18:13:43.728409   63448 default_sa.go:55] duration metric: took 2.691817ms for default service account to be created ...
	I0914 18:13:43.728417   63448 system_pods.go:116] waiting for k8s-apps to be running ...
	I0914 18:13:43.732884   63448 system_pods.go:86] 8 kube-system pods found
	I0914 18:13:43.732913   63448 system_pods.go:89] "coredns-7c65d6cfc9-8v8s7" [896b4fde-d17e-43a3-b7c8-b710e2e70e2c] Running
	I0914 18:13:43.732918   63448 system_pods.go:89] "etcd-default-k8s-diff-port-243449" [9201493d-45db-44f4-948d-34e1d1ddee8f] Running
	I0914 18:13:43.732922   63448 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-243449" [052b85bf-0f3b-4ace-9301-99e53f91cfcf] Running
	I0914 18:13:43.732926   63448 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-243449" [51214b37-f5e8-4037-83ff-1fd09b93e008] Running
	I0914 18:13:43.732931   63448 system_pods.go:89] "kube-proxy-gbkqm" [4308aacf-ea0a-4bba-8598-85ffaf959b7e] Running
	I0914 18:13:43.732935   63448 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-243449" [b6eacf30-47ec-4f8d-968c-195661a2a732] Running
	I0914 18:13:43.732942   63448 system_pods.go:89] "metrics-server-6867b74b74-7v8dr" [90be95af-c779-4b31-b261-2c4020a34280] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 18:13:43.732947   63448 system_pods.go:89] "storage-provisioner" [2e814601-a19a-4848-bed5-d9a29ffb3b5d] Running
	I0914 18:13:43.732954   63448 system_pods.go:126] duration metric: took 4.531761ms to wait for k8s-apps to be running ...
	I0914 18:13:43.732960   63448 system_svc.go:44] waiting for kubelet service to be running ....
	I0914 18:13:43.733001   63448 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 18:13:43.749535   63448 system_svc.go:56] duration metric: took 16.566498ms WaitForService to wait for kubelet
	I0914 18:13:43.749567   63448 kubeadm.go:582] duration metric: took 4m22.053742257s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0914 18:13:43.749587   63448 node_conditions.go:102] verifying NodePressure condition ...
	I0914 18:13:43.752493   63448 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0914 18:13:43.752514   63448 node_conditions.go:123] node cpu capacity is 2
	I0914 18:13:43.752523   63448 node_conditions.go:105] duration metric: took 2.931821ms to run NodePressure ...
	I0914 18:13:43.752534   63448 start.go:241] waiting for startup goroutines ...
	I0914 18:13:43.752548   63448 start.go:246] waiting for cluster config update ...
	I0914 18:13:43.752560   63448 start.go:255] writing updated cluster config ...
	I0914 18:13:43.752815   63448 ssh_runner.go:195] Run: rm -f paused
	I0914 18:13:43.803181   63448 start.go:600] kubectl: 1.31.0, cluster: 1.31.1 (minor skew: 0)
	I0914 18:13:43.805150   63448 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-243449" cluster and "default" namespace by default
	I0914 18:13:43.506241   62996 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0914 18:13:43.506502   62996 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0914 18:13:43.103780   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:13:45.603666   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:13:47.603988   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:13:50.104811   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:13:53.506772   62996 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0914 18:13:53.506959   62996 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0914 18:13:52.604411   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:13:55.103339   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:13:57.103716   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:13:59.603423   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:14:00.097180   62207 pod_ready.go:82] duration metric: took 4m0.000345486s for pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace to be "Ready" ...
	E0914 18:14:00.097209   62207 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace to be "Ready" (will not retry!)
	I0914 18:14:00.097230   62207 pod_ready.go:39] duration metric: took 4m11.039838973s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 18:14:00.097260   62207 kubeadm.go:597] duration metric: took 4m18.345876583s to restartPrimaryControlPlane
	W0914 18:14:00.097328   62207 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0914 18:14:00.097360   62207 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0914 18:14:13.507627   62996 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0914 18:14:13.507840   62996 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0914 18:14:26.392001   62207 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.294613232s)
	I0914 18:14:26.392082   62207 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 18:14:26.410558   62207 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0914 18:14:26.421178   62207 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0914 18:14:26.430786   62207 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0914 18:14:26.430808   62207 kubeadm.go:157] found existing configuration files:
	
	I0914 18:14:26.430858   62207 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0914 18:14:26.440193   62207 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0914 18:14:26.440253   62207 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0914 18:14:26.449848   62207 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0914 18:14:26.459589   62207 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0914 18:14:26.459651   62207 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0914 18:14:26.469556   62207 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0914 18:14:26.478722   62207 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0914 18:14:26.478782   62207 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0914 18:14:26.488694   62207 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0914 18:14:26.498478   62207 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0914 18:14:26.498542   62207 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0914 18:14:26.509455   62207 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0914 18:14:26.552295   62207 kubeadm.go:310] W0914 18:14:26.530603    2977 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0914 18:14:26.552908   62207 kubeadm.go:310] W0914 18:14:26.531307    2977 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0914 18:14:26.665962   62207 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0914 18:14:35.406074   62207 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0914 18:14:35.406150   62207 kubeadm.go:310] [preflight] Running pre-flight checks
	I0914 18:14:35.406251   62207 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0914 18:14:35.406372   62207 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0914 18:14:35.406503   62207 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0914 18:14:35.406611   62207 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0914 18:14:35.408167   62207 out.go:235]   - Generating certificates and keys ...
	I0914 18:14:35.408257   62207 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0914 18:14:35.408337   62207 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0914 18:14:35.408451   62207 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0914 18:14:35.408550   62207 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0914 18:14:35.408655   62207 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0914 18:14:35.408733   62207 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0914 18:14:35.408823   62207 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0914 18:14:35.408916   62207 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0914 18:14:35.409022   62207 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0914 18:14:35.409133   62207 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0914 18:14:35.409176   62207 kubeadm.go:310] [certs] Using the existing "sa" key
	I0914 18:14:35.409225   62207 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0914 18:14:35.409269   62207 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0914 18:14:35.409328   62207 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0914 18:14:35.409374   62207 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0914 18:14:35.409440   62207 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0914 18:14:35.409507   62207 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0914 18:14:35.409633   62207 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0914 18:14:35.409734   62207 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0914 18:14:35.411984   62207 out.go:235]   - Booting up control plane ...
	I0914 18:14:35.412099   62207 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0914 18:14:35.412212   62207 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0914 18:14:35.412276   62207 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0914 18:14:35.412371   62207 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0914 18:14:35.412444   62207 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0914 18:14:35.412479   62207 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0914 18:14:35.412597   62207 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0914 18:14:35.412686   62207 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0914 18:14:35.412737   62207 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.002422188s
	I0914 18:14:35.412801   62207 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0914 18:14:35.412863   62207 kubeadm.go:310] [api-check] The API server is healthy after 5.002046359s
	I0914 18:14:35.412986   62207 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0914 18:14:35.413129   62207 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0914 18:14:35.413208   62207 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0914 18:14:35.413427   62207 kubeadm.go:310] [mark-control-plane] Marking the node no-preload-168587 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0914 18:14:35.413510   62207 kubeadm.go:310] [bootstrap-token] Using token: 2jk8ol.l80z6l7tm2nt4pl7
	I0914 18:14:35.414838   62207 out.go:235]   - Configuring RBAC rules ...
	I0914 18:14:35.414968   62207 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0914 18:14:35.415069   62207 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0914 18:14:35.415291   62207 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0914 18:14:35.415482   62207 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0914 18:14:35.415615   62207 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0914 18:14:35.415725   62207 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0914 18:14:35.415867   62207 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0914 18:14:35.415930   62207 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0914 18:14:35.415990   62207 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0914 18:14:35.415999   62207 kubeadm.go:310] 
	I0914 18:14:35.416077   62207 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0914 18:14:35.416086   62207 kubeadm.go:310] 
	I0914 18:14:35.416187   62207 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0914 18:14:35.416198   62207 kubeadm.go:310] 
	I0914 18:14:35.416232   62207 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0914 18:14:35.416314   62207 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0914 18:14:35.416388   62207 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0914 18:14:35.416397   62207 kubeadm.go:310] 
	I0914 18:14:35.416474   62207 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0914 18:14:35.416484   62207 kubeadm.go:310] 
	I0914 18:14:35.416525   62207 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0914 18:14:35.416529   62207 kubeadm.go:310] 
	I0914 18:14:35.416597   62207 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0914 18:14:35.416701   62207 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0914 18:14:35.416781   62207 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0914 18:14:35.416796   62207 kubeadm.go:310] 
	I0914 18:14:35.416899   62207 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0914 18:14:35.416998   62207 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0914 18:14:35.417007   62207 kubeadm.go:310] 
	I0914 18:14:35.417125   62207 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 2jk8ol.l80z6l7tm2nt4pl7 \
	I0914 18:14:35.417247   62207 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:30a8b6e98108730ec92a246ad3f4e746211e79f910581e46cd69c4cd78384600 \
	I0914 18:14:35.417272   62207 kubeadm.go:310] 	--control-plane 
	I0914 18:14:35.417276   62207 kubeadm.go:310] 
	I0914 18:14:35.417399   62207 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0914 18:14:35.417422   62207 kubeadm.go:310] 
	I0914 18:14:35.417530   62207 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 2jk8ol.l80z6l7tm2nt4pl7 \
	I0914 18:14:35.417686   62207 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:30a8b6e98108730ec92a246ad3f4e746211e79f910581e46cd69c4cd78384600 
	I0914 18:14:35.417705   62207 cni.go:84] Creating CNI manager for ""
	I0914 18:14:35.417713   62207 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0914 18:14:35.420023   62207 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0914 18:14:35.421095   62207 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0914 18:14:35.432619   62207 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0914 18:14:35.451720   62207 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0914 18:14:35.451790   62207 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 18:14:35.451836   62207 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-168587 minikube.k8s.io/updated_at=2024_09_14T18_14_35_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=fbeeb744274463b05401c917e5ab21bbaf5ef95a minikube.k8s.io/name=no-preload-168587 minikube.k8s.io/primary=true
	I0914 18:14:35.654681   62207 ops.go:34] apiserver oom_adj: -16
	I0914 18:14:35.654714   62207 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 18:14:36.155376   62207 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 18:14:36.655468   62207 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 18:14:37.155741   62207 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 18:14:37.655416   62207 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 18:14:38.154935   62207 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 18:14:38.655465   62207 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 18:14:38.740860   62207 kubeadm.go:1113] duration metric: took 3.289121705s to wait for elevateKubeSystemPrivileges
	I0914 18:14:38.740912   62207 kubeadm.go:394] duration metric: took 4m57.036377829s to StartCluster
	I0914 18:14:38.740939   62207 settings.go:142] acquiring lock: {Name:mkee31cd57c3a91eff581c48ba7961460fb3e0b1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 18:14:38.741029   62207 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19643-8806/kubeconfig
	I0914 18:14:38.742754   62207 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19643-8806/kubeconfig: {Name:mk850f3e2f3c2ef1e81c795ae72e22e459e27140 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 18:14:38.742977   62207 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.38 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0914 18:14:38.743138   62207 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0914 18:14:38.743260   62207 addons.go:69] Setting storage-provisioner=true in profile "no-preload-168587"
	I0914 18:14:38.743271   62207 addons.go:69] Setting default-storageclass=true in profile "no-preload-168587"
	I0914 18:14:38.743282   62207 addons.go:234] Setting addon storage-provisioner=true in "no-preload-168587"
	I0914 18:14:38.743290   62207 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-168587"
	W0914 18:14:38.743295   62207 addons.go:243] addon storage-provisioner should already be in state true
	I0914 18:14:38.743306   62207 addons.go:69] Setting metrics-server=true in profile "no-preload-168587"
	I0914 18:14:38.743329   62207 host.go:66] Checking if "no-preload-168587" exists ...
	I0914 18:14:38.743334   62207 addons.go:234] Setting addon metrics-server=true in "no-preload-168587"
	I0914 18:14:38.743362   62207 config.go:182] Loaded profile config "no-preload-168587": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	W0914 18:14:38.743365   62207 addons.go:243] addon metrics-server should already be in state true
	I0914 18:14:38.743442   62207 host.go:66] Checking if "no-preload-168587" exists ...
	I0914 18:14:38.743775   62207 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 18:14:38.743814   62207 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 18:14:38.743843   62207 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 18:14:38.743821   62207 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 18:14:38.743775   62207 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 18:14:38.744070   62207 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 18:14:38.744427   62207 out.go:177] * Verifying Kubernetes components...
	I0914 18:14:38.745716   62207 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 18:14:38.760250   62207 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43795
	I0914 18:14:38.760329   62207 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41097
	I0914 18:14:38.760788   62207 main.go:141] libmachine: () Calling .GetVersion
	I0914 18:14:38.760810   62207 main.go:141] libmachine: () Calling .GetVersion
	I0914 18:14:38.761416   62207 main.go:141] libmachine: Using API Version  1
	I0914 18:14:38.761438   62207 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 18:14:38.761554   62207 main.go:141] libmachine: Using API Version  1
	I0914 18:14:38.761581   62207 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 18:14:38.761829   62207 main.go:141] libmachine: () Calling .GetMachineName
	I0914 18:14:38.761980   62207 main.go:141] libmachine: () Calling .GetMachineName
	I0914 18:14:38.762333   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetState
	I0914 18:14:38.762445   62207 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 18:14:38.762495   62207 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 18:14:38.763295   62207 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44059
	I0914 18:14:38.763767   62207 main.go:141] libmachine: () Calling .GetVersion
	I0914 18:14:38.764256   62207 main.go:141] libmachine: Using API Version  1
	I0914 18:14:38.764285   62207 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 18:14:38.764616   62207 main.go:141] libmachine: () Calling .GetMachineName
	I0914 18:14:38.765095   62207 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 18:14:38.765131   62207 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 18:14:38.765525   62207 addons.go:234] Setting addon default-storageclass=true in "no-preload-168587"
	W0914 18:14:38.765544   62207 addons.go:243] addon default-storageclass should already be in state true
	I0914 18:14:38.765568   62207 host.go:66] Checking if "no-preload-168587" exists ...
	I0914 18:14:38.765798   62207 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 18:14:38.765837   62207 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 18:14:38.782208   62207 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42659
	I0914 18:14:38.782527   62207 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41381
	I0914 18:14:38.782564   62207 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33835
	I0914 18:14:38.782679   62207 main.go:141] libmachine: () Calling .GetVersion
	I0914 18:14:38.782943   62207 main.go:141] libmachine: () Calling .GetVersion
	I0914 18:14:38.782973   62207 main.go:141] libmachine: () Calling .GetVersion
	I0914 18:14:38.783413   62207 main.go:141] libmachine: Using API Version  1
	I0914 18:14:38.783433   62207 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 18:14:38.783554   62207 main.go:141] libmachine: Using API Version  1
	I0914 18:14:38.783566   62207 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 18:14:38.783573   62207 main.go:141] libmachine: Using API Version  1
	I0914 18:14:38.783585   62207 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 18:14:38.783956   62207 main.go:141] libmachine: () Calling .GetMachineName
	I0914 18:14:38.783964   62207 main.go:141] libmachine: () Calling .GetMachineName
	I0914 18:14:38.784444   62207 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 18:14:38.784482   62207 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 18:14:38.784639   62207 main.go:141] libmachine: () Calling .GetMachineName
	I0914 18:14:38.784666   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetState
	I0914 18:14:38.784806   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetState
	I0914 18:14:38.786340   62207 main.go:141] libmachine: (no-preload-168587) Calling .DriverName
	I0914 18:14:38.786797   62207 main.go:141] libmachine: (no-preload-168587) Calling .DriverName
	I0914 18:14:38.788188   62207 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 18:14:38.788195   62207 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0914 18:14:38.789239   62207 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0914 18:14:38.789254   62207 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0914 18:14:38.789273   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetSSHHostname
	I0914 18:14:38.789338   62207 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0914 18:14:38.789347   62207 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0914 18:14:38.789358   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetSSHHostname
	I0914 18:14:38.792968   62207 main.go:141] libmachine: (no-preload-168587) DBG | domain no-preload-168587 has defined MAC address 52:54:00:4c:40:8a in network mk-no-preload-168587
	I0914 18:14:38.793521   62207 main.go:141] libmachine: (no-preload-168587) DBG | domain no-preload-168587 has defined MAC address 52:54:00:4c:40:8a in network mk-no-preload-168587
	I0914 18:14:38.793853   62207 main.go:141] libmachine: (no-preload-168587) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:40:8a", ip: ""} in network mk-no-preload-168587: {Iface:virbr2 ExpiryTime:2024-09-14 18:59:50 +0000 UTC Type:0 Mac:52:54:00:4c:40:8a Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:no-preload-168587 Clientid:01:52:54:00:4c:40:8a}
	I0914 18:14:38.793894   62207 main.go:141] libmachine: (no-preload-168587) DBG | domain no-preload-168587 has defined IP address 192.168.39.38 and MAC address 52:54:00:4c:40:8a in network mk-no-preload-168587
	I0914 18:14:38.794037   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetSSHPort
	I0914 18:14:38.794097   62207 main.go:141] libmachine: (no-preload-168587) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:40:8a", ip: ""} in network mk-no-preload-168587: {Iface:virbr2 ExpiryTime:2024-09-14 18:59:50 +0000 UTC Type:0 Mac:52:54:00:4c:40:8a Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:no-preload-168587 Clientid:01:52:54:00:4c:40:8a}
	I0914 18:14:38.794107   62207 main.go:141] libmachine: (no-preload-168587) DBG | domain no-preload-168587 has defined IP address 192.168.39.38 and MAC address 52:54:00:4c:40:8a in network mk-no-preload-168587
	I0914 18:14:38.794258   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetSSHKeyPath
	I0914 18:14:38.794351   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetSSHPort
	I0914 18:14:38.794499   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetSSHKeyPath
	I0914 18:14:38.794531   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetSSHUsername
	I0914 18:14:38.794635   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetSSHUsername
	I0914 18:14:38.794716   62207 sshutil.go:53] new ssh client: &{IP:192.168.39.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/no-preload-168587/id_rsa Username:docker}
	I0914 18:14:38.794777   62207 sshutil.go:53] new ssh client: &{IP:192.168.39.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/no-preload-168587/id_rsa Username:docker}
	I0914 18:14:38.827254   62207 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38643
	I0914 18:14:38.827852   62207 main.go:141] libmachine: () Calling .GetVersion
	I0914 18:14:38.828434   62207 main.go:141] libmachine: Using API Version  1
	I0914 18:14:38.828460   62207 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 18:14:38.828837   62207 main.go:141] libmachine: () Calling .GetMachineName
	I0914 18:14:38.829088   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetState
	I0914 18:14:38.830820   62207 main.go:141] libmachine: (no-preload-168587) Calling .DriverName
	I0914 18:14:38.831031   62207 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0914 18:14:38.831048   62207 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0914 18:14:38.831067   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetSSHHostname
	I0914 18:14:38.833822   62207 main.go:141] libmachine: (no-preload-168587) DBG | domain no-preload-168587 has defined MAC address 52:54:00:4c:40:8a in network mk-no-preload-168587
	I0914 18:14:38.834242   62207 main.go:141] libmachine: (no-preload-168587) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:40:8a", ip: ""} in network mk-no-preload-168587: {Iface:virbr2 ExpiryTime:2024-09-14 18:59:50 +0000 UTC Type:0 Mac:52:54:00:4c:40:8a Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:no-preload-168587 Clientid:01:52:54:00:4c:40:8a}
	I0914 18:14:38.834282   62207 main.go:141] libmachine: (no-preload-168587) DBG | domain no-preload-168587 has defined IP address 192.168.39.38 and MAC address 52:54:00:4c:40:8a in network mk-no-preload-168587
	I0914 18:14:38.834453   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetSSHPort
	I0914 18:14:38.834641   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetSSHKeyPath
	I0914 18:14:38.834794   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetSSHUsername
	I0914 18:14:38.834963   62207 sshutil.go:53] new ssh client: &{IP:192.168.39.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/no-preload-168587/id_rsa Username:docker}
	I0914 18:14:38.920627   62207 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0914 18:14:38.941951   62207 node_ready.go:35] waiting up to 6m0s for node "no-preload-168587" to be "Ready" ...
	I0914 18:14:38.973102   62207 node_ready.go:49] node "no-preload-168587" has status "Ready":"True"
	I0914 18:14:38.973124   62207 node_ready.go:38] duration metric: took 31.146661ms for node "no-preload-168587" to be "Ready" ...
	I0914 18:14:38.973132   62207 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 18:14:38.989712   62207 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-168587" in "kube-system" namespace to be "Ready" ...
	I0914 18:14:39.018196   62207 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0914 18:14:39.018223   62207 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0914 18:14:39.045691   62207 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0914 18:14:39.066249   62207 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0914 18:14:39.066277   62207 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0914 18:14:39.073017   62207 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0914 18:14:39.118360   62207 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0914 18:14:39.118385   62207 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0914 18:14:39.195268   62207 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0914 18:14:39.874924   62207 main.go:141] libmachine: Making call to close driver server
	I0914 18:14:39.874953   62207 main.go:141] libmachine: (no-preload-168587) Calling .Close
	I0914 18:14:39.874950   62207 main.go:141] libmachine: Making call to close driver server
	I0914 18:14:39.875004   62207 main.go:141] libmachine: (no-preload-168587) Calling .Close
	I0914 18:14:39.875398   62207 main.go:141] libmachine: (no-preload-168587) DBG | Closing plugin on server side
	I0914 18:14:39.875406   62207 main.go:141] libmachine: Successfully made call to close driver server
	I0914 18:14:39.875457   62207 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 18:14:39.875466   62207 main.go:141] libmachine: Making call to close driver server
	I0914 18:14:39.875476   62207 main.go:141] libmachine: (no-preload-168587) Calling .Close
	I0914 18:14:39.875406   62207 main.go:141] libmachine: (no-preload-168587) DBG | Closing plugin on server side
	I0914 18:14:39.875430   62207 main.go:141] libmachine: Successfully made call to close driver server
	I0914 18:14:39.875598   62207 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 18:14:39.875609   62207 main.go:141] libmachine: Making call to close driver server
	I0914 18:14:39.875631   62207 main.go:141] libmachine: (no-preload-168587) Calling .Close
	I0914 18:14:39.875914   62207 main.go:141] libmachine: (no-preload-168587) DBG | Closing plugin on server side
	I0914 18:14:39.875916   62207 main.go:141] libmachine: Successfully made call to close driver server
	I0914 18:14:39.875934   62207 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 18:14:39.875939   62207 main.go:141] libmachine: (no-preload-168587) DBG | Closing plugin on server side
	I0914 18:14:39.875959   62207 main.go:141] libmachine: Successfully made call to close driver server
	I0914 18:14:39.875966   62207 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 18:14:39.929860   62207 main.go:141] libmachine: Making call to close driver server
	I0914 18:14:39.929881   62207 main.go:141] libmachine: (no-preload-168587) Calling .Close
	I0914 18:14:39.930191   62207 main.go:141] libmachine: Successfully made call to close driver server
	I0914 18:14:39.930211   62207 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 18:14:40.139888   62207 main.go:141] libmachine: Making call to close driver server
	I0914 18:14:40.139918   62207 main.go:141] libmachine: (no-preload-168587) Calling .Close
	I0914 18:14:40.140256   62207 main.go:141] libmachine: Successfully made call to close driver server
	I0914 18:14:40.140273   62207 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 18:14:40.140282   62207 main.go:141] libmachine: Making call to close driver server
	I0914 18:14:40.140289   62207 main.go:141] libmachine: (no-preload-168587) Calling .Close
	I0914 18:14:40.140608   62207 main.go:141] libmachine: Successfully made call to close driver server
	I0914 18:14:40.140620   62207 main.go:141] libmachine: (no-preload-168587) DBG | Closing plugin on server side
	I0914 18:14:40.140630   62207 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 18:14:40.140646   62207 addons.go:475] Verifying addon metrics-server=true in "no-preload-168587"
	I0914 18:14:40.142461   62207 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0914 18:14:40.143818   62207 addons.go:510] duration metric: took 1.400695696s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0914 18:14:40.996599   62207 pod_ready.go:103] pod "etcd-no-preload-168587" in "kube-system" namespace has status "Ready":"False"
	I0914 18:14:43.498584   62207 pod_ready.go:103] pod "etcd-no-preload-168587" in "kube-system" namespace has status "Ready":"False"
	I0914 18:14:45.995938   62207 pod_ready.go:93] pod "etcd-no-preload-168587" in "kube-system" namespace has status "Ready":"True"
	I0914 18:14:45.995971   62207 pod_ready.go:82] duration metric: took 7.006220602s for pod "etcd-no-preload-168587" in "kube-system" namespace to be "Ready" ...
	I0914 18:14:45.995984   62207 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-168587" in "kube-system" namespace to be "Ready" ...
	I0914 18:14:46.000589   62207 pod_ready.go:93] pod "kube-apiserver-no-preload-168587" in "kube-system" namespace has status "Ready":"True"
	I0914 18:14:46.000609   62207 pod_ready.go:82] duration metric: took 4.618617ms for pod "kube-apiserver-no-preload-168587" in "kube-system" namespace to be "Ready" ...
	I0914 18:14:46.000620   62207 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-168587" in "kube-system" namespace to be "Ready" ...
	I0914 18:14:46.004865   62207 pod_ready.go:93] pod "kube-controller-manager-no-preload-168587" in "kube-system" namespace has status "Ready":"True"
	I0914 18:14:46.004886   62207 pod_ready.go:82] duration metric: took 4.259787ms for pod "kube-controller-manager-no-preload-168587" in "kube-system" namespace to be "Ready" ...
	I0914 18:14:46.004895   62207 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-xdj6b" in "kube-system" namespace to be "Ready" ...
	I0914 18:14:46.009225   62207 pod_ready.go:93] pod "kube-proxy-xdj6b" in "kube-system" namespace has status "Ready":"True"
	I0914 18:14:46.009243   62207 pod_ready.go:82] duration metric: took 4.343161ms for pod "kube-proxy-xdj6b" in "kube-system" namespace to be "Ready" ...
	I0914 18:14:46.009250   62207 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-168587" in "kube-system" namespace to be "Ready" ...
	I0914 18:14:46.013312   62207 pod_ready.go:93] pod "kube-scheduler-no-preload-168587" in "kube-system" namespace has status "Ready":"True"
	I0914 18:14:46.013330   62207 pod_ready.go:82] duration metric: took 4.073817ms for pod "kube-scheduler-no-preload-168587" in "kube-system" namespace to be "Ready" ...
	I0914 18:14:46.013337   62207 pod_ready.go:39] duration metric: took 7.040196066s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 18:14:46.013358   62207 api_server.go:52] waiting for apiserver process to appear ...
	I0914 18:14:46.013403   62207 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:14:46.029881   62207 api_server.go:72] duration metric: took 7.286871398s to wait for apiserver process to appear ...
	I0914 18:14:46.029912   62207 api_server.go:88] waiting for apiserver healthz status ...
	I0914 18:14:46.029937   62207 api_server.go:253] Checking apiserver healthz at https://192.168.39.38:8443/healthz ...
	I0914 18:14:46.034236   62207 api_server.go:279] https://192.168.39.38:8443/healthz returned 200:
	ok
	I0914 18:14:46.035287   62207 api_server.go:141] control plane version: v1.31.1
	I0914 18:14:46.035305   62207 api_server.go:131] duration metric: took 5.385499ms to wait for apiserver health ...
	I0914 18:14:46.035314   62207 system_pods.go:43] waiting for kube-system pods to appear ...
	I0914 18:14:46.196765   62207 system_pods.go:59] 9 kube-system pods found
	I0914 18:14:46.196796   62207 system_pods.go:61] "coredns-7c65d6cfc9-nzpdb" [acd2d488-301e-4d00-a17a-0e06ea5d9691] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0914 18:14:46.196804   62207 system_pods.go:61] "coredns-7c65d6cfc9-qrgr9" [31b611b3-d861-451f-8c17-30bed52994a0] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0914 18:14:46.196810   62207 system_pods.go:61] "etcd-no-preload-168587" [8d3dc146-eb39-4ed3-9409-63ea08fffd39] Running
	I0914 18:14:46.196816   62207 system_pods.go:61] "kube-apiserver-no-preload-168587" [9a68e520-b40c-42d2-9052-757c7c99a958] Running
	I0914 18:14:46.196821   62207 system_pods.go:61] "kube-controller-manager-no-preload-168587" [47d23e53-0f6b-45bd-8288-3101a35b1827] Running
	I0914 18:14:46.196824   62207 system_pods.go:61] "kube-proxy-xdj6b" [d3080090-4f40-49e1-9c3e-ccceb37cc952] Running
	I0914 18:14:46.196827   62207 system_pods.go:61] "kube-scheduler-no-preload-168587" [e7fe55f8-d203-4de2-9594-15f858453434] Running
	I0914 18:14:46.196832   62207 system_pods.go:61] "metrics-server-6867b74b74-cmcz4" [24cea6b3-a107-4110-ac29-88389b55bbdc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 18:14:46.196835   62207 system_pods.go:61] "storage-provisioner" [57b6d85d-fc04-42da-9452-3f24824b8377] Running
	I0914 18:14:46.196842   62207 system_pods.go:74] duration metric: took 161.510322ms to wait for pod list to return data ...
	I0914 18:14:46.196853   62207 default_sa.go:34] waiting for default service account to be created ...
	I0914 18:14:46.394399   62207 default_sa.go:45] found service account: "default"
	I0914 18:14:46.394428   62207 default_sa.go:55] duration metric: took 197.566762ms for default service account to be created ...
	I0914 18:14:46.394443   62207 system_pods.go:116] waiting for k8s-apps to be running ...
	I0914 18:14:46.596421   62207 system_pods.go:86] 9 kube-system pods found
	I0914 18:14:46.596454   62207 system_pods.go:89] "coredns-7c65d6cfc9-nzpdb" [acd2d488-301e-4d00-a17a-0e06ea5d9691] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0914 18:14:46.596462   62207 system_pods.go:89] "coredns-7c65d6cfc9-qrgr9" [31b611b3-d861-451f-8c17-30bed52994a0] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0914 18:14:46.596468   62207 system_pods.go:89] "etcd-no-preload-168587" [8d3dc146-eb39-4ed3-9409-63ea08fffd39] Running
	I0914 18:14:46.596473   62207 system_pods.go:89] "kube-apiserver-no-preload-168587" [9a68e520-b40c-42d2-9052-757c7c99a958] Running
	I0914 18:14:46.596477   62207 system_pods.go:89] "kube-controller-manager-no-preload-168587" [47d23e53-0f6b-45bd-8288-3101a35b1827] Running
	I0914 18:14:46.596480   62207 system_pods.go:89] "kube-proxy-xdj6b" [d3080090-4f40-49e1-9c3e-ccceb37cc952] Running
	I0914 18:14:46.596483   62207 system_pods.go:89] "kube-scheduler-no-preload-168587" [e7fe55f8-d203-4de2-9594-15f858453434] Running
	I0914 18:14:46.596502   62207 system_pods.go:89] "metrics-server-6867b74b74-cmcz4" [24cea6b3-a107-4110-ac29-88389b55bbdc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 18:14:46.596509   62207 system_pods.go:89] "storage-provisioner" [57b6d85d-fc04-42da-9452-3f24824b8377] Running
	I0914 18:14:46.596517   62207 system_pods.go:126] duration metric: took 202.067078ms to wait for k8s-apps to be running ...
	I0914 18:14:46.596527   62207 system_svc.go:44] waiting for kubelet service to be running ....
	I0914 18:14:46.596571   62207 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 18:14:46.611796   62207 system_svc.go:56] duration metric: took 15.259464ms WaitForService to wait for kubelet
	I0914 18:14:46.611837   62207 kubeadm.go:582] duration metric: took 7.868833105s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0914 18:14:46.611858   62207 node_conditions.go:102] verifying NodePressure condition ...
	I0914 18:14:46.794731   62207 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0914 18:14:46.794758   62207 node_conditions.go:123] node cpu capacity is 2
	I0914 18:14:46.794767   62207 node_conditions.go:105] duration metric: took 182.903835ms to run NodePressure ...
	I0914 18:14:46.794777   62207 start.go:241] waiting for startup goroutines ...
	I0914 18:14:46.794783   62207 start.go:246] waiting for cluster config update ...
	I0914 18:14:46.794793   62207 start.go:255] writing updated cluster config ...
	I0914 18:14:46.795051   62207 ssh_runner.go:195] Run: rm -f paused
	I0914 18:14:46.845803   62207 start.go:600] kubectl: 1.31.0, cluster: 1.31.1 (minor skew: 0)
	I0914 18:14:46.847399   62207 out.go:177] * Done! kubectl is now configured to use "no-preload-168587" cluster and "default" namespace by default
	I0914 18:14:53.509475   62996 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0914 18:14:53.509669   62996 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0914 18:14:53.509699   62996 kubeadm.go:310] 
	I0914 18:14:53.509778   62996 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0914 18:14:53.509838   62996 kubeadm.go:310] 		timed out waiting for the condition
	I0914 18:14:53.509849   62996 kubeadm.go:310] 
	I0914 18:14:53.509901   62996 kubeadm.go:310] 	This error is likely caused by:
	I0914 18:14:53.509966   62996 kubeadm.go:310] 		- The kubelet is not running
	I0914 18:14:53.510115   62996 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0914 18:14:53.510126   62996 kubeadm.go:310] 
	I0914 18:14:53.510293   62996 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0914 18:14:53.510346   62996 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0914 18:14:53.510386   62996 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0914 18:14:53.510394   62996 kubeadm.go:310] 
	I0914 18:14:53.510487   62996 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0914 18:14:53.510567   62996 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0914 18:14:53.510582   62996 kubeadm.go:310] 
	I0914 18:14:53.510758   62996 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0914 18:14:53.510852   62996 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0914 18:14:53.510953   62996 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0914 18:14:53.511074   62996 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0914 18:14:53.511085   62996 kubeadm.go:310] 
	I0914 18:14:53.511727   62996 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0914 18:14:53.511824   62996 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0914 18:14:53.511904   62996 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0914 18:14:53.512051   62996 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0914 18:14:53.512098   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0914 18:14:53.965324   62996 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 18:14:53.982028   62996 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0914 18:14:53.993640   62996 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0914 18:14:53.993674   62996 kubeadm.go:157] found existing configuration files:
	
	I0914 18:14:53.993745   62996 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0914 18:14:54.004600   62996 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0914 18:14:54.004669   62996 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0914 18:14:54.015315   62996 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0914 18:14:54.025727   62996 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0914 18:14:54.025795   62996 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0914 18:14:54.035619   62996 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0914 18:14:54.044936   62996 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0914 18:14:54.045003   62996 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0914 18:14:54.055091   62996 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0914 18:14:54.064576   62996 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0914 18:14:54.064630   62996 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0914 18:14:54.074698   62996 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0914 18:14:54.143625   62996 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0914 18:14:54.143712   62996 kubeadm.go:310] [preflight] Running pre-flight checks
	I0914 18:14:54.289361   62996 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0914 18:14:54.289488   62996 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0914 18:14:54.289629   62996 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0914 18:14:54.479052   62996 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0914 18:14:54.481175   62996 out.go:235]   - Generating certificates and keys ...
	I0914 18:14:54.481284   62996 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0914 18:14:54.481391   62996 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0914 18:14:54.481469   62996 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0914 18:14:54.481522   62996 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0914 18:14:54.481585   62996 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0914 18:14:54.481631   62996 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0914 18:14:54.481685   62996 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0914 18:14:54.481737   62996 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0914 18:14:54.481829   62996 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0914 18:14:54.481926   62996 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0914 18:14:54.481977   62996 kubeadm.go:310] [certs] Using the existing "sa" key
	I0914 18:14:54.482063   62996 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0914 18:14:54.695002   62996 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0914 18:14:54.850598   62996 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0914 18:14:54.964590   62996 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0914 18:14:55.108047   62996 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0914 18:14:55.126530   62996 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0914 18:14:55.128690   62996 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0914 18:14:55.128760   62996 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0914 18:14:55.272139   62996 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0914 18:14:55.274365   62996 out.go:235]   - Booting up control plane ...
	I0914 18:14:55.274529   62996 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0914 18:14:55.279796   62996 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0914 18:14:55.281097   62996 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0914 18:14:55.281998   62996 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0914 18:14:55.285620   62996 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0914 18:15:35.288294   62996 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0914 18:15:35.288485   62996 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0914 18:15:35.288693   62996 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0914 18:15:40.289032   62996 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0914 18:15:40.289327   62996 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0914 18:15:50.289795   62996 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0914 18:15:50.290023   62996 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0914 18:16:10.291201   62996 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0914 18:16:10.291427   62996 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0914 18:16:50.292253   62996 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0914 18:16:50.292481   62996 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0914 18:16:50.292503   62996 kubeadm.go:310] 
	I0914 18:16:50.292554   62996 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0914 18:16:50.292606   62996 kubeadm.go:310] 		timed out waiting for the condition
	I0914 18:16:50.292615   62996 kubeadm.go:310] 
	I0914 18:16:50.292654   62996 kubeadm.go:310] 	This error is likely caused by:
	I0914 18:16:50.292685   62996 kubeadm.go:310] 		- The kubelet is not running
	I0914 18:16:50.292773   62996 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0914 18:16:50.292780   62996 kubeadm.go:310] 
	I0914 18:16:50.292912   62996 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0914 18:16:50.292953   62996 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0914 18:16:50.292993   62996 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0914 18:16:50.293022   62996 kubeadm.go:310] 
	I0914 18:16:50.293176   62996 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0914 18:16:50.293293   62996 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0914 18:16:50.293308   62996 kubeadm.go:310] 
	I0914 18:16:50.293470   62996 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0914 18:16:50.293602   62996 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0914 18:16:50.293709   62996 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0914 18:16:50.293810   62996 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0914 18:16:50.293830   62996 kubeadm.go:310] 
	I0914 18:16:50.294646   62996 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0914 18:16:50.294759   62996 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0914 18:16:50.294871   62996 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0914 18:16:50.294910   62996 kubeadm.go:394] duration metric: took 7m56.82551772s to StartCluster
	I0914 18:16:50.294961   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:16:50.295021   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:16:50.341859   62996 cri.go:89] found id: ""
	I0914 18:16:50.341894   62996 logs.go:276] 0 containers: []
	W0914 18:16:50.341908   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:16:50.341916   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:16:50.341983   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:16:50.380725   62996 cri.go:89] found id: ""
	I0914 18:16:50.380755   62996 logs.go:276] 0 containers: []
	W0914 18:16:50.380766   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:16:50.380773   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:16:50.380842   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:16:50.415978   62996 cri.go:89] found id: ""
	I0914 18:16:50.416003   62996 logs.go:276] 0 containers: []
	W0914 18:16:50.416012   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:16:50.416017   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:16:50.416065   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:16:50.452823   62996 cri.go:89] found id: ""
	I0914 18:16:50.452859   62996 logs.go:276] 0 containers: []
	W0914 18:16:50.452872   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:16:50.452882   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:16:50.452939   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:16:50.487240   62996 cri.go:89] found id: ""
	I0914 18:16:50.487272   62996 logs.go:276] 0 containers: []
	W0914 18:16:50.487283   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:16:50.487291   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:16:50.487353   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:16:50.520690   62996 cri.go:89] found id: ""
	I0914 18:16:50.520719   62996 logs.go:276] 0 containers: []
	W0914 18:16:50.520728   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:16:50.520735   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:16:50.520783   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:16:50.558150   62996 cri.go:89] found id: ""
	I0914 18:16:50.558191   62996 logs.go:276] 0 containers: []
	W0914 18:16:50.558200   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:16:50.558206   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:16:50.558266   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:16:50.595843   62996 cri.go:89] found id: ""
	I0914 18:16:50.595879   62996 logs.go:276] 0 containers: []
	W0914 18:16:50.595893   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:16:50.595905   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:16:50.595920   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:16:50.650623   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:16:50.650659   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:16:50.664991   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:16:50.665018   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:16:50.747876   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:16:50.747899   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:16:50.747915   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:16:50.849314   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:16:50.849354   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0914 18:16:50.889101   62996 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0914 18:16:50.889181   62996 out.go:270] * 
	W0914 18:16:50.889263   62996 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0914 18:16:50.889287   62996 out.go:270] * 
	W0914 18:16:50.890531   62996 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0914 18:16:50.893666   62996 out.go:201] 
	W0914 18:16:50.894916   62996 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0914 18:16:50.894958   62996 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0914 18:16:50.894991   62996 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0914 18:16:50.896591   62996 out.go:201] 
	
	
	==> CRI-O <==
	Sep 14 18:16:52 old-k8s-version-556121 crio[630]: time="2024-09-14 18:16:52.893068594Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726337812893039682,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=797efc8c-1883-4d19-9582-f3e7597d7924 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 14 18:16:52 old-k8s-version-556121 crio[630]: time="2024-09-14 18:16:52.893644984Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=cd8bff99-54bd-49dd-bed3-4cea68939a39 name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 18:16:52 old-k8s-version-556121 crio[630]: time="2024-09-14 18:16:52.893720432Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=cd8bff99-54bd-49dd-bed3-4cea68939a39 name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 18:16:52 old-k8s-version-556121 crio[630]: time="2024-09-14 18:16:52.893757799Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=cd8bff99-54bd-49dd-bed3-4cea68939a39 name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 18:16:52 old-k8s-version-556121 crio[630]: time="2024-09-14 18:16:52.926393372Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=5311bc72-e802-4e30-9555-f79934ea0ab5 name=/runtime.v1.RuntimeService/Version
	Sep 14 18:16:52 old-k8s-version-556121 crio[630]: time="2024-09-14 18:16:52.926499865Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=5311bc72-e802-4e30-9555-f79934ea0ab5 name=/runtime.v1.RuntimeService/Version
	Sep 14 18:16:52 old-k8s-version-556121 crio[630]: time="2024-09-14 18:16:52.927853567Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=6851a2f9-6ec6-4240-a0a0-c909ab6e38d0 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 14 18:16:52 old-k8s-version-556121 crio[630]: time="2024-09-14 18:16:52.928269703Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726337812928246220,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6851a2f9-6ec6-4240-a0a0-c909ab6e38d0 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 14 18:16:52 old-k8s-version-556121 crio[630]: time="2024-09-14 18:16:52.928850408Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c9efd8f9-8819-4a9e-9d96-230a48c60d14 name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 18:16:52 old-k8s-version-556121 crio[630]: time="2024-09-14 18:16:52.928917648Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c9efd8f9-8819-4a9e-9d96-230a48c60d14 name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 18:16:52 old-k8s-version-556121 crio[630]: time="2024-09-14 18:16:52.928988828Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=c9efd8f9-8819-4a9e-9d96-230a48c60d14 name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 18:16:52 old-k8s-version-556121 crio[630]: time="2024-09-14 18:16:52.961701575Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=67e75cd0-80f9-4b38-93e1-8686a66995a8 name=/runtime.v1.RuntimeService/Version
	Sep 14 18:16:52 old-k8s-version-556121 crio[630]: time="2024-09-14 18:16:52.961793804Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=67e75cd0-80f9-4b38-93e1-8686a66995a8 name=/runtime.v1.RuntimeService/Version
	Sep 14 18:16:52 old-k8s-version-556121 crio[630]: time="2024-09-14 18:16:52.963275332Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=984f8508-9da4-4b8d-8dbc-863a2f09c367 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 14 18:16:52 old-k8s-version-556121 crio[630]: time="2024-09-14 18:16:52.963670591Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726337812963642965,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=984f8508-9da4-4b8d-8dbc-863a2f09c367 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 14 18:16:52 old-k8s-version-556121 crio[630]: time="2024-09-14 18:16:52.964227814Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=bedda112-4086-4597-84ff-eee5260492c0 name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 18:16:52 old-k8s-version-556121 crio[630]: time="2024-09-14 18:16:52.964299470Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=bedda112-4086-4597-84ff-eee5260492c0 name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 18:16:52 old-k8s-version-556121 crio[630]: time="2024-09-14 18:16:52.964337635Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=bedda112-4086-4597-84ff-eee5260492c0 name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 18:16:52 old-k8s-version-556121 crio[630]: time="2024-09-14 18:16:52.999140668Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=571e17b7-a13c-4b39-b3ba-4e840318c006 name=/runtime.v1.RuntimeService/Version
	Sep 14 18:16:52 old-k8s-version-556121 crio[630]: time="2024-09-14 18:16:52.999240262Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=571e17b7-a13c-4b39-b3ba-4e840318c006 name=/runtime.v1.RuntimeService/Version
	Sep 14 18:16:53 old-k8s-version-556121 crio[630]: time="2024-09-14 18:16:53.000518756Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=7549fb1a-a61d-4cb3-932b-1dbfd409a216 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 14 18:16:53 old-k8s-version-556121 crio[630]: time="2024-09-14 18:16:53.001006943Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726337813000923468,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7549fb1a-a61d-4cb3-932b-1dbfd409a216 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 14 18:16:53 old-k8s-version-556121 crio[630]: time="2024-09-14 18:16:53.001768468Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3032d809-f99a-4b98-9ba4-8ce8454c5f35 name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 18:16:53 old-k8s-version-556121 crio[630]: time="2024-09-14 18:16:53.001844682Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3032d809-f99a-4b98-9ba4-8ce8454c5f35 name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 18:16:53 old-k8s-version-556121 crio[630]: time="2024-09-14 18:16:53.001897927Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=3032d809-f99a-4b98-9ba4-8ce8454c5f35 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Sep14 18:08] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.051703] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.041033] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.818277] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.926515] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.580247] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000014] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +9.280362] systemd-fstab-generator[556]: Ignoring "noauto" option for root device
	[  +0.069665] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.058885] systemd-fstab-generator[569]: Ignoring "noauto" option for root device
	[  +0.193036] systemd-fstab-generator[583]: Ignoring "noauto" option for root device
	[  +0.156845] systemd-fstab-generator[595]: Ignoring "noauto" option for root device
	[  +0.249799] systemd-fstab-generator[620]: Ignoring "noauto" option for root device
	[  +6.598174] systemd-fstab-generator[874]: Ignoring "noauto" option for root device
	[  +0.066263] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.657757] systemd-fstab-generator[999]: Ignoring "noauto" option for root device
	[Sep14 18:09] kauditd_printk_skb: 46 callbacks suppressed
	[Sep14 18:12] systemd-fstab-generator[5028]: Ignoring "noauto" option for root device
	[Sep14 18:14] systemd-fstab-generator[5317]: Ignoring "noauto" option for root device
	[  +0.068697] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 18:16:53 up 8 min,  0 users,  load average: 0.14, 0.09, 0.04
	Linux old-k8s-version-556121 5.10.207 #1 SMP Sat Sep 14 07:35:46 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Sep 14 18:16:50 old-k8s-version-556121 kubelet[5497]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*controller).Run.func1(0xc00009e0c0, 0xc00075fe60)
	Sep 14 18:16:50 old-k8s-version-556121 kubelet[5497]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/controller.go:130 +0x34
	Sep 14 18:16:50 old-k8s-version-556121 kubelet[5497]: created by k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*controller).Run
	Sep 14 18:16:50 old-k8s-version-556121 kubelet[5497]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/controller.go:129 +0xa5
	Sep 14 18:16:50 old-k8s-version-556121 kubelet[5497]: goroutine 150 [select]:
	Sep 14 18:16:50 old-k8s-version-556121 kubelet[5497]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0007ebef0, 0x4f0ac20, 0xc00056bc20, 0x1, 0xc00009e0c0)
	Sep 14 18:16:50 old-k8s-version-556121 kubelet[5497]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:167 +0x149
	Sep 14 18:16:50 old-k8s-version-556121 kubelet[5497]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*Reflector).Run(0xc0006c8c40, 0xc00009e0c0)
	Sep 14 18:16:50 old-k8s-version-556121 kubelet[5497]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:220 +0x1c5
	Sep 14 18:16:50 old-k8s-version-556121 kubelet[5497]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).StartWithChannel.func1()
	Sep 14 18:16:50 old-k8s-version-556121 kubelet[5497]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:56 +0x2e
	Sep 14 18:16:50 old-k8s-version-556121 kubelet[5497]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start.func1(0xc000737e80, 0xc00089fce0)
	Sep 14 18:16:50 old-k8s-version-556121 kubelet[5497]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:73 +0x51
	Sep 14 18:16:50 old-k8s-version-556121 kubelet[5497]: created by k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start
	Sep 14 18:16:50 old-k8s-version-556121 kubelet[5497]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:71 +0x65
	Sep 14 18:16:50 old-k8s-version-556121 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Sep 14 18:16:50 old-k8s-version-556121 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Sep 14 18:16:50 old-k8s-version-556121 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 20.
	Sep 14 18:16:50 old-k8s-version-556121 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Sep 14 18:16:50 old-k8s-version-556121 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Sep 14 18:16:51 old-k8s-version-556121 kubelet[5561]: I0914 18:16:51.014107    5561 server.go:416] Version: v1.20.0
	Sep 14 18:16:51 old-k8s-version-556121 kubelet[5561]: I0914 18:16:51.014462    5561 server.go:837] Client rotation is on, will bootstrap in background
	Sep 14 18:16:51 old-k8s-version-556121 kubelet[5561]: I0914 18:16:51.017210    5561 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Sep 14 18:16:51 old-k8s-version-556121 kubelet[5561]: W0914 18:16:51.018258    5561 manager.go:159] Cannot detect current cgroup on cgroup v2
	Sep 14 18:16:51 old-k8s-version-556121 kubelet[5561]: I0914 18:16:51.018388    5561 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-556121 -n old-k8s-version-556121
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-556121 -n old-k8s-version-556121: exit status 2 (232.503209ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-556121" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (709.35s)
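The failure above follows the K8S_KUBELET_NOT_RUNNING path: kubeadm init for v1.20.0 gives up after the kubelet never answers its health check on localhost:10248, and the captured output itself points at 'journalctl -xeu kubelet', the crictl listing, and a retry with an explicit cgroup driver. A minimal shell sketch of acting on those hints, assembled from commands and flags that already appear in the log and audit table (whether the cgroup-driver override actually resolves this run is an assumption, not something the report verifies):

	# inspect the node the way the kubeadm output suggests
	out/minikube-linux-amd64 ssh -p old-k8s-version-556121 -- "sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause"
	out/minikube-linux-amd64 ssh -p old-k8s-version-556121 -- "sudo journalctl -xeu kubelet | tail -n 100"
	# retry the start with the kubelet cgroup driver pinned to systemd, per the suggestion captured in the log
	out/minikube-linux-amd64 start -p old-k8s-version-556121 --driver=kvm2 --container-runtime=crio \
	  --kubernetes-version=v1.20.0 --extra-config=kubelet.cgroup-driver=systemd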

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-243449 -n default-k8s-diff-port-243449
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-243449 -n default-k8s-diff-port-243449: exit status 3 (3.167621264s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0914 18:06:31.042521   63338 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.38:22: connect: no route to host
	E0914 18:06:31.042575   63338 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.61.38:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-243449 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-243449 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.152475143s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.61.38:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-243449 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-243449 -n default-k8s-diff-port-243449
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-243449 -n default-k8s-diff-port-243449: exit status 3 (3.063577348s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0914 18:06:40.258702   63418 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.38:22: connect: no route to host
	E0914 18:06:40.258726   63418 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.61.38:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-243449" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (12.38s)
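Here the post-stop host status is "Error" rather than "Stopped" because SSH to 192.168.61.38:22 returns "no route to host", so neither the status probe nor the dashboard addon enable can reach the machine. A rough sketch of how one might distinguish a cleanly shut-off KVM guest from one that is still defined but unreachable, assuming access to the libvirt host and that the domain carries the profile name (the virsh calls are illustrative and not part of this report's tooling):

	# what the test itself runs
	out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-243449
	# on the KVM host: is the guest shut off, or running without a reachable address?
	sudo virsh list --all | grep default-k8s-diff-port-243449
	sudo virsh domifaddr default-k8s-diff-port-243449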

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (544.38s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-044534 -n embed-certs-044534
start_stop_delete_test.go:274: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-09-14 18:22:37.392388003 +0000 UTC m=+5931.914121975
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-044534 -n embed-certs-044534
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-044534 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-044534 logs -n 25: (2.202985579s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p stopped-upgrade-319416                              | stopped-upgrade-319416       | jenkins | v1.34.0 | 14 Sep 24 18:00 UTC | 14 Sep 24 18:00 UTC |
	| start   | -p embed-certs-044534                                  | embed-certs-044534           | jenkins | v1.34.0 | 14 Sep 24 18:00 UTC | 14 Sep 24 18:01 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-168587             | no-preload-168587            | jenkins | v1.34.0 | 14 Sep 24 18:00 UTC | 14 Sep 24 18:00 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-168587                                   | no-preload-168587            | jenkins | v1.34.0 | 14 Sep 24 18:00 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-044534            | embed-certs-044534           | jenkins | v1.34.0 | 14 Sep 24 18:01 UTC | 14 Sep 24 18:01 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-044534                                  | embed-certs-044534           | jenkins | v1.34.0 | 14 Sep 24 18:01 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-470019                           | kubernetes-upgrade-470019    | jenkins | v1.34.0 | 14 Sep 24 18:02 UTC | 14 Sep 24 18:02 UTC |
	| start   | -p kubernetes-upgrade-470019                           | kubernetes-upgrade-470019    | jenkins | v1.34.0 | 14 Sep 24 18:02 UTC | 14 Sep 24 18:02 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-470019                           | kubernetes-upgrade-470019    | jenkins | v1.34.0 | 14 Sep 24 18:02 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-470019                           | kubernetes-upgrade-470019    | jenkins | v1.34.0 | 14 Sep 24 18:02 UTC | 14 Sep 24 18:02 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-470019                           | kubernetes-upgrade-470019    | jenkins | v1.34.0 | 14 Sep 24 18:03 UTC | 14 Sep 24 18:03 UTC |
	| delete  | -p                                                     | disable-driver-mounts-444413 | jenkins | v1.34.0 | 14 Sep 24 18:03 UTC | 14 Sep 24 18:03 UTC |
	|         | disable-driver-mounts-444413                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-243449 | jenkins | v1.34.0 | 14 Sep 24 18:03 UTC | 14 Sep 24 18:03 UTC |
	|         | default-k8s-diff-port-243449                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-556121        | old-k8s-version-556121       | jenkins | v1.34.0 | 14 Sep 24 18:03 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-168587                  | no-preload-168587            | jenkins | v1.34.0 | 14 Sep 24 18:03 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-168587                                   | no-preload-168587            | jenkins | v1.34.0 | 14 Sep 24 18:03 UTC | 14 Sep 24 18:14 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-044534                 | embed-certs-044534           | jenkins | v1.34.0 | 14 Sep 24 18:03 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-044534                                  | embed-certs-044534           | jenkins | v1.34.0 | 14 Sep 24 18:04 UTC | 14 Sep 24 18:13 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-243449  | default-k8s-diff-port-243449 | jenkins | v1.34.0 | 14 Sep 24 18:04 UTC | 14 Sep 24 18:04 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-243449 | jenkins | v1.34.0 | 14 Sep 24 18:04 UTC |                     |
	|         | default-k8s-diff-port-243449                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-556121                              | old-k8s-version-556121       | jenkins | v1.34.0 | 14 Sep 24 18:05 UTC | 14 Sep 24 18:05 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-556121             | old-k8s-version-556121       | jenkins | v1.34.0 | 14 Sep 24 18:05 UTC | 14 Sep 24 18:05 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-556121                              | old-k8s-version-556121       | jenkins | v1.34.0 | 14 Sep 24 18:05 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-243449       | default-k8s-diff-port-243449 | jenkins | v1.34.0 | 14 Sep 24 18:06 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-243449 | jenkins | v1.34.0 | 14 Sep 24 18:06 UTC | 14 Sep 24 18:13 UTC |
	|         | default-k8s-diff-port-243449                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/14 18:06:40
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0914 18:06:40.299903   63448 out.go:345] Setting OutFile to fd 1 ...
	I0914 18:06:40.300039   63448 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 18:06:40.300049   63448 out.go:358] Setting ErrFile to fd 2...
	I0914 18:06:40.300054   63448 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 18:06:40.300240   63448 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19643-8806/.minikube/bin
	I0914 18:06:40.300801   63448 out.go:352] Setting JSON to false
	I0914 18:06:40.301779   63448 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":6544,"bootTime":1726330656,"procs":200,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0914 18:06:40.301879   63448 start.go:139] virtualization: kvm guest
	I0914 18:06:40.303963   63448 out.go:177] * [default-k8s-diff-port-243449] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0914 18:06:40.305394   63448 out.go:177]   - MINIKUBE_LOCATION=19643
	I0914 18:06:40.305429   63448 notify.go:220] Checking for updates...
	I0914 18:06:40.308148   63448 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0914 18:06:40.309226   63448 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19643-8806/kubeconfig
	I0914 18:06:40.310360   63448 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19643-8806/.minikube
	I0914 18:06:40.311509   63448 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0914 18:06:40.312543   63448 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0914 18:06:40.314418   63448 config.go:182] Loaded profile config "default-k8s-diff-port-243449": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0914 18:06:40.315063   63448 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 18:06:40.315154   63448 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 18:06:40.330033   63448 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44655
	I0914 18:06:40.330502   63448 main.go:141] libmachine: () Calling .GetVersion
	I0914 18:06:40.331014   63448 main.go:141] libmachine: Using API Version  1
	I0914 18:06:40.331035   63448 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 18:06:40.331372   63448 main.go:141] libmachine: () Calling .GetMachineName
	I0914 18:06:40.331519   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .DriverName
	I0914 18:06:40.331729   63448 driver.go:394] Setting default libvirt URI to qemu:///system
	I0914 18:06:40.332043   63448 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 18:06:40.332089   63448 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 18:06:40.346598   63448 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37837
	I0914 18:06:40.347021   63448 main.go:141] libmachine: () Calling .GetVersion
	I0914 18:06:40.347501   63448 main.go:141] libmachine: Using API Version  1
	I0914 18:06:40.347536   63448 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 18:06:40.347863   63448 main.go:141] libmachine: () Calling .GetMachineName
	I0914 18:06:40.348042   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .DriverName
	I0914 18:06:40.380416   63448 out.go:177] * Using the kvm2 driver based on existing profile
	I0914 18:06:40.381578   63448 start.go:297] selected driver: kvm2
	I0914 18:06:40.381589   63448 start.go:901] validating driver "kvm2" against &{Name:default-k8s-diff-port-243449 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19643/minikube-v1.34.0-1726281733-19643-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726281268-19643@sha256:cce8be0c1ac4e3d852132008ef1cc1dcf5b79f708d025db83f146ae65db32e8e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-243449 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.38 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0914 18:06:40.381693   63448 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0914 18:06:40.382390   63448 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 18:06:40.382478   63448 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19643-8806/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0914 18:06:40.397521   63448 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0914 18:06:40.397921   63448 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0914 18:06:40.397959   63448 cni.go:84] Creating CNI manager for ""
	I0914 18:06:40.398002   63448 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0914 18:06:40.398040   63448 start.go:340] cluster config:
	{Name:default-k8s-diff-port-243449 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19643/minikube-v1.34.0-1726281733-19643-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726281268-19643@sha256:cce8be0c1ac4e3d852132008ef1cc1dcf5b79f708d025db83f146ae65db32e8e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-243449 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.38 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0914 18:06:40.398145   63448 iso.go:125] acquiring lock: {Name:mk538f7a4abc7956f82511183088cbfac1e66ebb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 18:06:40.399920   63448 out.go:177] * Starting "default-k8s-diff-port-243449" primary control-plane node in "default-k8s-diff-port-243449" cluster
	I0914 18:06:39.170425   62207 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0914 18:06:40.400913   63448 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0914 18:06:40.400954   63448 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19643-8806/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0914 18:06:40.400966   63448 cache.go:56] Caching tarball of preloaded images
	I0914 18:06:40.401038   63448 preload.go:172] Found /home/jenkins/minikube-integration/19643-8806/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0914 18:06:40.401055   63448 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0914 18:06:40.401185   63448 profile.go:143] Saving config to /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/default-k8s-diff-port-243449/config.json ...
	I0914 18:06:40.401421   63448 start.go:360] acquireMachinesLock for default-k8s-diff-port-243449: {Name:mk26748ac63472c3f8b6f3848c12e76160c2970c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0914 18:06:45.250426   62207 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0914 18:06:48.322531   62207 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0914 18:06:54.402441   62207 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0914 18:06:57.474440   62207 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0914 18:07:03.554541   62207 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0914 18:07:06.626472   62207 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0914 18:07:12.706430   62207 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0914 18:07:15.778448   62207 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0914 18:07:21.858453   62207 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0914 18:07:24.930473   62207 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0914 18:07:31.010432   62207 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0914 18:07:34.082423   62207 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0914 18:07:40.162417   62207 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0914 18:07:43.234501   62207 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0914 18:07:49.314533   62207 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0914 18:07:52.386453   62207 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0914 18:07:58.466444   62207 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0914 18:08:01.538476   62207 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0914 18:08:04.546206   62554 start.go:364] duration metric: took 3m59.524513317s to acquireMachinesLock for "embed-certs-044534"
	I0914 18:08:04.546263   62554 start.go:96] Skipping create...Using existing machine configuration
	I0914 18:08:04.546275   62554 fix.go:54] fixHost starting: 
	I0914 18:08:04.546585   62554 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 18:08:04.546636   62554 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 18:08:04.562182   62554 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46087
	I0914 18:08:04.562704   62554 main.go:141] libmachine: () Calling .GetVersion
	I0914 18:08:04.563264   62554 main.go:141] libmachine: Using API Version  1
	I0914 18:08:04.563300   62554 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 18:08:04.563714   62554 main.go:141] libmachine: () Calling .GetMachineName
	I0914 18:08:04.563947   62554 main.go:141] libmachine: (embed-certs-044534) Calling .DriverName
	I0914 18:08:04.564131   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetState
	I0914 18:08:04.566043   62554 fix.go:112] recreateIfNeeded on embed-certs-044534: state=Stopped err=<nil>
	I0914 18:08:04.566073   62554 main.go:141] libmachine: (embed-certs-044534) Calling .DriverName
	W0914 18:08:04.566289   62554 fix.go:138] unexpected machine state, will restart: <nil>
	I0914 18:08:04.567993   62554 out.go:177] * Restarting existing kvm2 VM for "embed-certs-044534" ...
	I0914 18:08:04.570182   62554 main.go:141] libmachine: (embed-certs-044534) Calling .Start
	I0914 18:08:04.570431   62554 main.go:141] libmachine: (embed-certs-044534) Ensuring networks are active...
	I0914 18:08:04.571374   62554 main.go:141] libmachine: (embed-certs-044534) Ensuring network default is active
	I0914 18:08:04.571748   62554 main.go:141] libmachine: (embed-certs-044534) Ensuring network mk-embed-certs-044534 is active
	I0914 18:08:04.572124   62554 main.go:141] libmachine: (embed-certs-044534) Getting domain xml...
	I0914 18:08:04.572852   62554 main.go:141] libmachine: (embed-certs-044534) Creating domain...
	I0914 18:08:04.540924   62207 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0914 18:08:04.540957   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetMachineName
	I0914 18:08:04.541310   62207 buildroot.go:166] provisioning hostname "no-preload-168587"
	I0914 18:08:04.541335   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetMachineName
	I0914 18:08:04.541586   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetSSHHostname
	I0914 18:08:04.546055   62207 machine.go:96] duration metric: took 4m34.63489942s to provisionDockerMachine
	I0914 18:08:04.546096   62207 fix.go:56] duration metric: took 4m34.662932355s for fixHost
	I0914 18:08:04.546102   62207 start.go:83] releasing machines lock for "no-preload-168587", held for 4m34.66297244s
	W0914 18:08:04.546122   62207 start.go:714] error starting host: provision: host is not running
	W0914 18:08:04.546220   62207 out.go:270] ! StartHost failed, but will try again: provision: host is not running
	I0914 18:08:04.546231   62207 start.go:729] Will try again in 5 seconds ...
	I0914 18:08:05.812076   62554 main.go:141] libmachine: (embed-certs-044534) Waiting to get IP...
	I0914 18:08:05.812955   62554 main.go:141] libmachine: (embed-certs-044534) DBG | domain embed-certs-044534 has defined MAC address 52:54:00:f7:d3:8e in network mk-embed-certs-044534
	I0914 18:08:05.813302   62554 main.go:141] libmachine: (embed-certs-044534) DBG | unable to find current IP address of domain embed-certs-044534 in network mk-embed-certs-044534
	I0914 18:08:05.813380   62554 main.go:141] libmachine: (embed-certs-044534) DBG | I0914 18:08:05.813279   63779 retry.go:31] will retry after 298.8389ms: waiting for machine to come up
	I0914 18:08:06.114130   62554 main.go:141] libmachine: (embed-certs-044534) DBG | domain embed-certs-044534 has defined MAC address 52:54:00:f7:d3:8e in network mk-embed-certs-044534
	I0914 18:08:06.114575   62554 main.go:141] libmachine: (embed-certs-044534) DBG | unable to find current IP address of domain embed-certs-044534 in network mk-embed-certs-044534
	I0914 18:08:06.114604   62554 main.go:141] libmachine: (embed-certs-044534) DBG | I0914 18:08:06.114530   63779 retry.go:31] will retry after 359.694721ms: waiting for machine to come up
	I0914 18:08:06.476183   62554 main.go:141] libmachine: (embed-certs-044534) DBG | domain embed-certs-044534 has defined MAC address 52:54:00:f7:d3:8e in network mk-embed-certs-044534
	I0914 18:08:06.476801   62554 main.go:141] libmachine: (embed-certs-044534) DBG | unable to find current IP address of domain embed-certs-044534 in network mk-embed-certs-044534
	I0914 18:08:06.476828   62554 main.go:141] libmachine: (embed-certs-044534) DBG | I0914 18:08:06.476745   63779 retry.go:31] will retry after 425.650219ms: waiting for machine to come up
	I0914 18:08:06.904358   62554 main.go:141] libmachine: (embed-certs-044534) DBG | domain embed-certs-044534 has defined MAC address 52:54:00:f7:d3:8e in network mk-embed-certs-044534
	I0914 18:08:06.904794   62554 main.go:141] libmachine: (embed-certs-044534) DBG | unable to find current IP address of domain embed-certs-044534 in network mk-embed-certs-044534
	I0914 18:08:06.904816   62554 main.go:141] libmachine: (embed-certs-044534) DBG | I0914 18:08:06.904749   63779 retry.go:31] will retry after 433.157325ms: waiting for machine to come up
	I0914 18:08:07.339139   62554 main.go:141] libmachine: (embed-certs-044534) DBG | domain embed-certs-044534 has defined MAC address 52:54:00:f7:d3:8e in network mk-embed-certs-044534
	I0914 18:08:07.339578   62554 main.go:141] libmachine: (embed-certs-044534) DBG | unable to find current IP address of domain embed-certs-044534 in network mk-embed-certs-044534
	I0914 18:08:07.339602   62554 main.go:141] libmachine: (embed-certs-044534) DBG | I0914 18:08:07.339512   63779 retry.go:31] will retry after 547.817102ms: waiting for machine to come up
	I0914 18:08:07.889390   62554 main.go:141] libmachine: (embed-certs-044534) DBG | domain embed-certs-044534 has defined MAC address 52:54:00:f7:d3:8e in network mk-embed-certs-044534
	I0914 18:08:07.889888   62554 main.go:141] libmachine: (embed-certs-044534) DBG | unable to find current IP address of domain embed-certs-044534 in network mk-embed-certs-044534
	I0914 18:08:07.889993   62554 main.go:141] libmachine: (embed-certs-044534) DBG | I0914 18:08:07.889820   63779 retry.go:31] will retry after 603.749753ms: waiting for machine to come up
	I0914 18:08:08.495673   62554 main.go:141] libmachine: (embed-certs-044534) DBG | domain embed-certs-044534 has defined MAC address 52:54:00:f7:d3:8e in network mk-embed-certs-044534
	I0914 18:08:08.496047   62554 main.go:141] libmachine: (embed-certs-044534) DBG | unable to find current IP address of domain embed-certs-044534 in network mk-embed-certs-044534
	I0914 18:08:08.496076   62554 main.go:141] libmachine: (embed-certs-044534) DBG | I0914 18:08:08.495995   63779 retry.go:31] will retry after 831.027535ms: waiting for machine to come up
	I0914 18:08:09.329209   62554 main.go:141] libmachine: (embed-certs-044534) DBG | domain embed-certs-044534 has defined MAC address 52:54:00:f7:d3:8e in network mk-embed-certs-044534
	I0914 18:08:09.329622   62554 main.go:141] libmachine: (embed-certs-044534) DBG | unable to find current IP address of domain embed-certs-044534 in network mk-embed-certs-044534
	I0914 18:08:09.329643   62554 main.go:141] libmachine: (embed-certs-044534) DBG | I0914 18:08:09.329591   63779 retry.go:31] will retry after 1.429850518s: waiting for machine to come up
	I0914 18:08:09.548738   62207 start.go:360] acquireMachinesLock for no-preload-168587: {Name:mk26748ac63472c3f8b6f3848c12e76160c2970c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0914 18:08:10.761510   62554 main.go:141] libmachine: (embed-certs-044534) DBG | domain embed-certs-044534 has defined MAC address 52:54:00:f7:d3:8e in network mk-embed-certs-044534
	I0914 18:08:10.761884   62554 main.go:141] libmachine: (embed-certs-044534) DBG | unable to find current IP address of domain embed-certs-044534 in network mk-embed-certs-044534
	I0914 18:08:10.761915   62554 main.go:141] libmachine: (embed-certs-044534) DBG | I0914 18:08:10.761839   63779 retry.go:31] will retry after 1.146619754s: waiting for machine to come up
	I0914 18:08:11.910130   62554 main.go:141] libmachine: (embed-certs-044534) DBG | domain embed-certs-044534 has defined MAC address 52:54:00:f7:d3:8e in network mk-embed-certs-044534
	I0914 18:08:11.910542   62554 main.go:141] libmachine: (embed-certs-044534) DBG | unable to find current IP address of domain embed-certs-044534 in network mk-embed-certs-044534
	I0914 18:08:11.910568   62554 main.go:141] libmachine: (embed-certs-044534) DBG | I0914 18:08:11.910500   63779 retry.go:31] will retry after 1.582382319s: waiting for machine to come up
	I0914 18:08:13.495352   62554 main.go:141] libmachine: (embed-certs-044534) DBG | domain embed-certs-044534 has defined MAC address 52:54:00:f7:d3:8e in network mk-embed-certs-044534
	I0914 18:08:13.495852   62554 main.go:141] libmachine: (embed-certs-044534) DBG | unable to find current IP address of domain embed-certs-044534 in network mk-embed-certs-044534
	I0914 18:08:13.495872   62554 main.go:141] libmachine: (embed-certs-044534) DBG | I0914 18:08:13.495808   63779 retry.go:31] will retry after 2.117717335s: waiting for machine to come up
	I0914 18:08:15.615461   62554 main.go:141] libmachine: (embed-certs-044534) DBG | domain embed-certs-044534 has defined MAC address 52:54:00:f7:d3:8e in network mk-embed-certs-044534
	I0914 18:08:15.615896   62554 main.go:141] libmachine: (embed-certs-044534) DBG | unable to find current IP address of domain embed-certs-044534 in network mk-embed-certs-044534
	I0914 18:08:15.615918   62554 main.go:141] libmachine: (embed-certs-044534) DBG | I0914 18:08:15.615846   63779 retry.go:31] will retry after 3.071486865s: waiting for machine to come up
	I0914 18:08:18.691109   62554 main.go:141] libmachine: (embed-certs-044534) DBG | domain embed-certs-044534 has defined MAC address 52:54:00:f7:d3:8e in network mk-embed-certs-044534
	I0914 18:08:18.691572   62554 main.go:141] libmachine: (embed-certs-044534) DBG | unable to find current IP address of domain embed-certs-044534 in network mk-embed-certs-044534
	I0914 18:08:18.691605   62554 main.go:141] libmachine: (embed-certs-044534) DBG | I0914 18:08:18.691513   63779 retry.go:31] will retry after 4.250544955s: waiting for machine to come up
	I0914 18:08:24.143036   62996 start.go:364] duration metric: took 3m18.692107902s to acquireMachinesLock for "old-k8s-version-556121"
	I0914 18:08:24.143089   62996 start.go:96] Skipping create...Using existing machine configuration
	I0914 18:08:24.143094   62996 fix.go:54] fixHost starting: 
	I0914 18:08:24.143474   62996 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 18:08:24.143527   62996 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 18:08:24.160421   62996 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44345
	I0914 18:08:24.160864   62996 main.go:141] libmachine: () Calling .GetVersion
	I0914 18:08:24.161467   62996 main.go:141] libmachine: Using API Version  1
	I0914 18:08:24.161495   62996 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 18:08:24.161913   62996 main.go:141] libmachine: () Calling .GetMachineName
	I0914 18:08:24.162137   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .DriverName
	I0914 18:08:24.162322   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetState
	I0914 18:08:24.163974   62996 fix.go:112] recreateIfNeeded on old-k8s-version-556121: state=Stopped err=<nil>
	I0914 18:08:24.164020   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .DriverName
	W0914 18:08:24.164197   62996 fix.go:138] unexpected machine state, will restart: <nil>
	I0914 18:08:24.166624   62996 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-556121" ...
	I0914 18:08:22.946247   62554 main.go:141] libmachine: (embed-certs-044534) DBG | domain embed-certs-044534 has defined MAC address 52:54:00:f7:d3:8e in network mk-embed-certs-044534
	I0914 18:08:22.946662   62554 main.go:141] libmachine: (embed-certs-044534) DBG | domain embed-certs-044534 has current primary IP address 192.168.50.126 and MAC address 52:54:00:f7:d3:8e in network mk-embed-certs-044534
	I0914 18:08:22.946687   62554 main.go:141] libmachine: (embed-certs-044534) Found IP for machine: 192.168.50.126
	I0914 18:08:22.946700   62554 main.go:141] libmachine: (embed-certs-044534) Reserving static IP address...
	I0914 18:08:22.947052   62554 main.go:141] libmachine: (embed-certs-044534) DBG | found host DHCP lease matching {name: "embed-certs-044534", mac: "52:54:00:f7:d3:8e", ip: "192.168.50.126"} in network mk-embed-certs-044534: {Iface:virbr3 ExpiryTime:2024-09-14 19:00:16 +0000 UTC Type:0 Mac:52:54:00:f7:d3:8e Iaid: IPaddr:192.168.50.126 Prefix:24 Hostname:embed-certs-044534 Clientid:01:52:54:00:f7:d3:8e}
	I0914 18:08:22.947068   62554 main.go:141] libmachine: (embed-certs-044534) Reserved static IP address: 192.168.50.126
	I0914 18:08:22.947080   62554 main.go:141] libmachine: (embed-certs-044534) DBG | skip adding static IP to network mk-embed-certs-044534 - found existing host DHCP lease matching {name: "embed-certs-044534", mac: "52:54:00:f7:d3:8e", ip: "192.168.50.126"}
	I0914 18:08:22.947093   62554 main.go:141] libmachine: (embed-certs-044534) DBG | Getting to WaitForSSH function...
	I0914 18:08:22.947108   62554 main.go:141] libmachine: (embed-certs-044534) Waiting for SSH to be available...
	I0914 18:08:22.949354   62554 main.go:141] libmachine: (embed-certs-044534) DBG | domain embed-certs-044534 has defined MAC address 52:54:00:f7:d3:8e in network mk-embed-certs-044534
	I0914 18:08:22.949623   62554 main.go:141] libmachine: (embed-certs-044534) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:d3:8e", ip: ""} in network mk-embed-certs-044534: {Iface:virbr3 ExpiryTime:2024-09-14 19:00:16 +0000 UTC Type:0 Mac:52:54:00:f7:d3:8e Iaid: IPaddr:192.168.50.126 Prefix:24 Hostname:embed-certs-044534 Clientid:01:52:54:00:f7:d3:8e}
	I0914 18:08:22.949645   62554 main.go:141] libmachine: (embed-certs-044534) DBG | domain embed-certs-044534 has defined IP address 192.168.50.126 and MAC address 52:54:00:f7:d3:8e in network mk-embed-certs-044534
	I0914 18:08:22.949798   62554 main.go:141] libmachine: (embed-certs-044534) DBG | Using SSH client type: external
	I0914 18:08:22.949822   62554 main.go:141] libmachine: (embed-certs-044534) DBG | Using SSH private key: /home/jenkins/minikube-integration/19643-8806/.minikube/machines/embed-certs-044534/id_rsa (-rw-------)
	I0914 18:08:22.949886   62554 main.go:141] libmachine: (embed-certs-044534) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.126 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19643-8806/.minikube/machines/embed-certs-044534/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0914 18:08:22.949911   62554 main.go:141] libmachine: (embed-certs-044534) DBG | About to run SSH command:
	I0914 18:08:22.949926   62554 main.go:141] libmachine: (embed-certs-044534) DBG | exit 0
	I0914 18:08:23.074248   62554 main.go:141] libmachine: (embed-certs-044534) DBG | SSH cmd err, output: <nil>: 
	I0914 18:08:23.074559   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetConfigRaw
	I0914 18:08:23.075190   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetIP
	I0914 18:08:23.077682   62554 main.go:141] libmachine: (embed-certs-044534) DBG | domain embed-certs-044534 has defined MAC address 52:54:00:f7:d3:8e in network mk-embed-certs-044534
	I0914 18:08:23.078007   62554 main.go:141] libmachine: (embed-certs-044534) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:d3:8e", ip: ""} in network mk-embed-certs-044534: {Iface:virbr3 ExpiryTime:2024-09-14 19:00:16 +0000 UTC Type:0 Mac:52:54:00:f7:d3:8e Iaid: IPaddr:192.168.50.126 Prefix:24 Hostname:embed-certs-044534 Clientid:01:52:54:00:f7:d3:8e}
	I0914 18:08:23.078040   62554 main.go:141] libmachine: (embed-certs-044534) DBG | domain embed-certs-044534 has defined IP address 192.168.50.126 and MAC address 52:54:00:f7:d3:8e in network mk-embed-certs-044534
	I0914 18:08:23.078309   62554 profile.go:143] Saving config to /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/embed-certs-044534/config.json ...
	I0914 18:08:23.078494   62554 machine.go:93] provisionDockerMachine start ...
	I0914 18:08:23.078510   62554 main.go:141] libmachine: (embed-certs-044534) Calling .DriverName
	I0914 18:08:23.078723   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetSSHHostname
	I0914 18:08:23.081444   62554 main.go:141] libmachine: (embed-certs-044534) DBG | domain embed-certs-044534 has defined MAC address 52:54:00:f7:d3:8e in network mk-embed-certs-044534
	I0914 18:08:23.081846   62554 main.go:141] libmachine: (embed-certs-044534) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:d3:8e", ip: ""} in network mk-embed-certs-044534: {Iface:virbr3 ExpiryTime:2024-09-14 19:00:16 +0000 UTC Type:0 Mac:52:54:00:f7:d3:8e Iaid: IPaddr:192.168.50.126 Prefix:24 Hostname:embed-certs-044534 Clientid:01:52:54:00:f7:d3:8e}
	I0914 18:08:23.081891   62554 main.go:141] libmachine: (embed-certs-044534) DBG | domain embed-certs-044534 has defined IP address 192.168.50.126 and MAC address 52:54:00:f7:d3:8e in network mk-embed-certs-044534
	I0914 18:08:23.082026   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetSSHPort
	I0914 18:08:23.082209   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetSSHKeyPath
	I0914 18:08:23.082398   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetSSHKeyPath
	I0914 18:08:23.082573   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetSSHUsername
	I0914 18:08:23.082739   62554 main.go:141] libmachine: Using SSH client type: native
	I0914 18:08:23.082961   62554 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.50.126 22 <nil> <nil>}
	I0914 18:08:23.082984   62554 main.go:141] libmachine: About to run SSH command:
	hostname
	I0914 18:08:23.186143   62554 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0914 18:08:23.186193   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetMachineName
	I0914 18:08:23.186424   62554 buildroot.go:166] provisioning hostname "embed-certs-044534"
	I0914 18:08:23.186447   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetMachineName
	I0914 18:08:23.186622   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetSSHHostname
	I0914 18:08:23.189085   62554 main.go:141] libmachine: (embed-certs-044534) DBG | domain embed-certs-044534 has defined MAC address 52:54:00:f7:d3:8e in network mk-embed-certs-044534
	I0914 18:08:23.189453   62554 main.go:141] libmachine: (embed-certs-044534) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:d3:8e", ip: ""} in network mk-embed-certs-044534: {Iface:virbr3 ExpiryTime:2024-09-14 19:00:16 +0000 UTC Type:0 Mac:52:54:00:f7:d3:8e Iaid: IPaddr:192.168.50.126 Prefix:24 Hostname:embed-certs-044534 Clientid:01:52:54:00:f7:d3:8e}
	I0914 18:08:23.189482   62554 main.go:141] libmachine: (embed-certs-044534) DBG | domain embed-certs-044534 has defined IP address 192.168.50.126 and MAC address 52:54:00:f7:d3:8e in network mk-embed-certs-044534
	I0914 18:08:23.189615   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetSSHPort
	I0914 18:08:23.189802   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetSSHKeyPath
	I0914 18:08:23.190032   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetSSHKeyPath
	I0914 18:08:23.190168   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetSSHUsername
	I0914 18:08:23.190422   62554 main.go:141] libmachine: Using SSH client type: native
	I0914 18:08:23.190587   62554 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.50.126 22 <nil> <nil>}
	I0914 18:08:23.190601   62554 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-044534 && echo "embed-certs-044534" | sudo tee /etc/hostname
	I0914 18:08:23.307484   62554 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-044534
	
	I0914 18:08:23.307512   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetSSHHostname
	I0914 18:08:23.310220   62554 main.go:141] libmachine: (embed-certs-044534) DBG | domain embed-certs-044534 has defined MAC address 52:54:00:f7:d3:8e in network mk-embed-certs-044534
	I0914 18:08:23.310634   62554 main.go:141] libmachine: (embed-certs-044534) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:d3:8e", ip: ""} in network mk-embed-certs-044534: {Iface:virbr3 ExpiryTime:2024-09-14 19:00:16 +0000 UTC Type:0 Mac:52:54:00:f7:d3:8e Iaid: IPaddr:192.168.50.126 Prefix:24 Hostname:embed-certs-044534 Clientid:01:52:54:00:f7:d3:8e}
	I0914 18:08:23.310664   62554 main.go:141] libmachine: (embed-certs-044534) DBG | domain embed-certs-044534 has defined IP address 192.168.50.126 and MAC address 52:54:00:f7:d3:8e in network mk-embed-certs-044534
	I0914 18:08:23.310764   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetSSHPort
	I0914 18:08:23.310969   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetSSHKeyPath
	I0914 18:08:23.311206   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetSSHKeyPath
	I0914 18:08:23.311438   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetSSHUsername
	I0914 18:08:23.311594   62554 main.go:141] libmachine: Using SSH client type: native
	I0914 18:08:23.311802   62554 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.50.126 22 <nil> <nil>}
	I0914 18:08:23.311820   62554 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-044534' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-044534/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-044534' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0914 18:08:23.422574   62554 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0914 18:08:23.422603   62554 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19643-8806/.minikube CaCertPath:/home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19643-8806/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19643-8806/.minikube}
	I0914 18:08:23.422623   62554 buildroot.go:174] setting up certificates
	I0914 18:08:23.422634   62554 provision.go:84] configureAuth start
	I0914 18:08:23.422643   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetMachineName
	I0914 18:08:23.422905   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetIP
	I0914 18:08:23.426201   62554 main.go:141] libmachine: (embed-certs-044534) DBG | domain embed-certs-044534 has defined MAC address 52:54:00:f7:d3:8e in network mk-embed-certs-044534
	I0914 18:08:23.426557   62554 main.go:141] libmachine: (embed-certs-044534) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:d3:8e", ip: ""} in network mk-embed-certs-044534: {Iface:virbr3 ExpiryTime:2024-09-14 19:00:16 +0000 UTC Type:0 Mac:52:54:00:f7:d3:8e Iaid: IPaddr:192.168.50.126 Prefix:24 Hostname:embed-certs-044534 Clientid:01:52:54:00:f7:d3:8e}
	I0914 18:08:23.426584   62554 main.go:141] libmachine: (embed-certs-044534) DBG | domain embed-certs-044534 has defined IP address 192.168.50.126 and MAC address 52:54:00:f7:d3:8e in network mk-embed-certs-044534
	I0914 18:08:23.426745   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetSSHHostname
	I0914 18:08:23.428607   62554 main.go:141] libmachine: (embed-certs-044534) DBG | domain embed-certs-044534 has defined MAC address 52:54:00:f7:d3:8e in network mk-embed-certs-044534
	I0914 18:08:23.428985   62554 main.go:141] libmachine: (embed-certs-044534) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:d3:8e", ip: ""} in network mk-embed-certs-044534: {Iface:virbr3 ExpiryTime:2024-09-14 19:00:16 +0000 UTC Type:0 Mac:52:54:00:f7:d3:8e Iaid: IPaddr:192.168.50.126 Prefix:24 Hostname:embed-certs-044534 Clientid:01:52:54:00:f7:d3:8e}
	I0914 18:08:23.429016   62554 main.go:141] libmachine: (embed-certs-044534) DBG | domain embed-certs-044534 has defined IP address 192.168.50.126 and MAC address 52:54:00:f7:d3:8e in network mk-embed-certs-044534
	I0914 18:08:23.429138   62554 provision.go:143] copyHostCerts
	I0914 18:08:23.429198   62554 exec_runner.go:144] found /home/jenkins/minikube-integration/19643-8806/.minikube/ca.pem, removing ...
	I0914 18:08:23.429211   62554 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19643-8806/.minikube/ca.pem
	I0914 18:08:23.429295   62554 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19643-8806/.minikube/ca.pem (1082 bytes)
	I0914 18:08:23.429437   62554 exec_runner.go:144] found /home/jenkins/minikube-integration/19643-8806/.minikube/cert.pem, removing ...
	I0914 18:08:23.429452   62554 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19643-8806/.minikube/cert.pem
	I0914 18:08:23.429498   62554 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19643-8806/.minikube/cert.pem (1123 bytes)
	I0914 18:08:23.429592   62554 exec_runner.go:144] found /home/jenkins/minikube-integration/19643-8806/.minikube/key.pem, removing ...
	I0914 18:08:23.429600   62554 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19643-8806/.minikube/key.pem
	I0914 18:08:23.429626   62554 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19643-8806/.minikube/key.pem (1675 bytes)
	I0914 18:08:23.429680   62554 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19643-8806/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca-key.pem org=jenkins.embed-certs-044534 san=[127.0.0.1 192.168.50.126 embed-certs-044534 localhost minikube]
	I0914 18:08:23.538590   62554 provision.go:177] copyRemoteCerts
	I0914 18:08:23.538662   62554 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0914 18:08:23.538689   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetSSHHostname
	I0914 18:08:23.541366   62554 main.go:141] libmachine: (embed-certs-044534) DBG | domain embed-certs-044534 has defined MAC address 52:54:00:f7:d3:8e in network mk-embed-certs-044534
	I0914 18:08:23.541723   62554 main.go:141] libmachine: (embed-certs-044534) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:d3:8e", ip: ""} in network mk-embed-certs-044534: {Iface:virbr3 ExpiryTime:2024-09-14 19:00:16 +0000 UTC Type:0 Mac:52:54:00:f7:d3:8e Iaid: IPaddr:192.168.50.126 Prefix:24 Hostname:embed-certs-044534 Clientid:01:52:54:00:f7:d3:8e}
	I0914 18:08:23.541746   62554 main.go:141] libmachine: (embed-certs-044534) DBG | domain embed-certs-044534 has defined IP address 192.168.50.126 and MAC address 52:54:00:f7:d3:8e in network mk-embed-certs-044534
	I0914 18:08:23.541938   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetSSHPort
	I0914 18:08:23.542120   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetSSHKeyPath
	I0914 18:08:23.542303   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetSSHUsername
	I0914 18:08:23.542413   62554 sshutil.go:53] new ssh client: &{IP:192.168.50.126 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/embed-certs-044534/id_rsa Username:docker}
	I0914 18:08:23.623698   62554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0914 18:08:23.647378   62554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0914 18:08:23.671327   62554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0914 18:08:23.694570   62554 provision.go:87] duration metric: took 271.923979ms to configureAuth
	I0914 18:08:23.694598   62554 buildroot.go:189] setting minikube options for container-runtime
	I0914 18:08:23.694779   62554 config.go:182] Loaded profile config "embed-certs-044534": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0914 18:08:23.694868   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetSSHHostname
	I0914 18:08:23.697467   62554 main.go:141] libmachine: (embed-certs-044534) DBG | domain embed-certs-044534 has defined MAC address 52:54:00:f7:d3:8e in network mk-embed-certs-044534
	I0914 18:08:23.697828   62554 main.go:141] libmachine: (embed-certs-044534) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:d3:8e", ip: ""} in network mk-embed-certs-044534: {Iface:virbr3 ExpiryTime:2024-09-14 19:00:16 +0000 UTC Type:0 Mac:52:54:00:f7:d3:8e Iaid: IPaddr:192.168.50.126 Prefix:24 Hostname:embed-certs-044534 Clientid:01:52:54:00:f7:d3:8e}
	I0914 18:08:23.697862   62554 main.go:141] libmachine: (embed-certs-044534) DBG | domain embed-certs-044534 has defined IP address 192.168.50.126 and MAC address 52:54:00:f7:d3:8e in network mk-embed-certs-044534
	I0914 18:08:23.698042   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetSSHPort
	I0914 18:08:23.698249   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetSSHKeyPath
	I0914 18:08:23.698421   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetSSHKeyPath
	I0914 18:08:23.698571   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetSSHUsername
	I0914 18:08:23.698692   62554 main.go:141] libmachine: Using SSH client type: native
	I0914 18:08:23.698945   62554 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.50.126 22 <nil> <nil>}
	I0914 18:08:23.698963   62554 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0914 18:08:23.911661   62554 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0914 18:08:23.911697   62554 machine.go:96] duration metric: took 833.189197ms to provisionDockerMachine
	I0914 18:08:23.911712   62554 start.go:293] postStartSetup for "embed-certs-044534" (driver="kvm2")
	I0914 18:08:23.911726   62554 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0914 18:08:23.911751   62554 main.go:141] libmachine: (embed-certs-044534) Calling .DriverName
	I0914 18:08:23.912134   62554 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0914 18:08:23.912169   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetSSHHostname
	I0914 18:08:23.914579   62554 main.go:141] libmachine: (embed-certs-044534) DBG | domain embed-certs-044534 has defined MAC address 52:54:00:f7:d3:8e in network mk-embed-certs-044534
	I0914 18:08:23.914974   62554 main.go:141] libmachine: (embed-certs-044534) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:d3:8e", ip: ""} in network mk-embed-certs-044534: {Iface:virbr3 ExpiryTime:2024-09-14 19:00:16 +0000 UTC Type:0 Mac:52:54:00:f7:d3:8e Iaid: IPaddr:192.168.50.126 Prefix:24 Hostname:embed-certs-044534 Clientid:01:52:54:00:f7:d3:8e}
	I0914 18:08:23.915011   62554 main.go:141] libmachine: (embed-certs-044534) DBG | domain embed-certs-044534 has defined IP address 192.168.50.126 and MAC address 52:54:00:f7:d3:8e in network mk-embed-certs-044534
	I0914 18:08:23.915121   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetSSHPort
	I0914 18:08:23.915322   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetSSHKeyPath
	I0914 18:08:23.915582   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetSSHUsername
	I0914 18:08:23.915710   62554 sshutil.go:53] new ssh client: &{IP:192.168.50.126 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/embed-certs-044534/id_rsa Username:docker}
	I0914 18:08:23.996910   62554 ssh_runner.go:195] Run: cat /etc/os-release
	I0914 18:08:24.000900   62554 info.go:137] Remote host: Buildroot 2023.02.9
	I0914 18:08:24.000926   62554 filesync.go:126] Scanning /home/jenkins/minikube-integration/19643-8806/.minikube/addons for local assets ...
	I0914 18:08:24.000998   62554 filesync.go:126] Scanning /home/jenkins/minikube-integration/19643-8806/.minikube/files for local assets ...
	I0914 18:08:24.001099   62554 filesync.go:149] local asset: /home/jenkins/minikube-integration/19643-8806/.minikube/files/etc/ssl/certs/160162.pem -> 160162.pem in /etc/ssl/certs
	I0914 18:08:24.001222   62554 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0914 18:08:24.010496   62554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/files/etc/ssl/certs/160162.pem --> /etc/ssl/certs/160162.pem (1708 bytes)
	I0914 18:08:24.033377   62554 start.go:296] duration metric: took 121.65145ms for postStartSetup
	I0914 18:08:24.033414   62554 fix.go:56] duration metric: took 19.487140172s for fixHost
	I0914 18:08:24.033434   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetSSHHostname
	I0914 18:08:24.036188   62554 main.go:141] libmachine: (embed-certs-044534) DBG | domain embed-certs-044534 has defined MAC address 52:54:00:f7:d3:8e in network mk-embed-certs-044534
	I0914 18:08:24.036494   62554 main.go:141] libmachine: (embed-certs-044534) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:d3:8e", ip: ""} in network mk-embed-certs-044534: {Iface:virbr3 ExpiryTime:2024-09-14 19:00:16 +0000 UTC Type:0 Mac:52:54:00:f7:d3:8e Iaid: IPaddr:192.168.50.126 Prefix:24 Hostname:embed-certs-044534 Clientid:01:52:54:00:f7:d3:8e}
	I0914 18:08:24.036524   62554 main.go:141] libmachine: (embed-certs-044534) DBG | domain embed-certs-044534 has defined IP address 192.168.50.126 and MAC address 52:54:00:f7:d3:8e in network mk-embed-certs-044534
	I0914 18:08:24.036672   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetSSHPort
	I0914 18:08:24.036886   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetSSHKeyPath
	I0914 18:08:24.037082   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetSSHKeyPath
	I0914 18:08:24.037216   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetSSHUsername
	I0914 18:08:24.037375   62554 main.go:141] libmachine: Using SSH client type: native
	I0914 18:08:24.037542   62554 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.50.126 22 <nil> <nil>}
	I0914 18:08:24.037554   62554 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0914 18:08:24.142822   62554 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726337304.118879777
	
	I0914 18:08:24.142851   62554 fix.go:216] guest clock: 1726337304.118879777
	I0914 18:08:24.142862   62554 fix.go:229] Guest: 2024-09-14 18:08:24.118879777 +0000 UTC Remote: 2024-09-14 18:08:24.03341777 +0000 UTC m=+259.160200473 (delta=85.462007ms)
	I0914 18:08:24.142936   62554 fix.go:200] guest clock delta is within tolerance: 85.462007ms
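
The clock check above boils down to parsing the guest's `date +%s.%N` output and comparing it against the host's timestamp. A minimal sketch of that comparison, assuming a one-second tolerance (the real tolerance in fix.go may differ); the timestamps are the ones from the log:

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// parseGuestClock converts "date +%s.%N" output (e.g. "1726337304.118879777")
// into a time.Time without losing nanosecond precision.
func parseGuestClock(out string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		frac := (parts[1] + "000000000")[:9] // pad/truncate fraction to nanoseconds
		if nsec, err = strconv.ParseInt(frac, 10, 64); err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, err := parseGuestClock("1726337304.118879777")
	if err != nil {
		panic(err)
	}
	host := time.Date(2024, 9, 14, 18, 8, 24, 33417770, time.UTC) // host-side timestamp from the log
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	const tolerance = time.Second // assumed tolerance, for illustration only
	if delta <= tolerance {
		fmt.Printf("guest clock delta is within tolerance: %v\n", delta)
	} else {
		fmt.Printf("guest clock delta %v exceeds tolerance, would resync\n", delta)
	}
}
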
	I0914 18:08:24.142960   62554 start.go:83] releasing machines lock for "embed-certs-044534", held for 19.596720856s
	I0914 18:08:24.142992   62554 main.go:141] libmachine: (embed-certs-044534) Calling .DriverName
	I0914 18:08:24.143262   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetIP
	I0914 18:08:24.146122   62554 main.go:141] libmachine: (embed-certs-044534) DBG | domain embed-certs-044534 has defined MAC address 52:54:00:f7:d3:8e in network mk-embed-certs-044534
	I0914 18:08:24.146501   62554 main.go:141] libmachine: (embed-certs-044534) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:d3:8e", ip: ""} in network mk-embed-certs-044534: {Iface:virbr3 ExpiryTime:2024-09-14 19:00:16 +0000 UTC Type:0 Mac:52:54:00:f7:d3:8e Iaid: IPaddr:192.168.50.126 Prefix:24 Hostname:embed-certs-044534 Clientid:01:52:54:00:f7:d3:8e}
	I0914 18:08:24.146537   62554 main.go:141] libmachine: (embed-certs-044534) DBG | domain embed-certs-044534 has defined IP address 192.168.50.126 and MAC address 52:54:00:f7:d3:8e in network mk-embed-certs-044534
	I0914 18:08:24.146711   62554 main.go:141] libmachine: (embed-certs-044534) Calling .DriverName
	I0914 18:08:24.147204   62554 main.go:141] libmachine: (embed-certs-044534) Calling .DriverName
	I0914 18:08:24.147430   62554 main.go:141] libmachine: (embed-certs-044534) Calling .DriverName
	I0914 18:08:24.147532   62554 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0914 18:08:24.147589   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetSSHHostname
	I0914 18:08:24.147813   62554 ssh_runner.go:195] Run: cat /version.json
	I0914 18:08:24.147839   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetSSHHostname
	I0914 18:08:24.150691   62554 main.go:141] libmachine: (embed-certs-044534) DBG | domain embed-certs-044534 has defined MAC address 52:54:00:f7:d3:8e in network mk-embed-certs-044534
	I0914 18:08:24.150736   62554 main.go:141] libmachine: (embed-certs-044534) DBG | domain embed-certs-044534 has defined MAC address 52:54:00:f7:d3:8e in network mk-embed-certs-044534
	I0914 18:08:24.151012   62554 main.go:141] libmachine: (embed-certs-044534) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:d3:8e", ip: ""} in network mk-embed-certs-044534: {Iface:virbr3 ExpiryTime:2024-09-14 19:00:16 +0000 UTC Type:0 Mac:52:54:00:f7:d3:8e Iaid: IPaddr:192.168.50.126 Prefix:24 Hostname:embed-certs-044534 Clientid:01:52:54:00:f7:d3:8e}
	I0914 18:08:24.151056   62554 main.go:141] libmachine: (embed-certs-044534) DBG | domain embed-certs-044534 has defined IP address 192.168.50.126 and MAC address 52:54:00:f7:d3:8e in network mk-embed-certs-044534
	I0914 18:08:24.151149   62554 main.go:141] libmachine: (embed-certs-044534) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:d3:8e", ip: ""} in network mk-embed-certs-044534: {Iface:virbr3 ExpiryTime:2024-09-14 19:00:16 +0000 UTC Type:0 Mac:52:54:00:f7:d3:8e Iaid: IPaddr:192.168.50.126 Prefix:24 Hostname:embed-certs-044534 Clientid:01:52:54:00:f7:d3:8e}
	I0914 18:08:24.151179   62554 main.go:141] libmachine: (embed-certs-044534) DBG | domain embed-certs-044534 has defined IP address 192.168.50.126 and MAC address 52:54:00:f7:d3:8e in network mk-embed-certs-044534
	I0914 18:08:24.151431   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetSSHPort
	I0914 18:08:24.151468   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetSSHPort
	I0914 18:08:24.151586   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetSSHKeyPath
	I0914 18:08:24.151751   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetSSHKeyPath
	I0914 18:08:24.151772   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetSSHUsername
	I0914 18:08:24.151938   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetSSHUsername
	I0914 18:08:24.151944   62554 sshutil.go:53] new ssh client: &{IP:192.168.50.126 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/embed-certs-044534/id_rsa Username:docker}
	I0914 18:08:24.152034   62554 sshutil.go:53] new ssh client: &{IP:192.168.50.126 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/embed-certs-044534/id_rsa Username:docker}
	I0914 18:08:24.256821   62554 ssh_runner.go:195] Run: systemctl --version
	I0914 18:08:24.263249   62554 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0914 18:08:24.411996   62554 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0914 18:08:24.418685   62554 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0914 18:08:24.418759   62554 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0914 18:08:24.434541   62554 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0914 18:08:24.434569   62554 start.go:495] detecting cgroup driver to use...
	I0914 18:08:24.434655   62554 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0914 18:08:24.452550   62554 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0914 18:08:24.467548   62554 docker.go:217] disabling cri-docker service (if available) ...
	I0914 18:08:24.467602   62554 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0914 18:08:24.482556   62554 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0914 18:08:24.497198   62554 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0914 18:08:24.625300   62554 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0914 18:08:24.805163   62554 docker.go:233] disabling docker service ...
	I0914 18:08:24.805248   62554 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0914 18:08:24.821164   62554 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0914 18:08:24.834886   62554 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0914 18:08:24.167885   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .Start
	I0914 18:08:24.168096   62996 main.go:141] libmachine: (old-k8s-version-556121) Ensuring networks are active...
	I0914 18:08:24.169086   62996 main.go:141] libmachine: (old-k8s-version-556121) Ensuring network default is active
	I0914 18:08:24.169493   62996 main.go:141] libmachine: (old-k8s-version-556121) Ensuring network mk-old-k8s-version-556121 is active
	I0914 18:08:24.170025   62996 main.go:141] libmachine: (old-k8s-version-556121) Getting domain xml...
	I0914 18:08:24.170619   62996 main.go:141] libmachine: (old-k8s-version-556121) Creating domain...
	I0914 18:08:24.963694   62554 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0914 18:08:25.081720   62554 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0914 18:08:25.097176   62554 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0914 18:08:25.116611   62554 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0914 18:08:25.116677   62554 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 18:08:25.129500   62554 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0914 18:08:25.129586   62554 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 18:08:25.140281   62554 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 18:08:25.150925   62554 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 18:08:25.166139   62554 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0914 18:08:25.177340   62554 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 18:08:25.187662   62554 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 18:08:25.207019   62554 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 18:08:25.217207   62554 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0914 18:08:25.226988   62554 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0914 18:08:25.227065   62554 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0914 18:08:25.248357   62554 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0914 18:08:25.258467   62554 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 18:08:25.375359   62554 ssh_runner.go:195] Run: sudo systemctl restart crio
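
The sed invocations above rewrite the pause image and cgroup manager in /etc/crio/crio.conf.d/02-crio.conf before crio is restarted. A rough Go equivalent of those two substitutions, applied to an in-memory sample rather than the real drop-in file:

package main

import (
	"fmt"
	"regexp"
)

func main() {
	conf := `[crio.image]
pause_image = "registry.k8s.io/pause:3.9"

[crio.runtime]
cgroup_manager = "systemd"
`
	// Same effect as the logged sed expressions: force the pause image and
	// switch the cgroup manager to cgroupfs.
	pauseRe := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
	cgroupRe := regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`)
	conf = pauseRe.ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10"`)
	conf = cgroupRe.ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
	fmt.Print(conf)
}
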
	I0914 18:08:25.470389   62554 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0914 18:08:25.470470   62554 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0914 18:08:25.475526   62554 start.go:563] Will wait 60s for crictl version
	I0914 18:08:25.475589   62554 ssh_runner.go:195] Run: which crictl
	I0914 18:08:25.479131   62554 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0914 18:08:25.530371   62554 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0914 18:08:25.530461   62554 ssh_runner.go:195] Run: crio --version
	I0914 18:08:25.557035   62554 ssh_runner.go:195] Run: crio --version
	I0914 18:08:25.586883   62554 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0914 18:08:25.588117   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetIP
	I0914 18:08:25.591212   62554 main.go:141] libmachine: (embed-certs-044534) DBG | domain embed-certs-044534 has defined MAC address 52:54:00:f7:d3:8e in network mk-embed-certs-044534
	I0914 18:08:25.591600   62554 main.go:141] libmachine: (embed-certs-044534) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:d3:8e", ip: ""} in network mk-embed-certs-044534: {Iface:virbr3 ExpiryTime:2024-09-14 19:00:16 +0000 UTC Type:0 Mac:52:54:00:f7:d3:8e Iaid: IPaddr:192.168.50.126 Prefix:24 Hostname:embed-certs-044534 Clientid:01:52:54:00:f7:d3:8e}
	I0914 18:08:25.591628   62554 main.go:141] libmachine: (embed-certs-044534) DBG | domain embed-certs-044534 has defined IP address 192.168.50.126 and MAC address 52:54:00:f7:d3:8e in network mk-embed-certs-044534
	I0914 18:08:25.591816   62554 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0914 18:08:25.595706   62554 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0914 18:08:25.608009   62554 kubeadm.go:883] updating cluster {Name:embed-certs-044534 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19643/minikube-v1.34.0-1726281733-19643-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726281268-19643@sha256:cce8be0c1ac4e3d852132008ef1cc1dcf5b79f708d025db83f146ae65db32e8e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.31.1 ClusterName:embed-certs-044534 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.126 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:
false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0914 18:08:25.608141   62554 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0914 18:08:25.608194   62554 ssh_runner.go:195] Run: sudo crictl images --output json
	I0914 18:08:25.643422   62554 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0914 18:08:25.643515   62554 ssh_runner.go:195] Run: which lz4
	I0914 18:08:25.647471   62554 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0914 18:08:25.651573   62554 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0914 18:08:25.651607   62554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I0914 18:08:26.985357   62554 crio.go:462] duration metric: took 1.337911722s to copy over tarball
	I0914 18:08:26.985437   62554 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0914 18:08:29.111492   62554 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.126022567s)
	I0914 18:08:29.111524   62554 crio.go:469] duration metric: took 2.12613646s to extract the tarball
	I0914 18:08:29.111533   62554 ssh_runner.go:146] rm: /preloaded.tar.lz4
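
The preload step copies a ~388 MB lz4 tarball to the guest and unpacks it into /var, timing each stage. A small sketch of the extract-and-time pattern; the path is a placeholder, the host needs tar with lz4 support, and this runs locally rather than over SSH:

package main

import (
	"fmt"
	"log"
	"os/exec"
	"time"
)

func main() {
	start := time.Now()
	// Mirrors the logged command: unpack the preloaded image tarball into /var,
	// preserving xattrs so image layers keep their file capabilities.
	cmd := exec.Command("sudo", "tar",
		"--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
	if out, err := cmd.CombinedOutput(); err != nil {
		log.Fatalf("extract failed: %v\n%s", err, out)
	}
	fmt.Printf("duration metric: took %s to extract the tarball\n", time.Since(start))
}
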
	I0914 18:08:29.148426   62554 ssh_runner.go:195] Run: sudo crictl images --output json
	I0914 18:08:29.190595   62554 crio.go:514] all images are preloaded for cri-o runtime.
	I0914 18:08:29.190620   62554 cache_images.go:84] Images are preloaded, skipping loading
	I0914 18:08:29.190628   62554 kubeadm.go:934] updating node { 192.168.50.126 8443 v1.31.1 crio true true} ...
	I0914 18:08:29.190751   62554 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-044534 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.126
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:embed-certs-044534 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0914 18:08:29.190823   62554 ssh_runner.go:195] Run: crio config
	I0914 18:08:29.234785   62554 cni.go:84] Creating CNI manager for ""
	I0914 18:08:29.234808   62554 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0914 18:08:29.234818   62554 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0914 18:08:29.234871   62554 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.126 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-044534 NodeName:embed-certs-044534 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.126"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.126 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0914 18:08:29.234996   62554 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.126
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-044534"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.126
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.126"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0914 18:08:29.235054   62554 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0914 18:08:29.244554   62554 binaries.go:44] Found k8s binaries, skipping transfer
	I0914 18:08:29.244631   62554 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0914 18:08:29.253622   62554 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0914 18:08:29.270046   62554 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0914 18:08:29.285751   62554 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
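
The kubeadm.yaml.new just copied over is the kubeadm.go:187 dump shown above, produced by substituting per-node values (advertise address, node name, CRI socket, pod subnet) into a template. A stripped-down sketch of that kind of templating; the field names and template text are illustrative, not minikube's actual template:

package main

import (
	"os"
	"text/template"
)

// nodeParams holds the handful of values that vary per cluster in this sketch.
type nodeParams struct {
	AdvertiseAddress  string
	NodeName          string
	CRISocket         string
	PodSubnet         string
	KubernetesVersion string
}

const kubeadmTmpl = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: 8443
nodeRegistration:
  criSocket: {{.CRISocket}}
  name: "{{.NodeName}}"
  kubeletExtraArgs:
    node-ip: {{.AdvertiseAddress}}
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: {{.KubernetesVersion}}
networking:
  podSubnet: "{{.PodSubnet}}"
`

func main() {
	t := template.Must(template.New("kubeadm").Parse(kubeadmTmpl))
	p := nodeParams{
		AdvertiseAddress:  "192.168.50.126",
		NodeName:          "embed-certs-044534",
		CRISocket:         "unix:///var/run/crio/crio.sock",
		PodSubnet:         "10.244.0.0/16",
		KubernetesVersion: "v1.31.1",
	}
	if err := t.Execute(os.Stdout, p); err != nil {
		panic(err)
	}
}
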
	I0914 18:08:29.303567   62554 ssh_runner.go:195] Run: grep 192.168.50.126	control-plane.minikube.internal$ /etc/hosts
	I0914 18:08:29.307335   62554 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.126	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0914 18:08:29.319510   62554 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 18:08:29.442649   62554 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0914 18:08:29.459657   62554 certs.go:68] Setting up /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/embed-certs-044534 for IP: 192.168.50.126
	I0914 18:08:29.459687   62554 certs.go:194] generating shared ca certs ...
	I0914 18:08:29.459709   62554 certs.go:226] acquiring lock for ca certs: {Name:mkb663a3180967f5f94f0c355b2cd55067394331 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 18:08:29.459908   62554 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19643-8806/.minikube/ca.key
	I0914 18:08:29.459976   62554 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19643-8806/.minikube/proxy-client-ca.key
	I0914 18:08:29.459995   62554 certs.go:256] generating profile certs ...
	I0914 18:08:29.460166   62554 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/embed-certs-044534/client.key
	I0914 18:08:29.460247   62554 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/embed-certs-044534/apiserver.key.15c978c5
	I0914 18:08:29.460301   62554 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/embed-certs-044534/proxy-client.key
	I0914 18:08:29.460447   62554 certs.go:484] found cert: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/16016.pem (1338 bytes)
	W0914 18:08:29.460491   62554 certs.go:480] ignoring /home/jenkins/minikube-integration/19643-8806/.minikube/certs/16016_empty.pem, impossibly tiny 0 bytes
	I0914 18:08:29.460505   62554 certs.go:484] found cert: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca-key.pem (1679 bytes)
	I0914 18:08:29.460537   62554 certs.go:484] found cert: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca.pem (1082 bytes)
	I0914 18:08:29.460581   62554 certs.go:484] found cert: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/cert.pem (1123 bytes)
	I0914 18:08:29.460605   62554 certs.go:484] found cert: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/key.pem (1675 bytes)
	I0914 18:08:29.460649   62554 certs.go:484] found cert: /home/jenkins/minikube-integration/19643-8806/.minikube/files/etc/ssl/certs/160162.pem (1708 bytes)
	I0914 18:08:29.461415   62554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0914 18:08:29.501260   62554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0914 18:08:29.531940   62554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0914 18:08:29.577959   62554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0914 18:08:29.604067   62554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/embed-certs-044534/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0914 18:08:29.635335   62554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/embed-certs-044534/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0914 18:08:29.658841   62554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/embed-certs-044534/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0914 18:08:29.684149   62554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/embed-certs-044534/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0914 18:08:29.709354   62554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0914 18:08:29.733812   62554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/certs/16016.pem --> /usr/share/ca-certificates/16016.pem (1338 bytes)
	I0914 18:08:29.758427   62554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/files/etc/ssl/certs/160162.pem --> /usr/share/ca-certificates/160162.pem (1708 bytes)
	I0914 18:08:29.783599   62554 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0914 18:08:29.802188   62554 ssh_runner.go:195] Run: openssl version
	I0914 18:08:29.808277   62554 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0914 18:08:29.821167   62554 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0914 18:08:29.825911   62554 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 14 16:44 /usr/share/ca-certificates/minikubeCA.pem
	I0914 18:08:29.825978   62554 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0914 18:08:29.832160   62554 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0914 18:08:29.844395   62554 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16016.pem && ln -fs /usr/share/ca-certificates/16016.pem /etc/ssl/certs/16016.pem"
	I0914 18:08:29.856943   62554 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16016.pem
	I0914 18:08:29.861671   62554 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 14 17:01 /usr/share/ca-certificates/16016.pem
	I0914 18:08:29.861730   62554 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16016.pem
	I0914 18:08:29.867506   62554 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16016.pem /etc/ssl/certs/51391683.0"
	I0914 18:08:29.878004   62554 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/160162.pem && ln -fs /usr/share/ca-certificates/160162.pem /etc/ssl/certs/160162.pem"
	I0914 18:08:29.890322   62554 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/160162.pem
	I0914 18:08:29.894985   62554 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 14 17:01 /usr/share/ca-certificates/160162.pem
	I0914 18:08:29.895053   62554 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/160162.pem
	I0914 18:08:29.900837   62554 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/160162.pem /etc/ssl/certs/3ec20f2e.0"
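
The ln -fs steps above create the hash-named symlinks (b5213941.0, 51391683.0, 3ec20f2e.0) that OpenSSL uses to look up CA certificates by subject hash. A small sketch that shells out to openssl for the hash and creates the link; the cert path and destination directory are placeholders, and the openssl binary must be on PATH:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkBySubjectHash creates <dir>/<hash>.0 pointing at certPath, the same
// layout the logged "openssl x509 -hash -noout" + "ln -fs" commands produce.
func linkBySubjectHash(certPath, dir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(dir, hash+".0")
	_ = os.Remove(link) // -f behaviour: replace an existing link
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, "link failed:", err)
		os.Exit(1)
	}
}
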
	I0914 18:08:25.409780   62996 main.go:141] libmachine: (old-k8s-version-556121) Waiting to get IP...
	I0914 18:08:25.410880   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 18:08:25.411287   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | unable to find current IP address of domain old-k8s-version-556121 in network mk-old-k8s-version-556121
	I0914 18:08:25.411359   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | I0914 18:08:25.411268   63916 retry.go:31] will retry after 190.165859ms: waiting for machine to come up
	I0914 18:08:25.602661   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 18:08:25.603210   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | unable to find current IP address of domain old-k8s-version-556121 in network mk-old-k8s-version-556121
	I0914 18:08:25.603235   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | I0914 18:08:25.603161   63916 retry.go:31] will retry after 274.368109ms: waiting for machine to come up
	I0914 18:08:25.879976   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 18:08:25.880476   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | unable to find current IP address of domain old-k8s-version-556121 in network mk-old-k8s-version-556121
	I0914 18:08:25.880509   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | I0914 18:08:25.880412   63916 retry.go:31] will retry after 476.865698ms: waiting for machine to come up
	I0914 18:08:26.359279   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 18:08:26.359815   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | unable to find current IP address of domain old-k8s-version-556121 in network mk-old-k8s-version-556121
	I0914 18:08:26.359845   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | I0914 18:08:26.359775   63916 retry.go:31] will retry after 474.163339ms: waiting for machine to come up
	I0914 18:08:26.835268   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 18:08:26.835953   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | unable to find current IP address of domain old-k8s-version-556121 in network mk-old-k8s-version-556121
	I0914 18:08:26.835983   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | I0914 18:08:26.835914   63916 retry.go:31] will retry after 567.661702ms: waiting for machine to come up
	I0914 18:08:27.404884   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 18:08:27.405341   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | unable to find current IP address of domain old-k8s-version-556121 in network mk-old-k8s-version-556121
	I0914 18:08:27.405370   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | I0914 18:08:27.405297   63916 retry.go:31] will retry after 852.429203ms: waiting for machine to come up
	I0914 18:08:28.259542   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 18:08:28.260217   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | unable to find current IP address of domain old-k8s-version-556121 in network mk-old-k8s-version-556121
	I0914 18:08:28.260243   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | I0914 18:08:28.260154   63916 retry.go:31] will retry after 1.085703288s: waiting for machine to come up
	I0914 18:08:29.347849   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 18:08:29.348268   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | unable to find current IP address of domain old-k8s-version-556121 in network mk-old-k8s-version-556121
	I0914 18:08:29.348289   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | I0914 18:08:29.348235   63916 retry.go:31] will retry after 1.387665735s: waiting for machine to come up
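
While the embed-certs node is being configured, the old-k8s-version VM is still waiting for a DHCP lease, retrying with a growing, jittered delay. A minimal retry-with-backoff loop in the same spirit; the lookup function, the returned IP, and the timings are made up for the example:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

var errNoLease = errors.New("unable to find current IP address")

// lookupIP stands in for the libvirt DHCP lease query; here it fails a few
// times before "finding" a placeholder address, purely to drive the loop.
func lookupIP(attempt int) (string, error) {
	if attempt < 4 {
		return "", errNoLease
	}
	return "192.168.61.2", nil
}

func main() {
	delay := 200 * time.Millisecond
	for attempt := 1; ; attempt++ {
		ip, err := lookupIP(attempt)
		if err == nil {
			fmt.Printf("machine came up with IP %s after %d attempts\n", ip, attempt)
			return
		}
		wait := delay + time.Duration(rand.Int63n(int64(delay))) // jitter, like retry.go's varying delays
		fmt.Printf("will retry after %v: waiting for machine to come up\n", wait)
		time.Sleep(wait)
		delay *= 2
	}
}
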
	I0914 18:08:29.911102   62554 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0914 18:08:29.915546   62554 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0914 18:08:29.921470   62554 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0914 18:08:29.927238   62554 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0914 18:08:29.933122   62554 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0914 18:08:29.938829   62554 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0914 18:08:29.944811   62554 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0914 18:08:29.950679   62554 kubeadm.go:392] StartCluster: {Name:embed-certs-044534 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19643/minikube-v1.34.0-1726281733-19643-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726281268-19643@sha256:cce8be0c1ac4e3d852132008ef1cc1dcf5b79f708d025db83f146ae65db32e8e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31
.1 ClusterName:embed-certs-044534 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.126 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fal
se MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0914 18:08:29.950762   62554 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0914 18:08:29.950866   62554 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0914 18:08:29.987553   62554 cri.go:89] found id: ""
	I0914 18:08:29.987626   62554 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0914 18:08:29.998690   62554 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0914 18:08:29.998713   62554 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0914 18:08:29.998765   62554 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0914 18:08:30.009411   62554 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0914 18:08:30.010804   62554 kubeconfig.go:125] found "embed-certs-044534" server: "https://192.168.50.126:8443"
	I0914 18:08:30.013635   62554 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0914 18:08:30.023903   62554 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.126
	I0914 18:08:30.023937   62554 kubeadm.go:1160] stopping kube-system containers ...
	I0914 18:08:30.023951   62554 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0914 18:08:30.024017   62554 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0914 18:08:30.067767   62554 cri.go:89] found id: ""
	I0914 18:08:30.067842   62554 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0914 18:08:30.087326   62554 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0914 18:08:30.098162   62554 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0914 18:08:30.098180   62554 kubeadm.go:157] found existing configuration files:
	
	I0914 18:08:30.098218   62554 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0914 18:08:30.108239   62554 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0914 18:08:30.108296   62554 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0914 18:08:30.118913   62554 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0914 18:08:30.129091   62554 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0914 18:08:30.129172   62554 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0914 18:08:30.139658   62554 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0914 18:08:30.148838   62554 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0914 18:08:30.148923   62554 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0914 18:08:30.158386   62554 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0914 18:08:30.167282   62554 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0914 18:08:30.167354   62554 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
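
The four grep/rm pairs above implement a simple rule: if an existing kubeconfig under /etc/kubernetes does not already point at control-plane.minikube.internal:8443, remove it so the kubeadm phases that follow regenerate it. A compact sketch of that check, with the file list and endpoint hard-coded for illustration:

package main

import (
	"bytes"
	"fmt"
	"os"
)

func main() {
	endpoint := []byte("https://control-plane.minikube.internal:8443")
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		data, err := os.ReadFile(f)
		if err != nil || !bytes.Contains(data, endpoint) {
			// Missing or pointing elsewhere: drop it so "kubeadm init phase
			// kubeconfig" writes a fresh one.
			fmt.Printf("%q may not contain the expected endpoint - will remove\n", f)
			_ = os.Remove(f)
		}
	}
}
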
	I0914 18:08:30.176443   62554 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0914 18:08:30.185476   62554 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 18:08:30.310603   62554 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 18:08:31.243123   62554 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0914 18:08:31.457657   62554 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 18:08:31.531992   62554 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0914 18:08:31.625580   62554 api_server.go:52] waiting for apiserver process to appear ...
	I0914 18:08:31.625683   62554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:08:32.125744   62554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:08:32.626056   62554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:08:33.126817   62554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:08:33.146478   62554 api_server.go:72] duration metric: took 1.520896575s to wait for apiserver process to appear ...
	I0914 18:08:33.146517   62554 api_server.go:88] waiting for apiserver healthz status ...
	I0914 18:08:33.146543   62554 api_server.go:253] Checking apiserver healthz at https://192.168.50.126:8443/healthz ...
	I0914 18:08:33.147106   62554 api_server.go:269] stopped: https://192.168.50.126:8443/healthz: Get "https://192.168.50.126:8443/healthz": dial tcp 192.168.50.126:8443: connect: connection refused
	I0914 18:08:33.646672   62554 api_server.go:253] Checking apiserver healthz at https://192.168.50.126:8443/healthz ...
	I0914 18:08:30.737338   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 18:08:30.737792   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | unable to find current IP address of domain old-k8s-version-556121 in network mk-old-k8s-version-556121
	I0914 18:08:30.737844   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | I0914 18:08:30.737738   63916 retry.go:31] will retry after 1.803773185s: waiting for machine to come up
	I0914 18:08:32.543684   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 18:08:32.544156   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | unable to find current IP address of domain old-k8s-version-556121 in network mk-old-k8s-version-556121
	I0914 18:08:32.544182   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | I0914 18:08:32.544107   63916 retry.go:31] will retry after 1.828120666s: waiting for machine to come up
	I0914 18:08:34.373701   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 18:08:34.374182   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | unable to find current IP address of domain old-k8s-version-556121 in network mk-old-k8s-version-556121
	I0914 18:08:34.374211   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | I0914 18:08:34.374120   63916 retry.go:31] will retry after 2.720782735s: waiting for machine to come up
	I0914 18:08:35.687169   62554 api_server.go:279] https://192.168.50.126:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0914 18:08:35.687200   62554 api_server.go:103] status: https://192.168.50.126:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0914 18:08:35.687221   62554 api_server.go:253] Checking apiserver healthz at https://192.168.50.126:8443/healthz ...
	I0914 18:08:35.737352   62554 api_server.go:279] https://192.168.50.126:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0914 18:08:35.737410   62554 api_server.go:103] status: https://192.168.50.126:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0914 18:08:36.146777   62554 api_server.go:253] Checking apiserver healthz at https://192.168.50.126:8443/healthz ...
	I0914 18:08:36.151156   62554 api_server.go:279] https://192.168.50.126:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0914 18:08:36.151185   62554 api_server.go:103] status: https://192.168.50.126:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0914 18:08:36.647380   62554 api_server.go:253] Checking apiserver healthz at https://192.168.50.126:8443/healthz ...
	I0914 18:08:36.655444   62554 api_server.go:279] https://192.168.50.126:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0914 18:08:36.655477   62554 api_server.go:103] status: https://192.168.50.126:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0914 18:08:37.146971   62554 api_server.go:253] Checking apiserver healthz at https://192.168.50.126:8443/healthz ...
	I0914 18:08:37.151233   62554 api_server.go:279] https://192.168.50.126:8443/healthz returned 200:
	ok
	I0914 18:08:37.160642   62554 api_server.go:141] control plane version: v1.31.1
	I0914 18:08:37.160671   62554 api_server.go:131] duration metric: took 4.014146932s to wait for apiserver health ...
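	(Annotation: the repeated 500 responses above come from polling the apiserver's /healthz endpoint until every poststarthook reports ok. Below is a minimal sketch of such a retry loop, not minikube's actual api_server.go code; the endpoint URL and timeout are taken from the log, and TLS verification is skipped on the assumption that the apiserver is still serving its bootstrap certificate.)

```go
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls url until it returns HTTP 200 or the timeout expires.
// A 500 with "[-]poststarthook/... failed" lines simply means bootstrap hooks
// are still running, so the caller retries.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   5 * time.Second,
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // all poststarthooks reported ok
			}
			fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver at %s did not become healthy within %s", url, timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.50.126:8443/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}
```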
	I0914 18:08:37.160679   62554 cni.go:84] Creating CNI manager for ""
	I0914 18:08:37.160686   62554 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0914 18:08:37.162836   62554 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0914 18:08:37.164378   62554 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0914 18:08:37.183377   62554 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
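	(Annotation: the 496-byte file copied to /etc/cni/net.d/1-k8s.conflist is minikube's bridge CNI configuration; its exact contents are not shown in the log. For illustration only, the sketch below writes a generic bridge-plugin conflist of the same shape. The subnet, output path, and plugin list are assumptions, not values from the log.)

```go
package main

import (
	"encoding/json"
	"log"
	"os"
)

// writeBridgeConflist writes a minimal CNI conflist for the standard bridge
// plugin with host-local IPAM. Field names follow the CNI spec; the subnet
// below is an illustrative placeholder, not the value minikube uses.
func writeBridgeConflist(path string) error {
	conf := map[string]any{
		"cniVersion": "0.4.0",
		"name":       "bridge",
		"plugins": []map[string]any{
			{
				"type":      "bridge",
				"bridge":    "bridge",
				"isGateway": true,
				"ipMasq":    true,
				"ipam": map[string]any{
					"type":   "host-local",
					"subnet": "10.244.0.0/16",
				},
			},
			{"type": "portmap", "capabilities": map[string]bool{"portMappings": true}},
		},
	}
	data, err := json.MarshalIndent(conf, "", "  ")
	if err != nil {
		return err
	}
	return os.WriteFile(path, data, 0o644)
}

func main() {
	if err := writeBridgeConflist("/tmp/1-k8s.conflist"); err != nil {
		log.Fatal(err)
	}
}
```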
	I0914 18:08:37.210701   62554 system_pods.go:43] waiting for kube-system pods to appear ...
	I0914 18:08:37.222258   62554 system_pods.go:59] 8 kube-system pods found
	I0914 18:08:37.222304   62554 system_pods.go:61] "coredns-7c65d6cfc9-59dm5" [55e67ff8-cf54-41fc-af46-160085787f69] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0914 18:08:37.222316   62554 system_pods.go:61] "etcd-embed-certs-044534" [932ca8e3-a777-4bb3-bdc2-6c1f1d293d4a] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0914 18:08:37.222331   62554 system_pods.go:61] "kube-apiserver-embed-certs-044534" [f71e6720-c32c-426f-8620-b56eadf5e33b] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0914 18:08:37.222351   62554 system_pods.go:61] "kube-controller-manager-embed-certs-044534" [b93c261f-303f-43bb-8b33-4f97dc287809] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0914 18:08:37.222359   62554 system_pods.go:61] "kube-proxy-nkdth" [3762b613-c50f-4ba9-af52-371b139f9b6e] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0914 18:08:37.222368   62554 system_pods.go:61] "kube-scheduler-embed-certs-044534" [65da2ca2-0405-4726-a2dc-dd13519c336a] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0914 18:08:37.222377   62554 system_pods.go:61] "metrics-server-6867b74b74-stwfz" [ccc73057-4710-4e41-b643-d793d9b01175] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 18:08:37.222393   62554 system_pods.go:61] "storage-provisioner" [660fd3e3-ce57-4275-9fe1-bcceba75d8a0] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0914 18:08:37.222405   62554 system_pods.go:74] duration metric: took 11.676128ms to wait for pod list to return data ...
	I0914 18:08:37.222420   62554 node_conditions.go:102] verifying NodePressure condition ...
	I0914 18:08:37.227047   62554 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0914 18:08:37.227087   62554 node_conditions.go:123] node cpu capacity is 2
	I0914 18:08:37.227104   62554 node_conditions.go:105] duration metric: took 4.678826ms to run NodePressure ...
	I0914 18:08:37.227124   62554 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 18:08:37.510868   62554 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0914 18:08:37.515839   62554 kubeadm.go:739] kubelet initialised
	I0914 18:08:37.515863   62554 kubeadm.go:740] duration metric: took 4.967389ms waiting for restarted kubelet to initialise ...
	I0914 18:08:37.515871   62554 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 18:08:37.520412   62554 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-59dm5" in "kube-system" namespace to be "Ready" ...
	I0914 18:08:39.528469   62554 pod_ready.go:103] pod "coredns-7c65d6cfc9-59dm5" in "kube-system" namespace has status "Ready":"False"
	I0914 18:08:37.097976   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 18:08:37.098462   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | unable to find current IP address of domain old-k8s-version-556121 in network mk-old-k8s-version-556121
	I0914 18:08:37.098499   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | I0914 18:08:37.098402   63916 retry.go:31] will retry after 2.748765758s: waiting for machine to come up
	I0914 18:08:39.849058   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 18:08:39.849634   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | unable to find current IP address of domain old-k8s-version-556121 in network mk-old-k8s-version-556121
	I0914 18:08:39.849665   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | I0914 18:08:39.849559   63916 retry.go:31] will retry after 3.687679512s: waiting for machine to come up
	I0914 18:08:42.028017   62554 pod_ready.go:103] pod "coredns-7c65d6cfc9-59dm5" in "kube-system" namespace has status "Ready":"False"
	I0914 18:08:44.526502   62554 pod_ready.go:103] pod "coredns-7c65d6cfc9-59dm5" in "kube-system" namespace has status "Ready":"False"
	I0914 18:08:45.103061   63448 start.go:364] duration metric: took 2m4.701591278s to acquireMachinesLock for "default-k8s-diff-port-243449"
	I0914 18:08:45.103116   63448 start.go:96] Skipping create...Using existing machine configuration
	I0914 18:08:45.103124   63448 fix.go:54] fixHost starting: 
	I0914 18:08:45.103555   63448 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 18:08:45.103626   63448 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 18:08:45.120496   63448 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34337
	I0914 18:08:45.121098   63448 main.go:141] libmachine: () Calling .GetVersion
	I0914 18:08:45.122023   63448 main.go:141] libmachine: Using API Version  1
	I0914 18:08:45.122050   63448 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 18:08:45.122440   63448 main.go:141] libmachine: () Calling .GetMachineName
	I0914 18:08:45.122631   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .DriverName
	I0914 18:08:45.122792   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetState
	I0914 18:08:45.124473   63448 fix.go:112] recreateIfNeeded on default-k8s-diff-port-243449: state=Stopped err=<nil>
	I0914 18:08:45.124500   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .DriverName
	W0914 18:08:45.124633   63448 fix.go:138] unexpected machine state, will restart: <nil>
	I0914 18:08:45.126255   63448 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-243449" ...
	I0914 18:08:45.127296   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .Start
	I0914 18:08:45.127469   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Ensuring networks are active...
	I0914 18:08:45.128415   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Ensuring network default is active
	I0914 18:08:45.128823   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Ensuring network mk-default-k8s-diff-port-243449 is active
	I0914 18:08:45.129257   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Getting domain xml...
	I0914 18:08:45.130055   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Creating domain...
	I0914 18:08:43.541607   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 18:08:43.542188   62996 main.go:141] libmachine: (old-k8s-version-556121) Found IP for machine: 192.168.83.80
	I0914 18:08:43.542220   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has current primary IP address 192.168.83.80 and MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 18:08:43.542230   62996 main.go:141] libmachine: (old-k8s-version-556121) Reserving static IP address...
	I0914 18:08:43.542686   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | found host DHCP lease matching {name: "old-k8s-version-556121", mac: "52:54:00:76:25:ab", ip: "192.168.83.80"} in network mk-old-k8s-version-556121: {Iface:virbr1 ExpiryTime:2024-09-14 19:08:35 +0000 UTC Type:0 Mac:52:54:00:76:25:ab Iaid: IPaddr:192.168.83.80 Prefix:24 Hostname:old-k8s-version-556121 Clientid:01:52:54:00:76:25:ab}
	I0914 18:08:43.542711   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | skip adding static IP to network mk-old-k8s-version-556121 - found existing host DHCP lease matching {name: "old-k8s-version-556121", mac: "52:54:00:76:25:ab", ip: "192.168.83.80"}
	I0914 18:08:43.542728   62996 main.go:141] libmachine: (old-k8s-version-556121) Reserved static IP address: 192.168.83.80
	I0914 18:08:43.542748   62996 main.go:141] libmachine: (old-k8s-version-556121) Waiting for SSH to be available...
	I0914 18:08:43.542770   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | Getting to WaitForSSH function...
	I0914 18:08:43.545361   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 18:08:43.545798   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:25:ab", ip: ""} in network mk-old-k8s-version-556121: {Iface:virbr1 ExpiryTime:2024-09-14 19:08:35 +0000 UTC Type:0 Mac:52:54:00:76:25:ab Iaid: IPaddr:192.168.83.80 Prefix:24 Hostname:old-k8s-version-556121 Clientid:01:52:54:00:76:25:ab}
	I0914 18:08:43.545828   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined IP address 192.168.83.80 and MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 18:08:43.545984   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | Using SSH client type: external
	I0914 18:08:43.546021   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | Using SSH private key: /home/jenkins/minikube-integration/19643-8806/.minikube/machines/old-k8s-version-556121/id_rsa (-rw-------)
	I0914 18:08:43.546067   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.83.80 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19643-8806/.minikube/machines/old-k8s-version-556121/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0914 18:08:43.546091   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | About to run SSH command:
	I0914 18:08:43.546109   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | exit 0
	I0914 18:08:43.686605   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | SSH cmd err, output: <nil>: 
	I0914 18:08:43.687033   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetConfigRaw
	I0914 18:08:43.750102   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetIP
	I0914 18:08:43.753303   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 18:08:43.753653   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:25:ab", ip: ""} in network mk-old-k8s-version-556121: {Iface:virbr1 ExpiryTime:2024-09-14 19:08:35 +0000 UTC Type:0 Mac:52:54:00:76:25:ab Iaid: IPaddr:192.168.83.80 Prefix:24 Hostname:old-k8s-version-556121 Clientid:01:52:54:00:76:25:ab}
	I0914 18:08:43.753696   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined IP address 192.168.83.80 and MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 18:08:43.754107   62996 profile.go:143] Saving config to /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/old-k8s-version-556121/config.json ...
	I0914 18:08:43.802426   62996 machine.go:93] provisionDockerMachine start ...
	I0914 18:08:43.802497   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .DriverName
	I0914 18:08:43.802858   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHHostname
	I0914 18:08:43.805944   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 18:08:43.806303   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:25:ab", ip: ""} in network mk-old-k8s-version-556121: {Iface:virbr1 ExpiryTime:2024-09-14 19:08:35 +0000 UTC Type:0 Mac:52:54:00:76:25:ab Iaid: IPaddr:192.168.83.80 Prefix:24 Hostname:old-k8s-version-556121 Clientid:01:52:54:00:76:25:ab}
	I0914 18:08:43.806346   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined IP address 192.168.83.80 and MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 18:08:43.806722   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHPort
	I0914 18:08:43.806951   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHKeyPath
	I0914 18:08:43.807130   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHKeyPath
	I0914 18:08:43.807298   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHUsername
	I0914 18:08:43.807469   62996 main.go:141] libmachine: Using SSH client type: native
	I0914 18:08:43.807687   62996 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.83.80 22 <nil> <nil>}
	I0914 18:08:43.807700   62996 main.go:141] libmachine: About to run SSH command:
	hostname
	I0914 18:08:43.906427   62996 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0914 18:08:43.906467   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetMachineName
	I0914 18:08:43.906725   62996 buildroot.go:166] provisioning hostname "old-k8s-version-556121"
	I0914 18:08:43.906787   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetMachineName
	I0914 18:08:43.906978   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHHostname
	I0914 18:08:43.909891   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 18:08:43.910262   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:25:ab", ip: ""} in network mk-old-k8s-version-556121: {Iface:virbr1 ExpiryTime:2024-09-14 19:08:35 +0000 UTC Type:0 Mac:52:54:00:76:25:ab Iaid: IPaddr:192.168.83.80 Prefix:24 Hostname:old-k8s-version-556121 Clientid:01:52:54:00:76:25:ab}
	I0914 18:08:43.910295   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined IP address 192.168.83.80 and MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 18:08:43.910545   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHPort
	I0914 18:08:43.910771   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHKeyPath
	I0914 18:08:43.910908   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHKeyPath
	I0914 18:08:43.911062   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHUsername
	I0914 18:08:43.911221   62996 main.go:141] libmachine: Using SSH client type: native
	I0914 18:08:43.911418   62996 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.83.80 22 <nil> <nil>}
	I0914 18:08:43.911430   62996 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-556121 && echo "old-k8s-version-556121" | sudo tee /etc/hostname
	I0914 18:08:44.028748   62996 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-556121
	
	I0914 18:08:44.028774   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHHostname
	I0914 18:08:44.031512   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 18:08:44.031824   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:25:ab", ip: ""} in network mk-old-k8s-version-556121: {Iface:virbr1 ExpiryTime:2024-09-14 19:08:35 +0000 UTC Type:0 Mac:52:54:00:76:25:ab Iaid: IPaddr:192.168.83.80 Prefix:24 Hostname:old-k8s-version-556121 Clientid:01:52:54:00:76:25:ab}
	I0914 18:08:44.031848   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined IP address 192.168.83.80 and MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 18:08:44.032009   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHPort
	I0914 18:08:44.032145   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHKeyPath
	I0914 18:08:44.032311   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHKeyPath
	I0914 18:08:44.032445   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHUsername
	I0914 18:08:44.032583   62996 main.go:141] libmachine: Using SSH client type: native
	I0914 18:08:44.032792   62996 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.83.80 22 <nil> <nil>}
	I0914 18:08:44.032809   62996 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-556121' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-556121/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-556121' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0914 18:08:44.140041   62996 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0914 18:08:44.140068   62996 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19643-8806/.minikube CaCertPath:/home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19643-8806/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19643-8806/.minikube}
	I0914 18:08:44.140094   62996 buildroot.go:174] setting up certificates
	I0914 18:08:44.140103   62996 provision.go:84] configureAuth start
	I0914 18:08:44.140111   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetMachineName
	I0914 18:08:44.140439   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetIP
	I0914 18:08:44.143050   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 18:08:44.143454   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:25:ab", ip: ""} in network mk-old-k8s-version-556121: {Iface:virbr1 ExpiryTime:2024-09-14 19:08:35 +0000 UTC Type:0 Mac:52:54:00:76:25:ab Iaid: IPaddr:192.168.83.80 Prefix:24 Hostname:old-k8s-version-556121 Clientid:01:52:54:00:76:25:ab}
	I0914 18:08:44.143492   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined IP address 192.168.83.80 and MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 18:08:44.143678   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHHostname
	I0914 18:08:44.146487   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 18:08:44.146947   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:25:ab", ip: ""} in network mk-old-k8s-version-556121: {Iface:virbr1 ExpiryTime:2024-09-14 19:08:35 +0000 UTC Type:0 Mac:52:54:00:76:25:ab Iaid: IPaddr:192.168.83.80 Prefix:24 Hostname:old-k8s-version-556121 Clientid:01:52:54:00:76:25:ab}
	I0914 18:08:44.146971   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined IP address 192.168.83.80 and MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 18:08:44.147147   62996 provision.go:143] copyHostCerts
	I0914 18:08:44.147213   62996 exec_runner.go:144] found /home/jenkins/minikube-integration/19643-8806/.minikube/ca.pem, removing ...
	I0914 18:08:44.147224   62996 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19643-8806/.minikube/ca.pem
	I0914 18:08:44.147287   62996 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19643-8806/.minikube/ca.pem (1082 bytes)
	I0914 18:08:44.147440   62996 exec_runner.go:144] found /home/jenkins/minikube-integration/19643-8806/.minikube/cert.pem, removing ...
	I0914 18:08:44.147450   62996 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19643-8806/.minikube/cert.pem
	I0914 18:08:44.147475   62996 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19643-8806/.minikube/cert.pem (1123 bytes)
	I0914 18:08:44.147530   62996 exec_runner.go:144] found /home/jenkins/minikube-integration/19643-8806/.minikube/key.pem, removing ...
	I0914 18:08:44.147538   62996 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19643-8806/.minikube/key.pem
	I0914 18:08:44.147558   62996 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19643-8806/.minikube/key.pem (1675 bytes)
	I0914 18:08:44.147613   62996 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19643-8806/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-556121 san=[127.0.0.1 192.168.83.80 localhost minikube old-k8s-version-556121]
	I0914 18:08:44.500305   62996 provision.go:177] copyRemoteCerts
	I0914 18:08:44.500395   62996 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0914 18:08:44.500430   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHHostname
	I0914 18:08:44.503376   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 18:08:44.503790   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:25:ab", ip: ""} in network mk-old-k8s-version-556121: {Iface:virbr1 ExpiryTime:2024-09-14 19:08:35 +0000 UTC Type:0 Mac:52:54:00:76:25:ab Iaid: IPaddr:192.168.83.80 Prefix:24 Hostname:old-k8s-version-556121 Clientid:01:52:54:00:76:25:ab}
	I0914 18:08:44.503828   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined IP address 192.168.83.80 and MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 18:08:44.503972   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHPort
	I0914 18:08:44.504194   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHKeyPath
	I0914 18:08:44.504352   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHUsername
	I0914 18:08:44.504531   62996 sshutil.go:53] new ssh client: &{IP:192.168.83.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/old-k8s-version-556121/id_rsa Username:docker}
	I0914 18:08:44.584362   62996 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0914 18:08:44.607734   62996 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0914 18:08:44.630267   62996 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0914 18:08:44.653997   62996 provision.go:87] duration metric: took 513.857804ms to configureAuth
	I0914 18:08:44.654029   62996 buildroot.go:189] setting minikube options for container-runtime
	I0914 18:08:44.654259   62996 config.go:182] Loaded profile config "old-k8s-version-556121": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0914 18:08:44.654338   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHHostname
	I0914 18:08:44.657020   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 18:08:44.657416   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:25:ab", ip: ""} in network mk-old-k8s-version-556121: {Iface:virbr1 ExpiryTime:2024-09-14 19:08:35 +0000 UTC Type:0 Mac:52:54:00:76:25:ab Iaid: IPaddr:192.168.83.80 Prefix:24 Hostname:old-k8s-version-556121 Clientid:01:52:54:00:76:25:ab}
	I0914 18:08:44.657442   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined IP address 192.168.83.80 and MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 18:08:44.657676   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHPort
	I0914 18:08:44.657884   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHKeyPath
	I0914 18:08:44.658047   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHKeyPath
	I0914 18:08:44.658228   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHUsername
	I0914 18:08:44.658382   62996 main.go:141] libmachine: Using SSH client type: native
	I0914 18:08:44.658584   62996 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.83.80 22 <nil> <nil>}
	I0914 18:08:44.658602   62996 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0914 18:08:44.877074   62996 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0914 18:08:44.877103   62996 machine.go:96] duration metric: took 1.074648772s to provisionDockerMachine
	I0914 18:08:44.877117   62996 start.go:293] postStartSetup for "old-k8s-version-556121" (driver="kvm2")
	I0914 18:08:44.877128   62996 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0914 18:08:44.877155   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .DriverName
	I0914 18:08:44.877491   62996 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0914 18:08:44.877522   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHHostname
	I0914 18:08:44.880792   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 18:08:44.881167   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:25:ab", ip: ""} in network mk-old-k8s-version-556121: {Iface:virbr1 ExpiryTime:2024-09-14 19:08:35 +0000 UTC Type:0 Mac:52:54:00:76:25:ab Iaid: IPaddr:192.168.83.80 Prefix:24 Hostname:old-k8s-version-556121 Clientid:01:52:54:00:76:25:ab}
	I0914 18:08:44.881197   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined IP address 192.168.83.80 and MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 18:08:44.881472   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHPort
	I0914 18:08:44.881693   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHKeyPath
	I0914 18:08:44.881853   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHUsername
	I0914 18:08:44.881984   62996 sshutil.go:53] new ssh client: &{IP:192.168.83.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/old-k8s-version-556121/id_rsa Username:docker}
	I0914 18:08:44.961211   62996 ssh_runner.go:195] Run: cat /etc/os-release
	I0914 18:08:44.965472   62996 info.go:137] Remote host: Buildroot 2023.02.9
	I0914 18:08:44.965507   62996 filesync.go:126] Scanning /home/jenkins/minikube-integration/19643-8806/.minikube/addons for local assets ...
	I0914 18:08:44.965583   62996 filesync.go:126] Scanning /home/jenkins/minikube-integration/19643-8806/.minikube/files for local assets ...
	I0914 18:08:44.965671   62996 filesync.go:149] local asset: /home/jenkins/minikube-integration/19643-8806/.minikube/files/etc/ssl/certs/160162.pem -> 160162.pem in /etc/ssl/certs
	I0914 18:08:44.965765   62996 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0914 18:08:44.975476   62996 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/files/etc/ssl/certs/160162.pem --> /etc/ssl/certs/160162.pem (1708 bytes)
	I0914 18:08:45.000248   62996 start.go:296] duration metric: took 123.115178ms for postStartSetup
	I0914 18:08:45.000299   62996 fix.go:56] duration metric: took 20.85719914s for fixHost
	I0914 18:08:45.000326   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHHostname
	I0914 18:08:45.002894   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 18:08:45.003216   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:25:ab", ip: ""} in network mk-old-k8s-version-556121: {Iface:virbr1 ExpiryTime:2024-09-14 19:08:35 +0000 UTC Type:0 Mac:52:54:00:76:25:ab Iaid: IPaddr:192.168.83.80 Prefix:24 Hostname:old-k8s-version-556121 Clientid:01:52:54:00:76:25:ab}
	I0914 18:08:45.003247   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined IP address 192.168.83.80 and MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 18:08:45.003407   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHPort
	I0914 18:08:45.003585   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHKeyPath
	I0914 18:08:45.003749   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHKeyPath
	I0914 18:08:45.003880   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHUsername
	I0914 18:08:45.004041   62996 main.go:141] libmachine: Using SSH client type: native
	I0914 18:08:45.004211   62996 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.83.80 22 <nil> <nil>}
	I0914 18:08:45.004221   62996 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0914 18:08:45.102905   62996 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726337325.064071007
	
	I0914 18:08:45.102933   62996 fix.go:216] guest clock: 1726337325.064071007
	I0914 18:08:45.102944   62996 fix.go:229] Guest: 2024-09-14 18:08:45.064071007 +0000 UTC Remote: 2024-09-14 18:08:45.000305051 +0000 UTC m=+219.697616364 (delta=63.765956ms)
	I0914 18:08:45.102967   62996 fix.go:200] guest clock delta is within tolerance: 63.765956ms
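	(Annotation: fix.go compares the guest's wall clock against the host's and only intervenes if the delta exceeds a tolerance; here the 63.7ms delta was accepted. A minimal sketch of that comparison follows; the tolerance value is an assumption, since the log only shows that this delta passed.)

```go
package main

import (
	"fmt"
	"time"
)

// clockDeltaWithinTolerance reports whether the guest clock is close enough
// to the host clock that no resynchronisation is needed.
func clockDeltaWithinTolerance(guest, host time.Time, tolerance time.Duration) (time.Duration, bool) {
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	return delta, delta <= tolerance
}

func main() {
	host := time.Now()
	guest := host.Add(63 * time.Millisecond)                           // roughly the delta seen in the log
	delta, ok := clockDeltaWithinTolerance(guest, host, 2*time.Second) // assumed tolerance
	fmt.Printf("guest clock delta %v, within tolerance: %v\n", delta, ok)
}
```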
	I0914 18:08:45.102973   62996 start.go:83] releasing machines lock for "old-k8s-version-556121", held for 20.959903428s
	I0914 18:08:45.102999   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .DriverName
	I0914 18:08:45.103277   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetIP
	I0914 18:08:45.105995   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 18:08:45.106435   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:25:ab", ip: ""} in network mk-old-k8s-version-556121: {Iface:virbr1 ExpiryTime:2024-09-14 19:08:35 +0000 UTC Type:0 Mac:52:54:00:76:25:ab Iaid: IPaddr:192.168.83.80 Prefix:24 Hostname:old-k8s-version-556121 Clientid:01:52:54:00:76:25:ab}
	I0914 18:08:45.106463   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined IP address 192.168.83.80 and MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 18:08:45.106684   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .DriverName
	I0914 18:08:45.107224   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .DriverName
	I0914 18:08:45.107415   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .DriverName
	I0914 18:08:45.107506   62996 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0914 18:08:45.107556   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHHostname
	I0914 18:08:45.107675   62996 ssh_runner.go:195] Run: cat /version.json
	I0914 18:08:45.107699   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHHostname
	I0914 18:08:45.110528   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 18:08:45.110558   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 18:08:45.110917   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:25:ab", ip: ""} in network mk-old-k8s-version-556121: {Iface:virbr1 ExpiryTime:2024-09-14 19:08:35 +0000 UTC Type:0 Mac:52:54:00:76:25:ab Iaid: IPaddr:192.168.83.80 Prefix:24 Hostname:old-k8s-version-556121 Clientid:01:52:54:00:76:25:ab}
	I0914 18:08:45.110947   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:25:ab", ip: ""} in network mk-old-k8s-version-556121: {Iface:virbr1 ExpiryTime:2024-09-14 19:08:35 +0000 UTC Type:0 Mac:52:54:00:76:25:ab Iaid: IPaddr:192.168.83.80 Prefix:24 Hostname:old-k8s-version-556121 Clientid:01:52:54:00:76:25:ab}
	I0914 18:08:45.110969   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined IP address 192.168.83.80 and MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 18:08:45.111062   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined IP address 192.168.83.80 and MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 18:08:45.111157   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHPort
	I0914 18:08:45.111388   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHKeyPath
	I0914 18:08:45.111407   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHPort
	I0914 18:08:45.111564   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHUsername
	I0914 18:08:45.111582   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHKeyPath
	I0914 18:08:45.111716   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHUsername
	I0914 18:08:45.111758   62996 sshutil.go:53] new ssh client: &{IP:192.168.83.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/old-k8s-version-556121/id_rsa Username:docker}
	I0914 18:08:45.111829   62996 sshutil.go:53] new ssh client: &{IP:192.168.83.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/old-k8s-version-556121/id_rsa Username:docker}
	I0914 18:08:45.187315   62996 ssh_runner.go:195] Run: systemctl --version
	I0914 18:08:45.222737   62996 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0914 18:08:45.372449   62996 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0914 18:08:45.378337   62996 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0914 18:08:45.378395   62996 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0914 18:08:45.396041   62996 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0914 18:08:45.396072   62996 start.go:495] detecting cgroup driver to use...
	I0914 18:08:45.396148   62996 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0914 18:08:45.413530   62996 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0914 18:08:45.428876   62996 docker.go:217] disabling cri-docker service (if available) ...
	I0914 18:08:45.428950   62996 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0914 18:08:45.444066   62996 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0914 18:08:45.458976   62996 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0914 18:08:45.591808   62996 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0914 18:08:45.737299   62996 docker.go:233] disabling docker service ...
	I0914 18:08:45.737382   62996 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0914 18:08:45.752471   62996 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0914 18:08:45.770192   62996 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0914 18:08:45.923691   62996 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0914 18:08:46.054919   62996 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0914 18:08:46.068923   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0914 18:08:46.089366   62996 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0914 18:08:46.089441   62996 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 18:08:46.100025   62996 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0914 18:08:46.100100   62996 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 18:08:46.111015   62996 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 18:08:46.123133   62996 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
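	(Annotation: the sed commands above rewrite single keys in the CRI-O drop-in config: the pause image and the cgroup manager, then the conmon cgroup. The sketch below is a rough Go equivalent of that key-rewriting step, assuming the file uses the simple `key = "value"` form shown in the commands; it is not minikube's implementation.)

```go
package main

import (
	"log"
	"os"
	"regexp"
)

// setCrioKey replaces every `key = ...` line in a CRI-O drop-in config with
// `key = "value"`, mirroring the sed edits for pause_image and cgroup_manager.
func setCrioKey(path, key, value string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
	out := re.ReplaceAll(data, []byte(key+` = "`+value+`"`))
	return os.WriteFile(path, out, 0o644)
}

func main() {
	// illustrative path; on the guest this is /etc/crio/crio.conf.d/02-crio.conf
	const conf = "/tmp/02-crio.conf"
	if err := setCrioKey(conf, "pause_image", "registry.k8s.io/pause:3.2"); err != nil {
		log.Fatal(err)
	}
	if err := setCrioKey(conf, "cgroup_manager", "cgroupfs"); err != nil {
		log.Fatal(err)
	}
}
```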
	I0914 18:08:46.135582   62996 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0914 18:08:46.146937   62996 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0914 18:08:46.158542   62996 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0914 18:08:46.158618   62996 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0914 18:08:46.178181   62996 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0914 18:08:46.188291   62996 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 18:08:46.316875   62996 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0914 18:08:46.407391   62996 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0914 18:08:46.407470   62996 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0914 18:08:46.412103   62996 start.go:563] Will wait 60s for crictl version
	I0914 18:08:46.412164   62996 ssh_runner.go:195] Run: which crictl
	I0914 18:08:46.415903   62996 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0914 18:08:46.457124   62996 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0914 18:08:46.457224   62996 ssh_runner.go:195] Run: crio --version
	I0914 18:08:46.485380   62996 ssh_runner.go:195] Run: crio --version
	I0914 18:08:46.513525   62996 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0914 18:08:46.027201   62554 pod_ready.go:93] pod "coredns-7c65d6cfc9-59dm5" in "kube-system" namespace has status "Ready":"True"
	I0914 18:08:46.027223   62554 pod_ready.go:82] duration metric: took 8.506784658s for pod "coredns-7c65d6cfc9-59dm5" in "kube-system" namespace to be "Ready" ...
	I0914 18:08:46.027232   62554 pod_ready.go:79] waiting up to 4m0s for pod "etcd-embed-certs-044534" in "kube-system" namespace to be "Ready" ...
	I0914 18:08:47.043468   62554 pod_ready.go:93] pod "etcd-embed-certs-044534" in "kube-system" namespace has status "Ready":"True"
	I0914 18:08:47.043499   62554 pod_ready.go:82] duration metric: took 1.016259668s for pod "etcd-embed-certs-044534" in "kube-system" namespace to be "Ready" ...
	I0914 18:08:47.043513   62554 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-embed-certs-044534" in "kube-system" namespace to be "Ready" ...
	I0914 18:08:47.050825   62554 pod_ready.go:93] pod "kube-apiserver-embed-certs-044534" in "kube-system" namespace has status "Ready":"True"
	I0914 18:08:47.050853   62554 pod_ready.go:82] duration metric: took 7.332421ms for pod "kube-apiserver-embed-certs-044534" in "kube-system" namespace to be "Ready" ...
	I0914 18:08:47.050869   62554 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-044534" in "kube-system" namespace to be "Ready" ...
	I0914 18:08:47.561389   62554 pod_ready.go:93] pod "kube-controller-manager-embed-certs-044534" in "kube-system" namespace has status "Ready":"True"
	I0914 18:08:47.561419   62554 pod_ready.go:82] duration metric: took 510.541663ms for pod "kube-controller-manager-embed-certs-044534" in "kube-system" namespace to be "Ready" ...
	I0914 18:08:47.561434   62554 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-nkdth" in "kube-system" namespace to be "Ready" ...
	I0914 18:08:47.568265   62554 pod_ready.go:93] pod "kube-proxy-nkdth" in "kube-system" namespace has status "Ready":"True"
	I0914 18:08:47.568298   62554 pod_ready.go:82] duration metric: took 6.854878ms for pod "kube-proxy-nkdth" in "kube-system" namespace to be "Ready" ...
	I0914 18:08:47.568312   62554 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-embed-certs-044534" in "kube-system" namespace to be "Ready" ...
	I0914 18:08:48.575898   62554 pod_ready.go:93] pod "kube-scheduler-embed-certs-044534" in "kube-system" namespace has status "Ready":"True"
	I0914 18:08:48.575924   62554 pod_ready.go:82] duration metric: took 1.00760412s for pod "kube-scheduler-embed-certs-044534" in "kube-system" namespace to be "Ready" ...
	I0914 18:08:48.575934   62554 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace to be "Ready" ...
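	(Annotation: pod_ready.go keeps polling until each system-critical pod reports the Ready condition as True; in the lines above metrics-server-6867b74b74-stwfz is the one that never does within the window. A minimal sketch of that condition check using the standard k8s.io/api corev1 types follows; the stand-in pod object is constructed locally for illustration rather than fetched from the API server, and the example assumes a module that vendors k8s.io/api.)

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// isPodReady returns true when the pod's Ready condition is True, the same
// signal the "Ready":"True"/"False" log lines above report.
func isPodReady(pod *corev1.Pod) bool {
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			return cond.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Minimal stand-in object; in practice the pod comes from the API server.
	pod := &corev1.Pod{
		Status: corev1.PodStatus{
			Conditions: []corev1.PodCondition{
				{Type: corev1.PodReady, Status: corev1.ConditionFalse},
			},
		},
	}
	fmt.Println("ready:", isPodReady(pod))
}
```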
	I0914 18:08:46.464001   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Waiting to get IP...
	I0914 18:08:46.465004   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | domain default-k8s-diff-port-243449 has defined MAC address 52:54:00:6e:0b:a7 in network mk-default-k8s-diff-port-243449
	I0914 18:08:46.465408   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | unable to find current IP address of domain default-k8s-diff-port-243449 in network mk-default-k8s-diff-port-243449
	I0914 18:08:46.465512   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | I0914 18:08:46.465391   64066 retry.go:31] will retry after 283.185405ms: waiting for machine to come up
	I0914 18:08:46.751155   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | domain default-k8s-diff-port-243449 has defined MAC address 52:54:00:6e:0b:a7 in network mk-default-k8s-diff-port-243449
	I0914 18:08:46.751669   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | unable to find current IP address of domain default-k8s-diff-port-243449 in network mk-default-k8s-diff-port-243449
	I0914 18:08:46.751697   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | I0914 18:08:46.751622   64066 retry.go:31] will retry after 307.273139ms: waiting for machine to come up
	I0914 18:08:47.060812   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | domain default-k8s-diff-port-243449 has defined MAC address 52:54:00:6e:0b:a7 in network mk-default-k8s-diff-port-243449
	I0914 18:08:47.061855   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | unable to find current IP address of domain default-k8s-diff-port-243449 in network mk-default-k8s-diff-port-243449
	I0914 18:08:47.061889   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | I0914 18:08:47.061749   64066 retry.go:31] will retry after 420.077307ms: waiting for machine to come up
	I0914 18:08:47.483188   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | domain default-k8s-diff-port-243449 has defined MAC address 52:54:00:6e:0b:a7 in network mk-default-k8s-diff-port-243449
	I0914 18:08:47.483611   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | unable to find current IP address of domain default-k8s-diff-port-243449 in network mk-default-k8s-diff-port-243449
	I0914 18:08:47.483656   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | I0914 18:08:47.483567   64066 retry.go:31] will retry after 562.15435ms: waiting for machine to come up
	I0914 18:08:48.047428   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | domain default-k8s-diff-port-243449 has defined MAC address 52:54:00:6e:0b:a7 in network mk-default-k8s-diff-port-243449
	I0914 18:08:48.047944   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | unable to find current IP address of domain default-k8s-diff-port-243449 in network mk-default-k8s-diff-port-243449
	I0914 18:08:48.047971   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | I0914 18:08:48.047867   64066 retry.go:31] will retry after 744.523152ms: waiting for machine to come up
	I0914 18:08:48.793959   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | domain default-k8s-diff-port-243449 has defined MAC address 52:54:00:6e:0b:a7 in network mk-default-k8s-diff-port-243449
	I0914 18:08:48.794449   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | unable to find current IP address of domain default-k8s-diff-port-243449 in network mk-default-k8s-diff-port-243449
	I0914 18:08:48.794492   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | I0914 18:08:48.794393   64066 retry.go:31] will retry after 813.631617ms: waiting for machine to come up
	I0914 18:08:49.609483   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | domain default-k8s-diff-port-243449 has defined MAC address 52:54:00:6e:0b:a7 in network mk-default-k8s-diff-port-243449
	I0914 18:08:49.609946   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | unable to find current IP address of domain default-k8s-diff-port-243449 in network mk-default-k8s-diff-port-243449
	I0914 18:08:49.609969   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | I0914 18:08:49.609904   64066 retry.go:31] will retry after 941.244861ms: waiting for machine to come up
	I0914 18:08:46.515031   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetIP
	I0914 18:08:46.517851   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 18:08:46.518301   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:25:ab", ip: ""} in network mk-old-k8s-version-556121: {Iface:virbr1 ExpiryTime:2024-09-14 19:08:35 +0000 UTC Type:0 Mac:52:54:00:76:25:ab Iaid: IPaddr:192.168.83.80 Prefix:24 Hostname:old-k8s-version-556121 Clientid:01:52:54:00:76:25:ab}
	I0914 18:08:46.518329   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined IP address 192.168.83.80 and MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 18:08:46.518560   62996 ssh_runner.go:195] Run: grep 192.168.83.1	host.minikube.internal$ /etc/hosts
	I0914 18:08:46.522559   62996 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.83.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0914 18:08:46.536122   62996 kubeadm.go:883] updating cluster {Name:old-k8s-version-556121 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19643/minikube-v1.34.0-1726281733-19643-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726281268-19643@sha256:cce8be0c1ac4e3d852132008ef1cc1dcf5b79f708d025db83f146ae65db32e8e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-556121 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.83.80 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0914 18:08:46.536233   62996 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0914 18:08:46.536272   62996 ssh_runner.go:195] Run: sudo crictl images --output json
	I0914 18:08:46.582326   62996 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0914 18:08:46.582385   62996 ssh_runner.go:195] Run: which lz4
	I0914 18:08:46.586381   62996 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0914 18:08:46.590252   62996 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0914 18:08:46.590302   62996 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0914 18:08:48.262036   62996 crio.go:462] duration metric: took 1.6757003s to copy over tarball
	I0914 18:08:48.262113   62996 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0914 18:08:50.583860   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:08:52.826559   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:08:50.553210   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | domain default-k8s-diff-port-243449 has defined MAC address 52:54:00:6e:0b:a7 in network mk-default-k8s-diff-port-243449
	I0914 18:08:50.553735   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | unable to find current IP address of domain default-k8s-diff-port-243449 in network mk-default-k8s-diff-port-243449
	I0914 18:08:50.553764   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | I0914 18:08:50.553671   64066 retry.go:31] will retry after 1.107692241s: waiting for machine to come up
	I0914 18:08:51.663218   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | domain default-k8s-diff-port-243449 has defined MAC address 52:54:00:6e:0b:a7 in network mk-default-k8s-diff-port-243449
	I0914 18:08:51.663723   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | unable to find current IP address of domain default-k8s-diff-port-243449 in network mk-default-k8s-diff-port-243449
	I0914 18:08:51.663753   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | I0914 18:08:51.663681   64066 retry.go:31] will retry after 1.357435642s: waiting for machine to come up
	I0914 18:08:53.022246   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | domain default-k8s-diff-port-243449 has defined MAC address 52:54:00:6e:0b:a7 in network mk-default-k8s-diff-port-243449
	I0914 18:08:53.022695   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | unable to find current IP address of domain default-k8s-diff-port-243449 in network mk-default-k8s-diff-port-243449
	I0914 18:08:53.022726   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | I0914 18:08:53.022628   64066 retry.go:31] will retry after 2.045434586s: waiting for machine to come up
	I0914 18:08:55.070946   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | domain default-k8s-diff-port-243449 has defined MAC address 52:54:00:6e:0b:a7 in network mk-default-k8s-diff-port-243449
	I0914 18:08:55.071420   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | unable to find current IP address of domain default-k8s-diff-port-243449 in network mk-default-k8s-diff-port-243449
	I0914 18:08:55.071450   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | I0914 18:08:55.071362   64066 retry.go:31] will retry after 2.084823885s: waiting for machine to come up
	I0914 18:08:51.259991   62996 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.997823346s)
	I0914 18:08:51.260027   62996 crio.go:469] duration metric: took 2.997963105s to extract the tarball
	I0914 18:08:51.260037   62996 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0914 18:08:51.303210   62996 ssh_runner.go:195] Run: sudo crictl images --output json
	I0914 18:08:51.337655   62996 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
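	The two `sudo crictl images --output json` runs above are how minikube decides whether the preloaded images actually landed in CRI-O before falling back to loading cached image tarballs. A minimal stand-alone sketch of that check follows; it is not minikube's crio.go, and the exact JSON field names are an assumption based on the usual `{"images":[{"repoTags":[...]}]}` shape of crictl's output.

```go
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// crictlImages models the relevant slice of `crictl images --output json`.
// Treat the exact shape as an assumption of this sketch.
type crictlImages struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

// imagePresent reports whether the container runtime already has the tag,
// e.g. "registry.k8s.io/kube-apiserver:v1.20.0".
func imagePresent(tag string) (bool, error) {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		return false, err
	}
	var imgs crictlImages
	if err := json.Unmarshal(out, &imgs); err != nil {
		return false, err
	}
	for _, img := range imgs.Images {
		for _, t := range img.RepoTags {
			if t == tag {
				return true, nil
			}
		}
	}
	return false, nil
}

func main() {
	ok, err := imagePresent("registry.k8s.io/kube-apiserver:v1.20.0")
	// A false result here corresponds to the "assuming images are not preloaded" path in the log.
	fmt.Println(ok, err)
}
```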
	I0914 18:08:51.337685   62996 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0914 18:08:51.337793   62996 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0914 18:08:51.337910   62996 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0914 18:08:51.337941   62996 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0914 18:08:51.337950   62996 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0914 18:08:51.337800   62996 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0914 18:08:51.337803   62996 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0914 18:08:51.337791   62996 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 18:08:51.337823   62996 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0914 18:08:51.339846   62996 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0914 18:08:51.339855   62996 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0914 18:08:51.339875   62996 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0914 18:08:51.339865   62996 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 18:08:51.339901   62996 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0914 18:08:51.339935   62996 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0914 18:08:51.339958   62996 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0914 18:08:51.339949   62996 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0914 18:08:51.528665   62996 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0914 18:08:51.570817   62996 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0914 18:08:51.575861   62996 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0914 18:08:51.575917   62996 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0914 18:08:51.575968   62996 ssh_runner.go:195] Run: which crictl
	I0914 18:08:51.576612   62996 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0914 18:08:51.577804   62996 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0914 18:08:51.578496   62996 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0914 18:08:51.581833   62996 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0914 18:08:51.613046   62996 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0914 18:08:51.724554   62996 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0914 18:08:51.724608   62996 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0914 18:08:51.724611   62996 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0914 18:08:51.724713   62996 ssh_runner.go:195] Run: which crictl
	I0914 18:08:51.757578   62996 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0914 18:08:51.757628   62996 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0914 18:08:51.757677   62996 ssh_runner.go:195] Run: which crictl
	I0914 18:08:51.772578   62996 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0914 18:08:51.772597   62996 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0914 18:08:51.772629   62996 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0914 18:08:51.772634   62996 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0914 18:08:51.772659   62996 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0914 18:08:51.772690   62996 ssh_runner.go:195] Run: which crictl
	I0914 18:08:51.772704   62996 ssh_runner.go:195] Run: which crictl
	I0914 18:08:51.772633   62996 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0914 18:08:51.772748   62996 ssh_runner.go:195] Run: which crictl
	I0914 18:08:51.790305   62996 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0914 18:08:51.790442   62996 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0914 18:08:51.790492   62996 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0914 18:08:51.790534   62996 ssh_runner.go:195] Run: which crictl
	I0914 18:08:51.799286   62996 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0914 18:08:51.799338   62996 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0914 18:08:51.799395   62996 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0914 18:08:51.799446   62996 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0914 18:08:51.799486   62996 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0914 18:08:51.937830   62996 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0914 18:08:51.937839   62996 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0914 18:08:51.937918   62996 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0914 18:08:51.940605   62996 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0914 18:08:51.940670   62996 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0914 18:08:51.940723   62996 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0914 18:08:51.962218   62996 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0914 18:08:52.063106   62996 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0914 18:08:52.112424   62996 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0914 18:08:52.112498   62996 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0914 18:08:52.112521   62996 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0914 18:08:52.112602   62996 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19643-8806/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0914 18:08:52.112608   62996 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0914 18:08:52.112737   62996 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0914 18:08:52.149523   62996 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19643-8806/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0914 18:08:52.230998   62996 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0914 18:08:52.231015   62996 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19643-8806/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0914 18:08:52.234715   62996 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19643-8806/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0914 18:08:52.234737   62996 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19643-8806/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0914 18:08:52.234813   62996 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19643-8806/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0914 18:08:52.268145   62996 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19643-8806/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0914 18:08:52.500688   62996 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 18:08:52.641559   62996 cache_images.go:92] duration metric: took 1.303851383s to LoadCachedImages
	W0914 18:08:52.641671   62996 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19643-8806/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0: no such file or directory
	I0914 18:08:52.641690   62996 kubeadm.go:934] updating node { 192.168.83.80 8443 v1.20.0 crio true true} ...
	I0914 18:08:52.641822   62996 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-556121 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.83.80
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-556121 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0914 18:08:52.641918   62996 ssh_runner.go:195] Run: crio config
	I0914 18:08:52.691852   62996 cni.go:84] Creating CNI manager for ""
	I0914 18:08:52.691878   62996 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0914 18:08:52.691888   62996 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0914 18:08:52.691906   62996 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.83.80 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-556121 NodeName:old-k8s-version-556121 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.83.80"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.83.80 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0914 18:08:52.692037   62996 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.83.80
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-556121"
	  kubeletExtraArgs:
	    node-ip: 192.168.83.80
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.83.80"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0914 18:08:52.692122   62996 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0914 18:08:52.701735   62996 binaries.go:44] Found k8s binaries, skipping transfer
	I0914 18:08:52.701810   62996 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0914 18:08:52.711224   62996 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0914 18:08:52.728991   62996 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0914 18:08:52.746689   62996 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
	I0914 18:08:52.765724   62996 ssh_runner.go:195] Run: grep 192.168.83.80	control-plane.minikube.internal$ /etc/hosts
	I0914 18:08:52.769968   62996 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.83.80	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0914 18:08:52.782728   62996 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 18:08:52.910650   62996 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0914 18:08:52.927202   62996 certs.go:68] Setting up /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/old-k8s-version-556121 for IP: 192.168.83.80
	I0914 18:08:52.927226   62996 certs.go:194] generating shared ca certs ...
	I0914 18:08:52.927247   62996 certs.go:226] acquiring lock for ca certs: {Name:mkb663a3180967f5f94f0c355b2cd55067394331 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 18:08:52.927426   62996 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19643-8806/.minikube/ca.key
	I0914 18:08:52.927478   62996 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19643-8806/.minikube/proxy-client-ca.key
	I0914 18:08:52.927488   62996 certs.go:256] generating profile certs ...
	I0914 18:08:52.927584   62996 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/old-k8s-version-556121/client.key
	I0914 18:08:52.927642   62996 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/old-k8s-version-556121/apiserver.key.faf839ab
	I0914 18:08:52.927706   62996 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/old-k8s-version-556121/proxy-client.key
	I0914 18:08:52.927873   62996 certs.go:484] found cert: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/16016.pem (1338 bytes)
	W0914 18:08:52.927906   62996 certs.go:480] ignoring /home/jenkins/minikube-integration/19643-8806/.minikube/certs/16016_empty.pem, impossibly tiny 0 bytes
	I0914 18:08:52.927916   62996 certs.go:484] found cert: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca-key.pem (1679 bytes)
	I0914 18:08:52.927938   62996 certs.go:484] found cert: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca.pem (1082 bytes)
	I0914 18:08:52.927960   62996 certs.go:484] found cert: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/cert.pem (1123 bytes)
	I0914 18:08:52.927982   62996 certs.go:484] found cert: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/key.pem (1675 bytes)
	I0914 18:08:52.928018   62996 certs.go:484] found cert: /home/jenkins/minikube-integration/19643-8806/.minikube/files/etc/ssl/certs/160162.pem (1708 bytes)
	I0914 18:08:52.928623   62996 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0914 18:08:52.991610   62996 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0914 18:08:53.017660   62996 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0914 18:08:53.044552   62996 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0914 18:08:53.073612   62996 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/old-k8s-version-556121/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0914 18:08:53.125813   62996 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/old-k8s-version-556121/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0914 18:08:53.157202   62996 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/old-k8s-version-556121/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0914 18:08:53.201480   62996 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/old-k8s-version-556121/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0914 18:08:53.226725   62996 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/files/etc/ssl/certs/160162.pem --> /usr/share/ca-certificates/160162.pem (1708 bytes)
	I0914 18:08:53.250793   62996 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0914 18:08:53.275519   62996 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/certs/16016.pem --> /usr/share/ca-certificates/16016.pem (1338 bytes)
	I0914 18:08:53.300545   62996 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0914 18:08:53.317709   62996 ssh_runner.go:195] Run: openssl version
	I0914 18:08:53.323602   62996 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/160162.pem && ln -fs /usr/share/ca-certificates/160162.pem /etc/ssl/certs/160162.pem"
	I0914 18:08:53.335011   62996 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/160162.pem
	I0914 18:08:53.339838   62996 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 14 17:01 /usr/share/ca-certificates/160162.pem
	I0914 18:08:53.339909   62996 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/160162.pem
	I0914 18:08:53.346100   62996 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/160162.pem /etc/ssl/certs/3ec20f2e.0"
	I0914 18:08:53.359186   62996 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0914 18:08:53.370507   62996 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0914 18:08:53.375153   62996 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 14 16:44 /usr/share/ca-certificates/minikubeCA.pem
	I0914 18:08:53.375223   62996 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0914 18:08:53.380939   62996 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0914 18:08:53.392163   62996 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16016.pem && ln -fs /usr/share/ca-certificates/16016.pem /etc/ssl/certs/16016.pem"
	I0914 18:08:53.404356   62996 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16016.pem
	I0914 18:08:53.409052   62996 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 14 17:01 /usr/share/ca-certificates/16016.pem
	I0914 18:08:53.409134   62996 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16016.pem
	I0914 18:08:53.415280   62996 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16016.pem /etc/ssl/certs/51391683.0"
	I0914 18:08:53.426864   62996 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0914 18:08:53.431690   62996 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0914 18:08:53.437920   62996 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0914 18:08:53.444244   62996 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0914 18:08:53.450762   62996 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0914 18:08:53.457107   62996 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0914 18:08:53.463041   62996 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
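	Each of the `openssl x509 ... -checkend 86400` runs above asks one question: will this certificate still be valid 24 hours from now? A cert that would expire within that window is what triggers regeneration. A rough Go equivalent of a single check, using a placeholder path and the standard library only:

```go
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM-encoded certificate at path will
// expire within the given window - the same question `openssl x509 -checkend`
// answers via its exit status.
func expiresWithin(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	// 86400 seconds, matching the -checkend argument in the log.
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 86400*time.Second)
	fmt.Println(soon, err)
}
```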
	I0914 18:08:53.469401   62996 kubeadm.go:392] StartCluster: {Name:old-k8s-version-556121 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19643/minikube-v1.34.0-1726281733-19643-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726281268-19643@sha256:cce8be0c1ac4e3d852132008ef1cc1dcf5b79f708d025db83f146ae65db32e8e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-556121 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.83.80 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0914 18:08:53.469509   62996 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0914 18:08:53.469568   62996 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0914 18:08:53.508602   62996 cri.go:89] found id: ""
	I0914 18:08:53.508668   62996 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0914 18:08:53.518645   62996 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0914 18:08:53.518666   62996 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0914 18:08:53.518719   62996 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0914 18:08:53.530459   62996 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0914 18:08:53.531439   62996 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-556121" does not appear in /home/jenkins/minikube-integration/19643-8806/kubeconfig
	I0914 18:08:53.532109   62996 kubeconfig.go:62] /home/jenkins/minikube-integration/19643-8806/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-556121" cluster setting kubeconfig missing "old-k8s-version-556121" context setting]
	I0914 18:08:53.532952   62996 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19643-8806/kubeconfig: {Name:mk850f3e2f3c2ef1e81c795ae72e22e459e27140 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 18:08:53.611765   62996 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0914 18:08:53.622817   62996 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.83.80
	I0914 18:08:53.622854   62996 kubeadm.go:1160] stopping kube-system containers ...
	I0914 18:08:53.622866   62996 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0914 18:08:53.622919   62996 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0914 18:08:53.659041   62996 cri.go:89] found id: ""
	I0914 18:08:53.659191   62996 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0914 18:08:53.680543   62996 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0914 18:08:53.693835   62996 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0914 18:08:53.693854   62996 kubeadm.go:157] found existing configuration files:
	
	I0914 18:08:53.693907   62996 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0914 18:08:53.704221   62996 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0914 18:08:53.704300   62996 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0914 18:08:53.713947   62996 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0914 18:08:53.722981   62996 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0914 18:08:53.723056   62996 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0914 18:08:53.733059   62996 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0914 18:08:53.742233   62996 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0914 18:08:53.742305   62996 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0914 18:08:53.752182   62996 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0914 18:08:53.761890   62996 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0914 18:08:53.761965   62996 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0914 18:08:53.771448   62996 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0914 18:08:53.781385   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 18:08:53.911483   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 18:08:55.084673   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:08:57.582709   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:08:59.583340   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:08:57.158301   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | domain default-k8s-diff-port-243449 has defined MAC address 52:54:00:6e:0b:a7 in network mk-default-k8s-diff-port-243449
	I0914 18:08:57.158679   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | unable to find current IP address of domain default-k8s-diff-port-243449 in network mk-default-k8s-diff-port-243449
	I0914 18:08:57.158705   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | I0914 18:08:57.158640   64066 retry.go:31] will retry after 2.492994369s: waiting for machine to come up
	I0914 18:08:59.654137   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | domain default-k8s-diff-port-243449 has defined MAC address 52:54:00:6e:0b:a7 in network mk-default-k8s-diff-port-243449
	I0914 18:08:59.654550   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | unable to find current IP address of domain default-k8s-diff-port-243449 in network mk-default-k8s-diff-port-243449
	I0914 18:08:59.654585   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | I0914 18:08:59.654496   64066 retry.go:31] will retry after 3.393327124s: waiting for machine to come up
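	The repeated `retry.go:31] will retry after ...: waiting for machine to come up` lines are a poll with a growing, jittered wait while libvirt assigns the domain an IP. A hypothetical sketch of that pattern follows; the probe function, timings, and backoff factor are placeholders for illustration, not minikube's actual retry.go.

```go
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// waitForIP polls lookup until it returns an address or the deadline passes,
// sleeping a little longer (with jitter) after each failed attempt - the same
// shape as the "will retry after ..." lines in the log.
func waitForIP(lookup func() (string, error), timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	wait := 250 * time.Millisecond
	for attempt := 1; ; attempt++ {
		ip, err := lookup()
		if err == nil {
			return ip, nil
		}
		if time.Now().After(deadline) {
			return "", fmt.Errorf("timed out after %d attempts: %w", attempt, err)
		}
		jitter := time.Duration(rand.Int63n(int64(wait) / 2))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", wait+jitter)
		time.Sleep(wait + jitter)
		wait *= 2 // back off before the next probe
	}
}

func main() {
	// Placeholder probe that never succeeds, just to show the call shape.
	_, err := waitForIP(func() (string, error) { return "", errors.New("no DHCP lease yet") }, 3*time.Second)
	fmt.Println(err)
}
```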
	I0914 18:08:55.409007   62996 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.497486764s)
	I0914 18:08:55.409041   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0914 18:08:55.640260   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 18:08:55.761785   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0914 18:08:55.873260   62996 api_server.go:52] waiting for apiserver process to appear ...
	I0914 18:08:55.873350   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:08:56.373512   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:08:56.874440   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:08:57.374464   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:08:57.874099   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:08:58.374014   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:08:58.873763   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:08:59.373845   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:08:59.873929   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
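	The burst of `sudo pgrep -xnf kube-apiserver.*minikube.*` runs above is the roughly half-second poll that waits for the kube-apiserver process to appear after `kubeadm init phase control-plane`. A self-contained sketch of that wait, run locally rather than over SSH (an assumption of this example):

```go
package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForAPIServer polls pgrep until a kube-apiserver process exists or the
// timeout expires; pgrep exits non-zero when nothing matches.
func waitForAPIServer(timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		if err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err == nil {
			return nil // process found
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("kube-apiserver did not appear within %v", timeout)
		}
		time.Sleep(500 * time.Millisecond)
	}
}

func main() {
	fmt.Println(waitForAPIServer(2 * time.Minute))
}
```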
	I0914 18:09:04.466791   62207 start.go:364] duration metric: took 54.917996405s to acquireMachinesLock for "no-preload-168587"
	I0914 18:09:04.466845   62207 start.go:96] Skipping create...Using existing machine configuration
	I0914 18:09:04.466863   62207 fix.go:54] fixHost starting: 
	I0914 18:09:04.467265   62207 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 18:09:04.467303   62207 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 18:09:04.485295   62207 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41403
	I0914 18:09:04.485680   62207 main.go:141] libmachine: () Calling .GetVersion
	I0914 18:09:04.486195   62207 main.go:141] libmachine: Using API Version  1
	I0914 18:09:04.486221   62207 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 18:09:04.486625   62207 main.go:141] libmachine: () Calling .GetMachineName
	I0914 18:09:04.486825   62207 main.go:141] libmachine: (no-preload-168587) Calling .DriverName
	I0914 18:09:04.486985   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetState
	I0914 18:09:04.488546   62207 fix.go:112] recreateIfNeeded on no-preload-168587: state=Stopped err=<nil>
	I0914 18:09:04.488584   62207 main.go:141] libmachine: (no-preload-168587) Calling .DriverName
	W0914 18:09:04.488749   62207 fix.go:138] unexpected machine state, will restart: <nil>
	I0914 18:09:04.491638   62207 out.go:177] * Restarting existing kvm2 VM for "no-preload-168587" ...
	I0914 18:09:02.082684   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:09:04.582135   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:09:03.051442   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | domain default-k8s-diff-port-243449 has defined MAC address 52:54:00:6e:0b:a7 in network mk-default-k8s-diff-port-243449
	I0914 18:09:03.051882   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | domain default-k8s-diff-port-243449 has current primary IP address 192.168.61.38 and MAC address 52:54:00:6e:0b:a7 in network mk-default-k8s-diff-port-243449
	I0914 18:09:03.051904   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Found IP for machine: 192.168.61.38
	I0914 18:09:03.051946   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Reserving static IP address...
	I0914 18:09:03.052245   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-243449", mac: "52:54:00:6e:0b:a7", ip: "192.168.61.38"} in network mk-default-k8s-diff-port-243449: {Iface:virbr4 ExpiryTime:2024-09-14 19:08:56 +0000 UTC Type:0 Mac:52:54:00:6e:0b:a7 Iaid: IPaddr:192.168.61.38 Prefix:24 Hostname:default-k8s-diff-port-243449 Clientid:01:52:54:00:6e:0b:a7}
	I0914 18:09:03.052269   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | skip adding static IP to network mk-default-k8s-diff-port-243449 - found existing host DHCP lease matching {name: "default-k8s-diff-port-243449", mac: "52:54:00:6e:0b:a7", ip: "192.168.61.38"}
	I0914 18:09:03.052280   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Reserved static IP address: 192.168.61.38
	I0914 18:09:03.052289   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Waiting for SSH to be available...
	I0914 18:09:03.052306   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | Getting to WaitForSSH function...
	I0914 18:09:03.054154   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | domain default-k8s-diff-port-243449 has defined MAC address 52:54:00:6e:0b:a7 in network mk-default-k8s-diff-port-243449
	I0914 18:09:03.054555   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:0b:a7", ip: ""} in network mk-default-k8s-diff-port-243449: {Iface:virbr4 ExpiryTime:2024-09-14 19:08:56 +0000 UTC Type:0 Mac:52:54:00:6e:0b:a7 Iaid: IPaddr:192.168.61.38 Prefix:24 Hostname:default-k8s-diff-port-243449 Clientid:01:52:54:00:6e:0b:a7}
	I0914 18:09:03.054596   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | domain default-k8s-diff-port-243449 has defined IP address 192.168.61.38 and MAC address 52:54:00:6e:0b:a7 in network mk-default-k8s-diff-port-243449
	I0914 18:09:03.054745   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | Using SSH client type: external
	I0914 18:09:03.054777   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | Using SSH private key: /home/jenkins/minikube-integration/19643-8806/.minikube/machines/default-k8s-diff-port-243449/id_rsa (-rw-------)
	I0914 18:09:03.054813   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.38 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19643-8806/.minikube/machines/default-k8s-diff-port-243449/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0914 18:09:03.054828   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | About to run SSH command:
	I0914 18:09:03.054841   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | exit 0
	I0914 18:09:03.178065   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | SSH cmd err, output: <nil>: 
	I0914 18:09:03.178576   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetConfigRaw
	I0914 18:09:03.179198   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetIP
	I0914 18:09:03.181829   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | domain default-k8s-diff-port-243449 has defined MAC address 52:54:00:6e:0b:a7 in network mk-default-k8s-diff-port-243449
	I0914 18:09:03.182220   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:0b:a7", ip: ""} in network mk-default-k8s-diff-port-243449: {Iface:virbr4 ExpiryTime:2024-09-14 19:08:56 +0000 UTC Type:0 Mac:52:54:00:6e:0b:a7 Iaid: IPaddr:192.168.61.38 Prefix:24 Hostname:default-k8s-diff-port-243449 Clientid:01:52:54:00:6e:0b:a7}
	I0914 18:09:03.182242   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | domain default-k8s-diff-port-243449 has defined IP address 192.168.61.38 and MAC address 52:54:00:6e:0b:a7 in network mk-default-k8s-diff-port-243449
	I0914 18:09:03.182541   63448 profile.go:143] Saving config to /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/default-k8s-diff-port-243449/config.json ...
	I0914 18:09:03.182773   63448 machine.go:93] provisionDockerMachine start ...
	I0914 18:09:03.182796   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .DriverName
	I0914 18:09:03.182992   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetSSHHostname
	I0914 18:09:03.185635   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | domain default-k8s-diff-port-243449 has defined MAC address 52:54:00:6e:0b:a7 in network mk-default-k8s-diff-port-243449
	I0914 18:09:03.186027   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:0b:a7", ip: ""} in network mk-default-k8s-diff-port-243449: {Iface:virbr4 ExpiryTime:2024-09-14 19:08:56 +0000 UTC Type:0 Mac:52:54:00:6e:0b:a7 Iaid: IPaddr:192.168.61.38 Prefix:24 Hostname:default-k8s-diff-port-243449 Clientid:01:52:54:00:6e:0b:a7}
	I0914 18:09:03.186056   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | domain default-k8s-diff-port-243449 has defined IP address 192.168.61.38 and MAC address 52:54:00:6e:0b:a7 in network mk-default-k8s-diff-port-243449
	I0914 18:09:03.186213   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetSSHPort
	I0914 18:09:03.186416   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetSSHKeyPath
	I0914 18:09:03.186602   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetSSHKeyPath
	I0914 18:09:03.186756   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetSSHUsername
	I0914 18:09:03.186882   63448 main.go:141] libmachine: Using SSH client type: native
	I0914 18:09:03.187123   63448 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.61.38 22 <nil> <nil>}
	I0914 18:09:03.187137   63448 main.go:141] libmachine: About to run SSH command:
	hostname
	I0914 18:09:03.290288   63448 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0914 18:09:03.290332   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetMachineName
	I0914 18:09:03.290592   63448 buildroot.go:166] provisioning hostname "default-k8s-diff-port-243449"
	I0914 18:09:03.290620   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetMachineName
	I0914 18:09:03.290779   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetSSHHostname
	I0914 18:09:03.293587   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | domain default-k8s-diff-port-243449 has defined MAC address 52:54:00:6e:0b:a7 in network mk-default-k8s-diff-port-243449
	I0914 18:09:03.293981   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:0b:a7", ip: ""} in network mk-default-k8s-diff-port-243449: {Iface:virbr4 ExpiryTime:2024-09-14 19:08:56 +0000 UTC Type:0 Mac:52:54:00:6e:0b:a7 Iaid: IPaddr:192.168.61.38 Prefix:24 Hostname:default-k8s-diff-port-243449 Clientid:01:52:54:00:6e:0b:a7}
	I0914 18:09:03.294012   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | domain default-k8s-diff-port-243449 has defined IP address 192.168.61.38 and MAC address 52:54:00:6e:0b:a7 in network mk-default-k8s-diff-port-243449
	I0914 18:09:03.294120   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetSSHPort
	I0914 18:09:03.294307   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetSSHKeyPath
	I0914 18:09:03.294450   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetSSHKeyPath
	I0914 18:09:03.294576   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetSSHUsername
	I0914 18:09:03.294708   63448 main.go:141] libmachine: Using SSH client type: native
	I0914 18:09:03.294926   63448 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.61.38 22 <nil> <nil>}
	I0914 18:09:03.294944   63448 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-243449 && echo "default-k8s-diff-port-243449" | sudo tee /etc/hostname
	I0914 18:09:03.418148   63448 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-243449
	
	I0914 18:09:03.418198   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetSSHHostname
	I0914 18:09:03.421059   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | domain default-k8s-diff-port-243449 has defined MAC address 52:54:00:6e:0b:a7 in network mk-default-k8s-diff-port-243449
	I0914 18:09:03.421501   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:0b:a7", ip: ""} in network mk-default-k8s-diff-port-243449: {Iface:virbr4 ExpiryTime:2024-09-14 19:08:56 +0000 UTC Type:0 Mac:52:54:00:6e:0b:a7 Iaid: IPaddr:192.168.61.38 Prefix:24 Hostname:default-k8s-diff-port-243449 Clientid:01:52:54:00:6e:0b:a7}
	I0914 18:09:03.421536   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | domain default-k8s-diff-port-243449 has defined IP address 192.168.61.38 and MAC address 52:54:00:6e:0b:a7 in network mk-default-k8s-diff-port-243449
	I0914 18:09:03.421733   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetSSHPort
	I0914 18:09:03.421925   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetSSHKeyPath
	I0914 18:09:03.422075   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetSSHKeyPath
	I0914 18:09:03.422243   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetSSHUsername
	I0914 18:09:03.422394   63448 main.go:141] libmachine: Using SSH client type: native
	I0914 18:09:03.422581   63448 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.61.38 22 <nil> <nil>}
	I0914 18:09:03.422609   63448 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-243449' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-243449/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-243449' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0914 18:09:03.538785   63448 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0914 18:09:03.538812   63448 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19643-8806/.minikube CaCertPath:/home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19643-8806/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19643-8806/.minikube}
	I0914 18:09:03.538851   63448 buildroot.go:174] setting up certificates
	I0914 18:09:03.538866   63448 provision.go:84] configureAuth start
	I0914 18:09:03.538875   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetMachineName
	I0914 18:09:03.539230   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetIP
	I0914 18:09:03.541811   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | domain default-k8s-diff-port-243449 has defined MAC address 52:54:00:6e:0b:a7 in network mk-default-k8s-diff-port-243449
	I0914 18:09:03.542129   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:0b:a7", ip: ""} in network mk-default-k8s-diff-port-243449: {Iface:virbr4 ExpiryTime:2024-09-14 19:08:56 +0000 UTC Type:0 Mac:52:54:00:6e:0b:a7 Iaid: IPaddr:192.168.61.38 Prefix:24 Hostname:default-k8s-diff-port-243449 Clientid:01:52:54:00:6e:0b:a7}
	I0914 18:09:03.542183   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | domain default-k8s-diff-port-243449 has defined IP address 192.168.61.38 and MAC address 52:54:00:6e:0b:a7 in network mk-default-k8s-diff-port-243449
	I0914 18:09:03.542393   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetSSHHostname
	I0914 18:09:03.544635   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | domain default-k8s-diff-port-243449 has defined MAC address 52:54:00:6e:0b:a7 in network mk-default-k8s-diff-port-243449
	I0914 18:09:03.544933   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:0b:a7", ip: ""} in network mk-default-k8s-diff-port-243449: {Iface:virbr4 ExpiryTime:2024-09-14 19:08:56 +0000 UTC Type:0 Mac:52:54:00:6e:0b:a7 Iaid: IPaddr:192.168.61.38 Prefix:24 Hostname:default-k8s-diff-port-243449 Clientid:01:52:54:00:6e:0b:a7}
	I0914 18:09:03.544969   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | domain default-k8s-diff-port-243449 has defined IP address 192.168.61.38 and MAC address 52:54:00:6e:0b:a7 in network mk-default-k8s-diff-port-243449
	I0914 18:09:03.545099   63448 provision.go:143] copyHostCerts
	I0914 18:09:03.545156   63448 exec_runner.go:144] found /home/jenkins/minikube-integration/19643-8806/.minikube/ca.pem, removing ...
	I0914 18:09:03.545167   63448 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19643-8806/.minikube/ca.pem
	I0914 18:09:03.545239   63448 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19643-8806/.minikube/ca.pem (1082 bytes)
	I0914 18:09:03.545362   63448 exec_runner.go:144] found /home/jenkins/minikube-integration/19643-8806/.minikube/cert.pem, removing ...
	I0914 18:09:03.545374   63448 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19643-8806/.minikube/cert.pem
	I0914 18:09:03.545410   63448 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19643-8806/.minikube/cert.pem (1123 bytes)
	I0914 18:09:03.545489   63448 exec_runner.go:144] found /home/jenkins/minikube-integration/19643-8806/.minikube/key.pem, removing ...
	I0914 18:09:03.545498   63448 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19643-8806/.minikube/key.pem
	I0914 18:09:03.545533   63448 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19643-8806/.minikube/key.pem (1675 bytes)
	I0914 18:09:03.545619   63448 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19643-8806/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-243449 san=[127.0.0.1 192.168.61.38 default-k8s-diff-port-243449 localhost minikube]
	I0914 18:09:03.858341   63448 provision.go:177] copyRemoteCerts
	I0914 18:09:03.858415   63448 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0914 18:09:03.858453   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetSSHHostname
	I0914 18:09:03.861376   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | domain default-k8s-diff-port-243449 has defined MAC address 52:54:00:6e:0b:a7 in network mk-default-k8s-diff-port-243449
	I0914 18:09:03.861659   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:0b:a7", ip: ""} in network mk-default-k8s-diff-port-243449: {Iface:virbr4 ExpiryTime:2024-09-14 19:08:56 +0000 UTC Type:0 Mac:52:54:00:6e:0b:a7 Iaid: IPaddr:192.168.61.38 Prefix:24 Hostname:default-k8s-diff-port-243449 Clientid:01:52:54:00:6e:0b:a7}
	I0914 18:09:03.861687   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | domain default-k8s-diff-port-243449 has defined IP address 192.168.61.38 and MAC address 52:54:00:6e:0b:a7 in network mk-default-k8s-diff-port-243449
	I0914 18:09:03.861863   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetSSHPort
	I0914 18:09:03.862062   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetSSHKeyPath
	I0914 18:09:03.862231   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetSSHUsername
	I0914 18:09:03.862359   63448 sshutil.go:53] new ssh client: &{IP:192.168.61.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/default-k8s-diff-port-243449/id_rsa Username:docker}
	I0914 18:09:03.944043   63448 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0914 18:09:03.968175   63448 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0914 18:09:03.990621   63448 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0914 18:09:04.012163   63448 provision.go:87] duration metric: took 473.28607ms to configureAuth
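The configureAuth step above regenerates the machine's server certificate with a SAN list that mixes IP addresses and hostnames (127.0.0.1, 192.168.61.38, the profile name, localhost, minikube) and then copies ca.pem, server.pem and server-key.pem into /etc/docker on the guest. A minimal Go sketch of building such a SAN-bearing server certificate with crypto/x509 follows; the buildServerCert helper and the throwaway self-signed CA in main are illustrative assumptions, not minikube's provision code.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"net"
	"time"
)

// buildServerCert is an illustrative helper (not minikube's own code): it
// creates a server certificate whose SANs cover both IP addresses and
// hostnames, signed by the supplied CA, mirroring the san=[...] list logged above.
func buildServerCert(ca *x509.Certificate, caKey *rsa.PrivateKey, sans []string) ([]byte, *rsa.PrivateKey, error) {
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{Organization: []string{"jenkins.default-k8s-diff-port-243449"}},
		NotBefore:    time.Now().Add(-time.Hour),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	// Split the SAN list into IPs and DNS names; a server cert needs both kinds.
	for _, san := range sans {
		if ip := net.ParseIP(san); ip != nil {
			tmpl.IPAddresses = append(tmpl.IPAddresses, ip)
		} else {
			tmpl.DNSNames = append(tmpl.DNSNames, san)
		}
	}
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return nil, nil, err
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, ca, &key.PublicKey, caKey)
	return der, key, err
}

func main() {
	// Throwaway self-signed CA so the example is self-contained; errors are
	// ignored for brevity in this sketch.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now().Add(-time.Hour),
		NotAfter:              time.Now().Add(10 * 365 * 24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	ca, _ := x509.ParseCertificate(caDER)
	sans := []string{"127.0.0.1", "192.168.61.38", "default-k8s-diff-port-243449", "localhost", "minikube"}
	der, _, err := buildServerCert(ca, caKey, sans)
	fmt.Println(len(der), err)
}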
	I0914 18:09:04.012190   63448 buildroot.go:189] setting minikube options for container-runtime
	I0914 18:09:04.012364   63448 config.go:182] Loaded profile config "default-k8s-diff-port-243449": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0914 18:09:04.012431   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetSSHHostname
	I0914 18:09:04.015021   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | domain default-k8s-diff-port-243449 has defined MAC address 52:54:00:6e:0b:a7 in network mk-default-k8s-diff-port-243449
	I0914 18:09:04.015505   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:0b:a7", ip: ""} in network mk-default-k8s-diff-port-243449: {Iface:virbr4 ExpiryTime:2024-09-14 19:08:56 +0000 UTC Type:0 Mac:52:54:00:6e:0b:a7 Iaid: IPaddr:192.168.61.38 Prefix:24 Hostname:default-k8s-diff-port-243449 Clientid:01:52:54:00:6e:0b:a7}
	I0914 18:09:04.015553   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | domain default-k8s-diff-port-243449 has defined IP address 192.168.61.38 and MAC address 52:54:00:6e:0b:a7 in network mk-default-k8s-diff-port-243449
	I0914 18:09:04.015693   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetSSHPort
	I0914 18:09:04.015866   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetSSHKeyPath
	I0914 18:09:04.016035   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetSSHKeyPath
	I0914 18:09:04.016157   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetSSHUsername
	I0914 18:09:04.016277   63448 main.go:141] libmachine: Using SSH client type: native
	I0914 18:09:04.016479   63448 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.61.38 22 <nil> <nil>}
	I0914 18:09:04.016511   63448 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0914 18:09:04.234672   63448 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0914 18:09:04.234697   63448 machine.go:96] duration metric: took 1.051909541s to provisionDockerMachine
	I0914 18:09:04.234710   63448 start.go:293] postStartSetup for "default-k8s-diff-port-243449" (driver="kvm2")
	I0914 18:09:04.234721   63448 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0914 18:09:04.234766   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .DriverName
	I0914 18:09:04.235108   63448 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0914 18:09:04.235139   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetSSHHostname
	I0914 18:09:04.237583   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | domain default-k8s-diff-port-243449 has defined MAC address 52:54:00:6e:0b:a7 in network mk-default-k8s-diff-port-243449
	I0914 18:09:04.237964   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:0b:a7", ip: ""} in network mk-default-k8s-diff-port-243449: {Iface:virbr4 ExpiryTime:2024-09-14 19:08:56 +0000 UTC Type:0 Mac:52:54:00:6e:0b:a7 Iaid: IPaddr:192.168.61.38 Prefix:24 Hostname:default-k8s-diff-port-243449 Clientid:01:52:54:00:6e:0b:a7}
	I0914 18:09:04.237997   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | domain default-k8s-diff-port-243449 has defined IP address 192.168.61.38 and MAC address 52:54:00:6e:0b:a7 in network mk-default-k8s-diff-port-243449
	I0914 18:09:04.238237   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetSSHPort
	I0914 18:09:04.238491   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetSSHKeyPath
	I0914 18:09:04.238667   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetSSHUsername
	I0914 18:09:04.238798   63448 sshutil.go:53] new ssh client: &{IP:192.168.61.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/default-k8s-diff-port-243449/id_rsa Username:docker}
	I0914 18:09:04.320785   63448 ssh_runner.go:195] Run: cat /etc/os-release
	I0914 18:09:04.324837   63448 info.go:137] Remote host: Buildroot 2023.02.9
	I0914 18:09:04.324863   63448 filesync.go:126] Scanning /home/jenkins/minikube-integration/19643-8806/.minikube/addons for local assets ...
	I0914 18:09:04.324920   63448 filesync.go:126] Scanning /home/jenkins/minikube-integration/19643-8806/.minikube/files for local assets ...
	I0914 18:09:04.325001   63448 filesync.go:149] local asset: /home/jenkins/minikube-integration/19643-8806/.minikube/files/etc/ssl/certs/160162.pem -> 160162.pem in /etc/ssl/certs
	I0914 18:09:04.325091   63448 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0914 18:09:04.334235   63448 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/files/etc/ssl/certs/160162.pem --> /etc/ssl/certs/160162.pem (1708 bytes)
	I0914 18:09:04.357310   63448 start.go:296] duration metric: took 122.582935ms for postStartSetup
	I0914 18:09:04.357352   63448 fix.go:56] duration metric: took 19.25422843s for fixHost
	I0914 18:09:04.357373   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetSSHHostname
	I0914 18:09:04.360190   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | domain default-k8s-diff-port-243449 has defined MAC address 52:54:00:6e:0b:a7 in network mk-default-k8s-diff-port-243449
	I0914 18:09:04.360574   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:0b:a7", ip: ""} in network mk-default-k8s-diff-port-243449: {Iface:virbr4 ExpiryTime:2024-09-14 19:08:56 +0000 UTC Type:0 Mac:52:54:00:6e:0b:a7 Iaid: IPaddr:192.168.61.38 Prefix:24 Hostname:default-k8s-diff-port-243449 Clientid:01:52:54:00:6e:0b:a7}
	I0914 18:09:04.360601   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | domain default-k8s-diff-port-243449 has defined IP address 192.168.61.38 and MAC address 52:54:00:6e:0b:a7 in network mk-default-k8s-diff-port-243449
	I0914 18:09:04.360774   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetSSHPort
	I0914 18:09:04.360973   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetSSHKeyPath
	I0914 18:09:04.361163   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetSSHKeyPath
	I0914 18:09:04.361291   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetSSHUsername
	I0914 18:09:04.361479   63448 main.go:141] libmachine: Using SSH client type: native
	I0914 18:09:04.361658   63448 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.61.38 22 <nil> <nil>}
	I0914 18:09:04.361667   63448 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0914 18:09:04.466610   63448 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726337344.436836920
	
	I0914 18:09:04.466654   63448 fix.go:216] guest clock: 1726337344.436836920
	I0914 18:09:04.466665   63448 fix.go:229] Guest: 2024-09-14 18:09:04.43683692 +0000 UTC Remote: 2024-09-14 18:09:04.357356624 +0000 UTC m=+144.091633354 (delta=79.480296ms)
	I0914 18:09:04.466691   63448 fix.go:200] guest clock delta is within tolerance: 79.480296ms
	I0914 18:09:04.466702   63448 start.go:83] releasing machines lock for "default-k8s-diff-port-243449", held for 19.363604776s
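The fix step above compares the guest clock, read over SSH with "date +%s.%N", against the host clock and accepts the 79ms delta as within tolerance. A small sketch of that comparison, assuming the raw command output as input and an example 2-second tolerance (the tolerance minikube actually applies is not asserted here):

package main

import (
	"fmt"
	"math"
	"strconv"
	"strings"
	"time"
)

// parseGuestClock turns the output of "date +%s.%N" into a time.Time.
func parseGuestClock(out string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, err := parseGuestClock("1726337344.436836920") // value from the log above
	if err != nil {
		panic(err)
	}
	host := time.Now()
	delta := guest.Sub(host)
	// Assumed example tolerance; a skew larger than this would warrant resyncing the guest clock.
	const tolerance = 2 * time.Second
	within := math.Abs(float64(delta)) <= float64(tolerance)
	fmt.Printf("guest clock delta: %v (within %v tolerance: %t)\n", delta, tolerance, within)
}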
	I0914 18:09:04.466737   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .DriverName
	I0914 18:09:04.466992   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetIP
	I0914 18:09:04.469873   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | domain default-k8s-diff-port-243449 has defined MAC address 52:54:00:6e:0b:a7 in network mk-default-k8s-diff-port-243449
	I0914 18:09:04.470148   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:0b:a7", ip: ""} in network mk-default-k8s-diff-port-243449: {Iface:virbr4 ExpiryTime:2024-09-14 19:08:56 +0000 UTC Type:0 Mac:52:54:00:6e:0b:a7 Iaid: IPaddr:192.168.61.38 Prefix:24 Hostname:default-k8s-diff-port-243449 Clientid:01:52:54:00:6e:0b:a7}
	I0914 18:09:04.470198   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | domain default-k8s-diff-port-243449 has defined IP address 192.168.61.38 and MAC address 52:54:00:6e:0b:a7 in network mk-default-k8s-diff-port-243449
	I0914 18:09:04.470364   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .DriverName
	I0914 18:09:04.470877   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .DriverName
	I0914 18:09:04.471098   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .DriverName
	I0914 18:09:04.471215   63448 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0914 18:09:04.471270   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetSSHHostname
	I0914 18:09:04.471322   63448 ssh_runner.go:195] Run: cat /version.json
	I0914 18:09:04.471346   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetSSHHostname
	I0914 18:09:04.474023   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | domain default-k8s-diff-port-243449 has defined MAC address 52:54:00:6e:0b:a7 in network mk-default-k8s-diff-port-243449
	I0914 18:09:04.474144   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | domain default-k8s-diff-port-243449 has defined MAC address 52:54:00:6e:0b:a7 in network mk-default-k8s-diff-port-243449
	I0914 18:09:04.474345   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:0b:a7", ip: ""} in network mk-default-k8s-diff-port-243449: {Iface:virbr4 ExpiryTime:2024-09-14 19:08:56 +0000 UTC Type:0 Mac:52:54:00:6e:0b:a7 Iaid: IPaddr:192.168.61.38 Prefix:24 Hostname:default-k8s-diff-port-243449 Clientid:01:52:54:00:6e:0b:a7}
	I0914 18:09:04.474374   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | domain default-k8s-diff-port-243449 has defined IP address 192.168.61.38 and MAC address 52:54:00:6e:0b:a7 in network mk-default-k8s-diff-port-243449
	I0914 18:09:04.474471   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetSSHPort
	I0914 18:09:04.474616   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:0b:a7", ip: ""} in network mk-default-k8s-diff-port-243449: {Iface:virbr4 ExpiryTime:2024-09-14 19:08:56 +0000 UTC Type:0 Mac:52:54:00:6e:0b:a7 Iaid: IPaddr:192.168.61.38 Prefix:24 Hostname:default-k8s-diff-port-243449 Clientid:01:52:54:00:6e:0b:a7}
	I0914 18:09:04.474631   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetSSHKeyPath
	I0914 18:09:04.474637   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | domain default-k8s-diff-port-243449 has defined IP address 192.168.61.38 and MAC address 52:54:00:6e:0b:a7 in network mk-default-k8s-diff-port-243449
	I0914 18:09:04.474812   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetSSHUsername
	I0914 18:09:04.474816   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetSSHPort
	I0914 18:09:04.474996   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetSSHKeyPath
	I0914 18:09:04.474987   63448 sshutil.go:53] new ssh client: &{IP:192.168.61.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/default-k8s-diff-port-243449/id_rsa Username:docker}
	I0914 18:09:04.475136   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetSSHUsername
	I0914 18:09:04.475274   63448 sshutil.go:53] new ssh client: &{IP:192.168.61.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/default-k8s-diff-port-243449/id_rsa Username:docker}
	I0914 18:09:04.587233   63448 ssh_runner.go:195] Run: systemctl --version
	I0914 18:09:04.593065   63448 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0914 18:09:04.738721   63448 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0914 18:09:04.745472   63448 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0914 18:09:04.745539   63448 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0914 18:09:04.765742   63448 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0914 18:09:04.765806   63448 start.go:495] detecting cgroup driver to use...
	I0914 18:09:04.765909   63448 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0914 18:09:04.782234   63448 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0914 18:09:04.797259   63448 docker.go:217] disabling cri-docker service (if available) ...
	I0914 18:09:04.797322   63448 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0914 18:09:04.811794   63448 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0914 18:09:04.826487   63448 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0914 18:09:04.953417   63448 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0914 18:09:05.102410   63448 docker.go:233] disabling docker service ...
	I0914 18:09:05.102491   63448 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0914 18:09:05.117443   63448 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0914 18:09:05.131147   63448 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0914 18:09:05.278483   63448 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0914 18:09:00.373968   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:00.874316   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:01.373792   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:01.873684   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:02.373524   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:02.874399   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:03.373728   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:03.874267   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:04.374018   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:04.873685   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:05.401195   63448 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0914 18:09:05.415794   63448 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0914 18:09:05.434594   63448 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0914 18:09:05.434660   63448 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 18:09:05.445566   63448 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0914 18:09:05.445643   63448 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 18:09:05.456690   63448 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 18:09:05.468044   63448 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 18:09:05.479719   63448 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0914 18:09:05.491019   63448 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 18:09:05.501739   63448 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 18:09:05.520582   63448 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 18:09:05.531469   63448 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0914 18:09:05.541741   63448 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0914 18:09:05.541809   63448 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0914 18:09:05.561648   63448 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0914 18:09:05.571882   63448 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 18:09:05.706592   63448 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0914 18:09:05.811522   63448 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0914 18:09:05.811599   63448 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0914 18:09:05.816676   63448 start.go:563] Will wait 60s for crictl version
	I0914 18:09:05.816745   63448 ssh_runner.go:195] Run: which crictl
	I0914 18:09:05.820367   63448 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0914 18:09:05.862564   63448 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0914 18:09:05.862637   63448 ssh_runner.go:195] Run: crio --version
	I0914 18:09:05.893106   63448 ssh_runner.go:195] Run: crio --version
	I0914 18:09:05.927136   63448 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
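Before restarting CRI-O, the log above shows the usual bridge-netfilter fallback: reading net.bridge.bridge-nf-call-iptables fails because the module is not loaded, so br_netfilter is modprobed and IPv4 forwarding is enabled. A local Go sketch of the same sequence (it must run as root, and minikube runs the equivalent commands over SSH inside the guest rather than locally):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// ensureBridgeNetfilter mirrors the sequence in the log above: if the
// bridge-nf-call-iptables sysctl is not readable, load br_netfilter first,
// then enable IPv4 forwarding.
func ensureBridgeNetfilter() error {
	if err := exec.Command("sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err != nil {
		// The sysctl only exists once the br_netfilter module is loaded.
		if err := exec.Command("modprobe", "br_netfilter"); err.Run() != nil {
			return fmt.Errorf("modprobe br_netfilter failed")
		}
	}
	// Equivalent of: echo 1 > /proc/sys/net/ipv4/ip_forward
	return os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1"), 0644)
}

func main() {
	if err := ensureBridgeNetfilter(); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}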
	I0914 18:09:04.492847   62207 main.go:141] libmachine: (no-preload-168587) Calling .Start
	I0914 18:09:04.493070   62207 main.go:141] libmachine: (no-preload-168587) Ensuring networks are active...
	I0914 18:09:04.493844   62207 main.go:141] libmachine: (no-preload-168587) Ensuring network default is active
	I0914 18:09:04.494193   62207 main.go:141] libmachine: (no-preload-168587) Ensuring network mk-no-preload-168587 is active
	I0914 18:09:04.494614   62207 main.go:141] libmachine: (no-preload-168587) Getting domain xml...
	I0914 18:09:04.495434   62207 main.go:141] libmachine: (no-preload-168587) Creating domain...
	I0914 18:09:05.801470   62207 main.go:141] libmachine: (no-preload-168587) Waiting to get IP...
	I0914 18:09:05.802621   62207 main.go:141] libmachine: (no-preload-168587) DBG | domain no-preload-168587 has defined MAC address 52:54:00:4c:40:8a in network mk-no-preload-168587
	I0914 18:09:05.803215   62207 main.go:141] libmachine: (no-preload-168587) DBG | unable to find current IP address of domain no-preload-168587 in network mk-no-preload-168587
	I0914 18:09:05.803351   62207 main.go:141] libmachine: (no-preload-168587) DBG | I0914 18:09:05.803229   64231 retry.go:31] will retry after 206.528002ms: waiting for machine to come up
	I0914 18:09:06.011556   62207 main.go:141] libmachine: (no-preload-168587) DBG | domain no-preload-168587 has defined MAC address 52:54:00:4c:40:8a in network mk-no-preload-168587
	I0914 18:09:06.012027   62207 main.go:141] libmachine: (no-preload-168587) DBG | unable to find current IP address of domain no-preload-168587 in network mk-no-preload-168587
	I0914 18:09:06.012063   62207 main.go:141] libmachine: (no-preload-168587) DBG | I0914 18:09:06.011977   64231 retry.go:31] will retry after 252.283679ms: waiting for machine to come up
	I0914 18:09:06.266621   62207 main.go:141] libmachine: (no-preload-168587) DBG | domain no-preload-168587 has defined MAC address 52:54:00:4c:40:8a in network mk-no-preload-168587
	I0914 18:09:06.267145   62207 main.go:141] libmachine: (no-preload-168587) DBG | unable to find current IP address of domain no-preload-168587 in network mk-no-preload-168587
	I0914 18:09:06.267178   62207 main.go:141] libmachine: (no-preload-168587) DBG | I0914 18:09:06.267093   64231 retry.go:31] will retry after 376.426781ms: waiting for machine to come up
	I0914 18:09:06.644639   62207 main.go:141] libmachine: (no-preload-168587) DBG | domain no-preload-168587 has defined MAC address 52:54:00:4c:40:8a in network mk-no-preload-168587
	I0914 18:09:06.645212   62207 main.go:141] libmachine: (no-preload-168587) DBG | unable to find current IP address of domain no-preload-168587 in network mk-no-preload-168587
	I0914 18:09:06.645245   62207 main.go:141] libmachine: (no-preload-168587) DBG | I0914 18:09:06.645161   64231 retry.go:31] will retry after 518.904946ms: waiting for machine to come up
	I0914 18:09:06.584604   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:09:09.085179   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:09:05.928171   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetIP
	I0914 18:09:05.931131   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | domain default-k8s-diff-port-243449 has defined MAC address 52:54:00:6e:0b:a7 in network mk-default-k8s-diff-port-243449
	I0914 18:09:05.931584   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:0b:a7", ip: ""} in network mk-default-k8s-diff-port-243449: {Iface:virbr4 ExpiryTime:2024-09-14 19:08:56 +0000 UTC Type:0 Mac:52:54:00:6e:0b:a7 Iaid: IPaddr:192.168.61.38 Prefix:24 Hostname:default-k8s-diff-port-243449 Clientid:01:52:54:00:6e:0b:a7}
	I0914 18:09:05.931662   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | domain default-k8s-diff-port-243449 has defined IP address 192.168.61.38 and MAC address 52:54:00:6e:0b:a7 in network mk-default-k8s-diff-port-243449
	I0914 18:09:05.931826   63448 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0914 18:09:05.935729   63448 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0914 18:09:05.947741   63448 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-243449 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19643/minikube-v1.34.0-1726281733-19643-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726281268-19643@sha256:cce8be0c1ac4e3d852132008ef1cc1dcf5b79f708d025db83f146ae65db32e8e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
sVersion:v1.31.1 ClusterName:default-k8s-diff-port-243449 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.38 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpirati
on:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0914 18:09:05.947872   63448 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0914 18:09:05.947935   63448 ssh_runner.go:195] Run: sudo crictl images --output json
	I0914 18:09:05.984371   63448 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0914 18:09:05.984473   63448 ssh_runner.go:195] Run: which lz4
	I0914 18:09:05.988311   63448 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0914 18:09:05.992088   63448 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0914 18:09:05.992123   63448 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I0914 18:09:07.311157   63448 crio.go:462] duration metric: took 1.322885925s to copy over tarball
	I0914 18:09:07.311297   63448 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0914 18:09:09.472639   63448 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.161311106s)
	I0914 18:09:09.472663   63448 crio.go:469] duration metric: took 2.161473132s to extract the tarball
	I0914 18:09:09.472670   63448 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0914 18:09:09.508740   63448 ssh_runner.go:195] Run: sudo crictl images --output json
	I0914 18:09:09.554508   63448 crio.go:514] all images are preloaded for cri-o runtime.
	I0914 18:09:09.554533   63448 cache_images.go:84] Images are preloaded, skipping loading
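The preload decision above hinges on "sudo crictl images --output json": when the expected kube-apiserver image is missing, the preloaded-images tarball is copied over and extracted, and the check is repeated. A sketch of that image check in Go; the JSON field names (images, repoTags) are assumptions based on typical crictl output and should be verified against the crictl version in use:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
	"strings"
)

// imageList models just the part of crictl's JSON listing used here
// (field names assumed, see the note above).
type imageList struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

// hasImage reports whether crictl already lists the given tag, which is how
// the log above decides whether the preload tarball needs to be extracted.
func hasImage(tag string) (bool, error) {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		return false, err
	}
	var list imageList
	if err := json.Unmarshal(out, &list); err != nil {
		return false, err
	}
	for _, img := range list.Images {
		for _, t := range img.RepoTags {
			if strings.EqualFold(t, tag) {
				return true, nil
			}
		}
	}
	return false, nil
}

func main() {
	ok, err := hasImage("registry.k8s.io/kube-apiserver:v1.31.1")
	fmt.Println(ok, err)
}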
	I0914 18:09:09.554548   63448 kubeadm.go:934] updating node { 192.168.61.38 8444 v1.31.1 crio true true} ...
	I0914 18:09:09.554657   63448 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-243449 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.38
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-243449 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0914 18:09:09.554722   63448 ssh_runner.go:195] Run: crio config
	I0914 18:09:09.603693   63448 cni.go:84] Creating CNI manager for ""
	I0914 18:09:09.603715   63448 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0914 18:09:09.603727   63448 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0914 18:09:09.603745   63448 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.38 APIServerPort:8444 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-243449 NodeName:default-k8s-diff-port-243449 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.38"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.38 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/
ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0914 18:09:09.603879   63448 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.38
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-243449"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.38
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.38"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0914 18:09:09.603935   63448 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0914 18:09:09.613786   63448 binaries.go:44] Found k8s binaries, skipping transfer
	I0914 18:09:09.613863   63448 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0914 18:09:09.623172   63448 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (327 bytes)
	I0914 18:09:09.641437   63448 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0914 18:09:09.657677   63448 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2169 bytes)
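The kubeadm options logged above are rendered into the InitConfiguration/ClusterConfiguration/KubeletConfiguration/KubeProxyConfiguration YAML shown earlier and copied to /var/tmp/minikube/kubeadm.yaml.new. A minimal sketch of rendering such a fragment with Go's text/template, using values visible in this log; the struct and template here are illustrative and are not minikube's actual templates:

package main

import (
	"os"
	"text/template"
)

// kubeadmParams holds a few of the options visible in the log above.
type kubeadmParams struct {
	AdvertiseAddress  string
	BindPort          int
	NodeName          string
	PodSubnet         string
	ServiceSubnet     string
	KubernetesVersion string
}

const clusterConfig = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.BindPort}}
nodeRegistration:
  criSocket: unix:///var/run/crio/crio.sock
  name: "{{.NodeName}}"
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: {{.KubernetesVersion}}
networking:
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceSubnet}}
`

func main() {
	p := kubeadmParams{
		AdvertiseAddress:  "192.168.61.38",
		BindPort:          8444,
		NodeName:          "default-k8s-diff-port-243449",
		PodSubnet:         "10.244.0.0/16",
		ServiceSubnet:     "10.96.0.0/12",
		KubernetesVersion: "v1.31.1",
	}
	// Render the fragment to stdout; minikube copies the rendered file to the
	// guest as kubeadm.yaml.new, as seen in the log above.
	tmpl := template.Must(template.New("kubeadm").Parse(clusterConfig))
	if err := tmpl.Execute(os.Stdout, p); err != nil {
		panic(err)
	}
}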
	I0914 18:09:09.675042   63448 ssh_runner.go:195] Run: grep 192.168.61.38	control-plane.minikube.internal$ /etc/hosts
	I0914 18:09:09.678885   63448 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.38	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0914 18:09:09.694466   63448 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 18:09:09.823504   63448 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0914 18:09:09.840638   63448 certs.go:68] Setting up /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/default-k8s-diff-port-243449 for IP: 192.168.61.38
	I0914 18:09:09.840658   63448 certs.go:194] generating shared ca certs ...
	I0914 18:09:09.840677   63448 certs.go:226] acquiring lock for ca certs: {Name:mkb663a3180967f5f94f0c355b2cd55067394331 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 18:09:09.840827   63448 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19643-8806/.minikube/ca.key
	I0914 18:09:09.840869   63448 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19643-8806/.minikube/proxy-client-ca.key
	I0914 18:09:09.840879   63448 certs.go:256] generating profile certs ...
	I0914 18:09:09.841046   63448 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/default-k8s-diff-port-243449/client.key
	I0914 18:09:09.841147   63448 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/default-k8s-diff-port-243449/apiserver.key.68770133
	I0914 18:09:09.841231   63448 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/default-k8s-diff-port-243449/proxy-client.key
	I0914 18:09:09.841342   63448 certs.go:484] found cert: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/16016.pem (1338 bytes)
	W0914 18:09:09.841370   63448 certs.go:480] ignoring /home/jenkins/minikube-integration/19643-8806/.minikube/certs/16016_empty.pem, impossibly tiny 0 bytes
	I0914 18:09:09.841377   63448 certs.go:484] found cert: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca-key.pem (1679 bytes)
	I0914 18:09:09.841398   63448 certs.go:484] found cert: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca.pem (1082 bytes)
	I0914 18:09:09.841425   63448 certs.go:484] found cert: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/cert.pem (1123 bytes)
	I0914 18:09:09.841447   63448 certs.go:484] found cert: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/key.pem (1675 bytes)
	I0914 18:09:09.841499   63448 certs.go:484] found cert: /home/jenkins/minikube-integration/19643-8806/.minikube/files/etc/ssl/certs/160162.pem (1708 bytes)
	I0914 18:09:09.842211   63448 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0914 18:09:09.883406   63448 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0914 18:09:09.914134   63448 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0914 18:09:09.941343   63448 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0914 18:09:09.990870   63448 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/default-k8s-diff-port-243449/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0914 18:09:10.040821   63448 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/default-k8s-diff-port-243449/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0914 18:09:10.065238   63448 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/default-k8s-diff-port-243449/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0914 18:09:10.089901   63448 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/default-k8s-diff-port-243449/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0914 18:09:10.114440   63448 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/files/etc/ssl/certs/160162.pem --> /usr/share/ca-certificates/160162.pem (1708 bytes)
	I0914 18:09:10.138963   63448 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0914 18:09:10.162828   63448 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/certs/16016.pem --> /usr/share/ca-certificates/16016.pem (1338 bytes)
	I0914 18:09:10.185702   63448 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0914 18:09:10.201251   63448 ssh_runner.go:195] Run: openssl version
	I0914 18:09:10.206904   63448 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/160162.pem && ln -fs /usr/share/ca-certificates/160162.pem /etc/ssl/certs/160162.pem"
	I0914 18:09:10.216966   63448 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/160162.pem
	I0914 18:09:10.221437   63448 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 14 17:01 /usr/share/ca-certificates/160162.pem
	I0914 18:09:10.221506   63448 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/160162.pem
	I0914 18:09:10.227033   63448 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/160162.pem /etc/ssl/certs/3ec20f2e.0"
	I0914 18:09:10.237039   63448 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0914 18:09:10.247244   63448 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0914 18:09:10.251434   63448 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 14 16:44 /usr/share/ca-certificates/minikubeCA.pem
	I0914 18:09:10.251494   63448 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0914 18:09:10.257187   63448 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0914 18:09:10.267490   63448 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16016.pem && ln -fs /usr/share/ca-certificates/16016.pem /etc/ssl/certs/16016.pem"
	I0914 18:09:10.277622   63448 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16016.pem
	I0914 18:09:10.281705   63448 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 14 17:01 /usr/share/ca-certificates/16016.pem
	I0914 18:09:10.281789   63448 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16016.pem
	I0914 18:09:10.287013   63448 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16016.pem /etc/ssl/certs/51391683.0"
	I0914 18:09:10.296942   63448 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
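The certificate installation above finishes by computing each PEM's OpenSSL subject hash and linking it as /etc/ssl/certs/<hash>.0 (for example b5213941.0 for minikubeCA.pem), which is how OpenSSL-style trust stores locate CAs. A Go sketch of that hash-and-symlink step, with paths and error handling simplified:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkBySubjectHash reproduces the pattern in the log above: compute the
// OpenSSL subject hash of a CA certificate and expose it under
// <certsDir>/<hash>.0 so TLS libraries can find it.
func linkBySubjectHash(certPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return fmt.Errorf("openssl hash: %w", err)
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(certsDir, hash+".0")
	_ = os.Remove(link) // mirror "ln -fs": replace any stale link
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}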
	I0914 18:09:05.374034   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:05.873992   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:06.374407   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:06.873737   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:07.373665   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:07.874486   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:08.374017   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:08.874365   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:09.374221   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:09.874108   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:07.165576   62207 main.go:141] libmachine: (no-preload-168587) DBG | domain no-preload-168587 has defined MAC address 52:54:00:4c:40:8a in network mk-no-preload-168587
	I0914 18:09:07.166187   62207 main.go:141] libmachine: (no-preload-168587) DBG | unable to find current IP address of domain no-preload-168587 in network mk-no-preload-168587
	I0914 18:09:07.166219   62207 main.go:141] libmachine: (no-preload-168587) DBG | I0914 18:09:07.166125   64231 retry.go:31] will retry after 631.376012ms: waiting for machine to come up
	I0914 18:09:07.798978   62207 main.go:141] libmachine: (no-preload-168587) DBG | domain no-preload-168587 has defined MAC address 52:54:00:4c:40:8a in network mk-no-preload-168587
	I0914 18:09:07.799450   62207 main.go:141] libmachine: (no-preload-168587) DBG | unable to find current IP address of domain no-preload-168587 in network mk-no-preload-168587
	I0914 18:09:07.799478   62207 main.go:141] libmachine: (no-preload-168587) DBG | I0914 18:09:07.799404   64231 retry.go:31] will retry after 668.764795ms: waiting for machine to come up
	I0914 18:09:08.470207   62207 main.go:141] libmachine: (no-preload-168587) DBG | domain no-preload-168587 has defined MAC address 52:54:00:4c:40:8a in network mk-no-preload-168587
	I0914 18:09:08.470613   62207 main.go:141] libmachine: (no-preload-168587) DBG | unable to find current IP address of domain no-preload-168587 in network mk-no-preload-168587
	I0914 18:09:08.470640   62207 main.go:141] libmachine: (no-preload-168587) DBG | I0914 18:09:08.470559   64231 retry.go:31] will retry after 943.595216ms: waiting for machine to come up
	I0914 18:09:09.415274   62207 main.go:141] libmachine: (no-preload-168587) DBG | domain no-preload-168587 has defined MAC address 52:54:00:4c:40:8a in network mk-no-preload-168587
	I0914 18:09:09.415721   62207 main.go:141] libmachine: (no-preload-168587) DBG | unable to find current IP address of domain no-preload-168587 in network mk-no-preload-168587
	I0914 18:09:09.415751   62207 main.go:141] libmachine: (no-preload-168587) DBG | I0914 18:09:09.415675   64231 retry.go:31] will retry after 956.638818ms: waiting for machine to come up
	I0914 18:09:10.374297   62207 main.go:141] libmachine: (no-preload-168587) DBG | domain no-preload-168587 has defined MAC address 52:54:00:4c:40:8a in network mk-no-preload-168587
	I0914 18:09:10.374875   62207 main.go:141] libmachine: (no-preload-168587) DBG | unable to find current IP address of domain no-preload-168587 in network mk-no-preload-168587
	I0914 18:09:10.374902   62207 main.go:141] libmachine: (no-preload-168587) DBG | I0914 18:09:10.374822   64231 retry.go:31] will retry after 1.703915418s: waiting for machine to come up
	I0914 18:09:11.583370   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:09:14.082919   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:09:10.301352   63448 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0914 18:09:10.307276   63448 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0914 18:09:10.313391   63448 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0914 18:09:10.319883   63448 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0914 18:09:10.325671   63448 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0914 18:09:10.331445   63448 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0914 18:09:10.336855   63448 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-243449 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19643/minikube-v1.34.0-1726281733-19643-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726281268-19643@sha256:cce8be0c1ac4e3d852132008ef1cc1dcf5b79f708d025db83f146ae65db32e8e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVe
rsion:v1.31.1 ClusterName:default-k8s-diff-port-243449 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.38 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:
26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0914 18:09:10.336953   63448 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0914 18:09:10.337019   63448 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0914 18:09:10.372899   63448 cri.go:89] found id: ""
	I0914 18:09:10.372988   63448 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0914 18:09:10.386897   63448 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0914 18:09:10.386920   63448 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0914 18:09:10.386978   63448 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0914 18:09:10.399165   63448 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0914 18:09:10.400212   63448 kubeconfig.go:125] found "default-k8s-diff-port-243449" server: "https://192.168.61.38:8444"
	I0914 18:09:10.402449   63448 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0914 18:09:10.414129   63448 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.38
	I0914 18:09:10.414192   63448 kubeadm.go:1160] stopping kube-system containers ...
	I0914 18:09:10.414207   63448 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0914 18:09:10.414276   63448 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0914 18:09:10.454549   63448 cri.go:89] found id: ""
	I0914 18:09:10.454627   63448 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0914 18:09:10.472261   63448 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0914 18:09:10.481693   63448 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0914 18:09:10.481724   63448 kubeadm.go:157] found existing configuration files:
	
	I0914 18:09:10.481772   63448 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0914 18:09:10.492205   63448 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0914 18:09:10.492283   63448 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0914 18:09:10.502923   63448 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0914 18:09:10.511620   63448 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0914 18:09:10.511688   63448 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0914 18:09:10.520978   63448 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0914 18:09:10.529590   63448 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0914 18:09:10.529652   63448 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0914 18:09:10.538602   63448 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0914 18:09:10.546968   63448 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0914 18:09:10.547037   63448 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0914 18:09:10.556280   63448 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0914 18:09:10.565471   63448 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 18:09:10.670297   63448 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 18:09:11.611646   63448 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0914 18:09:11.858308   63448 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 18:09:11.942761   63448 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
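Because the stale-config check above found no /etc/kubernetes/*.conf files, the restart path regenerates everything from the staged kubeadm config by running individual kubeadm phases rather than a full init. A minimal sketch of that same sequence, reproduced by hand from the commands logged above (paths and the v1.31.1 binary location are taken directly from the log; the comments are interpretation, not minikube output):

CFG=/var/tmp/minikube/kubeadm.yaml
BIN=/var/lib/minikube/binaries/v1.31.1
sudo env PATH="$BIN:$PATH" kubeadm init phase certs all --config "$CFG"           # regenerate CA-signed certs
sudo env PATH="$BIN:$PATH" kubeadm init phase kubeconfig all --config "$CFG"      # admin/kubelet/controller-manager/scheduler kubeconfigs
sudo env PATH="$BIN:$PATH" kubeadm init phase kubelet-start --config "$CFG"       # write kubelet config and (re)start kubelet
sudo env PATH="$BIN:$PATH" kubeadm init phase control-plane all --config "$CFG"   # static pod manifests for the control plane
sudo env PATH="$BIN:$PATH" kubeadm init phase etcd local --config "$CFG"          # local etcd static pod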
	I0914 18:09:12.018144   63448 api_server.go:52] waiting for apiserver process to appear ...
	I0914 18:09:12.018251   63448 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:12.518933   63448 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:13.019098   63448 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:13.518297   63448 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:14.018327   63448 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:14.033874   63448 api_server.go:72] duration metric: took 2.015718891s to wait for apiserver process to appear ...
	I0914 18:09:14.033902   63448 api_server.go:88] waiting for apiserver healthz status ...
	I0914 18:09:14.033926   63448 api_server.go:253] Checking apiserver healthz at https://192.168.61.38:8444/healthz ...
	I0914 18:09:14.034534   63448 api_server.go:269] stopped: https://192.168.61.38:8444/healthz: Get "https://192.168.61.38:8444/healthz": dial tcp 192.168.61.38:8444: connect: connection refused
	I0914 18:09:14.534065   63448 api_server.go:253] Checking apiserver healthz at https://192.168.61.38:8444/healthz ...
	I0914 18:09:10.373394   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:10.873498   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:11.373841   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:11.873492   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:12.374179   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:12.873586   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:13.374405   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:13.873518   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:14.374018   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:14.873905   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:12.080547   62207 main.go:141] libmachine: (no-preload-168587) DBG | domain no-preload-168587 has defined MAC address 52:54:00:4c:40:8a in network mk-no-preload-168587
	I0914 18:09:12.081149   62207 main.go:141] libmachine: (no-preload-168587) DBG | unable to find current IP address of domain no-preload-168587 in network mk-no-preload-168587
	I0914 18:09:12.081174   62207 main.go:141] libmachine: (no-preload-168587) DBG | I0914 18:09:12.081095   64231 retry.go:31] will retry after 1.634645735s: waiting for machine to come up
	I0914 18:09:13.717239   62207 main.go:141] libmachine: (no-preload-168587) DBG | domain no-preload-168587 has defined MAC address 52:54:00:4c:40:8a in network mk-no-preload-168587
	I0914 18:09:13.717787   62207 main.go:141] libmachine: (no-preload-168587) DBG | unable to find current IP address of domain no-preload-168587 in network mk-no-preload-168587
	I0914 18:09:13.717821   62207 main.go:141] libmachine: (no-preload-168587) DBG | I0914 18:09:13.717731   64231 retry.go:31] will retry after 2.524549426s: waiting for machine to come up
	I0914 18:09:16.244729   62207 main.go:141] libmachine: (no-preload-168587) DBG | domain no-preload-168587 has defined MAC address 52:54:00:4c:40:8a in network mk-no-preload-168587
	I0914 18:09:16.245132   62207 main.go:141] libmachine: (no-preload-168587) DBG | unable to find current IP address of domain no-preload-168587 in network mk-no-preload-168587
	I0914 18:09:16.245162   62207 main.go:141] libmachine: (no-preload-168587) DBG | I0914 18:09:16.245072   64231 retry.go:31] will retry after 2.539965892s: waiting for machine to come up
	I0914 18:09:16.083603   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:09:18.581965   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:09:16.427071   63448 api_server.go:279] https://192.168.61.38:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0914 18:09:16.427109   63448 api_server.go:103] status: https://192.168.61.38:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0914 18:09:16.427156   63448 api_server.go:253] Checking apiserver healthz at https://192.168.61.38:8444/healthz ...
	I0914 18:09:16.440812   63448 api_server.go:279] https://192.168.61.38:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0914 18:09:16.440848   63448 api_server.go:103] status: https://192.168.61.38:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0914 18:09:16.534060   63448 api_server.go:253] Checking apiserver healthz at https://192.168.61.38:8444/healthz ...
	I0914 18:09:16.593356   63448 api_server.go:279] https://192.168.61.38:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0914 18:09:16.593412   63448 api_server.go:103] status: https://192.168.61.38:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0914 18:09:17.034545   63448 api_server.go:253] Checking apiserver healthz at https://192.168.61.38:8444/healthz ...
	I0914 18:09:17.039094   63448 api_server.go:279] https://192.168.61.38:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0914 18:09:17.039131   63448 api_server.go:103] status: https://192.168.61.38:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0914 18:09:17.534668   63448 api_server.go:253] Checking apiserver healthz at https://192.168.61.38:8444/healthz ...
	I0914 18:09:17.543018   63448 api_server.go:279] https://192.168.61.38:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0914 18:09:17.543053   63448 api_server.go:103] status: https://192.168.61.38:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0914 18:09:18.034612   63448 api_server.go:253] Checking apiserver healthz at https://192.168.61.38:8444/healthz ...
	I0914 18:09:18.039042   63448 api_server.go:279] https://192.168.61.38:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0914 18:09:18.039071   63448 api_server.go:103] status: https://192.168.61.38:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0914 18:09:18.534675   63448 api_server.go:253] Checking apiserver healthz at https://192.168.61.38:8444/healthz ...
	I0914 18:09:18.540612   63448 api_server.go:279] https://192.168.61.38:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0914 18:09:18.540637   63448 api_server.go:103] status: https://192.168.61.38:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0914 18:09:19.034196   63448 api_server.go:253] Checking apiserver healthz at https://192.168.61.38:8444/healthz ...
	I0914 18:09:19.040397   63448 api_server.go:279] https://192.168.61.38:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0914 18:09:19.040429   63448 api_server.go:103] status: https://192.168.61.38:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0914 18:09:19.535035   63448 api_server.go:253] Checking apiserver healthz at https://192.168.61.38:8444/healthz ...
	I0914 18:09:19.540910   63448 api_server.go:279] https://192.168.61.38:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0914 18:09:19.540940   63448 api_server.go:103] status: https://192.168.61.38:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0914 18:09:20.034275   63448 api_server.go:253] Checking apiserver healthz at https://192.168.61.38:8444/healthz ...
	I0914 18:09:20.038541   63448 api_server.go:279] https://192.168.61.38:8444/healthz returned 200:
	ok
	I0914 18:09:20.044704   63448 api_server.go:141] control plane version: v1.31.1
	I0914 18:09:20.044734   63448 api_server.go:131] duration metric: took 6.010822563s to wait for apiserver health ...
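The 403 → 500 → 200 progression above is the apiserver coming up: anonymous /healthz requests are forbidden until the RBAC bootstrap roles exist, then individual post-start hooks (the [-] entries) keep the endpoint at 500 until they complete. A rough way to watch the same endpoint by hand; the loop is only an illustration of the wait the log performs, not minikube code, and -k is needed because the serving cert is signed by the cluster CA rather than a host-trusted one:

HEALTHZ="https://192.168.61.38:8444/healthz"
until [ "$(curl -sk -o /dev/null -w '%{http_code}' "$HEALTHZ")" = "200" ]; do
  sleep 0.5                       # retry until the apiserver reports healthy
done
curl -sk "$HEALTHZ?verbose"       # per-check [+]/[-] breakdown like the output above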
	I0914 18:09:20.044744   63448 cni.go:84] Creating CNI manager for ""
	I0914 18:09:20.044752   63448 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0914 18:09:20.046616   63448 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0914 18:09:20.047724   63448 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0914 18:09:20.058152   63448 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
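The 496-byte conflist written to /etc/cni/net.d/1-k8s.conflist is not reproduced in the log. For context only, a bridge CNI configuration of this kind typically looks roughly like the following; the file name matches the log, but the contents here are an assumption for illustration, not the exact file minikube generated:

# Illustrative bridge + portmap conflist; the real 1-k8s.conflist may differ.
sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}
EOF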
	I0914 18:09:20.077880   63448 system_pods.go:43] waiting for kube-system pods to appear ...
	I0914 18:09:20.090089   63448 system_pods.go:59] 8 kube-system pods found
	I0914 18:09:20.090135   63448 system_pods.go:61] "coredns-7c65d6cfc9-8v8s7" [896b4fde-d17e-43a3-b7c8-b710e2e70e2c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0914 18:09:20.090148   63448 system_pods.go:61] "etcd-default-k8s-diff-port-243449" [9201493d-45db-44f4-948d-34e1d1ddee8f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0914 18:09:20.090178   63448 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-243449" [052b85bf-0f3b-4ace-9301-99e53f91cfcf] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0914 18:09:20.090192   63448 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-243449" [51214b37-f5e8-4037-83ff-1fd09b93e008] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0914 18:09:20.090199   63448 system_pods.go:61] "kube-proxy-gbkqm" [4308aacf-ea0a-4bba-8598-85ffaf959b7e] Running
	I0914 18:09:20.090210   63448 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-243449" [b6eacf30-47ec-4f8d-968c-195661a2a732] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0914 18:09:20.090219   63448 system_pods.go:61] "metrics-server-6867b74b74-7v8dr" [90be95af-c779-4b31-b261-2c4020a34280] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 18:09:20.090226   63448 system_pods.go:61] "storage-provisioner" [2e814601-a19a-4848-bed5-d9a29ffb3b5d] Running
	I0914 18:09:20.090236   63448 system_pods.go:74] duration metric: took 12.327834ms to wait for pod list to return data ...
	I0914 18:09:20.090248   63448 node_conditions.go:102] verifying NodePressure condition ...
	I0914 18:09:20.094429   63448 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0914 18:09:20.094455   63448 node_conditions.go:123] node cpu capacity is 2
	I0914 18:09:20.094468   63448 node_conditions.go:105] duration metric: took 4.21448ms to run NodePressure ...
	I0914 18:09:20.094486   63448 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 18:09:15.374447   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:15.873830   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:16.373497   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:16.874326   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:17.373994   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:17.873394   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:18.373596   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:18.874350   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:19.374434   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:19.873774   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:20.357111   63448 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0914 18:09:20.361447   63448 kubeadm.go:739] kubelet initialised
	I0914 18:09:20.361469   63448 kubeadm.go:740] duration metric: took 4.331134ms waiting for restarted kubelet to initialise ...
	I0914 18:09:20.361479   63448 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 18:09:20.367027   63448 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-8v8s7" in "kube-system" namespace to be "Ready" ...
	I0914 18:09:20.371669   63448 pod_ready.go:98] node "default-k8s-diff-port-243449" hosting pod "coredns-7c65d6cfc9-8v8s7" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-243449" has status "Ready":"False"
	I0914 18:09:20.371697   63448 pod_ready.go:82] duration metric: took 4.644689ms for pod "coredns-7c65d6cfc9-8v8s7" in "kube-system" namespace to be "Ready" ...
	E0914 18:09:20.371706   63448 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-243449" hosting pod "coredns-7c65d6cfc9-8v8s7" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-243449" has status "Ready":"False"
	I0914 18:09:20.371714   63448 pod_ready.go:79] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-243449" in "kube-system" namespace to be "Ready" ...
	I0914 18:09:20.376461   63448 pod_ready.go:98] node "default-k8s-diff-port-243449" hosting pod "etcd-default-k8s-diff-port-243449" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-243449" has status "Ready":"False"
	I0914 18:09:20.376486   63448 pod_ready.go:82] duration metric: took 4.764316ms for pod "etcd-default-k8s-diff-port-243449" in "kube-system" namespace to be "Ready" ...
	E0914 18:09:20.376497   63448 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-243449" hosting pod "etcd-default-k8s-diff-port-243449" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-243449" has status "Ready":"False"
	I0914 18:09:20.376506   63448 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-243449" in "kube-system" namespace to be "Ready" ...
	I0914 18:09:20.380607   63448 pod_ready.go:98] node "default-k8s-diff-port-243449" hosting pod "kube-apiserver-default-k8s-diff-port-243449" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-243449" has status "Ready":"False"
	I0914 18:09:20.380632   63448 pod_ready.go:82] duration metric: took 4.117696ms for pod "kube-apiserver-default-k8s-diff-port-243449" in "kube-system" namespace to be "Ready" ...
	E0914 18:09:20.380642   63448 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-243449" hosting pod "kube-apiserver-default-k8s-diff-port-243449" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-243449" has status "Ready":"False"
	I0914 18:09:20.380649   63448 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-243449" in "kube-system" namespace to be "Ready" ...
	I0914 18:09:20.481883   63448 pod_ready.go:98] node "default-k8s-diff-port-243449" hosting pod "kube-controller-manager-default-k8s-diff-port-243449" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-243449" has status "Ready":"False"
	I0914 18:09:20.481920   63448 pod_ready.go:82] duration metric: took 101.262101ms for pod "kube-controller-manager-default-k8s-diff-port-243449" in "kube-system" namespace to be "Ready" ...
	E0914 18:09:20.481935   63448 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-243449" hosting pod "kube-controller-manager-default-k8s-diff-port-243449" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-243449" has status "Ready":"False"
	I0914 18:09:20.481965   63448 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-gbkqm" in "kube-system" namespace to be "Ready" ...
	I0914 18:09:20.881501   63448 pod_ready.go:98] node "default-k8s-diff-port-243449" hosting pod "kube-proxy-gbkqm" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-243449" has status "Ready":"False"
	I0914 18:09:20.881541   63448 pod_ready.go:82] duration metric: took 399.559576ms for pod "kube-proxy-gbkqm" in "kube-system" namespace to be "Ready" ...
	E0914 18:09:20.881556   63448 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-243449" hosting pod "kube-proxy-gbkqm" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-243449" has status "Ready":"False"
	I0914 18:09:20.881566   63448 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-243449" in "kube-system" namespace to be "Ready" ...
	I0914 18:09:21.282414   63448 pod_ready.go:98] node "default-k8s-diff-port-243449" hosting pod "kube-scheduler-default-k8s-diff-port-243449" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-243449" has status "Ready":"False"
	I0914 18:09:21.282446   63448 pod_ready.go:82] duration metric: took 400.860884ms for pod "kube-scheduler-default-k8s-diff-port-243449" in "kube-system" namespace to be "Ready" ...
	E0914 18:09:21.282463   63448 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-243449" hosting pod "kube-scheduler-default-k8s-diff-port-243449" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-243449" has status "Ready":"False"
	I0914 18:09:21.282472   63448 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace to be "Ready" ...
	I0914 18:09:21.681717   63448 pod_ready.go:98] node "default-k8s-diff-port-243449" hosting pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-243449" has status "Ready":"False"
	I0914 18:09:21.681757   63448 pod_ready.go:82] duration metric: took 399.273892ms for pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace to be "Ready" ...
	E0914 18:09:21.681773   63448 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-243449" hosting pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-243449" has status "Ready":"False"
	I0914 18:09:21.681783   63448 pod_ready.go:39] duration metric: took 1.320292845s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 18:09:21.681825   63448 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0914 18:09:21.693644   63448 ops.go:34] apiserver oom_adj: -16
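The oom_adj of -16 read above is on the legacy kernel scale (-17..15) and indicates kubelet restarted the apiserver with a strongly negative OOM-score adjustment, so the OOM killer will prefer other processes. The check is the same one-liner the log runs, shown here alongside the modern oom_score_adj file for comparison (what value appears there depends on the kubelet's settings, so treat the second read as informational):

PID=$(pgrep -n kube-apiserver)
cat /proc/$PID/oom_adj        # legacy scale; -16 in the log above
cat /proc/$PID/oom_score_adj  # current scale (-1000..1000) that kubelet actually sets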
	I0914 18:09:21.693682   63448 kubeadm.go:597] duration metric: took 11.306754096s to restartPrimaryControlPlane
	I0914 18:09:21.693696   63448 kubeadm.go:394] duration metric: took 11.356851178s to StartCluster
	I0914 18:09:21.693719   63448 settings.go:142] acquiring lock: {Name:mkee31cd57c3a91eff581c48ba7961460fb3e0b1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 18:09:21.693820   63448 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19643-8806/kubeconfig
	I0914 18:09:21.695521   63448 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19643-8806/kubeconfig: {Name:mk850f3e2f3c2ef1e81c795ae72e22e459e27140 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 18:09:21.695793   63448 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.38 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0914 18:09:21.695903   63448 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0914 18:09:21.695982   63448 config.go:182] Loaded profile config "default-k8s-diff-port-243449": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0914 18:09:21.696003   63448 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-243449"
	I0914 18:09:21.696021   63448 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-243449"
	I0914 18:09:21.696029   63448 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-243449"
	W0914 18:09:21.696041   63448 addons.go:243] addon storage-provisioner should already be in state true
	I0914 18:09:21.696044   63448 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-243449"
	I0914 18:09:21.696063   63448 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-243449"
	I0914 18:09:21.696094   63448 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-243449"
	W0914 18:09:21.696108   63448 addons.go:243] addon metrics-server should already be in state true
	I0914 18:09:21.696149   63448 host.go:66] Checking if "default-k8s-diff-port-243449" exists ...
	I0914 18:09:21.696074   63448 host.go:66] Checking if "default-k8s-diff-port-243449" exists ...
	I0914 18:09:21.696411   63448 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 18:09:21.696455   63448 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 18:09:21.696543   63448 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 18:09:21.696562   63448 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 18:09:21.696693   63448 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 18:09:21.696735   63448 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 18:09:21.697719   63448 out.go:177] * Verifying Kubernetes components...
	I0914 18:09:21.699171   63448 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 18:09:21.712479   63448 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36733
	I0914 18:09:21.712563   63448 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40265
	I0914 18:09:21.713050   63448 main.go:141] libmachine: () Calling .GetVersion
	I0914 18:09:21.713065   63448 main.go:141] libmachine: () Calling .GetVersion
	I0914 18:09:21.713585   63448 main.go:141] libmachine: Using API Version  1
	I0914 18:09:21.713601   63448 main.go:141] libmachine: Using API Version  1
	I0914 18:09:21.713613   63448 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 18:09:21.713633   63448 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 18:09:21.713940   63448 main.go:141] libmachine: () Calling .GetMachineName
	I0914 18:09:21.714122   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetState
	I0914 18:09:21.714135   63448 main.go:141] libmachine: () Calling .GetMachineName
	I0914 18:09:21.714737   63448 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 18:09:21.714789   63448 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 18:09:21.716503   63448 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33627
	I0914 18:09:21.716977   63448 main.go:141] libmachine: () Calling .GetVersion
	I0914 18:09:21.717490   63448 main.go:141] libmachine: Using API Version  1
	I0914 18:09:21.717514   63448 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 18:09:21.717872   63448 main.go:141] libmachine: () Calling .GetMachineName
	I0914 18:09:21.718055   63448 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-243449"
	W0914 18:09:21.718075   63448 addons.go:243] addon default-storageclass should already be in state true
	I0914 18:09:21.718105   63448 host.go:66] Checking if "default-k8s-diff-port-243449" exists ...
	I0914 18:09:21.718432   63448 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 18:09:21.718484   63448 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 18:09:21.718491   63448 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 18:09:21.718527   63448 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 18:09:21.737248   63448 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45909
	I0914 18:09:21.738874   63448 main.go:141] libmachine: () Calling .GetVersion
	I0914 18:09:21.739437   63448 main.go:141] libmachine: Using API Version  1
	I0914 18:09:21.739460   63448 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 18:09:21.739865   63448 main.go:141] libmachine: () Calling .GetMachineName
	I0914 18:09:21.740121   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetState
	I0914 18:09:21.742251   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .DriverName
	I0914 18:09:21.744281   63448 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 18:09:21.745631   63448 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0914 18:09:21.745656   63448 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0914 18:09:21.745682   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetSSHHostname
	I0914 18:09:21.749856   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | domain default-k8s-diff-port-243449 has defined MAC address 52:54:00:6e:0b:a7 in network mk-default-k8s-diff-port-243449
	I0914 18:09:21.750398   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:0b:a7", ip: ""} in network mk-default-k8s-diff-port-243449: {Iface:virbr4 ExpiryTime:2024-09-14 19:08:56 +0000 UTC Type:0 Mac:52:54:00:6e:0b:a7 Iaid: IPaddr:192.168.61.38 Prefix:24 Hostname:default-k8s-diff-port-243449 Clientid:01:52:54:00:6e:0b:a7}
	I0914 18:09:21.750424   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | domain default-k8s-diff-port-243449 has defined IP address 192.168.61.38 and MAC address 52:54:00:6e:0b:a7 in network mk-default-k8s-diff-port-243449
	I0914 18:09:21.750659   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetSSHPort
	I0914 18:09:21.750886   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetSSHKeyPath
	I0914 18:09:21.751029   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetSSHUsername
	I0914 18:09:21.751187   63448 sshutil.go:53] new ssh client: &{IP:192.168.61.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/default-k8s-diff-port-243449/id_rsa Username:docker}
	I0914 18:09:21.756605   63448 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33055
	I0914 18:09:21.756825   63448 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39257
	I0914 18:09:21.757040   63448 main.go:141] libmachine: () Calling .GetVersion
	I0914 18:09:21.757293   63448 main.go:141] libmachine: () Calling .GetVersion
	I0914 18:09:21.757562   63448 main.go:141] libmachine: Using API Version  1
	I0914 18:09:21.757588   63448 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 18:09:21.758058   63448 main.go:141] libmachine: () Calling .GetMachineName
	I0914 18:09:21.758301   63448 main.go:141] libmachine: Using API Version  1
	I0914 18:09:21.758322   63448 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 18:09:21.758325   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetState
	I0914 18:09:21.758717   63448 main.go:141] libmachine: () Calling .GetMachineName
	I0914 18:09:21.759300   63448 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 18:09:21.759342   63448 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 18:09:21.760557   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .DriverName
	I0914 18:09:21.762845   63448 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0914 18:09:18.787883   62207 main.go:141] libmachine: (no-preload-168587) DBG | domain no-preload-168587 has defined MAC address 52:54:00:4c:40:8a in network mk-no-preload-168587
	I0914 18:09:18.788270   62207 main.go:141] libmachine: (no-preload-168587) DBG | unable to find current IP address of domain no-preload-168587 in network mk-no-preload-168587
	I0914 18:09:18.788298   62207 main.go:141] libmachine: (no-preload-168587) DBG | I0914 18:09:18.788225   64231 retry.go:31] will retry after 4.53698887s: waiting for machine to come up
	I0914 18:09:21.764071   63448 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0914 18:09:21.764092   63448 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0914 18:09:21.764116   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetSSHHostname
	I0914 18:09:21.767725   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | domain default-k8s-diff-port-243449 has defined MAC address 52:54:00:6e:0b:a7 in network mk-default-k8s-diff-port-243449
	I0914 18:09:21.768255   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:0b:a7", ip: ""} in network mk-default-k8s-diff-port-243449: {Iface:virbr4 ExpiryTime:2024-09-14 19:08:56 +0000 UTC Type:0 Mac:52:54:00:6e:0b:a7 Iaid: IPaddr:192.168.61.38 Prefix:24 Hostname:default-k8s-diff-port-243449 Clientid:01:52:54:00:6e:0b:a7}
	I0914 18:09:21.768367   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | domain default-k8s-diff-port-243449 has defined IP address 192.168.61.38 and MAC address 52:54:00:6e:0b:a7 in network mk-default-k8s-diff-port-243449
	I0914 18:09:21.768503   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetSSHPort
	I0914 18:09:21.768681   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetSSHKeyPath
	I0914 18:09:21.768856   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetSSHUsername
	I0914 18:09:21.769030   63448 sshutil.go:53] new ssh client: &{IP:192.168.61.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/default-k8s-diff-port-243449/id_rsa Username:docker}
	I0914 18:09:21.776783   63448 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38967
	I0914 18:09:21.777226   63448 main.go:141] libmachine: () Calling .GetVersion
	I0914 18:09:21.777736   63448 main.go:141] libmachine: Using API Version  1
	I0914 18:09:21.777754   63448 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 18:09:21.778113   63448 main.go:141] libmachine: () Calling .GetMachineName
	I0914 18:09:21.778345   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetState
	I0914 18:09:21.780215   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .DriverName
	I0914 18:09:21.780421   63448 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0914 18:09:21.780436   63448 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0914 18:09:21.780458   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetSSHHostname
	I0914 18:09:21.783243   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | domain default-k8s-diff-port-243449 has defined MAC address 52:54:00:6e:0b:a7 in network mk-default-k8s-diff-port-243449
	I0914 18:09:21.783671   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:0b:a7", ip: ""} in network mk-default-k8s-diff-port-243449: {Iface:virbr4 ExpiryTime:2024-09-14 19:08:56 +0000 UTC Type:0 Mac:52:54:00:6e:0b:a7 Iaid: IPaddr:192.168.61.38 Prefix:24 Hostname:default-k8s-diff-port-243449 Clientid:01:52:54:00:6e:0b:a7}
	I0914 18:09:21.783698   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | domain default-k8s-diff-port-243449 has defined IP address 192.168.61.38 and MAC address 52:54:00:6e:0b:a7 in network mk-default-k8s-diff-port-243449
	I0914 18:09:21.783857   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetSSHPort
	I0914 18:09:21.784023   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetSSHKeyPath
	I0914 18:09:21.784138   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetSSHUsername
	I0914 18:09:21.784256   63448 sshutil.go:53] new ssh client: &{IP:192.168.61.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/default-k8s-diff-port-243449/id_rsa Username:docker}
	I0914 18:09:21.919649   63448 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0914 18:09:21.945515   63448 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-243449" to be "Ready" ...
	I0914 18:09:22.020487   63448 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0914 18:09:22.020509   63448 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0914 18:09:22.041265   63448 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0914 18:09:22.072169   63448 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0914 18:09:22.072199   63448 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0914 18:09:22.112117   63448 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0914 18:09:22.112148   63448 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0914 18:09:22.146636   63448 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0914 18:09:22.162248   63448 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0914 18:09:22.520416   63448 main.go:141] libmachine: Making call to close driver server
	I0914 18:09:22.520448   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .Close
	I0914 18:09:22.520793   63448 main.go:141] libmachine: Successfully made call to close driver server
	I0914 18:09:22.520815   63448 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 18:09:22.520831   63448 main.go:141] libmachine: Making call to close driver server
	I0914 18:09:22.520833   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | Closing plugin on server side
	I0914 18:09:22.520840   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .Close
	I0914 18:09:22.521074   63448 main.go:141] libmachine: Successfully made call to close driver server
	I0914 18:09:22.521119   63448 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 18:09:22.527992   63448 main.go:141] libmachine: Making call to close driver server
	I0914 18:09:22.528030   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .Close
	I0914 18:09:22.528578   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | Closing plugin on server side
	I0914 18:09:22.528581   63448 main.go:141] libmachine: Successfully made call to close driver server
	I0914 18:09:22.528605   63448 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 18:09:23.246463   63448 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.084175525s)
	I0914 18:09:23.246520   63448 main.go:141] libmachine: Making call to close driver server
	I0914 18:09:23.246535   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .Close
	I0914 18:09:23.246564   63448 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.099889297s)
	I0914 18:09:23.246609   63448 main.go:141] libmachine: Making call to close driver server
	I0914 18:09:23.246621   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .Close
	I0914 18:09:23.246835   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | Closing plugin on server side
	I0914 18:09:23.246876   63448 main.go:141] libmachine: Successfully made call to close driver server
	I0914 18:09:23.246888   63448 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 18:09:23.246897   63448 main.go:141] libmachine: Making call to close driver server
	I0914 18:09:23.246910   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .Close
	I0914 18:09:23.246944   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | Closing plugin on server side
	I0914 18:09:23.246958   63448 main.go:141] libmachine: Successfully made call to close driver server
	I0914 18:09:23.247002   63448 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 18:09:23.247021   63448 main.go:141] libmachine: Making call to close driver server
	I0914 18:09:23.247046   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .Close
	I0914 18:09:23.247156   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | Closing plugin on server side
	I0914 18:09:23.247192   63448 main.go:141] libmachine: Successfully made call to close driver server
	I0914 18:09:23.247227   63448 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 18:09:23.247241   63448 main.go:141] libmachine: Successfully made call to close driver server
	I0914 18:09:23.247260   63448 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 18:09:23.247241   63448 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-243449"
	I0914 18:09:23.250385   63448 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0914 18:09:20.583600   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:09:23.083187   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:09:23.251609   63448 addons.go:510] duration metric: took 1.555716144s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I0914 18:09:23.949715   63448 node_ready.go:53] node "default-k8s-diff-port-243449" has status "Ready":"False"
	I0914 18:09:20.374108   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:20.874167   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:21.374108   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:21.873539   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:22.374451   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:22.874481   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:23.374533   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:23.873433   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:24.374284   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:24.873466   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:23.327287   62207 main.go:141] libmachine: (no-preload-168587) DBG | domain no-preload-168587 has defined MAC address 52:54:00:4c:40:8a in network mk-no-preload-168587
	I0914 18:09:23.327775   62207 main.go:141] libmachine: (no-preload-168587) DBG | domain no-preload-168587 has current primary IP address 192.168.39.38 and MAC address 52:54:00:4c:40:8a in network mk-no-preload-168587
	I0914 18:09:23.327803   62207 main.go:141] libmachine: (no-preload-168587) Found IP for machine: 192.168.39.38
	I0914 18:09:23.327822   62207 main.go:141] libmachine: (no-preload-168587) Reserving static IP address...
	I0914 18:09:23.328197   62207 main.go:141] libmachine: (no-preload-168587) DBG | found host DHCP lease matching {name: "no-preload-168587", mac: "52:54:00:4c:40:8a", ip: "192.168.39.38"} in network mk-no-preload-168587: {Iface:virbr2 ExpiryTime:2024-09-14 18:59:50 +0000 UTC Type:0 Mac:52:54:00:4c:40:8a Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:no-preload-168587 Clientid:01:52:54:00:4c:40:8a}
	I0914 18:09:23.328221   62207 main.go:141] libmachine: (no-preload-168587) Reserved static IP address: 192.168.39.38
	I0914 18:09:23.328264   62207 main.go:141] libmachine: (no-preload-168587) DBG | skip adding static IP to network mk-no-preload-168587 - found existing host DHCP lease matching {name: "no-preload-168587", mac: "52:54:00:4c:40:8a", ip: "192.168.39.38"}
	I0914 18:09:23.328283   62207 main.go:141] libmachine: (no-preload-168587) DBG | Getting to WaitForSSH function...
	I0914 18:09:23.328295   62207 main.go:141] libmachine: (no-preload-168587) Waiting for SSH to be available...
	I0914 18:09:23.330598   62207 main.go:141] libmachine: (no-preload-168587) DBG | domain no-preload-168587 has defined MAC address 52:54:00:4c:40:8a in network mk-no-preload-168587
	I0914 18:09:23.330954   62207 main.go:141] libmachine: (no-preload-168587) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:40:8a", ip: ""} in network mk-no-preload-168587: {Iface:virbr2 ExpiryTime:2024-09-14 18:59:50 +0000 UTC Type:0 Mac:52:54:00:4c:40:8a Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:no-preload-168587 Clientid:01:52:54:00:4c:40:8a}
	I0914 18:09:23.330985   62207 main.go:141] libmachine: (no-preload-168587) DBG | domain no-preload-168587 has defined IP address 192.168.39.38 and MAC address 52:54:00:4c:40:8a in network mk-no-preload-168587
	I0914 18:09:23.331105   62207 main.go:141] libmachine: (no-preload-168587) DBG | Using SSH client type: external
	I0914 18:09:23.331132   62207 main.go:141] libmachine: (no-preload-168587) DBG | Using SSH private key: /home/jenkins/minikube-integration/19643-8806/.minikube/machines/no-preload-168587/id_rsa (-rw-------)
	I0914 18:09:23.331184   62207 main.go:141] libmachine: (no-preload-168587) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.38 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19643-8806/.minikube/machines/no-preload-168587/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0914 18:09:23.331196   62207 main.go:141] libmachine: (no-preload-168587) DBG | About to run SSH command:
	I0914 18:09:23.331208   62207 main.go:141] libmachine: (no-preload-168587) DBG | exit 0
	I0914 18:09:23.454525   62207 main.go:141] libmachine: (no-preload-168587) DBG | SSH cmd err, output: <nil>: 
	I0914 18:09:23.454883   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetConfigRaw
	I0914 18:09:23.455505   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetIP
	I0914 18:09:23.457696   62207 main.go:141] libmachine: (no-preload-168587) DBG | domain no-preload-168587 has defined MAC address 52:54:00:4c:40:8a in network mk-no-preload-168587
	I0914 18:09:23.458030   62207 main.go:141] libmachine: (no-preload-168587) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:40:8a", ip: ""} in network mk-no-preload-168587: {Iface:virbr2 ExpiryTime:2024-09-14 18:59:50 +0000 UTC Type:0 Mac:52:54:00:4c:40:8a Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:no-preload-168587 Clientid:01:52:54:00:4c:40:8a}
	I0914 18:09:23.458069   62207 main.go:141] libmachine: (no-preload-168587) DBG | domain no-preload-168587 has defined IP address 192.168.39.38 and MAC address 52:54:00:4c:40:8a in network mk-no-preload-168587
	I0914 18:09:23.458372   62207 profile.go:143] Saving config to /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/no-preload-168587/config.json ...
	I0914 18:09:23.458611   62207 machine.go:93] provisionDockerMachine start ...
	I0914 18:09:23.458633   62207 main.go:141] libmachine: (no-preload-168587) Calling .DriverName
	I0914 18:09:23.458828   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetSSHHostname
	I0914 18:09:23.461199   62207 main.go:141] libmachine: (no-preload-168587) DBG | domain no-preload-168587 has defined MAC address 52:54:00:4c:40:8a in network mk-no-preload-168587
	I0914 18:09:23.461540   62207 main.go:141] libmachine: (no-preload-168587) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:40:8a", ip: ""} in network mk-no-preload-168587: {Iface:virbr2 ExpiryTime:2024-09-14 18:59:50 +0000 UTC Type:0 Mac:52:54:00:4c:40:8a Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:no-preload-168587 Clientid:01:52:54:00:4c:40:8a}
	I0914 18:09:23.461576   62207 main.go:141] libmachine: (no-preload-168587) DBG | domain no-preload-168587 has defined IP address 192.168.39.38 and MAC address 52:54:00:4c:40:8a in network mk-no-preload-168587
	I0914 18:09:23.461705   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetSSHPort
	I0914 18:09:23.461895   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetSSHKeyPath
	I0914 18:09:23.462006   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetSSHKeyPath
	I0914 18:09:23.462153   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetSSHUsername
	I0914 18:09:23.462314   62207 main.go:141] libmachine: Using SSH client type: native
	I0914 18:09:23.462477   62207 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.38 22 <nil> <nil>}
	I0914 18:09:23.462488   62207 main.go:141] libmachine: About to run SSH command:
	hostname
	I0914 18:09:23.566278   62207 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0914 18:09:23.566310   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetMachineName
	I0914 18:09:23.566559   62207 buildroot.go:166] provisioning hostname "no-preload-168587"
	I0914 18:09:23.566581   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetMachineName
	I0914 18:09:23.566742   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetSSHHostname
	I0914 18:09:23.569254   62207 main.go:141] libmachine: (no-preload-168587) DBG | domain no-preload-168587 has defined MAC address 52:54:00:4c:40:8a in network mk-no-preload-168587
	I0914 18:09:23.569590   62207 main.go:141] libmachine: (no-preload-168587) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:40:8a", ip: ""} in network mk-no-preload-168587: {Iface:virbr2 ExpiryTime:2024-09-14 18:59:50 +0000 UTC Type:0 Mac:52:54:00:4c:40:8a Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:no-preload-168587 Clientid:01:52:54:00:4c:40:8a}
	I0914 18:09:23.569617   62207 main.go:141] libmachine: (no-preload-168587) DBG | domain no-preload-168587 has defined IP address 192.168.39.38 and MAC address 52:54:00:4c:40:8a in network mk-no-preload-168587
	I0914 18:09:23.569713   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetSSHPort
	I0914 18:09:23.569888   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetSSHKeyPath
	I0914 18:09:23.570009   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetSSHKeyPath
	I0914 18:09:23.570174   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetSSHUsername
	I0914 18:09:23.570344   62207 main.go:141] libmachine: Using SSH client type: native
	I0914 18:09:23.570556   62207 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.38 22 <nil> <nil>}
	I0914 18:09:23.570575   62207 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-168587 && echo "no-preload-168587" | sudo tee /etc/hostname
	I0914 18:09:23.687805   62207 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-168587
	
	I0914 18:09:23.687848   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetSSHHostname
	I0914 18:09:23.690447   62207 main.go:141] libmachine: (no-preload-168587) DBG | domain no-preload-168587 has defined MAC address 52:54:00:4c:40:8a in network mk-no-preload-168587
	I0914 18:09:23.690787   62207 main.go:141] libmachine: (no-preload-168587) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:40:8a", ip: ""} in network mk-no-preload-168587: {Iface:virbr2 ExpiryTime:2024-09-14 18:59:50 +0000 UTC Type:0 Mac:52:54:00:4c:40:8a Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:no-preload-168587 Clientid:01:52:54:00:4c:40:8a}
	I0914 18:09:23.690824   62207 main.go:141] libmachine: (no-preload-168587) DBG | domain no-preload-168587 has defined IP address 192.168.39.38 and MAC address 52:54:00:4c:40:8a in network mk-no-preload-168587
	I0914 18:09:23.690955   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetSSHPort
	I0914 18:09:23.691135   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetSSHKeyPath
	I0914 18:09:23.691279   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetSSHKeyPath
	I0914 18:09:23.691427   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetSSHUsername
	I0914 18:09:23.691590   62207 main.go:141] libmachine: Using SSH client type: native
	I0914 18:09:23.691768   62207 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.38 22 <nil> <nil>}
	I0914 18:09:23.691790   62207 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-168587' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-168587/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-168587' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0914 18:09:23.805502   62207 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0914 18:09:23.805527   62207 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19643-8806/.minikube CaCertPath:/home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19643-8806/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19643-8806/.minikube}
	I0914 18:09:23.805545   62207 buildroot.go:174] setting up certificates
	I0914 18:09:23.805553   62207 provision.go:84] configureAuth start
	I0914 18:09:23.805561   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetMachineName
	I0914 18:09:23.805798   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetIP
	I0914 18:09:23.808306   62207 main.go:141] libmachine: (no-preload-168587) DBG | domain no-preload-168587 has defined MAC address 52:54:00:4c:40:8a in network mk-no-preload-168587
	I0914 18:09:23.808643   62207 main.go:141] libmachine: (no-preload-168587) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:40:8a", ip: ""} in network mk-no-preload-168587: {Iface:virbr2 ExpiryTime:2024-09-14 18:59:50 +0000 UTC Type:0 Mac:52:54:00:4c:40:8a Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:no-preload-168587 Clientid:01:52:54:00:4c:40:8a}
	I0914 18:09:23.808668   62207 main.go:141] libmachine: (no-preload-168587) DBG | domain no-preload-168587 has defined IP address 192.168.39.38 and MAC address 52:54:00:4c:40:8a in network mk-no-preload-168587
	I0914 18:09:23.808819   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetSSHHostname
	I0914 18:09:23.811055   62207 main.go:141] libmachine: (no-preload-168587) DBG | domain no-preload-168587 has defined MAC address 52:54:00:4c:40:8a in network mk-no-preload-168587
	I0914 18:09:23.811374   62207 main.go:141] libmachine: (no-preload-168587) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:40:8a", ip: ""} in network mk-no-preload-168587: {Iface:virbr2 ExpiryTime:2024-09-14 18:59:50 +0000 UTC Type:0 Mac:52:54:00:4c:40:8a Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:no-preload-168587 Clientid:01:52:54:00:4c:40:8a}
	I0914 18:09:23.811401   62207 main.go:141] libmachine: (no-preload-168587) DBG | domain no-preload-168587 has defined IP address 192.168.39.38 and MAC address 52:54:00:4c:40:8a in network mk-no-preload-168587
	I0914 18:09:23.811586   62207 provision.go:143] copyHostCerts
	I0914 18:09:23.811647   62207 exec_runner.go:144] found /home/jenkins/minikube-integration/19643-8806/.minikube/ca.pem, removing ...
	I0914 18:09:23.811657   62207 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19643-8806/.minikube/ca.pem
	I0914 18:09:23.811712   62207 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19643-8806/.minikube/ca.pem (1082 bytes)
	I0914 18:09:23.811800   62207 exec_runner.go:144] found /home/jenkins/minikube-integration/19643-8806/.minikube/cert.pem, removing ...
	I0914 18:09:23.811808   62207 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19643-8806/.minikube/cert.pem
	I0914 18:09:23.811829   62207 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19643-8806/.minikube/cert.pem (1123 bytes)
	I0914 18:09:23.811880   62207 exec_runner.go:144] found /home/jenkins/minikube-integration/19643-8806/.minikube/key.pem, removing ...
	I0914 18:09:23.811887   62207 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19643-8806/.minikube/key.pem
	I0914 18:09:23.811908   62207 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19643-8806/.minikube/key.pem (1675 bytes)
	I0914 18:09:23.811956   62207 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19643-8806/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca-key.pem org=jenkins.no-preload-168587 san=[127.0.0.1 192.168.39.38 localhost minikube no-preload-168587]
	I0914 18:09:24.051868   62207 provision.go:177] copyRemoteCerts
	I0914 18:09:24.051936   62207 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0914 18:09:24.051958   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetSSHHostname
	I0914 18:09:24.054842   62207 main.go:141] libmachine: (no-preload-168587) DBG | domain no-preload-168587 has defined MAC address 52:54:00:4c:40:8a in network mk-no-preload-168587
	I0914 18:09:24.055107   62207 main.go:141] libmachine: (no-preload-168587) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:40:8a", ip: ""} in network mk-no-preload-168587: {Iface:virbr2 ExpiryTime:2024-09-14 18:59:50 +0000 UTC Type:0 Mac:52:54:00:4c:40:8a Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:no-preload-168587 Clientid:01:52:54:00:4c:40:8a}
	I0914 18:09:24.055138   62207 main.go:141] libmachine: (no-preload-168587) DBG | domain no-preload-168587 has defined IP address 192.168.39.38 and MAC address 52:54:00:4c:40:8a in network mk-no-preload-168587
	I0914 18:09:24.055321   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetSSHPort
	I0914 18:09:24.055514   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetSSHKeyPath
	I0914 18:09:24.055664   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetSSHUsername
	I0914 18:09:24.055804   62207 sshutil.go:53] new ssh client: &{IP:192.168.39.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/no-preload-168587/id_rsa Username:docker}
	I0914 18:09:24.140378   62207 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0914 18:09:24.168422   62207 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0914 18:09:24.194540   62207 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0914 18:09:24.217910   62207 provision.go:87] duration metric: took 412.343545ms to configureAuth
	I0914 18:09:24.217942   62207 buildroot.go:189] setting minikube options for container-runtime
	I0914 18:09:24.218180   62207 config.go:182] Loaded profile config "no-preload-168587": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0914 18:09:24.218255   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetSSHHostname
	I0914 18:09:24.220788   62207 main.go:141] libmachine: (no-preload-168587) DBG | domain no-preload-168587 has defined MAC address 52:54:00:4c:40:8a in network mk-no-preload-168587
	I0914 18:09:24.221216   62207 main.go:141] libmachine: (no-preload-168587) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:40:8a", ip: ""} in network mk-no-preload-168587: {Iface:virbr2 ExpiryTime:2024-09-14 18:59:50 +0000 UTC Type:0 Mac:52:54:00:4c:40:8a Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:no-preload-168587 Clientid:01:52:54:00:4c:40:8a}
	I0914 18:09:24.221259   62207 main.go:141] libmachine: (no-preload-168587) DBG | domain no-preload-168587 has defined IP address 192.168.39.38 and MAC address 52:54:00:4c:40:8a in network mk-no-preload-168587
	I0914 18:09:24.221408   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetSSHPort
	I0914 18:09:24.221678   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetSSHKeyPath
	I0914 18:09:24.221842   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetSSHKeyPath
	I0914 18:09:24.222033   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetSSHUsername
	I0914 18:09:24.222218   62207 main.go:141] libmachine: Using SSH client type: native
	I0914 18:09:24.222399   62207 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.38 22 <nil> <nil>}
	I0914 18:09:24.222417   62207 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0914 18:09:24.433203   62207 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0914 18:09:24.433230   62207 machine.go:96] duration metric: took 974.605605ms to provisionDockerMachine
	I0914 18:09:24.433241   62207 start.go:293] postStartSetup for "no-preload-168587" (driver="kvm2")
	I0914 18:09:24.433253   62207 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0914 18:09:24.433282   62207 main.go:141] libmachine: (no-preload-168587) Calling .DriverName
	I0914 18:09:24.433595   62207 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0914 18:09:24.433625   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetSSHHostname
	I0914 18:09:24.436247   62207 main.go:141] libmachine: (no-preload-168587) DBG | domain no-preload-168587 has defined MAC address 52:54:00:4c:40:8a in network mk-no-preload-168587
	I0914 18:09:24.436710   62207 main.go:141] libmachine: (no-preload-168587) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:40:8a", ip: ""} in network mk-no-preload-168587: {Iface:virbr2 ExpiryTime:2024-09-14 18:59:50 +0000 UTC Type:0 Mac:52:54:00:4c:40:8a Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:no-preload-168587 Clientid:01:52:54:00:4c:40:8a}
	I0914 18:09:24.436746   62207 main.go:141] libmachine: (no-preload-168587) DBG | domain no-preload-168587 has defined IP address 192.168.39.38 and MAC address 52:54:00:4c:40:8a in network mk-no-preload-168587
	I0914 18:09:24.436855   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetSSHPort
	I0914 18:09:24.437015   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetSSHKeyPath
	I0914 18:09:24.437189   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetSSHUsername
	I0914 18:09:24.437305   62207 sshutil.go:53] new ssh client: &{IP:192.168.39.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/no-preload-168587/id_rsa Username:docker}
	I0914 18:09:24.516493   62207 ssh_runner.go:195] Run: cat /etc/os-release
	I0914 18:09:24.520486   62207 info.go:137] Remote host: Buildroot 2023.02.9
	I0914 18:09:24.520518   62207 filesync.go:126] Scanning /home/jenkins/minikube-integration/19643-8806/.minikube/addons for local assets ...
	I0914 18:09:24.520612   62207 filesync.go:126] Scanning /home/jenkins/minikube-integration/19643-8806/.minikube/files for local assets ...
	I0914 18:09:24.520687   62207 filesync.go:149] local asset: /home/jenkins/minikube-integration/19643-8806/.minikube/files/etc/ssl/certs/160162.pem -> 160162.pem in /etc/ssl/certs
	I0914 18:09:24.520775   62207 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0914 18:09:24.530274   62207 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/files/etc/ssl/certs/160162.pem --> /etc/ssl/certs/160162.pem (1708 bytes)
	I0914 18:09:24.553381   62207 start.go:296] duration metric: took 120.123302ms for postStartSetup
	I0914 18:09:24.553422   62207 fix.go:56] duration metric: took 20.086564499s for fixHost
	I0914 18:09:24.553445   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetSSHHostname
	I0914 18:09:24.555832   62207 main.go:141] libmachine: (no-preload-168587) DBG | domain no-preload-168587 has defined MAC address 52:54:00:4c:40:8a in network mk-no-preload-168587
	I0914 18:09:24.556100   62207 main.go:141] libmachine: (no-preload-168587) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:40:8a", ip: ""} in network mk-no-preload-168587: {Iface:virbr2 ExpiryTime:2024-09-14 18:59:50 +0000 UTC Type:0 Mac:52:54:00:4c:40:8a Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:no-preload-168587 Clientid:01:52:54:00:4c:40:8a}
	I0914 18:09:24.556133   62207 main.go:141] libmachine: (no-preload-168587) DBG | domain no-preload-168587 has defined IP address 192.168.39.38 and MAC address 52:54:00:4c:40:8a in network mk-no-preload-168587
	I0914 18:09:24.556376   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetSSHPort
	I0914 18:09:24.556605   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetSSHKeyPath
	I0914 18:09:24.556772   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetSSHKeyPath
	I0914 18:09:24.556922   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetSSHUsername
	I0914 18:09:24.557062   62207 main.go:141] libmachine: Using SSH client type: native
	I0914 18:09:24.557275   62207 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.38 22 <nil> <nil>}
	I0914 18:09:24.557285   62207 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0914 18:09:24.659101   62207 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726337364.632455119
	
	I0914 18:09:24.659128   62207 fix.go:216] guest clock: 1726337364.632455119
	I0914 18:09:24.659139   62207 fix.go:229] Guest: 2024-09-14 18:09:24.632455119 +0000 UTC Remote: 2024-09-14 18:09:24.553426386 +0000 UTC m=+357.567907862 (delta=79.028733ms)
	I0914 18:09:24.659165   62207 fix.go:200] guest clock delta is within tolerance: 79.028733ms
	I0914 18:09:24.659171   62207 start.go:83] releasing machines lock for "no-preload-168587", held for 20.192350446s
	I0914 18:09:24.659209   62207 main.go:141] libmachine: (no-preload-168587) Calling .DriverName
	I0914 18:09:24.659445   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetIP
	I0914 18:09:24.662626   62207 main.go:141] libmachine: (no-preload-168587) DBG | domain no-preload-168587 has defined MAC address 52:54:00:4c:40:8a in network mk-no-preload-168587
	I0914 18:09:24.663051   62207 main.go:141] libmachine: (no-preload-168587) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:40:8a", ip: ""} in network mk-no-preload-168587: {Iface:virbr2 ExpiryTime:2024-09-14 18:59:50 +0000 UTC Type:0 Mac:52:54:00:4c:40:8a Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:no-preload-168587 Clientid:01:52:54:00:4c:40:8a}
	I0914 18:09:24.663082   62207 main.go:141] libmachine: (no-preload-168587) DBG | domain no-preload-168587 has defined IP address 192.168.39.38 and MAC address 52:54:00:4c:40:8a in network mk-no-preload-168587
	I0914 18:09:24.663225   62207 main.go:141] libmachine: (no-preload-168587) Calling .DriverName
	I0914 18:09:24.663802   62207 main.go:141] libmachine: (no-preload-168587) Calling .DriverName
	I0914 18:09:24.663972   62207 main.go:141] libmachine: (no-preload-168587) Calling .DriverName
	I0914 18:09:24.664063   62207 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0914 18:09:24.664114   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetSSHHostname
	I0914 18:09:24.664195   62207 ssh_runner.go:195] Run: cat /version.json
	I0914 18:09:24.664221   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetSSHHostname
	I0914 18:09:24.666971   62207 main.go:141] libmachine: (no-preload-168587) DBG | domain no-preload-168587 has defined MAC address 52:54:00:4c:40:8a in network mk-no-preload-168587
	I0914 18:09:24.667255   62207 main.go:141] libmachine: (no-preload-168587) DBG | domain no-preload-168587 has defined MAC address 52:54:00:4c:40:8a in network mk-no-preload-168587
	I0914 18:09:24.667398   62207 main.go:141] libmachine: (no-preload-168587) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:40:8a", ip: ""} in network mk-no-preload-168587: {Iface:virbr2 ExpiryTime:2024-09-14 18:59:50 +0000 UTC Type:0 Mac:52:54:00:4c:40:8a Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:no-preload-168587 Clientid:01:52:54:00:4c:40:8a}
	I0914 18:09:24.667433   62207 main.go:141] libmachine: (no-preload-168587) DBG | domain no-preload-168587 has defined IP address 192.168.39.38 and MAC address 52:54:00:4c:40:8a in network mk-no-preload-168587
	I0914 18:09:24.667555   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetSSHPort
	I0914 18:09:24.667753   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetSSHKeyPath
	I0914 18:09:24.667787   62207 main.go:141] libmachine: (no-preload-168587) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:40:8a", ip: ""} in network mk-no-preload-168587: {Iface:virbr2 ExpiryTime:2024-09-14 18:59:50 +0000 UTC Type:0 Mac:52:54:00:4c:40:8a Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:no-preload-168587 Clientid:01:52:54:00:4c:40:8a}
	I0914 18:09:24.667816   62207 main.go:141] libmachine: (no-preload-168587) DBG | domain no-preload-168587 has defined IP address 192.168.39.38 and MAC address 52:54:00:4c:40:8a in network mk-no-preload-168587
	I0914 18:09:24.667913   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetSSHUsername
	I0914 18:09:24.667988   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetSSHPort
	I0914 18:09:24.668058   62207 sshutil.go:53] new ssh client: &{IP:192.168.39.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/no-preload-168587/id_rsa Username:docker}
	I0914 18:09:24.668109   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetSSHKeyPath
	I0914 18:09:24.668236   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetSSHUsername
	I0914 18:09:24.668356   62207 sshutil.go:53] new ssh client: &{IP:192.168.39.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/no-preload-168587/id_rsa Username:docker}
	I0914 18:09:24.743805   62207 ssh_runner.go:195] Run: systemctl --version
	I0914 18:09:24.776583   62207 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0914 18:09:24.924635   62207 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0914 18:09:24.930891   62207 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0914 18:09:24.930979   62207 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0914 18:09:24.952228   62207 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0914 18:09:24.952258   62207 start.go:495] detecting cgroup driver to use...
	I0914 18:09:24.952344   62207 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0914 18:09:24.967770   62207 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0914 18:09:24.983218   62207 docker.go:217] disabling cri-docker service (if available) ...
	I0914 18:09:24.983280   62207 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0914 18:09:24.997311   62207 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0914 18:09:25.011736   62207 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0914 18:09:25.135920   62207 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0914 18:09:25.323727   62207 docker.go:233] disabling docker service ...
	I0914 18:09:25.323793   62207 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0914 18:09:25.341243   62207 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0914 18:09:25.358703   62207 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0914 18:09:25.495826   62207 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0914 18:09:25.621684   62207 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0914 18:09:25.637386   62207 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0914 18:09:25.655826   62207 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0914 18:09:25.655947   62207 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 18:09:25.669204   62207 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0914 18:09:25.669266   62207 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 18:09:25.680265   62207 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 18:09:25.690860   62207 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 18:09:25.702002   62207 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0914 18:09:25.713256   62207 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 18:09:25.724125   62207 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 18:09:25.742195   62207 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 18:09:25.752680   62207 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0914 18:09:25.762842   62207 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0914 18:09:25.762920   62207 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0914 18:09:25.775680   62207 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0914 18:09:25.785190   62207 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 18:09:25.907175   62207 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0914 18:09:25.995654   62207 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0914 18:09:25.995731   62207 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0914 18:09:26.000829   62207 start.go:563] Will wait 60s for crictl version
	I0914 18:09:26.000896   62207 ssh_runner.go:195] Run: which crictl
	I0914 18:09:26.004522   62207 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0914 18:09:26.041674   62207 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0914 18:09:26.041745   62207 ssh_runner.go:195] Run: crio --version
	I0914 18:09:26.069091   62207 ssh_runner.go:195] Run: crio --version
	I0914 18:09:26.107475   62207 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0914 18:09:26.108650   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetIP
	I0914 18:09:26.111782   62207 main.go:141] libmachine: (no-preload-168587) DBG | domain no-preload-168587 has defined MAC address 52:54:00:4c:40:8a in network mk-no-preload-168587
	I0914 18:09:26.112110   62207 main.go:141] libmachine: (no-preload-168587) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:40:8a", ip: ""} in network mk-no-preload-168587: {Iface:virbr2 ExpiryTime:2024-09-14 18:59:50 +0000 UTC Type:0 Mac:52:54:00:4c:40:8a Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:no-preload-168587 Clientid:01:52:54:00:4c:40:8a}
	I0914 18:09:26.112139   62207 main.go:141] libmachine: (no-preload-168587) DBG | domain no-preload-168587 has defined IP address 192.168.39.38 and MAC address 52:54:00:4c:40:8a in network mk-no-preload-168587
	I0914 18:09:26.112279   62207 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0914 18:09:26.116339   62207 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0914 18:09:26.128616   62207 kubeadm.go:883] updating cluster {Name:no-preload-168587 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19643/minikube-v1.34.0-1726281733-19643-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726281268-19643@sha256:cce8be0c1ac4e3d852132008ef1cc1dcf5b79f708d025db83f146ae65db32e8e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-168587 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.38 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0914 18:09:26.128755   62207 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0914 18:09:26.128796   62207 ssh_runner.go:195] Run: sudo crictl images --output json
	I0914 18:09:26.165175   62207 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0914 18:09:26.165197   62207 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.1 registry.k8s.io/kube-controller-manager:v1.31.1 registry.k8s.io/kube-scheduler:v1.31.1 registry.k8s.io/kube-proxy:v1.31.1 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.15-0 registry.k8s.io/coredns/coredns:v1.11.3 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0914 18:09:26.165282   62207 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.1
	I0914 18:09:26.165301   62207 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I0914 18:09:26.165302   62207 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I0914 18:09:26.165276   62207 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 18:09:26.165346   62207 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.1
	I0914 18:09:26.165309   62207 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.3
	I0914 18:09:26.165443   62207 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.1
	I0914 18:09:26.165451   62207 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.1
	I0914 18:09:26.166853   62207 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0914 18:09:26.166858   62207 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.1
	I0914 18:09:26.166864   62207 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.1
	I0914 18:09:26.166873   62207 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I0914 18:09:26.166852   62207 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 18:09:26.166852   62207 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.3: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.3
	I0914 18:09:26.166911   62207 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.1
	I0914 18:09:26.166928   62207 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.1
	I0914 18:09:26.366393   62207 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.1
	I0914 18:09:26.398127   62207 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I0914 18:09:26.401173   62207 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.1
	I0914 18:09:26.405861   62207 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.1" does not exist at hash "175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1" in container runtime
	I0914 18:09:26.405910   62207 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.1
	I0914 18:09:26.405982   62207 ssh_runner.go:195] Run: which crictl
	I0914 18:09:26.410513   62207 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.15-0
	I0914 18:09:26.411414   62207 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.3
	I0914 18:09:26.416692   62207 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.1
	I0914 18:09:26.417710   62207 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.1
	I0914 18:09:26.643066   62207 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.1" does not exist at hash "9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b" in container runtime
	I0914 18:09:26.643114   62207 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.1
	I0914 18:09:26.643177   62207 ssh_runner.go:195] Run: which crictl
	I0914 18:09:26.643195   62207 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I0914 18:09:26.643242   62207 cache_images.go:116] "registry.k8s.io/etcd:3.5.15-0" needs transfer: "registry.k8s.io/etcd:3.5.15-0" does not exist at hash "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" in container runtime
	I0914 18:09:26.643278   62207 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.3" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.3" does not exist at hash "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6" in container runtime
	I0914 18:09:26.643293   62207 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.1" does not exist at hash "6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee" in container runtime
	I0914 18:09:26.643282   62207 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.15-0
	I0914 18:09:26.643307   62207 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.3
	I0914 18:09:26.643323   62207 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.1
	I0914 18:09:26.643328   62207 ssh_runner.go:195] Run: which crictl
	I0914 18:09:26.643351   62207 ssh_runner.go:195] Run: which crictl
	I0914 18:09:26.643366   62207 ssh_runner.go:195] Run: which crictl
	I0914 18:09:26.643386   62207 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.1" needs transfer: "registry.k8s.io/kube-proxy:v1.31.1" does not exist at hash "60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561" in container runtime
	I0914 18:09:26.643412   62207 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.1
	I0914 18:09:26.643436   62207 ssh_runner.go:195] Run: which crictl
	I0914 18:09:26.654984   62207 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I0914 18:09:26.655016   62207 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I0914 18:09:26.655035   62207 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0914 18:09:26.655016   62207 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I0914 18:09:26.733881   62207 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I0914 18:09:26.733967   62207 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I0914 18:09:26.769624   62207 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I0914 18:09:26.778708   62207 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0914 18:09:26.778836   62207 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I0914 18:09:26.778855   62207 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I0914 18:09:26.821344   62207 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I0914 18:09:26.821358   62207 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I0914 18:09:26.899012   62207 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I0914 18:09:26.906693   62207 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0914 18:09:26.909875   62207 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I0914 18:09:26.916458   62207 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I0914 18:09:26.944355   62207 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I0914 18:09:26.949250   62207 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19643-8806/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.1
	I0914 18:09:26.949400   62207 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I0914 18:09:25.582447   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:09:28.084142   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:09:25.949851   63448 node_ready.go:53] node "default-k8s-diff-port-243449" has status "Ready":"False"
	I0914 18:09:26.950390   63448 node_ready.go:49] node "default-k8s-diff-port-243449" has status "Ready":"True"
	I0914 18:09:26.950418   63448 node_ready.go:38] duration metric: took 5.004868966s for node "default-k8s-diff-port-243449" to be "Ready" ...
	I0914 18:09:26.950430   63448 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 18:09:26.956875   63448 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-8v8s7" in "kube-system" namespace to be "Ready" ...
	I0914 18:09:26.963909   63448 pod_ready.go:93] pod "coredns-7c65d6cfc9-8v8s7" in "kube-system" namespace has status "Ready":"True"
	I0914 18:09:26.963935   63448 pod_ready.go:82] duration metric: took 7.027533ms for pod "coredns-7c65d6cfc9-8v8s7" in "kube-system" namespace to be "Ready" ...
	I0914 18:09:26.963945   63448 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-243449" in "kube-system" namespace to be "Ready" ...
	I0914 18:09:28.971297   63448 pod_ready.go:93] pod "etcd-default-k8s-diff-port-243449" in "kube-system" namespace has status "Ready":"True"
	I0914 18:09:28.971327   63448 pod_ready.go:82] duration metric: took 2.007374825s for pod "etcd-default-k8s-diff-port-243449" in "kube-system" namespace to be "Ready" ...
	I0914 18:09:28.971340   63448 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-243449" in "kube-system" namespace to be "Ready" ...
	I0914 18:09:28.977510   63448 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-243449" in "kube-system" namespace has status "Ready":"True"
	I0914 18:09:28.977535   63448 pod_ready.go:82] duration metric: took 6.18573ms for pod "kube-apiserver-default-k8s-diff-port-243449" in "kube-system" namespace to be "Ready" ...
	I0914 18:09:28.977557   63448 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-243449" in "kube-system" namespace to be "Ready" ...
	I0914 18:09:25.374144   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:25.874109   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:26.374422   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:26.873444   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:27.373615   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:27.873395   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:28.373886   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:28.873510   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:29.374027   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:29.873502   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:27.035840   62207 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19643-8806/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.1
	I0914 18:09:27.035956   62207 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.1
	I0914 18:09:27.040828   62207 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19643-8806/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3
	I0914 18:09:27.040939   62207 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19643-8806/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I0914 18:09:27.040941   62207 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.3
	I0914 18:09:27.041026   62207 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0
	I0914 18:09:27.048278   62207 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19643-8806/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.1
	I0914 18:09:27.048345   62207 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19643-8806/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.1
	I0914 18:09:27.048388   62207 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.1
	I0914 18:09:27.048390   62207 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.1 (exists)
	I0914 18:09:27.048446   62207 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I0914 18:09:27.048423   62207 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.1 (exists)
	I0914 18:09:27.048482   62207 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I0914 18:09:27.048431   62207 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.1
	I0914 18:09:27.052221   62207 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.15-0 (exists)
	I0914 18:09:27.052401   62207 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.1 (exists)
	I0914 18:09:27.052585   62207 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.3 (exists)
	I0914 18:09:27.330779   62207 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 18:09:29.721998   62207 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.1: (2.673483443s)
	I0914 18:09:29.722035   62207 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19643-8806/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.1 from cache
	I0914 18:09:29.722064   62207 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.1
	I0914 18:09:29.722076   62207 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.1: (2.673496811s)
	I0914 18:09:29.722112   62207 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.1 (exists)
	I0914 18:09:29.722112   62207 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.1
	I0914 18:09:29.722194   62207 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (2.391387893s)
	I0914 18:09:29.722236   62207 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0914 18:09:29.722257   62207 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 18:09:29.722297   62207 ssh_runner.go:195] Run: which crictl
	I0914 18:09:31.485714   62207 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.1: (1.76356866s)
	I0914 18:09:31.485744   62207 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19643-8806/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.1 from cache
	I0914 18:09:31.485764   62207 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.15-0
	I0914 18:09:31.485817   62207 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0
	I0914 18:09:31.485820   62207 ssh_runner.go:235] Completed: which crictl: (1.763506603s)
	I0914 18:09:31.485862   62207 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 18:09:30.583013   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:09:33.083597   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:09:30.985230   63448 pod_ready.go:103] pod "kube-controller-manager-default-k8s-diff-port-243449" in "kube-system" namespace has status "Ready":"False"
	I0914 18:09:31.984182   63448 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-243449" in "kube-system" namespace has status "Ready":"True"
	I0914 18:09:31.984203   63448 pod_ready.go:82] duration metric: took 3.006637599s for pod "kube-controller-manager-default-k8s-diff-port-243449" in "kube-system" namespace to be "Ready" ...
	I0914 18:09:31.984212   63448 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-gbkqm" in "kube-system" namespace to be "Ready" ...
	I0914 18:09:31.989786   63448 pod_ready.go:93] pod "kube-proxy-gbkqm" in "kube-system" namespace has status "Ready":"True"
	I0914 18:09:31.989812   63448 pod_ready.go:82] duration metric: took 5.592466ms for pod "kube-proxy-gbkqm" in "kube-system" namespace to be "Ready" ...
	I0914 18:09:31.989823   63448 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-243449" in "kube-system" namespace to be "Ready" ...
	I0914 18:09:31.994224   63448 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-243449" in "kube-system" namespace has status "Ready":"True"
	I0914 18:09:31.994246   63448 pod_ready.go:82] duration metric: took 4.414059ms for pod "kube-scheduler-default-k8s-diff-port-243449" in "kube-system" namespace to be "Ready" ...
	I0914 18:09:31.994258   63448 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace to be "Ready" ...
	I0914 18:09:34.001035   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:09:30.373878   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:30.874351   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:31.373651   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:31.873914   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:32.373522   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:32.874439   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:33.373991   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:33.874056   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:34.373566   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:34.874140   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:34.781678   62207 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (3.295763296s)
	I0914 18:09:34.781783   62207 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 18:09:34.781814   62207 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0: (3.295968995s)
	I0914 18:09:34.781840   62207 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19643-8806/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 from cache
	I0914 18:09:34.781868   62207 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.1
	I0914 18:09:34.781900   62207 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.1
	I0914 18:09:36.744459   62207 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.962646981s)
	I0914 18:09:36.744514   62207 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.1: (1.962587733s)
	I0914 18:09:36.744551   62207 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19643-8806/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.1 from cache
	I0914 18:09:36.744576   62207 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 18:09:36.744590   62207 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.3
	I0914 18:09:36.744658   62207 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3
	I0914 18:09:35.582596   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:09:38.083260   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:09:36.002284   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:09:38.002962   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:09:35.374151   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:35.873725   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:36.373500   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:36.873617   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:37.373826   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:37.874068   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:38.373459   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:38.873666   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:39.373936   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:39.873551   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:38.848091   62207 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3: (2.103407014s)
	I0914 18:09:38.848126   62207 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19643-8806/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 from cache
	I0914 18:09:38.848152   62207 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.1
	I0914 18:09:38.848217   62207 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.1
	I0914 18:09:38.848153   62207 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.103554199s)
	I0914 18:09:38.848283   62207 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19643-8806/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0914 18:09:38.848368   62207 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0914 18:09:40.307247   62207 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.1: (1.459002378s)
	I0914 18:09:40.307287   62207 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19643-8806/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.1 from cache
	I0914 18:09:40.307269   62207 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.458886581s)
	I0914 18:09:40.307327   62207 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0914 18:09:40.307334   62207 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0914 18:09:40.307382   62207 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0914 18:09:40.958177   62207 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19643-8806/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0914 18:09:40.958222   62207 cache_images.go:123] Successfully loaded all cached images
	I0914 18:09:40.958228   62207 cache_images.go:92] duration metric: took 14.793018447s to LoadCachedImages
	I0914 18:09:40.958241   62207 kubeadm.go:934] updating node { 192.168.39.38 8443 v1.31.1 crio true true} ...
	I0914 18:09:40.958347   62207 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-168587 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.38
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:no-preload-168587 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0914 18:09:40.958415   62207 ssh_runner.go:195] Run: crio config
	I0914 18:09:41.003620   62207 cni.go:84] Creating CNI manager for ""
	I0914 18:09:41.003643   62207 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0914 18:09:41.003653   62207 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0914 18:09:41.003674   62207 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.38 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-168587 NodeName:no-preload-168587 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.38"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.38 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/
etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0914 18:09:41.003850   62207 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.38
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-168587"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.38
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.38"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0914 18:09:41.003920   62207 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0914 18:09:41.014462   62207 binaries.go:44] Found k8s binaries, skipping transfer
	I0914 18:09:41.014541   62207 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0914 18:09:41.023964   62207 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0914 18:09:41.040206   62207 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0914 18:09:41.055630   62207 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2158 bytes)
	I0914 18:09:41.072881   62207 ssh_runner.go:195] Run: grep 192.168.39.38	control-plane.minikube.internal$ /etc/hosts
	I0914 18:09:41.076449   62207 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.38	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0914 18:09:41.090075   62207 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 18:09:41.210405   62207 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0914 18:09:41.228173   62207 certs.go:68] Setting up /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/no-preload-168587 for IP: 192.168.39.38
	I0914 18:09:41.228197   62207 certs.go:194] generating shared ca certs ...
	I0914 18:09:41.228213   62207 certs.go:226] acquiring lock for ca certs: {Name:mkb663a3180967f5f94f0c355b2cd55067394331 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 18:09:41.228383   62207 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19643-8806/.minikube/ca.key
	I0914 18:09:41.228443   62207 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19643-8806/.minikube/proxy-client-ca.key
	I0914 18:09:41.228457   62207 certs.go:256] generating profile certs ...
	I0914 18:09:41.228586   62207 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/no-preload-168587/client.key
	I0914 18:09:41.228667   62207 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/no-preload-168587/apiserver.key.d11ec263
	I0914 18:09:41.228731   62207 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/no-preload-168587/proxy-client.key
	I0914 18:09:41.228889   62207 certs.go:484] found cert: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/16016.pem (1338 bytes)
	W0914 18:09:41.228932   62207 certs.go:480] ignoring /home/jenkins/minikube-integration/19643-8806/.minikube/certs/16016_empty.pem, impossibly tiny 0 bytes
	I0914 18:09:41.228944   62207 certs.go:484] found cert: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca-key.pem (1679 bytes)
	I0914 18:09:41.228976   62207 certs.go:484] found cert: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca.pem (1082 bytes)
	I0914 18:09:41.229008   62207 certs.go:484] found cert: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/cert.pem (1123 bytes)
	I0914 18:09:41.229045   62207 certs.go:484] found cert: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/key.pem (1675 bytes)
	I0914 18:09:41.229102   62207 certs.go:484] found cert: /home/jenkins/minikube-integration/19643-8806/.minikube/files/etc/ssl/certs/160162.pem (1708 bytes)
	I0914 18:09:41.229913   62207 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0914 18:09:41.259871   62207 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0914 18:09:41.286359   62207 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0914 18:09:41.315410   62207 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0914 18:09:41.345541   62207 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/no-preload-168587/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0914 18:09:41.380128   62207 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/no-preload-168587/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0914 18:09:41.411130   62207 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/no-preload-168587/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0914 18:09:41.442136   62207 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/no-preload-168587/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0914 18:09:41.464823   62207 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0914 18:09:41.488153   62207 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/certs/16016.pem --> /usr/share/ca-certificates/16016.pem (1338 bytes)
	I0914 18:09:41.513788   62207 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/files/etc/ssl/certs/160162.pem --> /usr/share/ca-certificates/160162.pem (1708 bytes)
	I0914 18:09:41.537256   62207 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0914 18:09:41.553550   62207 ssh_runner.go:195] Run: openssl version
	I0914 18:09:41.559366   62207 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/160162.pem && ln -fs /usr/share/ca-certificates/160162.pem /etc/ssl/certs/160162.pem"
	I0914 18:09:41.571498   62207 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/160162.pem
	I0914 18:09:41.576889   62207 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 14 17:01 /usr/share/ca-certificates/160162.pem
	I0914 18:09:41.576947   62207 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/160162.pem
	I0914 18:09:41.583651   62207 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/160162.pem /etc/ssl/certs/3ec20f2e.0"
	I0914 18:09:41.594743   62207 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0914 18:09:41.605811   62207 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0914 18:09:41.610034   62207 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 14 16:44 /usr/share/ca-certificates/minikubeCA.pem
	I0914 18:09:41.610103   62207 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0914 18:09:41.615810   62207 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0914 18:09:41.627145   62207 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16016.pem && ln -fs /usr/share/ca-certificates/16016.pem /etc/ssl/certs/16016.pem"
	I0914 18:09:41.639956   62207 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16016.pem
	I0914 18:09:41.644647   62207 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 14 17:01 /usr/share/ca-certificates/16016.pem
	I0914 18:09:41.644705   62207 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16016.pem
	I0914 18:09:41.650281   62207 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16016.pem /etc/ssl/certs/51391683.0"
	I0914 18:09:41.662354   62207 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0914 18:09:41.667150   62207 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0914 18:09:41.673263   62207 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0914 18:09:41.680660   62207 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0914 18:09:41.687283   62207 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0914 18:09:41.693256   62207 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0914 18:09:41.698969   62207 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0914 18:09:41.704543   62207 kubeadm.go:392] StartCluster: {Name:no-preload-168587 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19643/minikube-v1.34.0-1726281733-19643-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726281268-19643@sha256:cce8be0c1ac4e3d852132008ef1cc1dcf5b79f708d025db83f146ae65db32e8e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-168587 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.38 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0914 18:09:41.704671   62207 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0914 18:09:41.704750   62207 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0914 18:09:41.741255   62207 cri.go:89] found id: ""
	I0914 18:09:41.741354   62207 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0914 18:09:41.751360   62207 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0914 18:09:41.751377   62207 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0914 18:09:41.751417   62207 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0914 18:09:41.761492   62207 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0914 18:09:41.762591   62207 kubeconfig.go:125] found "no-preload-168587" server: "https://192.168.39.38:8443"
	I0914 18:09:41.764876   62207 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0914 18:09:41.774868   62207 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.38
	I0914 18:09:41.774901   62207 kubeadm.go:1160] stopping kube-system containers ...
	I0914 18:09:41.774913   62207 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0914 18:09:41.774969   62207 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0914 18:09:41.810189   62207 cri.go:89] found id: ""
	I0914 18:09:41.810248   62207 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0914 18:09:41.827903   62207 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0914 18:09:41.837504   62207 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0914 18:09:41.837532   62207 kubeadm.go:157] found existing configuration files:
	
	I0914 18:09:41.837585   62207 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0914 18:09:41.846260   62207 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0914 18:09:41.846322   62207 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0914 18:09:41.855350   62207 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0914 18:09:41.864096   62207 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0914 18:09:41.864153   62207 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0914 18:09:41.874772   62207 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0914 18:09:41.885427   62207 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0914 18:09:41.885502   62207 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0914 18:09:41.897121   62207 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0914 18:09:41.906955   62207 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0914 18:09:41.907020   62207 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0914 18:09:41.918253   62207 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0914 18:09:41.930134   62207 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 18:09:40.084800   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:09:42.581757   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:09:44.583611   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:09:40.502272   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:09:43.001471   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:09:40.374231   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:40.873955   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:41.374306   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:41.873511   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:42.373419   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:42.874077   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:43.374329   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:43.873782   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:44.373478   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:44.874120   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:42.054830   62207 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 18:09:42.754174   62207 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0914 18:09:42.973037   62207 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 18:09:43.043041   62207 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0914 18:09:43.119704   62207 api_server.go:52] waiting for apiserver process to appear ...
	I0914 18:09:43.119805   62207 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:43.620541   62207 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:44.120849   62207 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:44.139382   62207 api_server.go:72] duration metric: took 1.019679094s to wait for apiserver process to appear ...
	I0914 18:09:44.139406   62207 api_server.go:88] waiting for apiserver healthz status ...
	I0914 18:09:44.139424   62207 api_server.go:253] Checking apiserver healthz at https://192.168.39.38:8443/healthz ...
	I0914 18:09:44.139876   62207 api_server.go:269] stopped: https://192.168.39.38:8443/healthz: Get "https://192.168.39.38:8443/healthz": dial tcp 192.168.39.38:8443: connect: connection refused
	I0914 18:09:44.639981   62207 api_server.go:253] Checking apiserver healthz at https://192.168.39.38:8443/healthz ...
	I0914 18:09:47.262096   62207 api_server.go:279] https://192.168.39.38:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0914 18:09:47.262132   62207 api_server.go:103] status: https://192.168.39.38:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0914 18:09:47.262151   62207 api_server.go:253] Checking apiserver healthz at https://192.168.39.38:8443/healthz ...
	I0914 18:09:47.280626   62207 api_server.go:279] https://192.168.39.38:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0914 18:09:47.280652   62207 api_server.go:103] status: https://192.168.39.38:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0914 18:09:47.640152   62207 api_server.go:253] Checking apiserver healthz at https://192.168.39.38:8443/healthz ...
	I0914 18:09:47.646640   62207 api_server.go:279] https://192.168.39.38:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0914 18:09:47.646676   62207 api_server.go:103] status: https://192.168.39.38:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0914 18:09:48.140256   62207 api_server.go:253] Checking apiserver healthz at https://192.168.39.38:8443/healthz ...
	I0914 18:09:48.145520   62207 api_server.go:279] https://192.168.39.38:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0914 18:09:48.145557   62207 api_server.go:103] status: https://192.168.39.38:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0914 18:09:48.640147   62207 api_server.go:253] Checking apiserver healthz at https://192.168.39.38:8443/healthz ...
	I0914 18:09:48.645032   62207 api_server.go:279] https://192.168.39.38:8443/healthz returned 200:
	ok
	I0914 18:09:48.654567   62207 api_server.go:141] control plane version: v1.31.1
	I0914 18:09:48.654600   62207 api_server.go:131] duration metric: took 4.515188826s to wait for apiserver health ...
	I0914 18:09:48.654609   62207 cni.go:84] Creating CNI manager for ""
	I0914 18:09:48.654615   62207 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0914 18:09:48.656828   62207 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0914 18:09:47.082431   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:09:49.582001   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:09:45.500938   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:09:48.002332   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:09:45.374173   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:45.873537   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:46.373462   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:46.874196   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:47.374297   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:47.874112   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:48.373627   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:48.873473   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:49.374289   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:49.873411   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:48.658151   62207 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0914 18:09:48.692232   62207 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0914 18:09:48.734461   62207 system_pods.go:43] waiting for kube-system pods to appear ...
	I0914 18:09:48.746689   62207 system_pods.go:59] 8 kube-system pods found
	I0914 18:09:48.746723   62207 system_pods.go:61] "coredns-7c65d6cfc9-mwhvh" [38800077-a7ff-4c8c-8375-4efac2ae40b8] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0914 18:09:48.746733   62207 system_pods.go:61] "etcd-no-preload-168587" [bdb166bb-8c07-448c-a97c-2146e84f139b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0914 18:09:48.746744   62207 system_pods.go:61] "kube-apiserver-no-preload-168587" [8ad59d56-cb86-4028-bf16-3733eb32ad8d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0914 18:09:48.746752   62207 system_pods.go:61] "kube-controller-manager-no-preload-168587" [fd66d0aa-cc35-4330-aa6b-571dbeaa6490] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0914 18:09:48.746761   62207 system_pods.go:61] "kube-proxy-lvp9h" [75c154d8-c76d-49eb-9497-dd17199e9d20] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0914 18:09:48.746771   62207 system_pods.go:61] "kube-scheduler-no-preload-168587" [858c948b-9025-48ab-907a-5b69aefbb24c] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0914 18:09:48.746782   62207 system_pods.go:61] "metrics-server-6867b74b74-n276z" [69e25ed4-dc8e-4c68-955e-e7226d066ac4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 18:09:48.746790   62207 system_pods.go:61] "storage-provisioner" [41c92694-2d3a-4025-8e28-ddea7b9b9c5b] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0914 18:09:48.746801   62207 system_pods.go:74] duration metric: took 12.315296ms to wait for pod list to return data ...
	I0914 18:09:48.746811   62207 node_conditions.go:102] verifying NodePressure condition ...
	I0914 18:09:48.751399   62207 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0914 18:09:48.751428   62207 node_conditions.go:123] node cpu capacity is 2
	I0914 18:09:48.751440   62207 node_conditions.go:105] duration metric: took 4.625335ms to run NodePressure ...
	I0914 18:09:48.751461   62207 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 18:09:49.051211   62207 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0914 18:09:49.057333   62207 kubeadm.go:739] kubelet initialised
	I0914 18:09:49.057366   62207 kubeadm.go:740] duration metric: took 6.124032ms waiting for restarted kubelet to initialise ...
	I0914 18:09:49.057379   62207 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 18:09:49.062570   62207 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-mwhvh" in "kube-system" namespace to be "Ready" ...
	I0914 18:09:51.069219   62207 pod_ready.go:103] pod "coredns-7c65d6cfc9-mwhvh" in "kube-system" namespace has status "Ready":"False"
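
pod_ready.go then polls each system-critical pod until its Ready condition turns True or the 4m0s budget runs out, which is what produces the repeated has status "Ready":"False" lines. A minimal, hedged sketch of such a wait loop (the function name and the 2-second interval are illustrative, not minikube's implementation):

    package readiness

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // waitPodReady polls the named pod until its Ready condition is True or the
    // context deadline expires. Illustrative only.
    func waitPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string) error {
        tick := time.NewTicker(2 * time.Second)
        defer tick.Stop()
        for {
            pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
            if err == nil {
                for _, c := range pod.Status.Conditions {
                    if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
                        return nil
                    }
                }
            }
            select {
            case <-ctx.Done():
                return fmt.Errorf("pod %q in %q never became Ready: %w", name, ns, ctx.Err())
            case <-tick.C:
            }
        }
    }

A caller would pass a context created with context.WithTimeout(ctx, 4*time.Minute) to match the budget shown in the log.
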
	I0914 18:09:51.588043   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:09:54.082931   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:09:50.499759   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:09:52.502450   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:09:55.000767   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:09:50.374229   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:50.873429   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:51.373547   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:51.874090   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:52.373513   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:52.874222   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:53.374123   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:53.873893   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:54.373451   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:54.873583   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:53.069338   62207 pod_ready.go:103] pod "coredns-7c65d6cfc9-mwhvh" in "kube-system" namespace has status "Ready":"False"
	I0914 18:09:53.570290   62207 pod_ready.go:93] pod "coredns-7c65d6cfc9-mwhvh" in "kube-system" namespace has status "Ready":"True"
	I0914 18:09:53.570323   62207 pod_ready.go:82] duration metric: took 4.507716999s for pod "coredns-7c65d6cfc9-mwhvh" in "kube-system" namespace to be "Ready" ...
	I0914 18:09:53.570333   62207 pod_ready.go:79] waiting up to 4m0s for pod "etcd-no-preload-168587" in "kube-system" namespace to be "Ready" ...
	I0914 18:09:55.577317   62207 pod_ready.go:103] pod "etcd-no-preload-168587" in "kube-system" namespace has status "Ready":"False"
	I0914 18:09:56.581937   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:09:58.583632   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:09:57.000913   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:09:59.001429   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:09:55.374078   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:55.873810   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:09:55.873965   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:09:55.913981   62996 cri.go:89] found id: ""
	I0914 18:09:55.914011   62996 logs.go:276] 0 containers: []
	W0914 18:09:55.914023   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:09:55.914030   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:09:55.914091   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:09:55.948423   62996 cri.go:89] found id: ""
	I0914 18:09:55.948459   62996 logs.go:276] 0 containers: []
	W0914 18:09:55.948467   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:09:55.948472   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:09:55.948530   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:09:55.986470   62996 cri.go:89] found id: ""
	I0914 18:09:55.986507   62996 logs.go:276] 0 containers: []
	W0914 18:09:55.986520   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:09:55.986530   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:09:55.986598   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:09:56.022172   62996 cri.go:89] found id: ""
	I0914 18:09:56.022200   62996 logs.go:276] 0 containers: []
	W0914 18:09:56.022214   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:09:56.022220   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:09:56.022267   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:09:56.065503   62996 cri.go:89] found id: ""
	I0914 18:09:56.065552   62996 logs.go:276] 0 containers: []
	W0914 18:09:56.065564   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:09:56.065572   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:09:56.065632   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:09:56.101043   62996 cri.go:89] found id: ""
	I0914 18:09:56.101072   62996 logs.go:276] 0 containers: []
	W0914 18:09:56.101082   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:09:56.101089   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:09:56.101156   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:09:56.133820   62996 cri.go:89] found id: ""
	I0914 18:09:56.133852   62996 logs.go:276] 0 containers: []
	W0914 18:09:56.133864   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:09:56.133872   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:09:56.133925   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:09:56.172334   62996 cri.go:89] found id: ""
	I0914 18:09:56.172358   62996 logs.go:276] 0 containers: []
	W0914 18:09:56.172369   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:09:56.172380   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:09:56.172398   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:09:56.186476   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:09:56.186513   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:09:56.308336   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:09:56.308366   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:09:56.308388   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:09:56.386374   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:09:56.386410   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:09:56.426333   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:09:56.426360   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
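
Each cycle logged for process 62996 asks the CRI by name for every control-plane container (cri.go), finds none, and then falls back to gathering kubelet, dmesg, CRI-O and container-status logs. A hedged sketch of that per-name query using os/exec (minikube actually runs the command over SSH on the guest; the helper name here is made up):

    package crisketch

    import (
        "os/exec"
        "strings"
    )

    // containerIDs shells out to crictl, as the cri.go lines above do, and returns
    // the IDs of all containers, in any state, whose name matches the filter.
    func containerIDs(name string) ([]string, error) {
        out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
        if err != nil {
            return nil, err
        }
        // crictl prints one container ID per line; no output means no such container.
        return strings.Fields(string(out)), nil
    }

An empty result for kube-apiserver is consistent with the connection-refused errors that follow: nothing is listening on localhost:8443.
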
	I0914 18:09:58.978306   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:58.991093   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:09:58.991175   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:09:59.029861   62996 cri.go:89] found id: ""
	I0914 18:09:59.029890   62996 logs.go:276] 0 containers: []
	W0914 18:09:59.029899   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:09:59.029905   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:09:59.029962   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:09:59.067744   62996 cri.go:89] found id: ""
	I0914 18:09:59.067772   62996 logs.go:276] 0 containers: []
	W0914 18:09:59.067783   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:09:59.067791   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:09:59.067973   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:09:59.105666   62996 cri.go:89] found id: ""
	I0914 18:09:59.105695   62996 logs.go:276] 0 containers: []
	W0914 18:09:59.105707   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:09:59.105714   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:09:59.105796   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:09:59.153884   62996 cri.go:89] found id: ""
	I0914 18:09:59.153916   62996 logs.go:276] 0 containers: []
	W0914 18:09:59.153929   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:09:59.153937   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:09:59.154007   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:09:59.191462   62996 cri.go:89] found id: ""
	I0914 18:09:59.191492   62996 logs.go:276] 0 containers: []
	W0914 18:09:59.191503   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:09:59.191509   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:09:59.191574   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:09:59.246299   62996 cri.go:89] found id: ""
	I0914 18:09:59.246326   62996 logs.go:276] 0 containers: []
	W0914 18:09:59.246336   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:09:59.246357   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:09:59.246413   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:09:59.292821   62996 cri.go:89] found id: ""
	I0914 18:09:59.292847   62996 logs.go:276] 0 containers: []
	W0914 18:09:59.292856   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:09:59.292862   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:09:59.292918   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:09:59.334130   62996 cri.go:89] found id: ""
	I0914 18:09:59.334176   62996 logs.go:276] 0 containers: []
	W0914 18:09:59.334187   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:09:59.334198   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:09:59.334211   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:09:59.386847   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:09:59.386884   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:09:59.400163   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:09:59.400193   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:09:59.476375   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:09:59.476400   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:09:59.476416   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:09:59.554564   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:09:59.554599   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
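
The recurring "failed describe nodes ... localhost:8443 was refused" entries come from invoking the pinned v1.20.0 kubectl while no apiserver is serving. A hedged sketch of running that command from Go and surfacing the exit status and stderr (the binary and kubeconfig paths are copied from the log; the error handling is illustrative):

    package describesketch

    import (
        "bytes"
        "errors"
        "fmt"
        "os/exec"
    )

    // describeNodes runs the pinned kubectl against the static kubeconfig; while
    // nothing listens on localhost:8443 it fails with exit status 1, as logged above.
    func describeNodes() (string, error) {
        cmd := exec.Command("sudo",
            "/var/lib/minikube/binaries/v1.20.0/kubectl", "describe", "nodes",
            "--kubeconfig=/var/lib/minikube/kubeconfig")
        var stdout, stderr bytes.Buffer
        cmd.Stdout = &stdout
        cmd.Stderr = &stderr
        err := cmd.Run()
        var exitErr *exec.ExitError
        if errors.As(err, &exitErr) {
            return stdout.String(), fmt.Errorf("kubectl exited with status %d: %s",
                exitErr.ExitCode(), stderr.String())
        }
        return stdout.String(), err
    }
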
	I0914 18:09:57.578803   62207 pod_ready.go:103] pod "etcd-no-preload-168587" in "kube-system" namespace has status "Ready":"False"
	I0914 18:09:59.576525   62207 pod_ready.go:93] pod "etcd-no-preload-168587" in "kube-system" namespace has status "Ready":"True"
	I0914 18:09:59.576547   62207 pod_ready.go:82] duration metric: took 6.006207927s for pod "etcd-no-preload-168587" in "kube-system" namespace to be "Ready" ...
	I0914 18:09:59.576556   62207 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-no-preload-168587" in "kube-system" namespace to be "Ready" ...
	I0914 18:10:00.084027   62207 pod_ready.go:93] pod "kube-apiserver-no-preload-168587" in "kube-system" namespace has status "Ready":"True"
	I0914 18:10:00.084054   62207 pod_ready.go:82] duration metric: took 507.490867ms for pod "kube-apiserver-no-preload-168587" in "kube-system" namespace to be "Ready" ...
	I0914 18:10:00.084067   62207 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-no-preload-168587" in "kube-system" namespace to be "Ready" ...
	I0914 18:10:00.089044   62207 pod_ready.go:93] pod "kube-controller-manager-no-preload-168587" in "kube-system" namespace has status "Ready":"True"
	I0914 18:10:00.089068   62207 pod_ready.go:82] duration metric: took 4.991847ms for pod "kube-controller-manager-no-preload-168587" in "kube-system" namespace to be "Ready" ...
	I0914 18:10:00.089079   62207 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-lvp9h" in "kube-system" namespace to be "Ready" ...
	I0914 18:10:00.093160   62207 pod_ready.go:93] pod "kube-proxy-lvp9h" in "kube-system" namespace has status "Ready":"True"
	I0914 18:10:00.093179   62207 pod_ready.go:82] duration metric: took 4.093257ms for pod "kube-proxy-lvp9h" in "kube-system" namespace to be "Ready" ...
	I0914 18:10:00.093198   62207 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-no-preload-168587" in "kube-system" namespace to be "Ready" ...
	I0914 18:10:00.096786   62207 pod_ready.go:93] pod "kube-scheduler-no-preload-168587" in "kube-system" namespace has status "Ready":"True"
	I0914 18:10:00.096800   62207 pod_ready.go:82] duration metric: took 3.594996ms for pod "kube-scheduler-no-preload-168587" in "kube-system" namespace to be "Ready" ...
	I0914 18:10:00.096807   62207 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace to be "Ready" ...
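
From this point the remaining waits are all on metrics-server pods that never report Ready. When chasing that by hand it helps to look past the pod-level Ready condition at the individual container statuses; a hedged client-go sketch of such a check (not part of the test flow, names are illustrative):

    package metricsdebug

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // explainNotReady prints, for every container in the pod, whether it is ready
    // and any waiting or termination reason the kubelet has recorded.
    func explainNotReady(ctx context.Context, cs kubernetes.Interface, ns, name string) error {
        pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
        if err != nil {
            return err
        }
        for _, s := range pod.Status.ContainerStatuses {
            fmt.Printf("container %s ready=%v restarts=%d", s.Name, s.Ready, s.RestartCount)
            if s.State.Waiting != nil {
                fmt.Printf(" waiting=%s (%s)", s.State.Waiting.Reason, s.State.Waiting.Message)
            }
            if s.State.Terminated != nil {
                fmt.Printf(" terminated=%s exit=%d", s.State.Terminated.Reason, s.State.Terminated.ExitCode)
            }
            fmt.Println()
        }
        return nil
    }
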
	I0914 18:10:01.082601   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:03.581290   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:01.502864   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:04.001645   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:02.095079   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:10:02.108933   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:10:02.109003   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:10:02.141838   62996 cri.go:89] found id: ""
	I0914 18:10:02.141861   62996 logs.go:276] 0 containers: []
	W0914 18:10:02.141869   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:10:02.141875   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:10:02.141934   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:10:02.176437   62996 cri.go:89] found id: ""
	I0914 18:10:02.176460   62996 logs.go:276] 0 containers: []
	W0914 18:10:02.176467   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:10:02.176472   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:10:02.176516   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:10:02.210341   62996 cri.go:89] found id: ""
	I0914 18:10:02.210369   62996 logs.go:276] 0 containers: []
	W0914 18:10:02.210381   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:10:02.210388   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:10:02.210434   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:10:02.243343   62996 cri.go:89] found id: ""
	I0914 18:10:02.243373   62996 logs.go:276] 0 containers: []
	W0914 18:10:02.243384   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:10:02.243391   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:10:02.243461   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:10:02.276630   62996 cri.go:89] found id: ""
	I0914 18:10:02.276657   62996 logs.go:276] 0 containers: []
	W0914 18:10:02.276668   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:10:02.276675   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:10:02.276736   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:10:02.311626   62996 cri.go:89] found id: ""
	I0914 18:10:02.311659   62996 logs.go:276] 0 containers: []
	W0914 18:10:02.311674   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:10:02.311682   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:10:02.311748   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:10:02.345868   62996 cri.go:89] found id: ""
	I0914 18:10:02.345892   62996 logs.go:276] 0 containers: []
	W0914 18:10:02.345901   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:10:02.345908   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:10:02.345966   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:10:02.380111   62996 cri.go:89] found id: ""
	I0914 18:10:02.380139   62996 logs.go:276] 0 containers: []
	W0914 18:10:02.380147   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:10:02.380156   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:10:02.380167   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:10:02.421061   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:10:02.421094   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:10:02.474596   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:10:02.474633   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:10:02.487460   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:10:02.487491   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:10:02.554178   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:10:02.554206   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:10:02.554218   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:10:05.138863   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:10:05.152233   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:10:05.152299   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:10:05.187891   62996 cri.go:89] found id: ""
	I0914 18:10:05.187918   62996 logs.go:276] 0 containers: []
	W0914 18:10:05.187929   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:10:05.187936   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:10:05.188000   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:10:05.231634   62996 cri.go:89] found id: ""
	I0914 18:10:05.231667   62996 logs.go:276] 0 containers: []
	W0914 18:10:05.231679   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:10:05.231686   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:10:05.231748   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:10:05.273445   62996 cri.go:89] found id: ""
	I0914 18:10:05.273469   62996 logs.go:276] 0 containers: []
	W0914 18:10:05.273478   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:10:05.273492   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:10:05.273551   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:10:05.308168   62996 cri.go:89] found id: ""
	I0914 18:10:05.308205   62996 logs.go:276] 0 containers: []
	W0914 18:10:05.308216   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:10:05.308224   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:10:05.308285   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:10:02.103118   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:04.103451   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:06.603049   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:05.582900   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:08.083020   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:06.500670   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:08.500752   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:05.343292   62996 cri.go:89] found id: ""
	I0914 18:10:05.343325   62996 logs.go:276] 0 containers: []
	W0914 18:10:05.343336   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:10:05.343343   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:10:05.343404   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:10:05.380420   62996 cri.go:89] found id: ""
	I0914 18:10:05.380445   62996 logs.go:276] 0 containers: []
	W0914 18:10:05.380452   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:10:05.380458   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:10:05.380503   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:10:05.415585   62996 cri.go:89] found id: ""
	I0914 18:10:05.415609   62996 logs.go:276] 0 containers: []
	W0914 18:10:05.415617   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:10:05.415623   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:10:05.415687   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:10:05.457170   62996 cri.go:89] found id: ""
	I0914 18:10:05.457198   62996 logs.go:276] 0 containers: []
	W0914 18:10:05.457208   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:10:05.457219   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:10:05.457234   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:10:05.495647   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:10:05.495681   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:10:05.543775   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:10:05.543813   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:10:05.556717   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:10:05.556750   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:10:05.624690   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:10:05.624713   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:10:05.624728   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:10:08.205292   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:10:08.217720   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:10:08.217786   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:10:08.250560   62996 cri.go:89] found id: ""
	I0914 18:10:08.250590   62996 logs.go:276] 0 containers: []
	W0914 18:10:08.250598   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:10:08.250604   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:10:08.250669   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:10:08.282085   62996 cri.go:89] found id: ""
	I0914 18:10:08.282115   62996 logs.go:276] 0 containers: []
	W0914 18:10:08.282123   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:10:08.282129   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:10:08.282202   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:10:08.314350   62996 cri.go:89] found id: ""
	I0914 18:10:08.314379   62996 logs.go:276] 0 containers: []
	W0914 18:10:08.314391   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:10:08.314398   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:10:08.314461   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:10:08.347672   62996 cri.go:89] found id: ""
	I0914 18:10:08.347703   62996 logs.go:276] 0 containers: []
	W0914 18:10:08.347714   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:10:08.347721   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:10:08.347780   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:10:08.385583   62996 cri.go:89] found id: ""
	I0914 18:10:08.385616   62996 logs.go:276] 0 containers: []
	W0914 18:10:08.385628   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:10:08.385636   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:10:08.385717   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:10:08.421135   62996 cri.go:89] found id: ""
	I0914 18:10:08.421166   62996 logs.go:276] 0 containers: []
	W0914 18:10:08.421176   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:10:08.421184   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:10:08.421242   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:10:08.456784   62996 cri.go:89] found id: ""
	I0914 18:10:08.456811   62996 logs.go:276] 0 containers: []
	W0914 18:10:08.456821   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:10:08.456828   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:10:08.456890   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:10:08.491658   62996 cri.go:89] found id: ""
	I0914 18:10:08.491690   62996 logs.go:276] 0 containers: []
	W0914 18:10:08.491698   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:10:08.491707   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:10:08.491718   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:10:08.544008   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:10:08.544045   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:10:08.557780   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:10:08.557813   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:10:08.631319   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:10:08.631354   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:10:08.631371   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:10:08.709845   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:10:08.709882   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:10:08.604603   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:11.103035   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:10.581739   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:12.582523   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:14.582676   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:11.000857   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:13.000915   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:15.001474   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:11.248034   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:10:11.261403   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:10:11.261471   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:10:11.294260   62996 cri.go:89] found id: ""
	I0914 18:10:11.294287   62996 logs.go:276] 0 containers: []
	W0914 18:10:11.294298   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:10:11.294305   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:10:11.294376   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:10:11.326784   62996 cri.go:89] found id: ""
	I0914 18:10:11.326811   62996 logs.go:276] 0 containers: []
	W0914 18:10:11.326822   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:10:11.326829   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:10:11.326878   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:10:11.359209   62996 cri.go:89] found id: ""
	I0914 18:10:11.359234   62996 logs.go:276] 0 containers: []
	W0914 18:10:11.359242   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:10:11.359247   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:10:11.359316   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:10:11.393800   62996 cri.go:89] found id: ""
	I0914 18:10:11.393828   62996 logs.go:276] 0 containers: []
	W0914 18:10:11.393836   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:10:11.393842   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:10:11.393889   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:10:11.425772   62996 cri.go:89] found id: ""
	I0914 18:10:11.425798   62996 logs.go:276] 0 containers: []
	W0914 18:10:11.425808   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:10:11.425815   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:10:11.425877   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:10:11.464139   62996 cri.go:89] found id: ""
	I0914 18:10:11.464165   62996 logs.go:276] 0 containers: []
	W0914 18:10:11.464174   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:10:11.464180   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:10:11.464230   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:10:11.498822   62996 cri.go:89] found id: ""
	I0914 18:10:11.498848   62996 logs.go:276] 0 containers: []
	W0914 18:10:11.498859   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:10:11.498869   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:10:11.498925   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:10:11.532591   62996 cri.go:89] found id: ""
	I0914 18:10:11.532623   62996 logs.go:276] 0 containers: []
	W0914 18:10:11.532634   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:10:11.532646   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:10:11.532660   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:10:11.608873   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:10:11.608892   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:10:11.608903   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:10:11.684622   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:10:11.684663   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:10:11.726639   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:10:11.726667   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:10:11.780380   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:10:11.780415   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:10:14.294514   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:10:14.308716   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:10:14.308779   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:10:14.348399   62996 cri.go:89] found id: ""
	I0914 18:10:14.348423   62996 logs.go:276] 0 containers: []
	W0914 18:10:14.348431   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:10:14.348437   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:10:14.348485   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:10:14.387040   62996 cri.go:89] found id: ""
	I0914 18:10:14.387071   62996 logs.go:276] 0 containers: []
	W0914 18:10:14.387082   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:10:14.387088   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:10:14.387144   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:10:14.424704   62996 cri.go:89] found id: ""
	I0914 18:10:14.424733   62996 logs.go:276] 0 containers: []
	W0914 18:10:14.424741   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:10:14.424746   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:10:14.424793   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:10:14.464395   62996 cri.go:89] found id: ""
	I0914 18:10:14.464431   62996 logs.go:276] 0 containers: []
	W0914 18:10:14.464442   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:10:14.464450   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:10:14.464511   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:10:14.495895   62996 cri.go:89] found id: ""
	I0914 18:10:14.495921   62996 logs.go:276] 0 containers: []
	W0914 18:10:14.495931   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:10:14.495938   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:10:14.496001   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:10:14.532877   62996 cri.go:89] found id: ""
	I0914 18:10:14.532904   62996 logs.go:276] 0 containers: []
	W0914 18:10:14.532914   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:10:14.532921   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:10:14.532987   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:10:14.568381   62996 cri.go:89] found id: ""
	I0914 18:10:14.568408   62996 logs.go:276] 0 containers: []
	W0914 18:10:14.568423   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:10:14.568430   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:10:14.568491   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:10:14.603867   62996 cri.go:89] found id: ""
	I0914 18:10:14.603897   62996 logs.go:276] 0 containers: []
	W0914 18:10:14.603908   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:10:14.603917   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:10:14.603933   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:10:14.616681   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:10:14.616705   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:10:14.687817   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:10:14.687852   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:10:14.687866   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:10:14.761672   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:10:14.761714   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:10:14.802676   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:10:14.802705   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:10:13.103818   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:15.602921   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:17.082737   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:19.082771   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:17.501947   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:20.000464   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:17.353218   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:10:17.366139   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:10:17.366224   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:10:17.404478   62996 cri.go:89] found id: ""
	I0914 18:10:17.404511   62996 logs.go:276] 0 containers: []
	W0914 18:10:17.404522   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:10:17.404530   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:10:17.404608   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:10:17.437553   62996 cri.go:89] found id: ""
	I0914 18:10:17.437579   62996 logs.go:276] 0 containers: []
	W0914 18:10:17.437588   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:10:17.437593   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:10:17.437648   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:10:17.473815   62996 cri.go:89] found id: ""
	I0914 18:10:17.473842   62996 logs.go:276] 0 containers: []
	W0914 18:10:17.473850   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:10:17.473855   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:10:17.473919   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:10:17.518593   62996 cri.go:89] found id: ""
	I0914 18:10:17.518617   62996 logs.go:276] 0 containers: []
	W0914 18:10:17.518625   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:10:17.518631   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:10:17.518679   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:10:17.554631   62996 cri.go:89] found id: ""
	I0914 18:10:17.554663   62996 logs.go:276] 0 containers: []
	W0914 18:10:17.554675   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:10:17.554682   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:10:17.554742   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:10:17.591485   62996 cri.go:89] found id: ""
	I0914 18:10:17.591512   62996 logs.go:276] 0 containers: []
	W0914 18:10:17.591520   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:10:17.591525   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:10:17.591582   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:10:17.629883   62996 cri.go:89] found id: ""
	I0914 18:10:17.629910   62996 logs.go:276] 0 containers: []
	W0914 18:10:17.629918   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:10:17.629925   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:10:17.629973   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:10:17.670639   62996 cri.go:89] found id: ""
	I0914 18:10:17.670666   62996 logs.go:276] 0 containers: []
	W0914 18:10:17.670677   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:10:17.670688   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:10:17.670700   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:10:17.725056   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:10:17.725095   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:10:17.738236   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:10:17.738267   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:10:17.812931   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:10:17.812963   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:10:17.812978   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:10:17.896394   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:10:17.896426   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:10:18.102598   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:20.104053   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:21.085272   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:23.583185   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:22.001396   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:24.500424   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:20.434465   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:10:20.448801   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:10:20.448878   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:10:20.482909   62996 cri.go:89] found id: ""
	I0914 18:10:20.482937   62996 logs.go:276] 0 containers: []
	W0914 18:10:20.482949   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:10:20.482956   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:10:20.483017   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:10:20.516865   62996 cri.go:89] found id: ""
	I0914 18:10:20.516888   62996 logs.go:276] 0 containers: []
	W0914 18:10:20.516896   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:10:20.516902   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:10:20.516961   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:10:20.556131   62996 cri.go:89] found id: ""
	I0914 18:10:20.556164   62996 logs.go:276] 0 containers: []
	W0914 18:10:20.556174   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:10:20.556182   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:10:20.556246   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:10:20.594755   62996 cri.go:89] found id: ""
	I0914 18:10:20.594779   62996 logs.go:276] 0 containers: []
	W0914 18:10:20.594787   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:10:20.594795   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:10:20.594841   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:10:20.630259   62996 cri.go:89] found id: ""
	I0914 18:10:20.630290   62996 logs.go:276] 0 containers: []
	W0914 18:10:20.630300   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:10:20.630307   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:10:20.630379   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:10:20.667721   62996 cri.go:89] found id: ""
	I0914 18:10:20.667754   62996 logs.go:276] 0 containers: []
	W0914 18:10:20.667763   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:10:20.667769   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:10:20.667826   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:10:20.706358   62996 cri.go:89] found id: ""
	I0914 18:10:20.706387   62996 logs.go:276] 0 containers: []
	W0914 18:10:20.706396   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:10:20.706401   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:10:20.706462   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:10:20.738514   62996 cri.go:89] found id: ""
	I0914 18:10:20.738541   62996 logs.go:276] 0 containers: []
	W0914 18:10:20.738549   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:10:20.738557   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:10:20.738576   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:10:20.775075   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:10:20.775105   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:10:20.825988   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:10:20.826026   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:10:20.839157   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:10:20.839194   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:10:20.915730   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:10:20.915750   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:10:20.915762   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:10:23.497427   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:10:23.511559   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:10:23.511633   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:10:23.546913   62996 cri.go:89] found id: ""
	I0914 18:10:23.546945   62996 logs.go:276] 0 containers: []
	W0914 18:10:23.546958   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:10:23.546969   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:10:23.547034   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:10:23.584438   62996 cri.go:89] found id: ""
	I0914 18:10:23.584457   62996 logs.go:276] 0 containers: []
	W0914 18:10:23.584463   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:10:23.584469   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:10:23.584517   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:10:23.618777   62996 cri.go:89] found id: ""
	I0914 18:10:23.618804   62996 logs.go:276] 0 containers: []
	W0914 18:10:23.618812   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:10:23.618817   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:10:23.618876   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:10:23.652197   62996 cri.go:89] found id: ""
	I0914 18:10:23.652225   62996 logs.go:276] 0 containers: []
	W0914 18:10:23.652236   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:10:23.652244   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:10:23.652304   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:10:23.687678   62996 cri.go:89] found id: ""
	I0914 18:10:23.687712   62996 logs.go:276] 0 containers: []
	W0914 18:10:23.687725   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:10:23.687733   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:10:23.687790   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:10:23.720884   62996 cri.go:89] found id: ""
	I0914 18:10:23.720918   62996 logs.go:276] 0 containers: []
	W0914 18:10:23.720929   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:10:23.720936   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:10:23.721004   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:10:23.753335   62996 cri.go:89] found id: ""
	I0914 18:10:23.753365   62996 logs.go:276] 0 containers: []
	W0914 18:10:23.753376   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:10:23.753384   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:10:23.753431   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:10:23.787177   62996 cri.go:89] found id: ""
	I0914 18:10:23.787209   62996 logs.go:276] 0 containers: []
	W0914 18:10:23.787230   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:10:23.787241   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:10:23.787254   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:10:23.864763   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:10:23.864802   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:10:23.903394   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:10:23.903424   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:10:23.952696   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:10:23.952734   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:10:23.967115   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:10:23.967142   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:10:24.035394   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:10:22.602815   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:24.603230   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:26.604416   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:26.082291   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:28.582007   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:26.501088   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:29.001400   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:26.536361   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:10:26.550666   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:10:26.550746   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:10:26.588940   62996 cri.go:89] found id: ""
	I0914 18:10:26.588974   62996 logs.go:276] 0 containers: []
	W0914 18:10:26.588988   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:10:26.588997   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:10:26.589064   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:10:26.627475   62996 cri.go:89] found id: ""
	I0914 18:10:26.627523   62996 logs.go:276] 0 containers: []
	W0914 18:10:26.627537   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:10:26.627546   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:10:26.627619   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:10:26.664995   62996 cri.go:89] found id: ""
	I0914 18:10:26.665021   62996 logs.go:276] 0 containers: []
	W0914 18:10:26.665029   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:10:26.665034   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:10:26.665087   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:10:26.699195   62996 cri.go:89] found id: ""
	I0914 18:10:26.699223   62996 logs.go:276] 0 containers: []
	W0914 18:10:26.699234   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:10:26.699241   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:10:26.699300   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:10:26.735746   62996 cri.go:89] found id: ""
	I0914 18:10:26.735781   62996 logs.go:276] 0 containers: []
	W0914 18:10:26.735790   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:10:26.735795   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:10:26.735857   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:10:26.772220   62996 cri.go:89] found id: ""
	I0914 18:10:26.772251   62996 logs.go:276] 0 containers: []
	W0914 18:10:26.772261   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:10:26.772270   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:10:26.772331   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:10:26.808301   62996 cri.go:89] found id: ""
	I0914 18:10:26.808330   62996 logs.go:276] 0 containers: []
	W0914 18:10:26.808339   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:10:26.808346   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:10:26.808412   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:10:26.844824   62996 cri.go:89] found id: ""
	I0914 18:10:26.844858   62996 logs.go:276] 0 containers: []
	W0914 18:10:26.844870   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:10:26.844880   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:10:26.844916   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:10:26.899960   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:10:26.899999   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:10:26.914413   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:10:26.914438   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:10:26.990599   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:10:26.990620   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:10:26.990632   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:10:27.067822   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:10:27.067872   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:10:29.610959   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:10:29.625456   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:10:29.625517   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:10:29.662963   62996 cri.go:89] found id: ""
	I0914 18:10:29.662990   62996 logs.go:276] 0 containers: []
	W0914 18:10:29.663002   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:10:29.663009   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:10:29.663078   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:10:29.702141   62996 cri.go:89] found id: ""
	I0914 18:10:29.702189   62996 logs.go:276] 0 containers: []
	W0914 18:10:29.702201   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:10:29.702208   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:10:29.702265   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:10:29.737559   62996 cri.go:89] found id: ""
	I0914 18:10:29.737584   62996 logs.go:276] 0 containers: []
	W0914 18:10:29.737592   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:10:29.737598   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:10:29.737644   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:10:29.773544   62996 cri.go:89] found id: ""
	I0914 18:10:29.773570   62996 logs.go:276] 0 containers: []
	W0914 18:10:29.773578   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:10:29.773586   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:10:29.773639   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:10:29.815355   62996 cri.go:89] found id: ""
	I0914 18:10:29.815401   62996 logs.go:276] 0 containers: []
	W0914 18:10:29.815414   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:10:29.815422   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:10:29.815490   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:10:29.855729   62996 cri.go:89] found id: ""
	I0914 18:10:29.855755   62996 logs.go:276] 0 containers: []
	W0914 18:10:29.855765   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:10:29.855772   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:10:29.855835   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:10:29.894023   62996 cri.go:89] found id: ""
	I0914 18:10:29.894048   62996 logs.go:276] 0 containers: []
	W0914 18:10:29.894056   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:10:29.894063   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:10:29.894120   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:10:29.928873   62996 cri.go:89] found id: ""
	I0914 18:10:29.928900   62996 logs.go:276] 0 containers: []
	W0914 18:10:29.928910   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:10:29.928921   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:10:29.928937   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:10:30.005879   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:10:30.005904   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:10:30.005917   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:10:30.087160   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:10:30.087196   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:10:30.126027   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:10:30.126058   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:10:30.178901   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:10:30.178941   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:10:28.604725   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:31.103833   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:30.582800   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:33.082884   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:31.001447   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:33.501525   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:32.692789   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:10:32.708884   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:10:32.708942   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:10:32.744684   62996 cri.go:89] found id: ""
	I0914 18:10:32.744711   62996 logs.go:276] 0 containers: []
	W0914 18:10:32.744722   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:10:32.744729   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:10:32.744789   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:10:32.778311   62996 cri.go:89] found id: ""
	I0914 18:10:32.778345   62996 logs.go:276] 0 containers: []
	W0914 18:10:32.778355   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:10:32.778362   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:10:32.778421   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:10:32.820122   62996 cri.go:89] found id: ""
	I0914 18:10:32.820150   62996 logs.go:276] 0 containers: []
	W0914 18:10:32.820158   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:10:32.820163   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:10:32.820213   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:10:32.856507   62996 cri.go:89] found id: ""
	I0914 18:10:32.856541   62996 logs.go:276] 0 containers: []
	W0914 18:10:32.856552   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:10:32.856559   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:10:32.856622   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:10:32.891891   62996 cri.go:89] found id: ""
	I0914 18:10:32.891922   62996 logs.go:276] 0 containers: []
	W0914 18:10:32.891934   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:10:32.891942   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:10:32.892001   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:10:32.936666   62996 cri.go:89] found id: ""
	I0914 18:10:32.936696   62996 logs.go:276] 0 containers: []
	W0914 18:10:32.936708   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:10:32.936715   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:10:32.936783   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:10:32.972287   62996 cri.go:89] found id: ""
	I0914 18:10:32.972321   62996 logs.go:276] 0 containers: []
	W0914 18:10:32.972333   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:10:32.972341   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:10:32.972406   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:10:33.028398   62996 cri.go:89] found id: ""
	I0914 18:10:33.028423   62996 logs.go:276] 0 containers: []
	W0914 18:10:33.028430   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:10:33.028438   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:10:33.028447   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:10:33.041604   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:10:33.041631   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:10:33.116278   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:10:33.116310   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:10:33.116325   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:10:33.194720   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:10:33.194755   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:10:33.235741   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:10:33.235778   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:10:33.603121   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:35.604573   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:35.083689   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:37.583721   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:36.000829   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:38.001022   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:40.002742   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:35.787601   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:10:35.801819   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:10:35.801895   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:10:35.837381   62996 cri.go:89] found id: ""
	I0914 18:10:35.837409   62996 logs.go:276] 0 containers: []
	W0914 18:10:35.837417   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:10:35.837423   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:10:35.837473   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:10:35.872876   62996 cri.go:89] found id: ""
	I0914 18:10:35.872907   62996 logs.go:276] 0 containers: []
	W0914 18:10:35.872915   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:10:35.872921   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:10:35.872972   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:10:35.908885   62996 cri.go:89] found id: ""
	I0914 18:10:35.908912   62996 logs.go:276] 0 containers: []
	W0914 18:10:35.908927   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:10:35.908932   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:10:35.908991   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:10:35.943358   62996 cri.go:89] found id: ""
	I0914 18:10:35.943386   62996 logs.go:276] 0 containers: []
	W0914 18:10:35.943395   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:10:35.943400   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:10:35.943450   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:10:35.978387   62996 cri.go:89] found id: ""
	I0914 18:10:35.978416   62996 logs.go:276] 0 containers: []
	W0914 18:10:35.978427   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:10:35.978434   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:10:35.978486   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:10:36.012836   62996 cri.go:89] found id: ""
	I0914 18:10:36.012863   62996 logs.go:276] 0 containers: []
	W0914 18:10:36.012874   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:10:36.012881   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:10:36.012931   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:10:36.048243   62996 cri.go:89] found id: ""
	I0914 18:10:36.048272   62996 logs.go:276] 0 containers: []
	W0914 18:10:36.048283   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:10:36.048290   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:10:36.048378   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:10:36.089415   62996 cri.go:89] found id: ""
	I0914 18:10:36.089449   62996 logs.go:276] 0 containers: []
	W0914 18:10:36.089460   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:10:36.089471   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:10:36.089484   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:10:36.141287   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:10:36.141324   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:10:36.154418   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:10:36.154444   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:10:36.228454   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:10:36.228483   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:10:36.228500   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:10:36.302020   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:10:36.302063   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:10:38.841946   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:10:38.855010   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:10:38.855072   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:10:38.890835   62996 cri.go:89] found id: ""
	I0914 18:10:38.890867   62996 logs.go:276] 0 containers: []
	W0914 18:10:38.890878   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:10:38.890886   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:10:38.890945   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:10:38.924675   62996 cri.go:89] found id: ""
	I0914 18:10:38.924700   62996 logs.go:276] 0 containers: []
	W0914 18:10:38.924708   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:10:38.924713   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:10:38.924761   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:10:38.959999   62996 cri.go:89] found id: ""
	I0914 18:10:38.960024   62996 logs.go:276] 0 containers: []
	W0914 18:10:38.960032   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:10:38.960038   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:10:38.960097   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:10:38.995718   62996 cri.go:89] found id: ""
	I0914 18:10:38.995747   62996 logs.go:276] 0 containers: []
	W0914 18:10:38.995755   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:10:38.995761   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:10:38.995810   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:10:39.031178   62996 cri.go:89] found id: ""
	I0914 18:10:39.031208   62996 logs.go:276] 0 containers: []
	W0914 18:10:39.031224   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:10:39.031232   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:10:39.031292   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:10:39.065511   62996 cri.go:89] found id: ""
	I0914 18:10:39.065540   62996 logs.go:276] 0 containers: []
	W0914 18:10:39.065560   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:10:39.065569   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:10:39.065628   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:10:39.103625   62996 cri.go:89] found id: ""
	I0914 18:10:39.103655   62996 logs.go:276] 0 containers: []
	W0914 18:10:39.103671   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:10:39.103678   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:10:39.103772   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:10:39.140140   62996 cri.go:89] found id: ""
	I0914 18:10:39.140169   62996 logs.go:276] 0 containers: []
	W0914 18:10:39.140179   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:10:39.140189   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:10:39.140205   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:10:39.154953   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:10:39.154980   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:10:39.226745   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:10:39.226778   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:10:39.226794   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:10:39.305268   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:10:39.305310   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:10:39.345363   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:10:39.345389   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:10:38.102910   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:40.103826   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:40.082907   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:42.083587   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:44.582457   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:42.500851   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:45.001069   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:41.897635   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:10:41.910895   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:10:41.910962   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:10:41.946302   62996 cri.go:89] found id: ""
	I0914 18:10:41.946327   62996 logs.go:276] 0 containers: []
	W0914 18:10:41.946338   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:10:41.946345   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:10:41.946405   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:10:41.983180   62996 cri.go:89] found id: ""
	I0914 18:10:41.983210   62996 logs.go:276] 0 containers: []
	W0914 18:10:41.983221   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:10:41.983231   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:10:41.983294   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:10:42.017923   62996 cri.go:89] found id: ""
	I0914 18:10:42.017946   62996 logs.go:276] 0 containers: []
	W0914 18:10:42.017954   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:10:42.017959   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:10:42.018006   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:10:42.052086   62996 cri.go:89] found id: ""
	I0914 18:10:42.052122   62996 logs.go:276] 0 containers: []
	W0914 18:10:42.052133   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:10:42.052140   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:10:42.052206   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:10:42.092000   62996 cri.go:89] found id: ""
	I0914 18:10:42.092029   62996 logs.go:276] 0 containers: []
	W0914 18:10:42.092040   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:10:42.092048   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:10:42.092114   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:10:42.130402   62996 cri.go:89] found id: ""
	I0914 18:10:42.130436   62996 logs.go:276] 0 containers: []
	W0914 18:10:42.130447   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:10:42.130455   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:10:42.130505   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:10:42.166614   62996 cri.go:89] found id: ""
	I0914 18:10:42.166639   62996 logs.go:276] 0 containers: []
	W0914 18:10:42.166647   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:10:42.166653   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:10:42.166704   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:10:42.199763   62996 cri.go:89] found id: ""
	I0914 18:10:42.199795   62996 logs.go:276] 0 containers: []
	W0914 18:10:42.199808   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:10:42.199820   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:10:42.199835   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:10:42.251564   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:10:42.251597   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:10:42.264771   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:10:42.264806   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:10:42.335441   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:10:42.335465   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:10:42.335489   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:10:42.417678   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:10:42.417715   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:10:44.956372   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:10:44.970643   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:10:44.970717   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:10:45.011625   62996 cri.go:89] found id: ""
	I0914 18:10:45.011659   62996 logs.go:276] 0 containers: []
	W0914 18:10:45.011671   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:10:45.011678   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:10:45.011738   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:10:45.047489   62996 cri.go:89] found id: ""
	I0914 18:10:45.047515   62996 logs.go:276] 0 containers: []
	W0914 18:10:45.047526   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:10:45.047541   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:10:45.047610   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:10:45.084909   62996 cri.go:89] found id: ""
	I0914 18:10:45.084935   62996 logs.go:276] 0 containers: []
	W0914 18:10:45.084957   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:10:45.084964   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:10:45.085035   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:10:45.120074   62996 cri.go:89] found id: ""
	I0914 18:10:45.120104   62996 logs.go:276] 0 containers: []
	W0914 18:10:45.120115   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:10:45.120123   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:10:45.120181   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:10:45.164010   62996 cri.go:89] found id: ""
	I0914 18:10:45.164039   62996 logs.go:276] 0 containers: []
	W0914 18:10:45.164050   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:10:45.164058   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:10:45.164128   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:10:45.209565   62996 cri.go:89] found id: ""
	I0914 18:10:45.209590   62996 logs.go:276] 0 containers: []
	W0914 18:10:45.209598   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:10:45.209604   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:10:45.209651   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:10:45.265484   62996 cri.go:89] found id: ""
	I0914 18:10:45.265513   62996 logs.go:276] 0 containers: []
	W0914 18:10:45.265521   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:10:45.265527   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:10:45.265593   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:10:45.300671   62996 cri.go:89] found id: ""
	I0914 18:10:45.300700   62996 logs.go:276] 0 containers: []
	W0914 18:10:45.300711   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:10:45.300722   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:10:45.300739   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:10:42.603017   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:45.104603   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:47.082010   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:49.082648   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:47.500917   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:50.001192   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:45.352657   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:10:45.352699   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:10:45.366347   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:10:45.366381   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:10:45.442993   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:10:45.443013   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:10:45.443024   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:10:45.523475   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:10:45.523522   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:10:48.062222   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:10:48.075764   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:10:48.075832   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:10:48.111836   62996 cri.go:89] found id: ""
	I0914 18:10:48.111864   62996 logs.go:276] 0 containers: []
	W0914 18:10:48.111876   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:10:48.111884   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:10:48.111942   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:10:48.144440   62996 cri.go:89] found id: ""
	I0914 18:10:48.144471   62996 logs.go:276] 0 containers: []
	W0914 18:10:48.144483   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:10:48.144490   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:10:48.144553   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:10:48.179694   62996 cri.go:89] found id: ""
	I0914 18:10:48.179724   62996 logs.go:276] 0 containers: []
	W0914 18:10:48.179732   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:10:48.179738   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:10:48.179799   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:10:48.217290   62996 cri.go:89] found id: ""
	I0914 18:10:48.217320   62996 logs.go:276] 0 containers: []
	W0914 18:10:48.217331   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:10:48.217337   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:10:48.217384   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:10:48.252071   62996 cri.go:89] found id: ""
	I0914 18:10:48.252098   62996 logs.go:276] 0 containers: []
	W0914 18:10:48.252105   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:10:48.252111   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:10:48.252172   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:10:48.285372   62996 cri.go:89] found id: ""
	I0914 18:10:48.285399   62996 logs.go:276] 0 containers: []
	W0914 18:10:48.285407   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:10:48.285414   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:10:48.285461   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:10:48.318015   62996 cri.go:89] found id: ""
	I0914 18:10:48.318040   62996 logs.go:276] 0 containers: []
	W0914 18:10:48.318048   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:10:48.318054   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:10:48.318099   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:10:48.350976   62996 cri.go:89] found id: ""
	I0914 18:10:48.351006   62996 logs.go:276] 0 containers: []
	W0914 18:10:48.351018   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:10:48.351027   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:10:48.351040   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:10:48.364707   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:10:48.364731   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:10:48.436438   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:10:48.436472   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:10:48.436488   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:10:48.517132   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:10:48.517165   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:10:48.555153   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:10:48.555182   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:10:47.603610   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:50.104612   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:51.083246   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:53.582120   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:52.001273   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:54.001308   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:51.108066   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:10:51.121176   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:10:51.121254   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:10:51.155641   62996 cri.go:89] found id: ""
	I0914 18:10:51.155675   62996 logs.go:276] 0 containers: []
	W0914 18:10:51.155687   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:10:51.155693   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:10:51.155744   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:10:51.189642   62996 cri.go:89] found id: ""
	I0914 18:10:51.189677   62996 logs.go:276] 0 containers: []
	W0914 18:10:51.189691   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:10:51.189698   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:10:51.189763   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:10:51.223337   62996 cri.go:89] found id: ""
	I0914 18:10:51.223365   62996 logs.go:276] 0 containers: []
	W0914 18:10:51.223375   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:10:51.223383   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:10:51.223446   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:10:51.259524   62996 cri.go:89] found id: ""
	I0914 18:10:51.259549   62996 logs.go:276] 0 containers: []
	W0914 18:10:51.259557   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:10:51.259568   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:10:51.259625   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:10:51.295307   62996 cri.go:89] found id: ""
	I0914 18:10:51.295336   62996 logs.go:276] 0 containers: []
	W0914 18:10:51.295347   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:10:51.295354   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:10:51.295419   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:10:51.330619   62996 cri.go:89] found id: ""
	I0914 18:10:51.330658   62996 logs.go:276] 0 containers: []
	W0914 18:10:51.330670   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:10:51.330677   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:10:51.330741   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:10:51.365146   62996 cri.go:89] found id: ""
	I0914 18:10:51.365178   62996 logs.go:276] 0 containers: []
	W0914 18:10:51.365191   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:10:51.365200   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:10:51.365263   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:10:51.403295   62996 cri.go:89] found id: ""
	I0914 18:10:51.403330   62996 logs.go:276] 0 containers: []
	W0914 18:10:51.403342   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:10:51.403353   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:10:51.403369   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:10:51.467426   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:10:51.467452   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:10:51.467471   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:10:51.552003   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:10:51.552037   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:10:51.591888   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:10:51.591921   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:10:51.645437   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:10:51.645472   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:10:54.160542   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:10:54.173965   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:10:54.174040   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:10:54.209242   62996 cri.go:89] found id: ""
	I0914 18:10:54.209270   62996 logs.go:276] 0 containers: []
	W0914 18:10:54.209281   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:10:54.209288   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:10:54.209365   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:10:54.242345   62996 cri.go:89] found id: ""
	I0914 18:10:54.242374   62996 logs.go:276] 0 containers: []
	W0914 18:10:54.242384   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:10:54.242392   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:10:54.242453   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:10:54.278677   62996 cri.go:89] found id: ""
	I0914 18:10:54.278707   62996 logs.go:276] 0 containers: []
	W0914 18:10:54.278718   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:10:54.278725   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:10:54.278793   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:10:54.314802   62996 cri.go:89] found id: ""
	I0914 18:10:54.314831   62996 logs.go:276] 0 containers: []
	W0914 18:10:54.314842   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:10:54.314849   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:10:54.314920   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:10:54.349075   62996 cri.go:89] found id: ""
	I0914 18:10:54.349100   62996 logs.go:276] 0 containers: []
	W0914 18:10:54.349120   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:10:54.349127   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:10:54.349189   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:10:54.382337   62996 cri.go:89] found id: ""
	I0914 18:10:54.382363   62996 logs.go:276] 0 containers: []
	W0914 18:10:54.382371   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:10:54.382376   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:10:54.382423   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:10:54.416613   62996 cri.go:89] found id: ""
	I0914 18:10:54.416640   62996 logs.go:276] 0 containers: []
	W0914 18:10:54.416649   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:10:54.416654   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:10:54.416701   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:10:54.449563   62996 cri.go:89] found id: ""
	I0914 18:10:54.449596   62996 logs.go:276] 0 containers: []
	W0914 18:10:54.449606   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:10:54.449617   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:10:54.449631   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:10:54.487454   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:10:54.487489   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:10:54.541679   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:10:54.541720   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:10:54.555267   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:10:54.555299   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:10:54.630280   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:10:54.630313   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:10:54.630323   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:10:52.603604   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:55.104734   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:55.582258   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:58.081905   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:56.002210   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:58.499961   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:57.215606   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:10:57.228469   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:10:57.228550   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:10:57.260643   62996 cri.go:89] found id: ""
	I0914 18:10:57.260675   62996 logs.go:276] 0 containers: []
	W0914 18:10:57.260684   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:10:57.260690   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:10:57.260750   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:10:57.294125   62996 cri.go:89] found id: ""
	I0914 18:10:57.294174   62996 logs.go:276] 0 containers: []
	W0914 18:10:57.294186   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:10:57.294196   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:10:57.294259   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:10:57.328078   62996 cri.go:89] found id: ""
	I0914 18:10:57.328101   62996 logs.go:276] 0 containers: []
	W0914 18:10:57.328108   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:10:57.328114   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:10:57.328173   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:10:57.362451   62996 cri.go:89] found id: ""
	I0914 18:10:57.362476   62996 logs.go:276] 0 containers: []
	W0914 18:10:57.362483   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:10:57.362489   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:10:57.362556   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:10:57.398273   62996 cri.go:89] found id: ""
	I0914 18:10:57.398298   62996 logs.go:276] 0 containers: []
	W0914 18:10:57.398306   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:10:57.398311   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:10:57.398374   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:10:57.431112   62996 cri.go:89] found id: ""
	I0914 18:10:57.431137   62996 logs.go:276] 0 containers: []
	W0914 18:10:57.431145   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:10:57.431151   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:10:57.431197   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:10:57.464930   62996 cri.go:89] found id: ""
	I0914 18:10:57.464956   62996 logs.go:276] 0 containers: []
	W0914 18:10:57.464966   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:10:57.464973   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:10:57.465033   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:10:57.501233   62996 cri.go:89] found id: ""
	I0914 18:10:57.501263   62996 logs.go:276] 0 containers: []
	W0914 18:10:57.501276   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:10:57.501287   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:10:57.501302   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:10:57.550798   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:10:57.550836   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:10:57.564238   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:10:57.564263   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:10:57.634387   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:10:57.634414   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:10:57.634424   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:10:57.714218   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:10:57.714253   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:11:00.251944   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:11:00.264817   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:11:00.264881   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:11:00.306613   62996 cri.go:89] found id: ""
	I0914 18:11:00.306641   62996 logs.go:276] 0 containers: []
	W0914 18:11:00.306651   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:11:00.306658   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:11:00.306717   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:11:00.340297   62996 cri.go:89] found id: ""
	I0914 18:11:00.340327   62996 logs.go:276] 0 containers: []
	W0914 18:11:00.340338   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:11:00.340346   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:11:00.340404   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:10:57.604025   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:00.104193   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:00.083208   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:02.582299   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:04.583803   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:00.500596   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:02.501405   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:04.501527   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:00.373553   62996 cri.go:89] found id: ""
	I0914 18:11:00.373594   62996 logs.go:276] 0 containers: []
	W0914 18:11:00.373603   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:11:00.373609   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:11:00.373657   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:11:00.407351   62996 cri.go:89] found id: ""
	I0914 18:11:00.407381   62996 logs.go:276] 0 containers: []
	W0914 18:11:00.407392   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:11:00.407400   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:11:00.407461   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:11:00.440976   62996 cri.go:89] found id: ""
	I0914 18:11:00.441005   62996 logs.go:276] 0 containers: []
	W0914 18:11:00.441016   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:11:00.441024   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:11:00.441085   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:11:00.478138   62996 cri.go:89] found id: ""
	I0914 18:11:00.478180   62996 logs.go:276] 0 containers: []
	W0914 18:11:00.478193   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:11:00.478201   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:11:00.478264   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:11:00.513861   62996 cri.go:89] found id: ""
	I0914 18:11:00.513885   62996 logs.go:276] 0 containers: []
	W0914 18:11:00.513897   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:11:00.513905   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:11:00.513955   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:11:00.547295   62996 cri.go:89] found id: ""
	I0914 18:11:00.547338   62996 logs.go:276] 0 containers: []
	W0914 18:11:00.547348   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:11:00.547357   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:11:00.547367   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:11:00.598108   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:11:00.598146   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:11:00.611751   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:11:00.611778   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:11:00.688767   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:11:00.688788   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:11:00.688803   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:11:00.771892   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:11:00.771929   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:11:03.310816   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:11:03.323773   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:11:03.323838   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:11:03.357873   62996 cri.go:89] found id: ""
	I0914 18:11:03.357910   62996 logs.go:276] 0 containers: []
	W0914 18:11:03.357922   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:11:03.357934   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:11:03.357995   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:11:03.394978   62996 cri.go:89] found id: ""
	I0914 18:11:03.395012   62996 logs.go:276] 0 containers: []
	W0914 18:11:03.395024   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:11:03.395032   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:11:03.395098   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:11:03.429699   62996 cri.go:89] found id: ""
	I0914 18:11:03.429725   62996 logs.go:276] 0 containers: []
	W0914 18:11:03.429734   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:11:03.429740   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:11:03.429794   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:11:03.462616   62996 cri.go:89] found id: ""
	I0914 18:11:03.462648   62996 logs.go:276] 0 containers: []
	W0914 18:11:03.462660   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:11:03.462692   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:11:03.462759   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:11:03.496464   62996 cri.go:89] found id: ""
	I0914 18:11:03.496495   62996 logs.go:276] 0 containers: []
	W0914 18:11:03.496506   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:11:03.496513   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:11:03.496573   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:11:03.529655   62996 cri.go:89] found id: ""
	I0914 18:11:03.529687   62996 logs.go:276] 0 containers: []
	W0914 18:11:03.529697   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:11:03.529704   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:11:03.529767   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:11:03.563025   62996 cri.go:89] found id: ""
	I0914 18:11:03.563055   62996 logs.go:276] 0 containers: []
	W0914 18:11:03.563064   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:11:03.563069   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:11:03.563123   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:11:03.604066   62996 cri.go:89] found id: ""
	I0914 18:11:03.604088   62996 logs.go:276] 0 containers: []
	W0914 18:11:03.604095   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:11:03.604103   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:11:03.604114   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:11:03.656607   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:11:03.656647   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:11:03.669974   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:11:03.670004   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:11:03.742295   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:11:03.742324   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:11:03.742343   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:11:03.817527   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:11:03.817566   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:11:02.602818   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:05.103061   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:07.083161   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:09.585702   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:06.999885   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:09.001611   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:06.355023   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:11:06.368376   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:11:06.368445   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:11:06.403876   62996 cri.go:89] found id: ""
	I0914 18:11:06.403904   62996 logs.go:276] 0 containers: []
	W0914 18:11:06.403916   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:11:06.403924   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:11:06.403997   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:11:06.438187   62996 cri.go:89] found id: ""
	I0914 18:11:06.438217   62996 logs.go:276] 0 containers: []
	W0914 18:11:06.438229   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:11:06.438236   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:11:06.438302   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:11:06.477599   62996 cri.go:89] found id: ""
	I0914 18:11:06.477628   62996 logs.go:276] 0 containers: []
	W0914 18:11:06.477639   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:11:06.477646   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:11:06.477718   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:11:06.514878   62996 cri.go:89] found id: ""
	I0914 18:11:06.514905   62996 logs.go:276] 0 containers: []
	W0914 18:11:06.514914   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:11:06.514920   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:11:06.514979   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:11:06.552228   62996 cri.go:89] found id: ""
	I0914 18:11:06.552260   62996 logs.go:276] 0 containers: []
	W0914 18:11:06.552272   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:11:06.552279   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:11:06.552346   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:11:06.594600   62996 cri.go:89] found id: ""
	I0914 18:11:06.594630   62996 logs.go:276] 0 containers: []
	W0914 18:11:06.594641   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:11:06.594649   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:11:06.594713   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:11:06.630977   62996 cri.go:89] found id: ""
	I0914 18:11:06.631017   62996 logs.go:276] 0 containers: []
	W0914 18:11:06.631029   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:11:06.631036   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:11:06.631095   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:11:06.666717   62996 cri.go:89] found id: ""
	I0914 18:11:06.666749   62996 logs.go:276] 0 containers: []
	W0914 18:11:06.666760   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:11:06.666771   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:11:06.666784   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:11:06.720438   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:11:06.720474   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:11:06.734264   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:11:06.734299   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:11:06.802999   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:11:06.803020   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:11:06.803039   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:11:06.881422   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:11:06.881462   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:11:09.420948   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:11:09.435498   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:11:09.435582   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:11:09.470441   62996 cri.go:89] found id: ""
	I0914 18:11:09.470473   62996 logs.go:276] 0 containers: []
	W0914 18:11:09.470485   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:11:09.470493   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:11:09.470568   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:11:09.506101   62996 cri.go:89] found id: ""
	I0914 18:11:09.506124   62996 logs.go:276] 0 containers: []
	W0914 18:11:09.506142   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:11:09.506147   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:11:09.506227   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:11:09.541518   62996 cri.go:89] found id: ""
	I0914 18:11:09.541545   62996 logs.go:276] 0 containers: []
	W0914 18:11:09.541553   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:11:09.541558   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:11:09.541618   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:11:09.582697   62996 cri.go:89] found id: ""
	I0914 18:11:09.582725   62996 logs.go:276] 0 containers: []
	W0914 18:11:09.582735   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:11:09.582743   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:11:09.582805   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:11:09.621060   62996 cri.go:89] found id: ""
	I0914 18:11:09.621088   62996 logs.go:276] 0 containers: []
	W0914 18:11:09.621097   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:11:09.621102   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:11:09.621161   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:11:09.657967   62996 cri.go:89] found id: ""
	I0914 18:11:09.657994   62996 logs.go:276] 0 containers: []
	W0914 18:11:09.658003   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:11:09.658008   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:11:09.658060   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:11:09.693397   62996 cri.go:89] found id: ""
	I0914 18:11:09.693432   62996 logs.go:276] 0 containers: []
	W0914 18:11:09.693444   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:11:09.693451   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:11:09.693505   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:11:09.730819   62996 cri.go:89] found id: ""
	I0914 18:11:09.730850   62996 logs.go:276] 0 containers: []
	W0914 18:11:09.730860   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:11:09.730871   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:11:09.730887   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:11:09.745106   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:11:09.745146   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:11:09.817032   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:11:09.817059   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:11:09.817085   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:11:09.897335   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:11:09.897383   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:11:09.939036   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:11:09.939081   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:11:07.603634   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:09.605513   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:12.082145   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:14.082616   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:11.500951   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:14.001238   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:12.493075   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:11:12.506832   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:11:12.506889   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:11:12.545417   62996 cri.go:89] found id: ""
	I0914 18:11:12.545448   62996 logs.go:276] 0 containers: []
	W0914 18:11:12.545456   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:11:12.545464   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:11:12.545516   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:11:12.580346   62996 cri.go:89] found id: ""
	I0914 18:11:12.580379   62996 logs.go:276] 0 containers: []
	W0914 18:11:12.580389   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:11:12.580397   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:11:12.580457   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:11:12.616540   62996 cri.go:89] found id: ""
	I0914 18:11:12.616570   62996 logs.go:276] 0 containers: []
	W0914 18:11:12.616577   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:11:12.616586   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:11:12.616644   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:11:12.649673   62996 cri.go:89] found id: ""
	I0914 18:11:12.649700   62996 logs.go:276] 0 containers: []
	W0914 18:11:12.649709   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:11:12.649714   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:11:12.649767   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:11:12.683840   62996 cri.go:89] found id: ""
	I0914 18:11:12.683868   62996 logs.go:276] 0 containers: []
	W0914 18:11:12.683879   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:11:12.683886   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:11:12.683946   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:11:12.716862   62996 cri.go:89] found id: ""
	I0914 18:11:12.716889   62996 logs.go:276] 0 containers: []
	W0914 18:11:12.716897   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:11:12.716903   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:11:12.716952   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:11:12.751364   62996 cri.go:89] found id: ""
	I0914 18:11:12.751395   62996 logs.go:276] 0 containers: []
	W0914 18:11:12.751406   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:11:12.751414   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:11:12.751471   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:11:12.786425   62996 cri.go:89] found id: ""
	I0914 18:11:12.786457   62996 logs.go:276] 0 containers: []
	W0914 18:11:12.786468   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:11:12.786477   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:11:12.786487   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:11:12.853890   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:11:12.853920   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:11:12.853936   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:11:12.938058   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:11:12.938107   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:11:12.985406   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:11:12.985441   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:11:13.039040   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:11:13.039077   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:11:12.103165   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:14.103338   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:16.103440   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:16.083173   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:18.582225   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:16.001344   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:18.501001   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:15.554110   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:11:15.567977   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:11:15.568051   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:11:15.604851   62996 cri.go:89] found id: ""
	I0914 18:11:15.604879   62996 logs.go:276] 0 containers: []
	W0914 18:11:15.604887   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:11:15.604892   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:11:15.604945   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:11:15.641180   62996 cri.go:89] found id: ""
	I0914 18:11:15.641209   62996 logs.go:276] 0 containers: []
	W0914 18:11:15.641221   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:11:15.641229   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:11:15.641324   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:11:15.680284   62996 cri.go:89] found id: ""
	I0914 18:11:15.680310   62996 logs.go:276] 0 containers: []
	W0914 18:11:15.680327   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:11:15.680334   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:11:15.680395   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:11:15.718118   62996 cri.go:89] found id: ""
	I0914 18:11:15.718152   62996 logs.go:276] 0 containers: []
	W0914 18:11:15.718173   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:11:15.718181   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:11:15.718237   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:11:15.753998   62996 cri.go:89] found id: ""
	I0914 18:11:15.754020   62996 logs.go:276] 0 containers: []
	W0914 18:11:15.754028   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:11:15.754033   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:11:15.754081   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:11:15.790026   62996 cri.go:89] found id: ""
	I0914 18:11:15.790066   62996 logs.go:276] 0 containers: []
	W0914 18:11:15.790084   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:11:15.790093   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:11:15.790179   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:11:15.828050   62996 cri.go:89] found id: ""
	I0914 18:11:15.828078   62996 logs.go:276] 0 containers: []
	W0914 18:11:15.828086   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:11:15.828094   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:11:15.828162   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:11:15.861289   62996 cri.go:89] found id: ""
	I0914 18:11:15.861322   62996 logs.go:276] 0 containers: []
	W0914 18:11:15.861330   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:11:15.861338   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:11:15.861348   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:11:15.875023   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:11:15.875054   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:11:15.943002   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:11:15.943025   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:11:15.943038   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:11:16.027747   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:11:16.027785   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:11:16.067097   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:11:16.067133   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:11:18.621376   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:11:18.634005   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:11:18.634093   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:11:18.667089   62996 cri.go:89] found id: ""
	I0914 18:11:18.667118   62996 logs.go:276] 0 containers: []
	W0914 18:11:18.667127   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:11:18.667132   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:11:18.667184   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:11:18.700518   62996 cri.go:89] found id: ""
	I0914 18:11:18.700547   62996 logs.go:276] 0 containers: []
	W0914 18:11:18.700563   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:11:18.700571   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:11:18.700643   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:11:18.733724   62996 cri.go:89] found id: ""
	I0914 18:11:18.733755   62996 logs.go:276] 0 containers: []
	W0914 18:11:18.733767   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:11:18.733778   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:11:18.733840   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:11:18.768696   62996 cri.go:89] found id: ""
	I0914 18:11:18.768739   62996 logs.go:276] 0 containers: []
	W0914 18:11:18.768750   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:11:18.768757   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:11:18.768816   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:11:18.803603   62996 cri.go:89] found id: ""
	I0914 18:11:18.803636   62996 logs.go:276] 0 containers: []
	W0914 18:11:18.803647   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:11:18.803653   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:11:18.803707   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:11:18.837019   62996 cri.go:89] found id: ""
	I0914 18:11:18.837044   62996 logs.go:276] 0 containers: []
	W0914 18:11:18.837052   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:11:18.837058   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:11:18.837107   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:11:18.871470   62996 cri.go:89] found id: ""
	I0914 18:11:18.871496   62996 logs.go:276] 0 containers: []
	W0914 18:11:18.871504   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:11:18.871515   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:11:18.871573   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:11:18.904439   62996 cri.go:89] found id: ""
	I0914 18:11:18.904474   62996 logs.go:276] 0 containers: []
	W0914 18:11:18.904485   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:11:18.904494   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:11:18.904504   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:11:18.978025   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:11:18.978065   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:11:19.031667   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:11:19.031709   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:11:19.083360   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:11:19.083398   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:11:19.097770   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:11:19.097796   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:11:19.167712   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:11:18.603529   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:20.607347   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:20.583176   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:23.082414   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:20.501464   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:23.000161   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:25.000597   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:21.668470   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:11:21.681917   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:11:21.681994   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:11:21.717243   62996 cri.go:89] found id: ""
	I0914 18:11:21.717272   62996 logs.go:276] 0 containers: []
	W0914 18:11:21.717281   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:11:21.717286   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:11:21.717341   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:11:21.748801   62996 cri.go:89] found id: ""
	I0914 18:11:21.748853   62996 logs.go:276] 0 containers: []
	W0914 18:11:21.748863   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:11:21.748871   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:11:21.748930   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:11:21.785146   62996 cri.go:89] found id: ""
	I0914 18:11:21.785171   62996 logs.go:276] 0 containers: []
	W0914 18:11:21.785180   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:11:21.785185   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:11:21.785242   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:11:21.819949   62996 cri.go:89] found id: ""
	I0914 18:11:21.819977   62996 logs.go:276] 0 containers: []
	W0914 18:11:21.819984   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:11:21.819990   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:11:21.820039   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:11:21.852418   62996 cri.go:89] found id: ""
	I0914 18:11:21.852451   62996 logs.go:276] 0 containers: []
	W0914 18:11:21.852461   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:11:21.852468   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:11:21.852535   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:11:21.890170   62996 cri.go:89] found id: ""
	I0914 18:11:21.890205   62996 logs.go:276] 0 containers: []
	W0914 18:11:21.890216   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:11:21.890223   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:11:21.890283   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:11:21.924386   62996 cri.go:89] found id: ""
	I0914 18:11:21.924420   62996 logs.go:276] 0 containers: []
	W0914 18:11:21.924432   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:11:21.924439   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:11:21.924505   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:11:21.960302   62996 cri.go:89] found id: ""
	I0914 18:11:21.960328   62996 logs.go:276] 0 containers: []
	W0914 18:11:21.960337   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:11:21.960346   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:11:21.960360   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:11:22.038804   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:11:22.038839   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:11:22.082411   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:11:22.082444   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:11:22.134306   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:11:22.134339   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:11:22.147891   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:11:22.147919   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:11:22.216582   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:11:24.716879   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:11:24.729436   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:11:24.729506   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:11:24.782796   62996 cri.go:89] found id: ""
	I0914 18:11:24.782822   62996 logs.go:276] 0 containers: []
	W0914 18:11:24.782833   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:11:24.782842   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:11:24.782897   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:11:24.819075   62996 cri.go:89] found id: ""
	I0914 18:11:24.819101   62996 logs.go:276] 0 containers: []
	W0914 18:11:24.819108   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:11:24.819113   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:11:24.819157   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:11:24.852976   62996 cri.go:89] found id: ""
	I0914 18:11:24.853003   62996 logs.go:276] 0 containers: []
	W0914 18:11:24.853013   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:11:24.853020   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:11:24.853083   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:11:24.888010   62996 cri.go:89] found id: ""
	I0914 18:11:24.888041   62996 logs.go:276] 0 containers: []
	W0914 18:11:24.888053   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:11:24.888061   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:11:24.888140   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:11:24.923467   62996 cri.go:89] found id: ""
	I0914 18:11:24.923500   62996 logs.go:276] 0 containers: []
	W0914 18:11:24.923514   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:11:24.923522   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:11:24.923575   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:11:24.961976   62996 cri.go:89] found id: ""
	I0914 18:11:24.962003   62996 logs.go:276] 0 containers: []
	W0914 18:11:24.962011   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:11:24.962018   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:11:24.962079   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:11:24.995831   62996 cri.go:89] found id: ""
	I0914 18:11:24.995854   62996 logs.go:276] 0 containers: []
	W0914 18:11:24.995862   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:11:24.995868   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:11:24.995929   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:11:25.034793   62996 cri.go:89] found id: ""
	I0914 18:11:25.034822   62996 logs.go:276] 0 containers: []
	W0914 18:11:25.034832   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:11:25.034840   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:11:25.034855   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:11:25.048500   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:11:25.048531   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:11:25.120313   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:11:25.120346   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:11:25.120361   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:11:25.200361   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:11:25.200395   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:11:25.238898   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:11:25.238928   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:11:23.103266   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:25.104091   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:25.082804   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:27.582345   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:29.582482   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:27.001813   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:29.500751   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:27.791098   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:11:27.803729   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:11:27.803785   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:11:27.840688   62996 cri.go:89] found id: ""
	I0914 18:11:27.840711   62996 logs.go:276] 0 containers: []
	W0914 18:11:27.840719   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:11:27.840725   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:11:27.840775   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:11:27.874108   62996 cri.go:89] found id: ""
	I0914 18:11:27.874140   62996 logs.go:276] 0 containers: []
	W0914 18:11:27.874151   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:11:27.874176   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:11:27.874241   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:11:27.909352   62996 cri.go:89] found id: ""
	I0914 18:11:27.909392   62996 logs.go:276] 0 containers: []
	W0914 18:11:27.909403   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:11:27.909410   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:11:27.909460   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:11:27.942751   62996 cri.go:89] found id: ""
	I0914 18:11:27.942777   62996 logs.go:276] 0 containers: []
	W0914 18:11:27.942786   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:11:27.942791   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:11:27.942852   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:11:27.977714   62996 cri.go:89] found id: ""
	I0914 18:11:27.977745   62996 logs.go:276] 0 containers: []
	W0914 18:11:27.977756   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:11:27.977764   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:11:27.977830   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:11:28.013681   62996 cri.go:89] found id: ""
	I0914 18:11:28.013711   62996 logs.go:276] 0 containers: []
	W0914 18:11:28.013722   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:11:28.013730   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:11:28.013791   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:11:28.047112   62996 cri.go:89] found id: ""
	I0914 18:11:28.047138   62996 logs.go:276] 0 containers: []
	W0914 18:11:28.047146   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:11:28.047152   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:11:28.047199   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:11:28.084290   62996 cri.go:89] found id: ""
	I0914 18:11:28.084317   62996 logs.go:276] 0 containers: []
	W0914 18:11:28.084331   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:11:28.084340   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:11:28.084351   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:11:28.097720   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:11:28.097756   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:11:28.172054   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:11:28.172074   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:11:28.172085   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:11:28.253611   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:11:28.253644   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:11:28.289904   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:11:28.289938   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:11:27.105655   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:29.602893   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:32.082229   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:34.082649   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:31.501202   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:34.001997   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:30.839215   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:11:30.851580   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:11:30.851654   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:11:30.891232   62996 cri.go:89] found id: ""
	I0914 18:11:30.891261   62996 logs.go:276] 0 containers: []
	W0914 18:11:30.891272   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:11:30.891279   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:11:30.891346   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:11:30.930144   62996 cri.go:89] found id: ""
	I0914 18:11:30.930187   62996 logs.go:276] 0 containers: []
	W0914 18:11:30.930197   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:11:30.930204   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:11:30.930265   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:11:30.965034   62996 cri.go:89] found id: ""
	I0914 18:11:30.965068   62996 logs.go:276] 0 containers: []
	W0914 18:11:30.965080   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:11:30.965087   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:11:30.965150   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:11:30.998927   62996 cri.go:89] found id: ""
	I0914 18:11:30.998955   62996 logs.go:276] 0 containers: []
	W0914 18:11:30.998966   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:11:30.998974   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:11:30.999039   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:11:31.033789   62996 cri.go:89] found id: ""
	I0914 18:11:31.033820   62996 logs.go:276] 0 containers: []
	W0914 18:11:31.033830   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:11:31.033838   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:11:31.033892   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:11:31.068988   62996 cri.go:89] found id: ""
	I0914 18:11:31.069020   62996 logs.go:276] 0 containers: []
	W0914 18:11:31.069029   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:11:31.069035   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:11:31.069085   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:11:31.105904   62996 cri.go:89] found id: ""
	I0914 18:11:31.105932   62996 logs.go:276] 0 containers: []
	W0914 18:11:31.105944   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:11:31.105951   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:11:31.106018   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:11:31.147560   62996 cri.go:89] found id: ""
	I0914 18:11:31.147593   62996 logs.go:276] 0 containers: []
	W0914 18:11:31.147606   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:11:31.147618   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:11:31.147633   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:11:31.237347   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:11:31.237373   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:11:31.237389   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:11:31.322978   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:11:31.323012   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:11:31.361464   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:11:31.361495   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:11:31.417255   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:11:31.417299   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:11:33.930962   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:11:33.944431   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:11:33.944514   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:11:33.979727   62996 cri.go:89] found id: ""
	I0914 18:11:33.979761   62996 logs.go:276] 0 containers: []
	W0914 18:11:33.979772   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:11:33.979779   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:11:33.979840   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:11:34.015069   62996 cri.go:89] found id: ""
	I0914 18:11:34.015100   62996 logs.go:276] 0 containers: []
	W0914 18:11:34.015111   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:11:34.015117   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:11:34.015168   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:11:34.049230   62996 cri.go:89] found id: ""
	I0914 18:11:34.049262   62996 logs.go:276] 0 containers: []
	W0914 18:11:34.049274   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:11:34.049282   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:11:34.049345   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:11:34.086175   62996 cri.go:89] found id: ""
	I0914 18:11:34.086205   62996 logs.go:276] 0 containers: []
	W0914 18:11:34.086216   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:11:34.086225   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:11:34.086286   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:11:34.123534   62996 cri.go:89] found id: ""
	I0914 18:11:34.123563   62996 logs.go:276] 0 containers: []
	W0914 18:11:34.123573   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:11:34.123581   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:11:34.123645   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:11:34.160782   62996 cri.go:89] found id: ""
	I0914 18:11:34.160812   62996 logs.go:276] 0 containers: []
	W0914 18:11:34.160822   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:11:34.160830   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:11:34.160891   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:11:34.193240   62996 cri.go:89] found id: ""
	I0914 18:11:34.193264   62996 logs.go:276] 0 containers: []
	W0914 18:11:34.193272   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:11:34.193278   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:11:34.193336   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:11:34.232788   62996 cri.go:89] found id: ""
	I0914 18:11:34.232816   62996 logs.go:276] 0 containers: []
	W0914 18:11:34.232827   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:11:34.232838   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:11:34.232851   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:11:34.284953   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:11:34.284993   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:11:34.299462   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:11:34.299491   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:11:34.370596   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:11:34.370623   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:11:34.370638   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:11:34.450082   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:11:34.450118   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:11:32.103194   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:34.103615   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:36.603139   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:36.083120   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:38.582197   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:36.500663   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:38.501005   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:36.991625   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:11:37.009170   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:11:37.009229   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:11:37.044035   62996 cri.go:89] found id: ""
	I0914 18:11:37.044058   62996 logs.go:276] 0 containers: []
	W0914 18:11:37.044066   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:11:37.044072   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:11:37.044126   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:11:37.076288   62996 cri.go:89] found id: ""
	I0914 18:11:37.076318   62996 logs.go:276] 0 containers: []
	W0914 18:11:37.076328   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:11:37.076336   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:11:37.076399   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:11:37.110509   62996 cri.go:89] found id: ""
	I0914 18:11:37.110533   62996 logs.go:276] 0 containers: []
	W0914 18:11:37.110541   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:11:37.110553   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:11:37.110603   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:11:37.143688   62996 cri.go:89] found id: ""
	I0914 18:11:37.143713   62996 logs.go:276] 0 containers: []
	W0914 18:11:37.143721   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:11:37.143726   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:11:37.143781   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:11:37.180802   62996 cri.go:89] found id: ""
	I0914 18:11:37.180828   62996 logs.go:276] 0 containers: []
	W0914 18:11:37.180839   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:11:37.180846   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:11:37.180907   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:11:37.214590   62996 cri.go:89] found id: ""
	I0914 18:11:37.214615   62996 logs.go:276] 0 containers: []
	W0914 18:11:37.214623   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:11:37.214628   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:11:37.214674   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:11:37.246039   62996 cri.go:89] found id: ""
	I0914 18:11:37.246067   62996 logs.go:276] 0 containers: []
	W0914 18:11:37.246078   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:11:37.246085   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:11:37.246152   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:11:37.278258   62996 cri.go:89] found id: ""
	I0914 18:11:37.278299   62996 logs.go:276] 0 containers: []
	W0914 18:11:37.278307   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:11:37.278315   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:11:37.278325   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:11:37.315788   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:11:37.315817   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:11:37.367286   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:11:37.367322   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:11:37.380863   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:11:37.380894   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:11:37.447925   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:11:37.447948   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:11:37.447959   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:11:40.025419   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:11:40.038279   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:11:40.038361   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:11:40.072986   62996 cri.go:89] found id: ""
	I0914 18:11:40.073021   62996 logs.go:276] 0 containers: []
	W0914 18:11:40.073033   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:11:40.073041   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:11:40.073102   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:11:40.107636   62996 cri.go:89] found id: ""
	I0914 18:11:40.107657   62996 logs.go:276] 0 containers: []
	W0914 18:11:40.107665   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:11:40.107670   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:11:40.107723   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:11:40.145308   62996 cri.go:89] found id: ""
	I0914 18:11:40.145347   62996 logs.go:276] 0 containers: []
	W0914 18:11:40.145359   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:11:40.145366   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:11:40.145412   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:11:40.182409   62996 cri.go:89] found id: ""
	I0914 18:11:40.182439   62996 logs.go:276] 0 containers: []
	W0914 18:11:40.182449   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:11:40.182457   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:11:40.182522   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:11:40.217621   62996 cri.go:89] found id: ""
	I0914 18:11:40.217655   62996 logs.go:276] 0 containers: []
	W0914 18:11:40.217667   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:11:40.217675   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:11:40.217738   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:11:40.253159   62996 cri.go:89] found id: ""
	I0914 18:11:40.253186   62996 logs.go:276] 0 containers: []
	W0914 18:11:40.253197   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:11:40.253205   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:11:40.253263   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:11:40.286808   62996 cri.go:89] found id: ""
	I0914 18:11:40.286838   62996 logs.go:276] 0 containers: []
	W0914 18:11:40.286847   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:11:40.286855   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:11:40.286910   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:11:40.324265   62996 cri.go:89] found id: ""
	I0914 18:11:40.324292   62996 logs.go:276] 0 containers: []
	W0914 18:11:40.324299   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:11:40.324307   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:11:40.324318   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:11:38.603823   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:41.102313   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:40.583132   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:43.082387   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:40.501996   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:43.000447   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:40.376962   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:11:40.376996   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:11:40.390564   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:11:40.390594   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:11:40.460934   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:11:40.460956   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:11:40.460967   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:11:40.537058   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:11:40.537099   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:11:43.075401   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:11:43.088488   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:11:43.088559   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:11:43.122777   62996 cri.go:89] found id: ""
	I0914 18:11:43.122802   62996 logs.go:276] 0 containers: []
	W0914 18:11:43.122811   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:11:43.122818   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:11:43.122878   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:11:43.155343   62996 cri.go:89] found id: ""
	I0914 18:11:43.155369   62996 logs.go:276] 0 containers: []
	W0914 18:11:43.155378   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:11:43.155383   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:11:43.155443   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:11:43.190350   62996 cri.go:89] found id: ""
	I0914 18:11:43.190379   62996 logs.go:276] 0 containers: []
	W0914 18:11:43.190390   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:11:43.190398   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:11:43.190460   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:11:43.222930   62996 cri.go:89] found id: ""
	I0914 18:11:43.222961   62996 logs.go:276] 0 containers: []
	W0914 18:11:43.222972   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:11:43.222979   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:11:43.223042   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:11:43.256931   62996 cri.go:89] found id: ""
	I0914 18:11:43.256959   62996 logs.go:276] 0 containers: []
	W0914 18:11:43.256971   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:11:43.256977   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:11:43.257044   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:11:43.287691   62996 cri.go:89] found id: ""
	I0914 18:11:43.287720   62996 logs.go:276] 0 containers: []
	W0914 18:11:43.287729   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:11:43.287734   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:11:43.287790   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:11:43.320633   62996 cri.go:89] found id: ""
	I0914 18:11:43.320658   62996 logs.go:276] 0 containers: []
	W0914 18:11:43.320666   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:11:43.320677   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:11:43.320738   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:11:43.354230   62996 cri.go:89] found id: ""
	I0914 18:11:43.354269   62996 logs.go:276] 0 containers: []
	W0914 18:11:43.354280   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:11:43.354291   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:11:43.354304   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:11:43.429256   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:11:43.429293   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:11:43.467929   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:11:43.467957   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:11:43.521266   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:11:43.521305   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:11:43.536471   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:11:43.536511   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:11:43.607588   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:11:43.103756   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:45.602971   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:45.082762   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:47.582353   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:49.584026   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:45.500451   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:47.501831   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:50.001778   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:46.108756   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:11:46.121231   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:11:46.121314   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:11:46.156499   62996 cri.go:89] found id: ""
	I0914 18:11:46.156528   62996 logs.go:276] 0 containers: []
	W0914 18:11:46.156537   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:11:46.156543   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:11:46.156591   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:11:46.192161   62996 cri.go:89] found id: ""
	I0914 18:11:46.192188   62996 logs.go:276] 0 containers: []
	W0914 18:11:46.192197   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:11:46.192203   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:11:46.192263   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:11:46.222784   62996 cri.go:89] found id: ""
	I0914 18:11:46.222816   62996 logs.go:276] 0 containers: []
	W0914 18:11:46.222826   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:11:46.222834   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:11:46.222894   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:11:46.261551   62996 cri.go:89] found id: ""
	I0914 18:11:46.261577   62996 logs.go:276] 0 containers: []
	W0914 18:11:46.261587   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:11:46.261594   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:11:46.261659   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:11:46.298263   62996 cri.go:89] found id: ""
	I0914 18:11:46.298293   62996 logs.go:276] 0 containers: []
	W0914 18:11:46.298303   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:11:46.298311   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:11:46.298387   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:11:46.333477   62996 cri.go:89] found id: ""
	I0914 18:11:46.333502   62996 logs.go:276] 0 containers: []
	W0914 18:11:46.333510   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:11:46.333516   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:11:46.333581   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:11:46.367975   62996 cri.go:89] found id: ""
	I0914 18:11:46.367998   62996 logs.go:276] 0 containers: []
	W0914 18:11:46.368005   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:11:46.368011   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:11:46.368063   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:11:46.402252   62996 cri.go:89] found id: ""
	I0914 18:11:46.402281   62996 logs.go:276] 0 containers: []
	W0914 18:11:46.402293   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:11:46.402310   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:11:46.402329   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:11:46.477212   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:11:46.477252   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:11:46.515542   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:11:46.515568   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:11:46.570108   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:11:46.570146   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:11:46.585989   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:11:46.586019   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:11:46.658769   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:11:49.159920   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:11:49.172748   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:11:49.172810   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:11:49.213555   62996 cri.go:89] found id: ""
	I0914 18:11:49.213585   62996 logs.go:276] 0 containers: []
	W0914 18:11:49.213595   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:11:49.213601   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:11:49.213660   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:11:49.246022   62996 cri.go:89] found id: ""
	I0914 18:11:49.246050   62996 logs.go:276] 0 containers: []
	W0914 18:11:49.246061   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:11:49.246068   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:11:49.246132   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:11:49.279131   62996 cri.go:89] found id: ""
	I0914 18:11:49.279157   62996 logs.go:276] 0 containers: []
	W0914 18:11:49.279167   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:11:49.279175   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:11:49.279236   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:11:49.313159   62996 cri.go:89] found id: ""
	I0914 18:11:49.313187   62996 logs.go:276] 0 containers: []
	W0914 18:11:49.313199   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:11:49.313207   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:11:49.313272   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:11:49.347837   62996 cri.go:89] found id: ""
	I0914 18:11:49.347861   62996 logs.go:276] 0 containers: []
	W0914 18:11:49.347870   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:11:49.347875   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:11:49.347932   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:11:49.381478   62996 cri.go:89] found id: ""
	I0914 18:11:49.381507   62996 logs.go:276] 0 containers: []
	W0914 18:11:49.381516   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:11:49.381522   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:11:49.381577   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:11:49.417197   62996 cri.go:89] found id: ""
	I0914 18:11:49.417224   62996 logs.go:276] 0 containers: []
	W0914 18:11:49.417238   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:11:49.417244   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:11:49.417313   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:11:49.450806   62996 cri.go:89] found id: ""
	I0914 18:11:49.450843   62996 logs.go:276] 0 containers: []
	W0914 18:11:49.450857   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:11:49.450870   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:11:49.450889   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:11:49.519573   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:11:49.519620   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:11:49.519639   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:11:49.595525   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:11:49.595565   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:11:49.633229   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:11:49.633259   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:11:49.688667   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:11:49.688710   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:11:47.605117   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:50.103023   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:52.082751   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:54.582016   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:52.501977   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:55.000564   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:52.206555   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:11:52.218920   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:11:52.218996   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:11:52.253986   62996 cri.go:89] found id: ""
	I0914 18:11:52.254010   62996 logs.go:276] 0 containers: []
	W0914 18:11:52.254018   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:11:52.254023   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:11:52.254070   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:11:52.286590   62996 cri.go:89] found id: ""
	I0914 18:11:52.286618   62996 logs.go:276] 0 containers: []
	W0914 18:11:52.286629   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:11:52.286636   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:11:52.286698   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:11:52.325419   62996 cri.go:89] found id: ""
	I0914 18:11:52.325454   62996 logs.go:276] 0 containers: []
	W0914 18:11:52.325464   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:11:52.325471   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:11:52.325533   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:11:52.363050   62996 cri.go:89] found id: ""
	I0914 18:11:52.363079   62996 logs.go:276] 0 containers: []
	W0914 18:11:52.363091   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:11:52.363098   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:11:52.363160   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:11:52.400107   62996 cri.go:89] found id: ""
	I0914 18:11:52.400142   62996 logs.go:276] 0 containers: []
	W0914 18:11:52.400153   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:11:52.400162   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:11:52.400229   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:11:52.435711   62996 cri.go:89] found id: ""
	I0914 18:11:52.435735   62996 logs.go:276] 0 containers: []
	W0914 18:11:52.435744   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:11:52.435752   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:11:52.435806   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:11:52.470761   62996 cri.go:89] found id: ""
	I0914 18:11:52.470789   62996 logs.go:276] 0 containers: []
	W0914 18:11:52.470800   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:11:52.470808   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:11:52.470875   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:11:52.505680   62996 cri.go:89] found id: ""
	I0914 18:11:52.505705   62996 logs.go:276] 0 containers: []
	W0914 18:11:52.505714   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:11:52.505725   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:11:52.505745   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:11:52.557577   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:11:52.557616   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:11:52.571785   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:11:52.571817   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:11:52.639759   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:11:52.639790   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:11:52.639805   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:11:52.727022   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:11:52.727072   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
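	The repeating block above (PID 62996) is minikube's retry loop while it waits for the old-k8s-version (v1.20.0) control plane to come up: it probes for a kube-apiserver process, asks the CRI runtime for each control-plane container (none exist yet, hence every `found id: ""` / `0 containers` line), and then collects kubelet, dmesg, describe-nodes, CRI-O, and container-status output. A minimal shell sketch of the same probes run on the node (it only mirrors the commands already shown in the log and assumes crictl and journalctl are on the PATH; it is not minikube's own code):
	# is any apiserver process running at all?
	sudo pgrep -xnf 'kube-apiserver.*minikube.*'
	# ask the CRI runtime for each control-plane container (empty output == not created yet)
	for name in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager; do
	  sudo crictl ps -a --quiet --name="$name"
	done
	# the same log sources minikube gathers while it waits
	sudo journalctl -u kubelet -n 400
	sudo journalctl -u crio -n 400
	sudo crictl ps -a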
	I0914 18:11:55.266381   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:11:55.279300   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:11:55.279376   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:11:55.315414   62996 cri.go:89] found id: ""
	I0914 18:11:55.315455   62996 logs.go:276] 0 containers: []
	W0914 18:11:55.315463   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:11:55.315472   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:11:55.315539   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:11:52.603110   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:54.603267   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:56.582121   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:58.583277   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:57.001624   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:59.501328   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
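	The interleaved pod_ready.go:103 lines come from three other test processes (PIDs 62207, 62554, 63448) that are blocked waiting for their metrics-server pods in kube-system to report Ready; they keep polling in parallel while PID 62996 retries its control-plane checks. A hedged one-liner for the same readiness check (the label selector is the conventional one for the metrics-server addon and may differ in a given deployment):
	kubectl -n kube-system get pod -l k8s-app=metrics-server \
	  -o jsonpath='{range .items[*]}{.metadata.name}{" Ready="}{.status.conditions[?(@.type=="Ready")].status}{"\n"}{end}'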
	I0914 18:11:55.350153   62996 cri.go:89] found id: ""
	I0914 18:11:55.350203   62996 logs.go:276] 0 containers: []
	W0914 18:11:55.350213   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:11:55.350218   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:11:55.350296   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:11:55.387403   62996 cri.go:89] found id: ""
	I0914 18:11:55.387437   62996 logs.go:276] 0 containers: []
	W0914 18:11:55.387459   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:11:55.387467   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:11:55.387522   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:11:55.424532   62996 cri.go:89] found id: ""
	I0914 18:11:55.424558   62996 logs.go:276] 0 containers: []
	W0914 18:11:55.424566   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:11:55.424575   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:11:55.424664   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:11:55.462423   62996 cri.go:89] found id: ""
	I0914 18:11:55.462458   62996 logs.go:276] 0 containers: []
	W0914 18:11:55.462468   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:11:55.462475   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:11:55.462536   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:11:55.496865   62996 cri.go:89] found id: ""
	I0914 18:11:55.496900   62996 logs.go:276] 0 containers: []
	W0914 18:11:55.496911   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:11:55.496921   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:11:55.496986   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:11:55.531524   62996 cri.go:89] found id: ""
	I0914 18:11:55.531566   62996 logs.go:276] 0 containers: []
	W0914 18:11:55.531577   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:11:55.531598   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:11:55.531663   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:11:55.566579   62996 cri.go:89] found id: ""
	I0914 18:11:55.566606   62996 logs.go:276] 0 containers: []
	W0914 18:11:55.566615   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:11:55.566623   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:11:55.566635   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:11:55.621074   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:11:55.621122   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:11:55.635805   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:11:55.635832   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:11:55.702346   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:11:55.702373   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:11:55.702387   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:11:55.778589   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:11:55.778639   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:11:58.317118   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:11:58.330312   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:11:58.330382   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:11:58.363550   62996 cri.go:89] found id: ""
	I0914 18:11:58.363587   62996 logs.go:276] 0 containers: []
	W0914 18:11:58.363598   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:11:58.363606   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:11:58.363669   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:11:58.397152   62996 cri.go:89] found id: ""
	I0914 18:11:58.397183   62996 logs.go:276] 0 containers: []
	W0914 18:11:58.397194   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:11:58.397201   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:11:58.397259   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:11:58.435076   62996 cri.go:89] found id: ""
	I0914 18:11:58.435102   62996 logs.go:276] 0 containers: []
	W0914 18:11:58.435111   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:11:58.435116   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:11:58.435184   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:11:58.471455   62996 cri.go:89] found id: ""
	I0914 18:11:58.471479   62996 logs.go:276] 0 containers: []
	W0914 18:11:58.471487   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:11:58.471493   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:11:58.471551   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:11:58.504545   62996 cri.go:89] found id: ""
	I0914 18:11:58.504586   62996 logs.go:276] 0 containers: []
	W0914 18:11:58.504596   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:11:58.504603   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:11:58.504662   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:11:58.539335   62996 cri.go:89] found id: ""
	I0914 18:11:58.539362   62996 logs.go:276] 0 containers: []
	W0914 18:11:58.539376   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:11:58.539383   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:11:58.539431   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:11:58.579707   62996 cri.go:89] found id: ""
	I0914 18:11:58.579737   62996 logs.go:276] 0 containers: []
	W0914 18:11:58.579747   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:11:58.579755   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:11:58.579814   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:11:58.614227   62996 cri.go:89] found id: ""
	I0914 18:11:58.614250   62996 logs.go:276] 0 containers: []
	W0914 18:11:58.614259   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:11:58.614266   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:11:58.614279   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:11:58.699846   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:11:58.699888   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:11:58.738513   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:11:58.738542   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:11:58.787858   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:11:58.787895   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:11:58.801103   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:11:58.801137   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:11:58.868291   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:11:57.102934   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:59.103345   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:01.604125   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:01.083045   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:03.582885   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:01.501890   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:04.001023   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:01.368810   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:12:01.381287   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:12:01.381359   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:12:01.414556   62996 cri.go:89] found id: ""
	I0914 18:12:01.414587   62996 logs.go:276] 0 containers: []
	W0914 18:12:01.414599   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:12:01.414611   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:12:01.414661   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:12:01.447765   62996 cri.go:89] found id: ""
	I0914 18:12:01.447795   62996 logs.go:276] 0 containers: []
	W0914 18:12:01.447806   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:12:01.447813   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:12:01.447875   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:12:01.481012   62996 cri.go:89] found id: ""
	I0914 18:12:01.481045   62996 logs.go:276] 0 containers: []
	W0914 18:12:01.481057   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:12:01.481065   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:12:01.481126   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:12:01.516999   62996 cri.go:89] found id: ""
	I0914 18:12:01.517024   62996 logs.go:276] 0 containers: []
	W0914 18:12:01.517031   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:12:01.517037   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:12:01.517088   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:12:01.555520   62996 cri.go:89] found id: ""
	I0914 18:12:01.555548   62996 logs.go:276] 0 containers: []
	W0914 18:12:01.555559   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:12:01.555566   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:12:01.555642   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:12:01.589581   62996 cri.go:89] found id: ""
	I0914 18:12:01.589606   62996 logs.go:276] 0 containers: []
	W0914 18:12:01.589616   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:12:01.589624   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:12:01.589691   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:12:01.623955   62996 cri.go:89] found id: ""
	I0914 18:12:01.623983   62996 logs.go:276] 0 containers: []
	W0914 18:12:01.623995   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:12:01.624002   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:12:01.624067   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:12:01.659136   62996 cri.go:89] found id: ""
	I0914 18:12:01.659166   62996 logs.go:276] 0 containers: []
	W0914 18:12:01.659177   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:12:01.659187   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:12:01.659206   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:12:01.711812   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:12:01.711849   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:12:01.724934   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:12:01.724968   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:12:01.793052   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:12:01.793079   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:12:01.793091   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:12:01.866761   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:12:01.866799   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:12:04.406435   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:12:04.419756   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:12:04.419818   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:12:04.456593   62996 cri.go:89] found id: ""
	I0914 18:12:04.456621   62996 logs.go:276] 0 containers: []
	W0914 18:12:04.456632   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:12:04.456639   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:12:04.456689   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:12:04.489281   62996 cri.go:89] found id: ""
	I0914 18:12:04.489314   62996 logs.go:276] 0 containers: []
	W0914 18:12:04.489326   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:12:04.489333   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:12:04.489399   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:12:04.525353   62996 cri.go:89] found id: ""
	I0914 18:12:04.525381   62996 logs.go:276] 0 containers: []
	W0914 18:12:04.525391   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:12:04.525398   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:12:04.525464   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:12:04.558495   62996 cri.go:89] found id: ""
	I0914 18:12:04.558520   62996 logs.go:276] 0 containers: []
	W0914 18:12:04.558531   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:12:04.558539   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:12:04.558598   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:12:04.594815   62996 cri.go:89] found id: ""
	I0914 18:12:04.594837   62996 logs.go:276] 0 containers: []
	W0914 18:12:04.594845   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:12:04.594851   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:12:04.594899   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:12:04.630198   62996 cri.go:89] found id: ""
	I0914 18:12:04.630224   62996 logs.go:276] 0 containers: []
	W0914 18:12:04.630232   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:12:04.630238   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:12:04.630294   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:12:04.665328   62996 cri.go:89] found id: ""
	I0914 18:12:04.665358   62996 logs.go:276] 0 containers: []
	W0914 18:12:04.665368   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:12:04.665373   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:12:04.665432   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:12:04.699778   62996 cri.go:89] found id: ""
	I0914 18:12:04.699801   62996 logs.go:276] 0 containers: []
	W0914 18:12:04.699809   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:12:04.699816   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:12:04.699877   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:12:04.750978   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:12:04.751022   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:12:04.764968   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:12:04.764998   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:12:04.839464   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:12:04.839494   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:12:04.839509   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:12:04.917939   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:12:04.917979   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
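	Each `failed describe nodes` block above ends the same way: `kubectl describe nodes` against the node's own kubeconfig is refused on localhost:8443 because, as the crictl probes show, no kube-apiserver container has been created, so nothing is listening on that port. A short sketch for confirming the same condition by hand (the endpoint and port are taken from the log, not verified against this test environment):
	# refused for as long as the apiserver container is absent
	curl -k --max-time 5 https://localhost:8443/healthz
	# cross-check with the runtime: is there an apiserver container in any state?
	sudo crictl ps -a --name kube-apiserver
	# kubelet logs usually say why the static pod has not started
	sudo journalctl -u kubelet -n 200 | grep -i apiserver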
	I0914 18:12:04.103388   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:06.103725   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:06.083003   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:08.581415   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:06.002052   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:08.500393   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:07.459389   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:12:07.472630   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:12:07.472691   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:12:07.507993   62996 cri.go:89] found id: ""
	I0914 18:12:07.508029   62996 logs.go:276] 0 containers: []
	W0914 18:12:07.508040   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:12:07.508047   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:12:07.508110   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:12:07.541083   62996 cri.go:89] found id: ""
	I0914 18:12:07.541108   62996 logs.go:276] 0 containers: []
	W0914 18:12:07.541116   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:12:07.541121   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:12:07.541184   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:12:07.574973   62996 cri.go:89] found id: ""
	I0914 18:12:07.574995   62996 logs.go:276] 0 containers: []
	W0914 18:12:07.575003   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:12:07.575008   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:12:07.575052   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:12:07.610166   62996 cri.go:89] found id: ""
	I0914 18:12:07.610189   62996 logs.go:276] 0 containers: []
	W0914 18:12:07.610196   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:12:07.610202   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:12:07.610247   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:12:07.643090   62996 cri.go:89] found id: ""
	I0914 18:12:07.643118   62996 logs.go:276] 0 containers: []
	W0914 18:12:07.643129   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:12:07.643140   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:12:07.643201   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:12:07.676788   62996 cri.go:89] found id: ""
	I0914 18:12:07.676814   62996 logs.go:276] 0 containers: []
	W0914 18:12:07.676825   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:12:07.676832   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:12:07.676895   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:12:07.714122   62996 cri.go:89] found id: ""
	I0914 18:12:07.714147   62996 logs.go:276] 0 containers: []
	W0914 18:12:07.714173   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:12:07.714179   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:12:07.714226   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:12:07.748168   62996 cri.go:89] found id: ""
	I0914 18:12:07.748193   62996 logs.go:276] 0 containers: []
	W0914 18:12:07.748204   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:12:07.748214   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:12:07.748230   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:12:07.784739   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:12:07.784766   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:12:07.833431   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:12:07.833467   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:12:07.846072   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:12:07.846100   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:12:07.912540   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:12:07.912560   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:12:07.912584   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:12:08.602880   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:10.604231   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:10.582647   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:13.082818   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:10.500953   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:13.001310   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:10.488543   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:12:10.502119   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:12:10.502203   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:12:10.535390   62996 cri.go:89] found id: ""
	I0914 18:12:10.535420   62996 logs.go:276] 0 containers: []
	W0914 18:12:10.535429   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:12:10.535435   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:12:10.535487   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:12:10.572013   62996 cri.go:89] found id: ""
	I0914 18:12:10.572044   62996 logs.go:276] 0 containers: []
	W0914 18:12:10.572052   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:12:10.572057   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:12:10.572105   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:12:10.613597   62996 cri.go:89] found id: ""
	I0914 18:12:10.613621   62996 logs.go:276] 0 containers: []
	W0914 18:12:10.613628   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:12:10.613634   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:12:10.613693   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:12:10.646086   62996 cri.go:89] found id: ""
	I0914 18:12:10.646116   62996 logs.go:276] 0 containers: []
	W0914 18:12:10.646127   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:12:10.646134   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:12:10.646219   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:12:10.679228   62996 cri.go:89] found id: ""
	I0914 18:12:10.679261   62996 logs.go:276] 0 containers: []
	W0914 18:12:10.679273   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:12:10.679281   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:12:10.679340   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:12:10.713321   62996 cri.go:89] found id: ""
	I0914 18:12:10.713350   62996 logs.go:276] 0 containers: []
	W0914 18:12:10.713359   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:12:10.713365   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:12:10.713413   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:12:10.757767   62996 cri.go:89] found id: ""
	I0914 18:12:10.757794   62996 logs.go:276] 0 containers: []
	W0914 18:12:10.757802   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:12:10.757809   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:12:10.757854   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:12:10.797709   62996 cri.go:89] found id: ""
	I0914 18:12:10.797731   62996 logs.go:276] 0 containers: []
	W0914 18:12:10.797739   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:12:10.797747   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:12:10.797757   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:12:10.848431   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:12:10.848474   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:12:10.862205   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:12:10.862239   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:12:10.935215   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:12:10.935242   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:12:10.935260   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:12:11.019021   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:12:11.019056   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:12:13.560773   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:12:13.574835   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:12:13.574899   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:12:13.613543   62996 cri.go:89] found id: ""
	I0914 18:12:13.613569   62996 logs.go:276] 0 containers: []
	W0914 18:12:13.613582   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:12:13.613587   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:12:13.613646   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:12:13.650721   62996 cri.go:89] found id: ""
	I0914 18:12:13.650755   62996 logs.go:276] 0 containers: []
	W0914 18:12:13.650767   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:12:13.650775   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:12:13.650836   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:12:13.684269   62996 cri.go:89] found id: ""
	I0914 18:12:13.684299   62996 logs.go:276] 0 containers: []
	W0914 18:12:13.684310   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:12:13.684317   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:12:13.684376   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:12:13.726440   62996 cri.go:89] found id: ""
	I0914 18:12:13.726474   62996 logs.go:276] 0 containers: []
	W0914 18:12:13.726486   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:12:13.726503   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:12:13.726567   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:12:13.760835   62996 cri.go:89] found id: ""
	I0914 18:12:13.760865   62996 logs.go:276] 0 containers: []
	W0914 18:12:13.760876   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:12:13.760884   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:12:13.760957   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:12:13.801341   62996 cri.go:89] found id: ""
	I0914 18:12:13.801375   62996 logs.go:276] 0 containers: []
	W0914 18:12:13.801386   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:12:13.801394   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:12:13.801456   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:12:13.834307   62996 cri.go:89] found id: ""
	I0914 18:12:13.834332   62996 logs.go:276] 0 containers: []
	W0914 18:12:13.834350   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:12:13.834357   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:12:13.834439   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:12:13.868838   62996 cri.go:89] found id: ""
	I0914 18:12:13.868871   62996 logs.go:276] 0 containers: []
	W0914 18:12:13.868880   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:12:13.868889   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:12:13.868900   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:12:13.919867   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:12:13.919906   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:12:13.933383   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:12:13.933423   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:12:14.010559   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:12:14.010592   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:12:14.010606   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:12:14.087876   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:12:14.087913   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:12:13.103254   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:15.103641   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:15.083238   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:17.582387   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:15.501029   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:17.505028   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:20.001929   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:16.630473   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:12:16.643114   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:12:16.643196   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:12:16.680922   62996 cri.go:89] found id: ""
	I0914 18:12:16.680954   62996 logs.go:276] 0 containers: []
	W0914 18:12:16.680962   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:12:16.680968   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:12:16.681015   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:12:16.715549   62996 cri.go:89] found id: ""
	I0914 18:12:16.715582   62996 logs.go:276] 0 containers: []
	W0914 18:12:16.715592   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:12:16.715598   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:12:16.715666   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:12:16.753928   62996 cri.go:89] found id: ""
	I0914 18:12:16.753951   62996 logs.go:276] 0 containers: []
	W0914 18:12:16.753962   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:12:16.753969   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:12:16.754033   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:12:16.787677   62996 cri.go:89] found id: ""
	I0914 18:12:16.787705   62996 logs.go:276] 0 containers: []
	W0914 18:12:16.787716   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:12:16.787723   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:12:16.787776   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:12:16.823638   62996 cri.go:89] found id: ""
	I0914 18:12:16.823667   62996 logs.go:276] 0 containers: []
	W0914 18:12:16.823678   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:12:16.823686   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:12:16.823748   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:12:16.860204   62996 cri.go:89] found id: ""
	I0914 18:12:16.860238   62996 logs.go:276] 0 containers: []
	W0914 18:12:16.860249   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:12:16.860257   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:12:16.860329   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:12:16.898802   62996 cri.go:89] found id: ""
	I0914 18:12:16.898827   62996 logs.go:276] 0 containers: []
	W0914 18:12:16.898837   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:12:16.898854   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:12:16.898941   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:12:16.932719   62996 cri.go:89] found id: ""
	I0914 18:12:16.932745   62996 logs.go:276] 0 containers: []
	W0914 18:12:16.932753   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:12:16.932762   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:12:16.932779   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:12:16.986217   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:12:16.986257   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:12:17.003243   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:12:17.003278   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:12:17.071374   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:12:17.071397   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:12:17.071409   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:12:17.152058   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:12:17.152112   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:12:19.717782   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:12:19.731122   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:12:19.731199   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:12:19.769042   62996 cri.go:89] found id: ""
	I0914 18:12:19.769070   62996 logs.go:276] 0 containers: []
	W0914 18:12:19.769079   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:12:19.769084   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:12:19.769154   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:12:19.804666   62996 cri.go:89] found id: ""
	I0914 18:12:19.804691   62996 logs.go:276] 0 containers: []
	W0914 18:12:19.804698   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:12:19.804704   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:12:19.804761   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:12:19.838705   62996 cri.go:89] found id: ""
	I0914 18:12:19.838729   62996 logs.go:276] 0 containers: []
	W0914 18:12:19.838738   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:12:19.838744   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:12:19.838790   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:12:19.873412   62996 cri.go:89] found id: ""
	I0914 18:12:19.873441   62996 logs.go:276] 0 containers: []
	W0914 18:12:19.873449   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:12:19.873455   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:12:19.873535   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:12:19.917706   62996 cri.go:89] found id: ""
	I0914 18:12:19.917734   62996 logs.go:276] 0 containers: []
	W0914 18:12:19.917746   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:12:19.917754   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:12:19.917813   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:12:19.956149   62996 cri.go:89] found id: ""
	I0914 18:12:19.956177   62996 logs.go:276] 0 containers: []
	W0914 18:12:19.956188   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:12:19.956196   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:12:19.956255   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:12:19.988903   62996 cri.go:89] found id: ""
	I0914 18:12:19.988926   62996 logs.go:276] 0 containers: []
	W0914 18:12:19.988934   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:12:19.988939   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:12:19.988988   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:12:20.023785   62996 cri.go:89] found id: ""
	I0914 18:12:20.023814   62996 logs.go:276] 0 containers: []
	W0914 18:12:20.023823   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:12:20.023833   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:12:20.023846   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:12:20.036891   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:12:20.036918   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:12:20.112397   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:12:20.112422   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:12:20.112437   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:12:20.195767   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:12:20.195801   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:12:20.235439   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:12:20.235467   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:12:17.103996   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:19.603109   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:21.603150   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:20.083547   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:22.586009   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:22.002367   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:24.500394   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:22.784765   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:12:22.799193   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:12:22.799267   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:12:22.840939   62996 cri.go:89] found id: ""
	I0914 18:12:22.840974   62996 logs.go:276] 0 containers: []
	W0914 18:12:22.840983   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:12:22.840990   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:12:22.841051   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:12:22.878920   62996 cri.go:89] found id: ""
	I0914 18:12:22.878951   62996 logs.go:276] 0 containers: []
	W0914 18:12:22.878962   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:12:22.878970   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:12:22.879021   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:12:22.926127   62996 cri.go:89] found id: ""
	I0914 18:12:22.926175   62996 logs.go:276] 0 containers: []
	W0914 18:12:22.926187   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:12:22.926195   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:12:22.926250   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:12:22.972041   62996 cri.go:89] found id: ""
	I0914 18:12:22.972068   62996 logs.go:276] 0 containers: []
	W0914 18:12:22.972076   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:12:22.972082   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:12:22.972137   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:12:23.012662   62996 cri.go:89] found id: ""
	I0914 18:12:23.012694   62996 logs.go:276] 0 containers: []
	W0914 18:12:23.012705   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:12:23.012712   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:12:23.012772   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:12:23.058923   62996 cri.go:89] found id: ""
	I0914 18:12:23.058950   62996 logs.go:276] 0 containers: []
	W0914 18:12:23.058958   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:12:23.058963   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:12:23.059011   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:12:23.098275   62996 cri.go:89] found id: ""
	I0914 18:12:23.098308   62996 logs.go:276] 0 containers: []
	W0914 18:12:23.098320   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:12:23.098327   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:12:23.098380   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:12:23.133498   62996 cri.go:89] found id: ""
	I0914 18:12:23.133525   62996 logs.go:276] 0 containers: []
	W0914 18:12:23.133534   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:12:23.133542   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:12:23.133554   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:12:23.201430   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:12:23.201456   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:12:23.201470   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:12:23.282388   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:12:23.282424   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:12:23.319896   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:12:23.319924   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:12:23.373629   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:12:23.373664   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:12:23.603351   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:26.103668   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:25.082824   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:27.582534   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:27.001617   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:29.002224   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:25.887183   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:12:25.901089   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:12:25.901168   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:12:25.934112   62996 cri.go:89] found id: ""
	I0914 18:12:25.934138   62996 logs.go:276] 0 containers: []
	W0914 18:12:25.934147   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:12:25.934153   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:12:25.934210   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:12:25.969202   62996 cri.go:89] found id: ""
	I0914 18:12:25.969228   62996 logs.go:276] 0 containers: []
	W0914 18:12:25.969236   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:12:25.969242   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:12:25.969300   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:12:26.005516   62996 cri.go:89] found id: ""
	I0914 18:12:26.005537   62996 logs.go:276] 0 containers: []
	W0914 18:12:26.005545   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:12:26.005551   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:12:26.005622   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:12:26.039162   62996 cri.go:89] found id: ""
	I0914 18:12:26.039189   62996 logs.go:276] 0 containers: []
	W0914 18:12:26.039199   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:12:26.039206   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:12:26.039266   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:12:26.073626   62996 cri.go:89] found id: ""
	I0914 18:12:26.073660   62996 logs.go:276] 0 containers: []
	W0914 18:12:26.073674   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:12:26.073682   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:12:26.073752   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:12:26.112057   62996 cri.go:89] found id: ""
	I0914 18:12:26.112086   62996 logs.go:276] 0 containers: []
	W0914 18:12:26.112097   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:12:26.112104   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:12:26.112168   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:12:26.145874   62996 cri.go:89] found id: ""
	I0914 18:12:26.145903   62996 logs.go:276] 0 containers: []
	W0914 18:12:26.145915   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:12:26.145923   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:12:26.145978   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:12:26.178959   62996 cri.go:89] found id: ""
	I0914 18:12:26.178989   62996 logs.go:276] 0 containers: []
	W0914 18:12:26.178997   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:12:26.179005   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:12:26.179018   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:12:26.251132   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:12:26.251156   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:12:26.251174   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:12:26.327488   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:12:26.327528   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:12:26.368444   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:12:26.368471   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:12:26.422676   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:12:26.422715   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:12:28.936784   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:12:28.960435   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:12:28.960515   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:12:29.012679   62996 cri.go:89] found id: ""
	I0914 18:12:29.012710   62996 logs.go:276] 0 containers: []
	W0914 18:12:29.012721   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:12:29.012729   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:12:29.012786   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:12:29.045058   62996 cri.go:89] found id: ""
	I0914 18:12:29.045091   62996 logs.go:276] 0 containers: []
	W0914 18:12:29.045102   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:12:29.045115   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:12:29.045180   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:12:29.079176   62996 cri.go:89] found id: ""
	I0914 18:12:29.079202   62996 logs.go:276] 0 containers: []
	W0914 18:12:29.079209   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:12:29.079216   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:12:29.079279   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:12:29.114288   62996 cri.go:89] found id: ""
	I0914 18:12:29.114317   62996 logs.go:276] 0 containers: []
	W0914 18:12:29.114337   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:12:29.114344   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:12:29.114404   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:12:29.147554   62996 cri.go:89] found id: ""
	I0914 18:12:29.147578   62996 logs.go:276] 0 containers: []
	W0914 18:12:29.147586   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:12:29.147592   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:12:29.147653   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:12:29.181739   62996 cri.go:89] found id: ""
	I0914 18:12:29.181767   62996 logs.go:276] 0 containers: []
	W0914 18:12:29.181775   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:12:29.181781   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:12:29.181825   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:12:29.220328   62996 cri.go:89] found id: ""
	I0914 18:12:29.220356   62996 logs.go:276] 0 containers: []
	W0914 18:12:29.220364   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:12:29.220373   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:12:29.220429   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:12:29.250900   62996 cri.go:89] found id: ""
	I0914 18:12:29.250929   62996 logs.go:276] 0 containers: []
	W0914 18:12:29.250941   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:12:29.250951   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:12:29.250966   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:12:29.287790   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:12:29.287820   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:12:29.338153   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:12:29.338194   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:12:29.351520   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:12:29.351547   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:12:29.421429   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:12:29.421457   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:12:29.421471   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:12:28.104044   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:30.602717   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:30.083027   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:32.083454   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:34.582698   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:31.002459   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:33.500924   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:31.997578   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:12:32.011256   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:12:32.011331   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:12:32.043761   62996 cri.go:89] found id: ""
	I0914 18:12:32.043793   62996 logs.go:276] 0 containers: []
	W0914 18:12:32.043801   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:12:32.043806   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:12:32.043859   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:12:32.076497   62996 cri.go:89] found id: ""
	I0914 18:12:32.076526   62996 logs.go:276] 0 containers: []
	W0914 18:12:32.076536   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:12:32.076543   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:12:32.076609   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:12:32.115059   62996 cri.go:89] found id: ""
	I0914 18:12:32.115084   62996 logs.go:276] 0 containers: []
	W0914 18:12:32.115094   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:12:32.115100   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:12:32.115159   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:12:32.153078   62996 cri.go:89] found id: ""
	I0914 18:12:32.153109   62996 logs.go:276] 0 containers: []
	W0914 18:12:32.153124   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:12:32.153130   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:12:32.153179   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:12:32.190539   62996 cri.go:89] found id: ""
	I0914 18:12:32.190621   62996 logs.go:276] 0 containers: []
	W0914 18:12:32.190638   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:12:32.190647   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:12:32.190700   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:12:32.231917   62996 cri.go:89] found id: ""
	I0914 18:12:32.231941   62996 logs.go:276] 0 containers: []
	W0914 18:12:32.231949   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:12:32.231955   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:12:32.232013   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:12:32.266197   62996 cri.go:89] found id: ""
	I0914 18:12:32.266227   62996 logs.go:276] 0 containers: []
	W0914 18:12:32.266238   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:12:32.266245   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:12:32.266312   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:12:32.299357   62996 cri.go:89] found id: ""
	I0914 18:12:32.299387   62996 logs.go:276] 0 containers: []
	W0914 18:12:32.299398   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:12:32.299409   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:12:32.299424   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:12:32.353225   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:12:32.353268   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:12:32.368228   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:12:32.368280   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:12:32.447802   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:12:32.447829   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:12:32.447847   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:12:32.523749   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:12:32.523788   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:12:35.063750   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:12:35.078487   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:12:35.078565   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:12:35.112949   62996 cri.go:89] found id: ""
	I0914 18:12:35.112994   62996 logs.go:276] 0 containers: []
	W0914 18:12:35.113008   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:12:35.113015   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:12:35.113068   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:12:35.146890   62996 cri.go:89] found id: ""
	I0914 18:12:35.146921   62996 logs.go:276] 0 containers: []
	W0914 18:12:35.146933   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:12:35.146941   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:12:35.147019   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:12:35.181077   62996 cri.go:89] found id: ""
	I0914 18:12:35.181106   62996 logs.go:276] 0 containers: []
	W0914 18:12:35.181116   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:12:35.181123   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:12:35.181194   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:12:35.214142   62996 cri.go:89] found id: ""
	I0914 18:12:35.214191   62996 logs.go:276] 0 containers: []
	W0914 18:12:35.214203   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:12:35.214215   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:12:35.214275   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:12:35.246615   62996 cri.go:89] found id: ""
	I0914 18:12:35.246644   62996 logs.go:276] 0 containers: []
	W0914 18:12:35.246655   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:12:35.246662   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:12:35.246722   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:12:35.278996   62996 cri.go:89] found id: ""
	I0914 18:12:35.279027   62996 logs.go:276] 0 containers: []
	W0914 18:12:35.279038   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:12:35.279047   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:12:35.279104   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:12:35.312612   62996 cri.go:89] found id: ""
	I0914 18:12:35.312641   62996 logs.go:276] 0 containers: []
	W0914 18:12:35.312650   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:12:35.312655   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:12:35.312711   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:12:32.603673   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:35.103528   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:37.081632   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:39.082269   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:35.501391   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:38.000592   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:40.001479   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:35.347717   62996 cri.go:89] found id: ""
	I0914 18:12:35.347741   62996 logs.go:276] 0 containers: []
	W0914 18:12:35.347749   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:12:35.347757   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:12:35.347767   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:12:35.389062   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:12:35.389090   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:12:35.437235   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:12:35.437277   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:12:35.452236   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:12:35.452275   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:12:35.523334   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:12:35.523371   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:12:35.523396   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:12:38.105613   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:12:38.119147   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:12:38.119214   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:12:38.158373   62996 cri.go:89] found id: ""
	I0914 18:12:38.158397   62996 logs.go:276] 0 containers: []
	W0914 18:12:38.158404   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:12:38.158410   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:12:38.158467   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:12:38.192376   62996 cri.go:89] found id: ""
	I0914 18:12:38.192409   62996 logs.go:276] 0 containers: []
	W0914 18:12:38.192421   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:12:38.192429   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:12:38.192490   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:12:38.230390   62996 cri.go:89] found id: ""
	I0914 18:12:38.230413   62996 logs.go:276] 0 containers: []
	W0914 18:12:38.230422   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:12:38.230427   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:12:38.230476   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:12:38.266608   62996 cri.go:89] found id: ""
	I0914 18:12:38.266634   62996 logs.go:276] 0 containers: []
	W0914 18:12:38.266642   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:12:38.266648   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:12:38.266704   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:12:38.299437   62996 cri.go:89] found id: ""
	I0914 18:12:38.299462   62996 logs.go:276] 0 containers: []
	W0914 18:12:38.299471   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:12:38.299477   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:12:38.299548   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:12:38.331092   62996 cri.go:89] found id: ""
	I0914 18:12:38.331119   62996 logs.go:276] 0 containers: []
	W0914 18:12:38.331128   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:12:38.331135   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:12:38.331194   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:12:38.364447   62996 cri.go:89] found id: ""
	I0914 18:12:38.364475   62996 logs.go:276] 0 containers: []
	W0914 18:12:38.364485   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:12:38.364491   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:12:38.364564   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:12:38.396977   62996 cri.go:89] found id: ""
	I0914 18:12:38.397001   62996 logs.go:276] 0 containers: []
	W0914 18:12:38.397011   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:12:38.397022   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:12:38.397036   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:12:38.477413   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:12:38.477449   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:12:38.515003   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:12:38.515031   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:12:38.567177   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:12:38.567222   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:12:38.580840   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:12:38.580876   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:12:38.654520   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:12:37.602537   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:39.603422   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:41.082861   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:43.583680   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:42.002259   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:44.500927   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:41.154728   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:12:41.167501   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:12:41.167578   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:12:41.200209   62996 cri.go:89] found id: ""
	I0914 18:12:41.200243   62996 logs.go:276] 0 containers: []
	W0914 18:12:41.200254   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:12:41.200260   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:12:41.200309   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:12:41.232386   62996 cri.go:89] found id: ""
	I0914 18:12:41.232415   62996 logs.go:276] 0 containers: []
	W0914 18:12:41.232425   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:12:41.232432   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:12:41.232515   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:12:41.268259   62996 cri.go:89] found id: ""
	I0914 18:12:41.268285   62996 logs.go:276] 0 containers: []
	W0914 18:12:41.268295   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:12:41.268303   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:12:41.268374   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:12:41.299952   62996 cri.go:89] found id: ""
	I0914 18:12:41.299984   62996 logs.go:276] 0 containers: []
	W0914 18:12:41.299992   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:12:41.299998   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:12:41.300055   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:12:41.331851   62996 cri.go:89] found id: ""
	I0914 18:12:41.331877   62996 logs.go:276] 0 containers: []
	W0914 18:12:41.331886   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:12:41.331892   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:12:41.331941   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:12:41.373747   62996 cri.go:89] found id: ""
	I0914 18:12:41.373778   62996 logs.go:276] 0 containers: []
	W0914 18:12:41.373789   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:12:41.373797   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:12:41.373847   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:12:41.410186   62996 cri.go:89] found id: ""
	I0914 18:12:41.410217   62996 logs.go:276] 0 containers: []
	W0914 18:12:41.410228   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:12:41.410235   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:12:41.410296   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:12:41.443926   62996 cri.go:89] found id: ""
	I0914 18:12:41.443961   62996 logs.go:276] 0 containers: []
	W0914 18:12:41.443972   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:12:41.443983   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:12:41.443998   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:12:41.457188   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:12:41.457226   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:12:41.525140   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:12:41.525165   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:12:41.525179   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:12:41.603829   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:12:41.603858   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:12:41.641462   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:12:41.641495   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:12:44.194009   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:12:44.207043   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:12:44.207112   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:12:44.240082   62996 cri.go:89] found id: ""
	I0914 18:12:44.240104   62996 logs.go:276] 0 containers: []
	W0914 18:12:44.240112   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:12:44.240117   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:12:44.240177   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:12:44.271608   62996 cri.go:89] found id: ""
	I0914 18:12:44.271642   62996 logs.go:276] 0 containers: []
	W0914 18:12:44.271653   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:12:44.271660   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:12:44.271721   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:12:44.308447   62996 cri.go:89] found id: ""
	I0914 18:12:44.308475   62996 logs.go:276] 0 containers: []
	W0914 18:12:44.308484   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:12:44.308490   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:12:44.308552   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:12:44.340399   62996 cri.go:89] found id: ""
	I0914 18:12:44.340430   62996 logs.go:276] 0 containers: []
	W0914 18:12:44.340440   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:12:44.340446   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:12:44.340502   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:12:44.374078   62996 cri.go:89] found id: ""
	I0914 18:12:44.374104   62996 logs.go:276] 0 containers: []
	W0914 18:12:44.374112   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:12:44.374118   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:12:44.374190   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:12:44.408933   62996 cri.go:89] found id: ""
	I0914 18:12:44.408963   62996 logs.go:276] 0 containers: []
	W0914 18:12:44.408974   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:12:44.408982   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:12:44.409040   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:12:44.444019   62996 cri.go:89] found id: ""
	I0914 18:12:44.444046   62996 logs.go:276] 0 containers: []
	W0914 18:12:44.444063   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:12:44.444070   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:12:44.444126   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:12:44.477033   62996 cri.go:89] found id: ""
	I0914 18:12:44.477058   62996 logs.go:276] 0 containers: []
	W0914 18:12:44.477066   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:12:44.477075   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:12:44.477086   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:12:44.530118   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:12:44.530151   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:12:44.543295   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:12:44.543327   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:12:44.614448   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:12:44.614474   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:12:44.614488   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:12:44.690708   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:12:44.690744   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:12:42.103521   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:44.603744   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:46.082955   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:48.576914   62554 pod_ready.go:82] duration metric: took 4m0.000963266s for pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace to be "Ready" ...
	E0914 18:12:48.576953   62554 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace to be "Ready" (will not retry!)
	I0914 18:12:48.576972   62554 pod_ready.go:39] duration metric: took 4m11.061091965s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 18:12:48.576996   62554 kubeadm.go:597] duration metric: took 4m18.578277603s to restartPrimaryControlPlane
	W0914 18:12:48.577052   62554 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0914 18:12:48.577082   62554 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0914 18:12:46.501278   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:49.001649   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:47.229658   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:12:47.242715   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:12:47.242785   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:12:47.278275   62996 cri.go:89] found id: ""
	I0914 18:12:47.278298   62996 logs.go:276] 0 containers: []
	W0914 18:12:47.278305   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:12:47.278311   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:12:47.278365   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:12:47.313954   62996 cri.go:89] found id: ""
	I0914 18:12:47.313977   62996 logs.go:276] 0 containers: []
	W0914 18:12:47.313985   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:12:47.313991   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:12:47.314045   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:12:47.350944   62996 cri.go:89] found id: ""
	I0914 18:12:47.350972   62996 logs.go:276] 0 containers: []
	W0914 18:12:47.350983   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:12:47.350990   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:12:47.351052   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:12:47.384810   62996 cri.go:89] found id: ""
	I0914 18:12:47.384838   62996 logs.go:276] 0 containers: []
	W0914 18:12:47.384850   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:12:47.384857   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:12:47.384918   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:12:47.420380   62996 cri.go:89] found id: ""
	I0914 18:12:47.420406   62996 logs.go:276] 0 containers: []
	W0914 18:12:47.420419   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:12:47.420425   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:12:47.420476   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:12:47.453967   62996 cri.go:89] found id: ""
	I0914 18:12:47.453995   62996 logs.go:276] 0 containers: []
	W0914 18:12:47.454003   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:12:47.454009   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:12:47.454060   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:12:47.488588   62996 cri.go:89] found id: ""
	I0914 18:12:47.488616   62996 logs.go:276] 0 containers: []
	W0914 18:12:47.488627   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:12:47.488633   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:12:47.488696   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:12:47.522970   62996 cri.go:89] found id: ""
	I0914 18:12:47.523004   62996 logs.go:276] 0 containers: []
	W0914 18:12:47.523015   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:12:47.523025   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:12:47.523039   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:12:47.575977   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:12:47.576026   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:12:47.590854   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:12:47.590884   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:12:47.662149   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:12:47.662200   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:12:47.662215   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:12:47.740447   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:12:47.740482   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:12:50.279512   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:12:50.292294   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:12:50.292377   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:12:50.330928   62996 cri.go:89] found id: ""
	I0914 18:12:50.330960   62996 logs.go:276] 0 containers: []
	W0914 18:12:50.330972   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:12:50.330980   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:12:50.331036   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:12:47.103834   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:49.104052   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:51.603479   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:51.500469   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:53.500885   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:50.363656   62996 cri.go:89] found id: ""
	I0914 18:12:50.363687   62996 logs.go:276] 0 containers: []
	W0914 18:12:50.363696   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:12:50.363702   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:12:50.363756   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:12:50.395071   62996 cri.go:89] found id: ""
	I0914 18:12:50.395096   62996 logs.go:276] 0 containers: []
	W0914 18:12:50.395107   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:12:50.395113   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:12:50.395172   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:12:50.428461   62996 cri.go:89] found id: ""
	I0914 18:12:50.428487   62996 logs.go:276] 0 containers: []
	W0914 18:12:50.428495   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:12:50.428502   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:12:50.428549   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:12:50.461059   62996 cri.go:89] found id: ""
	I0914 18:12:50.461089   62996 logs.go:276] 0 containers: []
	W0914 18:12:50.461098   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:12:50.461105   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:12:50.461155   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:12:50.495447   62996 cri.go:89] found id: ""
	I0914 18:12:50.495481   62996 logs.go:276] 0 containers: []
	W0914 18:12:50.495492   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:12:50.495500   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:12:50.495574   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:12:50.529535   62996 cri.go:89] found id: ""
	I0914 18:12:50.529563   62996 logs.go:276] 0 containers: []
	W0914 18:12:50.529573   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:12:50.529580   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:12:50.529640   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:12:50.564648   62996 cri.go:89] found id: ""
	I0914 18:12:50.564679   62996 logs.go:276] 0 containers: []
	W0914 18:12:50.564689   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:12:50.564699   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:12:50.564710   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:12:50.639039   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:12:50.639066   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:12:50.639081   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:12:50.715636   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:12:50.715675   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:12:50.752973   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:12:50.753002   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:12:50.804654   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:12:50.804692   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:12:53.319420   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:12:53.332322   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:12:53.332414   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:12:53.370250   62996 cri.go:89] found id: ""
	I0914 18:12:53.370287   62996 logs.go:276] 0 containers: []
	W0914 18:12:53.370298   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:12:53.370306   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:12:53.370359   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:12:53.405394   62996 cri.go:89] found id: ""
	I0914 18:12:53.405422   62996 logs.go:276] 0 containers: []
	W0914 18:12:53.405434   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:12:53.405442   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:12:53.405501   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:12:53.439653   62996 cri.go:89] found id: ""
	I0914 18:12:53.439684   62996 logs.go:276] 0 containers: []
	W0914 18:12:53.439693   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:12:53.439699   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:12:53.439747   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:12:53.472491   62996 cri.go:89] found id: ""
	I0914 18:12:53.472520   62996 logs.go:276] 0 containers: []
	W0914 18:12:53.472531   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:12:53.472537   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:12:53.472598   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:12:53.506837   62996 cri.go:89] found id: ""
	I0914 18:12:53.506862   62996 logs.go:276] 0 containers: []
	W0914 18:12:53.506870   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:12:53.506877   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:12:53.506940   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:12:53.538229   62996 cri.go:89] found id: ""
	I0914 18:12:53.538256   62996 logs.go:276] 0 containers: []
	W0914 18:12:53.538267   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:12:53.538274   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:12:53.538340   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:12:53.570628   62996 cri.go:89] found id: ""
	I0914 18:12:53.570654   62996 logs.go:276] 0 containers: []
	W0914 18:12:53.570665   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:12:53.570672   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:12:53.570736   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:12:53.606147   62996 cri.go:89] found id: ""
	I0914 18:12:53.606188   62996 logs.go:276] 0 containers: []
	W0914 18:12:53.606199   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:12:53.606210   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:12:53.606236   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:12:53.675807   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:12:53.675829   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:12:53.675844   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:12:53.758491   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:12:53.758530   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:12:53.796006   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:12:53.796038   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:12:53.844935   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:12:53.844972   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:12:53.604109   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:56.104639   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:56.360696   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:12:56.374916   62996 kubeadm.go:597] duration metric: took 4m2.856242026s to restartPrimaryControlPlane
	W0914 18:12:56.374982   62996 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0914 18:12:56.375003   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0914 18:12:57.043509   62996 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 18:12:57.059022   62996 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0914 18:12:57.070295   62996 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0914 18:12:57.080854   62996 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0914 18:12:57.080875   62996 kubeadm.go:157] found existing configuration files:
	
	I0914 18:12:57.080917   62996 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0914 18:12:57.091221   62996 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0914 18:12:57.091320   62996 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0914 18:12:57.102011   62996 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0914 18:12:57.111389   62996 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0914 18:12:57.111451   62996 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0914 18:12:57.120508   62996 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0914 18:12:57.129086   62996 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0914 18:12:57.129162   62996 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0914 18:12:57.138193   62996 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0914 18:12:57.146637   62996 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0914 18:12:57.146694   62996 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0914 18:12:57.155659   62996 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0914 18:12:57.230872   62996 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0914 18:12:57.230955   62996 kubeadm.go:310] [preflight] Running pre-flight checks
	I0914 18:12:57.369118   62996 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0914 18:12:57.369267   62996 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0914 18:12:57.369422   62996 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0914 18:12:57.560020   62996 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0914 18:12:57.561972   62996 out.go:235]   - Generating certificates and keys ...
	I0914 18:12:57.562086   62996 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0914 18:12:57.562180   62996 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0914 18:12:57.562311   62996 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0914 18:12:57.562370   62996 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0914 18:12:57.562426   62996 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0914 18:12:57.562473   62996 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0914 18:12:57.562562   62996 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0914 18:12:57.562654   62996 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0914 18:12:57.563036   62996 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0914 18:12:57.563429   62996 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0914 18:12:57.563514   62996 kubeadm.go:310] [certs] Using the existing "sa" key
	I0914 18:12:57.563592   62996 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0914 18:12:57.677534   62996 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0914 18:12:57.910852   62996 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0914 18:12:58.037495   62996 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0914 18:12:58.325552   62996 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0914 18:12:58.339574   62996 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0914 18:12:58.340671   62996 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0914 18:12:58.340740   62996 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0914 18:12:58.485582   62996 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0914 18:12:55.501202   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:57.501413   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:13:00.000020   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:58.488706   62996 out.go:235]   - Booting up control plane ...
	I0914 18:12:58.488863   62996 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0914 18:12:58.496924   62996 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0914 18:12:58.499125   62996 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0914 18:12:58.500762   62996 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0914 18:12:58.504049   62996 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0914 18:12:58.604461   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:13:01.102988   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:13:02.001195   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:13:04.001938   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:13:03.603700   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:13:06.103294   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:13:06.501564   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:13:09.002049   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:13:08.604408   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:13:11.103401   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:13:14.788734   62554 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.2116254s)
	I0914 18:13:14.788816   62554 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 18:13:14.810488   62554 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0914 18:13:14.827773   62554 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0914 18:13:14.846933   62554 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0914 18:13:14.846958   62554 kubeadm.go:157] found existing configuration files:
	
	I0914 18:13:14.847011   62554 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0914 18:13:14.859886   62554 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0914 18:13:14.859954   62554 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0914 18:13:14.882400   62554 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0914 18:13:14.896700   62554 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0914 18:13:14.896779   62554 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0914 18:13:14.908567   62554 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0914 18:13:14.920718   62554 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0914 18:13:14.920791   62554 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0914 18:13:14.930849   62554 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0914 18:13:14.940757   62554 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0914 18:13:14.940829   62554 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0914 18:13:14.950828   62554 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0914 18:13:15.000219   62554 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0914 18:13:15.000292   62554 kubeadm.go:310] [preflight] Running pre-flight checks
	I0914 18:13:15.116662   62554 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0914 18:13:15.116830   62554 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0914 18:13:15.116937   62554 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0914 18:13:15.128493   62554 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0914 18:13:11.002219   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:13:13.500397   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:13:15.130231   62554 out.go:235]   - Generating certificates and keys ...
	I0914 18:13:15.130322   62554 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0914 18:13:15.130412   62554 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0914 18:13:15.130513   62554 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0914 18:13:15.130642   62554 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0914 18:13:15.130762   62554 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0914 18:13:15.130842   62554 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0914 18:13:15.130927   62554 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0914 18:13:15.131020   62554 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0914 18:13:15.131131   62554 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0914 18:13:15.131235   62554 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0914 18:13:15.131325   62554 kubeadm.go:310] [certs] Using the existing "sa" key
	I0914 18:13:15.131417   62554 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0914 18:13:15.454691   62554 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0914 18:13:15.653046   62554 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0914 18:13:15.704029   62554 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0914 18:13:15.846280   62554 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0914 18:13:15.926881   62554 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0914 18:13:15.927633   62554 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0914 18:13:15.932596   62554 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0914 18:13:13.602971   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:13:15.603335   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:13:15.934499   62554 out.go:235]   - Booting up control plane ...
	I0914 18:13:15.934626   62554 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0914 18:13:15.934761   62554 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0914 18:13:15.934913   62554 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0914 18:13:15.952982   62554 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0914 18:13:15.961449   62554 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0914 18:13:15.961526   62554 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0914 18:13:16.102126   62554 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0914 18:13:16.102335   62554 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0914 18:13:16.604217   62554 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.082287ms
	I0914 18:13:16.604330   62554 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0914 18:13:15.501231   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:13:17.501427   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:13:19.501641   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:13:21.609408   62554 kubeadm.go:310] [api-check] The API server is healthy after 5.002255971s
	I0914 18:13:21.622798   62554 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0914 18:13:21.637103   62554 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0914 18:13:21.676498   62554 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0914 18:13:21.676739   62554 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-044534 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0914 18:13:21.697522   62554 kubeadm.go:310] [bootstrap-token] Using token: oo4rrp.xx4py1wjxiu1i6la
	I0914 18:13:17.604060   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:13:20.103115   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:13:21.699311   62554 out.go:235]   - Configuring RBAC rules ...
	I0914 18:13:21.699462   62554 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0914 18:13:21.711614   62554 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0914 18:13:21.721449   62554 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0914 18:13:21.727812   62554 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0914 18:13:21.733486   62554 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0914 18:13:21.747521   62554 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0914 18:13:22.014670   62554 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0914 18:13:22.463865   62554 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0914 18:13:23.016165   62554 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0914 18:13:23.016195   62554 kubeadm.go:310] 
	I0914 18:13:23.016257   62554 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0914 18:13:23.016265   62554 kubeadm.go:310] 
	I0914 18:13:23.016385   62554 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0914 18:13:23.016415   62554 kubeadm.go:310] 
	I0914 18:13:23.016456   62554 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0914 18:13:23.016542   62554 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0914 18:13:23.016627   62554 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0914 18:13:23.016637   62554 kubeadm.go:310] 
	I0914 18:13:23.016753   62554 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0914 18:13:23.016778   62554 kubeadm.go:310] 
	I0914 18:13:23.016850   62554 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0914 18:13:23.016860   62554 kubeadm.go:310] 
	I0914 18:13:23.016937   62554 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0914 18:13:23.017051   62554 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0914 18:13:23.017142   62554 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0914 18:13:23.017156   62554 kubeadm.go:310] 
	I0914 18:13:23.017284   62554 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0914 18:13:23.017403   62554 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0914 18:13:23.017419   62554 kubeadm.go:310] 
	I0914 18:13:23.017533   62554 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token oo4rrp.xx4py1wjxiu1i6la \
	I0914 18:13:23.017664   62554 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:30a8b6e98108730ec92a246ad3f4e746211e79f910581e46cd69c4cd78384600 \
	I0914 18:13:23.017700   62554 kubeadm.go:310] 	--control-plane 
	I0914 18:13:23.017710   62554 kubeadm.go:310] 
	I0914 18:13:23.017821   62554 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0914 18:13:23.017832   62554 kubeadm.go:310] 
	I0914 18:13:23.017944   62554 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token oo4rrp.xx4py1wjxiu1i6la \
	I0914 18:13:23.018104   62554 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:30a8b6e98108730ec92a246ad3f4e746211e79f910581e46cd69c4cd78384600 
	I0914 18:13:23.019098   62554 kubeadm.go:310] W0914 18:13:14.968906    2543 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0914 18:13:23.019512   62554 kubeadm.go:310] W0914 18:13:14.970621    2543 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0914 18:13:23.019672   62554 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0914 18:13:23.019690   62554 cni.go:84] Creating CNI manager for ""
	I0914 18:13:23.019704   62554 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0914 18:13:23.021459   62554 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0914 18:13:23.022517   62554 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0914 18:13:23.037352   62554 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0914 18:13:23.062037   62554 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0914 18:13:23.062132   62554 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 18:13:23.062202   62554 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-044534 minikube.k8s.io/updated_at=2024_09_14T18_13_23_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=fbeeb744274463b05401c917e5ab21bbaf5ef95a minikube.k8s.io/name=embed-certs-044534 minikube.k8s.io/primary=true
	I0914 18:13:23.089789   62554 ops.go:34] apiserver oom_adj: -16
	I0914 18:13:23.246478   62554 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 18:13:23.747419   62554 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 18:13:24.247388   62554 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 18:13:24.746913   62554 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 18:13:21.502222   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:13:24.001757   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:13:25.247445   62554 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 18:13:25.747417   62554 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 18:13:26.247440   62554 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 18:13:26.747262   62554 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 18:13:26.847454   62554 kubeadm.go:1113] duration metric: took 3.78538549s to wait for elevateKubeSystemPrivileges
	I0914 18:13:26.847496   62554 kubeadm.go:394] duration metric: took 4m56.896825398s to StartCluster
	I0914 18:13:26.847521   62554 settings.go:142] acquiring lock: {Name:mkee31cd57c3a91eff581c48ba7961460fb3e0b1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 18:13:26.847618   62554 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19643-8806/kubeconfig
	I0914 18:13:26.850148   62554 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19643-8806/kubeconfig: {Name:mk850f3e2f3c2ef1e81c795ae72e22e459e27140 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 18:13:26.850488   62554 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.126 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0914 18:13:26.850562   62554 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0914 18:13:26.850672   62554 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-044534"
	I0914 18:13:26.850690   62554 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-044534"
	W0914 18:13:26.850703   62554 addons.go:243] addon storage-provisioner should already be in state true
	I0914 18:13:26.850715   62554 addons.go:69] Setting default-storageclass=true in profile "embed-certs-044534"
	I0914 18:13:26.850734   62554 host.go:66] Checking if "embed-certs-044534" exists ...
	I0914 18:13:26.850753   62554 config.go:182] Loaded profile config "embed-certs-044534": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0914 18:13:26.850752   62554 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-044534"
	I0914 18:13:26.850716   62554 addons.go:69] Setting metrics-server=true in profile "embed-certs-044534"
	I0914 18:13:26.850844   62554 addons.go:234] Setting addon metrics-server=true in "embed-certs-044534"
	W0914 18:13:26.850860   62554 addons.go:243] addon metrics-server should already be in state true
	I0914 18:13:26.850898   62554 host.go:66] Checking if "embed-certs-044534" exists ...
	I0914 18:13:26.851174   62554 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 18:13:26.851204   62554 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 18:13:26.851214   62554 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 18:13:26.851235   62554 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 18:13:26.851250   62554 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 18:13:26.851273   62554 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 18:13:26.852030   62554 out.go:177] * Verifying Kubernetes components...
	I0914 18:13:26.853580   62554 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 18:13:26.868084   62554 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33879
	I0914 18:13:26.868135   62554 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44049
	I0914 18:13:26.868700   62554 main.go:141] libmachine: () Calling .GetVersion
	I0914 18:13:26.868787   62554 main.go:141] libmachine: () Calling .GetVersion
	I0914 18:13:26.869251   62554 main.go:141] libmachine: Using API Version  1
	I0914 18:13:26.869282   62554 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 18:13:26.869637   62554 main.go:141] libmachine: () Calling .GetMachineName
	I0914 18:13:26.869650   62554 main.go:141] libmachine: Using API Version  1
	I0914 18:13:26.869714   62554 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 18:13:26.870039   62554 main.go:141] libmachine: () Calling .GetMachineName
	I0914 18:13:26.870232   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetState
	I0914 18:13:26.870396   62554 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 18:13:26.870454   62554 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 18:13:26.871718   62554 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33295
	I0914 18:13:26.872337   62554 main.go:141] libmachine: () Calling .GetVersion
	I0914 18:13:26.872842   62554 main.go:141] libmachine: Using API Version  1
	I0914 18:13:26.872870   62554 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 18:13:26.873227   62554 main.go:141] libmachine: () Calling .GetMachineName
	I0914 18:13:26.873942   62554 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 18:13:26.873989   62554 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 18:13:26.874235   62554 addons.go:234] Setting addon default-storageclass=true in "embed-certs-044534"
	W0914 18:13:26.874257   62554 addons.go:243] addon default-storageclass should already be in state true
	I0914 18:13:26.874287   62554 host.go:66] Checking if "embed-certs-044534" exists ...
	I0914 18:13:26.874674   62554 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 18:13:26.874721   62554 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 18:13:26.887685   62554 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38275
	I0914 18:13:26.888211   62554 main.go:141] libmachine: () Calling .GetVersion
	I0914 18:13:26.888735   62554 main.go:141] libmachine: Using API Version  1
	I0914 18:13:26.888753   62554 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 18:13:26.889060   62554 main.go:141] libmachine: () Calling .GetMachineName
	I0914 18:13:26.889233   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetState
	I0914 18:13:26.891040   62554 main.go:141] libmachine: (embed-certs-044534) Calling .DriverName
	I0914 18:13:26.892012   62554 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41983
	I0914 18:13:26.892352   62554 main.go:141] libmachine: () Calling .GetVersion
	I0914 18:13:26.892798   62554 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 18:13:26.892812   62554 main.go:141] libmachine: Using API Version  1
	I0914 18:13:26.892845   62554 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 18:13:26.893321   62554 main.go:141] libmachine: () Calling .GetMachineName
	I0914 18:13:26.893987   62554 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 18:13:26.894040   62554 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 18:13:26.894059   62554 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0914 18:13:26.894078   62554 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0914 18:13:26.894102   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetSSHHostname
	I0914 18:13:26.897218   62554 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39143
	I0914 18:13:26.897776   62554 main.go:141] libmachine: () Calling .GetVersion
	I0914 18:13:26.897932   62554 main.go:141] libmachine: (embed-certs-044534) DBG | domain embed-certs-044534 has defined MAC address 52:54:00:f7:d3:8e in network mk-embed-certs-044534
	I0914 18:13:26.898631   62554 main.go:141] libmachine: (embed-certs-044534) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:d3:8e", ip: ""} in network mk-embed-certs-044534: {Iface:virbr3 ExpiryTime:2024-09-14 19:00:16 +0000 UTC Type:0 Mac:52:54:00:f7:d3:8e Iaid: IPaddr:192.168.50.126 Prefix:24 Hostname:embed-certs-044534 Clientid:01:52:54:00:f7:d3:8e}
	I0914 18:13:26.898669   62554 main.go:141] libmachine: (embed-certs-044534) DBG | domain embed-certs-044534 has defined IP address 192.168.50.126 and MAC address 52:54:00:f7:d3:8e in network mk-embed-certs-044534
	I0914 18:13:26.899315   62554 main.go:141] libmachine: Using API Version  1
	I0914 18:13:26.899382   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetSSHPort
	I0914 18:13:26.899395   62554 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 18:13:26.899557   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetSSHKeyPath
	I0914 18:13:26.899698   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetSSHUsername
	I0914 18:13:26.899873   62554 sshutil.go:53] new ssh client: &{IP:192.168.50.126 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/embed-certs-044534/id_rsa Username:docker}
	I0914 18:13:26.900433   62554 main.go:141] libmachine: () Calling .GetMachineName
	I0914 18:13:26.900668   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetState
	I0914 18:13:26.902863   62554 main.go:141] libmachine: (embed-certs-044534) Calling .DriverName
	I0914 18:13:26.904569   62554 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0914 18:13:22.104620   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:13:24.603793   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:13:26.604247   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:13:26.905708   62554 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0914 18:13:26.905729   62554 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0914 18:13:26.905755   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetSSHHostname
	I0914 18:13:26.910848   62554 main.go:141] libmachine: (embed-certs-044534) DBG | domain embed-certs-044534 has defined MAC address 52:54:00:f7:d3:8e in network mk-embed-certs-044534
	I0914 18:13:26.911333   62554 main.go:141] libmachine: (embed-certs-044534) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:d3:8e", ip: ""} in network mk-embed-certs-044534: {Iface:virbr3 ExpiryTime:2024-09-14 19:00:16 +0000 UTC Type:0 Mac:52:54:00:f7:d3:8e Iaid: IPaddr:192.168.50.126 Prefix:24 Hostname:embed-certs-044534 Clientid:01:52:54:00:f7:d3:8e}
	I0914 18:13:26.911430   62554 main.go:141] libmachine: (embed-certs-044534) DBG | domain embed-certs-044534 has defined IP address 192.168.50.126 and MAC address 52:54:00:f7:d3:8e in network mk-embed-certs-044534
	I0914 18:13:26.911568   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetSSHPort
	I0914 18:13:26.911840   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetSSHKeyPath
	I0914 18:13:26.912025   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetSSHUsername
	I0914 18:13:26.912238   62554 sshutil.go:53] new ssh client: &{IP:192.168.50.126 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/embed-certs-044534/id_rsa Username:docker}
	I0914 18:13:26.912625   62554 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33289
	I0914 18:13:26.913014   62554 main.go:141] libmachine: () Calling .GetVersion
	I0914 18:13:26.913653   62554 main.go:141] libmachine: Using API Version  1
	I0914 18:13:26.913668   62554 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 18:13:26.914116   62554 main.go:141] libmachine: () Calling .GetMachineName
	I0914 18:13:26.914342   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetState
	I0914 18:13:26.916119   62554 main.go:141] libmachine: (embed-certs-044534) Calling .DriverName
	I0914 18:13:26.916332   62554 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0914 18:13:26.916350   62554 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0914 18:13:26.916369   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetSSHHostname
	I0914 18:13:26.920129   62554 main.go:141] libmachine: (embed-certs-044534) DBG | domain embed-certs-044534 has defined MAC address 52:54:00:f7:d3:8e in network mk-embed-certs-044534
	I0914 18:13:26.920769   62554 main.go:141] libmachine: (embed-certs-044534) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:d3:8e", ip: ""} in network mk-embed-certs-044534: {Iface:virbr3 ExpiryTime:2024-09-14 19:00:16 +0000 UTC Type:0 Mac:52:54:00:f7:d3:8e Iaid: IPaddr:192.168.50.126 Prefix:24 Hostname:embed-certs-044534 Clientid:01:52:54:00:f7:d3:8e}
	I0914 18:13:26.920791   62554 main.go:141] libmachine: (embed-certs-044534) DBG | domain embed-certs-044534 has defined IP address 192.168.50.126 and MAC address 52:54:00:f7:d3:8e in network mk-embed-certs-044534
	I0914 18:13:26.920971   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetSSHPort
	I0914 18:13:26.921170   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetSSHKeyPath
	I0914 18:13:26.921291   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetSSHUsername
	I0914 18:13:26.921413   62554 sshutil.go:53] new ssh client: &{IP:192.168.50.126 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/embed-certs-044534/id_rsa Username:docker}
	I0914 18:13:27.055184   62554 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0914 18:13:27.072683   62554 node_ready.go:35] waiting up to 6m0s for node "embed-certs-044534" to be "Ready" ...
	I0914 18:13:27.084289   62554 node_ready.go:49] node "embed-certs-044534" has status "Ready":"True"
	I0914 18:13:27.084317   62554 node_ready.go:38] duration metric: took 11.599354ms for node "embed-certs-044534" to be "Ready" ...
	I0914 18:13:27.084326   62554 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 18:13:27.090428   62554 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-044534" in "kube-system" namespace to be "Ready" ...
	I0914 18:13:27.258854   62554 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0914 18:13:27.260576   62554 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0914 18:13:27.261092   62554 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0914 18:13:27.261115   62554 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0914 18:13:27.332882   62554 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0914 18:13:27.332914   62554 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0914 18:13:27.400159   62554 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0914 18:13:27.400193   62554 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0914 18:13:27.486731   62554 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0914 18:13:28.164139   62554 main.go:141] libmachine: Making call to close driver server
	I0914 18:13:28.164171   62554 main.go:141] libmachine: (embed-certs-044534) Calling .Close
	I0914 18:13:28.164215   62554 main.go:141] libmachine: Making call to close driver server
	I0914 18:13:28.164242   62554 main.go:141] libmachine: (embed-certs-044534) Calling .Close
	I0914 18:13:28.164581   62554 main.go:141] libmachine: Successfully made call to close driver server
	I0914 18:13:28.164593   62554 main.go:141] libmachine: Successfully made call to close driver server
	I0914 18:13:28.164596   62554 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 18:13:28.164597   62554 main.go:141] libmachine: (embed-certs-044534) DBG | Closing plugin on server side
	I0914 18:13:28.164608   62554 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 18:13:28.164569   62554 main.go:141] libmachine: (embed-certs-044534) DBG | Closing plugin on server side
	I0914 18:13:28.164619   62554 main.go:141] libmachine: Making call to close driver server
	I0914 18:13:28.164621   62554 main.go:141] libmachine: Making call to close driver server
	I0914 18:13:28.164627   62554 main.go:141] libmachine: (embed-certs-044534) Calling .Close
	I0914 18:13:28.164629   62554 main.go:141] libmachine: (embed-certs-044534) Calling .Close
	I0914 18:13:28.164874   62554 main.go:141] libmachine: Successfully made call to close driver server
	I0914 18:13:28.164897   62554 main.go:141] libmachine: (embed-certs-044534) DBG | Closing plugin on server side
	I0914 18:13:28.164911   62554 main.go:141] libmachine: (embed-certs-044534) DBG | Closing plugin on server side
	I0914 18:13:28.164902   62554 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 18:13:28.164929   62554 main.go:141] libmachine: Successfully made call to close driver server
	I0914 18:13:28.164941   62554 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 18:13:28.196171   62554 main.go:141] libmachine: Making call to close driver server
	I0914 18:13:28.196197   62554 main.go:141] libmachine: (embed-certs-044534) Calling .Close
	I0914 18:13:28.196530   62554 main.go:141] libmachine: Successfully made call to close driver server
	I0914 18:13:28.196590   62554 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 18:13:28.196634   62554 main.go:141] libmachine: (embed-certs-044534) DBG | Closing plugin on server side
	I0914 18:13:28.509915   62554 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.023114908s)
	I0914 18:13:28.509973   62554 main.go:141] libmachine: Making call to close driver server
	I0914 18:13:28.509989   62554 main.go:141] libmachine: (embed-certs-044534) Calling .Close
	I0914 18:13:28.510276   62554 main.go:141] libmachine: Successfully made call to close driver server
	I0914 18:13:28.510329   62554 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 18:13:28.510348   62554 main.go:141] libmachine: (embed-certs-044534) DBG | Closing plugin on server side
	I0914 18:13:28.510365   62554 main.go:141] libmachine: Making call to close driver server
	I0914 18:13:28.510374   62554 main.go:141] libmachine: (embed-certs-044534) Calling .Close
	I0914 18:13:28.510614   62554 main.go:141] libmachine: (embed-certs-044534) DBG | Closing plugin on server side
	I0914 18:13:28.510653   62554 main.go:141] libmachine: Successfully made call to close driver server
	I0914 18:13:28.510665   62554 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 18:13:28.510678   62554 addons.go:475] Verifying addon metrics-server=true in "embed-certs-044534"
	I0914 18:13:28.512283   62554 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0914 18:13:28.513593   62554 addons.go:510] duration metric: took 1.663035459s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0914 18:13:29.103964   62554 pod_ready.go:103] pod "etcd-embed-certs-044534" in "kube-system" namespace has status "Ready":"False"
	I0914 18:13:26.501135   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:13:28.502181   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:13:28.605176   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:13:31.102817   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:13:31.596452   62554 pod_ready.go:103] pod "etcd-embed-certs-044534" in "kube-system" namespace has status "Ready":"False"
	I0914 18:13:33.596699   62554 pod_ready.go:103] pod "etcd-embed-certs-044534" in "kube-system" namespace has status "Ready":"False"
	I0914 18:13:31.001070   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:13:32.001946   63448 pod_ready.go:82] duration metric: took 4m0.00767403s for pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace to be "Ready" ...
	E0914 18:13:32.001975   63448 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0914 18:13:32.001987   63448 pod_ready.go:39] duration metric: took 4m5.051544016s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 18:13:32.002004   63448 api_server.go:52] waiting for apiserver process to appear ...
	I0914 18:13:32.002037   63448 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:13:32.002093   63448 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:13:32.053241   63448 cri.go:89] found id: "6c532e45713d01c37f899bd4c9308e6c7fceb95accae13da1549ffb195e44bc4"
	I0914 18:13:32.053276   63448 cri.go:89] found id: ""
	I0914 18:13:32.053287   63448 logs.go:276] 1 containers: [6c532e45713d01c37f899bd4c9308e6c7fceb95accae13da1549ffb195e44bc4]
	I0914 18:13:32.053349   63448 ssh_runner.go:195] Run: which crictl
	I0914 18:13:32.057854   63448 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:13:32.057921   63448 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:13:32.099294   63448 cri.go:89] found id: "7fb6567a7b9f36f21430774a159db944e9060eac3c8fdf9abc50b9c56a2b0377"
	I0914 18:13:32.099318   63448 cri.go:89] found id: ""
	I0914 18:13:32.099328   63448 logs.go:276] 1 containers: [7fb6567a7b9f36f21430774a159db944e9060eac3c8fdf9abc50b9c56a2b0377]
	I0914 18:13:32.099375   63448 ssh_runner.go:195] Run: which crictl
	I0914 18:13:32.103674   63448 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:13:32.103745   63448 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:13:32.144190   63448 cri.go:89] found id: "02a31bf75666cfcbe85dcb42259279071420c3b26de20d6d667bcd8ffef77d86"
	I0914 18:13:32.144219   63448 cri.go:89] found id: ""
	I0914 18:13:32.144228   63448 logs.go:276] 1 containers: [02a31bf75666cfcbe85dcb42259279071420c3b26de20d6d667bcd8ffef77d86]
	I0914 18:13:32.144275   63448 ssh_runner.go:195] Run: which crictl
	I0914 18:13:32.148382   63448 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:13:32.148443   63448 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:13:32.185779   63448 cri.go:89] found id: "a390e6c0153550b206976ab4e172cf3236aac7e9e5671c332d8c0f8c4e567e0b"
	I0914 18:13:32.185807   63448 cri.go:89] found id: ""
	I0914 18:13:32.185814   63448 logs.go:276] 1 containers: [a390e6c0153550b206976ab4e172cf3236aac7e9e5671c332d8c0f8c4e567e0b]
	I0914 18:13:32.185864   63448 ssh_runner.go:195] Run: which crictl
	I0914 18:13:32.189478   63448 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:13:32.189545   63448 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:13:32.224657   63448 cri.go:89] found id: "a5c3b65e96ba854d6ceb4a824ba5076d43cff0b7a1978617b69271d5b88cca4d"
	I0914 18:13:32.224681   63448 cri.go:89] found id: ""
	I0914 18:13:32.224690   63448 logs.go:276] 1 containers: [a5c3b65e96ba854d6ceb4a824ba5076d43cff0b7a1978617b69271d5b88cca4d]
	I0914 18:13:32.224745   63448 ssh_runner.go:195] Run: which crictl
	I0914 18:13:32.228421   63448 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:13:32.228494   63448 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:13:32.262491   63448 cri.go:89] found id: "09627c963da76d1a34f891fa3edf6c8c82f652f7308541d6f9083f5888f4bf94"
	I0914 18:13:32.262513   63448 cri.go:89] found id: ""
	I0914 18:13:32.262519   63448 logs.go:276] 1 containers: [09627c963da76d1a34f891fa3edf6c8c82f652f7308541d6f9083f5888f4bf94]
	I0914 18:13:32.262579   63448 ssh_runner.go:195] Run: which crictl
	I0914 18:13:32.266135   63448 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:13:32.266213   63448 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:13:32.300085   63448 cri.go:89] found id: ""
	I0914 18:13:32.300111   63448 logs.go:276] 0 containers: []
	W0914 18:13:32.300119   63448 logs.go:278] No container was found matching "kindnet"
	I0914 18:13:32.300124   63448 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0914 18:13:32.300181   63448 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0914 18:13:32.335359   63448 cri.go:89] found id: "be0aa9c1761410495159b478552996da9670b728a9efd2985ec5cad6759e8277"
	I0914 18:13:32.335379   63448 cri.go:89] found id: "b33f92ef722c8a6bde80bdbd3ff62a9b4b31f0bf548eec9aaadd4593a101017e"
	I0914 18:13:32.335387   63448 cri.go:89] found id: ""
	I0914 18:13:32.335393   63448 logs.go:276] 2 containers: [be0aa9c1761410495159b478552996da9670b728a9efd2985ec5cad6759e8277 b33f92ef722c8a6bde80bdbd3ff62a9b4b31f0bf548eec9aaadd4593a101017e]
	I0914 18:13:32.335451   63448 ssh_runner.go:195] Run: which crictl
	I0914 18:13:32.339404   63448 ssh_runner.go:195] Run: which crictl
	I0914 18:13:32.343173   63448 logs.go:123] Gathering logs for kube-proxy [a5c3b65e96ba854d6ceb4a824ba5076d43cff0b7a1978617b69271d5b88cca4d] ...
	I0914 18:13:32.343203   63448 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a5c3b65e96ba854d6ceb4a824ba5076d43cff0b7a1978617b69271d5b88cca4d"
	I0914 18:13:32.378987   63448 logs.go:123] Gathering logs for storage-provisioner [b33f92ef722c8a6bde80bdbd3ff62a9b4b31f0bf548eec9aaadd4593a101017e] ...
	I0914 18:13:32.379016   63448 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b33f92ef722c8a6bde80bdbd3ff62a9b4b31f0bf548eec9aaadd4593a101017e"
	I0914 18:13:32.418829   63448 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:13:32.418855   63448 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:13:32.941046   63448 logs.go:123] Gathering logs for kube-apiserver [6c532e45713d01c37f899bd4c9308e6c7fceb95accae13da1549ffb195e44bc4] ...
	I0914 18:13:32.941102   63448 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6c532e45713d01c37f899bd4c9308e6c7fceb95accae13da1549ffb195e44bc4"
	I0914 18:13:32.998148   63448 logs.go:123] Gathering logs for coredns [02a31bf75666cfcbe85dcb42259279071420c3b26de20d6d667bcd8ffef77d86] ...
	I0914 18:13:32.998209   63448 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 02a31bf75666cfcbe85dcb42259279071420c3b26de20d6d667bcd8ffef77d86"
	I0914 18:13:33.041208   63448 logs.go:123] Gathering logs for kube-scheduler [a390e6c0153550b206976ab4e172cf3236aac7e9e5671c332d8c0f8c4e567e0b] ...
	I0914 18:13:33.041241   63448 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a390e6c0153550b206976ab4e172cf3236aac7e9e5671c332d8c0f8c4e567e0b"
	I0914 18:13:33.080774   63448 logs.go:123] Gathering logs for etcd [7fb6567a7b9f36f21430774a159db944e9060eac3c8fdf9abc50b9c56a2b0377] ...
	I0914 18:13:33.080806   63448 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7fb6567a7b9f36f21430774a159db944e9060eac3c8fdf9abc50b9c56a2b0377"
	I0914 18:13:33.130519   63448 logs.go:123] Gathering logs for kube-controller-manager [09627c963da76d1a34f891fa3edf6c8c82f652f7308541d6f9083f5888f4bf94] ...
	I0914 18:13:33.130552   63448 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 09627c963da76d1a34f891fa3edf6c8c82f652f7308541d6f9083f5888f4bf94"
	I0914 18:13:33.182751   63448 logs.go:123] Gathering logs for storage-provisioner [be0aa9c1761410495159b478552996da9670b728a9efd2985ec5cad6759e8277] ...
	I0914 18:13:33.182788   63448 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 be0aa9c1761410495159b478552996da9670b728a9efd2985ec5cad6759e8277"
	I0914 18:13:33.222008   63448 logs.go:123] Gathering logs for container status ...
	I0914 18:13:33.222053   63448 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:13:33.263100   63448 logs.go:123] Gathering logs for kubelet ...
	I0914 18:13:33.263137   63448 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:13:33.330307   63448 logs.go:123] Gathering logs for dmesg ...
	I0914 18:13:33.330343   63448 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:13:33.344658   63448 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:13:33.344687   63448 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 18:13:35.597157   62554 pod_ready.go:93] pod "etcd-embed-certs-044534" in "kube-system" namespace has status "Ready":"True"
	I0914 18:13:35.597179   62554 pod_ready.go:82] duration metric: took 8.50672651s for pod "etcd-embed-certs-044534" in "kube-system" namespace to be "Ready" ...
	I0914 18:13:35.597189   62554 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-044534" in "kube-system" namespace to be "Ready" ...
	I0914 18:13:36.604147   62554 pod_ready.go:93] pod "kube-apiserver-embed-certs-044534" in "kube-system" namespace has status "Ready":"True"
	I0914 18:13:36.604179   62554 pod_ready.go:82] duration metric: took 1.006982094s for pod "kube-apiserver-embed-certs-044534" in "kube-system" namespace to be "Ready" ...
	I0914 18:13:36.604192   62554 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-044534" in "kube-system" namespace to be "Ready" ...
	I0914 18:13:36.610278   62554 pod_ready.go:93] pod "kube-controller-manager-embed-certs-044534" in "kube-system" namespace has status "Ready":"True"
	I0914 18:13:36.610302   62554 pod_ready.go:82] duration metric: took 6.101843ms for pod "kube-controller-manager-embed-certs-044534" in "kube-system" namespace to be "Ready" ...
	I0914 18:13:36.610315   62554 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-044534" in "kube-system" namespace to be "Ready" ...
	I0914 18:13:36.615527   62554 pod_ready.go:93] pod "kube-scheduler-embed-certs-044534" in "kube-system" namespace has status "Ready":"True"
	I0914 18:13:36.615549   62554 pod_ready.go:82] duration metric: took 5.226206ms for pod "kube-scheduler-embed-certs-044534" in "kube-system" namespace to be "Ready" ...
	I0914 18:13:36.615559   62554 pod_ready.go:39] duration metric: took 9.531222215s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 18:13:36.615587   62554 api_server.go:52] waiting for apiserver process to appear ...
	I0914 18:13:36.615642   62554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:13:36.630381   62554 api_server.go:72] duration metric: took 9.779851335s to wait for apiserver process to appear ...
	I0914 18:13:36.630414   62554 api_server.go:88] waiting for apiserver healthz status ...
	I0914 18:13:36.630438   62554 api_server.go:253] Checking apiserver healthz at https://192.168.50.126:8443/healthz ...
	I0914 18:13:36.637559   62554 api_server.go:279] https://192.168.50.126:8443/healthz returned 200:
	ok
	I0914 18:13:36.639973   62554 api_server.go:141] control plane version: v1.31.1
	I0914 18:13:36.639999   62554 api_server.go:131] duration metric: took 9.577574ms to wait for apiserver health ...
	I0914 18:13:36.640006   62554 system_pods.go:43] waiting for kube-system pods to appear ...
	I0914 18:13:36.647412   62554 system_pods.go:59] 9 kube-system pods found
	I0914 18:13:36.647443   62554 system_pods.go:61] "coredns-7c65d6cfc9-67dsl" [480fe6ea-838d-4048-9893-947d43e7b5c9] Running
	I0914 18:13:36.647448   62554 system_pods.go:61] "coredns-7c65d6cfc9-9j6sv" [87c28a4b-015e-46b8-a462-9dc6ed06d914] Running
	I0914 18:13:36.647452   62554 system_pods.go:61] "etcd-embed-certs-044534" [a9533b38-298e-4435-aaf2-262bdf629832] Running
	I0914 18:13:36.647456   62554 system_pods.go:61] "kube-apiserver-embed-certs-044534" [28ee0ec6-8ede-447a-b4eb-1db9eaeb76fb] Running
	I0914 18:13:36.647459   62554 system_pods.go:61] "kube-controller-manager-embed-certs-044534" [0fd6fe49-8994-48de-8e57-a2420f12b47d] Running
	I0914 18:13:36.647463   62554 system_pods.go:61] "kube-proxy-26fx6" [1cb48201-6caf-4787-9e27-a55885a8ae2a] Running
	I0914 18:13:36.647465   62554 system_pods.go:61] "kube-scheduler-embed-certs-044534" [6c1535f8-267b-401c-a93e-0ac057c75047] Running
	I0914 18:13:36.647471   62554 system_pods.go:61] "metrics-server-6867b74b74-rrfnt" [a1deacaa-9b90-49ac-8b8b-0bd909b5f6e0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 18:13:36.647475   62554 system_pods.go:61] "storage-provisioner" [dec7a14c-b6f7-464f-86b3-5f7d8063d8e0] Running
	I0914 18:13:36.647483   62554 system_pods.go:74] duration metric: took 7.47115ms to wait for pod list to return data ...
	I0914 18:13:36.647490   62554 default_sa.go:34] waiting for default service account to be created ...
	I0914 18:13:36.650678   62554 default_sa.go:45] found service account: "default"
	I0914 18:13:36.650722   62554 default_sa.go:55] duration metric: took 3.225438ms for default service account to be created ...
	I0914 18:13:36.650733   62554 system_pods.go:116] waiting for k8s-apps to be running ...
	I0914 18:13:36.656461   62554 system_pods.go:86] 9 kube-system pods found
	I0914 18:13:36.656489   62554 system_pods.go:89] "coredns-7c65d6cfc9-67dsl" [480fe6ea-838d-4048-9893-947d43e7b5c9] Running
	I0914 18:13:36.656495   62554 system_pods.go:89] "coredns-7c65d6cfc9-9j6sv" [87c28a4b-015e-46b8-a462-9dc6ed06d914] Running
	I0914 18:13:36.656499   62554 system_pods.go:89] "etcd-embed-certs-044534" [a9533b38-298e-4435-aaf2-262bdf629832] Running
	I0914 18:13:36.656503   62554 system_pods.go:89] "kube-apiserver-embed-certs-044534" [28ee0ec6-8ede-447a-b4eb-1db9eaeb76fb] Running
	I0914 18:13:36.656507   62554 system_pods.go:89] "kube-controller-manager-embed-certs-044534" [0fd6fe49-8994-48de-8e57-a2420f12b47d] Running
	I0914 18:13:36.656512   62554 system_pods.go:89] "kube-proxy-26fx6" [1cb48201-6caf-4787-9e27-a55885a8ae2a] Running
	I0914 18:13:36.656516   62554 system_pods.go:89] "kube-scheduler-embed-certs-044534" [6c1535f8-267b-401c-a93e-0ac057c75047] Running
	I0914 18:13:36.656522   62554 system_pods.go:89] "metrics-server-6867b74b74-rrfnt" [a1deacaa-9b90-49ac-8b8b-0bd909b5f6e0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 18:13:36.656525   62554 system_pods.go:89] "storage-provisioner" [dec7a14c-b6f7-464f-86b3-5f7d8063d8e0] Running
	I0914 18:13:36.656534   62554 system_pods.go:126] duration metric: took 5.795433ms to wait for k8s-apps to be running ...
	I0914 18:13:36.656541   62554 system_svc.go:44] waiting for kubelet service to be running ....
	I0914 18:13:36.656586   62554 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 18:13:36.673166   62554 system_svc.go:56] duration metric: took 16.609444ms WaitForService to wait for kubelet
	I0914 18:13:36.673205   62554 kubeadm.go:582] duration metric: took 9.822681909s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0914 18:13:36.673227   62554 node_conditions.go:102] verifying NodePressure condition ...
	I0914 18:13:36.794984   62554 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0914 18:13:36.795013   62554 node_conditions.go:123] node cpu capacity is 2
	I0914 18:13:36.795024   62554 node_conditions.go:105] duration metric: took 121.79122ms to run NodePressure ...
	I0914 18:13:36.795038   62554 start.go:241] waiting for startup goroutines ...
	I0914 18:13:36.795047   62554 start.go:246] waiting for cluster config update ...
	I0914 18:13:36.795060   62554 start.go:255] writing updated cluster config ...
	I0914 18:13:36.795406   62554 ssh_runner.go:195] Run: rm -f paused
	I0914 18:13:36.847454   62554 start.go:600] kubectl: 1.31.0, cluster: 1.31.1 (minor skew: 0)
	I0914 18:13:36.849605   62554 out.go:177] * Done! kubectl is now configured to use "embed-certs-044534" cluster and "default" namespace by default
	I0914 18:13:33.105197   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:13:35.604458   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:13:35.989800   63448 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:13:36.006371   63448 api_server.go:72] duration metric: took 4m14.310539233s to wait for apiserver process to appear ...
	I0914 18:13:36.006405   63448 api_server.go:88] waiting for apiserver healthz status ...
	I0914 18:13:36.006446   63448 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:13:36.006508   63448 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:13:36.044973   63448 cri.go:89] found id: "6c532e45713d01c37f899bd4c9308e6c7fceb95accae13da1549ffb195e44bc4"
	I0914 18:13:36.044992   63448 cri.go:89] found id: ""
	I0914 18:13:36.045000   63448 logs.go:276] 1 containers: [6c532e45713d01c37f899bd4c9308e6c7fceb95accae13da1549ffb195e44bc4]
	I0914 18:13:36.045055   63448 ssh_runner.go:195] Run: which crictl
	I0914 18:13:36.049371   63448 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:13:36.049449   63448 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:13:36.097114   63448 cri.go:89] found id: "7fb6567a7b9f36f21430774a159db944e9060eac3c8fdf9abc50b9c56a2b0377"
	I0914 18:13:36.097139   63448 cri.go:89] found id: ""
	I0914 18:13:36.097148   63448 logs.go:276] 1 containers: [7fb6567a7b9f36f21430774a159db944e9060eac3c8fdf9abc50b9c56a2b0377]
	I0914 18:13:36.097212   63448 ssh_runner.go:195] Run: which crictl
	I0914 18:13:36.102084   63448 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:13:36.102153   63448 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:13:36.140640   63448 cri.go:89] found id: "02a31bf75666cfcbe85dcb42259279071420c3b26de20d6d667bcd8ffef77d86"
	I0914 18:13:36.140662   63448 cri.go:89] found id: ""
	I0914 18:13:36.140671   63448 logs.go:276] 1 containers: [02a31bf75666cfcbe85dcb42259279071420c3b26de20d6d667bcd8ffef77d86]
	I0914 18:13:36.140728   63448 ssh_runner.go:195] Run: which crictl
	I0914 18:13:36.144624   63448 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:13:36.144696   63448 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:13:36.179135   63448 cri.go:89] found id: "a390e6c0153550b206976ab4e172cf3236aac7e9e5671c332d8c0f8c4e567e0b"
	I0914 18:13:36.179156   63448 cri.go:89] found id: ""
	I0914 18:13:36.179163   63448 logs.go:276] 1 containers: [a390e6c0153550b206976ab4e172cf3236aac7e9e5671c332d8c0f8c4e567e0b]
	I0914 18:13:36.179216   63448 ssh_runner.go:195] Run: which crictl
	I0914 18:13:36.183050   63448 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:13:36.183110   63448 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:13:36.222739   63448 cri.go:89] found id: "a5c3b65e96ba854d6ceb4a824ba5076d43cff0b7a1978617b69271d5b88cca4d"
	I0914 18:13:36.222758   63448 cri.go:89] found id: ""
	I0914 18:13:36.222765   63448 logs.go:276] 1 containers: [a5c3b65e96ba854d6ceb4a824ba5076d43cff0b7a1978617b69271d5b88cca4d]
	I0914 18:13:36.222812   63448 ssh_runner.go:195] Run: which crictl
	I0914 18:13:36.226715   63448 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:13:36.226782   63448 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:13:36.261587   63448 cri.go:89] found id: "09627c963da76d1a34f891fa3edf6c8c82f652f7308541d6f9083f5888f4bf94"
	I0914 18:13:36.261610   63448 cri.go:89] found id: ""
	I0914 18:13:36.261617   63448 logs.go:276] 1 containers: [09627c963da76d1a34f891fa3edf6c8c82f652f7308541d6f9083f5888f4bf94]
	I0914 18:13:36.261664   63448 ssh_runner.go:195] Run: which crictl
	I0914 18:13:36.265541   63448 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:13:36.265614   63448 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:13:36.301521   63448 cri.go:89] found id: ""
	I0914 18:13:36.301546   63448 logs.go:276] 0 containers: []
	W0914 18:13:36.301554   63448 logs.go:278] No container was found matching "kindnet"
	I0914 18:13:36.301560   63448 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0914 18:13:36.301622   63448 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0914 18:13:36.335332   63448 cri.go:89] found id: "be0aa9c1761410495159b478552996da9670b728a9efd2985ec5cad6759e8277"
	I0914 18:13:36.335355   63448 cri.go:89] found id: "b33f92ef722c8a6bde80bdbd3ff62a9b4b31f0bf548eec9aaadd4593a101017e"
	I0914 18:13:36.335358   63448 cri.go:89] found id: ""
	I0914 18:13:36.335365   63448 logs.go:276] 2 containers: [be0aa9c1761410495159b478552996da9670b728a9efd2985ec5cad6759e8277 b33f92ef722c8a6bde80bdbd3ff62a9b4b31f0bf548eec9aaadd4593a101017e]
	I0914 18:13:36.335415   63448 ssh_runner.go:195] Run: which crictl
	I0914 18:13:36.339542   63448 ssh_runner.go:195] Run: which crictl
	I0914 18:13:36.343543   63448 logs.go:123] Gathering logs for etcd [7fb6567a7b9f36f21430774a159db944e9060eac3c8fdf9abc50b9c56a2b0377] ...
	I0914 18:13:36.343570   63448 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7fb6567a7b9f36f21430774a159db944e9060eac3c8fdf9abc50b9c56a2b0377"
	I0914 18:13:36.384224   63448 logs.go:123] Gathering logs for coredns [02a31bf75666cfcbe85dcb42259279071420c3b26de20d6d667bcd8ffef77d86] ...
	I0914 18:13:36.384259   63448 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 02a31bf75666cfcbe85dcb42259279071420c3b26de20d6d667bcd8ffef77d86"
	I0914 18:13:36.428010   63448 logs.go:123] Gathering logs for kube-scheduler [a390e6c0153550b206976ab4e172cf3236aac7e9e5671c332d8c0f8c4e567e0b] ...
	I0914 18:13:36.428041   63448 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a390e6c0153550b206976ab4e172cf3236aac7e9e5671c332d8c0f8c4e567e0b"
	I0914 18:13:36.469679   63448 logs.go:123] Gathering logs for storage-provisioner [be0aa9c1761410495159b478552996da9670b728a9efd2985ec5cad6759e8277] ...
	I0914 18:13:36.469708   63448 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 be0aa9c1761410495159b478552996da9670b728a9efd2985ec5cad6759e8277"
	I0914 18:13:36.507570   63448 logs.go:123] Gathering logs for storage-provisioner [b33f92ef722c8a6bde80bdbd3ff62a9b4b31f0bf548eec9aaadd4593a101017e] ...
	I0914 18:13:36.507597   63448 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b33f92ef722c8a6bde80bdbd3ff62a9b4b31f0bf548eec9aaadd4593a101017e"
	I0914 18:13:36.543300   63448 logs.go:123] Gathering logs for kubelet ...
	I0914 18:13:36.543335   63448 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:13:36.619060   63448 logs.go:123] Gathering logs for dmesg ...
	I0914 18:13:36.619084   63448 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:13:36.633542   63448 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:13:36.633572   63448 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 18:13:36.741334   63448 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:13:36.741370   63448 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:13:37.231208   63448 logs.go:123] Gathering logs for container status ...
	I0914 18:13:37.231255   63448 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:13:37.278835   63448 logs.go:123] Gathering logs for kube-apiserver [6c532e45713d01c37f899bd4c9308e6c7fceb95accae13da1549ffb195e44bc4] ...
	I0914 18:13:37.278863   63448 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6c532e45713d01c37f899bd4c9308e6c7fceb95accae13da1549ffb195e44bc4"
	I0914 18:13:37.320359   63448 logs.go:123] Gathering logs for kube-proxy [a5c3b65e96ba854d6ceb4a824ba5076d43cff0b7a1978617b69271d5b88cca4d] ...
	I0914 18:13:37.320399   63448 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a5c3b65e96ba854d6ceb4a824ba5076d43cff0b7a1978617b69271d5b88cca4d"
	I0914 18:13:37.357940   63448 logs.go:123] Gathering logs for kube-controller-manager [09627c963da76d1a34f891fa3edf6c8c82f652f7308541d6f9083f5888f4bf94] ...
	I0914 18:13:37.357974   63448 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 09627c963da76d1a34f891fa3edf6c8c82f652f7308541d6f9083f5888f4bf94"
	I0914 18:13:39.913586   63448 api_server.go:253] Checking apiserver healthz at https://192.168.61.38:8444/healthz ...
	I0914 18:13:39.917590   63448 api_server.go:279] https://192.168.61.38:8444/healthz returned 200:
	ok
	I0914 18:13:39.918633   63448 api_server.go:141] control plane version: v1.31.1
	I0914 18:13:39.918653   63448 api_server.go:131] duration metric: took 3.912241678s to wait for apiserver health ...
	I0914 18:13:39.918660   63448 system_pods.go:43] waiting for kube-system pods to appear ...
	I0914 18:13:39.918682   63448 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:13:39.918727   63448 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:13:39.961919   63448 cri.go:89] found id: "6c532e45713d01c37f899bd4c9308e6c7fceb95accae13da1549ffb195e44bc4"
	I0914 18:13:39.961947   63448 cri.go:89] found id: ""
	I0914 18:13:39.961956   63448 logs.go:276] 1 containers: [6c532e45713d01c37f899bd4c9308e6c7fceb95accae13da1549ffb195e44bc4]
	I0914 18:13:39.962012   63448 ssh_runner.go:195] Run: which crictl
	I0914 18:13:39.965756   63448 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:13:39.965838   63448 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:13:40.008044   63448 cri.go:89] found id: "7fb6567a7b9f36f21430774a159db944e9060eac3c8fdf9abc50b9c56a2b0377"
	I0914 18:13:40.008066   63448 cri.go:89] found id: ""
	I0914 18:13:40.008074   63448 logs.go:276] 1 containers: [7fb6567a7b9f36f21430774a159db944e9060eac3c8fdf9abc50b9c56a2b0377]
	I0914 18:13:40.008117   63448 ssh_runner.go:195] Run: which crictl
	I0914 18:13:40.012505   63448 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:13:40.012569   63448 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:13:40.059166   63448 cri.go:89] found id: "02a31bf75666cfcbe85dcb42259279071420c3b26de20d6d667bcd8ffef77d86"
	I0914 18:13:40.059194   63448 cri.go:89] found id: ""
	I0914 18:13:40.059204   63448 logs.go:276] 1 containers: [02a31bf75666cfcbe85dcb42259279071420c3b26de20d6d667bcd8ffef77d86]
	I0914 18:13:40.059267   63448 ssh_runner.go:195] Run: which crictl
	I0914 18:13:40.063135   63448 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:13:40.063197   63448 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:13:40.105220   63448 cri.go:89] found id: "a390e6c0153550b206976ab4e172cf3236aac7e9e5671c332d8c0f8c4e567e0b"
	I0914 18:13:40.105245   63448 cri.go:89] found id: ""
	I0914 18:13:40.105255   63448 logs.go:276] 1 containers: [a390e6c0153550b206976ab4e172cf3236aac7e9e5671c332d8c0f8c4e567e0b]
	I0914 18:13:40.105308   63448 ssh_runner.go:195] Run: which crictl
	I0914 18:13:40.109907   63448 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:13:40.109978   63448 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:13:40.146307   63448 cri.go:89] found id: "a5c3b65e96ba854d6ceb4a824ba5076d43cff0b7a1978617b69271d5b88cca4d"
	I0914 18:13:40.146337   63448 cri.go:89] found id: ""
	I0914 18:13:40.146349   63448 logs.go:276] 1 containers: [a5c3b65e96ba854d6ceb4a824ba5076d43cff0b7a1978617b69271d5b88cca4d]
	I0914 18:13:40.146396   63448 ssh_runner.go:195] Run: which crictl
	I0914 18:13:40.150369   63448 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:13:40.150436   63448 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:13:40.185274   63448 cri.go:89] found id: "09627c963da76d1a34f891fa3edf6c8c82f652f7308541d6f9083f5888f4bf94"
	I0914 18:13:40.185301   63448 cri.go:89] found id: ""
	I0914 18:13:40.185312   63448 logs.go:276] 1 containers: [09627c963da76d1a34f891fa3edf6c8c82f652f7308541d6f9083f5888f4bf94]
	I0914 18:13:40.185374   63448 ssh_runner.go:195] Run: which crictl
	I0914 18:13:40.189425   63448 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:13:40.189499   63448 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:13:40.223289   63448 cri.go:89] found id: ""
	I0914 18:13:40.223311   63448 logs.go:276] 0 containers: []
	W0914 18:13:40.223319   63448 logs.go:278] No container was found matching "kindnet"
	I0914 18:13:40.223324   63448 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0914 18:13:40.223369   63448 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0914 18:13:40.257779   63448 cri.go:89] found id: "be0aa9c1761410495159b478552996da9670b728a9efd2985ec5cad6759e8277"
	I0914 18:13:40.257805   63448 cri.go:89] found id: "b33f92ef722c8a6bde80bdbd3ff62a9b4b31f0bf548eec9aaadd4593a101017e"
	I0914 18:13:40.257811   63448 cri.go:89] found id: ""
	I0914 18:13:40.257820   63448 logs.go:276] 2 containers: [be0aa9c1761410495159b478552996da9670b728a9efd2985ec5cad6759e8277 b33f92ef722c8a6bde80bdbd3ff62a9b4b31f0bf548eec9aaadd4593a101017e]
	I0914 18:13:40.257880   63448 ssh_runner.go:195] Run: which crictl
	I0914 18:13:40.262388   63448 ssh_runner.go:195] Run: which crictl
	I0914 18:13:40.266233   63448 logs.go:123] Gathering logs for kube-apiserver [6c532e45713d01c37f899bd4c9308e6c7fceb95accae13da1549ffb195e44bc4] ...
	I0914 18:13:40.266258   63448 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6c532e45713d01c37f899bd4c9308e6c7fceb95accae13da1549ffb195e44bc4"
	I0914 18:13:38.505090   62996 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0914 18:13:38.505605   62996 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0914 18:13:38.505837   62996 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0914 18:13:38.105234   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:13:40.604049   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:13:40.310145   63448 logs.go:123] Gathering logs for etcd [7fb6567a7b9f36f21430774a159db944e9060eac3c8fdf9abc50b9c56a2b0377] ...
	I0914 18:13:40.310188   63448 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7fb6567a7b9f36f21430774a159db944e9060eac3c8fdf9abc50b9c56a2b0377"
	I0914 18:13:40.358651   63448 logs.go:123] Gathering logs for coredns [02a31bf75666cfcbe85dcb42259279071420c3b26de20d6d667bcd8ffef77d86] ...
	I0914 18:13:40.358686   63448 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 02a31bf75666cfcbe85dcb42259279071420c3b26de20d6d667bcd8ffef77d86"
	I0914 18:13:40.398107   63448 logs.go:123] Gathering logs for kube-scheduler [a390e6c0153550b206976ab4e172cf3236aac7e9e5671c332d8c0f8c4e567e0b] ...
	I0914 18:13:40.398144   63448 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a390e6c0153550b206976ab4e172cf3236aac7e9e5671c332d8c0f8c4e567e0b"
	I0914 18:13:40.450540   63448 logs.go:123] Gathering logs for dmesg ...
	I0914 18:13:40.450573   63448 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:13:40.465987   63448 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:13:40.466013   63448 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 18:13:40.573299   63448 logs.go:123] Gathering logs for kube-proxy [a5c3b65e96ba854d6ceb4a824ba5076d43cff0b7a1978617b69271d5b88cca4d] ...
	I0914 18:13:40.573333   63448 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a5c3b65e96ba854d6ceb4a824ba5076d43cff0b7a1978617b69271d5b88cca4d"
	I0914 18:13:40.618201   63448 logs.go:123] Gathering logs for kube-controller-manager [09627c963da76d1a34f891fa3edf6c8c82f652f7308541d6f9083f5888f4bf94] ...
	I0914 18:13:40.618247   63448 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 09627c963da76d1a34f891fa3edf6c8c82f652f7308541d6f9083f5888f4bf94"
	I0914 18:13:40.671259   63448 logs.go:123] Gathering logs for storage-provisioner [be0aa9c1761410495159b478552996da9670b728a9efd2985ec5cad6759e8277] ...
	I0914 18:13:40.671304   63448 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 be0aa9c1761410495159b478552996da9670b728a9efd2985ec5cad6759e8277"
	I0914 18:13:40.708455   63448 logs.go:123] Gathering logs for storage-provisioner [b33f92ef722c8a6bde80bdbd3ff62a9b4b31f0bf548eec9aaadd4593a101017e] ...
	I0914 18:13:40.708488   63448 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b33f92ef722c8a6bde80bdbd3ff62a9b4b31f0bf548eec9aaadd4593a101017e"
	I0914 18:13:40.746662   63448 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:13:40.746696   63448 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:13:41.108968   63448 logs.go:123] Gathering logs for container status ...
	I0914 18:13:41.109017   63448 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:13:41.150925   63448 logs.go:123] Gathering logs for kubelet ...
	I0914 18:13:41.150968   63448 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:13:43.725606   63448 system_pods.go:59] 8 kube-system pods found
	I0914 18:13:43.725642   63448 system_pods.go:61] "coredns-7c65d6cfc9-8v8s7" [896b4fde-d17e-43a3-b7c8-b710e2e70e2c] Running
	I0914 18:13:43.725650   63448 system_pods.go:61] "etcd-default-k8s-diff-port-243449" [9201493d-45db-44f4-948d-34e1d1ddee8f] Running
	I0914 18:13:43.725656   63448 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-243449" [052b85bf-0f3b-4ace-9301-99e53f91cfcf] Running
	I0914 18:13:43.725661   63448 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-243449" [51214b37-f5e8-4037-83ff-1fd09b93e008] Running
	I0914 18:13:43.725665   63448 system_pods.go:61] "kube-proxy-gbkqm" [4308aacf-ea0a-4bba-8598-85ffaf959b7e] Running
	I0914 18:13:43.725670   63448 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-243449" [b6eacf30-47ec-4f8d-968c-195661a2a732] Running
	I0914 18:13:43.725680   63448 system_pods.go:61] "metrics-server-6867b74b74-7v8dr" [90be95af-c779-4b31-b261-2c4020a34280] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 18:13:43.725687   63448 system_pods.go:61] "storage-provisioner" [2e814601-a19a-4848-bed5-d9a29ffb3b5d] Running
	I0914 18:13:43.725699   63448 system_pods.go:74] duration metric: took 3.807031642s to wait for pod list to return data ...
	I0914 18:13:43.725710   63448 default_sa.go:34] waiting for default service account to be created ...
	I0914 18:13:43.728384   63448 default_sa.go:45] found service account: "default"
	I0914 18:13:43.728409   63448 default_sa.go:55] duration metric: took 2.691817ms for default service account to be created ...
	I0914 18:13:43.728417   63448 system_pods.go:116] waiting for k8s-apps to be running ...
	I0914 18:13:43.732884   63448 system_pods.go:86] 8 kube-system pods found
	I0914 18:13:43.732913   63448 system_pods.go:89] "coredns-7c65d6cfc9-8v8s7" [896b4fde-d17e-43a3-b7c8-b710e2e70e2c] Running
	I0914 18:13:43.732918   63448 system_pods.go:89] "etcd-default-k8s-diff-port-243449" [9201493d-45db-44f4-948d-34e1d1ddee8f] Running
	I0914 18:13:43.732922   63448 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-243449" [052b85bf-0f3b-4ace-9301-99e53f91cfcf] Running
	I0914 18:13:43.732926   63448 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-243449" [51214b37-f5e8-4037-83ff-1fd09b93e008] Running
	I0914 18:13:43.732931   63448 system_pods.go:89] "kube-proxy-gbkqm" [4308aacf-ea0a-4bba-8598-85ffaf959b7e] Running
	I0914 18:13:43.732935   63448 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-243449" [b6eacf30-47ec-4f8d-968c-195661a2a732] Running
	I0914 18:13:43.732942   63448 system_pods.go:89] "metrics-server-6867b74b74-7v8dr" [90be95af-c779-4b31-b261-2c4020a34280] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 18:13:43.732947   63448 system_pods.go:89] "storage-provisioner" [2e814601-a19a-4848-bed5-d9a29ffb3b5d] Running
	I0914 18:13:43.732954   63448 system_pods.go:126] duration metric: took 4.531761ms to wait for k8s-apps to be running ...
	I0914 18:13:43.732960   63448 system_svc.go:44] waiting for kubelet service to be running ....
	I0914 18:13:43.733001   63448 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 18:13:43.749535   63448 system_svc.go:56] duration metric: took 16.566498ms WaitForService to wait for kubelet
	I0914 18:13:43.749567   63448 kubeadm.go:582] duration metric: took 4m22.053742257s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0914 18:13:43.749587   63448 node_conditions.go:102] verifying NodePressure condition ...
	I0914 18:13:43.752493   63448 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0914 18:13:43.752514   63448 node_conditions.go:123] node cpu capacity is 2
	I0914 18:13:43.752523   63448 node_conditions.go:105] duration metric: took 2.931821ms to run NodePressure ...
	I0914 18:13:43.752534   63448 start.go:241] waiting for startup goroutines ...
	I0914 18:13:43.752548   63448 start.go:246] waiting for cluster config update ...
	I0914 18:13:43.752560   63448 start.go:255] writing updated cluster config ...
	I0914 18:13:43.752815   63448 ssh_runner.go:195] Run: rm -f paused
	I0914 18:13:43.803181   63448 start.go:600] kubectl: 1.31.0, cluster: 1.31.1 (minor skew: 0)
	I0914 18:13:43.805150   63448 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-243449" cluster and "default" namespace by default
	I0914 18:13:43.506241   62996 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0914 18:13:43.506502   62996 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0914 18:13:43.103780   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:13:45.603666   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:13:47.603988   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:13:50.104811   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:13:53.506772   62996 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0914 18:13:53.506959   62996 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0914 18:13:52.604411   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:13:55.103339   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:13:57.103716   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:13:59.603423   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:14:00.097180   62207 pod_ready.go:82] duration metric: took 4m0.000345486s for pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace to be "Ready" ...
	E0914 18:14:00.097209   62207 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace to be "Ready" (will not retry!)
	I0914 18:14:00.097230   62207 pod_ready.go:39] duration metric: took 4m11.039838973s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 18:14:00.097260   62207 kubeadm.go:597] duration metric: took 4m18.345876583s to restartPrimaryControlPlane
	W0914 18:14:00.097328   62207 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0914 18:14:00.097360   62207 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0914 18:14:13.507627   62996 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0914 18:14:13.507840   62996 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0914 18:14:26.392001   62207 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.294613232s)
	I0914 18:14:26.392082   62207 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 18:14:26.410558   62207 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0914 18:14:26.421178   62207 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0914 18:14:26.430786   62207 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0914 18:14:26.430808   62207 kubeadm.go:157] found existing configuration files:
	
	I0914 18:14:26.430858   62207 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0914 18:14:26.440193   62207 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0914 18:14:26.440253   62207 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0914 18:14:26.449848   62207 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0914 18:14:26.459589   62207 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0914 18:14:26.459651   62207 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0914 18:14:26.469556   62207 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0914 18:14:26.478722   62207 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0914 18:14:26.478782   62207 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0914 18:14:26.488694   62207 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0914 18:14:26.498478   62207 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0914 18:14:26.498542   62207 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0914 18:14:26.509455   62207 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0914 18:14:26.552295   62207 kubeadm.go:310] W0914 18:14:26.530603    2977 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0914 18:14:26.552908   62207 kubeadm.go:310] W0914 18:14:26.531307    2977 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0914 18:14:26.665962   62207 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0914 18:14:35.406074   62207 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0914 18:14:35.406150   62207 kubeadm.go:310] [preflight] Running pre-flight checks
	I0914 18:14:35.406251   62207 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0914 18:14:35.406372   62207 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0914 18:14:35.406503   62207 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0914 18:14:35.406611   62207 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0914 18:14:35.408167   62207 out.go:235]   - Generating certificates and keys ...
	I0914 18:14:35.408257   62207 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0914 18:14:35.408337   62207 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0914 18:14:35.408451   62207 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0914 18:14:35.408550   62207 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0914 18:14:35.408655   62207 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0914 18:14:35.408733   62207 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0914 18:14:35.408823   62207 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0914 18:14:35.408916   62207 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0914 18:14:35.409022   62207 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0914 18:14:35.409133   62207 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0914 18:14:35.409176   62207 kubeadm.go:310] [certs] Using the existing "sa" key
	I0914 18:14:35.409225   62207 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0914 18:14:35.409269   62207 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0914 18:14:35.409328   62207 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0914 18:14:35.409374   62207 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0914 18:14:35.409440   62207 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0914 18:14:35.409507   62207 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0914 18:14:35.409633   62207 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0914 18:14:35.409734   62207 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0914 18:14:35.411984   62207 out.go:235]   - Booting up control plane ...
	I0914 18:14:35.412099   62207 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0914 18:14:35.412212   62207 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0914 18:14:35.412276   62207 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0914 18:14:35.412371   62207 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0914 18:14:35.412444   62207 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0914 18:14:35.412479   62207 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0914 18:14:35.412597   62207 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0914 18:14:35.412686   62207 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0914 18:14:35.412737   62207 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.002422188s
	I0914 18:14:35.412801   62207 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0914 18:14:35.412863   62207 kubeadm.go:310] [api-check] The API server is healthy after 5.002046359s
	I0914 18:14:35.412986   62207 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0914 18:14:35.413129   62207 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0914 18:14:35.413208   62207 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0914 18:14:35.413427   62207 kubeadm.go:310] [mark-control-plane] Marking the node no-preload-168587 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0914 18:14:35.413510   62207 kubeadm.go:310] [bootstrap-token] Using token: 2jk8ol.l80z6l7tm2nt4pl7
	I0914 18:14:35.414838   62207 out.go:235]   - Configuring RBAC rules ...
	I0914 18:14:35.414968   62207 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0914 18:14:35.415069   62207 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0914 18:14:35.415291   62207 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0914 18:14:35.415482   62207 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0914 18:14:35.415615   62207 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0914 18:14:35.415725   62207 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0914 18:14:35.415867   62207 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0914 18:14:35.415930   62207 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0914 18:14:35.415990   62207 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0914 18:14:35.415999   62207 kubeadm.go:310] 
	I0914 18:14:35.416077   62207 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0914 18:14:35.416086   62207 kubeadm.go:310] 
	I0914 18:14:35.416187   62207 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0914 18:14:35.416198   62207 kubeadm.go:310] 
	I0914 18:14:35.416232   62207 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0914 18:14:35.416314   62207 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0914 18:14:35.416388   62207 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0914 18:14:35.416397   62207 kubeadm.go:310] 
	I0914 18:14:35.416474   62207 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0914 18:14:35.416484   62207 kubeadm.go:310] 
	I0914 18:14:35.416525   62207 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0914 18:14:35.416529   62207 kubeadm.go:310] 
	I0914 18:14:35.416597   62207 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0914 18:14:35.416701   62207 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0914 18:14:35.416781   62207 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0914 18:14:35.416796   62207 kubeadm.go:310] 
	I0914 18:14:35.416899   62207 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0914 18:14:35.416998   62207 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0914 18:14:35.417007   62207 kubeadm.go:310] 
	I0914 18:14:35.417125   62207 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 2jk8ol.l80z6l7tm2nt4pl7 \
	I0914 18:14:35.417247   62207 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:30a8b6e98108730ec92a246ad3f4e746211e79f910581e46cd69c4cd78384600 \
	I0914 18:14:35.417272   62207 kubeadm.go:310] 	--control-plane 
	I0914 18:14:35.417276   62207 kubeadm.go:310] 
	I0914 18:14:35.417399   62207 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0914 18:14:35.417422   62207 kubeadm.go:310] 
	I0914 18:14:35.417530   62207 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 2jk8ol.l80z6l7tm2nt4pl7 \
	I0914 18:14:35.417686   62207 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:30a8b6e98108730ec92a246ad3f4e746211e79f910581e46cd69c4cd78384600 
	I0914 18:14:35.417705   62207 cni.go:84] Creating CNI manager for ""
	I0914 18:14:35.417713   62207 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0914 18:14:35.420023   62207 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0914 18:14:35.421095   62207 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0914 18:14:35.432619   62207 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0914 18:14:35.451720   62207 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0914 18:14:35.451790   62207 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 18:14:35.451836   62207 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-168587 minikube.k8s.io/updated_at=2024_09_14T18_14_35_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=fbeeb744274463b05401c917e5ab21bbaf5ef95a minikube.k8s.io/name=no-preload-168587 minikube.k8s.io/primary=true
	I0914 18:14:35.654681   62207 ops.go:34] apiserver oom_adj: -16
	I0914 18:14:35.654714   62207 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 18:14:36.155376   62207 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 18:14:36.655468   62207 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 18:14:37.155741   62207 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 18:14:37.655416   62207 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 18:14:38.154935   62207 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 18:14:38.655465   62207 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 18:14:38.740860   62207 kubeadm.go:1113] duration metric: took 3.289121705s to wait for elevateKubeSystemPrivileges
	I0914 18:14:38.740912   62207 kubeadm.go:394] duration metric: took 4m57.036377829s to StartCluster
	I0914 18:14:38.740939   62207 settings.go:142] acquiring lock: {Name:mkee31cd57c3a91eff581c48ba7961460fb3e0b1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 18:14:38.741029   62207 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19643-8806/kubeconfig
	I0914 18:14:38.742754   62207 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19643-8806/kubeconfig: {Name:mk850f3e2f3c2ef1e81c795ae72e22e459e27140 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 18:14:38.742977   62207 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.38 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0914 18:14:38.743138   62207 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0914 18:14:38.743260   62207 addons.go:69] Setting storage-provisioner=true in profile "no-preload-168587"
	I0914 18:14:38.743271   62207 addons.go:69] Setting default-storageclass=true in profile "no-preload-168587"
	I0914 18:14:38.743282   62207 addons.go:234] Setting addon storage-provisioner=true in "no-preload-168587"
	I0914 18:14:38.743290   62207 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-168587"
	W0914 18:14:38.743295   62207 addons.go:243] addon storage-provisioner should already be in state true
	I0914 18:14:38.743306   62207 addons.go:69] Setting metrics-server=true in profile "no-preload-168587"
	I0914 18:14:38.743329   62207 host.go:66] Checking if "no-preload-168587" exists ...
	I0914 18:14:38.743334   62207 addons.go:234] Setting addon metrics-server=true in "no-preload-168587"
	I0914 18:14:38.743362   62207 config.go:182] Loaded profile config "no-preload-168587": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	W0914 18:14:38.743365   62207 addons.go:243] addon metrics-server should already be in state true
	I0914 18:14:38.743442   62207 host.go:66] Checking if "no-preload-168587" exists ...
	I0914 18:14:38.743775   62207 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 18:14:38.743814   62207 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 18:14:38.743843   62207 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 18:14:38.743821   62207 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 18:14:38.743775   62207 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 18:14:38.744070   62207 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 18:14:38.744427   62207 out.go:177] * Verifying Kubernetes components...
	I0914 18:14:38.745716   62207 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 18:14:38.760250   62207 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43795
	I0914 18:14:38.760329   62207 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41097
	I0914 18:14:38.760788   62207 main.go:141] libmachine: () Calling .GetVersion
	I0914 18:14:38.760810   62207 main.go:141] libmachine: () Calling .GetVersion
	I0914 18:14:38.761416   62207 main.go:141] libmachine: Using API Version  1
	I0914 18:14:38.761438   62207 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 18:14:38.761554   62207 main.go:141] libmachine: Using API Version  1
	I0914 18:14:38.761581   62207 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 18:14:38.761829   62207 main.go:141] libmachine: () Calling .GetMachineName
	I0914 18:14:38.761980   62207 main.go:141] libmachine: () Calling .GetMachineName
	I0914 18:14:38.762333   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetState
	I0914 18:14:38.762445   62207 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 18:14:38.762495   62207 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 18:14:38.763295   62207 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44059
	I0914 18:14:38.763767   62207 main.go:141] libmachine: () Calling .GetVersion
	I0914 18:14:38.764256   62207 main.go:141] libmachine: Using API Version  1
	I0914 18:14:38.764285   62207 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 18:14:38.764616   62207 main.go:141] libmachine: () Calling .GetMachineName
	I0914 18:14:38.765095   62207 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 18:14:38.765131   62207 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 18:14:38.765525   62207 addons.go:234] Setting addon default-storageclass=true in "no-preload-168587"
	W0914 18:14:38.765544   62207 addons.go:243] addon default-storageclass should already be in state true
	I0914 18:14:38.765568   62207 host.go:66] Checking if "no-preload-168587" exists ...
	I0914 18:14:38.765798   62207 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 18:14:38.765837   62207 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 18:14:38.782208   62207 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42659
	I0914 18:14:38.782527   62207 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41381
	I0914 18:14:38.782564   62207 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33835
	I0914 18:14:38.782679   62207 main.go:141] libmachine: () Calling .GetVersion
	I0914 18:14:38.782943   62207 main.go:141] libmachine: () Calling .GetVersion
	I0914 18:14:38.782973   62207 main.go:141] libmachine: () Calling .GetVersion
	I0914 18:14:38.783413   62207 main.go:141] libmachine: Using API Version  1
	I0914 18:14:38.783433   62207 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 18:14:38.783554   62207 main.go:141] libmachine: Using API Version  1
	I0914 18:14:38.783566   62207 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 18:14:38.783573   62207 main.go:141] libmachine: Using API Version  1
	I0914 18:14:38.783585   62207 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 18:14:38.783956   62207 main.go:141] libmachine: () Calling .GetMachineName
	I0914 18:14:38.783964   62207 main.go:141] libmachine: () Calling .GetMachineName
	I0914 18:14:38.784444   62207 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 18:14:38.784482   62207 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 18:14:38.784639   62207 main.go:141] libmachine: () Calling .GetMachineName
	I0914 18:14:38.784666   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetState
	I0914 18:14:38.784806   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetState
	I0914 18:14:38.786340   62207 main.go:141] libmachine: (no-preload-168587) Calling .DriverName
	I0914 18:14:38.786797   62207 main.go:141] libmachine: (no-preload-168587) Calling .DriverName
	I0914 18:14:38.788188   62207 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 18:14:38.788195   62207 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0914 18:14:38.789239   62207 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0914 18:14:38.789254   62207 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0914 18:14:38.789273   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetSSHHostname
	I0914 18:14:38.789338   62207 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0914 18:14:38.789347   62207 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0914 18:14:38.789358   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetSSHHostname
	I0914 18:14:38.792968   62207 main.go:141] libmachine: (no-preload-168587) DBG | domain no-preload-168587 has defined MAC address 52:54:00:4c:40:8a in network mk-no-preload-168587
	I0914 18:14:38.793521   62207 main.go:141] libmachine: (no-preload-168587) DBG | domain no-preload-168587 has defined MAC address 52:54:00:4c:40:8a in network mk-no-preload-168587
	I0914 18:14:38.793853   62207 main.go:141] libmachine: (no-preload-168587) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:40:8a", ip: ""} in network mk-no-preload-168587: {Iface:virbr2 ExpiryTime:2024-09-14 18:59:50 +0000 UTC Type:0 Mac:52:54:00:4c:40:8a Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:no-preload-168587 Clientid:01:52:54:00:4c:40:8a}
	I0914 18:14:38.793894   62207 main.go:141] libmachine: (no-preload-168587) DBG | domain no-preload-168587 has defined IP address 192.168.39.38 and MAC address 52:54:00:4c:40:8a in network mk-no-preload-168587
	I0914 18:14:38.794037   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetSSHPort
	I0914 18:14:38.794097   62207 main.go:141] libmachine: (no-preload-168587) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:40:8a", ip: ""} in network mk-no-preload-168587: {Iface:virbr2 ExpiryTime:2024-09-14 18:59:50 +0000 UTC Type:0 Mac:52:54:00:4c:40:8a Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:no-preload-168587 Clientid:01:52:54:00:4c:40:8a}
	I0914 18:14:38.794107   62207 main.go:141] libmachine: (no-preload-168587) DBG | domain no-preload-168587 has defined IP address 192.168.39.38 and MAC address 52:54:00:4c:40:8a in network mk-no-preload-168587
	I0914 18:14:38.794258   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetSSHKeyPath
	I0914 18:14:38.794351   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetSSHPort
	I0914 18:14:38.794499   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetSSHKeyPath
	I0914 18:14:38.794531   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetSSHUsername
	I0914 18:14:38.794635   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetSSHUsername
	I0914 18:14:38.794716   62207 sshutil.go:53] new ssh client: &{IP:192.168.39.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/no-preload-168587/id_rsa Username:docker}
	I0914 18:14:38.794777   62207 sshutil.go:53] new ssh client: &{IP:192.168.39.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/no-preload-168587/id_rsa Username:docker}
	I0914 18:14:38.827254   62207 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38643
	I0914 18:14:38.827852   62207 main.go:141] libmachine: () Calling .GetVersion
	I0914 18:14:38.828434   62207 main.go:141] libmachine: Using API Version  1
	I0914 18:14:38.828460   62207 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 18:14:38.828837   62207 main.go:141] libmachine: () Calling .GetMachineName
	I0914 18:14:38.829088   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetState
	I0914 18:14:38.830820   62207 main.go:141] libmachine: (no-preload-168587) Calling .DriverName
	I0914 18:14:38.831031   62207 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0914 18:14:38.831048   62207 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0914 18:14:38.831067   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetSSHHostname
	I0914 18:14:38.833822   62207 main.go:141] libmachine: (no-preload-168587) DBG | domain no-preload-168587 has defined MAC address 52:54:00:4c:40:8a in network mk-no-preload-168587
	I0914 18:14:38.834242   62207 main.go:141] libmachine: (no-preload-168587) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:40:8a", ip: ""} in network mk-no-preload-168587: {Iface:virbr2 ExpiryTime:2024-09-14 18:59:50 +0000 UTC Type:0 Mac:52:54:00:4c:40:8a Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:no-preload-168587 Clientid:01:52:54:00:4c:40:8a}
	I0914 18:14:38.834282   62207 main.go:141] libmachine: (no-preload-168587) DBG | domain no-preload-168587 has defined IP address 192.168.39.38 and MAC address 52:54:00:4c:40:8a in network mk-no-preload-168587
	I0914 18:14:38.834453   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetSSHPort
	I0914 18:14:38.834641   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetSSHKeyPath
	I0914 18:14:38.834794   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetSSHUsername
	I0914 18:14:38.834963   62207 sshutil.go:53] new ssh client: &{IP:192.168.39.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/no-preload-168587/id_rsa Username:docker}
	I0914 18:14:38.920627   62207 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0914 18:14:38.941951   62207 node_ready.go:35] waiting up to 6m0s for node "no-preload-168587" to be "Ready" ...
	I0914 18:14:38.973102   62207 node_ready.go:49] node "no-preload-168587" has status "Ready":"True"
	I0914 18:14:38.973124   62207 node_ready.go:38] duration metric: took 31.146661ms for node "no-preload-168587" to be "Ready" ...
	I0914 18:14:38.973132   62207 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 18:14:38.989712   62207 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-168587" in "kube-system" namespace to be "Ready" ...
	I0914 18:14:39.018196   62207 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0914 18:14:39.018223   62207 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0914 18:14:39.045691   62207 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0914 18:14:39.066249   62207 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0914 18:14:39.066277   62207 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0914 18:14:39.073017   62207 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0914 18:14:39.118360   62207 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0914 18:14:39.118385   62207 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0914 18:14:39.195268   62207 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0914 18:14:39.874924   62207 main.go:141] libmachine: Making call to close driver server
	I0914 18:14:39.874953   62207 main.go:141] libmachine: (no-preload-168587) Calling .Close
	I0914 18:14:39.874950   62207 main.go:141] libmachine: Making call to close driver server
	I0914 18:14:39.875004   62207 main.go:141] libmachine: (no-preload-168587) Calling .Close
	I0914 18:14:39.875398   62207 main.go:141] libmachine: (no-preload-168587) DBG | Closing plugin on server side
	I0914 18:14:39.875406   62207 main.go:141] libmachine: Successfully made call to close driver server
	I0914 18:14:39.875457   62207 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 18:14:39.875466   62207 main.go:141] libmachine: Making call to close driver server
	I0914 18:14:39.875476   62207 main.go:141] libmachine: (no-preload-168587) Calling .Close
	I0914 18:14:39.875406   62207 main.go:141] libmachine: (no-preload-168587) DBG | Closing plugin on server side
	I0914 18:14:39.875430   62207 main.go:141] libmachine: Successfully made call to close driver server
	I0914 18:14:39.875598   62207 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 18:14:39.875609   62207 main.go:141] libmachine: Making call to close driver server
	I0914 18:14:39.875631   62207 main.go:141] libmachine: (no-preload-168587) Calling .Close
	I0914 18:14:39.875914   62207 main.go:141] libmachine: (no-preload-168587) DBG | Closing plugin on server side
	I0914 18:14:39.875916   62207 main.go:141] libmachine: Successfully made call to close driver server
	I0914 18:14:39.875934   62207 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 18:14:39.875939   62207 main.go:141] libmachine: (no-preload-168587) DBG | Closing plugin on server side
	I0914 18:14:39.875959   62207 main.go:141] libmachine: Successfully made call to close driver server
	I0914 18:14:39.875966   62207 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 18:14:39.929860   62207 main.go:141] libmachine: Making call to close driver server
	I0914 18:14:39.929881   62207 main.go:141] libmachine: (no-preload-168587) Calling .Close
	I0914 18:14:39.930191   62207 main.go:141] libmachine: Successfully made call to close driver server
	I0914 18:14:39.930211   62207 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 18:14:40.139888   62207 main.go:141] libmachine: Making call to close driver server
	I0914 18:14:40.139918   62207 main.go:141] libmachine: (no-preload-168587) Calling .Close
	I0914 18:14:40.140256   62207 main.go:141] libmachine: Successfully made call to close driver server
	I0914 18:14:40.140273   62207 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 18:14:40.140282   62207 main.go:141] libmachine: Making call to close driver server
	I0914 18:14:40.140289   62207 main.go:141] libmachine: (no-preload-168587) Calling .Close
	I0914 18:14:40.140608   62207 main.go:141] libmachine: Successfully made call to close driver server
	I0914 18:14:40.140620   62207 main.go:141] libmachine: (no-preload-168587) DBG | Closing plugin on server side
	I0914 18:14:40.140630   62207 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 18:14:40.140646   62207 addons.go:475] Verifying addon metrics-server=true in "no-preload-168587"
	I0914 18:14:40.142461   62207 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0914 18:14:40.143818   62207 addons.go:510] duration metric: took 1.400695696s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0914 18:14:40.996599   62207 pod_ready.go:103] pod "etcd-no-preload-168587" in "kube-system" namespace has status "Ready":"False"
	I0914 18:14:43.498584   62207 pod_ready.go:103] pod "etcd-no-preload-168587" in "kube-system" namespace has status "Ready":"False"
	I0914 18:14:45.995938   62207 pod_ready.go:93] pod "etcd-no-preload-168587" in "kube-system" namespace has status "Ready":"True"
	I0914 18:14:45.995971   62207 pod_ready.go:82] duration metric: took 7.006220602s for pod "etcd-no-preload-168587" in "kube-system" namespace to be "Ready" ...
	I0914 18:14:45.995984   62207 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-168587" in "kube-system" namespace to be "Ready" ...
	I0914 18:14:46.000589   62207 pod_ready.go:93] pod "kube-apiserver-no-preload-168587" in "kube-system" namespace has status "Ready":"True"
	I0914 18:14:46.000609   62207 pod_ready.go:82] duration metric: took 4.618617ms for pod "kube-apiserver-no-preload-168587" in "kube-system" namespace to be "Ready" ...
	I0914 18:14:46.000620   62207 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-168587" in "kube-system" namespace to be "Ready" ...
	I0914 18:14:46.004865   62207 pod_ready.go:93] pod "kube-controller-manager-no-preload-168587" in "kube-system" namespace has status "Ready":"True"
	I0914 18:14:46.004886   62207 pod_ready.go:82] duration metric: took 4.259787ms for pod "kube-controller-manager-no-preload-168587" in "kube-system" namespace to be "Ready" ...
	I0914 18:14:46.004895   62207 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-xdj6b" in "kube-system" namespace to be "Ready" ...
	I0914 18:14:46.009225   62207 pod_ready.go:93] pod "kube-proxy-xdj6b" in "kube-system" namespace has status "Ready":"True"
	I0914 18:14:46.009243   62207 pod_ready.go:82] duration metric: took 4.343161ms for pod "kube-proxy-xdj6b" in "kube-system" namespace to be "Ready" ...
	I0914 18:14:46.009250   62207 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-168587" in "kube-system" namespace to be "Ready" ...
	I0914 18:14:46.013312   62207 pod_ready.go:93] pod "kube-scheduler-no-preload-168587" in "kube-system" namespace has status "Ready":"True"
	I0914 18:14:46.013330   62207 pod_ready.go:82] duration metric: took 4.073817ms for pod "kube-scheduler-no-preload-168587" in "kube-system" namespace to be "Ready" ...
	I0914 18:14:46.013337   62207 pod_ready.go:39] duration metric: took 7.040196066s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 18:14:46.013358   62207 api_server.go:52] waiting for apiserver process to appear ...
	I0914 18:14:46.013403   62207 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:14:46.029881   62207 api_server.go:72] duration metric: took 7.286871398s to wait for apiserver process to appear ...
	I0914 18:14:46.029912   62207 api_server.go:88] waiting for apiserver healthz status ...
	I0914 18:14:46.029937   62207 api_server.go:253] Checking apiserver healthz at https://192.168.39.38:8443/healthz ...
	I0914 18:14:46.034236   62207 api_server.go:279] https://192.168.39.38:8443/healthz returned 200:
	ok
	I0914 18:14:46.035287   62207 api_server.go:141] control plane version: v1.31.1
	I0914 18:14:46.035305   62207 api_server.go:131] duration metric: took 5.385499ms to wait for apiserver health ...
	I0914 18:14:46.035314   62207 system_pods.go:43] waiting for kube-system pods to appear ...
	I0914 18:14:46.196765   62207 system_pods.go:59] 9 kube-system pods found
	I0914 18:14:46.196796   62207 system_pods.go:61] "coredns-7c65d6cfc9-nzpdb" [acd2d488-301e-4d00-a17a-0e06ea5d9691] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0914 18:14:46.196804   62207 system_pods.go:61] "coredns-7c65d6cfc9-qrgr9" [31b611b3-d861-451f-8c17-30bed52994a0] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0914 18:14:46.196810   62207 system_pods.go:61] "etcd-no-preload-168587" [8d3dc146-eb39-4ed3-9409-63ea08fffd39] Running
	I0914 18:14:46.196816   62207 system_pods.go:61] "kube-apiserver-no-preload-168587" [9a68e520-b40c-42d2-9052-757c7c99a958] Running
	I0914 18:14:46.196821   62207 system_pods.go:61] "kube-controller-manager-no-preload-168587" [47d23e53-0f6b-45bd-8288-3101a35b1827] Running
	I0914 18:14:46.196824   62207 system_pods.go:61] "kube-proxy-xdj6b" [d3080090-4f40-49e1-9c3e-ccceb37cc952] Running
	I0914 18:14:46.196827   62207 system_pods.go:61] "kube-scheduler-no-preload-168587" [e7fe55f8-d203-4de2-9594-15f858453434] Running
	I0914 18:14:46.196832   62207 system_pods.go:61] "metrics-server-6867b74b74-cmcz4" [24cea6b3-a107-4110-ac29-88389b55bbdc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 18:14:46.196835   62207 system_pods.go:61] "storage-provisioner" [57b6d85d-fc04-42da-9452-3f24824b8377] Running
	I0914 18:14:46.196842   62207 system_pods.go:74] duration metric: took 161.510322ms to wait for pod list to return data ...
	I0914 18:14:46.196853   62207 default_sa.go:34] waiting for default service account to be created ...
	I0914 18:14:46.394399   62207 default_sa.go:45] found service account: "default"
	I0914 18:14:46.394428   62207 default_sa.go:55] duration metric: took 197.566762ms for default service account to be created ...
	I0914 18:14:46.394443   62207 system_pods.go:116] waiting for k8s-apps to be running ...
	I0914 18:14:46.596421   62207 system_pods.go:86] 9 kube-system pods found
	I0914 18:14:46.596454   62207 system_pods.go:89] "coredns-7c65d6cfc9-nzpdb" [acd2d488-301e-4d00-a17a-0e06ea5d9691] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0914 18:14:46.596462   62207 system_pods.go:89] "coredns-7c65d6cfc9-qrgr9" [31b611b3-d861-451f-8c17-30bed52994a0] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0914 18:14:46.596468   62207 system_pods.go:89] "etcd-no-preload-168587" [8d3dc146-eb39-4ed3-9409-63ea08fffd39] Running
	I0914 18:14:46.596473   62207 system_pods.go:89] "kube-apiserver-no-preload-168587" [9a68e520-b40c-42d2-9052-757c7c99a958] Running
	I0914 18:14:46.596477   62207 system_pods.go:89] "kube-controller-manager-no-preload-168587" [47d23e53-0f6b-45bd-8288-3101a35b1827] Running
	I0914 18:14:46.596480   62207 system_pods.go:89] "kube-proxy-xdj6b" [d3080090-4f40-49e1-9c3e-ccceb37cc952] Running
	I0914 18:14:46.596483   62207 system_pods.go:89] "kube-scheduler-no-preload-168587" [e7fe55f8-d203-4de2-9594-15f858453434] Running
	I0914 18:14:46.596502   62207 system_pods.go:89] "metrics-server-6867b74b74-cmcz4" [24cea6b3-a107-4110-ac29-88389b55bbdc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 18:14:46.596509   62207 system_pods.go:89] "storage-provisioner" [57b6d85d-fc04-42da-9452-3f24824b8377] Running
	I0914 18:14:46.596517   62207 system_pods.go:126] duration metric: took 202.067078ms to wait for k8s-apps to be running ...
	I0914 18:14:46.596527   62207 system_svc.go:44] waiting for kubelet service to be running ....
	I0914 18:14:46.596571   62207 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 18:14:46.611796   62207 system_svc.go:56] duration metric: took 15.259464ms WaitForService to wait for kubelet
	I0914 18:14:46.611837   62207 kubeadm.go:582] duration metric: took 7.868833105s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0914 18:14:46.611858   62207 node_conditions.go:102] verifying NodePressure condition ...
	I0914 18:14:46.794731   62207 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0914 18:14:46.794758   62207 node_conditions.go:123] node cpu capacity is 2
	I0914 18:14:46.794767   62207 node_conditions.go:105] duration metric: took 182.903835ms to run NodePressure ...
	I0914 18:14:46.794777   62207 start.go:241] waiting for startup goroutines ...
	I0914 18:14:46.794783   62207 start.go:246] waiting for cluster config update ...
	I0914 18:14:46.794793   62207 start.go:255] writing updated cluster config ...
	I0914 18:14:46.795051   62207 ssh_runner.go:195] Run: rm -f paused
	I0914 18:14:46.845803   62207 start.go:600] kubectl: 1.31.0, cluster: 1.31.1 (minor skew: 0)
	I0914 18:14:46.847399   62207 out.go:177] * Done! kubectl is now configured to use "no-preload-168587" cluster and "default" namespace by default
	I0914 18:14:53.509475   62996 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0914 18:14:53.509669   62996 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0914 18:14:53.509699   62996 kubeadm.go:310] 
	I0914 18:14:53.509778   62996 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0914 18:14:53.509838   62996 kubeadm.go:310] 		timed out waiting for the condition
	I0914 18:14:53.509849   62996 kubeadm.go:310] 
	I0914 18:14:53.509901   62996 kubeadm.go:310] 	This error is likely caused by:
	I0914 18:14:53.509966   62996 kubeadm.go:310] 		- The kubelet is not running
	I0914 18:14:53.510115   62996 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0914 18:14:53.510126   62996 kubeadm.go:310] 
	I0914 18:14:53.510293   62996 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0914 18:14:53.510346   62996 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0914 18:14:53.510386   62996 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0914 18:14:53.510394   62996 kubeadm.go:310] 
	I0914 18:14:53.510487   62996 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0914 18:14:53.510567   62996 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0914 18:14:53.510582   62996 kubeadm.go:310] 
	I0914 18:14:53.510758   62996 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0914 18:14:53.510852   62996 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0914 18:14:53.510953   62996 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0914 18:14:53.511074   62996 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0914 18:14:53.511085   62996 kubeadm.go:310] 
	I0914 18:14:53.511727   62996 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0914 18:14:53.511824   62996 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0914 18:14:53.511904   62996 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0914 18:14:53.512051   62996 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0914 18:14:53.512098   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0914 18:14:53.965324   62996 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 18:14:53.982028   62996 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0914 18:14:53.993640   62996 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0914 18:14:53.993674   62996 kubeadm.go:157] found existing configuration files:
	
	I0914 18:14:53.993745   62996 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0914 18:14:54.004600   62996 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0914 18:14:54.004669   62996 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0914 18:14:54.015315   62996 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0914 18:14:54.025727   62996 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0914 18:14:54.025795   62996 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0914 18:14:54.035619   62996 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0914 18:14:54.044936   62996 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0914 18:14:54.045003   62996 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0914 18:14:54.055091   62996 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0914 18:14:54.064576   62996 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0914 18:14:54.064630   62996 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0914 18:14:54.074698   62996 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0914 18:14:54.143625   62996 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0914 18:14:54.143712   62996 kubeadm.go:310] [preflight] Running pre-flight checks
	I0914 18:14:54.289361   62996 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0914 18:14:54.289488   62996 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0914 18:14:54.289629   62996 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0914 18:14:54.479052   62996 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0914 18:14:54.481175   62996 out.go:235]   - Generating certificates and keys ...
	I0914 18:14:54.481284   62996 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0914 18:14:54.481391   62996 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0914 18:14:54.481469   62996 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0914 18:14:54.481522   62996 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0914 18:14:54.481585   62996 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0914 18:14:54.481631   62996 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0914 18:14:54.481685   62996 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0914 18:14:54.481737   62996 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0914 18:14:54.481829   62996 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0914 18:14:54.481926   62996 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0914 18:14:54.481977   62996 kubeadm.go:310] [certs] Using the existing "sa" key
	I0914 18:14:54.482063   62996 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0914 18:14:54.695002   62996 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0914 18:14:54.850598   62996 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0914 18:14:54.964590   62996 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0914 18:14:55.108047   62996 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0914 18:14:55.126530   62996 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0914 18:14:55.128690   62996 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0914 18:14:55.128760   62996 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0914 18:14:55.272139   62996 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0914 18:14:55.274365   62996 out.go:235]   - Booting up control plane ...
	I0914 18:14:55.274529   62996 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0914 18:14:55.279796   62996 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0914 18:14:55.281097   62996 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0914 18:14:55.281998   62996 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0914 18:14:55.285620   62996 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0914 18:15:35.288294   62996 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0914 18:15:35.288485   62996 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0914 18:15:35.288693   62996 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0914 18:15:40.289032   62996 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0914 18:15:40.289327   62996 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0914 18:15:50.289795   62996 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0914 18:15:50.290023   62996 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0914 18:16:10.291201   62996 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0914 18:16:10.291427   62996 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0914 18:16:50.292253   62996 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0914 18:16:50.292481   62996 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0914 18:16:50.292503   62996 kubeadm.go:310] 
	I0914 18:16:50.292554   62996 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0914 18:16:50.292606   62996 kubeadm.go:310] 		timed out waiting for the condition
	I0914 18:16:50.292615   62996 kubeadm.go:310] 
	I0914 18:16:50.292654   62996 kubeadm.go:310] 	This error is likely caused by:
	I0914 18:16:50.292685   62996 kubeadm.go:310] 		- The kubelet is not running
	I0914 18:16:50.292773   62996 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0914 18:16:50.292780   62996 kubeadm.go:310] 
	I0914 18:16:50.292912   62996 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0914 18:16:50.292953   62996 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0914 18:16:50.292993   62996 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0914 18:16:50.293022   62996 kubeadm.go:310] 
	I0914 18:16:50.293176   62996 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0914 18:16:50.293293   62996 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0914 18:16:50.293308   62996 kubeadm.go:310] 
	I0914 18:16:50.293470   62996 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0914 18:16:50.293602   62996 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0914 18:16:50.293709   62996 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0914 18:16:50.293810   62996 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0914 18:16:50.293830   62996 kubeadm.go:310] 
	I0914 18:16:50.294646   62996 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0914 18:16:50.294759   62996 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0914 18:16:50.294871   62996 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0914 18:16:50.294910   62996 kubeadm.go:394] duration metric: took 7m56.82551772s to StartCluster
	I0914 18:16:50.294961   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:16:50.295021   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:16:50.341859   62996 cri.go:89] found id: ""
	I0914 18:16:50.341894   62996 logs.go:276] 0 containers: []
	W0914 18:16:50.341908   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:16:50.341916   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:16:50.341983   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:16:50.380725   62996 cri.go:89] found id: ""
	I0914 18:16:50.380755   62996 logs.go:276] 0 containers: []
	W0914 18:16:50.380766   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:16:50.380773   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:16:50.380842   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:16:50.415978   62996 cri.go:89] found id: ""
	I0914 18:16:50.416003   62996 logs.go:276] 0 containers: []
	W0914 18:16:50.416012   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:16:50.416017   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:16:50.416065   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:16:50.452823   62996 cri.go:89] found id: ""
	I0914 18:16:50.452859   62996 logs.go:276] 0 containers: []
	W0914 18:16:50.452872   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:16:50.452882   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:16:50.452939   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:16:50.487240   62996 cri.go:89] found id: ""
	I0914 18:16:50.487272   62996 logs.go:276] 0 containers: []
	W0914 18:16:50.487283   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:16:50.487291   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:16:50.487353   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:16:50.520690   62996 cri.go:89] found id: ""
	I0914 18:16:50.520719   62996 logs.go:276] 0 containers: []
	W0914 18:16:50.520728   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:16:50.520735   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:16:50.520783   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:16:50.558150   62996 cri.go:89] found id: ""
	I0914 18:16:50.558191   62996 logs.go:276] 0 containers: []
	W0914 18:16:50.558200   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:16:50.558206   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:16:50.558266   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:16:50.595843   62996 cri.go:89] found id: ""
	I0914 18:16:50.595879   62996 logs.go:276] 0 containers: []
	W0914 18:16:50.595893   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:16:50.595905   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:16:50.595920   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:16:50.650623   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:16:50.650659   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:16:50.664991   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:16:50.665018   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:16:50.747876   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:16:50.747899   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:16:50.747915   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:16:50.849314   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:16:50.849354   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0914 18:16:50.889101   62996 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0914 18:16:50.889181   62996 out.go:270] * 
	W0914 18:16:50.889263   62996 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0914 18:16:50.889287   62996 out.go:270] * 
	W0914 18:16:50.890531   62996 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0914 18:16:50.893666   62996 out.go:201] 
	W0914 18:16:50.894916   62996 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0914 18:16:50.894958   62996 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0914 18:16:50.894991   62996 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0914 18:16:50.896591   62996 out.go:201] 
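	A minimal sketch of the follow-up the output above suggests, assuming a shell on the affected node (e.g. via 'minikube ssh'); every command is taken from the hints printed in the log, and the last line simply retries the start with the cgroup-driver flag from the suggestion:
	
		# check whether the kubelet is running and why it may have exited
		systemctl status kubelet
		journalctl -xeu kubelet
	
		# list control-plane containers via CRI-O, as the kubeadm hint suggests
		sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	
		# retry with the kubelet cgroup driver forced to systemd (suggestion from the log)
		minikube start --extra-config=kubelet.cgroup-driver=systemd
	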
	
	
	==> CRI-O <==
	Sep 14 18:22:38 embed-certs-044534 crio[708]: time="2024-09-14 18:22:38.993592586Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726338158993568528,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f1166369-8837-4a07-b27a-275b2f04f680 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 14 18:22:38 embed-certs-044534 crio[708]: time="2024-09-14 18:22:38.994085856Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=accfa068-2f42-4ab4-94d4-7aecb36aee22 name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 18:22:38 embed-certs-044534 crio[708]: time="2024-09-14 18:22:38.994149092Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=accfa068-2f42-4ab4-94d4-7aecb36aee22 name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 18:22:38 embed-certs-044534 crio[708]: time="2024-09-14 18:22:38.994367207Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:3b14b9a711037df8e42120c5beb191e62b824ee3f02aed0ec4de6d1a920a4ee7,PodSandboxId:66f03c0f1657cf4d703f0f5390d3c9c4eafc7439c6913eb6d8fc6a05c90b5593,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726337608938220205,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dec7a14c-b6f7-464f-86b3-5f7d8063d8e0,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b95b9d14a386180ebaaf2e7d55a6720a2e06f5e3f48326dbbdc20cca60094616,PodSandboxId:016fce22989ba2ae3d83017b4759e956f77b59d76ff0b39fcf327d2d55b27c2c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726337608666749667,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-9j6sv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87c28a4b-015e-46b8-a462-9dc6ed06d914,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de161a601677d26aab41de3b70f5c946ce5ad13539a9cc8c4e9f7fc6c7010819,PodSandboxId:2d6b3768f9a201ea441754de60fdcab1ca0a67fec5228ddce39f64fd822f9e30,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726337608635114862,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-67dsl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4
80fe6ea-838d-4048-9893-947d43e7b5c9,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:40119c8929f7cbf9b816df426f17ac2164e896a437e012d83edc7580e923953a,PodSandboxId:3d14f1eab7b0abca7284d913d74042ac558e42f5dbcff4ce6d97397c44316979,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt
:1726337607996316063,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-26fx6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1cb48201-6caf-4787-9e27-a55885a8ae2a,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ccdf8eadda64b6b6babfa42af58c1d94c37998555c90afafb4e1e937fc7c731,PodSandboxId:e9867cb9d893b8f2dc536328cea646f82046121c7c6744c6781bed6b3a474169,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726337597029475608,Labels:map[string]
string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-044534,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d2b550cd716e95b332bdb65907bdbd9,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:981d6d37d393f75c48cd019ff9f6aaee770530225cb6d7e9a9024ea0b992119a,PodSandboxId:0cfe17366efed733539406b2369fa606059fda4d5a13646c03bb5dddc874943a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726337597014443666,Labels:map[string]string{io.kubernetes.container.n
ame: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-044534,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf5a2fe77fb890757a786aa9dbe2a0c9,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5752f872d26f493621e8280eb584628fc158fc7d4ee8a9bb67089f1dceda4fb9,PodSandboxId:a1ddfd902a1565bdee0e0bd82703b80409e823af9c1cc583800b57735628d8ad,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726337596969697094,Labels:map[string]string{io.kubernetes.container.name: kube-ap
iserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-044534,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c5ac7aebee65c9fd4b6b78f515e5f745,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d03e829cc4e307f424a17ddbab91a71a7239034ddbd590147c65613b14c9843d,PodSandboxId:046b0a9a021643e1b6277c8704a9c9c5bd1617bb243b63187506f291730f193e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726337596935665198,Labels:map[string]string{io.kubernetes.container.name: kube-contr
oller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-044534,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3130078de51e006954a7a6d2abf41ca0,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bdca84f70f07469a54bb47f68b5986eebf504d6277d68e4f03900b0a5335e0d9,PodSandboxId:06882a4abff9d7941f8588834b6d83bf2e7880ac74e12f4566ce3687118c4687,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726337312905613730,Labels:map[string]string{io.kubernetes.container.name: kube-
apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-044534,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c5ac7aebee65c9fd4b6b78f515e5f745,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=accfa068-2f42-4ab4-94d4-7aecb36aee22 name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 18:22:39 embed-certs-044534 crio[708]: time="2024-09-14 18:22:39.033885354Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=08e6a369-f9e1-4497-ab99-bb94aa4e9f07 name=/runtime.v1.RuntimeService/Version
	Sep 14 18:22:39 embed-certs-044534 crio[708]: time="2024-09-14 18:22:39.034012599Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=08e6a369-f9e1-4497-ab99-bb94aa4e9f07 name=/runtime.v1.RuntimeService/Version
	Sep 14 18:22:39 embed-certs-044534 crio[708]: time="2024-09-14 18:22:39.035609084Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=498bf0cf-dcbe-47e2-b812-e8c3401921c9 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 14 18:22:39 embed-certs-044534 crio[708]: time="2024-09-14 18:22:39.036076334Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726338159036051120,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=498bf0cf-dcbe-47e2-b812-e8c3401921c9 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 14 18:22:39 embed-certs-044534 crio[708]: time="2024-09-14 18:22:39.036796281Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a1c64c38-d041-4de2-8930-ff7ab3084df7 name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 18:22:39 embed-certs-044534 crio[708]: time="2024-09-14 18:22:39.036865805Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a1c64c38-d041-4de2-8930-ff7ab3084df7 name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 18:22:39 embed-certs-044534 crio[708]: time="2024-09-14 18:22:39.037123255Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:3b14b9a711037df8e42120c5beb191e62b824ee3f02aed0ec4de6d1a920a4ee7,PodSandboxId:66f03c0f1657cf4d703f0f5390d3c9c4eafc7439c6913eb6d8fc6a05c90b5593,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726337608938220205,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dec7a14c-b6f7-464f-86b3-5f7d8063d8e0,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b95b9d14a386180ebaaf2e7d55a6720a2e06f5e3f48326dbbdc20cca60094616,PodSandboxId:016fce22989ba2ae3d83017b4759e956f77b59d76ff0b39fcf327d2d55b27c2c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726337608666749667,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-9j6sv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87c28a4b-015e-46b8-a462-9dc6ed06d914,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de161a601677d26aab41de3b70f5c946ce5ad13539a9cc8c4e9f7fc6c7010819,PodSandboxId:2d6b3768f9a201ea441754de60fdcab1ca0a67fec5228ddce39f64fd822f9e30,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726337608635114862,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-67dsl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4
80fe6ea-838d-4048-9893-947d43e7b5c9,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:40119c8929f7cbf9b816df426f17ac2164e896a437e012d83edc7580e923953a,PodSandboxId:3d14f1eab7b0abca7284d913d74042ac558e42f5dbcff4ce6d97397c44316979,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt
:1726337607996316063,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-26fx6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1cb48201-6caf-4787-9e27-a55885a8ae2a,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ccdf8eadda64b6b6babfa42af58c1d94c37998555c90afafb4e1e937fc7c731,PodSandboxId:e9867cb9d893b8f2dc536328cea646f82046121c7c6744c6781bed6b3a474169,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726337597029475608,Labels:map[string]
string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-044534,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d2b550cd716e95b332bdb65907bdbd9,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:981d6d37d393f75c48cd019ff9f6aaee770530225cb6d7e9a9024ea0b992119a,PodSandboxId:0cfe17366efed733539406b2369fa606059fda4d5a13646c03bb5dddc874943a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726337597014443666,Labels:map[string]string{io.kubernetes.container.n
ame: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-044534,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf5a2fe77fb890757a786aa9dbe2a0c9,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5752f872d26f493621e8280eb584628fc158fc7d4ee8a9bb67089f1dceda4fb9,PodSandboxId:a1ddfd902a1565bdee0e0bd82703b80409e823af9c1cc583800b57735628d8ad,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726337596969697094,Labels:map[string]string{io.kubernetes.container.name: kube-ap
iserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-044534,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c5ac7aebee65c9fd4b6b78f515e5f745,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d03e829cc4e307f424a17ddbab91a71a7239034ddbd590147c65613b14c9843d,PodSandboxId:046b0a9a021643e1b6277c8704a9c9c5bd1617bb243b63187506f291730f193e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726337596935665198,Labels:map[string]string{io.kubernetes.container.name: kube-contr
oller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-044534,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3130078de51e006954a7a6d2abf41ca0,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bdca84f70f07469a54bb47f68b5986eebf504d6277d68e4f03900b0a5335e0d9,PodSandboxId:06882a4abff9d7941f8588834b6d83bf2e7880ac74e12f4566ce3687118c4687,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726337312905613730,Labels:map[string]string{io.kubernetes.container.name: kube-
apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-044534,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c5ac7aebee65c9fd4b6b78f515e5f745,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a1c64c38-d041-4de2-8930-ff7ab3084df7 name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 18:22:39 embed-certs-044534 crio[708]: time="2024-09-14 18:22:39.074766060Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=1d3234c0-da75-44bd-b1a9-1abd3308ea0f name=/runtime.v1.RuntimeService/Version
	Sep 14 18:22:39 embed-certs-044534 crio[708]: time="2024-09-14 18:22:39.074855247Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=1d3234c0-da75-44bd-b1a9-1abd3308ea0f name=/runtime.v1.RuntimeService/Version
	Sep 14 18:22:39 embed-certs-044534 crio[708]: time="2024-09-14 18:22:39.076156396Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=bc083bc7-b839-4301-aa66-a24c001cc570 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 14 18:22:39 embed-certs-044534 crio[708]: time="2024-09-14 18:22:39.076714208Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726338159076682263,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=bc083bc7-b839-4301-aa66-a24c001cc570 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 14 18:22:39 embed-certs-044534 crio[708]: time="2024-09-14 18:22:39.077551272Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0976bc68-e407-40f7-9a2b-6024ec153cc2 name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 18:22:39 embed-certs-044534 crio[708]: time="2024-09-14 18:22:39.077635411Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0976bc68-e407-40f7-9a2b-6024ec153cc2 name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 18:22:39 embed-certs-044534 crio[708]: time="2024-09-14 18:22:39.077863505Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:3b14b9a711037df8e42120c5beb191e62b824ee3f02aed0ec4de6d1a920a4ee7,PodSandboxId:66f03c0f1657cf4d703f0f5390d3c9c4eafc7439c6913eb6d8fc6a05c90b5593,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726337608938220205,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dec7a14c-b6f7-464f-86b3-5f7d8063d8e0,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b95b9d14a386180ebaaf2e7d55a6720a2e06f5e3f48326dbbdc20cca60094616,PodSandboxId:016fce22989ba2ae3d83017b4759e956f77b59d76ff0b39fcf327d2d55b27c2c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726337608666749667,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-9j6sv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87c28a4b-015e-46b8-a462-9dc6ed06d914,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de161a601677d26aab41de3b70f5c946ce5ad13539a9cc8c4e9f7fc6c7010819,PodSandboxId:2d6b3768f9a201ea441754de60fdcab1ca0a67fec5228ddce39f64fd822f9e30,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726337608635114862,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-67dsl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4
80fe6ea-838d-4048-9893-947d43e7b5c9,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:40119c8929f7cbf9b816df426f17ac2164e896a437e012d83edc7580e923953a,PodSandboxId:3d14f1eab7b0abca7284d913d74042ac558e42f5dbcff4ce6d97397c44316979,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt
:1726337607996316063,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-26fx6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1cb48201-6caf-4787-9e27-a55885a8ae2a,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ccdf8eadda64b6b6babfa42af58c1d94c37998555c90afafb4e1e937fc7c731,PodSandboxId:e9867cb9d893b8f2dc536328cea646f82046121c7c6744c6781bed6b3a474169,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726337597029475608,Labels:map[string]
string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-044534,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d2b550cd716e95b332bdb65907bdbd9,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:981d6d37d393f75c48cd019ff9f6aaee770530225cb6d7e9a9024ea0b992119a,PodSandboxId:0cfe17366efed733539406b2369fa606059fda4d5a13646c03bb5dddc874943a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726337597014443666,Labels:map[string]string{io.kubernetes.container.n
ame: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-044534,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf5a2fe77fb890757a786aa9dbe2a0c9,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5752f872d26f493621e8280eb584628fc158fc7d4ee8a9bb67089f1dceda4fb9,PodSandboxId:a1ddfd902a1565bdee0e0bd82703b80409e823af9c1cc583800b57735628d8ad,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726337596969697094,Labels:map[string]string{io.kubernetes.container.name: kube-ap
iserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-044534,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c5ac7aebee65c9fd4b6b78f515e5f745,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d03e829cc4e307f424a17ddbab91a71a7239034ddbd590147c65613b14c9843d,PodSandboxId:046b0a9a021643e1b6277c8704a9c9c5bd1617bb243b63187506f291730f193e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726337596935665198,Labels:map[string]string{io.kubernetes.container.name: kube-contr
oller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-044534,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3130078de51e006954a7a6d2abf41ca0,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bdca84f70f07469a54bb47f68b5986eebf504d6277d68e4f03900b0a5335e0d9,PodSandboxId:06882a4abff9d7941f8588834b6d83bf2e7880ac74e12f4566ce3687118c4687,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726337312905613730,Labels:map[string]string{io.kubernetes.container.name: kube-
apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-044534,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c5ac7aebee65c9fd4b6b78f515e5f745,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=0976bc68-e407-40f7-9a2b-6024ec153cc2 name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 18:22:39 embed-certs-044534 crio[708]: time="2024-09-14 18:22:39.119133898Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=66345853-c5db-4938-9029-9dfe23d49cb4 name=/runtime.v1.RuntimeService/Version
	Sep 14 18:22:39 embed-certs-044534 crio[708]: time="2024-09-14 18:22:39.119310436Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=66345853-c5db-4938-9029-9dfe23d49cb4 name=/runtime.v1.RuntimeService/Version
	Sep 14 18:22:39 embed-certs-044534 crio[708]: time="2024-09-14 18:22:39.121061040Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=57ce1d04-a62f-41b5-b12d-84a0eef67cef name=/runtime.v1.ImageService/ImageFsInfo
	Sep 14 18:22:39 embed-certs-044534 crio[708]: time="2024-09-14 18:22:39.121814701Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726338159121785317,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=57ce1d04-a62f-41b5-b12d-84a0eef67cef name=/runtime.v1.ImageService/ImageFsInfo
	Sep 14 18:22:39 embed-certs-044534 crio[708]: time="2024-09-14 18:22:39.122356869Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d63e81e1-d72d-4157-8a44-0cb3972a28db name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 18:22:39 embed-certs-044534 crio[708]: time="2024-09-14 18:22:39.122407887Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d63e81e1-d72d-4157-8a44-0cb3972a28db name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 18:22:39 embed-certs-044534 crio[708]: time="2024-09-14 18:22:39.122689033Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:3b14b9a711037df8e42120c5beb191e62b824ee3f02aed0ec4de6d1a920a4ee7,PodSandboxId:66f03c0f1657cf4d703f0f5390d3c9c4eafc7439c6913eb6d8fc6a05c90b5593,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726337608938220205,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dec7a14c-b6f7-464f-86b3-5f7d8063d8e0,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b95b9d14a386180ebaaf2e7d55a6720a2e06f5e3f48326dbbdc20cca60094616,PodSandboxId:016fce22989ba2ae3d83017b4759e956f77b59d76ff0b39fcf327d2d55b27c2c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726337608666749667,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-9j6sv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87c28a4b-015e-46b8-a462-9dc6ed06d914,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de161a601677d26aab41de3b70f5c946ce5ad13539a9cc8c4e9f7fc6c7010819,PodSandboxId:2d6b3768f9a201ea441754de60fdcab1ca0a67fec5228ddce39f64fd822f9e30,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726337608635114862,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-67dsl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4
80fe6ea-838d-4048-9893-947d43e7b5c9,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:40119c8929f7cbf9b816df426f17ac2164e896a437e012d83edc7580e923953a,PodSandboxId:3d14f1eab7b0abca7284d913d74042ac558e42f5dbcff4ce6d97397c44316979,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt
:1726337607996316063,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-26fx6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1cb48201-6caf-4787-9e27-a55885a8ae2a,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ccdf8eadda64b6b6babfa42af58c1d94c37998555c90afafb4e1e937fc7c731,PodSandboxId:e9867cb9d893b8f2dc536328cea646f82046121c7c6744c6781bed6b3a474169,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726337597029475608,Labels:map[string]
string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-044534,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d2b550cd716e95b332bdb65907bdbd9,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:981d6d37d393f75c48cd019ff9f6aaee770530225cb6d7e9a9024ea0b992119a,PodSandboxId:0cfe17366efed733539406b2369fa606059fda4d5a13646c03bb5dddc874943a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726337597014443666,Labels:map[string]string{io.kubernetes.container.n
ame: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-044534,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf5a2fe77fb890757a786aa9dbe2a0c9,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5752f872d26f493621e8280eb584628fc158fc7d4ee8a9bb67089f1dceda4fb9,PodSandboxId:a1ddfd902a1565bdee0e0bd82703b80409e823af9c1cc583800b57735628d8ad,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726337596969697094,Labels:map[string]string{io.kubernetes.container.name: kube-ap
iserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-044534,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c5ac7aebee65c9fd4b6b78f515e5f745,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d03e829cc4e307f424a17ddbab91a71a7239034ddbd590147c65613b14c9843d,PodSandboxId:046b0a9a021643e1b6277c8704a9c9c5bd1617bb243b63187506f291730f193e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726337596935665198,Labels:map[string]string{io.kubernetes.container.name: kube-contr
oller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-044534,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3130078de51e006954a7a6d2abf41ca0,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bdca84f70f07469a54bb47f68b5986eebf504d6277d68e4f03900b0a5335e0d9,PodSandboxId:06882a4abff9d7941f8588834b6d83bf2e7880ac74e12f4566ce3687118c4687,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726337312905613730,Labels:map[string]string{io.kubernetes.container.name: kube-
apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-044534,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c5ac7aebee65c9fd4b6b78f515e5f745,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d63e81e1-d72d-4157-8a44-0cb3972a28db name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	3b14b9a711037       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   9 minutes ago       Running             storage-provisioner       0                   66f03c0f1657c       storage-provisioner
	b95b9d14a3861       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   9 minutes ago       Running             coredns                   0                   016fce22989ba       coredns-7c65d6cfc9-9j6sv
	de161a601677d       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   9 minutes ago       Running             coredns                   0                   2d6b3768f9a20       coredns-7c65d6cfc9-67dsl
	40119c8929f7c       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561   9 minutes ago       Running             kube-proxy                0                   3d14f1eab7b0a       kube-proxy-26fx6
	0ccdf8eadda64       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   9 minutes ago       Running             etcd                      2                   e9867cb9d893b       etcd-embed-certs-044534
	981d6d37d393f       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b   9 minutes ago       Running             kube-scheduler            2                   0cfe17366efed       kube-scheduler-embed-certs-044534
	5752f872d26f4       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee   9 minutes ago       Running             kube-apiserver            2                   a1ddfd902a156       kube-apiserver-embed-certs-044534
	d03e829cc4e30       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1   9 minutes ago       Running             kube-controller-manager   2                   046b0a9a02164       kube-controller-manager-embed-certs-044534
	bdca84f70f074       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee   14 minutes ago      Exited              kube-apiserver            1                   06882a4abff9d       kube-apiserver-embed-certs-044534
	
	
	==> coredns [b95b9d14a386180ebaaf2e7d55a6720a2e06f5e3f48326dbbdc20cca60094616] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> coredns [de161a601677d26aab41de3b70f5c946ce5ad13539a9cc8c4e9f7fc6c7010819] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> describe nodes <==
	Name:               embed-certs-044534
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-044534
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fbeeb744274463b05401c917e5ab21bbaf5ef95a
	                    minikube.k8s.io/name=embed-certs-044534
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_14T18_13_23_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 14 Sep 2024 18:13:19 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-044534
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 14 Sep 2024 18:22:32 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 14 Sep 2024 18:18:39 +0000   Sat, 14 Sep 2024 18:13:17 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 14 Sep 2024 18:18:39 +0000   Sat, 14 Sep 2024 18:13:17 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 14 Sep 2024 18:18:39 +0000   Sat, 14 Sep 2024 18:13:17 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 14 Sep 2024 18:18:39 +0000   Sat, 14 Sep 2024 18:13:20 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.126
	  Hostname:    embed-certs-044534
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 fa46e8db94cc40c4b0205a2a3853f385
	  System UUID:                fa46e8db-94cc-40c4-b020-5a2a3853f385
	  Boot ID:                    f5ab6040-5102-4ce0-acbf-20cfd0e231bd
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7c65d6cfc9-67dsl                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m12s
	  kube-system                 coredns-7c65d6cfc9-9j6sv                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m12s
	  kube-system                 etcd-embed-certs-044534                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         9m18s
	  kube-system                 kube-apiserver-embed-certs-044534             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m17s
	  kube-system                 kube-controller-manager-embed-certs-044534    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m17s
	  kube-system                 kube-proxy-26fx6                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m12s
	  kube-system                 kube-scheduler-embed-certs-044534             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m17s
	  kube-system                 metrics-server-6867b74b74-rrfnt               100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         9m11s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m11s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 9m10s  kube-proxy       
	  Normal  Starting                 9m17s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  9m17s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  9m17s  kubelet          Node embed-certs-044534 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m17s  kubelet          Node embed-certs-044534 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m17s  kubelet          Node embed-certs-044534 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           9m13s  node-controller  Node embed-certs-044534 event: Registered Node embed-certs-044534 in Controller
	
	
	==> dmesg <==
	[  +0.051128] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.036985] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.772461] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.959939] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.579698] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000004] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.245212] systemd-fstab-generator[629]: Ignoring "noauto" option for root device
	[  +0.062058] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.078736] systemd-fstab-generator[641]: Ignoring "noauto" option for root device
	[  +0.195418] systemd-fstab-generator[655]: Ignoring "noauto" option for root device
	[  +0.127527] systemd-fstab-generator[667]: Ignoring "noauto" option for root device
	[  +0.293220] systemd-fstab-generator[697]: Ignoring "noauto" option for root device
	[  +4.057528] systemd-fstab-generator[791]: Ignoring "noauto" option for root device
	[  +2.006304] systemd-fstab-generator[912]: Ignoring "noauto" option for root device
	[  +0.064752] kauditd_printk_skb: 158 callbacks suppressed
	[  +5.545228] kauditd_printk_skb: 69 callbacks suppressed
	[  +6.949076] kauditd_printk_skb: 85 callbacks suppressed
	[Sep14 18:13] systemd-fstab-generator[2570]: Ignoring "noauto" option for root device
	[  +0.065962] kauditd_printk_skb: 8 callbacks suppressed
	[  +5.985638] systemd-fstab-generator[2893]: Ignoring "noauto" option for root device
	[  +0.085570] kauditd_printk_skb: 54 callbacks suppressed
	[  +4.783605] systemd-fstab-generator[3019]: Ignoring "noauto" option for root device
	[  +0.785968] kauditd_printk_skb: 34 callbacks suppressed
	[Sep14 18:14] kauditd_printk_skb: 64 callbacks suppressed
	
	
	==> etcd [0ccdf8eadda64b6b6babfa42af58c1d94c37998555c90afafb4e1e937fc7c731] <==
	{"level":"info","ts":"2024-09-14T18:13:17.426186Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-09-14T18:13:17.426287Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-09-14T18:13:17.426303Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-09-14T18:13:17.429239Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1031fe77cc914812 switched to configuration voters=(1166993568952305682)"}
	{"level":"info","ts":"2024-09-14T18:13:17.431137Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"4ddc981c9374e971","local-member-id":"1031fe77cc914812","added-peer-id":"1031fe77cc914812","added-peer-peer-urls":["https://192.168.50.126:2380"]}
	{"level":"info","ts":"2024-09-14T18:13:18.269702Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1031fe77cc914812 is starting a new election at term 1"}
	{"level":"info","ts":"2024-09-14T18:13:18.269863Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1031fe77cc914812 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-09-14T18:13:18.269906Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1031fe77cc914812 received MsgPreVoteResp from 1031fe77cc914812 at term 1"}
	{"level":"info","ts":"2024-09-14T18:13:18.269944Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1031fe77cc914812 became candidate at term 2"}
	{"level":"info","ts":"2024-09-14T18:13:18.270062Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1031fe77cc914812 received MsgVoteResp from 1031fe77cc914812 at term 2"}
	{"level":"info","ts":"2024-09-14T18:13:18.270107Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1031fe77cc914812 became leader at term 2"}
	{"level":"info","ts":"2024-09-14T18:13:18.270140Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 1031fe77cc914812 elected leader 1031fe77cc914812 at term 2"}
	{"level":"info","ts":"2024-09-14T18:13:18.271572Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"1031fe77cc914812","local-member-attributes":"{Name:embed-certs-044534 ClientURLs:[https://192.168.50.126:2379]}","request-path":"/0/members/1031fe77cc914812/attributes","cluster-id":"4ddc981c9374e971","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-14T18:13:18.271788Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-14T18:13:18.271921Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-14T18:13:18.272586Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-14T18:13:18.272656Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-14T18:13:18.272719Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"4ddc981c9374e971","local-member-id":"1031fe77cc914812","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-14T18:13:18.272834Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-14T18:13:18.272879Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-14T18:13:18.272916Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-14T18:13:18.273724Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-14T18:13:18.274558Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.126:2379"}
	{"level":"info","ts":"2024-09-14T18:13:18.280661Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-14T18:13:18.281459Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 18:22:39 up 14 min,  0 users,  load average: 0.05, 0.25, 0.22
	Linux embed-certs-044534 5.10.207 #1 SMP Sat Sep 14 07:35:46 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [5752f872d26f493621e8280eb584628fc158fc7d4ee8a9bb67089f1dceda4fb9] <==
	W0914 18:18:20.652076       1 handler_proxy.go:99] no RequestInfo found in the context
	E0914 18:18:20.652211       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0914 18:18:20.653101       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0914 18:18:20.654268       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0914 18:19:20.654298       1 handler_proxy.go:99] no RequestInfo found in the context
	W0914 18:19:20.654647       1 handler_proxy.go:99] no RequestInfo found in the context
	E0914 18:19:20.654752       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E0914 18:19:20.654820       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0914 18:19:20.656012       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0914 18:19:20.656104       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0914 18:21:20.656571       1 handler_proxy.go:99] no RequestInfo found in the context
	E0914 18:21:20.656759       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0914 18:21:20.656872       1 handler_proxy.go:99] no RequestInfo found in the context
	E0914 18:21:20.656886       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0914 18:21:20.657912       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0914 18:21:20.658035       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-apiserver [bdca84f70f07469a54bb47f68b5986eebf504d6277d68e4f03900b0a5335e0d9] <==
	W0914 18:13:12.564091       1 logging.go:55] [core] [Channel #37 SubChannel #38]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0914 18:13:12.664764       1 logging.go:55] [core] [Channel #172 SubChannel #173]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0914 18:13:12.685765       1 logging.go:55] [core] [Channel #28 SubChannel #29]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0914 18:13:12.785670       1 logging.go:55] [core] [Channel #121 SubChannel #122]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0914 18:13:12.826266       1 logging.go:55] [core] [Channel #46 SubChannel #47]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0914 18:13:12.871831       1 logging.go:55] [core] [Channel #142 SubChannel #143]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0914 18:13:12.891300       1 logging.go:55] [core] [Channel #85 SubChannel #86]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0914 18:13:12.982121       1 logging.go:55] [core] [Channel #97 SubChannel #98]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0914 18:13:13.009797       1 logging.go:55] [core] [Channel #40 SubChannel #41]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0914 18:13:13.055644       1 logging.go:55] [core] [Channel #25 SubChannel #26]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0914 18:13:13.073390       1 logging.go:55] [core] [Channel #169 SubChannel #170]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0914 18:13:13.076776       1 logging.go:55] [core] [Channel #55 SubChannel #56]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0914 18:13:13.097716       1 logging.go:55] [core] [Channel #133 SubChannel #134]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0914 18:13:13.130255       1 logging.go:55] [core] [Channel #67 SubChannel #68]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0914 18:13:13.154393       1 logging.go:55] [core] [Channel #157 SubChannel #158]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0914 18:13:13.218489       1 logging.go:55] [core] [Channel #76 SubChannel #77]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0914 18:13:13.285422       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0914 18:13:13.355212       1 logging.go:55] [core] [Channel #136 SubChannel #137]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0914 18:13:13.444413       1 logging.go:55] [core] [Channel #124 SubChannel #125]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0914 18:13:13.544881       1 logging.go:55] [core] [Channel #163 SubChannel #164]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0914 18:13:13.555482       1 logging.go:55] [core] [Channel #100 SubChannel #101]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0914 18:13:13.566884       1 logging.go:55] [core] [Channel #49 SubChannel #50]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0914 18:13:13.685456       1 logging.go:55] [core] [Channel #118 SubChannel #119]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0914 18:13:13.992142       1 logging.go:55] [core] [Channel #58 SubChannel #59]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0914 18:13:14.676904       1 logging.go:55] [core] [Channel #208 SubChannel #209]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [d03e829cc4e307f424a17ddbab91a71a7239034ddbd590147c65613b14c9843d] <==
	E0914 18:17:26.627602       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0914 18:17:27.103266       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0914 18:17:56.633796       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0914 18:17:57.118553       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0914 18:18:26.640437       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0914 18:18:27.127279       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0914 18:18:39.422715       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="embed-certs-044534"
	E0914 18:18:56.647678       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0914 18:18:57.135758       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0914 18:19:26.654870       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0914 18:19:27.144279       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0914 18:19:35.396819       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="231.923µs"
	I0914 18:19:46.396579       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="52.308µs"
	E0914 18:19:56.661489       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0914 18:19:57.162614       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0914 18:20:26.668542       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0914 18:20:27.173763       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0914 18:20:56.675756       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0914 18:20:57.184419       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0914 18:21:26.682485       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0914 18:21:27.195140       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0914 18:21:56.690087       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0914 18:21:57.211740       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0914 18:22:26.697027       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0914 18:22:27.219778       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [40119c8929f7cbf9b816df426f17ac2164e896a437e012d83edc7580e923953a] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0914 18:13:28.911596       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0914 18:13:28.998254       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.50.126"]
	E0914 18:13:28.998344       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0914 18:13:29.128867       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0914 18:13:29.129043       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0914 18:13:29.129070       1 server_linux.go:169] "Using iptables Proxier"
	I0914 18:13:29.133612       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0914 18:13:29.133941       1 server.go:483] "Version info" version="v1.31.1"
	I0914 18:13:29.133969       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0914 18:13:29.135773       1 config.go:199] "Starting service config controller"
	I0914 18:13:29.135834       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0914 18:13:29.135883       1 config.go:105] "Starting endpoint slice config controller"
	I0914 18:13:29.135889       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0914 18:13:29.137505       1 config.go:328] "Starting node config controller"
	I0914 18:13:29.137581       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0914 18:13:29.236342       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0914 18:13:29.236405       1 shared_informer.go:320] Caches are synced for service config
	I0914 18:13:29.237797       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [981d6d37d393f75c48cd019ff9f6aaee770530225cb6d7e9a9024ea0b992119a] <==
	W0914 18:13:19.718559       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0914 18:13:19.722146       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0914 18:13:19.718590       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0914 18:13:19.722229       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0914 18:13:19.718646       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0914 18:13:19.722290       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0914 18:13:20.533080       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0914 18:13:20.533119       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0914 18:13:20.568599       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0914 18:13:20.568656       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0914 18:13:20.589321       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0914 18:13:20.589524       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0914 18:13:20.744197       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0914 18:13:20.744374       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0914 18:13:20.844248       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0914 18:13:20.845514       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0914 18:13:20.956587       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0914 18:13:20.956747       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0914 18:13:20.959907       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0914 18:13:20.960083       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0914 18:13:20.981948       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0914 18:13:20.982197       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0914 18:13:21.014514       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0914 18:13:21.014728       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0914 18:13:23.108805       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 14 18:21:31 embed-certs-044534 kubelet[2900]: E0914 18:21:31.381215    2900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-rrfnt" podUID="a1deacaa-9b90-49ac-8b8b-0bd909b5f6e0"
	Sep 14 18:21:32 embed-certs-044534 kubelet[2900]: E0914 18:21:32.544061    2900 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726338092543680643,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 18:21:32 embed-certs-044534 kubelet[2900]: E0914 18:21:32.544524    2900 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726338092543680643,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 18:21:42 embed-certs-044534 kubelet[2900]: E0914 18:21:42.381749    2900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-rrfnt" podUID="a1deacaa-9b90-49ac-8b8b-0bd909b5f6e0"
	Sep 14 18:21:42 embed-certs-044534 kubelet[2900]: E0914 18:21:42.546394    2900 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726338102545973497,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 18:21:42 embed-certs-044534 kubelet[2900]: E0914 18:21:42.546485    2900 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726338102545973497,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 18:21:52 embed-certs-044534 kubelet[2900]: E0914 18:21:52.548173    2900 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726338112547742599,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 18:21:52 embed-certs-044534 kubelet[2900]: E0914 18:21:52.548458    2900 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726338112547742599,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 18:21:57 embed-certs-044534 kubelet[2900]: E0914 18:21:57.381947    2900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-rrfnt" podUID="a1deacaa-9b90-49ac-8b8b-0bd909b5f6e0"
	Sep 14 18:22:02 embed-certs-044534 kubelet[2900]: E0914 18:22:02.550203    2900 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726338122549578150,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 18:22:02 embed-certs-044534 kubelet[2900]: E0914 18:22:02.550518    2900 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726338122549578150,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 18:22:08 embed-certs-044534 kubelet[2900]: E0914 18:22:08.383474    2900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-rrfnt" podUID="a1deacaa-9b90-49ac-8b8b-0bd909b5f6e0"
	Sep 14 18:22:12 embed-certs-044534 kubelet[2900]: E0914 18:22:12.552735    2900 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726338132552316590,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 18:22:12 embed-certs-044534 kubelet[2900]: E0914 18:22:12.553172    2900 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726338132552316590,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 18:22:19 embed-certs-044534 kubelet[2900]: E0914 18:22:19.381755    2900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-rrfnt" podUID="a1deacaa-9b90-49ac-8b8b-0bd909b5f6e0"
	Sep 14 18:22:22 embed-certs-044534 kubelet[2900]: E0914 18:22:22.405766    2900 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 14 18:22:22 embed-certs-044534 kubelet[2900]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 14 18:22:22 embed-certs-044534 kubelet[2900]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 14 18:22:22 embed-certs-044534 kubelet[2900]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 14 18:22:22 embed-certs-044534 kubelet[2900]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 14 18:22:22 embed-certs-044534 kubelet[2900]: E0914 18:22:22.555266    2900 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726338142554740196,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 18:22:22 embed-certs-044534 kubelet[2900]: E0914 18:22:22.555295    2900 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726338142554740196,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 18:22:30 embed-certs-044534 kubelet[2900]: E0914 18:22:30.381601    2900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-rrfnt" podUID="a1deacaa-9b90-49ac-8b8b-0bd909b5f6e0"
	Sep 14 18:22:32 embed-certs-044534 kubelet[2900]: E0914 18:22:32.557619    2900 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726338152557208754,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 18:22:32 embed-certs-044534 kubelet[2900]: E0914 18:22:32.558105    2900 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726338152557208754,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [3b14b9a711037df8e42120c5beb191e62b824ee3f02aed0ec4de6d1a920a4ee7] <==
	I0914 18:13:29.146841       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0914 18:13:29.157570       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0914 18:13:29.157723       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0914 18:13:29.169588       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0914 18:13:29.170835       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-044534_7c3d1e0d-2f84-4629-82f0-d1eff9a375d1!
	I0914 18:13:29.172796       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"3e386ff4-ba65-44bc-ad68-ca726a1bd2ed", APIVersion:"v1", ResourceVersion:"391", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-044534_7c3d1e0d-2f84-4629-82f0-d1eff9a375d1 became leader
	I0914 18:13:29.271496       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-044534_7c3d1e0d-2f84-4629-82f0-d1eff9a375d1!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-044534 -n embed-certs-044534
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-044534 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-6867b74b74-rrfnt
helpers_test.go:274: ======> post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context embed-certs-044534 describe pod metrics-server-6867b74b74-rrfnt
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context embed-certs-044534 describe pod metrics-server-6867b74b74-rrfnt: exit status 1 (99.46787ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-6867b74b74-rrfnt" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context embed-certs-044534 describe pod metrics-server-6867b74b74-rrfnt: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (544.38s)
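For reference, a rough manual equivalent of the wait this test performs against the embed-certs-044534 profile (a sketch only, not part of the test output: the kubernetes-dashboard namespace and k8s-app=kubernetes-dashboard selector are assumed to match the sibling default-k8s-diff-port test below, and the 540s timeout mirrors the test's ~9m wait budget):

	kubectl --context embed-certs-044534 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard
	kubectl --context embed-certs-044534 -n kubernetes-dashboard wait pod -l k8s-app=kubernetes-dashboard --for=condition=Ready --timeout=540s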

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (544.3s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E0914 18:14:04.947437   16016 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/functional-764671/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-243449 -n default-k8s-diff-port-243449
start_stop_delete_test.go:274: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-09-14 18:22:44.335275275 +0000 UTC m=+5938.857009254
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-243449 -n default-k8s-diff-port-243449
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-243449 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-243449 logs -n 25: (2.145083673s)
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p stopped-upgrade-319416                              | stopped-upgrade-319416       | jenkins | v1.34.0 | 14 Sep 24 18:00 UTC | 14 Sep 24 18:00 UTC |
	| start   | -p embed-certs-044534                                  | embed-certs-044534           | jenkins | v1.34.0 | 14 Sep 24 18:00 UTC | 14 Sep 24 18:01 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-168587             | no-preload-168587            | jenkins | v1.34.0 | 14 Sep 24 18:00 UTC | 14 Sep 24 18:00 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-168587                                   | no-preload-168587            | jenkins | v1.34.0 | 14 Sep 24 18:00 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-044534            | embed-certs-044534           | jenkins | v1.34.0 | 14 Sep 24 18:01 UTC | 14 Sep 24 18:01 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-044534                                  | embed-certs-044534           | jenkins | v1.34.0 | 14 Sep 24 18:01 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-470019                           | kubernetes-upgrade-470019    | jenkins | v1.34.0 | 14 Sep 24 18:02 UTC | 14 Sep 24 18:02 UTC |
	| start   | -p kubernetes-upgrade-470019                           | kubernetes-upgrade-470019    | jenkins | v1.34.0 | 14 Sep 24 18:02 UTC | 14 Sep 24 18:02 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-470019                           | kubernetes-upgrade-470019    | jenkins | v1.34.0 | 14 Sep 24 18:02 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-470019                           | kubernetes-upgrade-470019    | jenkins | v1.34.0 | 14 Sep 24 18:02 UTC | 14 Sep 24 18:02 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-470019                           | kubernetes-upgrade-470019    | jenkins | v1.34.0 | 14 Sep 24 18:03 UTC | 14 Sep 24 18:03 UTC |
	| delete  | -p                                                     | disable-driver-mounts-444413 | jenkins | v1.34.0 | 14 Sep 24 18:03 UTC | 14 Sep 24 18:03 UTC |
	|         | disable-driver-mounts-444413                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-243449 | jenkins | v1.34.0 | 14 Sep 24 18:03 UTC | 14 Sep 24 18:03 UTC |
	|         | default-k8s-diff-port-243449                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-556121        | old-k8s-version-556121       | jenkins | v1.34.0 | 14 Sep 24 18:03 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-168587                  | no-preload-168587            | jenkins | v1.34.0 | 14 Sep 24 18:03 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-168587                                   | no-preload-168587            | jenkins | v1.34.0 | 14 Sep 24 18:03 UTC | 14 Sep 24 18:14 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-044534                 | embed-certs-044534           | jenkins | v1.34.0 | 14 Sep 24 18:03 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-044534                                  | embed-certs-044534           | jenkins | v1.34.0 | 14 Sep 24 18:04 UTC | 14 Sep 24 18:13 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-243449  | default-k8s-diff-port-243449 | jenkins | v1.34.0 | 14 Sep 24 18:04 UTC | 14 Sep 24 18:04 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-243449 | jenkins | v1.34.0 | 14 Sep 24 18:04 UTC |                     |
	|         | default-k8s-diff-port-243449                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-556121                              | old-k8s-version-556121       | jenkins | v1.34.0 | 14 Sep 24 18:05 UTC | 14 Sep 24 18:05 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-556121             | old-k8s-version-556121       | jenkins | v1.34.0 | 14 Sep 24 18:05 UTC | 14 Sep 24 18:05 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-556121                              | old-k8s-version-556121       | jenkins | v1.34.0 | 14 Sep 24 18:05 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-243449       | default-k8s-diff-port-243449 | jenkins | v1.34.0 | 14 Sep 24 18:06 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-243449 | jenkins | v1.34.0 | 14 Sep 24 18:06 UTC | 14 Sep 24 18:13 UTC |
	|         | default-k8s-diff-port-243449                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/14 18:06:40
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0914 18:06:40.299903   63448 out.go:345] Setting OutFile to fd 1 ...
	I0914 18:06:40.300039   63448 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 18:06:40.300049   63448 out.go:358] Setting ErrFile to fd 2...
	I0914 18:06:40.300054   63448 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 18:06:40.300240   63448 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19643-8806/.minikube/bin
	I0914 18:06:40.300801   63448 out.go:352] Setting JSON to false
	I0914 18:06:40.301779   63448 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":6544,"bootTime":1726330656,"procs":200,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0914 18:06:40.301879   63448 start.go:139] virtualization: kvm guest
	I0914 18:06:40.303963   63448 out.go:177] * [default-k8s-diff-port-243449] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0914 18:06:40.305394   63448 out.go:177]   - MINIKUBE_LOCATION=19643
	I0914 18:06:40.305429   63448 notify.go:220] Checking for updates...
	I0914 18:06:40.308148   63448 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0914 18:06:40.309226   63448 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19643-8806/kubeconfig
	I0914 18:06:40.310360   63448 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19643-8806/.minikube
	I0914 18:06:40.311509   63448 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0914 18:06:40.312543   63448 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0914 18:06:40.314418   63448 config.go:182] Loaded profile config "default-k8s-diff-port-243449": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0914 18:06:40.315063   63448 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 18:06:40.315154   63448 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 18:06:40.330033   63448 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44655
	I0914 18:06:40.330502   63448 main.go:141] libmachine: () Calling .GetVersion
	I0914 18:06:40.331014   63448 main.go:141] libmachine: Using API Version  1
	I0914 18:06:40.331035   63448 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 18:06:40.331372   63448 main.go:141] libmachine: () Calling .GetMachineName
	I0914 18:06:40.331519   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .DriverName
	I0914 18:06:40.331729   63448 driver.go:394] Setting default libvirt URI to qemu:///system
	I0914 18:06:40.332043   63448 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 18:06:40.332089   63448 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 18:06:40.346598   63448 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37837
	I0914 18:06:40.347021   63448 main.go:141] libmachine: () Calling .GetVersion
	I0914 18:06:40.347501   63448 main.go:141] libmachine: Using API Version  1
	I0914 18:06:40.347536   63448 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 18:06:40.347863   63448 main.go:141] libmachine: () Calling .GetMachineName
	I0914 18:06:40.348042   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .DriverName
	I0914 18:06:40.380416   63448 out.go:177] * Using the kvm2 driver based on existing profile
	I0914 18:06:40.381578   63448 start.go:297] selected driver: kvm2
	I0914 18:06:40.381589   63448 start.go:901] validating driver "kvm2" against &{Name:default-k8s-diff-port-243449 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19643/minikube-v1.34.0-1726281733-19643-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726281268-19643@sha256:cce8be0c1ac4e3d852132008ef1cc1dcf5b79f708d025db83f146ae65db32e8e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-243449 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.38 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0914 18:06:40.381693   63448 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0914 18:06:40.382390   63448 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 18:06:40.382478   63448 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19643-8806/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0914 18:06:40.397521   63448 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0914 18:06:40.397921   63448 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0914 18:06:40.397959   63448 cni.go:84] Creating CNI manager for ""
	I0914 18:06:40.398002   63448 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0914 18:06:40.398040   63448 start.go:340] cluster config:
	{Name:default-k8s-diff-port-243449 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19643/minikube-v1.34.0-1726281733-19643-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726281268-19643@sha256:cce8be0c1ac4e3d852132008ef1cc1dcf5b79f708d025db83f146ae65db32e8e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-243449 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.38 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0914 18:06:40.398145   63448 iso.go:125] acquiring lock: {Name:mk538f7a4abc7956f82511183088cbfac1e66ebb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 18:06:40.399920   63448 out.go:177] * Starting "default-k8s-diff-port-243449" primary control-plane node in "default-k8s-diff-port-243449" cluster
	I0914 18:06:39.170425   62207 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0914 18:06:40.400913   63448 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0914 18:06:40.400954   63448 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19643-8806/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0914 18:06:40.400966   63448 cache.go:56] Caching tarball of preloaded images
	I0914 18:06:40.401038   63448 preload.go:172] Found /home/jenkins/minikube-integration/19643-8806/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0914 18:06:40.401055   63448 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0914 18:06:40.401185   63448 profile.go:143] Saving config to /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/default-k8s-diff-port-243449/config.json ...
	I0914 18:06:40.401421   63448 start.go:360] acquireMachinesLock for default-k8s-diff-port-243449: {Name:mk26748ac63472c3f8b6f3848c12e76160c2970c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0914 18:06:45.250426   62207 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0914 18:06:48.322531   62207 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0914 18:06:54.402441   62207 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0914 18:06:57.474440   62207 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0914 18:07:03.554541   62207 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0914 18:07:06.626472   62207 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0914 18:07:12.706430   62207 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0914 18:07:15.778448   62207 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0914 18:07:21.858453   62207 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0914 18:07:24.930473   62207 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0914 18:07:31.010432   62207 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0914 18:07:34.082423   62207 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0914 18:07:40.162417   62207 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0914 18:07:43.234501   62207 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0914 18:07:49.314533   62207 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0914 18:07:52.386453   62207 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0914 18:07:58.466444   62207 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0914 18:08:01.538476   62207 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0914 18:08:04.546206   62554 start.go:364] duration metric: took 3m59.524513317s to acquireMachinesLock for "embed-certs-044534"
	I0914 18:08:04.546263   62554 start.go:96] Skipping create...Using existing machine configuration
	I0914 18:08:04.546275   62554 fix.go:54] fixHost starting: 
	I0914 18:08:04.546585   62554 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 18:08:04.546636   62554 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 18:08:04.562182   62554 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46087
	I0914 18:08:04.562704   62554 main.go:141] libmachine: () Calling .GetVersion
	I0914 18:08:04.563264   62554 main.go:141] libmachine: Using API Version  1
	I0914 18:08:04.563300   62554 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 18:08:04.563714   62554 main.go:141] libmachine: () Calling .GetMachineName
	I0914 18:08:04.563947   62554 main.go:141] libmachine: (embed-certs-044534) Calling .DriverName
	I0914 18:08:04.564131   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetState
	I0914 18:08:04.566043   62554 fix.go:112] recreateIfNeeded on embed-certs-044534: state=Stopped err=<nil>
	I0914 18:08:04.566073   62554 main.go:141] libmachine: (embed-certs-044534) Calling .DriverName
	W0914 18:08:04.566289   62554 fix.go:138] unexpected machine state, will restart: <nil>
	I0914 18:08:04.567993   62554 out.go:177] * Restarting existing kvm2 VM for "embed-certs-044534" ...
	I0914 18:08:04.570182   62554 main.go:141] libmachine: (embed-certs-044534) Calling .Start
	I0914 18:08:04.570431   62554 main.go:141] libmachine: (embed-certs-044534) Ensuring networks are active...
	I0914 18:08:04.571374   62554 main.go:141] libmachine: (embed-certs-044534) Ensuring network default is active
	I0914 18:08:04.571748   62554 main.go:141] libmachine: (embed-certs-044534) Ensuring network mk-embed-certs-044534 is active
	I0914 18:08:04.572124   62554 main.go:141] libmachine: (embed-certs-044534) Getting domain xml...
	I0914 18:08:04.572852   62554 main.go:141] libmachine: (embed-certs-044534) Creating domain...
	I0914 18:08:04.540924   62207 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0914 18:08:04.540957   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetMachineName
	I0914 18:08:04.541310   62207 buildroot.go:166] provisioning hostname "no-preload-168587"
	I0914 18:08:04.541335   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetMachineName
	I0914 18:08:04.541586   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetSSHHostname
	I0914 18:08:04.546055   62207 machine.go:96] duration metric: took 4m34.63489942s to provisionDockerMachine
	I0914 18:08:04.546096   62207 fix.go:56] duration metric: took 4m34.662932355s for fixHost
	I0914 18:08:04.546102   62207 start.go:83] releasing machines lock for "no-preload-168587", held for 4m34.66297244s
	W0914 18:08:04.546122   62207 start.go:714] error starting host: provision: host is not running
	W0914 18:08:04.546220   62207 out.go:270] ! StartHost failed, but will try again: provision: host is not running
	I0914 18:08:04.546231   62207 start.go:729] Will try again in 5 seconds ...
	I0914 18:08:05.812076   62554 main.go:141] libmachine: (embed-certs-044534) Waiting to get IP...
	I0914 18:08:05.812955   62554 main.go:141] libmachine: (embed-certs-044534) DBG | domain embed-certs-044534 has defined MAC address 52:54:00:f7:d3:8e in network mk-embed-certs-044534
	I0914 18:08:05.813302   62554 main.go:141] libmachine: (embed-certs-044534) DBG | unable to find current IP address of domain embed-certs-044534 in network mk-embed-certs-044534
	I0914 18:08:05.813380   62554 main.go:141] libmachine: (embed-certs-044534) DBG | I0914 18:08:05.813279   63779 retry.go:31] will retry after 298.8389ms: waiting for machine to come up
	I0914 18:08:06.114130   62554 main.go:141] libmachine: (embed-certs-044534) DBG | domain embed-certs-044534 has defined MAC address 52:54:00:f7:d3:8e in network mk-embed-certs-044534
	I0914 18:08:06.114575   62554 main.go:141] libmachine: (embed-certs-044534) DBG | unable to find current IP address of domain embed-certs-044534 in network mk-embed-certs-044534
	I0914 18:08:06.114604   62554 main.go:141] libmachine: (embed-certs-044534) DBG | I0914 18:08:06.114530   63779 retry.go:31] will retry after 359.694721ms: waiting for machine to come up
	I0914 18:08:06.476183   62554 main.go:141] libmachine: (embed-certs-044534) DBG | domain embed-certs-044534 has defined MAC address 52:54:00:f7:d3:8e in network mk-embed-certs-044534
	I0914 18:08:06.476801   62554 main.go:141] libmachine: (embed-certs-044534) DBG | unable to find current IP address of domain embed-certs-044534 in network mk-embed-certs-044534
	I0914 18:08:06.476828   62554 main.go:141] libmachine: (embed-certs-044534) DBG | I0914 18:08:06.476745   63779 retry.go:31] will retry after 425.650219ms: waiting for machine to come up
	I0914 18:08:06.904358   62554 main.go:141] libmachine: (embed-certs-044534) DBG | domain embed-certs-044534 has defined MAC address 52:54:00:f7:d3:8e in network mk-embed-certs-044534
	I0914 18:08:06.904794   62554 main.go:141] libmachine: (embed-certs-044534) DBG | unable to find current IP address of domain embed-certs-044534 in network mk-embed-certs-044534
	I0914 18:08:06.904816   62554 main.go:141] libmachine: (embed-certs-044534) DBG | I0914 18:08:06.904749   63779 retry.go:31] will retry after 433.157325ms: waiting for machine to come up
	I0914 18:08:07.339139   62554 main.go:141] libmachine: (embed-certs-044534) DBG | domain embed-certs-044534 has defined MAC address 52:54:00:f7:d3:8e in network mk-embed-certs-044534
	I0914 18:08:07.339578   62554 main.go:141] libmachine: (embed-certs-044534) DBG | unable to find current IP address of domain embed-certs-044534 in network mk-embed-certs-044534
	I0914 18:08:07.339602   62554 main.go:141] libmachine: (embed-certs-044534) DBG | I0914 18:08:07.339512   63779 retry.go:31] will retry after 547.817102ms: waiting for machine to come up
	I0914 18:08:07.889390   62554 main.go:141] libmachine: (embed-certs-044534) DBG | domain embed-certs-044534 has defined MAC address 52:54:00:f7:d3:8e in network mk-embed-certs-044534
	I0914 18:08:07.889888   62554 main.go:141] libmachine: (embed-certs-044534) DBG | unable to find current IP address of domain embed-certs-044534 in network mk-embed-certs-044534
	I0914 18:08:07.889993   62554 main.go:141] libmachine: (embed-certs-044534) DBG | I0914 18:08:07.889820   63779 retry.go:31] will retry after 603.749753ms: waiting for machine to come up
	I0914 18:08:08.495673   62554 main.go:141] libmachine: (embed-certs-044534) DBG | domain embed-certs-044534 has defined MAC address 52:54:00:f7:d3:8e in network mk-embed-certs-044534
	I0914 18:08:08.496047   62554 main.go:141] libmachine: (embed-certs-044534) DBG | unable to find current IP address of domain embed-certs-044534 in network mk-embed-certs-044534
	I0914 18:08:08.496076   62554 main.go:141] libmachine: (embed-certs-044534) DBG | I0914 18:08:08.495995   63779 retry.go:31] will retry after 831.027535ms: waiting for machine to come up
	I0914 18:08:09.329209   62554 main.go:141] libmachine: (embed-certs-044534) DBG | domain embed-certs-044534 has defined MAC address 52:54:00:f7:d3:8e in network mk-embed-certs-044534
	I0914 18:08:09.329622   62554 main.go:141] libmachine: (embed-certs-044534) DBG | unable to find current IP address of domain embed-certs-044534 in network mk-embed-certs-044534
	I0914 18:08:09.329643   62554 main.go:141] libmachine: (embed-certs-044534) DBG | I0914 18:08:09.329591   63779 retry.go:31] will retry after 1.429850518s: waiting for machine to come up
	I0914 18:08:09.548738   62207 start.go:360] acquireMachinesLock for no-preload-168587: {Name:mk26748ac63472c3f8b6f3848c12e76160c2970c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0914 18:08:10.761510   62554 main.go:141] libmachine: (embed-certs-044534) DBG | domain embed-certs-044534 has defined MAC address 52:54:00:f7:d3:8e in network mk-embed-certs-044534
	I0914 18:08:10.761884   62554 main.go:141] libmachine: (embed-certs-044534) DBG | unable to find current IP address of domain embed-certs-044534 in network mk-embed-certs-044534
	I0914 18:08:10.761915   62554 main.go:141] libmachine: (embed-certs-044534) DBG | I0914 18:08:10.761839   63779 retry.go:31] will retry after 1.146619754s: waiting for machine to come up
	I0914 18:08:11.910130   62554 main.go:141] libmachine: (embed-certs-044534) DBG | domain embed-certs-044534 has defined MAC address 52:54:00:f7:d3:8e in network mk-embed-certs-044534
	I0914 18:08:11.910542   62554 main.go:141] libmachine: (embed-certs-044534) DBG | unable to find current IP address of domain embed-certs-044534 in network mk-embed-certs-044534
	I0914 18:08:11.910568   62554 main.go:141] libmachine: (embed-certs-044534) DBG | I0914 18:08:11.910500   63779 retry.go:31] will retry after 1.582382319s: waiting for machine to come up
	I0914 18:08:13.495352   62554 main.go:141] libmachine: (embed-certs-044534) DBG | domain embed-certs-044534 has defined MAC address 52:54:00:f7:d3:8e in network mk-embed-certs-044534
	I0914 18:08:13.495852   62554 main.go:141] libmachine: (embed-certs-044534) DBG | unable to find current IP address of domain embed-certs-044534 in network mk-embed-certs-044534
	I0914 18:08:13.495872   62554 main.go:141] libmachine: (embed-certs-044534) DBG | I0914 18:08:13.495808   63779 retry.go:31] will retry after 2.117717335s: waiting for machine to come up
	I0914 18:08:15.615461   62554 main.go:141] libmachine: (embed-certs-044534) DBG | domain embed-certs-044534 has defined MAC address 52:54:00:f7:d3:8e in network mk-embed-certs-044534
	I0914 18:08:15.615896   62554 main.go:141] libmachine: (embed-certs-044534) DBG | unable to find current IP address of domain embed-certs-044534 in network mk-embed-certs-044534
	I0914 18:08:15.615918   62554 main.go:141] libmachine: (embed-certs-044534) DBG | I0914 18:08:15.615846   63779 retry.go:31] will retry after 3.071486865s: waiting for machine to come up
	I0914 18:08:18.691109   62554 main.go:141] libmachine: (embed-certs-044534) DBG | domain embed-certs-044534 has defined MAC address 52:54:00:f7:d3:8e in network mk-embed-certs-044534
	I0914 18:08:18.691572   62554 main.go:141] libmachine: (embed-certs-044534) DBG | unable to find current IP address of domain embed-certs-044534 in network mk-embed-certs-044534
	I0914 18:08:18.691605   62554 main.go:141] libmachine: (embed-certs-044534) DBG | I0914 18:08:18.691513   63779 retry.go:31] will retry after 4.250544955s: waiting for machine to come up
	I0914 18:08:24.143036   62996 start.go:364] duration metric: took 3m18.692107902s to acquireMachinesLock for "old-k8s-version-556121"
	I0914 18:08:24.143089   62996 start.go:96] Skipping create...Using existing machine configuration
	I0914 18:08:24.143094   62996 fix.go:54] fixHost starting: 
	I0914 18:08:24.143474   62996 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 18:08:24.143527   62996 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 18:08:24.160421   62996 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44345
	I0914 18:08:24.160864   62996 main.go:141] libmachine: () Calling .GetVersion
	I0914 18:08:24.161467   62996 main.go:141] libmachine: Using API Version  1
	I0914 18:08:24.161495   62996 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 18:08:24.161913   62996 main.go:141] libmachine: () Calling .GetMachineName
	I0914 18:08:24.162137   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .DriverName
	I0914 18:08:24.162322   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetState
	I0914 18:08:24.163974   62996 fix.go:112] recreateIfNeeded on old-k8s-version-556121: state=Stopped err=<nil>
	I0914 18:08:24.164020   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .DriverName
	W0914 18:08:24.164197   62996 fix.go:138] unexpected machine state, will restart: <nil>
	I0914 18:08:24.166624   62996 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-556121" ...
	I0914 18:08:22.946247   62554 main.go:141] libmachine: (embed-certs-044534) DBG | domain embed-certs-044534 has defined MAC address 52:54:00:f7:d3:8e in network mk-embed-certs-044534
	I0914 18:08:22.946662   62554 main.go:141] libmachine: (embed-certs-044534) DBG | domain embed-certs-044534 has current primary IP address 192.168.50.126 and MAC address 52:54:00:f7:d3:8e in network mk-embed-certs-044534
	I0914 18:08:22.946687   62554 main.go:141] libmachine: (embed-certs-044534) Found IP for machine: 192.168.50.126
	I0914 18:08:22.946700   62554 main.go:141] libmachine: (embed-certs-044534) Reserving static IP address...
	I0914 18:08:22.947052   62554 main.go:141] libmachine: (embed-certs-044534) DBG | found host DHCP lease matching {name: "embed-certs-044534", mac: "52:54:00:f7:d3:8e", ip: "192.168.50.126"} in network mk-embed-certs-044534: {Iface:virbr3 ExpiryTime:2024-09-14 19:00:16 +0000 UTC Type:0 Mac:52:54:00:f7:d3:8e Iaid: IPaddr:192.168.50.126 Prefix:24 Hostname:embed-certs-044534 Clientid:01:52:54:00:f7:d3:8e}
	I0914 18:08:22.947068   62554 main.go:141] libmachine: (embed-certs-044534) Reserved static IP address: 192.168.50.126
	I0914 18:08:22.947080   62554 main.go:141] libmachine: (embed-certs-044534) DBG | skip adding static IP to network mk-embed-certs-044534 - found existing host DHCP lease matching {name: "embed-certs-044534", mac: "52:54:00:f7:d3:8e", ip: "192.168.50.126"}
	I0914 18:08:22.947093   62554 main.go:141] libmachine: (embed-certs-044534) DBG | Getting to WaitForSSH function...
	I0914 18:08:22.947108   62554 main.go:141] libmachine: (embed-certs-044534) Waiting for SSH to be available...
	I0914 18:08:22.949354   62554 main.go:141] libmachine: (embed-certs-044534) DBG | domain embed-certs-044534 has defined MAC address 52:54:00:f7:d3:8e in network mk-embed-certs-044534
	I0914 18:08:22.949623   62554 main.go:141] libmachine: (embed-certs-044534) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:d3:8e", ip: ""} in network mk-embed-certs-044534: {Iface:virbr3 ExpiryTime:2024-09-14 19:00:16 +0000 UTC Type:0 Mac:52:54:00:f7:d3:8e Iaid: IPaddr:192.168.50.126 Prefix:24 Hostname:embed-certs-044534 Clientid:01:52:54:00:f7:d3:8e}
	I0914 18:08:22.949645   62554 main.go:141] libmachine: (embed-certs-044534) DBG | domain embed-certs-044534 has defined IP address 192.168.50.126 and MAC address 52:54:00:f7:d3:8e in network mk-embed-certs-044534
	I0914 18:08:22.949798   62554 main.go:141] libmachine: (embed-certs-044534) DBG | Using SSH client type: external
	I0914 18:08:22.949822   62554 main.go:141] libmachine: (embed-certs-044534) DBG | Using SSH private key: /home/jenkins/minikube-integration/19643-8806/.minikube/machines/embed-certs-044534/id_rsa (-rw-------)
	I0914 18:08:22.949886   62554 main.go:141] libmachine: (embed-certs-044534) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.126 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19643-8806/.minikube/machines/embed-certs-044534/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0914 18:08:22.949911   62554 main.go:141] libmachine: (embed-certs-044534) DBG | About to run SSH command:
	I0914 18:08:22.949926   62554 main.go:141] libmachine: (embed-certs-044534) DBG | exit 0
	I0914 18:08:23.074248   62554 main.go:141] libmachine: (embed-certs-044534) DBG | SSH cmd err, output: <nil>: 
	I0914 18:08:23.074559   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetConfigRaw
	I0914 18:08:23.075190   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetIP
	I0914 18:08:23.077682   62554 main.go:141] libmachine: (embed-certs-044534) DBG | domain embed-certs-044534 has defined MAC address 52:54:00:f7:d3:8e in network mk-embed-certs-044534
	I0914 18:08:23.078007   62554 main.go:141] libmachine: (embed-certs-044534) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:d3:8e", ip: ""} in network mk-embed-certs-044534: {Iface:virbr3 ExpiryTime:2024-09-14 19:00:16 +0000 UTC Type:0 Mac:52:54:00:f7:d3:8e Iaid: IPaddr:192.168.50.126 Prefix:24 Hostname:embed-certs-044534 Clientid:01:52:54:00:f7:d3:8e}
	I0914 18:08:23.078040   62554 main.go:141] libmachine: (embed-certs-044534) DBG | domain embed-certs-044534 has defined IP address 192.168.50.126 and MAC address 52:54:00:f7:d3:8e in network mk-embed-certs-044534
	I0914 18:08:23.078309   62554 profile.go:143] Saving config to /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/embed-certs-044534/config.json ...
	I0914 18:08:23.078494   62554 machine.go:93] provisionDockerMachine start ...
	I0914 18:08:23.078510   62554 main.go:141] libmachine: (embed-certs-044534) Calling .DriverName
	I0914 18:08:23.078723   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetSSHHostname
	I0914 18:08:23.081444   62554 main.go:141] libmachine: (embed-certs-044534) DBG | domain embed-certs-044534 has defined MAC address 52:54:00:f7:d3:8e in network mk-embed-certs-044534
	I0914 18:08:23.081846   62554 main.go:141] libmachine: (embed-certs-044534) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:d3:8e", ip: ""} in network mk-embed-certs-044534: {Iface:virbr3 ExpiryTime:2024-09-14 19:00:16 +0000 UTC Type:0 Mac:52:54:00:f7:d3:8e Iaid: IPaddr:192.168.50.126 Prefix:24 Hostname:embed-certs-044534 Clientid:01:52:54:00:f7:d3:8e}
	I0914 18:08:23.081891   62554 main.go:141] libmachine: (embed-certs-044534) DBG | domain embed-certs-044534 has defined IP address 192.168.50.126 and MAC address 52:54:00:f7:d3:8e in network mk-embed-certs-044534
	I0914 18:08:23.082026   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetSSHPort
	I0914 18:08:23.082209   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetSSHKeyPath
	I0914 18:08:23.082398   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetSSHKeyPath
	I0914 18:08:23.082573   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetSSHUsername
	I0914 18:08:23.082739   62554 main.go:141] libmachine: Using SSH client type: native
	I0914 18:08:23.082961   62554 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.50.126 22 <nil> <nil>}
	I0914 18:08:23.082984   62554 main.go:141] libmachine: About to run SSH command:
	hostname
	I0914 18:08:23.186143   62554 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0914 18:08:23.186193   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetMachineName
	I0914 18:08:23.186424   62554 buildroot.go:166] provisioning hostname "embed-certs-044534"
	I0914 18:08:23.186447   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetMachineName
	I0914 18:08:23.186622   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetSSHHostname
	I0914 18:08:23.189085   62554 main.go:141] libmachine: (embed-certs-044534) DBG | domain embed-certs-044534 has defined MAC address 52:54:00:f7:d3:8e in network mk-embed-certs-044534
	I0914 18:08:23.189453   62554 main.go:141] libmachine: (embed-certs-044534) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:d3:8e", ip: ""} in network mk-embed-certs-044534: {Iface:virbr3 ExpiryTime:2024-09-14 19:00:16 +0000 UTC Type:0 Mac:52:54:00:f7:d3:8e Iaid: IPaddr:192.168.50.126 Prefix:24 Hostname:embed-certs-044534 Clientid:01:52:54:00:f7:d3:8e}
	I0914 18:08:23.189482   62554 main.go:141] libmachine: (embed-certs-044534) DBG | domain embed-certs-044534 has defined IP address 192.168.50.126 and MAC address 52:54:00:f7:d3:8e in network mk-embed-certs-044534
	I0914 18:08:23.189615   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetSSHPort
	I0914 18:08:23.189802   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetSSHKeyPath
	I0914 18:08:23.190032   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetSSHKeyPath
	I0914 18:08:23.190168   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetSSHUsername
	I0914 18:08:23.190422   62554 main.go:141] libmachine: Using SSH client type: native
	I0914 18:08:23.190587   62554 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.50.126 22 <nil> <nil>}
	I0914 18:08:23.190601   62554 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-044534 && echo "embed-certs-044534" | sudo tee /etc/hostname
	I0914 18:08:23.307484   62554 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-044534
	
	I0914 18:08:23.307512   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetSSHHostname
	I0914 18:08:23.310220   62554 main.go:141] libmachine: (embed-certs-044534) DBG | domain embed-certs-044534 has defined MAC address 52:54:00:f7:d3:8e in network mk-embed-certs-044534
	I0914 18:08:23.310634   62554 main.go:141] libmachine: (embed-certs-044534) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:d3:8e", ip: ""} in network mk-embed-certs-044534: {Iface:virbr3 ExpiryTime:2024-09-14 19:00:16 +0000 UTC Type:0 Mac:52:54:00:f7:d3:8e Iaid: IPaddr:192.168.50.126 Prefix:24 Hostname:embed-certs-044534 Clientid:01:52:54:00:f7:d3:8e}
	I0914 18:08:23.310664   62554 main.go:141] libmachine: (embed-certs-044534) DBG | domain embed-certs-044534 has defined IP address 192.168.50.126 and MAC address 52:54:00:f7:d3:8e in network mk-embed-certs-044534
	I0914 18:08:23.310764   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetSSHPort
	I0914 18:08:23.310969   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetSSHKeyPath
	I0914 18:08:23.311206   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetSSHKeyPath
	I0914 18:08:23.311438   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetSSHUsername
	I0914 18:08:23.311594   62554 main.go:141] libmachine: Using SSH client type: native
	I0914 18:08:23.311802   62554 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.50.126 22 <nil> <nil>}
	I0914 18:08:23.311820   62554 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-044534' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-044534/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-044534' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0914 18:08:23.422574   62554 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0914 18:08:23.422603   62554 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19643-8806/.minikube CaCertPath:/home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19643-8806/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19643-8806/.minikube}
	I0914 18:08:23.422623   62554 buildroot.go:174] setting up certificates
	I0914 18:08:23.422634   62554 provision.go:84] configureAuth start
	I0914 18:08:23.422643   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetMachineName
	I0914 18:08:23.422905   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetIP
	I0914 18:08:23.426201   62554 main.go:141] libmachine: (embed-certs-044534) DBG | domain embed-certs-044534 has defined MAC address 52:54:00:f7:d3:8e in network mk-embed-certs-044534
	I0914 18:08:23.426557   62554 main.go:141] libmachine: (embed-certs-044534) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:d3:8e", ip: ""} in network mk-embed-certs-044534: {Iface:virbr3 ExpiryTime:2024-09-14 19:00:16 +0000 UTC Type:0 Mac:52:54:00:f7:d3:8e Iaid: IPaddr:192.168.50.126 Prefix:24 Hostname:embed-certs-044534 Clientid:01:52:54:00:f7:d3:8e}
	I0914 18:08:23.426584   62554 main.go:141] libmachine: (embed-certs-044534) DBG | domain embed-certs-044534 has defined IP address 192.168.50.126 and MAC address 52:54:00:f7:d3:8e in network mk-embed-certs-044534
	I0914 18:08:23.426745   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetSSHHostname
	I0914 18:08:23.428607   62554 main.go:141] libmachine: (embed-certs-044534) DBG | domain embed-certs-044534 has defined MAC address 52:54:00:f7:d3:8e in network mk-embed-certs-044534
	I0914 18:08:23.428985   62554 main.go:141] libmachine: (embed-certs-044534) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:d3:8e", ip: ""} in network mk-embed-certs-044534: {Iface:virbr3 ExpiryTime:2024-09-14 19:00:16 +0000 UTC Type:0 Mac:52:54:00:f7:d3:8e Iaid: IPaddr:192.168.50.126 Prefix:24 Hostname:embed-certs-044534 Clientid:01:52:54:00:f7:d3:8e}
	I0914 18:08:23.429016   62554 main.go:141] libmachine: (embed-certs-044534) DBG | domain embed-certs-044534 has defined IP address 192.168.50.126 and MAC address 52:54:00:f7:d3:8e in network mk-embed-certs-044534
	I0914 18:08:23.429138   62554 provision.go:143] copyHostCerts
	I0914 18:08:23.429198   62554 exec_runner.go:144] found /home/jenkins/minikube-integration/19643-8806/.minikube/ca.pem, removing ...
	I0914 18:08:23.429211   62554 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19643-8806/.minikube/ca.pem
	I0914 18:08:23.429295   62554 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19643-8806/.minikube/ca.pem (1082 bytes)
	I0914 18:08:23.429437   62554 exec_runner.go:144] found /home/jenkins/minikube-integration/19643-8806/.minikube/cert.pem, removing ...
	I0914 18:08:23.429452   62554 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19643-8806/.minikube/cert.pem
	I0914 18:08:23.429498   62554 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19643-8806/.minikube/cert.pem (1123 bytes)
	I0914 18:08:23.429592   62554 exec_runner.go:144] found /home/jenkins/minikube-integration/19643-8806/.minikube/key.pem, removing ...
	I0914 18:08:23.429600   62554 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19643-8806/.minikube/key.pem
	I0914 18:08:23.429626   62554 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19643-8806/.minikube/key.pem (1675 bytes)
	I0914 18:08:23.429680   62554 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19643-8806/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca-key.pem org=jenkins.embed-certs-044534 san=[127.0.0.1 192.168.50.126 embed-certs-044534 localhost minikube]
	I0914 18:08:23.538590   62554 provision.go:177] copyRemoteCerts
	I0914 18:08:23.538662   62554 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0914 18:08:23.538689   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetSSHHostname
	I0914 18:08:23.541366   62554 main.go:141] libmachine: (embed-certs-044534) DBG | domain embed-certs-044534 has defined MAC address 52:54:00:f7:d3:8e in network mk-embed-certs-044534
	I0914 18:08:23.541723   62554 main.go:141] libmachine: (embed-certs-044534) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:d3:8e", ip: ""} in network mk-embed-certs-044534: {Iface:virbr3 ExpiryTime:2024-09-14 19:00:16 +0000 UTC Type:0 Mac:52:54:00:f7:d3:8e Iaid: IPaddr:192.168.50.126 Prefix:24 Hostname:embed-certs-044534 Clientid:01:52:54:00:f7:d3:8e}
	I0914 18:08:23.541746   62554 main.go:141] libmachine: (embed-certs-044534) DBG | domain embed-certs-044534 has defined IP address 192.168.50.126 and MAC address 52:54:00:f7:d3:8e in network mk-embed-certs-044534
	I0914 18:08:23.541938   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetSSHPort
	I0914 18:08:23.542120   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetSSHKeyPath
	I0914 18:08:23.542303   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetSSHUsername
	I0914 18:08:23.542413   62554 sshutil.go:53] new ssh client: &{IP:192.168.50.126 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/embed-certs-044534/id_rsa Username:docker}
	I0914 18:08:23.623698   62554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0914 18:08:23.647378   62554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0914 18:08:23.671327   62554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0914 18:08:23.694570   62554 provision.go:87] duration metric: took 271.923979ms to configureAuth
	I0914 18:08:23.694598   62554 buildroot.go:189] setting minikube options for container-runtime
	I0914 18:08:23.694779   62554 config.go:182] Loaded profile config "embed-certs-044534": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0914 18:08:23.694868   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetSSHHostname
	I0914 18:08:23.697467   62554 main.go:141] libmachine: (embed-certs-044534) DBG | domain embed-certs-044534 has defined MAC address 52:54:00:f7:d3:8e in network mk-embed-certs-044534
	I0914 18:08:23.697828   62554 main.go:141] libmachine: (embed-certs-044534) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:d3:8e", ip: ""} in network mk-embed-certs-044534: {Iface:virbr3 ExpiryTime:2024-09-14 19:00:16 +0000 UTC Type:0 Mac:52:54:00:f7:d3:8e Iaid: IPaddr:192.168.50.126 Prefix:24 Hostname:embed-certs-044534 Clientid:01:52:54:00:f7:d3:8e}
	I0914 18:08:23.697862   62554 main.go:141] libmachine: (embed-certs-044534) DBG | domain embed-certs-044534 has defined IP address 192.168.50.126 and MAC address 52:54:00:f7:d3:8e in network mk-embed-certs-044534
	I0914 18:08:23.698042   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetSSHPort
	I0914 18:08:23.698249   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetSSHKeyPath
	I0914 18:08:23.698421   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetSSHKeyPath
	I0914 18:08:23.698571   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetSSHUsername
	I0914 18:08:23.698692   62554 main.go:141] libmachine: Using SSH client type: native
	I0914 18:08:23.698945   62554 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.50.126 22 <nil> <nil>}
	I0914 18:08:23.698963   62554 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0914 18:08:23.911661   62554 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0914 18:08:23.911697   62554 machine.go:96] duration metric: took 833.189197ms to provisionDockerMachine
	I0914 18:08:23.911712   62554 start.go:293] postStartSetup for "embed-certs-044534" (driver="kvm2")
	I0914 18:08:23.911726   62554 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0914 18:08:23.911751   62554 main.go:141] libmachine: (embed-certs-044534) Calling .DriverName
	I0914 18:08:23.912134   62554 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0914 18:08:23.912169   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetSSHHostname
	I0914 18:08:23.914579   62554 main.go:141] libmachine: (embed-certs-044534) DBG | domain embed-certs-044534 has defined MAC address 52:54:00:f7:d3:8e in network mk-embed-certs-044534
	I0914 18:08:23.914974   62554 main.go:141] libmachine: (embed-certs-044534) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:d3:8e", ip: ""} in network mk-embed-certs-044534: {Iface:virbr3 ExpiryTime:2024-09-14 19:00:16 +0000 UTC Type:0 Mac:52:54:00:f7:d3:8e Iaid: IPaddr:192.168.50.126 Prefix:24 Hostname:embed-certs-044534 Clientid:01:52:54:00:f7:d3:8e}
	I0914 18:08:23.915011   62554 main.go:141] libmachine: (embed-certs-044534) DBG | domain embed-certs-044534 has defined IP address 192.168.50.126 and MAC address 52:54:00:f7:d3:8e in network mk-embed-certs-044534
	I0914 18:08:23.915121   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetSSHPort
	I0914 18:08:23.915322   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetSSHKeyPath
	I0914 18:08:23.915582   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetSSHUsername
	I0914 18:08:23.915710   62554 sshutil.go:53] new ssh client: &{IP:192.168.50.126 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/embed-certs-044534/id_rsa Username:docker}
	I0914 18:08:23.996910   62554 ssh_runner.go:195] Run: cat /etc/os-release
	I0914 18:08:24.000900   62554 info.go:137] Remote host: Buildroot 2023.02.9
	I0914 18:08:24.000926   62554 filesync.go:126] Scanning /home/jenkins/minikube-integration/19643-8806/.minikube/addons for local assets ...
	I0914 18:08:24.000998   62554 filesync.go:126] Scanning /home/jenkins/minikube-integration/19643-8806/.minikube/files for local assets ...
	I0914 18:08:24.001099   62554 filesync.go:149] local asset: /home/jenkins/minikube-integration/19643-8806/.minikube/files/etc/ssl/certs/160162.pem -> 160162.pem in /etc/ssl/certs
	I0914 18:08:24.001222   62554 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0914 18:08:24.010496   62554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/files/etc/ssl/certs/160162.pem --> /etc/ssl/certs/160162.pem (1708 bytes)
	I0914 18:08:24.033377   62554 start.go:296] duration metric: took 121.65145ms for postStartSetup
	I0914 18:08:24.033414   62554 fix.go:56] duration metric: took 19.487140172s for fixHost
	I0914 18:08:24.033434   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetSSHHostname
	I0914 18:08:24.036188   62554 main.go:141] libmachine: (embed-certs-044534) DBG | domain embed-certs-044534 has defined MAC address 52:54:00:f7:d3:8e in network mk-embed-certs-044534
	I0914 18:08:24.036494   62554 main.go:141] libmachine: (embed-certs-044534) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:d3:8e", ip: ""} in network mk-embed-certs-044534: {Iface:virbr3 ExpiryTime:2024-09-14 19:00:16 +0000 UTC Type:0 Mac:52:54:00:f7:d3:8e Iaid: IPaddr:192.168.50.126 Prefix:24 Hostname:embed-certs-044534 Clientid:01:52:54:00:f7:d3:8e}
	I0914 18:08:24.036524   62554 main.go:141] libmachine: (embed-certs-044534) DBG | domain embed-certs-044534 has defined IP address 192.168.50.126 and MAC address 52:54:00:f7:d3:8e in network mk-embed-certs-044534
	I0914 18:08:24.036672   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetSSHPort
	I0914 18:08:24.036886   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetSSHKeyPath
	I0914 18:08:24.037082   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetSSHKeyPath
	I0914 18:08:24.037216   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetSSHUsername
	I0914 18:08:24.037375   62554 main.go:141] libmachine: Using SSH client type: native
	I0914 18:08:24.037542   62554 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.50.126 22 <nil> <nil>}
	I0914 18:08:24.037554   62554 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0914 18:08:24.142822   62554 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726337304.118879777
	
	I0914 18:08:24.142851   62554 fix.go:216] guest clock: 1726337304.118879777
	I0914 18:08:24.142862   62554 fix.go:229] Guest: 2024-09-14 18:08:24.118879777 +0000 UTC Remote: 2024-09-14 18:08:24.03341777 +0000 UTC m=+259.160200473 (delta=85.462007ms)
	I0914 18:08:24.142936   62554 fix.go:200] guest clock delta is within tolerance: 85.462007ms
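	(The three lines above record the guest-clock check: minikube runs `date +%s.%N` over SSH, parses the epoch timestamp, and compares it with the host clock against a tolerance. The following is a minimal illustrative Go sketch of that comparison only; the helper name, tolerance value, and structure are assumptions for illustration, not minikube's fix.go implementation.)

	package main

	import (
		"fmt"
		"strconv"
		"time"
	)

	// parseEpoch converts the output of `date +%s.%N` (e.g. "1726337304.118879777")
	// into a time.Time. Hypothetical helper, shown only to illustrate the check logged above.
	func parseEpoch(s string) (time.Time, error) {
		f, err := strconv.ParseFloat(s, 64)
		if err != nil {
			return time.Time{}, err
		}
		sec := int64(f)
		nsec := int64((f - float64(sec)) * 1e9)
		return time.Unix(sec, nsec), nil
	}

	func main() {
		guest, err := parseEpoch("1726337304.118879777") // value taken from the log line above
		if err != nil {
			panic(err)
		}
		host := time.Now()
		delta := host.Sub(guest)
		if delta < 0 {
			delta = -delta
		}
		// minikube logs the delta and only acts when it exceeds a tolerance;
		// the exact threshold used here is an assumption.
		const tolerance = 2 * time.Second
		fmt.Printf("guest clock delta: %v (within tolerance: %v)\n", delta, delta <= tolerance)
	}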
	I0914 18:08:24.142960   62554 start.go:83] releasing machines lock for "embed-certs-044534", held for 19.596720856s
	I0914 18:08:24.142992   62554 main.go:141] libmachine: (embed-certs-044534) Calling .DriverName
	I0914 18:08:24.143262   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetIP
	I0914 18:08:24.146122   62554 main.go:141] libmachine: (embed-certs-044534) DBG | domain embed-certs-044534 has defined MAC address 52:54:00:f7:d3:8e in network mk-embed-certs-044534
	I0914 18:08:24.146501   62554 main.go:141] libmachine: (embed-certs-044534) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:d3:8e", ip: ""} in network mk-embed-certs-044534: {Iface:virbr3 ExpiryTime:2024-09-14 19:00:16 +0000 UTC Type:0 Mac:52:54:00:f7:d3:8e Iaid: IPaddr:192.168.50.126 Prefix:24 Hostname:embed-certs-044534 Clientid:01:52:54:00:f7:d3:8e}
	I0914 18:08:24.146537   62554 main.go:141] libmachine: (embed-certs-044534) DBG | domain embed-certs-044534 has defined IP address 192.168.50.126 and MAC address 52:54:00:f7:d3:8e in network mk-embed-certs-044534
	I0914 18:08:24.146711   62554 main.go:141] libmachine: (embed-certs-044534) Calling .DriverName
	I0914 18:08:24.147204   62554 main.go:141] libmachine: (embed-certs-044534) Calling .DriverName
	I0914 18:08:24.147430   62554 main.go:141] libmachine: (embed-certs-044534) Calling .DriverName
	I0914 18:08:24.147532   62554 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0914 18:08:24.147589   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetSSHHostname
	I0914 18:08:24.147813   62554 ssh_runner.go:195] Run: cat /version.json
	I0914 18:08:24.147839   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetSSHHostname
	I0914 18:08:24.150691   62554 main.go:141] libmachine: (embed-certs-044534) DBG | domain embed-certs-044534 has defined MAC address 52:54:00:f7:d3:8e in network mk-embed-certs-044534
	I0914 18:08:24.150736   62554 main.go:141] libmachine: (embed-certs-044534) DBG | domain embed-certs-044534 has defined MAC address 52:54:00:f7:d3:8e in network mk-embed-certs-044534
	I0914 18:08:24.151012   62554 main.go:141] libmachine: (embed-certs-044534) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:d3:8e", ip: ""} in network mk-embed-certs-044534: {Iface:virbr3 ExpiryTime:2024-09-14 19:00:16 +0000 UTC Type:0 Mac:52:54:00:f7:d3:8e Iaid: IPaddr:192.168.50.126 Prefix:24 Hostname:embed-certs-044534 Clientid:01:52:54:00:f7:d3:8e}
	I0914 18:08:24.151056   62554 main.go:141] libmachine: (embed-certs-044534) DBG | domain embed-certs-044534 has defined IP address 192.168.50.126 and MAC address 52:54:00:f7:d3:8e in network mk-embed-certs-044534
	I0914 18:08:24.151149   62554 main.go:141] libmachine: (embed-certs-044534) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:d3:8e", ip: ""} in network mk-embed-certs-044534: {Iface:virbr3 ExpiryTime:2024-09-14 19:00:16 +0000 UTC Type:0 Mac:52:54:00:f7:d3:8e Iaid: IPaddr:192.168.50.126 Prefix:24 Hostname:embed-certs-044534 Clientid:01:52:54:00:f7:d3:8e}
	I0914 18:08:24.151179   62554 main.go:141] libmachine: (embed-certs-044534) DBG | domain embed-certs-044534 has defined IP address 192.168.50.126 and MAC address 52:54:00:f7:d3:8e in network mk-embed-certs-044534
	I0914 18:08:24.151431   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetSSHPort
	I0914 18:08:24.151468   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetSSHPort
	I0914 18:08:24.151586   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetSSHKeyPath
	I0914 18:08:24.151751   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetSSHKeyPath
	I0914 18:08:24.151772   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetSSHUsername
	I0914 18:08:24.151938   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetSSHUsername
	I0914 18:08:24.151944   62554 sshutil.go:53] new ssh client: &{IP:192.168.50.126 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/embed-certs-044534/id_rsa Username:docker}
	I0914 18:08:24.152034   62554 sshutil.go:53] new ssh client: &{IP:192.168.50.126 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/embed-certs-044534/id_rsa Username:docker}
	I0914 18:08:24.256821   62554 ssh_runner.go:195] Run: systemctl --version
	I0914 18:08:24.263249   62554 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0914 18:08:24.411996   62554 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0914 18:08:24.418685   62554 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0914 18:08:24.418759   62554 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0914 18:08:24.434541   62554 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0914 18:08:24.434569   62554 start.go:495] detecting cgroup driver to use...
	I0914 18:08:24.434655   62554 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0914 18:08:24.452550   62554 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0914 18:08:24.467548   62554 docker.go:217] disabling cri-docker service (if available) ...
	I0914 18:08:24.467602   62554 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0914 18:08:24.482556   62554 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0914 18:08:24.497198   62554 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0914 18:08:24.625300   62554 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0914 18:08:24.805163   62554 docker.go:233] disabling docker service ...
	I0914 18:08:24.805248   62554 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0914 18:08:24.821164   62554 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0914 18:08:24.834886   62554 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0914 18:08:24.167885   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .Start
	I0914 18:08:24.168096   62996 main.go:141] libmachine: (old-k8s-version-556121) Ensuring networks are active...
	I0914 18:08:24.169086   62996 main.go:141] libmachine: (old-k8s-version-556121) Ensuring network default is active
	I0914 18:08:24.169493   62996 main.go:141] libmachine: (old-k8s-version-556121) Ensuring network mk-old-k8s-version-556121 is active
	I0914 18:08:24.170025   62996 main.go:141] libmachine: (old-k8s-version-556121) Getting domain xml...
	I0914 18:08:24.170619   62996 main.go:141] libmachine: (old-k8s-version-556121) Creating domain...
	I0914 18:08:24.963694   62554 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0914 18:08:25.081720   62554 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0914 18:08:25.097176   62554 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0914 18:08:25.116611   62554 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0914 18:08:25.116677   62554 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 18:08:25.129500   62554 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0914 18:08:25.129586   62554 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 18:08:25.140281   62554 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 18:08:25.150925   62554 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 18:08:25.166139   62554 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0914 18:08:25.177340   62554 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 18:08:25.187662   62554 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 18:08:25.207019   62554 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 18:08:25.217207   62554 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0914 18:08:25.226988   62554 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0914 18:08:25.227065   62554 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0914 18:08:25.248357   62554 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0914 18:08:25.258467   62554 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 18:08:25.375359   62554 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0914 18:08:25.470389   62554 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0914 18:08:25.470470   62554 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0914 18:08:25.475526   62554 start.go:563] Will wait 60s for crictl version
	I0914 18:08:25.475589   62554 ssh_runner.go:195] Run: which crictl
	I0914 18:08:25.479131   62554 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0914 18:08:25.530371   62554 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0914 18:08:25.530461   62554 ssh_runner.go:195] Run: crio --version
	I0914 18:08:25.557035   62554 ssh_runner.go:195] Run: crio --version
	I0914 18:08:25.586883   62554 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0914 18:08:25.588117   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetIP
	I0914 18:08:25.591212   62554 main.go:141] libmachine: (embed-certs-044534) DBG | domain embed-certs-044534 has defined MAC address 52:54:00:f7:d3:8e in network mk-embed-certs-044534
	I0914 18:08:25.591600   62554 main.go:141] libmachine: (embed-certs-044534) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:d3:8e", ip: ""} in network mk-embed-certs-044534: {Iface:virbr3 ExpiryTime:2024-09-14 19:00:16 +0000 UTC Type:0 Mac:52:54:00:f7:d3:8e Iaid: IPaddr:192.168.50.126 Prefix:24 Hostname:embed-certs-044534 Clientid:01:52:54:00:f7:d3:8e}
	I0914 18:08:25.591628   62554 main.go:141] libmachine: (embed-certs-044534) DBG | domain embed-certs-044534 has defined IP address 192.168.50.126 and MAC address 52:54:00:f7:d3:8e in network mk-embed-certs-044534
	I0914 18:08:25.591816   62554 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0914 18:08:25.595706   62554 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0914 18:08:25.608009   62554 kubeadm.go:883] updating cluster {Name:embed-certs-044534 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19643/minikube-v1.34.0-1726281733-19643-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726281268-19643@sha256:cce8be0c1ac4e3d852132008ef1cc1dcf5b79f708d025db83f146ae65db32e8e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-044534 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.126 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0914 18:08:25.608141   62554 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0914 18:08:25.608194   62554 ssh_runner.go:195] Run: sudo crictl images --output json
	I0914 18:08:25.643422   62554 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0914 18:08:25.643515   62554 ssh_runner.go:195] Run: which lz4
	I0914 18:08:25.647471   62554 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0914 18:08:25.651573   62554 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0914 18:08:25.651607   62554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I0914 18:08:26.985357   62554 crio.go:462] duration metric: took 1.337911722s to copy over tarball
	I0914 18:08:26.985437   62554 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0914 18:08:29.111492   62554 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.126022567s)
	I0914 18:08:29.111524   62554 crio.go:469] duration metric: took 2.12613646s to extract the tarball
	I0914 18:08:29.111533   62554 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0914 18:08:29.148426   62554 ssh_runner.go:195] Run: sudo crictl images --output json
	I0914 18:08:29.190595   62554 crio.go:514] all images are preloaded for cri-o runtime.
	I0914 18:08:29.190620   62554 cache_images.go:84] Images are preloaded, skipping loading
	I0914 18:08:29.190628   62554 kubeadm.go:934] updating node { 192.168.50.126 8443 v1.31.1 crio true true} ...
	I0914 18:08:29.190751   62554 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-044534 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.126
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:embed-certs-044534 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
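	(The kubelet drop-in printed above is rendered by minikube and later copied to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf, as shown by the scp lines further down. Below is a rough, illustrative Go sketch of rendering such a drop-in with text/template; the struct, field names, and template text are assumptions for illustration, not minikube's actual source.)

	package main

	import (
		"os"
		"text/template"
	)

	// kubeletOpts holds the few values that vary per node in the drop-in above.
	// Field names are illustrative, not minikube's own types.
	type kubeletOpts struct {
		KubeletPath string
		Hostname    string
		NodeIP      string
	}

	const dropIn = `[Unit]
	Wants=crio.service

	[Service]
	ExecStart=
	ExecStart={{.KubeletPath}} --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.Hostname}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

	[Install]
	`

	func main() {
		tmpl := template.Must(template.New("10-kubeadm.conf").Parse(dropIn))
		// Values copied from the log above; the rendered text would then be
		// written to the kubelet.service.d drop-in on the guest.
		err := tmpl.Execute(os.Stdout, kubeletOpts{
			KubeletPath: "/var/lib/minikube/binaries/v1.31.1/kubelet",
			Hostname:    "embed-certs-044534",
			NodeIP:      "192.168.50.126",
		})
		if err != nil {
			panic(err)
		}
	}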
	I0914 18:08:29.190823   62554 ssh_runner.go:195] Run: crio config
	I0914 18:08:29.234785   62554 cni.go:84] Creating CNI manager for ""
	I0914 18:08:29.234808   62554 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0914 18:08:29.234818   62554 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0914 18:08:29.234871   62554 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.126 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-044534 NodeName:embed-certs-044534 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.126"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.126 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0914 18:08:29.234996   62554 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.126
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-044534"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.126
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.126"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0914 18:08:29.235054   62554 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0914 18:08:29.244554   62554 binaries.go:44] Found k8s binaries, skipping transfer
	I0914 18:08:29.244631   62554 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0914 18:08:29.253622   62554 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0914 18:08:29.270046   62554 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0914 18:08:29.285751   62554 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
	I0914 18:08:29.303567   62554 ssh_runner.go:195] Run: grep 192.168.50.126	control-plane.minikube.internal$ /etc/hosts
	I0914 18:08:29.307335   62554 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.126	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0914 18:08:29.319510   62554 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 18:08:29.442649   62554 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0914 18:08:29.459657   62554 certs.go:68] Setting up /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/embed-certs-044534 for IP: 192.168.50.126
	I0914 18:08:29.459687   62554 certs.go:194] generating shared ca certs ...
	I0914 18:08:29.459709   62554 certs.go:226] acquiring lock for ca certs: {Name:mkb663a3180967f5f94f0c355b2cd55067394331 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 18:08:29.459908   62554 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19643-8806/.minikube/ca.key
	I0914 18:08:29.459976   62554 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19643-8806/.minikube/proxy-client-ca.key
	I0914 18:08:29.459995   62554 certs.go:256] generating profile certs ...
	I0914 18:08:29.460166   62554 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/embed-certs-044534/client.key
	I0914 18:08:29.460247   62554 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/embed-certs-044534/apiserver.key.15c978c5
	I0914 18:08:29.460301   62554 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/embed-certs-044534/proxy-client.key
	I0914 18:08:29.460447   62554 certs.go:484] found cert: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/16016.pem (1338 bytes)
	W0914 18:08:29.460491   62554 certs.go:480] ignoring /home/jenkins/minikube-integration/19643-8806/.minikube/certs/16016_empty.pem, impossibly tiny 0 bytes
	I0914 18:08:29.460505   62554 certs.go:484] found cert: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca-key.pem (1679 bytes)
	I0914 18:08:29.460537   62554 certs.go:484] found cert: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca.pem (1082 bytes)
	I0914 18:08:29.460581   62554 certs.go:484] found cert: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/cert.pem (1123 bytes)
	I0914 18:08:29.460605   62554 certs.go:484] found cert: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/key.pem (1675 bytes)
	I0914 18:08:29.460649   62554 certs.go:484] found cert: /home/jenkins/minikube-integration/19643-8806/.minikube/files/etc/ssl/certs/160162.pem (1708 bytes)
	I0914 18:08:29.461415   62554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0914 18:08:29.501260   62554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0914 18:08:29.531940   62554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0914 18:08:29.577959   62554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0914 18:08:29.604067   62554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/embed-certs-044534/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0914 18:08:29.635335   62554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/embed-certs-044534/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0914 18:08:29.658841   62554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/embed-certs-044534/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0914 18:08:29.684149   62554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/embed-certs-044534/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0914 18:08:29.709354   62554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0914 18:08:29.733812   62554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/certs/16016.pem --> /usr/share/ca-certificates/16016.pem (1338 bytes)
	I0914 18:08:29.758427   62554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/files/etc/ssl/certs/160162.pem --> /usr/share/ca-certificates/160162.pem (1708 bytes)
	I0914 18:08:29.783599   62554 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0914 18:08:29.802188   62554 ssh_runner.go:195] Run: openssl version
	I0914 18:08:29.808277   62554 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0914 18:08:29.821167   62554 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0914 18:08:29.825911   62554 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 14 16:44 /usr/share/ca-certificates/minikubeCA.pem
	I0914 18:08:29.825978   62554 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0914 18:08:29.832160   62554 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0914 18:08:29.844395   62554 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16016.pem && ln -fs /usr/share/ca-certificates/16016.pem /etc/ssl/certs/16016.pem"
	I0914 18:08:29.856943   62554 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16016.pem
	I0914 18:08:29.861671   62554 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 14 17:01 /usr/share/ca-certificates/16016.pem
	I0914 18:08:29.861730   62554 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16016.pem
	I0914 18:08:29.867506   62554 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16016.pem /etc/ssl/certs/51391683.0"
	I0914 18:08:29.878004   62554 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/160162.pem && ln -fs /usr/share/ca-certificates/160162.pem /etc/ssl/certs/160162.pem"
	I0914 18:08:29.890322   62554 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/160162.pem
	I0914 18:08:29.894985   62554 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 14 17:01 /usr/share/ca-certificates/160162.pem
	I0914 18:08:29.895053   62554 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/160162.pem
	I0914 18:08:29.900837   62554 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/160162.pem /etc/ssl/certs/3ec20f2e.0"
	I0914 18:08:25.409780   62996 main.go:141] libmachine: (old-k8s-version-556121) Waiting to get IP...
	I0914 18:08:25.410880   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 18:08:25.411287   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | unable to find current IP address of domain old-k8s-version-556121 in network mk-old-k8s-version-556121
	I0914 18:08:25.411359   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | I0914 18:08:25.411268   63916 retry.go:31] will retry after 190.165859ms: waiting for machine to come up
	I0914 18:08:25.602661   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 18:08:25.603210   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | unable to find current IP address of domain old-k8s-version-556121 in network mk-old-k8s-version-556121
	I0914 18:08:25.603235   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | I0914 18:08:25.603161   63916 retry.go:31] will retry after 274.368109ms: waiting for machine to come up
	I0914 18:08:25.879976   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 18:08:25.880476   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | unable to find current IP address of domain old-k8s-version-556121 in network mk-old-k8s-version-556121
	I0914 18:08:25.880509   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | I0914 18:08:25.880412   63916 retry.go:31] will retry after 476.865698ms: waiting for machine to come up
	I0914 18:08:26.359279   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 18:08:26.359815   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | unable to find current IP address of domain old-k8s-version-556121 in network mk-old-k8s-version-556121
	I0914 18:08:26.359845   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | I0914 18:08:26.359775   63916 retry.go:31] will retry after 474.163339ms: waiting for machine to come up
	I0914 18:08:26.835268   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 18:08:26.835953   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | unable to find current IP address of domain old-k8s-version-556121 in network mk-old-k8s-version-556121
	I0914 18:08:26.835983   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | I0914 18:08:26.835914   63916 retry.go:31] will retry after 567.661702ms: waiting for machine to come up
	I0914 18:08:27.404884   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 18:08:27.405341   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | unable to find current IP address of domain old-k8s-version-556121 in network mk-old-k8s-version-556121
	I0914 18:08:27.405370   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | I0914 18:08:27.405297   63916 retry.go:31] will retry after 852.429203ms: waiting for machine to come up
	I0914 18:08:28.259542   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 18:08:28.260217   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | unable to find current IP address of domain old-k8s-version-556121 in network mk-old-k8s-version-556121
	I0914 18:08:28.260243   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | I0914 18:08:28.260154   63916 retry.go:31] will retry after 1.085703288s: waiting for machine to come up
	I0914 18:08:29.347849   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 18:08:29.348268   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | unable to find current IP address of domain old-k8s-version-556121 in network mk-old-k8s-version-556121
	I0914 18:08:29.348289   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | I0914 18:08:29.348235   63916 retry.go:31] will retry after 1.387665735s: waiting for machine to come up
	I0914 18:08:29.911102   62554 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0914 18:08:29.915546   62554 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0914 18:08:29.921470   62554 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0914 18:08:29.927238   62554 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0914 18:08:29.933122   62554 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0914 18:08:29.938829   62554 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0914 18:08:29.944811   62554 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0914 18:08:29.950679   62554 kubeadm.go:392] StartCluster: {Name:embed-certs-044534 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19643/minikube-v1.34.0-1726281733-19643-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726281268-19643@sha256:cce8be0c1ac4e3d852132008ef1cc1dcf5b79f708d025db83f146ae65db32e8e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-044534 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.126 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0914 18:08:29.950762   62554 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0914 18:08:29.950866   62554 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0914 18:08:29.987553   62554 cri.go:89] found id: ""
	I0914 18:08:29.987626   62554 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0914 18:08:29.998690   62554 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0914 18:08:29.998713   62554 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0914 18:08:29.998765   62554 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0914 18:08:30.009411   62554 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0914 18:08:30.010804   62554 kubeconfig.go:125] found "embed-certs-044534" server: "https://192.168.50.126:8443"
	I0914 18:08:30.013635   62554 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0914 18:08:30.023903   62554 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.126
	I0914 18:08:30.023937   62554 kubeadm.go:1160] stopping kube-system containers ...
	I0914 18:08:30.023951   62554 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0914 18:08:30.024017   62554 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0914 18:08:30.067767   62554 cri.go:89] found id: ""
	I0914 18:08:30.067842   62554 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0914 18:08:30.087326   62554 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0914 18:08:30.098162   62554 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0914 18:08:30.098180   62554 kubeadm.go:157] found existing configuration files:
	
	I0914 18:08:30.098218   62554 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0914 18:08:30.108239   62554 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0914 18:08:30.108296   62554 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0914 18:08:30.118913   62554 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0914 18:08:30.129091   62554 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0914 18:08:30.129172   62554 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0914 18:08:30.139658   62554 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0914 18:08:30.148838   62554 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0914 18:08:30.148923   62554 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0914 18:08:30.158386   62554 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0914 18:08:30.167282   62554 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0914 18:08:30.167354   62554 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0914 18:08:30.176443   62554 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0914 18:08:30.185476   62554 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 18:08:30.310603   62554 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 18:08:31.243123   62554 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0914 18:08:31.457657   62554 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 18:08:31.531992   62554 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0914 18:08:31.625580   62554 api_server.go:52] waiting for apiserver process to appear ...
	I0914 18:08:31.625683   62554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:08:32.125744   62554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:08:32.626056   62554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:08:33.126817   62554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:08:33.146478   62554 api_server.go:72] duration metric: took 1.520896575s to wait for apiserver process to appear ...
	I0914 18:08:33.146517   62554 api_server.go:88] waiting for apiserver healthz status ...
	I0914 18:08:33.146543   62554 api_server.go:253] Checking apiserver healthz at https://192.168.50.126:8443/healthz ...
	I0914 18:08:33.147106   62554 api_server.go:269] stopped: https://192.168.50.126:8443/healthz: Get "https://192.168.50.126:8443/healthz": dial tcp 192.168.50.126:8443: connect: connection refused
	I0914 18:08:33.646672   62554 api_server.go:253] Checking apiserver healthz at https://192.168.50.126:8443/healthz ...
	I0914 18:08:30.737338   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 18:08:30.737792   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | unable to find current IP address of domain old-k8s-version-556121 in network mk-old-k8s-version-556121
	I0914 18:08:30.737844   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | I0914 18:08:30.737738   63916 retry.go:31] will retry after 1.803773185s: waiting for machine to come up
	I0914 18:08:32.543684   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 18:08:32.544156   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | unable to find current IP address of domain old-k8s-version-556121 in network mk-old-k8s-version-556121
	I0914 18:08:32.544182   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | I0914 18:08:32.544107   63916 retry.go:31] will retry after 1.828120666s: waiting for machine to come up
	I0914 18:08:34.373701   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 18:08:34.374182   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | unable to find current IP address of domain old-k8s-version-556121 in network mk-old-k8s-version-556121
	I0914 18:08:34.374211   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | I0914 18:08:34.374120   63916 retry.go:31] will retry after 2.720782735s: waiting for machine to come up
	I0914 18:08:35.687169   62554 api_server.go:279] https://192.168.50.126:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0914 18:08:35.687200   62554 api_server.go:103] status: https://192.168.50.126:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0914 18:08:35.687221   62554 api_server.go:253] Checking apiserver healthz at https://192.168.50.126:8443/healthz ...
	I0914 18:08:35.737352   62554 api_server.go:279] https://192.168.50.126:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0914 18:08:35.737410   62554 api_server.go:103] status: https://192.168.50.126:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0914 18:08:36.146777   62554 api_server.go:253] Checking apiserver healthz at https://192.168.50.126:8443/healthz ...
	I0914 18:08:36.151156   62554 api_server.go:279] https://192.168.50.126:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0914 18:08:36.151185   62554 api_server.go:103] status: https://192.168.50.126:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0914 18:08:36.647380   62554 api_server.go:253] Checking apiserver healthz at https://192.168.50.126:8443/healthz ...
	I0914 18:08:36.655444   62554 api_server.go:279] https://192.168.50.126:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0914 18:08:36.655477   62554 api_server.go:103] status: https://192.168.50.126:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0914 18:08:37.146971   62554 api_server.go:253] Checking apiserver healthz at https://192.168.50.126:8443/healthz ...
	I0914 18:08:37.151233   62554 api_server.go:279] https://192.168.50.126:8443/healthz returned 200:
	ok
	I0914 18:08:37.160642   62554 api_server.go:141] control plane version: v1.31.1
	I0914 18:08:37.160671   62554 api_server.go:131] duration metric: took 4.014146932s to wait for apiserver health ...
	I0914 18:08:37.160679   62554 cni.go:84] Creating CNI manager for ""
	I0914 18:08:37.160686   62554 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0914 18:08:37.162836   62554 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0914 18:08:37.164378   62554 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0914 18:08:37.183377   62554 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0914 18:08:37.210701   62554 system_pods.go:43] waiting for kube-system pods to appear ...
	I0914 18:08:37.222258   62554 system_pods.go:59] 8 kube-system pods found
	I0914 18:08:37.222304   62554 system_pods.go:61] "coredns-7c65d6cfc9-59dm5" [55e67ff8-cf54-41fc-af46-160085787f69] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0914 18:08:37.222316   62554 system_pods.go:61] "etcd-embed-certs-044534" [932ca8e3-a777-4bb3-bdc2-6c1f1d293d4a] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0914 18:08:37.222331   62554 system_pods.go:61] "kube-apiserver-embed-certs-044534" [f71e6720-c32c-426f-8620-b56eadf5e33b] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0914 18:08:37.222351   62554 system_pods.go:61] "kube-controller-manager-embed-certs-044534" [b93c261f-303f-43bb-8b33-4f97dc287809] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0914 18:08:37.222359   62554 system_pods.go:61] "kube-proxy-nkdth" [3762b613-c50f-4ba9-af52-371b139f9b6e] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0914 18:08:37.222368   62554 system_pods.go:61] "kube-scheduler-embed-certs-044534" [65da2ca2-0405-4726-a2dc-dd13519c336a] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0914 18:08:37.222377   62554 system_pods.go:61] "metrics-server-6867b74b74-stwfz" [ccc73057-4710-4e41-b643-d793d9b01175] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 18:08:37.222393   62554 system_pods.go:61] "storage-provisioner" [660fd3e3-ce57-4275-9fe1-bcceba75d8a0] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0914 18:08:37.222405   62554 system_pods.go:74] duration metric: took 11.676128ms to wait for pod list to return data ...
	I0914 18:08:37.222420   62554 node_conditions.go:102] verifying NodePressure condition ...
	I0914 18:08:37.227047   62554 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0914 18:08:37.227087   62554 node_conditions.go:123] node cpu capacity is 2
	I0914 18:08:37.227104   62554 node_conditions.go:105] duration metric: took 4.678826ms to run NodePressure ...
	I0914 18:08:37.227124   62554 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 18:08:37.510868   62554 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0914 18:08:37.515839   62554 kubeadm.go:739] kubelet initialised
	I0914 18:08:37.515863   62554 kubeadm.go:740] duration metric: took 4.967389ms waiting for restarted kubelet to initialise ...
	I0914 18:08:37.515871   62554 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 18:08:37.520412   62554 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-59dm5" in "kube-system" namespace to be "Ready" ...
	I0914 18:08:39.528469   62554 pod_ready.go:103] pod "coredns-7c65d6cfc9-59dm5" in "kube-system" namespace has status "Ready":"False"
	I0914 18:08:37.097976   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 18:08:37.098462   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | unable to find current IP address of domain old-k8s-version-556121 in network mk-old-k8s-version-556121
	I0914 18:08:37.098499   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | I0914 18:08:37.098402   63916 retry.go:31] will retry after 2.748765758s: waiting for machine to come up
	I0914 18:08:39.849058   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 18:08:39.849634   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | unable to find current IP address of domain old-k8s-version-556121 in network mk-old-k8s-version-556121
	I0914 18:08:39.849665   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | I0914 18:08:39.849559   63916 retry.go:31] will retry after 3.687679512s: waiting for machine to come up
	I0914 18:08:42.028017   62554 pod_ready.go:103] pod "coredns-7c65d6cfc9-59dm5" in "kube-system" namespace has status "Ready":"False"
	I0914 18:08:44.526502   62554 pod_ready.go:103] pod "coredns-7c65d6cfc9-59dm5" in "kube-system" namespace has status "Ready":"False"
	I0914 18:08:45.103061   63448 start.go:364] duration metric: took 2m4.701591278s to acquireMachinesLock for "default-k8s-diff-port-243449"
	I0914 18:08:45.103116   63448 start.go:96] Skipping create...Using existing machine configuration
	I0914 18:08:45.103124   63448 fix.go:54] fixHost starting: 
	I0914 18:08:45.103555   63448 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 18:08:45.103626   63448 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 18:08:45.120496   63448 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34337
	I0914 18:08:45.121098   63448 main.go:141] libmachine: () Calling .GetVersion
	I0914 18:08:45.122023   63448 main.go:141] libmachine: Using API Version  1
	I0914 18:08:45.122050   63448 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 18:08:45.122440   63448 main.go:141] libmachine: () Calling .GetMachineName
	I0914 18:08:45.122631   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .DriverName
	I0914 18:08:45.122792   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetState
	I0914 18:08:45.124473   63448 fix.go:112] recreateIfNeeded on default-k8s-diff-port-243449: state=Stopped err=<nil>
	I0914 18:08:45.124500   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .DriverName
	W0914 18:08:45.124633   63448 fix.go:138] unexpected machine state, will restart: <nil>
	I0914 18:08:45.126255   63448 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-243449" ...
	I0914 18:08:45.127296   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .Start
	I0914 18:08:45.127469   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Ensuring networks are active...
	I0914 18:08:45.128415   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Ensuring network default is active
	I0914 18:08:45.128823   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Ensuring network mk-default-k8s-diff-port-243449 is active
	I0914 18:08:45.129257   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Getting domain xml...
	I0914 18:08:45.130055   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Creating domain...
	I0914 18:08:43.541607   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 18:08:43.542188   62996 main.go:141] libmachine: (old-k8s-version-556121) Found IP for machine: 192.168.83.80
	I0914 18:08:43.542220   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has current primary IP address 192.168.83.80 and MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 18:08:43.542230   62996 main.go:141] libmachine: (old-k8s-version-556121) Reserving static IP address...
	I0914 18:08:43.542686   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | found host DHCP lease matching {name: "old-k8s-version-556121", mac: "52:54:00:76:25:ab", ip: "192.168.83.80"} in network mk-old-k8s-version-556121: {Iface:virbr1 ExpiryTime:2024-09-14 19:08:35 +0000 UTC Type:0 Mac:52:54:00:76:25:ab Iaid: IPaddr:192.168.83.80 Prefix:24 Hostname:old-k8s-version-556121 Clientid:01:52:54:00:76:25:ab}
	I0914 18:08:43.542711   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | skip adding static IP to network mk-old-k8s-version-556121 - found existing host DHCP lease matching {name: "old-k8s-version-556121", mac: "52:54:00:76:25:ab", ip: "192.168.83.80"}
	I0914 18:08:43.542728   62996 main.go:141] libmachine: (old-k8s-version-556121) Reserved static IP address: 192.168.83.80
	I0914 18:08:43.542748   62996 main.go:141] libmachine: (old-k8s-version-556121) Waiting for SSH to be available...
	I0914 18:08:43.542770   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | Getting to WaitForSSH function...
	I0914 18:08:43.545361   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 18:08:43.545798   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:25:ab", ip: ""} in network mk-old-k8s-version-556121: {Iface:virbr1 ExpiryTime:2024-09-14 19:08:35 +0000 UTC Type:0 Mac:52:54:00:76:25:ab Iaid: IPaddr:192.168.83.80 Prefix:24 Hostname:old-k8s-version-556121 Clientid:01:52:54:00:76:25:ab}
	I0914 18:08:43.545828   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined IP address 192.168.83.80 and MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 18:08:43.545984   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | Using SSH client type: external
	I0914 18:08:43.546021   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | Using SSH private key: /home/jenkins/minikube-integration/19643-8806/.minikube/machines/old-k8s-version-556121/id_rsa (-rw-------)
	I0914 18:08:43.546067   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.83.80 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19643-8806/.minikube/machines/old-k8s-version-556121/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0914 18:08:43.546091   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | About to run SSH command:
	I0914 18:08:43.546109   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | exit 0
	I0914 18:08:43.686605   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | SSH cmd err, output: <nil>: 
	I0914 18:08:43.687033   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetConfigRaw
	I0914 18:08:43.750102   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetIP
	I0914 18:08:43.753303   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 18:08:43.753653   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:25:ab", ip: ""} in network mk-old-k8s-version-556121: {Iface:virbr1 ExpiryTime:2024-09-14 19:08:35 +0000 UTC Type:0 Mac:52:54:00:76:25:ab Iaid: IPaddr:192.168.83.80 Prefix:24 Hostname:old-k8s-version-556121 Clientid:01:52:54:00:76:25:ab}
	I0914 18:08:43.753696   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined IP address 192.168.83.80 and MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 18:08:43.754107   62996 profile.go:143] Saving config to /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/old-k8s-version-556121/config.json ...
	I0914 18:08:43.802426   62996 machine.go:93] provisionDockerMachine start ...
	I0914 18:08:43.802497   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .DriverName
	I0914 18:08:43.802858   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHHostname
	I0914 18:08:43.805944   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 18:08:43.806303   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:25:ab", ip: ""} in network mk-old-k8s-version-556121: {Iface:virbr1 ExpiryTime:2024-09-14 19:08:35 +0000 UTC Type:0 Mac:52:54:00:76:25:ab Iaid: IPaddr:192.168.83.80 Prefix:24 Hostname:old-k8s-version-556121 Clientid:01:52:54:00:76:25:ab}
	I0914 18:08:43.806346   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined IP address 192.168.83.80 and MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 18:08:43.806722   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHPort
	I0914 18:08:43.806951   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHKeyPath
	I0914 18:08:43.807130   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHKeyPath
	I0914 18:08:43.807298   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHUsername
	I0914 18:08:43.807469   62996 main.go:141] libmachine: Using SSH client type: native
	I0914 18:08:43.807687   62996 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.83.80 22 <nil> <nil>}
	I0914 18:08:43.807700   62996 main.go:141] libmachine: About to run SSH command:
	hostname
	I0914 18:08:43.906427   62996 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0914 18:08:43.906467   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetMachineName
	I0914 18:08:43.906725   62996 buildroot.go:166] provisioning hostname "old-k8s-version-556121"
	I0914 18:08:43.906787   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetMachineName
	I0914 18:08:43.906978   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHHostname
	I0914 18:08:43.909891   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 18:08:43.910262   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:25:ab", ip: ""} in network mk-old-k8s-version-556121: {Iface:virbr1 ExpiryTime:2024-09-14 19:08:35 +0000 UTC Type:0 Mac:52:54:00:76:25:ab Iaid: IPaddr:192.168.83.80 Prefix:24 Hostname:old-k8s-version-556121 Clientid:01:52:54:00:76:25:ab}
	I0914 18:08:43.910295   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined IP address 192.168.83.80 and MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 18:08:43.910545   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHPort
	I0914 18:08:43.910771   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHKeyPath
	I0914 18:08:43.910908   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHKeyPath
	I0914 18:08:43.911062   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHUsername
	I0914 18:08:43.911221   62996 main.go:141] libmachine: Using SSH client type: native
	I0914 18:08:43.911418   62996 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.83.80 22 <nil> <nil>}
	I0914 18:08:43.911430   62996 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-556121 && echo "old-k8s-version-556121" | sudo tee /etc/hostname
	I0914 18:08:44.028748   62996 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-556121
	
	I0914 18:08:44.028774   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHHostname
	I0914 18:08:44.031512   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 18:08:44.031824   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:25:ab", ip: ""} in network mk-old-k8s-version-556121: {Iface:virbr1 ExpiryTime:2024-09-14 19:08:35 +0000 UTC Type:0 Mac:52:54:00:76:25:ab Iaid: IPaddr:192.168.83.80 Prefix:24 Hostname:old-k8s-version-556121 Clientid:01:52:54:00:76:25:ab}
	I0914 18:08:44.031848   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined IP address 192.168.83.80 and MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 18:08:44.032009   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHPort
	I0914 18:08:44.032145   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHKeyPath
	I0914 18:08:44.032311   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHKeyPath
	I0914 18:08:44.032445   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHUsername
	I0914 18:08:44.032583   62996 main.go:141] libmachine: Using SSH client type: native
	I0914 18:08:44.032792   62996 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.83.80 22 <nil> <nil>}
	I0914 18:08:44.032809   62996 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-556121' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-556121/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-556121' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0914 18:08:44.140041   62996 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0914 18:08:44.140068   62996 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19643-8806/.minikube CaCertPath:/home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19643-8806/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19643-8806/.minikube}
	I0914 18:08:44.140094   62996 buildroot.go:174] setting up certificates
	I0914 18:08:44.140103   62996 provision.go:84] configureAuth start
	I0914 18:08:44.140111   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetMachineName
	I0914 18:08:44.140439   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetIP
	I0914 18:08:44.143050   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 18:08:44.143454   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:25:ab", ip: ""} in network mk-old-k8s-version-556121: {Iface:virbr1 ExpiryTime:2024-09-14 19:08:35 +0000 UTC Type:0 Mac:52:54:00:76:25:ab Iaid: IPaddr:192.168.83.80 Prefix:24 Hostname:old-k8s-version-556121 Clientid:01:52:54:00:76:25:ab}
	I0914 18:08:44.143492   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined IP address 192.168.83.80 and MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 18:08:44.143678   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHHostname
	I0914 18:08:44.146487   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 18:08:44.146947   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:25:ab", ip: ""} in network mk-old-k8s-version-556121: {Iface:virbr1 ExpiryTime:2024-09-14 19:08:35 +0000 UTC Type:0 Mac:52:54:00:76:25:ab Iaid: IPaddr:192.168.83.80 Prefix:24 Hostname:old-k8s-version-556121 Clientid:01:52:54:00:76:25:ab}
	I0914 18:08:44.146971   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined IP address 192.168.83.80 and MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 18:08:44.147147   62996 provision.go:143] copyHostCerts
	I0914 18:08:44.147213   62996 exec_runner.go:144] found /home/jenkins/minikube-integration/19643-8806/.minikube/ca.pem, removing ...
	I0914 18:08:44.147224   62996 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19643-8806/.minikube/ca.pem
	I0914 18:08:44.147287   62996 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19643-8806/.minikube/ca.pem (1082 bytes)
	I0914 18:08:44.147440   62996 exec_runner.go:144] found /home/jenkins/minikube-integration/19643-8806/.minikube/cert.pem, removing ...
	I0914 18:08:44.147450   62996 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19643-8806/.minikube/cert.pem
	I0914 18:08:44.147475   62996 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19643-8806/.minikube/cert.pem (1123 bytes)
	I0914 18:08:44.147530   62996 exec_runner.go:144] found /home/jenkins/minikube-integration/19643-8806/.minikube/key.pem, removing ...
	I0914 18:08:44.147538   62996 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19643-8806/.minikube/key.pem
	I0914 18:08:44.147558   62996 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19643-8806/.minikube/key.pem (1675 bytes)
	I0914 18:08:44.147613   62996 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19643-8806/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-556121 san=[127.0.0.1 192.168.83.80 localhost minikube old-k8s-version-556121]
	I0914 18:08:44.500305   62996 provision.go:177] copyRemoteCerts
	I0914 18:08:44.500395   62996 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0914 18:08:44.500430   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHHostname
	I0914 18:08:44.503376   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 18:08:44.503790   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:25:ab", ip: ""} in network mk-old-k8s-version-556121: {Iface:virbr1 ExpiryTime:2024-09-14 19:08:35 +0000 UTC Type:0 Mac:52:54:00:76:25:ab Iaid: IPaddr:192.168.83.80 Prefix:24 Hostname:old-k8s-version-556121 Clientid:01:52:54:00:76:25:ab}
	I0914 18:08:44.503828   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined IP address 192.168.83.80 and MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 18:08:44.503972   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHPort
	I0914 18:08:44.504194   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHKeyPath
	I0914 18:08:44.504352   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHUsername
	I0914 18:08:44.504531   62996 sshutil.go:53] new ssh client: &{IP:192.168.83.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/old-k8s-version-556121/id_rsa Username:docker}
	I0914 18:08:44.584362   62996 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0914 18:08:44.607734   62996 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0914 18:08:44.630267   62996 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0914 18:08:44.653997   62996 provision.go:87] duration metric: took 513.857804ms to configureAuth
	I0914 18:08:44.654029   62996 buildroot.go:189] setting minikube options for container-runtime
	I0914 18:08:44.654259   62996 config.go:182] Loaded profile config "old-k8s-version-556121": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0914 18:08:44.654338   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHHostname
	I0914 18:08:44.657020   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 18:08:44.657416   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:25:ab", ip: ""} in network mk-old-k8s-version-556121: {Iface:virbr1 ExpiryTime:2024-09-14 19:08:35 +0000 UTC Type:0 Mac:52:54:00:76:25:ab Iaid: IPaddr:192.168.83.80 Prefix:24 Hostname:old-k8s-version-556121 Clientid:01:52:54:00:76:25:ab}
	I0914 18:08:44.657442   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined IP address 192.168.83.80 and MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 18:08:44.657676   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHPort
	I0914 18:08:44.657884   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHKeyPath
	I0914 18:08:44.658047   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHKeyPath
	I0914 18:08:44.658228   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHUsername
	I0914 18:08:44.658382   62996 main.go:141] libmachine: Using SSH client type: native
	I0914 18:08:44.658584   62996 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.83.80 22 <nil> <nil>}
	I0914 18:08:44.658602   62996 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0914 18:08:44.877074   62996 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0914 18:08:44.877103   62996 machine.go:96] duration metric: took 1.074648772s to provisionDockerMachine
	I0914 18:08:44.877117   62996 start.go:293] postStartSetup for "old-k8s-version-556121" (driver="kvm2")
	I0914 18:08:44.877128   62996 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0914 18:08:44.877155   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .DriverName
	I0914 18:08:44.877491   62996 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0914 18:08:44.877522   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHHostname
	I0914 18:08:44.880792   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 18:08:44.881167   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:25:ab", ip: ""} in network mk-old-k8s-version-556121: {Iface:virbr1 ExpiryTime:2024-09-14 19:08:35 +0000 UTC Type:0 Mac:52:54:00:76:25:ab Iaid: IPaddr:192.168.83.80 Prefix:24 Hostname:old-k8s-version-556121 Clientid:01:52:54:00:76:25:ab}
	I0914 18:08:44.881197   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined IP address 192.168.83.80 and MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 18:08:44.881472   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHPort
	I0914 18:08:44.881693   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHKeyPath
	I0914 18:08:44.881853   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHUsername
	I0914 18:08:44.881984   62996 sshutil.go:53] new ssh client: &{IP:192.168.83.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/old-k8s-version-556121/id_rsa Username:docker}
	I0914 18:08:44.961211   62996 ssh_runner.go:195] Run: cat /etc/os-release
	I0914 18:08:44.965472   62996 info.go:137] Remote host: Buildroot 2023.02.9
	I0914 18:08:44.965507   62996 filesync.go:126] Scanning /home/jenkins/minikube-integration/19643-8806/.minikube/addons for local assets ...
	I0914 18:08:44.965583   62996 filesync.go:126] Scanning /home/jenkins/minikube-integration/19643-8806/.minikube/files for local assets ...
	I0914 18:08:44.965671   62996 filesync.go:149] local asset: /home/jenkins/minikube-integration/19643-8806/.minikube/files/etc/ssl/certs/160162.pem -> 160162.pem in /etc/ssl/certs
	I0914 18:08:44.965765   62996 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0914 18:08:44.975476   62996 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/files/etc/ssl/certs/160162.pem --> /etc/ssl/certs/160162.pem (1708 bytes)
	I0914 18:08:45.000248   62996 start.go:296] duration metric: took 123.115178ms for postStartSetup
	I0914 18:08:45.000299   62996 fix.go:56] duration metric: took 20.85719914s for fixHost
	I0914 18:08:45.000326   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHHostname
	I0914 18:08:45.002894   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 18:08:45.003216   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:25:ab", ip: ""} in network mk-old-k8s-version-556121: {Iface:virbr1 ExpiryTime:2024-09-14 19:08:35 +0000 UTC Type:0 Mac:52:54:00:76:25:ab Iaid: IPaddr:192.168.83.80 Prefix:24 Hostname:old-k8s-version-556121 Clientid:01:52:54:00:76:25:ab}
	I0914 18:08:45.003247   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined IP address 192.168.83.80 and MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 18:08:45.003407   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHPort
	I0914 18:08:45.003585   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHKeyPath
	I0914 18:08:45.003749   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHKeyPath
	I0914 18:08:45.003880   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHUsername
	I0914 18:08:45.004041   62996 main.go:141] libmachine: Using SSH client type: native
	I0914 18:08:45.004211   62996 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.83.80 22 <nil> <nil>}
	I0914 18:08:45.004221   62996 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0914 18:08:45.102905   62996 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726337325.064071007
	
	I0914 18:08:45.102933   62996 fix.go:216] guest clock: 1726337325.064071007
	I0914 18:08:45.102944   62996 fix.go:229] Guest: 2024-09-14 18:08:45.064071007 +0000 UTC Remote: 2024-09-14 18:08:45.000305051 +0000 UTC m=+219.697616364 (delta=63.765956ms)
	I0914 18:08:45.102967   62996 fix.go:200] guest clock delta is within tolerance: 63.765956ms
	I0914 18:08:45.102973   62996 start.go:83] releasing machines lock for "old-k8s-version-556121", held for 20.959903428s
	I0914 18:08:45.102999   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .DriverName
	I0914 18:08:45.103277   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetIP
	I0914 18:08:45.105995   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 18:08:45.106435   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:25:ab", ip: ""} in network mk-old-k8s-version-556121: {Iface:virbr1 ExpiryTime:2024-09-14 19:08:35 +0000 UTC Type:0 Mac:52:54:00:76:25:ab Iaid: IPaddr:192.168.83.80 Prefix:24 Hostname:old-k8s-version-556121 Clientid:01:52:54:00:76:25:ab}
	I0914 18:08:45.106463   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined IP address 192.168.83.80 and MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 18:08:45.106684   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .DriverName
	I0914 18:08:45.107224   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .DriverName
	I0914 18:08:45.107415   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .DriverName
	I0914 18:08:45.107506   62996 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0914 18:08:45.107556   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHHostname
	I0914 18:08:45.107675   62996 ssh_runner.go:195] Run: cat /version.json
	I0914 18:08:45.107699   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHHostname
	I0914 18:08:45.110528   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 18:08:45.110558   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 18:08:45.110917   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:25:ab", ip: ""} in network mk-old-k8s-version-556121: {Iface:virbr1 ExpiryTime:2024-09-14 19:08:35 +0000 UTC Type:0 Mac:52:54:00:76:25:ab Iaid: IPaddr:192.168.83.80 Prefix:24 Hostname:old-k8s-version-556121 Clientid:01:52:54:00:76:25:ab}
	I0914 18:08:45.110947   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:25:ab", ip: ""} in network mk-old-k8s-version-556121: {Iface:virbr1 ExpiryTime:2024-09-14 19:08:35 +0000 UTC Type:0 Mac:52:54:00:76:25:ab Iaid: IPaddr:192.168.83.80 Prefix:24 Hostname:old-k8s-version-556121 Clientid:01:52:54:00:76:25:ab}
	I0914 18:08:45.110969   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined IP address 192.168.83.80 and MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 18:08:45.111062   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined IP address 192.168.83.80 and MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 18:08:45.111157   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHPort
	I0914 18:08:45.111388   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHKeyPath
	I0914 18:08:45.111407   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHPort
	I0914 18:08:45.111564   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHUsername
	I0914 18:08:45.111582   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHKeyPath
	I0914 18:08:45.111716   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHUsername
	I0914 18:08:45.111758   62996 sshutil.go:53] new ssh client: &{IP:192.168.83.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/old-k8s-version-556121/id_rsa Username:docker}
	I0914 18:08:45.111829   62996 sshutil.go:53] new ssh client: &{IP:192.168.83.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/old-k8s-version-556121/id_rsa Username:docker}
	I0914 18:08:45.187315   62996 ssh_runner.go:195] Run: systemctl --version
	I0914 18:08:45.222737   62996 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0914 18:08:45.372449   62996 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0914 18:08:45.378337   62996 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0914 18:08:45.378395   62996 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0914 18:08:45.396041   62996 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0914 18:08:45.396072   62996 start.go:495] detecting cgroup driver to use...
	I0914 18:08:45.396148   62996 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0914 18:08:45.413530   62996 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0914 18:08:45.428876   62996 docker.go:217] disabling cri-docker service (if available) ...
	I0914 18:08:45.428950   62996 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0914 18:08:45.444066   62996 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0914 18:08:45.458976   62996 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0914 18:08:45.591808   62996 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0914 18:08:45.737299   62996 docker.go:233] disabling docker service ...
	I0914 18:08:45.737382   62996 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0914 18:08:45.752471   62996 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0914 18:08:45.770192   62996 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0914 18:08:45.923691   62996 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0914 18:08:46.054919   62996 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0914 18:08:46.068923   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0914 18:08:46.089366   62996 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0914 18:08:46.089441   62996 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 18:08:46.100025   62996 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0914 18:08:46.100100   62996 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 18:08:46.111015   62996 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 18:08:46.123133   62996 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 18:08:46.135582   62996 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0914 18:08:46.146937   62996 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0914 18:08:46.158542   62996 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0914 18:08:46.158618   62996 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0914 18:08:46.178181   62996 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0914 18:08:46.188291   62996 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 18:08:46.316875   62996 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0914 18:08:46.407391   62996 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0914 18:08:46.407470   62996 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0914 18:08:46.412103   62996 start.go:563] Will wait 60s for crictl version
	I0914 18:08:46.412164   62996 ssh_runner.go:195] Run: which crictl
	I0914 18:08:46.415903   62996 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0914 18:08:46.457124   62996 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0914 18:08:46.457224   62996 ssh_runner.go:195] Run: crio --version
	I0914 18:08:46.485380   62996 ssh_runner.go:195] Run: crio --version
	I0914 18:08:46.513525   62996 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0914 18:08:46.027201   62554 pod_ready.go:93] pod "coredns-7c65d6cfc9-59dm5" in "kube-system" namespace has status "Ready":"True"
	I0914 18:08:46.027223   62554 pod_ready.go:82] duration metric: took 8.506784658s for pod "coredns-7c65d6cfc9-59dm5" in "kube-system" namespace to be "Ready" ...
	I0914 18:08:46.027232   62554 pod_ready.go:79] waiting up to 4m0s for pod "etcd-embed-certs-044534" in "kube-system" namespace to be "Ready" ...
	I0914 18:08:47.043468   62554 pod_ready.go:93] pod "etcd-embed-certs-044534" in "kube-system" namespace has status "Ready":"True"
	I0914 18:08:47.043499   62554 pod_ready.go:82] duration metric: took 1.016259668s for pod "etcd-embed-certs-044534" in "kube-system" namespace to be "Ready" ...
	I0914 18:08:47.043513   62554 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-embed-certs-044534" in "kube-system" namespace to be "Ready" ...
	I0914 18:08:47.050825   62554 pod_ready.go:93] pod "kube-apiserver-embed-certs-044534" in "kube-system" namespace has status "Ready":"True"
	I0914 18:08:47.050853   62554 pod_ready.go:82] duration metric: took 7.332421ms for pod "kube-apiserver-embed-certs-044534" in "kube-system" namespace to be "Ready" ...
	I0914 18:08:47.050869   62554 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-044534" in "kube-system" namespace to be "Ready" ...
	I0914 18:08:47.561389   62554 pod_ready.go:93] pod "kube-controller-manager-embed-certs-044534" in "kube-system" namespace has status "Ready":"True"
	I0914 18:08:47.561419   62554 pod_ready.go:82] duration metric: took 510.541663ms for pod "kube-controller-manager-embed-certs-044534" in "kube-system" namespace to be "Ready" ...
	I0914 18:08:47.561434   62554 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-nkdth" in "kube-system" namespace to be "Ready" ...
	I0914 18:08:47.568265   62554 pod_ready.go:93] pod "kube-proxy-nkdth" in "kube-system" namespace has status "Ready":"True"
	I0914 18:08:47.568298   62554 pod_ready.go:82] duration metric: took 6.854878ms for pod "kube-proxy-nkdth" in "kube-system" namespace to be "Ready" ...
	I0914 18:08:47.568312   62554 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-embed-certs-044534" in "kube-system" namespace to be "Ready" ...
	I0914 18:08:48.575898   62554 pod_ready.go:93] pod "kube-scheduler-embed-certs-044534" in "kube-system" namespace has status "Ready":"True"
	I0914 18:08:48.575924   62554 pod_ready.go:82] duration metric: took 1.00760412s for pod "kube-scheduler-embed-certs-044534" in "kube-system" namespace to be "Ready" ...
	I0914 18:08:48.575934   62554 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace to be "Ready" ...
	I0914 18:08:46.464001   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Waiting to get IP...
	I0914 18:08:46.465004   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | domain default-k8s-diff-port-243449 has defined MAC address 52:54:00:6e:0b:a7 in network mk-default-k8s-diff-port-243449
	I0914 18:08:46.465408   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | unable to find current IP address of domain default-k8s-diff-port-243449 in network mk-default-k8s-diff-port-243449
	I0914 18:08:46.465512   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | I0914 18:08:46.465391   64066 retry.go:31] will retry after 283.185405ms: waiting for machine to come up
	I0914 18:08:46.751155   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | domain default-k8s-diff-port-243449 has defined MAC address 52:54:00:6e:0b:a7 in network mk-default-k8s-diff-port-243449
	I0914 18:08:46.751669   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | unable to find current IP address of domain default-k8s-diff-port-243449 in network mk-default-k8s-diff-port-243449
	I0914 18:08:46.751697   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | I0914 18:08:46.751622   64066 retry.go:31] will retry after 307.273139ms: waiting for machine to come up
	I0914 18:08:47.060812   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | domain default-k8s-diff-port-243449 has defined MAC address 52:54:00:6e:0b:a7 in network mk-default-k8s-diff-port-243449
	I0914 18:08:47.061855   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | unable to find current IP address of domain default-k8s-diff-port-243449 in network mk-default-k8s-diff-port-243449
	I0914 18:08:47.061889   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | I0914 18:08:47.061749   64066 retry.go:31] will retry after 420.077307ms: waiting for machine to come up
	I0914 18:08:47.483188   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | domain default-k8s-diff-port-243449 has defined MAC address 52:54:00:6e:0b:a7 in network mk-default-k8s-diff-port-243449
	I0914 18:08:47.483611   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | unable to find current IP address of domain default-k8s-diff-port-243449 in network mk-default-k8s-diff-port-243449
	I0914 18:08:47.483656   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | I0914 18:08:47.483567   64066 retry.go:31] will retry after 562.15435ms: waiting for machine to come up
	I0914 18:08:48.047428   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | domain default-k8s-diff-port-243449 has defined MAC address 52:54:00:6e:0b:a7 in network mk-default-k8s-diff-port-243449
	I0914 18:08:48.047944   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | unable to find current IP address of domain default-k8s-diff-port-243449 in network mk-default-k8s-diff-port-243449
	I0914 18:08:48.047971   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | I0914 18:08:48.047867   64066 retry.go:31] will retry after 744.523152ms: waiting for machine to come up
	I0914 18:08:48.793959   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | domain default-k8s-diff-port-243449 has defined MAC address 52:54:00:6e:0b:a7 in network mk-default-k8s-diff-port-243449
	I0914 18:08:48.794449   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | unable to find current IP address of domain default-k8s-diff-port-243449 in network mk-default-k8s-diff-port-243449
	I0914 18:08:48.794492   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | I0914 18:08:48.794393   64066 retry.go:31] will retry after 813.631617ms: waiting for machine to come up
	I0914 18:08:49.609483   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | domain default-k8s-diff-port-243449 has defined MAC address 52:54:00:6e:0b:a7 in network mk-default-k8s-diff-port-243449
	I0914 18:08:49.609946   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | unable to find current IP address of domain default-k8s-diff-port-243449 in network mk-default-k8s-diff-port-243449
	I0914 18:08:49.609969   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | I0914 18:08:49.609904   64066 retry.go:31] will retry after 941.244861ms: waiting for machine to come up
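The retry.go lines above show the driver polling libvirt's DHCP leases for the new VM's IP, with a delay that grows on each attempt (283ms, 307ms, 420ms, ... up to a couple of seconds). A small stand-alone sketch of that capped, jittered backoff loop (illustrative only, not the libmachine implementation):

    package main

    import (
    	"errors"
    	"fmt"
    	"math/rand"
    	"time"
    )

    // waitForIP retries a lookup with a growing, jittered delay, similar in
    // spirit to the retry.go lines above.
    func waitForIP(lookup func() (string, error), deadline time.Duration) (string, error) {
    	start := time.Now()
    	delay := 250 * time.Millisecond
    	for time.Since(start) < deadline {
    		ip, err := lookup()
    		if err == nil && ip != "" {
    			return ip, nil
    		}
    		// add jitter and grow the delay, capped at a few seconds
    		sleep := delay + time.Duration(rand.Int63n(int64(delay/2)))
    		fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
    		time.Sleep(sleep)
    		if delay < 4*time.Second {
    			delay = delay * 3 / 2
    		}
    	}
    	return "", errors.New("timed out waiting for machine IP")
    }

    func main() {
    	_, err := waitForIP(func() (string, error) {
    		return "", errors.New("no DHCP lease yet") // stand-in for the libvirt lease lookup
    	}, 3*time.Second)
    	fmt.Println(err)
    }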
	I0914 18:08:46.515031   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetIP
	I0914 18:08:46.517851   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 18:08:46.518301   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:25:ab", ip: ""} in network mk-old-k8s-version-556121: {Iface:virbr1 ExpiryTime:2024-09-14 19:08:35 +0000 UTC Type:0 Mac:52:54:00:76:25:ab Iaid: IPaddr:192.168.83.80 Prefix:24 Hostname:old-k8s-version-556121 Clientid:01:52:54:00:76:25:ab}
	I0914 18:08:46.518329   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined IP address 192.168.83.80 and MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 18:08:46.518560   62996 ssh_runner.go:195] Run: grep 192.168.83.1	host.minikube.internal$ /etc/hosts
	I0914 18:08:46.522559   62996 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.83.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
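The one-liner above refreshes /etc/hosts by dropping any stale host.minikube.internal entry, appending the current one, writing the result to a temp file, and copying it back with sudo. A rough native-Go sketch of the same filter-append-replace pattern, run here against a scratch file rather than the real /etc/hosts:

    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    // ensureHostAlias rewrites a hosts file so that exactly one entry maps the
    // given name, mirroring the logged one-liner. Sketch only; minikube runs the
    // shell version over SSH with sudo against /etc/hosts.
    func ensureHostAlias(path, ip, name string) error {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return err
    	}
    	var kept []string
    	for _, line := range strings.Split(string(data), "\n") {
    		if strings.HasSuffix(line, "\t"+name) {
    			continue // drop any stale entry for this name
    		}
    		if line != "" {
    			kept = append(kept, line)
    		}
    	}
    	kept = append(kept, ip+"\t"+name)
    	tmp := path + ".new"
    	if err := os.WriteFile(tmp, []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
    		return err
    	}
    	return os.Rename(tmp, path) // the logged command uses `sudo cp` instead
    }

    func main() {
    	f, err := os.CreateTemp("", "hosts")
    	if err != nil {
    		panic(err)
    	}
    	f.WriteString("127.0.0.1\tlocalhost\n")
    	f.Close()
    	if err := ensureHostAlias(f.Name(), "192.168.83.1", "host.minikube.internal"); err != nil {
    		panic(err)
    	}
    	out, _ := os.ReadFile(f.Name())
    	fmt.Print(string(out))
    }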
	I0914 18:08:46.536122   62996 kubeadm.go:883] updating cluster {Name:old-k8s-version-556121 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19643/minikube-v1.34.0-1726281733-19643-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726281268-19643@sha256:cce8be0c1ac4e3d852132008ef1cc1dcf5b79f708d025db83f146ae65db32e8e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-556121 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.83.80 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0914 18:08:46.536233   62996 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0914 18:08:46.536272   62996 ssh_runner.go:195] Run: sudo crictl images --output json
	I0914 18:08:46.582326   62996 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0914 18:08:46.582385   62996 ssh_runner.go:195] Run: which lz4
	I0914 18:08:46.586381   62996 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0914 18:08:46.590252   62996 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0914 18:08:46.590302   62996 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0914 18:08:48.262036   62996 crio.go:462] duration metric: took 1.6757003s to copy over tarball
	I0914 18:08:48.262113   62996 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0914 18:08:50.583860   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:08:52.826559   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:08:50.553210   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | domain default-k8s-diff-port-243449 has defined MAC address 52:54:00:6e:0b:a7 in network mk-default-k8s-diff-port-243449
	I0914 18:08:50.553735   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | unable to find current IP address of domain default-k8s-diff-port-243449 in network mk-default-k8s-diff-port-243449
	I0914 18:08:50.553764   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | I0914 18:08:50.553671   64066 retry.go:31] will retry after 1.107692241s: waiting for machine to come up
	I0914 18:08:51.663218   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | domain default-k8s-diff-port-243449 has defined MAC address 52:54:00:6e:0b:a7 in network mk-default-k8s-diff-port-243449
	I0914 18:08:51.663723   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | unable to find current IP address of domain default-k8s-diff-port-243449 in network mk-default-k8s-diff-port-243449
	I0914 18:08:51.663753   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | I0914 18:08:51.663681   64066 retry.go:31] will retry after 1.357435642s: waiting for machine to come up
	I0914 18:08:53.022246   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | domain default-k8s-diff-port-243449 has defined MAC address 52:54:00:6e:0b:a7 in network mk-default-k8s-diff-port-243449
	I0914 18:08:53.022695   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | unable to find current IP address of domain default-k8s-diff-port-243449 in network mk-default-k8s-diff-port-243449
	I0914 18:08:53.022726   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | I0914 18:08:53.022628   64066 retry.go:31] will retry after 2.045434586s: waiting for machine to come up
	I0914 18:08:55.070946   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | domain default-k8s-diff-port-243449 has defined MAC address 52:54:00:6e:0b:a7 in network mk-default-k8s-diff-port-243449
	I0914 18:08:55.071420   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | unable to find current IP address of domain default-k8s-diff-port-243449 in network mk-default-k8s-diff-port-243449
	I0914 18:08:55.071450   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | I0914 18:08:55.071362   64066 retry.go:31] will retry after 2.084823885s: waiting for machine to come up
	I0914 18:08:51.259991   62996 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.997823346s)
	I0914 18:08:51.260027   62996 crio.go:469] duration metric: took 2.997963105s to extract the tarball
	I0914 18:08:51.260037   62996 ssh_runner.go:146] rm: /preloaded.tar.lz4
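Because no v1.20.0 images were preloaded in the runtime, the ~473 MB tarball is copied onto the node and unpacked under /var with lz4 decompression while preserving security.capability xattrs, then deleted. A hedged local sketch of that unpack step (placeholder paths; on the node the command runs over SSH with sudo, as logged above):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    func main() {
    	// Placeholder paths; on the VM the tarball lands at /preloaded.tar.lz4 and
    	// is unpacked under /var so CRI-O's image store is populated directly.
    	tarball := "/tmp/preloaded.tar.lz4"
    	dest := "/tmp/preload-demo"

    	start := time.Now()
    	cmd := exec.Command("tar", // the node invokes this under sudo
    		"--xattrs", "--xattrs-include", "security.capability", // keep file capabilities intact
    		"-I", "lz4", // decompress with lz4
    		"-C", dest, "-xf", tarball)
    	out, err := cmd.CombinedOutput()
    	fmt.Printf("took %s, err=%v\n%s", time.Since(start), err, out)
    }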
	I0914 18:08:51.303210   62996 ssh_runner.go:195] Run: sudo crictl images --output json
	I0914 18:08:51.337655   62996 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0914 18:08:51.337685   62996 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0914 18:08:51.337793   62996 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0914 18:08:51.337910   62996 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0914 18:08:51.337941   62996 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0914 18:08:51.337950   62996 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0914 18:08:51.337800   62996 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0914 18:08:51.337803   62996 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0914 18:08:51.337791   62996 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 18:08:51.337823   62996 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0914 18:08:51.339846   62996 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0914 18:08:51.339855   62996 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0914 18:08:51.339875   62996 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0914 18:08:51.339865   62996 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 18:08:51.339901   62996 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0914 18:08:51.339935   62996 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0914 18:08:51.339958   62996 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0914 18:08:51.339949   62996 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0914 18:08:51.528665   62996 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0914 18:08:51.570817   62996 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0914 18:08:51.575861   62996 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0914 18:08:51.575917   62996 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0914 18:08:51.575968   62996 ssh_runner.go:195] Run: which crictl
	I0914 18:08:51.576612   62996 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0914 18:08:51.577804   62996 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0914 18:08:51.578496   62996 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0914 18:08:51.581833   62996 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0914 18:08:51.613046   62996 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0914 18:08:51.724554   62996 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0914 18:08:51.724608   62996 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0914 18:08:51.724611   62996 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0914 18:08:51.724713   62996 ssh_runner.go:195] Run: which crictl
	I0914 18:08:51.757578   62996 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0914 18:08:51.757628   62996 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0914 18:08:51.757677   62996 ssh_runner.go:195] Run: which crictl
	I0914 18:08:51.772578   62996 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0914 18:08:51.772597   62996 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0914 18:08:51.772629   62996 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0914 18:08:51.772634   62996 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0914 18:08:51.772659   62996 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0914 18:08:51.772690   62996 ssh_runner.go:195] Run: which crictl
	I0914 18:08:51.772704   62996 ssh_runner.go:195] Run: which crictl
	I0914 18:08:51.772633   62996 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0914 18:08:51.772748   62996 ssh_runner.go:195] Run: which crictl
	I0914 18:08:51.790305   62996 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0914 18:08:51.790442   62996 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0914 18:08:51.790492   62996 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0914 18:08:51.790534   62996 ssh_runner.go:195] Run: which crictl
	I0914 18:08:51.799286   62996 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0914 18:08:51.799338   62996 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0914 18:08:51.799395   62996 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0914 18:08:51.799446   62996 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0914 18:08:51.799486   62996 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0914 18:08:51.937830   62996 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0914 18:08:51.937839   62996 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0914 18:08:51.937918   62996 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0914 18:08:51.940605   62996 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0914 18:08:51.940670   62996 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0914 18:08:51.940723   62996 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0914 18:08:51.962218   62996 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0914 18:08:52.063106   62996 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0914 18:08:52.112424   62996 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0914 18:08:52.112498   62996 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0914 18:08:52.112521   62996 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0914 18:08:52.112602   62996 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19643-8806/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0914 18:08:52.112608   62996 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0914 18:08:52.112737   62996 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0914 18:08:52.149523   62996 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19643-8806/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0914 18:08:52.230998   62996 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0914 18:08:52.231015   62996 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19643-8806/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0914 18:08:52.234715   62996 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19643-8806/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0914 18:08:52.234737   62996 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19643-8806/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0914 18:08:52.234813   62996 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19643-8806/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0914 18:08:52.268145   62996 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19643-8806/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0914 18:08:52.500688   62996 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 18:08:52.641559   62996 cache_images.go:92] duration metric: took 1.303851383s to LoadCachedImages
	W0914 18:08:52.641671   62996 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19643-8806/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0: no such file or directory
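The cache_images.go flow above asks the runtime for each required image's ID, marks anything missing or mismatched as "needs transfer", removes it with crictl rmi, and then tries to load it from the local image cache, which is absent on this host, hence the warning. A simplified sketch of the presence check (expected IDs copied from the log lines above; the helper name is made up):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // imageID asks the node's runtime for the ID of an image, the same check the
    // "podman image inspect --format {{.Id}}" lines above perform over SSH.
    func imageID(image string) (string, bool) {
    	out, err := exec.Command("sudo", "podman", "image", "inspect", "--format", "{{.Id}}", image).Output()
    	if err != nil {
    		return "", false // inspect fails when the image is not present
    	}
    	return strings.TrimSpace(string(out)), true
    }

    func main() {
    	required := map[string]string{
    		// expected IDs taken from the "does not exist at hash ..." lines above
    		"registry.k8s.io/pause:3.2":          "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c",
    		"registry.k8s.io/kube-proxy:v1.20.0": "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc",
    	}
    	for img, want := range required {
    		got, ok := imageID(img)
    		if !ok || got != want {
    			fmt.Printf("%q needs transfer\n", img)
    			// next steps in the log: crictl rmi <img>, then load the image from
    			// .minikube/cache/images/... (missing here, hence the warning above)
    		}
    	}
    }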
	I0914 18:08:52.641690   62996 kubeadm.go:934] updating node { 192.168.83.80 8443 v1.20.0 crio true true} ...
	I0914 18:08:52.641822   62996 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-556121 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.83.80
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-556121 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0914 18:08:52.641918   62996 ssh_runner.go:195] Run: crio config
	I0914 18:08:52.691852   62996 cni.go:84] Creating CNI manager for ""
	I0914 18:08:52.691878   62996 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0914 18:08:52.691888   62996 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0914 18:08:52.691906   62996 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.83.80 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-556121 NodeName:old-k8s-version-556121 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.83.80"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.83.80 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0914 18:08:52.692037   62996 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.83.80
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-556121"
	  kubeletExtraArgs:
	    node-ip: 192.168.83.80
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.83.80"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0914 18:08:52.692122   62996 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0914 18:08:52.701735   62996 binaries.go:44] Found k8s binaries, skipping transfer
	I0914 18:08:52.701810   62996 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0914 18:08:52.711224   62996 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0914 18:08:52.728991   62996 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0914 18:08:52.746689   62996 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
	I0914 18:08:52.765724   62996 ssh_runner.go:195] Run: grep 192.168.83.80	control-plane.minikube.internal$ /etc/hosts
	I0914 18:08:52.769968   62996 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.83.80	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0914 18:08:52.782728   62996 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 18:08:52.910650   62996 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0914 18:08:52.927202   62996 certs.go:68] Setting up /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/old-k8s-version-556121 for IP: 192.168.83.80
	I0914 18:08:52.927226   62996 certs.go:194] generating shared ca certs ...
	I0914 18:08:52.927247   62996 certs.go:226] acquiring lock for ca certs: {Name:mkb663a3180967f5f94f0c355b2cd55067394331 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 18:08:52.927426   62996 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19643-8806/.minikube/ca.key
	I0914 18:08:52.927478   62996 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19643-8806/.minikube/proxy-client-ca.key
	I0914 18:08:52.927488   62996 certs.go:256] generating profile certs ...
	I0914 18:08:52.927584   62996 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/old-k8s-version-556121/client.key
	I0914 18:08:52.927642   62996 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/old-k8s-version-556121/apiserver.key.faf839ab
	I0914 18:08:52.927706   62996 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/old-k8s-version-556121/proxy-client.key
	I0914 18:08:52.927873   62996 certs.go:484] found cert: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/16016.pem (1338 bytes)
	W0914 18:08:52.927906   62996 certs.go:480] ignoring /home/jenkins/minikube-integration/19643-8806/.minikube/certs/16016_empty.pem, impossibly tiny 0 bytes
	I0914 18:08:52.927916   62996 certs.go:484] found cert: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca-key.pem (1679 bytes)
	I0914 18:08:52.927938   62996 certs.go:484] found cert: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca.pem (1082 bytes)
	I0914 18:08:52.927960   62996 certs.go:484] found cert: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/cert.pem (1123 bytes)
	I0914 18:08:52.927982   62996 certs.go:484] found cert: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/key.pem (1675 bytes)
	I0914 18:08:52.928018   62996 certs.go:484] found cert: /home/jenkins/minikube-integration/19643-8806/.minikube/files/etc/ssl/certs/160162.pem (1708 bytes)
	I0914 18:08:52.928623   62996 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0914 18:08:52.991610   62996 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0914 18:08:53.017660   62996 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0914 18:08:53.044552   62996 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0914 18:08:53.073612   62996 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/old-k8s-version-556121/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0914 18:08:53.125813   62996 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/old-k8s-version-556121/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0914 18:08:53.157202   62996 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/old-k8s-version-556121/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0914 18:08:53.201480   62996 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/old-k8s-version-556121/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0914 18:08:53.226725   62996 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/files/etc/ssl/certs/160162.pem --> /usr/share/ca-certificates/160162.pem (1708 bytes)
	I0914 18:08:53.250793   62996 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0914 18:08:53.275519   62996 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/certs/16016.pem --> /usr/share/ca-certificates/16016.pem (1338 bytes)
	I0914 18:08:53.300545   62996 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0914 18:08:53.317709   62996 ssh_runner.go:195] Run: openssl version
	I0914 18:08:53.323602   62996 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/160162.pem && ln -fs /usr/share/ca-certificates/160162.pem /etc/ssl/certs/160162.pem"
	I0914 18:08:53.335011   62996 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/160162.pem
	I0914 18:08:53.339838   62996 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 14 17:01 /usr/share/ca-certificates/160162.pem
	I0914 18:08:53.339909   62996 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/160162.pem
	I0914 18:08:53.346100   62996 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/160162.pem /etc/ssl/certs/3ec20f2e.0"
	I0914 18:08:53.359186   62996 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0914 18:08:53.370507   62996 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0914 18:08:53.375153   62996 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 14 16:44 /usr/share/ca-certificates/minikubeCA.pem
	I0914 18:08:53.375223   62996 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0914 18:08:53.380939   62996 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0914 18:08:53.392163   62996 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16016.pem && ln -fs /usr/share/ca-certificates/16016.pem /etc/ssl/certs/16016.pem"
	I0914 18:08:53.404356   62996 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16016.pem
	I0914 18:08:53.409052   62996 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 14 17:01 /usr/share/ca-certificates/16016.pem
	I0914 18:08:53.409134   62996 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16016.pem
	I0914 18:08:53.415280   62996 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16016.pem /etc/ssl/certs/51391683.0"
	I0914 18:08:53.426864   62996 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0914 18:08:53.431690   62996 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0914 18:08:53.437920   62996 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0914 18:08:53.444244   62996 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0914 18:08:53.450762   62996 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0914 18:08:53.457107   62996 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0914 18:08:53.463041   62996 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
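Each `openssl x509 -noout -checkend 86400` call above confirms the certificate will still be valid 24 hours from now before it is reused. The same check can be expressed natively with crypto/x509; a small sketch (the path is one of those probed above):

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    // validFor reports whether the PEM certificate at path is still valid for at
    // least d (the openssl equivalent is `x509 -checkend <seconds>`).
    func validFor(path string, d time.Duration) (bool, error) {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return false, err
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		return false, fmt.Errorf("no PEM block in %s", path)
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		return false, err
    	}
    	return time.Now().Add(d).Before(cert.NotAfter), nil
    }

    func main() {
    	ok, err := validFor("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
    	fmt.Println(ok, err)
    }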
	I0914 18:08:53.469401   62996 kubeadm.go:392] StartCluster: {Name:old-k8s-version-556121 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19643/minikube-v1.34.0-1726281733-19643-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726281268-19643@sha256:cce8be0c1ac4e3d852132008ef1cc1dcf5b79f708d025db83f146ae65db32e8e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-556121 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.83.80 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0914 18:08:53.469509   62996 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0914 18:08:53.469568   62996 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0914 18:08:53.508602   62996 cri.go:89] found id: ""
	I0914 18:08:53.508668   62996 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0914 18:08:53.518645   62996 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0914 18:08:53.518666   62996 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0914 18:08:53.518719   62996 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0914 18:08:53.530459   62996 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0914 18:08:53.531439   62996 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-556121" does not appear in /home/jenkins/minikube-integration/19643-8806/kubeconfig
	I0914 18:08:53.532109   62996 kubeconfig.go:62] /home/jenkins/minikube-integration/19643-8806/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-556121" cluster setting kubeconfig missing "old-k8s-version-556121" context setting]
	I0914 18:08:53.532952   62996 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19643-8806/kubeconfig: {Name:mk850f3e2f3c2ef1e81c795ae72e22e459e27140 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 18:08:53.611765   62996 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0914 18:08:53.622817   62996 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.83.80
	I0914 18:08:53.622854   62996 kubeadm.go:1160] stopping kube-system containers ...
	I0914 18:08:53.622866   62996 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0914 18:08:53.622919   62996 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0914 18:08:53.659041   62996 cri.go:89] found id: ""
	I0914 18:08:53.659191   62996 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0914 18:08:53.680543   62996 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0914 18:08:53.693835   62996 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0914 18:08:53.693854   62996 kubeadm.go:157] found existing configuration files:
	
	I0914 18:08:53.693907   62996 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0914 18:08:53.704221   62996 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0914 18:08:53.704300   62996 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0914 18:08:53.713947   62996 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0914 18:08:53.722981   62996 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0914 18:08:53.723056   62996 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0914 18:08:53.733059   62996 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0914 18:08:53.742233   62996 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0914 18:08:53.742305   62996 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0914 18:08:53.752182   62996 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0914 18:08:53.761890   62996 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0914 18:08:53.761965   62996 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
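The stale-config cleanup above greps each /etc/kubernetes/*.conf for the expected control-plane endpoint and removes any file that does not reference it; here the files do not exist yet, so every grep exits 2 and the rm calls are no-ops. A rough sketch of that check-and-remove loop (it would need root to act on the real paths):

    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    func main() {
    	endpoint := "https://control-plane.minikube.internal:8443"
    	files := []string{
    		"/etc/kubernetes/admin.conf",
    		"/etc/kubernetes/kubelet.conf",
    		"/etc/kubernetes/controller-manager.conf",
    		"/etc/kubernetes/scheduler.conf",
    	}
    	for _, f := range files {
    		data, err := os.ReadFile(f)
    		if err != nil || !strings.Contains(string(data), endpoint) {
    			// stale or missing: remove it so `kubeadm init phase kubeconfig`
    			// regenerates it against the expected endpoint
    			_ = os.Remove(f)
    			fmt.Println("removed (or absent):", f)
    		}
    	}
    }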
	I0914 18:08:53.771448   62996 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0914 18:08:53.781385   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 18:08:53.911483   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 18:08:55.084673   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:08:57.582709   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:08:59.583340   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:08:57.158301   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | domain default-k8s-diff-port-243449 has defined MAC address 52:54:00:6e:0b:a7 in network mk-default-k8s-diff-port-243449
	I0914 18:08:57.158679   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | unable to find current IP address of domain default-k8s-diff-port-243449 in network mk-default-k8s-diff-port-243449
	I0914 18:08:57.158705   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | I0914 18:08:57.158640   64066 retry.go:31] will retry after 2.492994369s: waiting for machine to come up
	I0914 18:08:59.654137   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | domain default-k8s-diff-port-243449 has defined MAC address 52:54:00:6e:0b:a7 in network mk-default-k8s-diff-port-243449
	I0914 18:08:59.654550   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | unable to find current IP address of domain default-k8s-diff-port-243449 in network mk-default-k8s-diff-port-243449
	I0914 18:08:59.654585   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | I0914 18:08:59.654496   64066 retry.go:31] will retry after 3.393327124s: waiting for machine to come up
	I0914 18:08:55.409007   62996 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.497486764s)
	I0914 18:08:55.409041   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0914 18:08:55.640260   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 18:08:55.761785   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
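Rather than a full `kubeadm init`, the restart path replays individual init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the generated /var/tmp/minikube/kubeadm.yaml. A sketch of driving those phases in order (the real runs go over SSH with the pinned binary path shown in the log):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func main() {
    	kubeadm := "/var/lib/minikube/binaries/v1.20.0/kubeadm" // per-version binary, as in the logged PATH
    	cfg := "/var/tmp/minikube/kubeadm.yaml"
    	// Individual phases instead of a full `kubeadm init`, matching the log.
    	phases := []string{"certs all", "kubeconfig all", "kubelet-start", "control-plane all", "etcd local"}
    	for _, phase := range phases {
    		args := append([]string{"init", "phase"}, strings.Fields(phase)...)
    		args = append(args, "--config", cfg)
    		out, err := exec.Command(kubeadm, args...).CombinedOutput()
    		fmt.Printf("kubeadm init phase %s: err=%v\n%s", phase, err, out)
    		if err != nil {
    			return // later phases depend on the earlier ones
    		}
    	}
    }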
	I0914 18:08:55.873260   62996 api_server.go:52] waiting for apiserver process to appear ...
	I0914 18:08:55.873350   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:08:56.373512   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:08:56.874440   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:08:57.374464   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:08:57.874099   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:08:58.374014   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:08:58.873763   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:08:59.373845   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:08:59.873929   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:04.466791   62207 start.go:364] duration metric: took 54.917996405s to acquireMachinesLock for "no-preload-168587"
	I0914 18:09:04.466845   62207 start.go:96] Skipping create...Using existing machine configuration
	I0914 18:09:04.466863   62207 fix.go:54] fixHost starting: 
	I0914 18:09:04.467265   62207 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 18:09:04.467303   62207 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 18:09:04.485295   62207 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41403
	I0914 18:09:04.485680   62207 main.go:141] libmachine: () Calling .GetVersion
	I0914 18:09:04.486195   62207 main.go:141] libmachine: Using API Version  1
	I0914 18:09:04.486221   62207 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 18:09:04.486625   62207 main.go:141] libmachine: () Calling .GetMachineName
	I0914 18:09:04.486825   62207 main.go:141] libmachine: (no-preload-168587) Calling .DriverName
	I0914 18:09:04.486985   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetState
	I0914 18:09:04.488546   62207 fix.go:112] recreateIfNeeded on no-preload-168587: state=Stopped err=<nil>
	I0914 18:09:04.488584   62207 main.go:141] libmachine: (no-preload-168587) Calling .DriverName
	W0914 18:09:04.488749   62207 fix.go:138] unexpected machine state, will restart: <nil>
	I0914 18:09:04.491638   62207 out.go:177] * Restarting existing kvm2 VM for "no-preload-168587" ...
	I0914 18:09:02.082684   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:09:04.582135   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:09:03.051442   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | domain default-k8s-diff-port-243449 has defined MAC address 52:54:00:6e:0b:a7 in network mk-default-k8s-diff-port-243449
	I0914 18:09:03.051882   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | domain default-k8s-diff-port-243449 has current primary IP address 192.168.61.38 and MAC address 52:54:00:6e:0b:a7 in network mk-default-k8s-diff-port-243449
	I0914 18:09:03.051904   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Found IP for machine: 192.168.61.38
	I0914 18:09:03.051946   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Reserving static IP address...
	I0914 18:09:03.052245   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-243449", mac: "52:54:00:6e:0b:a7", ip: "192.168.61.38"} in network mk-default-k8s-diff-port-243449: {Iface:virbr4 ExpiryTime:2024-09-14 19:08:56 +0000 UTC Type:0 Mac:52:54:00:6e:0b:a7 Iaid: IPaddr:192.168.61.38 Prefix:24 Hostname:default-k8s-diff-port-243449 Clientid:01:52:54:00:6e:0b:a7}
	I0914 18:09:03.052269   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | skip adding static IP to network mk-default-k8s-diff-port-243449 - found existing host DHCP lease matching {name: "default-k8s-diff-port-243449", mac: "52:54:00:6e:0b:a7", ip: "192.168.61.38"}
	I0914 18:09:03.052280   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Reserved static IP address: 192.168.61.38
	I0914 18:09:03.052289   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Waiting for SSH to be available...
	I0914 18:09:03.052306   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | Getting to WaitForSSH function...
	I0914 18:09:03.054154   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | domain default-k8s-diff-port-243449 has defined MAC address 52:54:00:6e:0b:a7 in network mk-default-k8s-diff-port-243449
	I0914 18:09:03.054555   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:0b:a7", ip: ""} in network mk-default-k8s-diff-port-243449: {Iface:virbr4 ExpiryTime:2024-09-14 19:08:56 +0000 UTC Type:0 Mac:52:54:00:6e:0b:a7 Iaid: IPaddr:192.168.61.38 Prefix:24 Hostname:default-k8s-diff-port-243449 Clientid:01:52:54:00:6e:0b:a7}
	I0914 18:09:03.054596   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | domain default-k8s-diff-port-243449 has defined IP address 192.168.61.38 and MAC address 52:54:00:6e:0b:a7 in network mk-default-k8s-diff-port-243449
	I0914 18:09:03.054745   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | Using SSH client type: external
	I0914 18:09:03.054777   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | Using SSH private key: /home/jenkins/minikube-integration/19643-8806/.minikube/machines/default-k8s-diff-port-243449/id_rsa (-rw-------)
	I0914 18:09:03.054813   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.38 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19643-8806/.minikube/machines/default-k8s-diff-port-243449/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0914 18:09:03.054828   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | About to run SSH command:
	I0914 18:09:03.054841   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | exit 0
	I0914 18:09:03.178065   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | SSH cmd err, output: <nil>: 
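WaitForSSH above shells out to the external ssh client with StrictHostKeyChecking=no and the machine's private key, and simply runs `exit 0` until it succeeds. An equivalent probe sketched with golang.org/x/crypto/ssh (address, user, and key path taken from the log; this is not the libmachine code):

    package main

    import (
    	"fmt"
    	"os"
    	"time"

    	"golang.org/x/crypto/ssh"
    )

    // sshReady dials the node and runs `exit 0`, the same liveness probe as the
    // WaitForSSH lines above.
    func sshReady(addr, user, keyPath string) error {
    	key, err := os.ReadFile(keyPath)
    	if err != nil {
    		return err
    	}
    	signer, err := ssh.ParsePrivateKey(key)
    	if err != nil {
    		return err
    	}
    	cfg := &ssh.ClientConfig{
    		User:            user,
    		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
    		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // matches StrictHostKeyChecking=no
    		Timeout:         10 * time.Second,
    	}
    	client, err := ssh.Dial("tcp", addr, cfg)
    	if err != nil {
    		return err
    	}
    	defer client.Close()
    	sess, err := client.NewSession()
    	if err != nil {
    		return err
    	}
    	defer sess.Close()
    	return sess.Run("exit 0")
    }

    func main() {
    	err := sshReady("192.168.61.38:22", "docker",
    		"/home/jenkins/minikube-integration/19643-8806/.minikube/machines/default-k8s-diff-port-243449/id_rsa")
    	fmt.Println("ssh ready:", err == nil, err)
    }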
	I0914 18:09:03.178576   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetConfigRaw
	I0914 18:09:03.179198   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetIP
	I0914 18:09:03.181829   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | domain default-k8s-diff-port-243449 has defined MAC address 52:54:00:6e:0b:a7 in network mk-default-k8s-diff-port-243449
	I0914 18:09:03.182220   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:0b:a7", ip: ""} in network mk-default-k8s-diff-port-243449: {Iface:virbr4 ExpiryTime:2024-09-14 19:08:56 +0000 UTC Type:0 Mac:52:54:00:6e:0b:a7 Iaid: IPaddr:192.168.61.38 Prefix:24 Hostname:default-k8s-diff-port-243449 Clientid:01:52:54:00:6e:0b:a7}
	I0914 18:09:03.182242   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | domain default-k8s-diff-port-243449 has defined IP address 192.168.61.38 and MAC address 52:54:00:6e:0b:a7 in network mk-default-k8s-diff-port-243449
	I0914 18:09:03.182541   63448 profile.go:143] Saving config to /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/default-k8s-diff-port-243449/config.json ...
	I0914 18:09:03.182773   63448 machine.go:93] provisionDockerMachine start ...
	I0914 18:09:03.182796   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .DriverName
	I0914 18:09:03.182992   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetSSHHostname
	I0914 18:09:03.185635   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | domain default-k8s-diff-port-243449 has defined MAC address 52:54:00:6e:0b:a7 in network mk-default-k8s-diff-port-243449
	I0914 18:09:03.186027   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:0b:a7", ip: ""} in network mk-default-k8s-diff-port-243449: {Iface:virbr4 ExpiryTime:2024-09-14 19:08:56 +0000 UTC Type:0 Mac:52:54:00:6e:0b:a7 Iaid: IPaddr:192.168.61.38 Prefix:24 Hostname:default-k8s-diff-port-243449 Clientid:01:52:54:00:6e:0b:a7}
	I0914 18:09:03.186056   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | domain default-k8s-diff-port-243449 has defined IP address 192.168.61.38 and MAC address 52:54:00:6e:0b:a7 in network mk-default-k8s-diff-port-243449
	I0914 18:09:03.186213   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetSSHPort
	I0914 18:09:03.186416   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetSSHKeyPath
	I0914 18:09:03.186602   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetSSHKeyPath
	I0914 18:09:03.186756   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetSSHUsername
	I0914 18:09:03.186882   63448 main.go:141] libmachine: Using SSH client type: native
	I0914 18:09:03.187123   63448 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.61.38 22 <nil> <nil>}
	I0914 18:09:03.187137   63448 main.go:141] libmachine: About to run SSH command:
	hostname
	I0914 18:09:03.290288   63448 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0914 18:09:03.290332   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetMachineName
	I0914 18:09:03.290592   63448 buildroot.go:166] provisioning hostname "default-k8s-diff-port-243449"
	I0914 18:09:03.290620   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetMachineName
	I0914 18:09:03.290779   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetSSHHostname
	I0914 18:09:03.293587   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | domain default-k8s-diff-port-243449 has defined MAC address 52:54:00:6e:0b:a7 in network mk-default-k8s-diff-port-243449
	I0914 18:09:03.293981   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:0b:a7", ip: ""} in network mk-default-k8s-diff-port-243449: {Iface:virbr4 ExpiryTime:2024-09-14 19:08:56 +0000 UTC Type:0 Mac:52:54:00:6e:0b:a7 Iaid: IPaddr:192.168.61.38 Prefix:24 Hostname:default-k8s-diff-port-243449 Clientid:01:52:54:00:6e:0b:a7}
	I0914 18:09:03.294012   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | domain default-k8s-diff-port-243449 has defined IP address 192.168.61.38 and MAC address 52:54:00:6e:0b:a7 in network mk-default-k8s-diff-port-243449
	I0914 18:09:03.294120   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetSSHPort
	I0914 18:09:03.294307   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetSSHKeyPath
	I0914 18:09:03.294450   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetSSHKeyPath
	I0914 18:09:03.294576   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetSSHUsername
	I0914 18:09:03.294708   63448 main.go:141] libmachine: Using SSH client type: native
	I0914 18:09:03.294926   63448 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.61.38 22 <nil> <nil>}
	I0914 18:09:03.294944   63448 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-243449 && echo "default-k8s-diff-port-243449" | sudo tee /etc/hostname
	I0914 18:09:03.418148   63448 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-243449
	
	I0914 18:09:03.418198   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetSSHHostname
	I0914 18:09:03.421059   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | domain default-k8s-diff-port-243449 has defined MAC address 52:54:00:6e:0b:a7 in network mk-default-k8s-diff-port-243449
	I0914 18:09:03.421501   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:0b:a7", ip: ""} in network mk-default-k8s-diff-port-243449: {Iface:virbr4 ExpiryTime:2024-09-14 19:08:56 +0000 UTC Type:0 Mac:52:54:00:6e:0b:a7 Iaid: IPaddr:192.168.61.38 Prefix:24 Hostname:default-k8s-diff-port-243449 Clientid:01:52:54:00:6e:0b:a7}
	I0914 18:09:03.421536   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | domain default-k8s-diff-port-243449 has defined IP address 192.168.61.38 and MAC address 52:54:00:6e:0b:a7 in network mk-default-k8s-diff-port-243449
	I0914 18:09:03.421733   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetSSHPort
	I0914 18:09:03.421925   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetSSHKeyPath
	I0914 18:09:03.422075   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetSSHKeyPath
	I0914 18:09:03.422243   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetSSHUsername
	I0914 18:09:03.422394   63448 main.go:141] libmachine: Using SSH client type: native
	I0914 18:09:03.422581   63448 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.61.38 22 <nil> <nil>}
	I0914 18:09:03.422609   63448 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-243449' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-243449/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-243449' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0914 18:09:03.538785   63448 main.go:141] libmachine: SSH cmd err, output: <nil>: 
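The provisioning step above reduces to a short shell sequence run over SSH: set the transient and persistent hostname, then make sure /etc/hosts resolves it via the 127.0.1.1 convention. A minimal sketch of that sequence (the hostname value is the one in the log; everything else is just the logged commands re-quoted):

  NEW_HOSTNAME=default-k8s-diff-port-243449
  # transient + persistent hostname
  sudo hostname "$NEW_HOSTNAME" && echo "$NEW_HOSTNAME" | sudo tee /etc/hostname
  # keep /etc/hosts in sync so the node can resolve its own name
  if ! grep -q "\s$NEW_HOSTNAME$" /etc/hosts; then
    if grep -q '^127.0.1.1\s' /etc/hosts; then
      sudo sed -i "s/^127.0.1.1\s.*/127.0.1.1 $NEW_HOSTNAME/" /etc/hosts
    else
      echo "127.0.1.1 $NEW_HOSTNAME" | sudo tee -a /etc/hosts
    fi
  fi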
	I0914 18:09:03.538812   63448 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19643-8806/.minikube CaCertPath:/home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19643-8806/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19643-8806/.minikube}
	I0914 18:09:03.538851   63448 buildroot.go:174] setting up certificates
	I0914 18:09:03.538866   63448 provision.go:84] configureAuth start
	I0914 18:09:03.538875   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetMachineName
	I0914 18:09:03.539230   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetIP
	I0914 18:09:03.541811   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | domain default-k8s-diff-port-243449 has defined MAC address 52:54:00:6e:0b:a7 in network mk-default-k8s-diff-port-243449
	I0914 18:09:03.542129   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:0b:a7", ip: ""} in network mk-default-k8s-diff-port-243449: {Iface:virbr4 ExpiryTime:2024-09-14 19:08:56 +0000 UTC Type:0 Mac:52:54:00:6e:0b:a7 Iaid: IPaddr:192.168.61.38 Prefix:24 Hostname:default-k8s-diff-port-243449 Clientid:01:52:54:00:6e:0b:a7}
	I0914 18:09:03.542183   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | domain default-k8s-diff-port-243449 has defined IP address 192.168.61.38 and MAC address 52:54:00:6e:0b:a7 in network mk-default-k8s-diff-port-243449
	I0914 18:09:03.542393   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetSSHHostname
	I0914 18:09:03.544635   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | domain default-k8s-diff-port-243449 has defined MAC address 52:54:00:6e:0b:a7 in network mk-default-k8s-diff-port-243449
	I0914 18:09:03.544933   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:0b:a7", ip: ""} in network mk-default-k8s-diff-port-243449: {Iface:virbr4 ExpiryTime:2024-09-14 19:08:56 +0000 UTC Type:0 Mac:52:54:00:6e:0b:a7 Iaid: IPaddr:192.168.61.38 Prefix:24 Hostname:default-k8s-diff-port-243449 Clientid:01:52:54:00:6e:0b:a7}
	I0914 18:09:03.544969   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | domain default-k8s-diff-port-243449 has defined IP address 192.168.61.38 and MAC address 52:54:00:6e:0b:a7 in network mk-default-k8s-diff-port-243449
	I0914 18:09:03.545099   63448 provision.go:143] copyHostCerts
	I0914 18:09:03.545156   63448 exec_runner.go:144] found /home/jenkins/minikube-integration/19643-8806/.minikube/ca.pem, removing ...
	I0914 18:09:03.545167   63448 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19643-8806/.minikube/ca.pem
	I0914 18:09:03.545239   63448 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19643-8806/.minikube/ca.pem (1082 bytes)
	I0914 18:09:03.545362   63448 exec_runner.go:144] found /home/jenkins/minikube-integration/19643-8806/.minikube/cert.pem, removing ...
	I0914 18:09:03.545374   63448 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19643-8806/.minikube/cert.pem
	I0914 18:09:03.545410   63448 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19643-8806/.minikube/cert.pem (1123 bytes)
	I0914 18:09:03.545489   63448 exec_runner.go:144] found /home/jenkins/minikube-integration/19643-8806/.minikube/key.pem, removing ...
	I0914 18:09:03.545498   63448 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19643-8806/.minikube/key.pem
	I0914 18:09:03.545533   63448 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19643-8806/.minikube/key.pem (1675 bytes)
	I0914 18:09:03.545619   63448 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19643-8806/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-243449 san=[127.0.0.1 192.168.61.38 default-k8s-diff-port-243449 localhost minikube]
	I0914 18:09:03.858341   63448 provision.go:177] copyRemoteCerts
	I0914 18:09:03.858415   63448 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0914 18:09:03.858453   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetSSHHostname
	I0914 18:09:03.861376   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | domain default-k8s-diff-port-243449 has defined MAC address 52:54:00:6e:0b:a7 in network mk-default-k8s-diff-port-243449
	I0914 18:09:03.861659   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:0b:a7", ip: ""} in network mk-default-k8s-diff-port-243449: {Iface:virbr4 ExpiryTime:2024-09-14 19:08:56 +0000 UTC Type:0 Mac:52:54:00:6e:0b:a7 Iaid: IPaddr:192.168.61.38 Prefix:24 Hostname:default-k8s-diff-port-243449 Clientid:01:52:54:00:6e:0b:a7}
	I0914 18:09:03.861687   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | domain default-k8s-diff-port-243449 has defined IP address 192.168.61.38 and MAC address 52:54:00:6e:0b:a7 in network mk-default-k8s-diff-port-243449
	I0914 18:09:03.861863   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetSSHPort
	I0914 18:09:03.862062   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetSSHKeyPath
	I0914 18:09:03.862231   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetSSHUsername
	I0914 18:09:03.862359   63448 sshutil.go:53] new ssh client: &{IP:192.168.61.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/default-k8s-diff-port-243449/id_rsa Username:docker}
	I0914 18:09:03.944043   63448 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0914 18:09:03.968175   63448 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0914 18:09:03.990621   63448 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0914 18:09:04.012163   63448 provision.go:87] duration metric: took 473.28607ms to configureAuth
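copyRemoteCerts has just pushed the CA and the freshly generated server key pair into /etc/docker on the guest. A quick way to confirm on the guest that the server cert chains to that CA and carries the expected SANs (a hypothetical check, not something the test itself runs):

  openssl verify -CAfile /etc/docker/ca.pem /etc/docker/server.pem
  openssl x509 -noout -text -in /etc/docker/server.pem | grep -A1 'Subject Alternative Name'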
	I0914 18:09:04.012190   63448 buildroot.go:189] setting minikube options for container-runtime
	I0914 18:09:04.012364   63448 config.go:182] Loaded profile config "default-k8s-diff-port-243449": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0914 18:09:04.012431   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetSSHHostname
	I0914 18:09:04.015021   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | domain default-k8s-diff-port-243449 has defined MAC address 52:54:00:6e:0b:a7 in network mk-default-k8s-diff-port-243449
	I0914 18:09:04.015505   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:0b:a7", ip: ""} in network mk-default-k8s-diff-port-243449: {Iface:virbr4 ExpiryTime:2024-09-14 19:08:56 +0000 UTC Type:0 Mac:52:54:00:6e:0b:a7 Iaid: IPaddr:192.168.61.38 Prefix:24 Hostname:default-k8s-diff-port-243449 Clientid:01:52:54:00:6e:0b:a7}
	I0914 18:09:04.015553   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | domain default-k8s-diff-port-243449 has defined IP address 192.168.61.38 and MAC address 52:54:00:6e:0b:a7 in network mk-default-k8s-diff-port-243449
	I0914 18:09:04.015693   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetSSHPort
	I0914 18:09:04.015866   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetSSHKeyPath
	I0914 18:09:04.016035   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetSSHKeyPath
	I0914 18:09:04.016157   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetSSHUsername
	I0914 18:09:04.016277   63448 main.go:141] libmachine: Using SSH client type: native
	I0914 18:09:04.016479   63448 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.61.38 22 <nil> <nil>}
	I0914 18:09:04.016511   63448 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0914 18:09:04.234672   63448 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0914 18:09:04.234697   63448 machine.go:96] duration metric: took 1.051909541s to provisionDockerMachine
	I0914 18:09:04.234710   63448 start.go:293] postStartSetup for "default-k8s-diff-port-243449" (driver="kvm2")
	I0914 18:09:04.234721   63448 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0914 18:09:04.234766   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .DriverName
	I0914 18:09:04.235108   63448 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0914 18:09:04.235139   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetSSHHostname
	I0914 18:09:04.237583   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | domain default-k8s-diff-port-243449 has defined MAC address 52:54:00:6e:0b:a7 in network mk-default-k8s-diff-port-243449
	I0914 18:09:04.237964   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:0b:a7", ip: ""} in network mk-default-k8s-diff-port-243449: {Iface:virbr4 ExpiryTime:2024-09-14 19:08:56 +0000 UTC Type:0 Mac:52:54:00:6e:0b:a7 Iaid: IPaddr:192.168.61.38 Prefix:24 Hostname:default-k8s-diff-port-243449 Clientid:01:52:54:00:6e:0b:a7}
	I0914 18:09:04.237997   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | domain default-k8s-diff-port-243449 has defined IP address 192.168.61.38 and MAC address 52:54:00:6e:0b:a7 in network mk-default-k8s-diff-port-243449
	I0914 18:09:04.238237   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetSSHPort
	I0914 18:09:04.238491   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetSSHKeyPath
	I0914 18:09:04.238667   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetSSHUsername
	I0914 18:09:04.238798   63448 sshutil.go:53] new ssh client: &{IP:192.168.61.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/default-k8s-diff-port-243449/id_rsa Username:docker}
	I0914 18:09:04.320785   63448 ssh_runner.go:195] Run: cat /etc/os-release
	I0914 18:09:04.324837   63448 info.go:137] Remote host: Buildroot 2023.02.9
	I0914 18:09:04.324863   63448 filesync.go:126] Scanning /home/jenkins/minikube-integration/19643-8806/.minikube/addons for local assets ...
	I0914 18:09:04.324920   63448 filesync.go:126] Scanning /home/jenkins/minikube-integration/19643-8806/.minikube/files for local assets ...
	I0914 18:09:04.325001   63448 filesync.go:149] local asset: /home/jenkins/minikube-integration/19643-8806/.minikube/files/etc/ssl/certs/160162.pem -> 160162.pem in /etc/ssl/certs
	I0914 18:09:04.325091   63448 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0914 18:09:04.334235   63448 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/files/etc/ssl/certs/160162.pem --> /etc/ssl/certs/160162.pem (1708 bytes)
	I0914 18:09:04.357310   63448 start.go:296] duration metric: took 122.582935ms for postStartSetup
	I0914 18:09:04.357352   63448 fix.go:56] duration metric: took 19.25422843s for fixHost
	I0914 18:09:04.357373   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetSSHHostname
	I0914 18:09:04.360190   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | domain default-k8s-diff-port-243449 has defined MAC address 52:54:00:6e:0b:a7 in network mk-default-k8s-diff-port-243449
	I0914 18:09:04.360574   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:0b:a7", ip: ""} in network mk-default-k8s-diff-port-243449: {Iface:virbr4 ExpiryTime:2024-09-14 19:08:56 +0000 UTC Type:0 Mac:52:54:00:6e:0b:a7 Iaid: IPaddr:192.168.61.38 Prefix:24 Hostname:default-k8s-diff-port-243449 Clientid:01:52:54:00:6e:0b:a7}
	I0914 18:09:04.360601   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | domain default-k8s-diff-port-243449 has defined IP address 192.168.61.38 and MAC address 52:54:00:6e:0b:a7 in network mk-default-k8s-diff-port-243449
	I0914 18:09:04.360774   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetSSHPort
	I0914 18:09:04.360973   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetSSHKeyPath
	I0914 18:09:04.361163   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetSSHKeyPath
	I0914 18:09:04.361291   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetSSHUsername
	I0914 18:09:04.361479   63448 main.go:141] libmachine: Using SSH client type: native
	I0914 18:09:04.361658   63448 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.61.38 22 <nil> <nil>}
	I0914 18:09:04.361667   63448 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0914 18:09:04.466610   63448 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726337344.436836920
	
	I0914 18:09:04.466654   63448 fix.go:216] guest clock: 1726337344.436836920
	I0914 18:09:04.466665   63448 fix.go:229] Guest: 2024-09-14 18:09:04.43683692 +0000 UTC Remote: 2024-09-14 18:09:04.357356624 +0000 UTC m=+144.091633354 (delta=79.480296ms)
	I0914 18:09:04.466691   63448 fix.go:200] guest clock delta is within tolerance: 79.480296ms
	I0914 18:09:04.466702   63448 start.go:83] releasing machines lock for "default-k8s-diff-port-243449", held for 19.363604776s
	I0914 18:09:04.466737   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .DriverName
	I0914 18:09:04.466992   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetIP
	I0914 18:09:04.469873   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | domain default-k8s-diff-port-243449 has defined MAC address 52:54:00:6e:0b:a7 in network mk-default-k8s-diff-port-243449
	I0914 18:09:04.470148   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:0b:a7", ip: ""} in network mk-default-k8s-diff-port-243449: {Iface:virbr4 ExpiryTime:2024-09-14 19:08:56 +0000 UTC Type:0 Mac:52:54:00:6e:0b:a7 Iaid: IPaddr:192.168.61.38 Prefix:24 Hostname:default-k8s-diff-port-243449 Clientid:01:52:54:00:6e:0b:a7}
	I0914 18:09:04.470198   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | domain default-k8s-diff-port-243449 has defined IP address 192.168.61.38 and MAC address 52:54:00:6e:0b:a7 in network mk-default-k8s-diff-port-243449
	I0914 18:09:04.470364   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .DriverName
	I0914 18:09:04.470877   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .DriverName
	I0914 18:09:04.471098   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .DriverName
	I0914 18:09:04.471215   63448 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0914 18:09:04.471270   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetSSHHostname
	I0914 18:09:04.471322   63448 ssh_runner.go:195] Run: cat /version.json
	I0914 18:09:04.471346   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetSSHHostname
	I0914 18:09:04.474023   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | domain default-k8s-diff-port-243449 has defined MAC address 52:54:00:6e:0b:a7 in network mk-default-k8s-diff-port-243449
	I0914 18:09:04.474144   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | domain default-k8s-diff-port-243449 has defined MAC address 52:54:00:6e:0b:a7 in network mk-default-k8s-diff-port-243449
	I0914 18:09:04.474345   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:0b:a7", ip: ""} in network mk-default-k8s-diff-port-243449: {Iface:virbr4 ExpiryTime:2024-09-14 19:08:56 +0000 UTC Type:0 Mac:52:54:00:6e:0b:a7 Iaid: IPaddr:192.168.61.38 Prefix:24 Hostname:default-k8s-diff-port-243449 Clientid:01:52:54:00:6e:0b:a7}
	I0914 18:09:04.474374   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | domain default-k8s-diff-port-243449 has defined IP address 192.168.61.38 and MAC address 52:54:00:6e:0b:a7 in network mk-default-k8s-diff-port-243449
	I0914 18:09:04.474471   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetSSHPort
	I0914 18:09:04.474616   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:0b:a7", ip: ""} in network mk-default-k8s-diff-port-243449: {Iface:virbr4 ExpiryTime:2024-09-14 19:08:56 +0000 UTC Type:0 Mac:52:54:00:6e:0b:a7 Iaid: IPaddr:192.168.61.38 Prefix:24 Hostname:default-k8s-diff-port-243449 Clientid:01:52:54:00:6e:0b:a7}
	I0914 18:09:04.474631   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetSSHKeyPath
	I0914 18:09:04.474637   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | domain default-k8s-diff-port-243449 has defined IP address 192.168.61.38 and MAC address 52:54:00:6e:0b:a7 in network mk-default-k8s-diff-port-243449
	I0914 18:09:04.474812   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetSSHUsername
	I0914 18:09:04.474816   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetSSHPort
	I0914 18:09:04.474996   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetSSHKeyPath
	I0914 18:09:04.474987   63448 sshutil.go:53] new ssh client: &{IP:192.168.61.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/default-k8s-diff-port-243449/id_rsa Username:docker}
	I0914 18:09:04.475136   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetSSHUsername
	I0914 18:09:04.475274   63448 sshutil.go:53] new ssh client: &{IP:192.168.61.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/default-k8s-diff-port-243449/id_rsa Username:docker}
	I0914 18:09:04.587233   63448 ssh_runner.go:195] Run: systemctl --version
	I0914 18:09:04.593065   63448 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0914 18:09:04.738721   63448 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0914 18:09:04.745472   63448 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0914 18:09:04.745539   63448 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0914 18:09:04.765742   63448 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
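The find invocation above sidelines any pre-existing bridge/podman CNI configs so they cannot shadow the CNI that minikube configures later. Roughly the same operation with explicit quoting (a sketch; the .mk_disabled suffix is the convention the log shows):

  sudo find /etc/cni/net.d -maxdepth 1 -type f \
    \( \( -name '*bridge*' -o -name '*podman*' \) -a -not -name '*.mk_disabled' \) \
    -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;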
	I0914 18:09:04.765806   63448 start.go:495] detecting cgroup driver to use...
	I0914 18:09:04.765909   63448 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0914 18:09:04.782234   63448 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0914 18:09:04.797259   63448 docker.go:217] disabling cri-docker service (if available) ...
	I0914 18:09:04.797322   63448 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0914 18:09:04.811794   63448 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0914 18:09:04.826487   63448 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0914 18:09:04.953417   63448 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0914 18:09:05.102410   63448 docker.go:233] disabling docker service ...
	I0914 18:09:05.102491   63448 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0914 18:09:05.117443   63448 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0914 18:09:05.131147   63448 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0914 18:09:05.278483   63448 ssh_runner.go:195] Run: sudo systemctl mask docker.service
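Because CRI-O is the container runtime for this profile, both dockerd and the cri-dockerd shim are stopped and masked so neither can claim the CRI socket after a reboot. Condensed, the systemctl calls above amount to this sketch:

  sudo systemctl stop -f cri-docker.socket cri-docker.service docker.socket docker.service
  sudo systemctl disable cri-docker.socket docker.socket
  sudo systemctl mask cri-docker.service docker.service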
	I0914 18:09:00.373968   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:00.874316   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:01.373792   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:01.873684   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:02.373524   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:02.874399   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:03.373728   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:03.874267   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:04.374018   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:04.873685   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:05.401195   63448 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0914 18:09:05.415794   63448 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0914 18:09:05.434594   63448 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0914 18:09:05.434660   63448 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 18:09:05.445566   63448 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0914 18:09:05.445643   63448 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 18:09:05.456690   63448 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 18:09:05.468044   63448 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 18:09:05.479719   63448 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0914 18:09:05.491019   63448 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 18:09:05.501739   63448 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 18:09:05.520582   63448 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 18:09:05.531469   63448 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0914 18:09:05.541741   63448 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0914 18:09:05.541809   63448 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0914 18:09:05.561648   63448 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0914 18:09:05.571882   63448 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 18:09:05.706592   63448 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0914 18:09:05.811522   63448 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0914 18:09:05.811599   63448 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0914 18:09:05.816676   63448 start.go:563] Will wait 60s for crictl version
	I0914 18:09:05.816745   63448 ssh_runner.go:195] Run: which crictl
	I0914 18:09:05.820367   63448 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0914 18:09:05.862564   63448 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0914 18:09:05.862637   63448 ssh_runner.go:195] Run: crio --version
	I0914 18:09:05.893106   63448 ssh_runner.go:195] Run: crio --version
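Taken together, the sed edits and the sysctl/modprobe steps above leave CRI-O configured for cgroupfs with the 3.10 pause image and unprivileged low ports, and ensure bridged pod traffic traverses iptables. The net effect, as a sketch (the config fragment reflects the substitutions the log applies to /etc/crio/crio.conf.d/02-crio.conf):

  # expected state of /etc/crio/crio.conf.d/02-crio.conf after the edits:
  #   pause_image = "registry.k8s.io/pause:3.10"
  #   cgroup_manager = "cgroupfs"
  #   conmon_cgroup = "pod"
  #   default_sysctls = [
  #     "net.ipv4.ip_unprivileged_port_start=0",
  #   ]
  sudo modprobe br_netfilter              # needed because bridge-nf-call-iptables was absent
  sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'
  sudo systemctl daemon-reload && sudo systemctl restart crio
  sudo crictl version                     # reports RuntimeName cri-o, RuntimeVersion 1.29.1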
	I0914 18:09:05.927136   63448 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0914 18:09:04.492847   62207 main.go:141] libmachine: (no-preload-168587) Calling .Start
	I0914 18:09:04.493070   62207 main.go:141] libmachine: (no-preload-168587) Ensuring networks are active...
	I0914 18:09:04.493844   62207 main.go:141] libmachine: (no-preload-168587) Ensuring network default is active
	I0914 18:09:04.494193   62207 main.go:141] libmachine: (no-preload-168587) Ensuring network mk-no-preload-168587 is active
	I0914 18:09:04.494614   62207 main.go:141] libmachine: (no-preload-168587) Getting domain xml...
	I0914 18:09:04.495434   62207 main.go:141] libmachine: (no-preload-168587) Creating domain...
	I0914 18:09:05.801470   62207 main.go:141] libmachine: (no-preload-168587) Waiting to get IP...
	I0914 18:09:05.802621   62207 main.go:141] libmachine: (no-preload-168587) DBG | domain no-preload-168587 has defined MAC address 52:54:00:4c:40:8a in network mk-no-preload-168587
	I0914 18:09:05.803215   62207 main.go:141] libmachine: (no-preload-168587) DBG | unable to find current IP address of domain no-preload-168587 in network mk-no-preload-168587
	I0914 18:09:05.803351   62207 main.go:141] libmachine: (no-preload-168587) DBG | I0914 18:09:05.803229   64231 retry.go:31] will retry after 206.528002ms: waiting for machine to come up
	I0914 18:09:06.011556   62207 main.go:141] libmachine: (no-preload-168587) DBG | domain no-preload-168587 has defined MAC address 52:54:00:4c:40:8a in network mk-no-preload-168587
	I0914 18:09:06.012027   62207 main.go:141] libmachine: (no-preload-168587) DBG | unable to find current IP address of domain no-preload-168587 in network mk-no-preload-168587
	I0914 18:09:06.012063   62207 main.go:141] libmachine: (no-preload-168587) DBG | I0914 18:09:06.011977   64231 retry.go:31] will retry after 252.283679ms: waiting for machine to come up
	I0914 18:09:06.266621   62207 main.go:141] libmachine: (no-preload-168587) DBG | domain no-preload-168587 has defined MAC address 52:54:00:4c:40:8a in network mk-no-preload-168587
	I0914 18:09:06.267145   62207 main.go:141] libmachine: (no-preload-168587) DBG | unable to find current IP address of domain no-preload-168587 in network mk-no-preload-168587
	I0914 18:09:06.267178   62207 main.go:141] libmachine: (no-preload-168587) DBG | I0914 18:09:06.267093   64231 retry.go:31] will retry after 376.426781ms: waiting for machine to come up
	I0914 18:09:06.644639   62207 main.go:141] libmachine: (no-preload-168587) DBG | domain no-preload-168587 has defined MAC address 52:54:00:4c:40:8a in network mk-no-preload-168587
	I0914 18:09:06.645212   62207 main.go:141] libmachine: (no-preload-168587) DBG | unable to find current IP address of domain no-preload-168587 in network mk-no-preload-168587
	I0914 18:09:06.645245   62207 main.go:141] libmachine: (no-preload-168587) DBG | I0914 18:09:06.645161   64231 retry.go:31] will retry after 518.904946ms: waiting for machine to come up
	I0914 18:09:06.584604   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:09:09.085179   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:09:05.928171   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetIP
	I0914 18:09:05.931131   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | domain default-k8s-diff-port-243449 has defined MAC address 52:54:00:6e:0b:a7 in network mk-default-k8s-diff-port-243449
	I0914 18:09:05.931584   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:0b:a7", ip: ""} in network mk-default-k8s-diff-port-243449: {Iface:virbr4 ExpiryTime:2024-09-14 19:08:56 +0000 UTC Type:0 Mac:52:54:00:6e:0b:a7 Iaid: IPaddr:192.168.61.38 Prefix:24 Hostname:default-k8s-diff-port-243449 Clientid:01:52:54:00:6e:0b:a7}
	I0914 18:09:05.931662   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | domain default-k8s-diff-port-243449 has defined IP address 192.168.61.38 and MAC address 52:54:00:6e:0b:a7 in network mk-default-k8s-diff-port-243449
	I0914 18:09:05.931826   63448 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0914 18:09:05.935729   63448 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
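The one-liner above rewrites /etc/hosts so host.minikube.internal resolves to the host side of the VM network (192.168.61.1 here). Unrolled for readability (same commands, just split across lines):

  { grep -v $'\thost.minikube.internal$' /etc/hosts
    echo $'192.168.61.1\thost.minikube.internal'
  } > /tmp/h.$$
  sudo cp /tmp/h.$$ /etc/hosts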
	I0914 18:09:05.947741   63448 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-243449 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19643/minikube-v1.34.0-1726281733-19643-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726281268-19643@sha256:cce8be0c1ac4e3d852132008ef1cc1dcf5b79f708d025db83f146ae65db32e8e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
sVersion:v1.31.1 ClusterName:default-k8s-diff-port-243449 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.38 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpirati
on:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0914 18:09:05.947872   63448 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0914 18:09:05.947935   63448 ssh_runner.go:195] Run: sudo crictl images --output json
	I0914 18:09:05.984371   63448 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0914 18:09:05.984473   63448 ssh_runner.go:195] Run: which lz4
	I0914 18:09:05.988311   63448 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0914 18:09:05.992088   63448 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0914 18:09:05.992123   63448 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I0914 18:09:07.311157   63448 crio.go:462] duration metric: took 1.322885925s to copy over tarball
	I0914 18:09:07.311297   63448 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0914 18:09:09.472639   63448 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.161311106s)
	I0914 18:09:09.472663   63448 crio.go:469] duration metric: took 2.161473132s to extract the tarball
	I0914 18:09:09.472670   63448 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0914 18:09:09.508740   63448 ssh_runner.go:195] Run: sudo crictl images --output json
	I0914 18:09:09.554508   63448 crio.go:514] all images are preloaded for cri-o runtime.
	I0914 18:09:09.554533   63448 cache_images.go:84] Images are preloaded, skipping loading
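The preload path above avoids pulling every image over the network: the lz4 tarball of container images is copied into the guest and unpacked straight into /var, after which crictl sees everything CRI-O needs. A condensed sketch of the guest-side steps the log shows:

  # on the guest, after the tarball has been copied to /preloaded.tar.lz4:
  sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
  sudo rm /preloaded.tar.lz4
  sudo crictl images --output json   # should now list registry.k8s.io/kube-apiserver:v1.31.1 and friends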
	I0914 18:09:09.554548   63448 kubeadm.go:934] updating node { 192.168.61.38 8444 v1.31.1 crio true true} ...
	I0914 18:09:09.554657   63448 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-243449 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.38
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-243449 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0914 18:09:09.554722   63448 ssh_runner.go:195] Run: crio config
	I0914 18:09:09.603693   63448 cni.go:84] Creating CNI manager for ""
	I0914 18:09:09.603715   63448 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0914 18:09:09.603727   63448 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0914 18:09:09.603745   63448 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.38 APIServerPort:8444 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-243449 NodeName:default-k8s-diff-port-243449 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.38"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.38 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/
ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0914 18:09:09.603879   63448 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.38
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-243449"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.38
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.38"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0914 18:09:09.603935   63448 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0914 18:09:09.613786   63448 binaries.go:44] Found k8s binaries, skipping transfer
	I0914 18:09:09.613863   63448 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0914 18:09:09.623172   63448 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (327 bytes)
	I0914 18:09:09.641437   63448 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0914 18:09:09.657677   63448 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2169 bytes)
	I0914 18:09:09.675042   63448 ssh_runner.go:195] Run: grep 192.168.61.38	control-plane.minikube.internal$ /etc/hosts
	I0914 18:09:09.678885   63448 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.38	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0914 18:09:09.694466   63448 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 18:09:09.823504   63448 ssh_runner.go:195] Run: sudo systemctl start kubelet
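At this point the kubelet drop-in, the systemd unit and the generated kubeadm config have all been written to the guest and kubelet has been restarted. If one wanted to validate that config by hand before kubeadm is actually invoked, a dry run against the same file would do it (hypothetical command, not part of the test run):

  sudo /var/lib/minikube/binaries/v1.31.1/kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run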
	I0914 18:09:09.840638   63448 certs.go:68] Setting up /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/default-k8s-diff-port-243449 for IP: 192.168.61.38
	I0914 18:09:09.840658   63448 certs.go:194] generating shared ca certs ...
	I0914 18:09:09.840677   63448 certs.go:226] acquiring lock for ca certs: {Name:mkb663a3180967f5f94f0c355b2cd55067394331 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 18:09:09.840827   63448 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19643-8806/.minikube/ca.key
	I0914 18:09:09.840869   63448 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19643-8806/.minikube/proxy-client-ca.key
	I0914 18:09:09.840879   63448 certs.go:256] generating profile certs ...
	I0914 18:09:09.841046   63448 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/default-k8s-diff-port-243449/client.key
	I0914 18:09:09.841147   63448 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/default-k8s-diff-port-243449/apiserver.key.68770133
	I0914 18:09:09.841231   63448 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/default-k8s-diff-port-243449/proxy-client.key
	I0914 18:09:09.841342   63448 certs.go:484] found cert: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/16016.pem (1338 bytes)
	W0914 18:09:09.841370   63448 certs.go:480] ignoring /home/jenkins/minikube-integration/19643-8806/.minikube/certs/16016_empty.pem, impossibly tiny 0 bytes
	I0914 18:09:09.841377   63448 certs.go:484] found cert: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca-key.pem (1679 bytes)
	I0914 18:09:09.841398   63448 certs.go:484] found cert: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca.pem (1082 bytes)
	I0914 18:09:09.841425   63448 certs.go:484] found cert: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/cert.pem (1123 bytes)
	I0914 18:09:09.841447   63448 certs.go:484] found cert: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/key.pem (1675 bytes)
	I0914 18:09:09.841499   63448 certs.go:484] found cert: /home/jenkins/minikube-integration/19643-8806/.minikube/files/etc/ssl/certs/160162.pem (1708 bytes)
	I0914 18:09:09.842211   63448 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0914 18:09:09.883406   63448 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0914 18:09:09.914134   63448 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0914 18:09:09.941343   63448 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0914 18:09:09.990870   63448 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/default-k8s-diff-port-243449/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0914 18:09:10.040821   63448 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/default-k8s-diff-port-243449/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0914 18:09:10.065238   63448 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/default-k8s-diff-port-243449/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0914 18:09:10.089901   63448 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/default-k8s-diff-port-243449/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0914 18:09:10.114440   63448 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/files/etc/ssl/certs/160162.pem --> /usr/share/ca-certificates/160162.pem (1708 bytes)
	I0914 18:09:10.138963   63448 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0914 18:09:10.162828   63448 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/certs/16016.pem --> /usr/share/ca-certificates/16016.pem (1338 bytes)
	I0914 18:09:10.185702   63448 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0914 18:09:10.201251   63448 ssh_runner.go:195] Run: openssl version
	I0914 18:09:10.206904   63448 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/160162.pem && ln -fs /usr/share/ca-certificates/160162.pem /etc/ssl/certs/160162.pem"
	I0914 18:09:10.216966   63448 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/160162.pem
	I0914 18:09:10.221437   63448 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 14 17:01 /usr/share/ca-certificates/160162.pem
	I0914 18:09:10.221506   63448 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/160162.pem
	I0914 18:09:10.227033   63448 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/160162.pem /etc/ssl/certs/3ec20f2e.0"
	I0914 18:09:10.237039   63448 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0914 18:09:10.247244   63448 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0914 18:09:10.251434   63448 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 14 16:44 /usr/share/ca-certificates/minikubeCA.pem
	I0914 18:09:10.251494   63448 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0914 18:09:10.257187   63448 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0914 18:09:10.267490   63448 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16016.pem && ln -fs /usr/share/ca-certificates/16016.pem /etc/ssl/certs/16016.pem"
	I0914 18:09:10.277622   63448 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16016.pem
	I0914 18:09:10.281705   63448 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 14 17:01 /usr/share/ca-certificates/16016.pem
	I0914 18:09:10.281789   63448 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16016.pem
	I0914 18:09:10.287013   63448 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16016.pem /etc/ssl/certs/51391683.0"
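The <hash>.0 symlinks being created above follow OpenSSL's c_rehash convention: the link name is the subject-name hash of the certificate, which is how TLS clients look up a CA under /etc/ssl/certs. For the minikube CA that works out to the following (hash value taken from the link name in the log):

  openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # -> b5213941
  ls -l /etc/ssl/certs/b5213941.0                                           # symlink to minikubeCA.pem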
	I0914 18:09:10.296942   63448 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0914 18:09:05.374034   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:05.873992   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:06.374407   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:06.873737   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:07.373665   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:07.874486   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:08.374017   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:08.874365   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:09.374221   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:09.874108   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:07.165576   62207 main.go:141] libmachine: (no-preload-168587) DBG | domain no-preload-168587 has defined MAC address 52:54:00:4c:40:8a in network mk-no-preload-168587
	I0914 18:09:07.166187   62207 main.go:141] libmachine: (no-preload-168587) DBG | unable to find current IP address of domain no-preload-168587 in network mk-no-preload-168587
	I0914 18:09:07.166219   62207 main.go:141] libmachine: (no-preload-168587) DBG | I0914 18:09:07.166125   64231 retry.go:31] will retry after 631.376012ms: waiting for machine to come up
	I0914 18:09:07.798978   62207 main.go:141] libmachine: (no-preload-168587) DBG | domain no-preload-168587 has defined MAC address 52:54:00:4c:40:8a in network mk-no-preload-168587
	I0914 18:09:07.799450   62207 main.go:141] libmachine: (no-preload-168587) DBG | unable to find current IP address of domain no-preload-168587 in network mk-no-preload-168587
	I0914 18:09:07.799478   62207 main.go:141] libmachine: (no-preload-168587) DBG | I0914 18:09:07.799404   64231 retry.go:31] will retry after 668.764795ms: waiting for machine to come up
	I0914 18:09:08.470207   62207 main.go:141] libmachine: (no-preload-168587) DBG | domain no-preload-168587 has defined MAC address 52:54:00:4c:40:8a in network mk-no-preload-168587
	I0914 18:09:08.470613   62207 main.go:141] libmachine: (no-preload-168587) DBG | unable to find current IP address of domain no-preload-168587 in network mk-no-preload-168587
	I0914 18:09:08.470640   62207 main.go:141] libmachine: (no-preload-168587) DBG | I0914 18:09:08.470559   64231 retry.go:31] will retry after 943.595216ms: waiting for machine to come up
	I0914 18:09:09.415274   62207 main.go:141] libmachine: (no-preload-168587) DBG | domain no-preload-168587 has defined MAC address 52:54:00:4c:40:8a in network mk-no-preload-168587
	I0914 18:09:09.415721   62207 main.go:141] libmachine: (no-preload-168587) DBG | unable to find current IP address of domain no-preload-168587 in network mk-no-preload-168587
	I0914 18:09:09.415751   62207 main.go:141] libmachine: (no-preload-168587) DBG | I0914 18:09:09.415675   64231 retry.go:31] will retry after 956.638818ms: waiting for machine to come up
	I0914 18:09:10.374297   62207 main.go:141] libmachine: (no-preload-168587) DBG | domain no-preload-168587 has defined MAC address 52:54:00:4c:40:8a in network mk-no-preload-168587
	I0914 18:09:10.374875   62207 main.go:141] libmachine: (no-preload-168587) DBG | unable to find current IP address of domain no-preload-168587 in network mk-no-preload-168587
	I0914 18:09:10.374902   62207 main.go:141] libmachine: (no-preload-168587) DBG | I0914 18:09:10.374822   64231 retry.go:31] will retry after 1.703915418s: waiting for machine to come up
	I0914 18:09:11.583370   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:09:14.082919   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:09:10.301352   63448 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0914 18:09:10.307276   63448 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0914 18:09:10.313391   63448 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0914 18:09:10.319883   63448 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0914 18:09:10.325671   63448 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0914 18:09:10.331445   63448 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0914 18:09:10.336855   63448 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-243449 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19643/minikube-v1.34.0-1726281733-19643-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726281268-19643@sha256:cce8be0c1ac4e3d852132008ef1cc1dcf5b79f708d025db83f146ae65db32e8e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-243449 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.38 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0914 18:09:10.336953   63448 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0914 18:09:10.337019   63448 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0914 18:09:10.372899   63448 cri.go:89] found id: ""
	I0914 18:09:10.372988   63448 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0914 18:09:10.386897   63448 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0914 18:09:10.386920   63448 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0914 18:09:10.386978   63448 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0914 18:09:10.399165   63448 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0914 18:09:10.400212   63448 kubeconfig.go:125] found "default-k8s-diff-port-243449" server: "https://192.168.61.38:8444"
	I0914 18:09:10.402449   63448 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0914 18:09:10.414129   63448 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.38
	I0914 18:09:10.414192   63448 kubeadm.go:1160] stopping kube-system containers ...
	I0914 18:09:10.414207   63448 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0914 18:09:10.414276   63448 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0914 18:09:10.454549   63448 cri.go:89] found id: ""
	I0914 18:09:10.454627   63448 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0914 18:09:10.472261   63448 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0914 18:09:10.481693   63448 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0914 18:09:10.481724   63448 kubeadm.go:157] found existing configuration files:
	
	I0914 18:09:10.481772   63448 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0914 18:09:10.492205   63448 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0914 18:09:10.492283   63448 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0914 18:09:10.502923   63448 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0914 18:09:10.511620   63448 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0914 18:09:10.511688   63448 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0914 18:09:10.520978   63448 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0914 18:09:10.529590   63448 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0914 18:09:10.529652   63448 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0914 18:09:10.538602   63448 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0914 18:09:10.546968   63448 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0914 18:09:10.547037   63448 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0914 18:09:10.556280   63448 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0914 18:09:10.565471   63448 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 18:09:10.670297   63448 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 18:09:11.611646   63448 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0914 18:09:11.858308   63448 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 18:09:11.942761   63448 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0914 18:09:12.018144   63448 api_server.go:52] waiting for apiserver process to appear ...
	I0914 18:09:12.018251   63448 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:12.518933   63448 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:13.019098   63448 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:13.518297   63448 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:14.018327   63448 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:14.033874   63448 api_server.go:72] duration metric: took 2.015718891s to wait for apiserver process to appear ...
	I0914 18:09:14.033902   63448 api_server.go:88] waiting for apiserver healthz status ...
	I0914 18:09:14.033926   63448 api_server.go:253] Checking apiserver healthz at https://192.168.61.38:8444/healthz ...
	I0914 18:09:14.034534   63448 api_server.go:269] stopped: https://192.168.61.38:8444/healthz: Get "https://192.168.61.38:8444/healthz": dial tcp 192.168.61.38:8444: connect: connection refused
	I0914 18:09:14.534065   63448 api_server.go:253] Checking apiserver healthz at https://192.168.61.38:8444/healthz ...
	I0914 18:09:10.373394   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:10.873498   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:11.373841   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:11.873492   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:12.374179   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:12.873586   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:13.374405   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:13.873518   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:14.374018   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:14.873905   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:12.080547   62207 main.go:141] libmachine: (no-preload-168587) DBG | domain no-preload-168587 has defined MAC address 52:54:00:4c:40:8a in network mk-no-preload-168587
	I0914 18:09:12.081149   62207 main.go:141] libmachine: (no-preload-168587) DBG | unable to find current IP address of domain no-preload-168587 in network mk-no-preload-168587
	I0914 18:09:12.081174   62207 main.go:141] libmachine: (no-preload-168587) DBG | I0914 18:09:12.081095   64231 retry.go:31] will retry after 1.634645735s: waiting for machine to come up
	I0914 18:09:13.717239   62207 main.go:141] libmachine: (no-preload-168587) DBG | domain no-preload-168587 has defined MAC address 52:54:00:4c:40:8a in network mk-no-preload-168587
	I0914 18:09:13.717787   62207 main.go:141] libmachine: (no-preload-168587) DBG | unable to find current IP address of domain no-preload-168587 in network mk-no-preload-168587
	I0914 18:09:13.717821   62207 main.go:141] libmachine: (no-preload-168587) DBG | I0914 18:09:13.717731   64231 retry.go:31] will retry after 2.524549426s: waiting for machine to come up
	I0914 18:09:16.244729   62207 main.go:141] libmachine: (no-preload-168587) DBG | domain no-preload-168587 has defined MAC address 52:54:00:4c:40:8a in network mk-no-preload-168587
	I0914 18:09:16.245132   62207 main.go:141] libmachine: (no-preload-168587) DBG | unable to find current IP address of domain no-preload-168587 in network mk-no-preload-168587
	I0914 18:09:16.245162   62207 main.go:141] libmachine: (no-preload-168587) DBG | I0914 18:09:16.245072   64231 retry.go:31] will retry after 2.539965892s: waiting for machine to come up
	I0914 18:09:16.083603   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:09:18.581965   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:09:16.427071   63448 api_server.go:279] https://192.168.61.38:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0914 18:09:16.427109   63448 api_server.go:103] status: https://192.168.61.38:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0914 18:09:16.427156   63448 api_server.go:253] Checking apiserver healthz at https://192.168.61.38:8444/healthz ...
	I0914 18:09:16.440812   63448 api_server.go:279] https://192.168.61.38:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0914 18:09:16.440848   63448 api_server.go:103] status: https://192.168.61.38:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0914 18:09:16.534060   63448 api_server.go:253] Checking apiserver healthz at https://192.168.61.38:8444/healthz ...
	I0914 18:09:16.593356   63448 api_server.go:279] https://192.168.61.38:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0914 18:09:16.593412   63448 api_server.go:103] status: https://192.168.61.38:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0914 18:09:17.034545   63448 api_server.go:253] Checking apiserver healthz at https://192.168.61.38:8444/healthz ...
	I0914 18:09:17.039094   63448 api_server.go:279] https://192.168.61.38:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0914 18:09:17.039131   63448 api_server.go:103] status: https://192.168.61.38:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0914 18:09:17.534668   63448 api_server.go:253] Checking apiserver healthz at https://192.168.61.38:8444/healthz ...
	I0914 18:09:17.543018   63448 api_server.go:279] https://192.168.61.38:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0914 18:09:17.543053   63448 api_server.go:103] status: https://192.168.61.38:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0914 18:09:18.034612   63448 api_server.go:253] Checking apiserver healthz at https://192.168.61.38:8444/healthz ...
	I0914 18:09:18.039042   63448 api_server.go:279] https://192.168.61.38:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0914 18:09:18.039071   63448 api_server.go:103] status: https://192.168.61.38:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0914 18:09:18.534675   63448 api_server.go:253] Checking apiserver healthz at https://192.168.61.38:8444/healthz ...
	I0914 18:09:18.540612   63448 api_server.go:279] https://192.168.61.38:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0914 18:09:18.540637   63448 api_server.go:103] status: https://192.168.61.38:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0914 18:09:19.034196   63448 api_server.go:253] Checking apiserver healthz at https://192.168.61.38:8444/healthz ...
	I0914 18:09:19.040397   63448 api_server.go:279] https://192.168.61.38:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0914 18:09:19.040429   63448 api_server.go:103] status: https://192.168.61.38:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0914 18:09:19.535035   63448 api_server.go:253] Checking apiserver healthz at https://192.168.61.38:8444/healthz ...
	I0914 18:09:19.540910   63448 api_server.go:279] https://192.168.61.38:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0914 18:09:19.540940   63448 api_server.go:103] status: https://192.168.61.38:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0914 18:09:20.034275   63448 api_server.go:253] Checking apiserver healthz at https://192.168.61.38:8444/healthz ...
	I0914 18:09:20.038541   63448 api_server.go:279] https://192.168.61.38:8444/healthz returned 200:
	ok
	I0914 18:09:20.044704   63448 api_server.go:141] control plane version: v1.31.1
	I0914 18:09:20.044734   63448 api_server.go:131] duration metric: took 6.010822563s to wait for apiserver health ...
	I0914 18:09:20.044744   63448 cni.go:84] Creating CNI manager for ""
	I0914 18:09:20.044752   63448 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0914 18:09:20.046616   63448 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0914 18:09:20.047724   63448 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0914 18:09:20.058152   63448 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0914 18:09:20.077880   63448 system_pods.go:43] waiting for kube-system pods to appear ...
	I0914 18:09:20.090089   63448 system_pods.go:59] 8 kube-system pods found
	I0914 18:09:20.090135   63448 system_pods.go:61] "coredns-7c65d6cfc9-8v8s7" [896b4fde-d17e-43a3-b7c8-b710e2e70e2c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0914 18:09:20.090148   63448 system_pods.go:61] "etcd-default-k8s-diff-port-243449" [9201493d-45db-44f4-948d-34e1d1ddee8f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0914 18:09:20.090178   63448 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-243449" [052b85bf-0f3b-4ace-9301-99e53f91cfcf] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0914 18:09:20.090192   63448 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-243449" [51214b37-f5e8-4037-83ff-1fd09b93e008] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0914 18:09:20.090199   63448 system_pods.go:61] "kube-proxy-gbkqm" [4308aacf-ea0a-4bba-8598-85ffaf959b7e] Running
	I0914 18:09:20.090210   63448 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-243449" [b6eacf30-47ec-4f8d-968c-195661a2a732] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0914 18:09:20.090219   63448 system_pods.go:61] "metrics-server-6867b74b74-7v8dr" [90be95af-c779-4b31-b261-2c4020a34280] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 18:09:20.090226   63448 system_pods.go:61] "storage-provisioner" [2e814601-a19a-4848-bed5-d9a29ffb3b5d] Running
	I0914 18:09:20.090236   63448 system_pods.go:74] duration metric: took 12.327834ms to wait for pod list to return data ...
	I0914 18:09:20.090248   63448 node_conditions.go:102] verifying NodePressure condition ...
	I0914 18:09:20.094429   63448 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0914 18:09:20.094455   63448 node_conditions.go:123] node cpu capacity is 2
	I0914 18:09:20.094468   63448 node_conditions.go:105] duration metric: took 4.21448ms to run NodePressure ...
	I0914 18:09:20.094486   63448 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 18:09:15.374447   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:15.873830   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:16.373497   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:16.874326   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:17.373994   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:17.873394   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:18.373596   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:18.874350   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:19.374434   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:19.873774   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:20.357111   63448 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0914 18:09:20.361447   63448 kubeadm.go:739] kubelet initialised
	I0914 18:09:20.361469   63448 kubeadm.go:740] duration metric: took 4.331134ms waiting for restarted kubelet to initialise ...
	I0914 18:09:20.361479   63448 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 18:09:20.367027   63448 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-8v8s7" in "kube-system" namespace to be "Ready" ...
	I0914 18:09:20.371669   63448 pod_ready.go:98] node "default-k8s-diff-port-243449" hosting pod "coredns-7c65d6cfc9-8v8s7" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-243449" has status "Ready":"False"
	I0914 18:09:20.371697   63448 pod_ready.go:82] duration metric: took 4.644689ms for pod "coredns-7c65d6cfc9-8v8s7" in "kube-system" namespace to be "Ready" ...
	E0914 18:09:20.371706   63448 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-243449" hosting pod "coredns-7c65d6cfc9-8v8s7" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-243449" has status "Ready":"False"
	I0914 18:09:20.371714   63448 pod_ready.go:79] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-243449" in "kube-system" namespace to be "Ready" ...
	I0914 18:09:20.376461   63448 pod_ready.go:98] node "default-k8s-diff-port-243449" hosting pod "etcd-default-k8s-diff-port-243449" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-243449" has status "Ready":"False"
	I0914 18:09:20.376486   63448 pod_ready.go:82] duration metric: took 4.764316ms for pod "etcd-default-k8s-diff-port-243449" in "kube-system" namespace to be "Ready" ...
	E0914 18:09:20.376497   63448 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-243449" hosting pod "etcd-default-k8s-diff-port-243449" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-243449" has status "Ready":"False"
	I0914 18:09:20.376506   63448 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-243449" in "kube-system" namespace to be "Ready" ...
	I0914 18:09:20.380607   63448 pod_ready.go:98] node "default-k8s-diff-port-243449" hosting pod "kube-apiserver-default-k8s-diff-port-243449" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-243449" has status "Ready":"False"
	I0914 18:09:20.380632   63448 pod_ready.go:82] duration metric: took 4.117696ms for pod "kube-apiserver-default-k8s-diff-port-243449" in "kube-system" namespace to be "Ready" ...
	E0914 18:09:20.380642   63448 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-243449" hosting pod "kube-apiserver-default-k8s-diff-port-243449" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-243449" has status "Ready":"False"
	I0914 18:09:20.380649   63448 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-243449" in "kube-system" namespace to be "Ready" ...
	I0914 18:09:20.481883   63448 pod_ready.go:98] node "default-k8s-diff-port-243449" hosting pod "kube-controller-manager-default-k8s-diff-port-243449" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-243449" has status "Ready":"False"
	I0914 18:09:20.481920   63448 pod_ready.go:82] duration metric: took 101.262101ms for pod "kube-controller-manager-default-k8s-diff-port-243449" in "kube-system" namespace to be "Ready" ...
	E0914 18:09:20.481935   63448 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-243449" hosting pod "kube-controller-manager-default-k8s-diff-port-243449" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-243449" has status "Ready":"False"
	I0914 18:09:20.481965   63448 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-gbkqm" in "kube-system" namespace to be "Ready" ...
	I0914 18:09:20.881501   63448 pod_ready.go:98] node "default-k8s-diff-port-243449" hosting pod "kube-proxy-gbkqm" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-243449" has status "Ready":"False"
	I0914 18:09:20.881541   63448 pod_ready.go:82] duration metric: took 399.559576ms for pod "kube-proxy-gbkqm" in "kube-system" namespace to be "Ready" ...
	E0914 18:09:20.881556   63448 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-243449" hosting pod "kube-proxy-gbkqm" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-243449" has status "Ready":"False"
	I0914 18:09:20.881566   63448 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-243449" in "kube-system" namespace to be "Ready" ...
	I0914 18:09:21.282414   63448 pod_ready.go:98] node "default-k8s-diff-port-243449" hosting pod "kube-scheduler-default-k8s-diff-port-243449" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-243449" has status "Ready":"False"
	I0914 18:09:21.282446   63448 pod_ready.go:82] duration metric: took 400.860884ms for pod "kube-scheduler-default-k8s-diff-port-243449" in "kube-system" namespace to be "Ready" ...
	E0914 18:09:21.282463   63448 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-243449" hosting pod "kube-scheduler-default-k8s-diff-port-243449" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-243449" has status "Ready":"False"
	I0914 18:09:21.282472   63448 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace to be "Ready" ...
	I0914 18:09:21.681717   63448 pod_ready.go:98] node "default-k8s-diff-port-243449" hosting pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-243449" has status "Ready":"False"
	I0914 18:09:21.681757   63448 pod_ready.go:82] duration metric: took 399.273892ms for pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace to be "Ready" ...
	E0914 18:09:21.681773   63448 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-243449" hosting pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-243449" has status "Ready":"False"
	I0914 18:09:21.681783   63448 pod_ready.go:39] duration metric: took 1.320292845s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 18:09:21.681825   63448 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0914 18:09:21.693644   63448 ops.go:34] apiserver oom_adj: -16
	I0914 18:09:21.693682   63448 kubeadm.go:597] duration metric: took 11.306754096s to restartPrimaryControlPlane
	I0914 18:09:21.693696   63448 kubeadm.go:394] duration metric: took 11.356851178s to StartCluster
	I0914 18:09:21.693719   63448 settings.go:142] acquiring lock: {Name:mkee31cd57c3a91eff581c48ba7961460fb3e0b1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 18:09:21.693820   63448 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19643-8806/kubeconfig
	I0914 18:09:21.695521   63448 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19643-8806/kubeconfig: {Name:mk850f3e2f3c2ef1e81c795ae72e22e459e27140 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 18:09:21.695793   63448 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.38 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0914 18:09:21.695903   63448 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0914 18:09:21.695982   63448 config.go:182] Loaded profile config "default-k8s-diff-port-243449": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0914 18:09:21.696003   63448 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-243449"
	I0914 18:09:21.696021   63448 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-243449"
	I0914 18:09:21.696029   63448 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-243449"
	W0914 18:09:21.696041   63448 addons.go:243] addon storage-provisioner should already be in state true
	I0914 18:09:21.696044   63448 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-243449"
	I0914 18:09:21.696063   63448 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-243449"
	I0914 18:09:21.696094   63448 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-243449"
	W0914 18:09:21.696108   63448 addons.go:243] addon metrics-server should already be in state true
	I0914 18:09:21.696149   63448 host.go:66] Checking if "default-k8s-diff-port-243449" exists ...
	I0914 18:09:21.696074   63448 host.go:66] Checking if "default-k8s-diff-port-243449" exists ...
	I0914 18:09:21.696411   63448 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 18:09:21.696455   63448 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 18:09:21.696543   63448 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 18:09:21.696562   63448 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 18:09:21.696693   63448 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 18:09:21.696735   63448 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 18:09:21.697719   63448 out.go:177] * Verifying Kubernetes components...
	I0914 18:09:21.699171   63448 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 18:09:21.712479   63448 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36733
	I0914 18:09:21.712563   63448 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40265
	I0914 18:09:21.713050   63448 main.go:141] libmachine: () Calling .GetVersion
	I0914 18:09:21.713065   63448 main.go:141] libmachine: () Calling .GetVersion
	I0914 18:09:21.713585   63448 main.go:141] libmachine: Using API Version  1
	I0914 18:09:21.713601   63448 main.go:141] libmachine: Using API Version  1
	I0914 18:09:21.713613   63448 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 18:09:21.713633   63448 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 18:09:21.713940   63448 main.go:141] libmachine: () Calling .GetMachineName
	I0914 18:09:21.714122   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetState
	I0914 18:09:21.714135   63448 main.go:141] libmachine: () Calling .GetMachineName
	I0914 18:09:21.714737   63448 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 18:09:21.714789   63448 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 18:09:21.716503   63448 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33627
	I0914 18:09:21.716977   63448 main.go:141] libmachine: () Calling .GetVersion
	I0914 18:09:21.717490   63448 main.go:141] libmachine: Using API Version  1
	I0914 18:09:21.717514   63448 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 18:09:21.717872   63448 main.go:141] libmachine: () Calling .GetMachineName
	I0914 18:09:21.718055   63448 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-243449"
	W0914 18:09:21.718075   63448 addons.go:243] addon default-storageclass should already be in state true
	I0914 18:09:21.718105   63448 host.go:66] Checking if "default-k8s-diff-port-243449" exists ...
	I0914 18:09:21.718432   63448 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 18:09:21.718484   63448 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 18:09:21.718491   63448 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 18:09:21.718527   63448 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 18:09:21.737248   63448 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45909
	I0914 18:09:21.738874   63448 main.go:141] libmachine: () Calling .GetVersion
	I0914 18:09:21.739437   63448 main.go:141] libmachine: Using API Version  1
	I0914 18:09:21.739460   63448 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 18:09:21.739865   63448 main.go:141] libmachine: () Calling .GetMachineName
	I0914 18:09:21.740121   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetState
	I0914 18:09:21.742251   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .DriverName
	I0914 18:09:21.744281   63448 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 18:09:21.745631   63448 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0914 18:09:21.745656   63448 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0914 18:09:21.745682   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetSSHHostname
	I0914 18:09:21.749856   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | domain default-k8s-diff-port-243449 has defined MAC address 52:54:00:6e:0b:a7 in network mk-default-k8s-diff-port-243449
	I0914 18:09:21.750398   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:0b:a7", ip: ""} in network mk-default-k8s-diff-port-243449: {Iface:virbr4 ExpiryTime:2024-09-14 19:08:56 +0000 UTC Type:0 Mac:52:54:00:6e:0b:a7 Iaid: IPaddr:192.168.61.38 Prefix:24 Hostname:default-k8s-diff-port-243449 Clientid:01:52:54:00:6e:0b:a7}
	I0914 18:09:21.750424   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | domain default-k8s-diff-port-243449 has defined IP address 192.168.61.38 and MAC address 52:54:00:6e:0b:a7 in network mk-default-k8s-diff-port-243449
	I0914 18:09:21.750659   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetSSHPort
	I0914 18:09:21.750886   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetSSHKeyPath
	I0914 18:09:21.751029   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetSSHUsername
	I0914 18:09:21.751187   63448 sshutil.go:53] new ssh client: &{IP:192.168.61.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/default-k8s-diff-port-243449/id_rsa Username:docker}
	I0914 18:09:21.756605   63448 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33055
	I0914 18:09:21.756825   63448 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39257
	I0914 18:09:21.757040   63448 main.go:141] libmachine: () Calling .GetVersion
	I0914 18:09:21.757293   63448 main.go:141] libmachine: () Calling .GetVersion
	I0914 18:09:21.757562   63448 main.go:141] libmachine: Using API Version  1
	I0914 18:09:21.757588   63448 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 18:09:21.758058   63448 main.go:141] libmachine: () Calling .GetMachineName
	I0914 18:09:21.758301   63448 main.go:141] libmachine: Using API Version  1
	I0914 18:09:21.758322   63448 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 18:09:21.758325   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetState
	I0914 18:09:21.758717   63448 main.go:141] libmachine: () Calling .GetMachineName
	I0914 18:09:21.759300   63448 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 18:09:21.759342   63448 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 18:09:21.760557   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .DriverName
	I0914 18:09:21.762845   63448 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0914 18:09:18.787883   62207 main.go:141] libmachine: (no-preload-168587) DBG | domain no-preload-168587 has defined MAC address 52:54:00:4c:40:8a in network mk-no-preload-168587
	I0914 18:09:18.788270   62207 main.go:141] libmachine: (no-preload-168587) DBG | unable to find current IP address of domain no-preload-168587 in network mk-no-preload-168587
	I0914 18:09:18.788298   62207 main.go:141] libmachine: (no-preload-168587) DBG | I0914 18:09:18.788225   64231 retry.go:31] will retry after 4.53698887s: waiting for machine to come up
	I0914 18:09:21.764071   63448 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0914 18:09:21.764092   63448 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0914 18:09:21.764116   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetSSHHostname
	I0914 18:09:21.767725   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | domain default-k8s-diff-port-243449 has defined MAC address 52:54:00:6e:0b:a7 in network mk-default-k8s-diff-port-243449
	I0914 18:09:21.768255   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:0b:a7", ip: ""} in network mk-default-k8s-diff-port-243449: {Iface:virbr4 ExpiryTime:2024-09-14 19:08:56 +0000 UTC Type:0 Mac:52:54:00:6e:0b:a7 Iaid: IPaddr:192.168.61.38 Prefix:24 Hostname:default-k8s-diff-port-243449 Clientid:01:52:54:00:6e:0b:a7}
	I0914 18:09:21.768367   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | domain default-k8s-diff-port-243449 has defined IP address 192.168.61.38 and MAC address 52:54:00:6e:0b:a7 in network mk-default-k8s-diff-port-243449
	I0914 18:09:21.768503   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetSSHPort
	I0914 18:09:21.768681   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetSSHKeyPath
	I0914 18:09:21.768856   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetSSHUsername
	I0914 18:09:21.769030   63448 sshutil.go:53] new ssh client: &{IP:192.168.61.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/default-k8s-diff-port-243449/id_rsa Username:docker}
	I0914 18:09:21.776783   63448 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38967
	I0914 18:09:21.777226   63448 main.go:141] libmachine: () Calling .GetVersion
	I0914 18:09:21.777736   63448 main.go:141] libmachine: Using API Version  1
	I0914 18:09:21.777754   63448 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 18:09:21.778113   63448 main.go:141] libmachine: () Calling .GetMachineName
	I0914 18:09:21.778345   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetState
	I0914 18:09:21.780215   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .DriverName
	I0914 18:09:21.780421   63448 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0914 18:09:21.780436   63448 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0914 18:09:21.780458   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetSSHHostname
	I0914 18:09:21.783243   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | domain default-k8s-diff-port-243449 has defined MAC address 52:54:00:6e:0b:a7 in network mk-default-k8s-diff-port-243449
	I0914 18:09:21.783671   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:0b:a7", ip: ""} in network mk-default-k8s-diff-port-243449: {Iface:virbr4 ExpiryTime:2024-09-14 19:08:56 +0000 UTC Type:0 Mac:52:54:00:6e:0b:a7 Iaid: IPaddr:192.168.61.38 Prefix:24 Hostname:default-k8s-diff-port-243449 Clientid:01:52:54:00:6e:0b:a7}
	I0914 18:09:21.783698   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | domain default-k8s-diff-port-243449 has defined IP address 192.168.61.38 and MAC address 52:54:00:6e:0b:a7 in network mk-default-k8s-diff-port-243449
	I0914 18:09:21.783857   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetSSHPort
	I0914 18:09:21.784023   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetSSHKeyPath
	I0914 18:09:21.784138   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetSSHUsername
	I0914 18:09:21.784256   63448 sshutil.go:53] new ssh client: &{IP:192.168.61.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/default-k8s-diff-port-243449/id_rsa Username:docker}
	I0914 18:09:21.919649   63448 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0914 18:09:21.945515   63448 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-243449" to be "Ready" ...
	I0914 18:09:22.020487   63448 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0914 18:09:22.020509   63448 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0914 18:09:22.041265   63448 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0914 18:09:22.072169   63448 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0914 18:09:22.072199   63448 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0914 18:09:22.112117   63448 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0914 18:09:22.112148   63448 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0914 18:09:22.146636   63448 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0914 18:09:22.162248   63448 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0914 18:09:22.520416   63448 main.go:141] libmachine: Making call to close driver server
	I0914 18:09:22.520448   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .Close
	I0914 18:09:22.520793   63448 main.go:141] libmachine: Successfully made call to close driver server
	I0914 18:09:22.520815   63448 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 18:09:22.520831   63448 main.go:141] libmachine: Making call to close driver server
	I0914 18:09:22.520833   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | Closing plugin on server side
	I0914 18:09:22.520840   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .Close
	I0914 18:09:22.521074   63448 main.go:141] libmachine: Successfully made call to close driver server
	I0914 18:09:22.521119   63448 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 18:09:22.527992   63448 main.go:141] libmachine: Making call to close driver server
	I0914 18:09:22.528030   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .Close
	I0914 18:09:22.528578   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | Closing plugin on server side
	I0914 18:09:22.528581   63448 main.go:141] libmachine: Successfully made call to close driver server
	I0914 18:09:22.528605   63448 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 18:09:23.246463   63448 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.084175525s)
	I0914 18:09:23.246520   63448 main.go:141] libmachine: Making call to close driver server
	I0914 18:09:23.246535   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .Close
	I0914 18:09:23.246564   63448 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.099889297s)
	I0914 18:09:23.246609   63448 main.go:141] libmachine: Making call to close driver server
	I0914 18:09:23.246621   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .Close
	I0914 18:09:23.246835   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | Closing plugin on server side
	I0914 18:09:23.246876   63448 main.go:141] libmachine: Successfully made call to close driver server
	I0914 18:09:23.246888   63448 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 18:09:23.246897   63448 main.go:141] libmachine: Making call to close driver server
	I0914 18:09:23.246910   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .Close
	I0914 18:09:23.246944   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | Closing plugin on server side
	I0914 18:09:23.246958   63448 main.go:141] libmachine: Successfully made call to close driver server
	I0914 18:09:23.247002   63448 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 18:09:23.247021   63448 main.go:141] libmachine: Making call to close driver server
	I0914 18:09:23.247046   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .Close
	I0914 18:09:23.247156   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | Closing plugin on server side
	I0914 18:09:23.247192   63448 main.go:141] libmachine: Successfully made call to close driver server
	I0914 18:09:23.247227   63448 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 18:09:23.247241   63448 main.go:141] libmachine: Successfully made call to close driver server
	I0914 18:09:23.247260   63448 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 18:09:23.247241   63448 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-243449"
	I0914 18:09:23.250385   63448 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0914 18:09:20.583600   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:09:23.083187   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:09:23.251609   63448 addons.go:510] duration metric: took 1.555716144s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I0914 18:09:23.949715   63448 node_ready.go:53] node "default-k8s-diff-port-243449" has status "Ready":"False"
	I0914 18:09:20.374108   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:20.874167   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:21.374108   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:21.873539   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:22.374451   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:22.874481   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:23.374533   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:23.873433   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:24.374284   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:24.873466   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:23.327287   62207 main.go:141] libmachine: (no-preload-168587) DBG | domain no-preload-168587 has defined MAC address 52:54:00:4c:40:8a in network mk-no-preload-168587
	I0914 18:09:23.327775   62207 main.go:141] libmachine: (no-preload-168587) DBG | domain no-preload-168587 has current primary IP address 192.168.39.38 and MAC address 52:54:00:4c:40:8a in network mk-no-preload-168587
	I0914 18:09:23.327803   62207 main.go:141] libmachine: (no-preload-168587) Found IP for machine: 192.168.39.38
	I0914 18:09:23.327822   62207 main.go:141] libmachine: (no-preload-168587) Reserving static IP address...
	I0914 18:09:23.328197   62207 main.go:141] libmachine: (no-preload-168587) DBG | found host DHCP lease matching {name: "no-preload-168587", mac: "52:54:00:4c:40:8a", ip: "192.168.39.38"} in network mk-no-preload-168587: {Iface:virbr2 ExpiryTime:2024-09-14 18:59:50 +0000 UTC Type:0 Mac:52:54:00:4c:40:8a Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:no-preload-168587 Clientid:01:52:54:00:4c:40:8a}
	I0914 18:09:23.328221   62207 main.go:141] libmachine: (no-preload-168587) Reserved static IP address: 192.168.39.38
	I0914 18:09:23.328264   62207 main.go:141] libmachine: (no-preload-168587) DBG | skip adding static IP to network mk-no-preload-168587 - found existing host DHCP lease matching {name: "no-preload-168587", mac: "52:54:00:4c:40:8a", ip: "192.168.39.38"}
	I0914 18:09:23.328283   62207 main.go:141] libmachine: (no-preload-168587) DBG | Getting to WaitForSSH function...
	I0914 18:09:23.328295   62207 main.go:141] libmachine: (no-preload-168587) Waiting for SSH to be available...
	I0914 18:09:23.330598   62207 main.go:141] libmachine: (no-preload-168587) DBG | domain no-preload-168587 has defined MAC address 52:54:00:4c:40:8a in network mk-no-preload-168587
	I0914 18:09:23.330954   62207 main.go:141] libmachine: (no-preload-168587) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:40:8a", ip: ""} in network mk-no-preload-168587: {Iface:virbr2 ExpiryTime:2024-09-14 18:59:50 +0000 UTC Type:0 Mac:52:54:00:4c:40:8a Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:no-preload-168587 Clientid:01:52:54:00:4c:40:8a}
	I0914 18:09:23.330985   62207 main.go:141] libmachine: (no-preload-168587) DBG | domain no-preload-168587 has defined IP address 192.168.39.38 and MAC address 52:54:00:4c:40:8a in network mk-no-preload-168587
	I0914 18:09:23.331105   62207 main.go:141] libmachine: (no-preload-168587) DBG | Using SSH client type: external
	I0914 18:09:23.331132   62207 main.go:141] libmachine: (no-preload-168587) DBG | Using SSH private key: /home/jenkins/minikube-integration/19643-8806/.minikube/machines/no-preload-168587/id_rsa (-rw-------)
	I0914 18:09:23.331184   62207 main.go:141] libmachine: (no-preload-168587) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.38 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19643-8806/.minikube/machines/no-preload-168587/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0914 18:09:23.331196   62207 main.go:141] libmachine: (no-preload-168587) DBG | About to run SSH command:
	I0914 18:09:23.331208   62207 main.go:141] libmachine: (no-preload-168587) DBG | exit 0
	I0914 18:09:23.454525   62207 main.go:141] libmachine: (no-preload-168587) DBG | SSH cmd err, output: <nil>: 
	I0914 18:09:23.454883   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetConfigRaw
	I0914 18:09:23.455505   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetIP
	I0914 18:09:23.457696   62207 main.go:141] libmachine: (no-preload-168587) DBG | domain no-preload-168587 has defined MAC address 52:54:00:4c:40:8a in network mk-no-preload-168587
	I0914 18:09:23.458030   62207 main.go:141] libmachine: (no-preload-168587) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:40:8a", ip: ""} in network mk-no-preload-168587: {Iface:virbr2 ExpiryTime:2024-09-14 18:59:50 +0000 UTC Type:0 Mac:52:54:00:4c:40:8a Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:no-preload-168587 Clientid:01:52:54:00:4c:40:8a}
	I0914 18:09:23.458069   62207 main.go:141] libmachine: (no-preload-168587) DBG | domain no-preload-168587 has defined IP address 192.168.39.38 and MAC address 52:54:00:4c:40:8a in network mk-no-preload-168587
	I0914 18:09:23.458372   62207 profile.go:143] Saving config to /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/no-preload-168587/config.json ...
	I0914 18:09:23.458611   62207 machine.go:93] provisionDockerMachine start ...
	I0914 18:09:23.458633   62207 main.go:141] libmachine: (no-preload-168587) Calling .DriverName
	I0914 18:09:23.458828   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetSSHHostname
	I0914 18:09:23.461199   62207 main.go:141] libmachine: (no-preload-168587) DBG | domain no-preload-168587 has defined MAC address 52:54:00:4c:40:8a in network mk-no-preload-168587
	I0914 18:09:23.461540   62207 main.go:141] libmachine: (no-preload-168587) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:40:8a", ip: ""} in network mk-no-preload-168587: {Iface:virbr2 ExpiryTime:2024-09-14 18:59:50 +0000 UTC Type:0 Mac:52:54:00:4c:40:8a Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:no-preload-168587 Clientid:01:52:54:00:4c:40:8a}
	I0914 18:09:23.461576   62207 main.go:141] libmachine: (no-preload-168587) DBG | domain no-preload-168587 has defined IP address 192.168.39.38 and MAC address 52:54:00:4c:40:8a in network mk-no-preload-168587
	I0914 18:09:23.461705   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetSSHPort
	I0914 18:09:23.461895   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetSSHKeyPath
	I0914 18:09:23.462006   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetSSHKeyPath
	I0914 18:09:23.462153   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetSSHUsername
	I0914 18:09:23.462314   62207 main.go:141] libmachine: Using SSH client type: native
	I0914 18:09:23.462477   62207 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.38 22 <nil> <nil>}
	I0914 18:09:23.462488   62207 main.go:141] libmachine: About to run SSH command:
	hostname
	I0914 18:09:23.566278   62207 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0914 18:09:23.566310   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetMachineName
	I0914 18:09:23.566559   62207 buildroot.go:166] provisioning hostname "no-preload-168587"
	I0914 18:09:23.566581   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetMachineName
	I0914 18:09:23.566742   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetSSHHostname
	I0914 18:09:23.569254   62207 main.go:141] libmachine: (no-preload-168587) DBG | domain no-preload-168587 has defined MAC address 52:54:00:4c:40:8a in network mk-no-preload-168587
	I0914 18:09:23.569590   62207 main.go:141] libmachine: (no-preload-168587) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:40:8a", ip: ""} in network mk-no-preload-168587: {Iface:virbr2 ExpiryTime:2024-09-14 18:59:50 +0000 UTC Type:0 Mac:52:54:00:4c:40:8a Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:no-preload-168587 Clientid:01:52:54:00:4c:40:8a}
	I0914 18:09:23.569617   62207 main.go:141] libmachine: (no-preload-168587) DBG | domain no-preload-168587 has defined IP address 192.168.39.38 and MAC address 52:54:00:4c:40:8a in network mk-no-preload-168587
	I0914 18:09:23.569713   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetSSHPort
	I0914 18:09:23.569888   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetSSHKeyPath
	I0914 18:09:23.570009   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetSSHKeyPath
	I0914 18:09:23.570174   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetSSHUsername
	I0914 18:09:23.570344   62207 main.go:141] libmachine: Using SSH client type: native
	I0914 18:09:23.570556   62207 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.38 22 <nil> <nil>}
	I0914 18:09:23.570575   62207 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-168587 && echo "no-preload-168587" | sudo tee /etc/hostname
	I0914 18:09:23.687805   62207 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-168587
	
	I0914 18:09:23.687848   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetSSHHostname
	I0914 18:09:23.690447   62207 main.go:141] libmachine: (no-preload-168587) DBG | domain no-preload-168587 has defined MAC address 52:54:00:4c:40:8a in network mk-no-preload-168587
	I0914 18:09:23.690787   62207 main.go:141] libmachine: (no-preload-168587) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:40:8a", ip: ""} in network mk-no-preload-168587: {Iface:virbr2 ExpiryTime:2024-09-14 18:59:50 +0000 UTC Type:0 Mac:52:54:00:4c:40:8a Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:no-preload-168587 Clientid:01:52:54:00:4c:40:8a}
	I0914 18:09:23.690824   62207 main.go:141] libmachine: (no-preload-168587) DBG | domain no-preload-168587 has defined IP address 192.168.39.38 and MAC address 52:54:00:4c:40:8a in network mk-no-preload-168587
	I0914 18:09:23.690955   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetSSHPort
	I0914 18:09:23.691135   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetSSHKeyPath
	I0914 18:09:23.691279   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetSSHKeyPath
	I0914 18:09:23.691427   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetSSHUsername
	I0914 18:09:23.691590   62207 main.go:141] libmachine: Using SSH client type: native
	I0914 18:09:23.691768   62207 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.38 22 <nil> <nil>}
	I0914 18:09:23.691790   62207 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-168587' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-168587/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-168587' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0914 18:09:23.805502   62207 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0914 18:09:23.805527   62207 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19643-8806/.minikube CaCertPath:/home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19643-8806/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19643-8806/.minikube}
	I0914 18:09:23.805545   62207 buildroot.go:174] setting up certificates
	I0914 18:09:23.805553   62207 provision.go:84] configureAuth start
	I0914 18:09:23.805561   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetMachineName
	I0914 18:09:23.805798   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetIP
	I0914 18:09:23.808306   62207 main.go:141] libmachine: (no-preload-168587) DBG | domain no-preload-168587 has defined MAC address 52:54:00:4c:40:8a in network mk-no-preload-168587
	I0914 18:09:23.808643   62207 main.go:141] libmachine: (no-preload-168587) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:40:8a", ip: ""} in network mk-no-preload-168587: {Iface:virbr2 ExpiryTime:2024-09-14 18:59:50 +0000 UTC Type:0 Mac:52:54:00:4c:40:8a Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:no-preload-168587 Clientid:01:52:54:00:4c:40:8a}
	I0914 18:09:23.808668   62207 main.go:141] libmachine: (no-preload-168587) DBG | domain no-preload-168587 has defined IP address 192.168.39.38 and MAC address 52:54:00:4c:40:8a in network mk-no-preload-168587
	I0914 18:09:23.808819   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetSSHHostname
	I0914 18:09:23.811055   62207 main.go:141] libmachine: (no-preload-168587) DBG | domain no-preload-168587 has defined MAC address 52:54:00:4c:40:8a in network mk-no-preload-168587
	I0914 18:09:23.811374   62207 main.go:141] libmachine: (no-preload-168587) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:40:8a", ip: ""} in network mk-no-preload-168587: {Iface:virbr2 ExpiryTime:2024-09-14 18:59:50 +0000 UTC Type:0 Mac:52:54:00:4c:40:8a Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:no-preload-168587 Clientid:01:52:54:00:4c:40:8a}
	I0914 18:09:23.811401   62207 main.go:141] libmachine: (no-preload-168587) DBG | domain no-preload-168587 has defined IP address 192.168.39.38 and MAC address 52:54:00:4c:40:8a in network mk-no-preload-168587
	I0914 18:09:23.811586   62207 provision.go:143] copyHostCerts
	I0914 18:09:23.811647   62207 exec_runner.go:144] found /home/jenkins/minikube-integration/19643-8806/.minikube/ca.pem, removing ...
	I0914 18:09:23.811657   62207 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19643-8806/.minikube/ca.pem
	I0914 18:09:23.811712   62207 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19643-8806/.minikube/ca.pem (1082 bytes)
	I0914 18:09:23.811800   62207 exec_runner.go:144] found /home/jenkins/minikube-integration/19643-8806/.minikube/cert.pem, removing ...
	I0914 18:09:23.811808   62207 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19643-8806/.minikube/cert.pem
	I0914 18:09:23.811829   62207 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19643-8806/.minikube/cert.pem (1123 bytes)
	I0914 18:09:23.811880   62207 exec_runner.go:144] found /home/jenkins/minikube-integration/19643-8806/.minikube/key.pem, removing ...
	I0914 18:09:23.811887   62207 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19643-8806/.minikube/key.pem
	I0914 18:09:23.811908   62207 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19643-8806/.minikube/key.pem (1675 bytes)
	I0914 18:09:23.811956   62207 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19643-8806/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca-key.pem org=jenkins.no-preload-168587 san=[127.0.0.1 192.168.39.38 localhost minikube no-preload-168587]
	I0914 18:09:24.051868   62207 provision.go:177] copyRemoteCerts
	I0914 18:09:24.051936   62207 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0914 18:09:24.051958   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetSSHHostname
	I0914 18:09:24.054842   62207 main.go:141] libmachine: (no-preload-168587) DBG | domain no-preload-168587 has defined MAC address 52:54:00:4c:40:8a in network mk-no-preload-168587
	I0914 18:09:24.055107   62207 main.go:141] libmachine: (no-preload-168587) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:40:8a", ip: ""} in network mk-no-preload-168587: {Iface:virbr2 ExpiryTime:2024-09-14 18:59:50 +0000 UTC Type:0 Mac:52:54:00:4c:40:8a Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:no-preload-168587 Clientid:01:52:54:00:4c:40:8a}
	I0914 18:09:24.055138   62207 main.go:141] libmachine: (no-preload-168587) DBG | domain no-preload-168587 has defined IP address 192.168.39.38 and MAC address 52:54:00:4c:40:8a in network mk-no-preload-168587
	I0914 18:09:24.055321   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetSSHPort
	I0914 18:09:24.055514   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetSSHKeyPath
	I0914 18:09:24.055664   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetSSHUsername
	I0914 18:09:24.055804   62207 sshutil.go:53] new ssh client: &{IP:192.168.39.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/no-preload-168587/id_rsa Username:docker}
	I0914 18:09:24.140378   62207 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0914 18:09:24.168422   62207 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0914 18:09:24.194540   62207 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0914 18:09:24.217910   62207 provision.go:87] duration metric: took 412.343545ms to configureAuth
	I0914 18:09:24.217942   62207 buildroot.go:189] setting minikube options for container-runtime
	I0914 18:09:24.218180   62207 config.go:182] Loaded profile config "no-preload-168587": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0914 18:09:24.218255   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetSSHHostname
	I0914 18:09:24.220788   62207 main.go:141] libmachine: (no-preload-168587) DBG | domain no-preload-168587 has defined MAC address 52:54:00:4c:40:8a in network mk-no-preload-168587
	I0914 18:09:24.221216   62207 main.go:141] libmachine: (no-preload-168587) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:40:8a", ip: ""} in network mk-no-preload-168587: {Iface:virbr2 ExpiryTime:2024-09-14 18:59:50 +0000 UTC Type:0 Mac:52:54:00:4c:40:8a Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:no-preload-168587 Clientid:01:52:54:00:4c:40:8a}
	I0914 18:09:24.221259   62207 main.go:141] libmachine: (no-preload-168587) DBG | domain no-preload-168587 has defined IP address 192.168.39.38 and MAC address 52:54:00:4c:40:8a in network mk-no-preload-168587
	I0914 18:09:24.221408   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetSSHPort
	I0914 18:09:24.221678   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetSSHKeyPath
	I0914 18:09:24.221842   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetSSHKeyPath
	I0914 18:09:24.222033   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetSSHUsername
	I0914 18:09:24.222218   62207 main.go:141] libmachine: Using SSH client type: native
	I0914 18:09:24.222399   62207 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.38 22 <nil> <nil>}
	I0914 18:09:24.222417   62207 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0914 18:09:24.433203   62207 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0914 18:09:24.433230   62207 machine.go:96] duration metric: took 974.605605ms to provisionDockerMachine
	I0914 18:09:24.433241   62207 start.go:293] postStartSetup for "no-preload-168587" (driver="kvm2")
	I0914 18:09:24.433253   62207 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0914 18:09:24.433282   62207 main.go:141] libmachine: (no-preload-168587) Calling .DriverName
	I0914 18:09:24.433595   62207 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0914 18:09:24.433625   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetSSHHostname
	I0914 18:09:24.436247   62207 main.go:141] libmachine: (no-preload-168587) DBG | domain no-preload-168587 has defined MAC address 52:54:00:4c:40:8a in network mk-no-preload-168587
	I0914 18:09:24.436710   62207 main.go:141] libmachine: (no-preload-168587) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:40:8a", ip: ""} in network mk-no-preload-168587: {Iface:virbr2 ExpiryTime:2024-09-14 18:59:50 +0000 UTC Type:0 Mac:52:54:00:4c:40:8a Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:no-preload-168587 Clientid:01:52:54:00:4c:40:8a}
	I0914 18:09:24.436746   62207 main.go:141] libmachine: (no-preload-168587) DBG | domain no-preload-168587 has defined IP address 192.168.39.38 and MAC address 52:54:00:4c:40:8a in network mk-no-preload-168587
	I0914 18:09:24.436855   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetSSHPort
	I0914 18:09:24.437015   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetSSHKeyPath
	I0914 18:09:24.437189   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetSSHUsername
	I0914 18:09:24.437305   62207 sshutil.go:53] new ssh client: &{IP:192.168.39.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/no-preload-168587/id_rsa Username:docker}
	I0914 18:09:24.516493   62207 ssh_runner.go:195] Run: cat /etc/os-release
	I0914 18:09:24.520486   62207 info.go:137] Remote host: Buildroot 2023.02.9
	I0914 18:09:24.520518   62207 filesync.go:126] Scanning /home/jenkins/minikube-integration/19643-8806/.minikube/addons for local assets ...
	I0914 18:09:24.520612   62207 filesync.go:126] Scanning /home/jenkins/minikube-integration/19643-8806/.minikube/files for local assets ...
	I0914 18:09:24.520687   62207 filesync.go:149] local asset: /home/jenkins/minikube-integration/19643-8806/.minikube/files/etc/ssl/certs/160162.pem -> 160162.pem in /etc/ssl/certs
	I0914 18:09:24.520775   62207 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0914 18:09:24.530274   62207 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/files/etc/ssl/certs/160162.pem --> /etc/ssl/certs/160162.pem (1708 bytes)
	I0914 18:09:24.553381   62207 start.go:296] duration metric: took 120.123302ms for postStartSetup
	I0914 18:09:24.553422   62207 fix.go:56] duration metric: took 20.086564499s for fixHost
	I0914 18:09:24.553445   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetSSHHostname
	I0914 18:09:24.555832   62207 main.go:141] libmachine: (no-preload-168587) DBG | domain no-preload-168587 has defined MAC address 52:54:00:4c:40:8a in network mk-no-preload-168587
	I0914 18:09:24.556100   62207 main.go:141] libmachine: (no-preload-168587) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:40:8a", ip: ""} in network mk-no-preload-168587: {Iface:virbr2 ExpiryTime:2024-09-14 18:59:50 +0000 UTC Type:0 Mac:52:54:00:4c:40:8a Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:no-preload-168587 Clientid:01:52:54:00:4c:40:8a}
	I0914 18:09:24.556133   62207 main.go:141] libmachine: (no-preload-168587) DBG | domain no-preload-168587 has defined IP address 192.168.39.38 and MAC address 52:54:00:4c:40:8a in network mk-no-preload-168587
	I0914 18:09:24.556376   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetSSHPort
	I0914 18:09:24.556605   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetSSHKeyPath
	I0914 18:09:24.556772   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetSSHKeyPath
	I0914 18:09:24.556922   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetSSHUsername
	I0914 18:09:24.557062   62207 main.go:141] libmachine: Using SSH client type: native
	I0914 18:09:24.557275   62207 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.38 22 <nil> <nil>}
	I0914 18:09:24.557285   62207 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0914 18:09:24.659101   62207 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726337364.632455119
	
	I0914 18:09:24.659128   62207 fix.go:216] guest clock: 1726337364.632455119
	I0914 18:09:24.659139   62207 fix.go:229] Guest: 2024-09-14 18:09:24.632455119 +0000 UTC Remote: 2024-09-14 18:09:24.553426386 +0000 UTC m=+357.567907862 (delta=79.028733ms)
	I0914 18:09:24.659165   62207 fix.go:200] guest clock delta is within tolerance: 79.028733ms
	I0914 18:09:24.659171   62207 start.go:83] releasing machines lock for "no-preload-168587", held for 20.192350446s
	I0914 18:09:24.659209   62207 main.go:141] libmachine: (no-preload-168587) Calling .DriverName
	I0914 18:09:24.659445   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetIP
	I0914 18:09:24.662626   62207 main.go:141] libmachine: (no-preload-168587) DBG | domain no-preload-168587 has defined MAC address 52:54:00:4c:40:8a in network mk-no-preload-168587
	I0914 18:09:24.663051   62207 main.go:141] libmachine: (no-preload-168587) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:40:8a", ip: ""} in network mk-no-preload-168587: {Iface:virbr2 ExpiryTime:2024-09-14 18:59:50 +0000 UTC Type:0 Mac:52:54:00:4c:40:8a Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:no-preload-168587 Clientid:01:52:54:00:4c:40:8a}
	I0914 18:09:24.663082   62207 main.go:141] libmachine: (no-preload-168587) DBG | domain no-preload-168587 has defined IP address 192.168.39.38 and MAC address 52:54:00:4c:40:8a in network mk-no-preload-168587
	I0914 18:09:24.663225   62207 main.go:141] libmachine: (no-preload-168587) Calling .DriverName
	I0914 18:09:24.663802   62207 main.go:141] libmachine: (no-preload-168587) Calling .DriverName
	I0914 18:09:24.663972   62207 main.go:141] libmachine: (no-preload-168587) Calling .DriverName
	I0914 18:09:24.664063   62207 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0914 18:09:24.664114   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetSSHHostname
	I0914 18:09:24.664195   62207 ssh_runner.go:195] Run: cat /version.json
	I0914 18:09:24.664221   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetSSHHostname
	I0914 18:09:24.666971   62207 main.go:141] libmachine: (no-preload-168587) DBG | domain no-preload-168587 has defined MAC address 52:54:00:4c:40:8a in network mk-no-preload-168587
	I0914 18:09:24.667255   62207 main.go:141] libmachine: (no-preload-168587) DBG | domain no-preload-168587 has defined MAC address 52:54:00:4c:40:8a in network mk-no-preload-168587
	I0914 18:09:24.667398   62207 main.go:141] libmachine: (no-preload-168587) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:40:8a", ip: ""} in network mk-no-preload-168587: {Iface:virbr2 ExpiryTime:2024-09-14 18:59:50 +0000 UTC Type:0 Mac:52:54:00:4c:40:8a Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:no-preload-168587 Clientid:01:52:54:00:4c:40:8a}
	I0914 18:09:24.667433   62207 main.go:141] libmachine: (no-preload-168587) DBG | domain no-preload-168587 has defined IP address 192.168.39.38 and MAC address 52:54:00:4c:40:8a in network mk-no-preload-168587
	I0914 18:09:24.667555   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetSSHPort
	I0914 18:09:24.667753   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetSSHKeyPath
	I0914 18:09:24.667787   62207 main.go:141] libmachine: (no-preload-168587) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:40:8a", ip: ""} in network mk-no-preload-168587: {Iface:virbr2 ExpiryTime:2024-09-14 18:59:50 +0000 UTC Type:0 Mac:52:54:00:4c:40:8a Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:no-preload-168587 Clientid:01:52:54:00:4c:40:8a}
	I0914 18:09:24.667816   62207 main.go:141] libmachine: (no-preload-168587) DBG | domain no-preload-168587 has defined IP address 192.168.39.38 and MAC address 52:54:00:4c:40:8a in network mk-no-preload-168587
	I0914 18:09:24.667913   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetSSHUsername
	I0914 18:09:24.667988   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetSSHPort
	I0914 18:09:24.668058   62207 sshutil.go:53] new ssh client: &{IP:192.168.39.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/no-preload-168587/id_rsa Username:docker}
	I0914 18:09:24.668109   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetSSHKeyPath
	I0914 18:09:24.668236   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetSSHUsername
	I0914 18:09:24.668356   62207 sshutil.go:53] new ssh client: &{IP:192.168.39.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/no-preload-168587/id_rsa Username:docker}
	I0914 18:09:24.743805   62207 ssh_runner.go:195] Run: systemctl --version
	I0914 18:09:24.776583   62207 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0914 18:09:24.924635   62207 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0914 18:09:24.930891   62207 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0914 18:09:24.930979   62207 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0914 18:09:24.952228   62207 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0914 18:09:24.952258   62207 start.go:495] detecting cgroup driver to use...
	I0914 18:09:24.952344   62207 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0914 18:09:24.967770   62207 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0914 18:09:24.983218   62207 docker.go:217] disabling cri-docker service (if available) ...
	I0914 18:09:24.983280   62207 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0914 18:09:24.997311   62207 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0914 18:09:25.011736   62207 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0914 18:09:25.135920   62207 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0914 18:09:25.323727   62207 docker.go:233] disabling docker service ...
	I0914 18:09:25.323793   62207 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0914 18:09:25.341243   62207 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0914 18:09:25.358703   62207 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0914 18:09:25.495826   62207 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0914 18:09:25.621684   62207 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0914 18:09:25.637386   62207 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0914 18:09:25.655826   62207 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0914 18:09:25.655947   62207 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 18:09:25.669204   62207 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0914 18:09:25.669266   62207 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 18:09:25.680265   62207 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 18:09:25.690860   62207 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 18:09:25.702002   62207 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0914 18:09:25.713256   62207 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 18:09:25.724125   62207 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 18:09:25.742195   62207 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 18:09:25.752680   62207 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0914 18:09:25.762842   62207 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0914 18:09:25.762920   62207 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0914 18:09:25.775680   62207 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0914 18:09:25.785190   62207 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 18:09:25.907175   62207 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0914 18:09:25.995654   62207 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0914 18:09:25.995731   62207 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0914 18:09:26.000829   62207 start.go:563] Will wait 60s for crictl version
	I0914 18:09:26.000896   62207 ssh_runner.go:195] Run: which crictl
	I0914 18:09:26.004522   62207 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0914 18:09:26.041674   62207 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0914 18:09:26.041745   62207 ssh_runner.go:195] Run: crio --version
	I0914 18:09:26.069091   62207 ssh_runner.go:195] Run: crio --version
	I0914 18:09:26.107475   62207 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0914 18:09:26.108650   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetIP
	I0914 18:09:26.111782   62207 main.go:141] libmachine: (no-preload-168587) DBG | domain no-preload-168587 has defined MAC address 52:54:00:4c:40:8a in network mk-no-preload-168587
	I0914 18:09:26.112110   62207 main.go:141] libmachine: (no-preload-168587) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:40:8a", ip: ""} in network mk-no-preload-168587: {Iface:virbr2 ExpiryTime:2024-09-14 18:59:50 +0000 UTC Type:0 Mac:52:54:00:4c:40:8a Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:no-preload-168587 Clientid:01:52:54:00:4c:40:8a}
	I0914 18:09:26.112139   62207 main.go:141] libmachine: (no-preload-168587) DBG | domain no-preload-168587 has defined IP address 192.168.39.38 and MAC address 52:54:00:4c:40:8a in network mk-no-preload-168587
	I0914 18:09:26.112279   62207 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0914 18:09:26.116339   62207 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0914 18:09:26.128616   62207 kubeadm.go:883] updating cluster {Name:no-preload-168587 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19643/minikube-v1.34.0-1726281733-19643-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726281268-19643@sha256:cce8be0c1ac4e3d852132008ef1cc1dcf5b79f708d025db83f146ae65db32e8e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-168587 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.38 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0914 18:09:26.128755   62207 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0914 18:09:26.128796   62207 ssh_runner.go:195] Run: sudo crictl images --output json
	I0914 18:09:26.165175   62207 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0914 18:09:26.165197   62207 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.1 registry.k8s.io/kube-controller-manager:v1.31.1 registry.k8s.io/kube-scheduler:v1.31.1 registry.k8s.io/kube-proxy:v1.31.1 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.15-0 registry.k8s.io/coredns/coredns:v1.11.3 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0914 18:09:26.165282   62207 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.1
	I0914 18:09:26.165301   62207 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I0914 18:09:26.165302   62207 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I0914 18:09:26.165276   62207 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 18:09:26.165346   62207 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.1
	I0914 18:09:26.165309   62207 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.3
	I0914 18:09:26.165443   62207 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.1
	I0914 18:09:26.165451   62207 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.1
	I0914 18:09:26.166853   62207 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0914 18:09:26.166858   62207 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.1
	I0914 18:09:26.166864   62207 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.1
	I0914 18:09:26.166873   62207 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I0914 18:09:26.166852   62207 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 18:09:26.166852   62207 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.3: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.3
	I0914 18:09:26.166911   62207 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.1
	I0914 18:09:26.166928   62207 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.1
	I0914 18:09:26.366393   62207 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.1
	I0914 18:09:26.398127   62207 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I0914 18:09:26.401173   62207 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.1
	I0914 18:09:26.405861   62207 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.1" does not exist at hash "175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1" in container runtime
	I0914 18:09:26.405910   62207 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.1
	I0914 18:09:26.405982   62207 ssh_runner.go:195] Run: which crictl
	I0914 18:09:26.410513   62207 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.15-0
	I0914 18:09:26.411414   62207 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.3
	I0914 18:09:26.416692   62207 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.1
	I0914 18:09:26.417710   62207 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.1
	I0914 18:09:26.643066   62207 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.1" does not exist at hash "9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b" in container runtime
	I0914 18:09:26.643114   62207 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.1
	I0914 18:09:26.643177   62207 ssh_runner.go:195] Run: which crictl
	I0914 18:09:26.643195   62207 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I0914 18:09:26.643242   62207 cache_images.go:116] "registry.k8s.io/etcd:3.5.15-0" needs transfer: "registry.k8s.io/etcd:3.5.15-0" does not exist at hash "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" in container runtime
	I0914 18:09:26.643278   62207 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.3" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.3" does not exist at hash "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6" in container runtime
	I0914 18:09:26.643293   62207 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.1" does not exist at hash "6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee" in container runtime
	I0914 18:09:26.643282   62207 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.15-0
	I0914 18:09:26.643307   62207 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.3
	I0914 18:09:26.643323   62207 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.1
	I0914 18:09:26.643328   62207 ssh_runner.go:195] Run: which crictl
	I0914 18:09:26.643351   62207 ssh_runner.go:195] Run: which crictl
	I0914 18:09:26.643366   62207 ssh_runner.go:195] Run: which crictl
	I0914 18:09:26.643386   62207 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.1" needs transfer: "registry.k8s.io/kube-proxy:v1.31.1" does not exist at hash "60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561" in container runtime
	I0914 18:09:26.643412   62207 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.1
	I0914 18:09:26.643436   62207 ssh_runner.go:195] Run: which crictl
	I0914 18:09:26.654984   62207 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I0914 18:09:26.655016   62207 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I0914 18:09:26.655035   62207 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0914 18:09:26.655016   62207 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I0914 18:09:26.733881   62207 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I0914 18:09:26.733967   62207 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I0914 18:09:26.769624   62207 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I0914 18:09:26.778708   62207 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0914 18:09:26.778836   62207 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I0914 18:09:26.778855   62207 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I0914 18:09:26.821344   62207 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I0914 18:09:26.821358   62207 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I0914 18:09:26.899012   62207 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I0914 18:09:26.906693   62207 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0914 18:09:26.909875   62207 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I0914 18:09:26.916458   62207 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I0914 18:09:26.944355   62207 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I0914 18:09:26.949250   62207 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19643-8806/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.1
	I0914 18:09:26.949400   62207 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I0914 18:09:25.582447   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:09:28.084142   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:09:25.949851   63448 node_ready.go:53] node "default-k8s-diff-port-243449" has status "Ready":"False"
	I0914 18:09:26.950390   63448 node_ready.go:49] node "default-k8s-diff-port-243449" has status "Ready":"True"
	I0914 18:09:26.950418   63448 node_ready.go:38] duration metric: took 5.004868966s for node "default-k8s-diff-port-243449" to be "Ready" ...
	I0914 18:09:26.950430   63448 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 18:09:26.956875   63448 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-8v8s7" in "kube-system" namespace to be "Ready" ...
	I0914 18:09:26.963909   63448 pod_ready.go:93] pod "coredns-7c65d6cfc9-8v8s7" in "kube-system" namespace has status "Ready":"True"
	I0914 18:09:26.963935   63448 pod_ready.go:82] duration metric: took 7.027533ms for pod "coredns-7c65d6cfc9-8v8s7" in "kube-system" namespace to be "Ready" ...
	I0914 18:09:26.963945   63448 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-243449" in "kube-system" namespace to be "Ready" ...
	I0914 18:09:28.971297   63448 pod_ready.go:93] pod "etcd-default-k8s-diff-port-243449" in "kube-system" namespace has status "Ready":"True"
	I0914 18:09:28.971327   63448 pod_ready.go:82] duration metric: took 2.007374825s for pod "etcd-default-k8s-diff-port-243449" in "kube-system" namespace to be "Ready" ...
	I0914 18:09:28.971340   63448 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-243449" in "kube-system" namespace to be "Ready" ...
	I0914 18:09:28.977510   63448 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-243449" in "kube-system" namespace has status "Ready":"True"
	I0914 18:09:28.977535   63448 pod_ready.go:82] duration metric: took 6.18573ms for pod "kube-apiserver-default-k8s-diff-port-243449" in "kube-system" namespace to be "Ready" ...
	I0914 18:09:28.977557   63448 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-243449" in "kube-system" namespace to be "Ready" ...
	I0914 18:09:25.374144   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:25.874109   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:26.374422   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:26.873444   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:27.373615   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:27.873395   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:28.373886   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:28.873510   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:29.374027   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:29.873502   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:27.035840   62207 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19643-8806/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.1
	I0914 18:09:27.035956   62207 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.1
	I0914 18:09:27.040828   62207 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19643-8806/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3
	I0914 18:09:27.040939   62207 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19643-8806/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I0914 18:09:27.040941   62207 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.3
	I0914 18:09:27.041026   62207 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0
	I0914 18:09:27.048278   62207 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19643-8806/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.1
	I0914 18:09:27.048345   62207 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19643-8806/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.1
	I0914 18:09:27.048388   62207 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.1
	I0914 18:09:27.048390   62207 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.1 (exists)
	I0914 18:09:27.048446   62207 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I0914 18:09:27.048423   62207 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.1 (exists)
	I0914 18:09:27.048482   62207 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I0914 18:09:27.048431   62207 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.1
	I0914 18:09:27.052221   62207 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.15-0 (exists)
	I0914 18:09:27.052401   62207 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.1 (exists)
	I0914 18:09:27.052585   62207 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.3 (exists)
	I0914 18:09:27.330779   62207 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 18:09:29.721998   62207 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.1: (2.673483443s)
	I0914 18:09:29.722035   62207 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19643-8806/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.1 from cache
	I0914 18:09:29.722064   62207 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.1
	I0914 18:09:29.722076   62207 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.1: (2.673496811s)
	I0914 18:09:29.722112   62207 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.1 (exists)
	I0914 18:09:29.722112   62207 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.1
	I0914 18:09:29.722194   62207 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (2.391387893s)
	I0914 18:09:29.722236   62207 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0914 18:09:29.722257   62207 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 18:09:29.722297   62207 ssh_runner.go:195] Run: which crictl
	I0914 18:09:31.485714   62207 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.1: (1.76356866s)
	I0914 18:09:31.485744   62207 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19643-8806/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.1 from cache
	I0914 18:09:31.485764   62207 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.15-0
	I0914 18:09:31.485817   62207 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0
	I0914 18:09:31.485820   62207 ssh_runner.go:235] Completed: which crictl: (1.763506603s)
	I0914 18:09:31.485862   62207 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 18:09:30.583013   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:09:33.083597   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:09:30.985230   63448 pod_ready.go:103] pod "kube-controller-manager-default-k8s-diff-port-243449" in "kube-system" namespace has status "Ready":"False"
	I0914 18:09:31.984182   63448 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-243449" in "kube-system" namespace has status "Ready":"True"
	I0914 18:09:31.984203   63448 pod_ready.go:82] duration metric: took 3.006637599s for pod "kube-controller-manager-default-k8s-diff-port-243449" in "kube-system" namespace to be "Ready" ...
	I0914 18:09:31.984212   63448 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-gbkqm" in "kube-system" namespace to be "Ready" ...
	I0914 18:09:31.989786   63448 pod_ready.go:93] pod "kube-proxy-gbkqm" in "kube-system" namespace has status "Ready":"True"
	I0914 18:09:31.989812   63448 pod_ready.go:82] duration metric: took 5.592466ms for pod "kube-proxy-gbkqm" in "kube-system" namespace to be "Ready" ...
	I0914 18:09:31.989823   63448 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-243449" in "kube-system" namespace to be "Ready" ...
	I0914 18:09:31.994224   63448 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-243449" in "kube-system" namespace has status "Ready":"True"
	I0914 18:09:31.994246   63448 pod_ready.go:82] duration metric: took 4.414059ms for pod "kube-scheduler-default-k8s-diff-port-243449" in "kube-system" namespace to be "Ready" ...
	I0914 18:09:31.994258   63448 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace to be "Ready" ...
	I0914 18:09:34.001035   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:09:30.373878   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:30.874351   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:31.373651   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:31.873914   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:32.373522   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:32.874439   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:33.373991   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:33.874056   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:34.373566   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:34.874140   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:34.781678   62207 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (3.295763296s)
	I0914 18:09:34.781783   62207 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 18:09:34.781814   62207 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0: (3.295968995s)
	I0914 18:09:34.781840   62207 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19643-8806/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 from cache
	I0914 18:09:34.781868   62207 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.1
	I0914 18:09:34.781900   62207 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.1
	I0914 18:09:36.744459   62207 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.962646981s)
	I0914 18:09:36.744514   62207 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.1: (1.962587733s)
	I0914 18:09:36.744551   62207 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19643-8806/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.1 from cache
	I0914 18:09:36.744576   62207 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 18:09:36.744590   62207 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.3
	I0914 18:09:36.744658   62207 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3
	I0914 18:09:35.582596   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:09:38.083260   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:09:36.002284   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:09:38.002962   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:09:35.374151   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:35.873725   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:36.373500   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:36.873617   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:37.373826   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:37.874068   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:38.373459   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:38.873666   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:39.373936   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:39.873551   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:38.848091   62207 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3: (2.103407014s)
	I0914 18:09:38.848126   62207 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19643-8806/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 from cache
	I0914 18:09:38.848152   62207 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.1
	I0914 18:09:38.848217   62207 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.1
	I0914 18:09:38.848153   62207 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.103554199s)
	I0914 18:09:38.848283   62207 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19643-8806/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0914 18:09:38.848368   62207 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0914 18:09:40.307247   62207 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.1: (1.459002378s)
	I0914 18:09:40.307287   62207 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19643-8806/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.1 from cache
	I0914 18:09:40.307269   62207 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.458886581s)
	I0914 18:09:40.307327   62207 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0914 18:09:40.307334   62207 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0914 18:09:40.307382   62207 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0914 18:09:40.958177   62207 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19643-8806/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0914 18:09:40.958222   62207 cache_images.go:123] Successfully loaded all cached images
	I0914 18:09:40.958228   62207 cache_images.go:92] duration metric: took 14.793018447s to LoadCachedImages
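The block above is the no-preload image path: since "crictl images" found no preloaded images, each required image is first checked in the runtime with "podman image inspect", its tarball is copied from the host-side cache into /var/lib/minikube/images (skipped when "stat" shows it already exists), and it is then loaded into CRI-O's storage with "podman load -i", taking about 14.8s in total here. A rough Go sketch of that check-then-load loop follows; the paths and commands are taken from the log, while the helper names and the image list are illustrative only.

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // imagePresent returns true when the runtime already knows the tag; "podman
    // image inspect" exits non-zero otherwise, which is the "needs transfer"
    // signal visible in the log above.
    func imagePresent(tag string) bool {
    	return exec.Command("sudo", "podman", "image", "inspect", "--format", "{{.Id}}", tag).Run() == nil
    }

    // loadFromCache loads the cached tarball into CRI-O's storage only when the
    // image is missing, mirroring the stat / "podman load -i" sequence above.
    func loadFromCache(tag, tarball string) error {
    	if imagePresent(tag) {
    		return nil // already in the runtime, nothing to do
    	}
    	out, err := exec.Command("sudo", "podman", "load", "-i", tarball).CombinedOutput()
    	if err != nil {
    		return fmt.Errorf("podman load %s: %v\n%s", tarball, err, out)
    	}
    	return nil
    }

    func main() {
    	// Illustrative subset of the images handled above.
    	images := map[string]string{
    		"registry.k8s.io/kube-apiserver:v1.31.1": "/var/lib/minikube/images/kube-apiserver_v1.31.1",
    		"registry.k8s.io/etcd:3.5.15-0":          "/var/lib/minikube/images/etcd_3.5.15-0",
    	}
    	for tag, tarball := range images {
    		if err := loadFromCache(tag, tarball); err != nil {
    			fmt.Println(err)
    		}
    	}
    }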
	I0914 18:09:40.958241   62207 kubeadm.go:934] updating node { 192.168.39.38 8443 v1.31.1 crio true true} ...
	I0914 18:09:40.958347   62207 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-168587 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.38
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:no-preload-168587 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0914 18:09:40.958415   62207 ssh_runner.go:195] Run: crio config
	I0914 18:09:41.003620   62207 cni.go:84] Creating CNI manager for ""
	I0914 18:09:41.003643   62207 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0914 18:09:41.003653   62207 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0914 18:09:41.003674   62207 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.38 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-168587 NodeName:no-preload-168587 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.38"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.38 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0914 18:09:41.003850   62207 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.38
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-168587"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.38
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.38"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
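Two details of the generated kubeadm config above matter for this CI profile: leader election is disabled for the single-node control plane, and the KubeletConfiguration sets every evictionHard threshold to "0%" with imageGCHighThresholdPercent 100, so disk-pressure eviction and image garbage collection are effectively off for the small test VM. A small sketch (using the third-party gopkg.in/yaml.v3 package; the path matches the kubeadm.yaml these documents end up in, everything else is illustrative) that reads the multi-document file back and prints those thresholds:

    package main

    import (
    	"fmt"
    	"os"
    	"strings"

    	"gopkg.in/yaml.v3"
    )

    func main() {
    	raw, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml")
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	// kubeadm.yaml is a multi-document file; find the KubeletConfiguration part.
    	for _, doc := range strings.Split(string(raw), "\n---\n") {
    		var cfg struct {
    			Kind         string            `yaml:"kind"`
    			EvictionHard map[string]string `yaml:"evictionHard"`
    		}
    		if yaml.Unmarshal([]byte(doc), &cfg) == nil && cfg.Kind == "KubeletConfiguration" {
    			fmt.Println("evictionHard:", cfg.EvictionHard)
    		}
    	}
    }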
	I0914 18:09:41.003920   62207 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0914 18:09:41.014462   62207 binaries.go:44] Found k8s binaries, skipping transfer
	I0914 18:09:41.014541   62207 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0914 18:09:41.023964   62207 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0914 18:09:41.040206   62207 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0914 18:09:41.055630   62207 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2158 bytes)
	I0914 18:09:41.072881   62207 ssh_runner.go:195] Run: grep 192.168.39.38	control-plane.minikube.internal$ /etc/hosts
	I0914 18:09:41.076449   62207 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.38	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0914 18:09:41.090075   62207 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 18:09:41.210405   62207 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0914 18:09:41.228173   62207 certs.go:68] Setting up /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/no-preload-168587 for IP: 192.168.39.38
	I0914 18:09:41.228197   62207 certs.go:194] generating shared ca certs ...
	I0914 18:09:41.228213   62207 certs.go:226] acquiring lock for ca certs: {Name:mkb663a3180967f5f94f0c355b2cd55067394331 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 18:09:41.228383   62207 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19643-8806/.minikube/ca.key
	I0914 18:09:41.228443   62207 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19643-8806/.minikube/proxy-client-ca.key
	I0914 18:09:41.228457   62207 certs.go:256] generating profile certs ...
	I0914 18:09:41.228586   62207 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/no-preload-168587/client.key
	I0914 18:09:41.228667   62207 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/no-preload-168587/apiserver.key.d11ec263
	I0914 18:09:41.228731   62207 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/no-preload-168587/proxy-client.key
	I0914 18:09:41.228889   62207 certs.go:484] found cert: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/16016.pem (1338 bytes)
	W0914 18:09:41.228932   62207 certs.go:480] ignoring /home/jenkins/minikube-integration/19643-8806/.minikube/certs/16016_empty.pem, impossibly tiny 0 bytes
	I0914 18:09:41.228944   62207 certs.go:484] found cert: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca-key.pem (1679 bytes)
	I0914 18:09:41.228976   62207 certs.go:484] found cert: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca.pem (1082 bytes)
	I0914 18:09:41.229008   62207 certs.go:484] found cert: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/cert.pem (1123 bytes)
	I0914 18:09:41.229045   62207 certs.go:484] found cert: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/key.pem (1675 bytes)
	I0914 18:09:41.229102   62207 certs.go:484] found cert: /home/jenkins/minikube-integration/19643-8806/.minikube/files/etc/ssl/certs/160162.pem (1708 bytes)
	I0914 18:09:41.229913   62207 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0914 18:09:41.259871   62207 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0914 18:09:41.286359   62207 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0914 18:09:41.315410   62207 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0914 18:09:41.345541   62207 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/no-preload-168587/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0914 18:09:41.380128   62207 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/no-preload-168587/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0914 18:09:41.411130   62207 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/no-preload-168587/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0914 18:09:41.442136   62207 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/no-preload-168587/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0914 18:09:41.464823   62207 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0914 18:09:41.488153   62207 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/certs/16016.pem --> /usr/share/ca-certificates/16016.pem (1338 bytes)
	I0914 18:09:41.513788   62207 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/files/etc/ssl/certs/160162.pem --> /usr/share/ca-certificates/160162.pem (1708 bytes)
	I0914 18:09:41.537256   62207 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0914 18:09:41.553550   62207 ssh_runner.go:195] Run: openssl version
	I0914 18:09:41.559366   62207 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/160162.pem && ln -fs /usr/share/ca-certificates/160162.pem /etc/ssl/certs/160162.pem"
	I0914 18:09:41.571498   62207 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/160162.pem
	I0914 18:09:41.576889   62207 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 14 17:01 /usr/share/ca-certificates/160162.pem
	I0914 18:09:41.576947   62207 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/160162.pem
	I0914 18:09:41.583651   62207 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/160162.pem /etc/ssl/certs/3ec20f2e.0"
	I0914 18:09:41.594743   62207 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0914 18:09:41.605811   62207 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0914 18:09:41.610034   62207 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 14 16:44 /usr/share/ca-certificates/minikubeCA.pem
	I0914 18:09:41.610103   62207 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0914 18:09:41.615810   62207 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0914 18:09:41.627145   62207 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16016.pem && ln -fs /usr/share/ca-certificates/16016.pem /etc/ssl/certs/16016.pem"
	I0914 18:09:41.639956   62207 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16016.pem
	I0914 18:09:41.644647   62207 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 14 17:01 /usr/share/ca-certificates/16016.pem
	I0914 18:09:41.644705   62207 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16016.pem
	I0914 18:09:41.650281   62207 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16016.pem /etc/ssl/certs/51391683.0"
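The three blocks above install each CA bundle under /usr/share/ca-certificates and then create the hash-named symlink (<subject-hash>.0) in /etc/ssl/certs, which is where OpenSSL-based clients look up trusted CAs; the hash comes from "openssl x509 -hash -noout". A small Go sketch of that step, shelling out to the same openssl and ln invocations shown in the log (the helper name and path are illustrative):

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"strings"
    )

    // trustCert computes the OpenSSL subject hash of a CA certificate and links
    // /etc/ssl/certs/<hash>.0 to it, mirroring the "openssl x509 -hash" plus
    // "ln -fs" sequence in the log above.
    func trustCert(pemPath string) error {
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
    	if err != nil {
    		return err
    	}
    	link := "/etc/ssl/certs/" + strings.TrimSpace(string(out)) + ".0"
    	// -f replaces a stale link, -s makes it symbolic; same flags as the log.
    	return exec.Command("sudo", "ln", "-fs", pemPath, link).Run()
    }

    func main() {
    	if err := trustCert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    }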
	I0914 18:09:41.662354   62207 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0914 18:09:41.667150   62207 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0914 18:09:41.673263   62207 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0914 18:09:41.680660   62207 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0914 18:09:41.687283   62207 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0914 18:09:41.693256   62207 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0914 18:09:41.698969   62207 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
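The -checkend 86400 invocations above ask openssl whether each control-plane certificate will still be valid 86400 seconds (24 hours) from now; a failing check would trigger regeneration of that certificate before the control plane is restarted. A minimal Go equivalent using crypto/x509 is sketched below; the helper and the hard-coded path are illustrative, the real check simply shells out to openssl as shown in the log.

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    // validFor reports whether the PEM certificate at path is still valid d from
    // now, i.e. the Go analogue of "openssl x509 -noout -checkend <seconds>".
    func validFor(path string, d time.Duration) (bool, error) {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return false, err
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		return false, fmt.Errorf("%s: no PEM block found", path)
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		return false, err
    	}
    	return time.Now().Add(d).Before(cert.NotAfter), nil
    }

    func main() {
    	ok, err := validFor("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	fmt.Println("valid for next 24h:", ok)
    }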
	I0914 18:09:41.704543   62207 kubeadm.go:392] StartCluster: {Name:no-preload-168587 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19643/minikube-v1.34.0-1726281733-19643-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726281268-19643@sha256:cce8be0c1ac4e3d852132008ef1cc1dcf5b79f708d025db83f146ae65db32e8e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-168587 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.38 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0914 18:09:41.704671   62207 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0914 18:09:41.704750   62207 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0914 18:09:41.741255   62207 cri.go:89] found id: ""
	I0914 18:09:41.741354   62207 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0914 18:09:41.751360   62207 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0914 18:09:41.751377   62207 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0914 18:09:41.751417   62207 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0914 18:09:41.761492   62207 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0914 18:09:41.762591   62207 kubeconfig.go:125] found "no-preload-168587" server: "https://192.168.39.38:8443"
	I0914 18:09:41.764876   62207 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0914 18:09:41.774868   62207 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.38
	I0914 18:09:41.774901   62207 kubeadm.go:1160] stopping kube-system containers ...
	I0914 18:09:41.774913   62207 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0914 18:09:41.774969   62207 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0914 18:09:41.810189   62207 cri.go:89] found id: ""
	I0914 18:09:41.810248   62207 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0914 18:09:41.827903   62207 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0914 18:09:41.837504   62207 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0914 18:09:41.837532   62207 kubeadm.go:157] found existing configuration files:
	
	I0914 18:09:41.837585   62207 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0914 18:09:41.846260   62207 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0914 18:09:41.846322   62207 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0914 18:09:41.855350   62207 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0914 18:09:41.864096   62207 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0914 18:09:41.864153   62207 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0914 18:09:41.874772   62207 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0914 18:09:41.885427   62207 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0914 18:09:41.885502   62207 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0914 18:09:41.897121   62207 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0914 18:09:41.906955   62207 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0914 18:09:41.907020   62207 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0914 18:09:41.918253   62207 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0914 18:09:41.930134   62207 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 18:09:40.084800   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:09:42.581757   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:09:44.583611   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:09:40.502272   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:09:43.001471   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:09:40.374231   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:40.873955   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:41.374306   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:41.873511   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:42.373419   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:42.874077   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:43.374329   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:43.873782   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:44.373478   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:44.874120   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:42.054830   62207 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 18:09:42.754174   62207 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0914 18:09:42.973037   62207 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 18:09:43.043041   62207 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0914 18:09:43.119704   62207 api_server.go:52] waiting for apiserver process to appear ...
	I0914 18:09:43.119805   62207 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:43.620541   62207 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:44.120849   62207 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:44.139382   62207 api_server.go:72] duration metric: took 1.019679094s to wait for apiserver process to appear ...
	I0914 18:09:44.139406   62207 api_server.go:88] waiting for apiserver healthz status ...
	I0914 18:09:44.139424   62207 api_server.go:253] Checking apiserver healthz at https://192.168.39.38:8443/healthz ...
	I0914 18:09:44.139876   62207 api_server.go:269] stopped: https://192.168.39.38:8443/healthz: Get "https://192.168.39.38:8443/healthz": dial tcp 192.168.39.38:8443: connect: connection refused
	I0914 18:09:44.639981   62207 api_server.go:253] Checking apiserver healthz at https://192.168.39.38:8443/healthz ...
	I0914 18:09:47.262096   62207 api_server.go:279] https://192.168.39.38:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0914 18:09:47.262132   62207 api_server.go:103] status: https://192.168.39.38:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0914 18:09:47.262151   62207 api_server.go:253] Checking apiserver healthz at https://192.168.39.38:8443/healthz ...
	I0914 18:09:47.280626   62207 api_server.go:279] https://192.168.39.38:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0914 18:09:47.280652   62207 api_server.go:103] status: https://192.168.39.38:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0914 18:09:47.640152   62207 api_server.go:253] Checking apiserver healthz at https://192.168.39.38:8443/healthz ...
	I0914 18:09:47.646640   62207 api_server.go:279] https://192.168.39.38:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0914 18:09:47.646676   62207 api_server.go:103] status: https://192.168.39.38:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0914 18:09:48.140256   62207 api_server.go:253] Checking apiserver healthz at https://192.168.39.38:8443/healthz ...
	I0914 18:09:48.145520   62207 api_server.go:279] https://192.168.39.38:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0914 18:09:48.145557   62207 api_server.go:103] status: https://192.168.39.38:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0914 18:09:48.640147   62207 api_server.go:253] Checking apiserver healthz at https://192.168.39.38:8443/healthz ...
	I0914 18:09:48.645032   62207 api_server.go:279] https://192.168.39.38:8443/healthz returned 200:
	ok
	I0914 18:09:48.654567   62207 api_server.go:141] control plane version: v1.31.1
	I0914 18:09:48.654600   62207 api_server.go:131] duration metric: took 4.515188826s to wait for apiserver health ...
	I0914 18:09:48.654609   62207 cni.go:84] Creating CNI manager for ""
	I0914 18:09:48.654615   62207 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0914 18:09:48.656828   62207 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
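	(Illustration, not part of the captured log: the api_server.go entries above show minikube probing https://192.168.39.38:8443/healthz roughly every 500ms, getting 500 while the rbac and scheduling post-start hooks are still failing, then proceeding once it returns 200. A minimal sketch of such a poll loop is below; it is not minikube's actual code, and the skip-verify TLS config is an assumption for brevity.)

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	// waitForHealthz probes the apiserver's /healthz until it returns 200 or the deadline passes.
	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
			Timeout:   2 * time.Second,
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil // corresponds to "healthz returned 200: ok" above
				}
				// a 500 here corresponds to the "[-]poststarthook/... failed" dumps above
			}
			time.Sleep(500 * time.Millisecond) // the log shows ~500ms between probes
		}
		return fmt.Errorf("apiserver %s not healthy within %s", url, timeout)
	}

	func main() {
		if err := waitForHealthz("https://192.168.39.38:8443/healthz", 4*time.Minute); err != nil {
			fmt.Println(err)
		}
	}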
	I0914 18:09:47.082431   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:09:49.582001   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:09:45.500938   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:09:48.002332   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:09:45.374173   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:45.873537   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:46.373462   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:46.874196   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:47.374297   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:47.874112   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:48.373627   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:48.873473   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:49.374289   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:49.873411   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:48.658151   62207 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0914 18:09:48.692232   62207 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0914 18:09:48.734461   62207 system_pods.go:43] waiting for kube-system pods to appear ...
	I0914 18:09:48.746689   62207 system_pods.go:59] 8 kube-system pods found
	I0914 18:09:48.746723   62207 system_pods.go:61] "coredns-7c65d6cfc9-mwhvh" [38800077-a7ff-4c8c-8375-4efac2ae40b8] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0914 18:09:48.746733   62207 system_pods.go:61] "etcd-no-preload-168587" [bdb166bb-8c07-448c-a97c-2146e84f139b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0914 18:09:48.746744   62207 system_pods.go:61] "kube-apiserver-no-preload-168587" [8ad59d56-cb86-4028-bf16-3733eb32ad8d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0914 18:09:48.746752   62207 system_pods.go:61] "kube-controller-manager-no-preload-168587" [fd66d0aa-cc35-4330-aa6b-571dbeaa6490] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0914 18:09:48.746761   62207 system_pods.go:61] "kube-proxy-lvp9h" [75c154d8-c76d-49eb-9497-dd17199e9d20] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0914 18:09:48.746771   62207 system_pods.go:61] "kube-scheduler-no-preload-168587" [858c948b-9025-48ab-907a-5b69aefbb24c] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0914 18:09:48.746782   62207 system_pods.go:61] "metrics-server-6867b74b74-n276z" [69e25ed4-dc8e-4c68-955e-e7226d066ac4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 18:09:48.746790   62207 system_pods.go:61] "storage-provisioner" [41c92694-2d3a-4025-8e28-ddea7b9b9c5b] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0914 18:09:48.746801   62207 system_pods.go:74] duration metric: took 12.315296ms to wait for pod list to return data ...
	I0914 18:09:48.746811   62207 node_conditions.go:102] verifying NodePressure condition ...
	I0914 18:09:48.751399   62207 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0914 18:09:48.751428   62207 node_conditions.go:123] node cpu capacity is 2
	I0914 18:09:48.751440   62207 node_conditions.go:105] duration metric: took 4.625335ms to run NodePressure ...
	I0914 18:09:48.751461   62207 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 18:09:49.051211   62207 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0914 18:09:49.057333   62207 kubeadm.go:739] kubelet initialised
	I0914 18:09:49.057366   62207 kubeadm.go:740] duration metric: took 6.124032ms waiting for restarted kubelet to initialise ...
	I0914 18:09:49.057379   62207 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 18:09:49.062570   62207 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-mwhvh" in "kube-system" namespace to be "Ready" ...
	I0914 18:09:51.069219   62207 pod_ready.go:103] pod "coredns-7c65d6cfc9-mwhvh" in "kube-system" namespace has status "Ready":"False"
	I0914 18:09:51.588043   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:09:54.082931   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:09:50.499759   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:09:52.502450   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:09:55.000767   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:09:50.374229   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:50.873429   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:51.373547   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:51.874090   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:52.373513   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:52.874222   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:53.374123   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:53.873893   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:54.373451   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:54.873583   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:53.069338   62207 pod_ready.go:103] pod "coredns-7c65d6cfc9-mwhvh" in "kube-system" namespace has status "Ready":"False"
	I0914 18:09:53.570290   62207 pod_ready.go:93] pod "coredns-7c65d6cfc9-mwhvh" in "kube-system" namespace has status "Ready":"True"
	I0914 18:09:53.570323   62207 pod_ready.go:82] duration metric: took 4.507716999s for pod "coredns-7c65d6cfc9-mwhvh" in "kube-system" namespace to be "Ready" ...
	I0914 18:09:53.570333   62207 pod_ready.go:79] waiting up to 4m0s for pod "etcd-no-preload-168587" in "kube-system" namespace to be "Ready" ...
	I0914 18:09:55.577317   62207 pod_ready.go:103] pod "etcd-no-preload-168587" in "kube-system" namespace has status "Ready":"False"
	I0914 18:09:56.581937   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:09:58.583632   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:09:57.000913   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:09:59.001429   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:09:55.374078   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:55.873810   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:09:55.873965   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:09:55.913981   62996 cri.go:89] found id: ""
	I0914 18:09:55.914011   62996 logs.go:276] 0 containers: []
	W0914 18:09:55.914023   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:09:55.914030   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:09:55.914091   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:09:55.948423   62996 cri.go:89] found id: ""
	I0914 18:09:55.948459   62996 logs.go:276] 0 containers: []
	W0914 18:09:55.948467   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:09:55.948472   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:09:55.948530   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:09:55.986470   62996 cri.go:89] found id: ""
	I0914 18:09:55.986507   62996 logs.go:276] 0 containers: []
	W0914 18:09:55.986520   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:09:55.986530   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:09:55.986598   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:09:56.022172   62996 cri.go:89] found id: ""
	I0914 18:09:56.022200   62996 logs.go:276] 0 containers: []
	W0914 18:09:56.022214   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:09:56.022220   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:09:56.022267   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:09:56.065503   62996 cri.go:89] found id: ""
	I0914 18:09:56.065552   62996 logs.go:276] 0 containers: []
	W0914 18:09:56.065564   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:09:56.065572   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:09:56.065632   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:09:56.101043   62996 cri.go:89] found id: ""
	I0914 18:09:56.101072   62996 logs.go:276] 0 containers: []
	W0914 18:09:56.101082   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:09:56.101089   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:09:56.101156   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:09:56.133820   62996 cri.go:89] found id: ""
	I0914 18:09:56.133852   62996 logs.go:276] 0 containers: []
	W0914 18:09:56.133864   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:09:56.133872   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:09:56.133925   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:09:56.172334   62996 cri.go:89] found id: ""
	I0914 18:09:56.172358   62996 logs.go:276] 0 containers: []
	W0914 18:09:56.172369   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:09:56.172380   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:09:56.172398   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:09:56.186476   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:09:56.186513   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:09:56.308336   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:09:56.308366   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:09:56.308388   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:09:56.386374   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:09:56.386410   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:09:56.426333   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:09:56.426360   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
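	(Illustration, not part of the captured log: the cri.go/logs.go cycle above runs "sudo crictl ps -a --quiet --name=<component>" for each control-plane component and treats empty output as "No container was found matching ...", then falls back to gathering kubelet, dmesg, and CRI-O logs. The sketch below reproduces only the container sweep; it assumes crictl is available on the host and is not the runner's real implementation.)

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// Component names taken from the log entries above.
		components := []string{
			"kube-apiserver", "etcd", "coredns", "kube-scheduler",
			"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
		}
		for _, name := range components {
			out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
			if err != nil {
				fmt.Printf("crictl failed for %q: %v\n", name, err)
				continue
			}
			ids := strings.Fields(string(out))
			if len(ids) == 0 {
				// Matches the "No container was found matching ..." warnings above.
				fmt.Printf("no container was found matching %q\n", name)
				continue
			}
			fmt.Printf("%s: %d container(s): %v\n", name, len(ids), ids)
		}
	}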
	I0914 18:09:58.978306   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:58.991093   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:09:58.991175   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:09:59.029861   62996 cri.go:89] found id: ""
	I0914 18:09:59.029890   62996 logs.go:276] 0 containers: []
	W0914 18:09:59.029899   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:09:59.029905   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:09:59.029962   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:09:59.067744   62996 cri.go:89] found id: ""
	I0914 18:09:59.067772   62996 logs.go:276] 0 containers: []
	W0914 18:09:59.067783   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:09:59.067791   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:09:59.067973   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:09:59.105666   62996 cri.go:89] found id: ""
	I0914 18:09:59.105695   62996 logs.go:276] 0 containers: []
	W0914 18:09:59.105707   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:09:59.105714   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:09:59.105796   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:09:59.153884   62996 cri.go:89] found id: ""
	I0914 18:09:59.153916   62996 logs.go:276] 0 containers: []
	W0914 18:09:59.153929   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:09:59.153937   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:09:59.154007   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:09:59.191462   62996 cri.go:89] found id: ""
	I0914 18:09:59.191492   62996 logs.go:276] 0 containers: []
	W0914 18:09:59.191503   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:09:59.191509   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:09:59.191574   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:09:59.246299   62996 cri.go:89] found id: ""
	I0914 18:09:59.246326   62996 logs.go:276] 0 containers: []
	W0914 18:09:59.246336   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:09:59.246357   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:09:59.246413   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:09:59.292821   62996 cri.go:89] found id: ""
	I0914 18:09:59.292847   62996 logs.go:276] 0 containers: []
	W0914 18:09:59.292856   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:09:59.292862   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:09:59.292918   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:09:59.334130   62996 cri.go:89] found id: ""
	I0914 18:09:59.334176   62996 logs.go:276] 0 containers: []
	W0914 18:09:59.334187   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:09:59.334198   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:09:59.334211   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:09:59.386847   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:09:59.386884   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:09:59.400163   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:09:59.400193   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:09:59.476375   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:09:59.476400   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:09:59.476416   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:09:59.554564   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:09:59.554599   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:09:57.578803   62207 pod_ready.go:103] pod "etcd-no-preload-168587" in "kube-system" namespace has status "Ready":"False"
	I0914 18:09:59.576525   62207 pod_ready.go:93] pod "etcd-no-preload-168587" in "kube-system" namespace has status "Ready":"True"
	I0914 18:09:59.576547   62207 pod_ready.go:82] duration metric: took 6.006207927s for pod "etcd-no-preload-168587" in "kube-system" namespace to be "Ready" ...
	I0914 18:09:59.576556   62207 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-no-preload-168587" in "kube-system" namespace to be "Ready" ...
	I0914 18:10:00.084027   62207 pod_ready.go:93] pod "kube-apiserver-no-preload-168587" in "kube-system" namespace has status "Ready":"True"
	I0914 18:10:00.084054   62207 pod_ready.go:82] duration metric: took 507.490867ms for pod "kube-apiserver-no-preload-168587" in "kube-system" namespace to be "Ready" ...
	I0914 18:10:00.084067   62207 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-no-preload-168587" in "kube-system" namespace to be "Ready" ...
	I0914 18:10:00.089044   62207 pod_ready.go:93] pod "kube-controller-manager-no-preload-168587" in "kube-system" namespace has status "Ready":"True"
	I0914 18:10:00.089068   62207 pod_ready.go:82] duration metric: took 4.991847ms for pod "kube-controller-manager-no-preload-168587" in "kube-system" namespace to be "Ready" ...
	I0914 18:10:00.089079   62207 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-lvp9h" in "kube-system" namespace to be "Ready" ...
	I0914 18:10:00.093160   62207 pod_ready.go:93] pod "kube-proxy-lvp9h" in "kube-system" namespace has status "Ready":"True"
	I0914 18:10:00.093179   62207 pod_ready.go:82] duration metric: took 4.093257ms for pod "kube-proxy-lvp9h" in "kube-system" namespace to be "Ready" ...
	I0914 18:10:00.093198   62207 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-no-preload-168587" in "kube-system" namespace to be "Ready" ...
	I0914 18:10:00.096786   62207 pod_ready.go:93] pod "kube-scheduler-no-preload-168587" in "kube-system" namespace has status "Ready":"True"
	I0914 18:10:00.096800   62207 pod_ready.go:82] duration metric: took 3.594996ms for pod "kube-scheduler-no-preload-168587" in "kube-system" namespace to be "Ready" ...
	I0914 18:10:00.096807   62207 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace to be "Ready" ...
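	(Illustration, not part of the captured log: the pod_ready.go entries above wait up to 4m0s for each kube-system pod's Ready condition. One way to approximate the same check outside minikube's internal poller is kubectl wait, sketched below; using the profile name "no-preload-168587" as the kubectl context is an assumption.)

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Wait for the metrics-server pod named in the log to report Ready, with the same 4m budget.
		cmd := exec.Command("kubectl", "--context", "no-preload-168587",
			"wait", "--namespace", "kube-system",
			"--for=condition=Ready", "pod/metrics-server-6867b74b74-n276z",
			"--timeout=4m0s")
		out, err := cmd.CombinedOutput()
		fmt.Print(string(out))
		if err != nil {
			fmt.Println("pod did not become Ready:", err)
		}
	}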
	I0914 18:10:01.082601   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:03.581290   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:01.502864   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:04.001645   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:02.095079   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:10:02.108933   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:10:02.109003   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:10:02.141838   62996 cri.go:89] found id: ""
	I0914 18:10:02.141861   62996 logs.go:276] 0 containers: []
	W0914 18:10:02.141869   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:10:02.141875   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:10:02.141934   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:10:02.176437   62996 cri.go:89] found id: ""
	I0914 18:10:02.176460   62996 logs.go:276] 0 containers: []
	W0914 18:10:02.176467   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:10:02.176472   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:10:02.176516   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:10:02.210341   62996 cri.go:89] found id: ""
	I0914 18:10:02.210369   62996 logs.go:276] 0 containers: []
	W0914 18:10:02.210381   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:10:02.210388   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:10:02.210434   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:10:02.243343   62996 cri.go:89] found id: ""
	I0914 18:10:02.243373   62996 logs.go:276] 0 containers: []
	W0914 18:10:02.243384   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:10:02.243391   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:10:02.243461   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:10:02.276630   62996 cri.go:89] found id: ""
	I0914 18:10:02.276657   62996 logs.go:276] 0 containers: []
	W0914 18:10:02.276668   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:10:02.276675   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:10:02.276736   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:10:02.311626   62996 cri.go:89] found id: ""
	I0914 18:10:02.311659   62996 logs.go:276] 0 containers: []
	W0914 18:10:02.311674   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:10:02.311682   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:10:02.311748   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:10:02.345868   62996 cri.go:89] found id: ""
	I0914 18:10:02.345892   62996 logs.go:276] 0 containers: []
	W0914 18:10:02.345901   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:10:02.345908   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:10:02.345966   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:10:02.380111   62996 cri.go:89] found id: ""
	I0914 18:10:02.380139   62996 logs.go:276] 0 containers: []
	W0914 18:10:02.380147   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:10:02.380156   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:10:02.380167   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:10:02.421061   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:10:02.421094   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:10:02.474596   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:10:02.474633   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:10:02.487460   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:10:02.487491   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:10:02.554178   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:10:02.554206   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:10:02.554218   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:10:05.138863   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:10:05.152233   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:10:05.152299   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:10:05.187891   62996 cri.go:89] found id: ""
	I0914 18:10:05.187918   62996 logs.go:276] 0 containers: []
	W0914 18:10:05.187929   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:10:05.187936   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:10:05.188000   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:10:05.231634   62996 cri.go:89] found id: ""
	I0914 18:10:05.231667   62996 logs.go:276] 0 containers: []
	W0914 18:10:05.231679   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:10:05.231686   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:10:05.231748   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:10:05.273445   62996 cri.go:89] found id: ""
	I0914 18:10:05.273469   62996 logs.go:276] 0 containers: []
	W0914 18:10:05.273478   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:10:05.273492   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:10:05.273551   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:10:05.308168   62996 cri.go:89] found id: ""
	I0914 18:10:05.308205   62996 logs.go:276] 0 containers: []
	W0914 18:10:05.308216   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:10:05.308224   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:10:05.308285   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:10:02.103118   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:04.103451   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:06.603049   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:05.582900   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:08.083020   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:06.500670   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:08.500752   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:05.343292   62996 cri.go:89] found id: ""
	I0914 18:10:05.343325   62996 logs.go:276] 0 containers: []
	W0914 18:10:05.343336   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:10:05.343343   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:10:05.343404   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:10:05.380420   62996 cri.go:89] found id: ""
	I0914 18:10:05.380445   62996 logs.go:276] 0 containers: []
	W0914 18:10:05.380452   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:10:05.380458   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:10:05.380503   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:10:05.415585   62996 cri.go:89] found id: ""
	I0914 18:10:05.415609   62996 logs.go:276] 0 containers: []
	W0914 18:10:05.415617   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:10:05.415623   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:10:05.415687   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:10:05.457170   62996 cri.go:89] found id: ""
	I0914 18:10:05.457198   62996 logs.go:276] 0 containers: []
	W0914 18:10:05.457208   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:10:05.457219   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:10:05.457234   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:10:05.495647   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:10:05.495681   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:10:05.543775   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:10:05.543813   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:10:05.556717   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:10:05.556750   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:10:05.624690   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:10:05.624713   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:10:05.624728   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:10:08.205292   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:10:08.217720   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:10:08.217786   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:10:08.250560   62996 cri.go:89] found id: ""
	I0914 18:10:08.250590   62996 logs.go:276] 0 containers: []
	W0914 18:10:08.250598   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:10:08.250604   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:10:08.250669   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:10:08.282085   62996 cri.go:89] found id: ""
	I0914 18:10:08.282115   62996 logs.go:276] 0 containers: []
	W0914 18:10:08.282123   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:10:08.282129   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:10:08.282202   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:10:08.314350   62996 cri.go:89] found id: ""
	I0914 18:10:08.314379   62996 logs.go:276] 0 containers: []
	W0914 18:10:08.314391   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:10:08.314398   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:10:08.314461   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:10:08.347672   62996 cri.go:89] found id: ""
	I0914 18:10:08.347703   62996 logs.go:276] 0 containers: []
	W0914 18:10:08.347714   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:10:08.347721   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:10:08.347780   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:10:08.385583   62996 cri.go:89] found id: ""
	I0914 18:10:08.385616   62996 logs.go:276] 0 containers: []
	W0914 18:10:08.385628   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:10:08.385636   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:10:08.385717   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:10:08.421135   62996 cri.go:89] found id: ""
	I0914 18:10:08.421166   62996 logs.go:276] 0 containers: []
	W0914 18:10:08.421176   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:10:08.421184   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:10:08.421242   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:10:08.456784   62996 cri.go:89] found id: ""
	I0914 18:10:08.456811   62996 logs.go:276] 0 containers: []
	W0914 18:10:08.456821   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:10:08.456828   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:10:08.456890   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:10:08.491658   62996 cri.go:89] found id: ""
	I0914 18:10:08.491690   62996 logs.go:276] 0 containers: []
	W0914 18:10:08.491698   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:10:08.491707   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:10:08.491718   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:10:08.544008   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:10:08.544045   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:10:08.557780   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:10:08.557813   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:10:08.631319   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:10:08.631354   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:10:08.631371   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:10:08.709845   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:10:08.709882   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:10:08.604603   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:11.103035   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:10.581739   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:12.582523   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:14.582676   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:11.000857   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:13.000915   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:15.001474   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:11.248034   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:10:11.261403   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:10:11.261471   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:10:11.294260   62996 cri.go:89] found id: ""
	I0914 18:10:11.294287   62996 logs.go:276] 0 containers: []
	W0914 18:10:11.294298   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:10:11.294305   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:10:11.294376   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:10:11.326784   62996 cri.go:89] found id: ""
	I0914 18:10:11.326811   62996 logs.go:276] 0 containers: []
	W0914 18:10:11.326822   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:10:11.326829   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:10:11.326878   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:10:11.359209   62996 cri.go:89] found id: ""
	I0914 18:10:11.359234   62996 logs.go:276] 0 containers: []
	W0914 18:10:11.359242   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:10:11.359247   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:10:11.359316   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:10:11.393800   62996 cri.go:89] found id: ""
	I0914 18:10:11.393828   62996 logs.go:276] 0 containers: []
	W0914 18:10:11.393836   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:10:11.393842   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:10:11.393889   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:10:11.425772   62996 cri.go:89] found id: ""
	I0914 18:10:11.425798   62996 logs.go:276] 0 containers: []
	W0914 18:10:11.425808   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:10:11.425815   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:10:11.425877   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:10:11.464139   62996 cri.go:89] found id: ""
	I0914 18:10:11.464165   62996 logs.go:276] 0 containers: []
	W0914 18:10:11.464174   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:10:11.464180   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:10:11.464230   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:10:11.498822   62996 cri.go:89] found id: ""
	I0914 18:10:11.498848   62996 logs.go:276] 0 containers: []
	W0914 18:10:11.498859   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:10:11.498869   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:10:11.498925   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:10:11.532591   62996 cri.go:89] found id: ""
	I0914 18:10:11.532623   62996 logs.go:276] 0 containers: []
	W0914 18:10:11.532634   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:10:11.532646   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:10:11.532660   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:10:11.608873   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:10:11.608892   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:10:11.608903   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:10:11.684622   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:10:11.684663   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:10:11.726639   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:10:11.726667   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:10:11.780380   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:10:11.780415   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:10:14.294514   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:10:14.308716   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:10:14.308779   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:10:14.348399   62996 cri.go:89] found id: ""
	I0914 18:10:14.348423   62996 logs.go:276] 0 containers: []
	W0914 18:10:14.348431   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:10:14.348437   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:10:14.348485   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:10:14.387040   62996 cri.go:89] found id: ""
	I0914 18:10:14.387071   62996 logs.go:276] 0 containers: []
	W0914 18:10:14.387082   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:10:14.387088   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:10:14.387144   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:10:14.424704   62996 cri.go:89] found id: ""
	I0914 18:10:14.424733   62996 logs.go:276] 0 containers: []
	W0914 18:10:14.424741   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:10:14.424746   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:10:14.424793   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:10:14.464395   62996 cri.go:89] found id: ""
	I0914 18:10:14.464431   62996 logs.go:276] 0 containers: []
	W0914 18:10:14.464442   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:10:14.464450   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:10:14.464511   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:10:14.495895   62996 cri.go:89] found id: ""
	I0914 18:10:14.495921   62996 logs.go:276] 0 containers: []
	W0914 18:10:14.495931   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:10:14.495938   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:10:14.496001   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:10:14.532877   62996 cri.go:89] found id: ""
	I0914 18:10:14.532904   62996 logs.go:276] 0 containers: []
	W0914 18:10:14.532914   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:10:14.532921   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:10:14.532987   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:10:14.568381   62996 cri.go:89] found id: ""
	I0914 18:10:14.568408   62996 logs.go:276] 0 containers: []
	W0914 18:10:14.568423   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:10:14.568430   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:10:14.568491   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:10:14.603867   62996 cri.go:89] found id: ""
	I0914 18:10:14.603897   62996 logs.go:276] 0 containers: []
	W0914 18:10:14.603908   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:10:14.603917   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:10:14.603933   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:10:14.616681   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:10:14.616705   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:10:14.687817   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:10:14.687852   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:10:14.687866   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:10:14.761672   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:10:14.761714   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:10:14.802676   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:10:14.802705   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:10:13.103818   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:15.602921   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:17.082737   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:19.082771   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:17.501947   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:20.000464   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:17.353218   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:10:17.366139   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:10:17.366224   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:10:17.404478   62996 cri.go:89] found id: ""
	I0914 18:10:17.404511   62996 logs.go:276] 0 containers: []
	W0914 18:10:17.404522   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:10:17.404530   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:10:17.404608   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:10:17.437553   62996 cri.go:89] found id: ""
	I0914 18:10:17.437579   62996 logs.go:276] 0 containers: []
	W0914 18:10:17.437588   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:10:17.437593   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:10:17.437648   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:10:17.473815   62996 cri.go:89] found id: ""
	I0914 18:10:17.473842   62996 logs.go:276] 0 containers: []
	W0914 18:10:17.473850   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:10:17.473855   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:10:17.473919   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:10:17.518593   62996 cri.go:89] found id: ""
	I0914 18:10:17.518617   62996 logs.go:276] 0 containers: []
	W0914 18:10:17.518625   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:10:17.518631   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:10:17.518679   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:10:17.554631   62996 cri.go:89] found id: ""
	I0914 18:10:17.554663   62996 logs.go:276] 0 containers: []
	W0914 18:10:17.554675   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:10:17.554682   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:10:17.554742   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:10:17.591485   62996 cri.go:89] found id: ""
	I0914 18:10:17.591512   62996 logs.go:276] 0 containers: []
	W0914 18:10:17.591520   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:10:17.591525   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:10:17.591582   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:10:17.629883   62996 cri.go:89] found id: ""
	I0914 18:10:17.629910   62996 logs.go:276] 0 containers: []
	W0914 18:10:17.629918   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:10:17.629925   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:10:17.629973   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:10:17.670639   62996 cri.go:89] found id: ""
	I0914 18:10:17.670666   62996 logs.go:276] 0 containers: []
	W0914 18:10:17.670677   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:10:17.670688   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:10:17.670700   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:10:17.725056   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:10:17.725095   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:10:17.738236   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:10:17.738267   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:10:17.812931   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:10:17.812963   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:10:17.812978   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:10:17.896394   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:10:17.896426   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:10:18.102598   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:20.104053   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:21.085272   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:23.583185   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:22.001396   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:24.500424   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:20.434465   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:10:20.448801   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:10:20.448878   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:10:20.482909   62996 cri.go:89] found id: ""
	I0914 18:10:20.482937   62996 logs.go:276] 0 containers: []
	W0914 18:10:20.482949   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:10:20.482956   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:10:20.483017   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:10:20.516865   62996 cri.go:89] found id: ""
	I0914 18:10:20.516888   62996 logs.go:276] 0 containers: []
	W0914 18:10:20.516896   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:10:20.516902   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:10:20.516961   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:10:20.556131   62996 cri.go:89] found id: ""
	I0914 18:10:20.556164   62996 logs.go:276] 0 containers: []
	W0914 18:10:20.556174   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:10:20.556182   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:10:20.556246   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:10:20.594755   62996 cri.go:89] found id: ""
	I0914 18:10:20.594779   62996 logs.go:276] 0 containers: []
	W0914 18:10:20.594787   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:10:20.594795   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:10:20.594841   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:10:20.630259   62996 cri.go:89] found id: ""
	I0914 18:10:20.630290   62996 logs.go:276] 0 containers: []
	W0914 18:10:20.630300   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:10:20.630307   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:10:20.630379   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:10:20.667721   62996 cri.go:89] found id: ""
	I0914 18:10:20.667754   62996 logs.go:276] 0 containers: []
	W0914 18:10:20.667763   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:10:20.667769   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:10:20.667826   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:10:20.706358   62996 cri.go:89] found id: ""
	I0914 18:10:20.706387   62996 logs.go:276] 0 containers: []
	W0914 18:10:20.706396   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:10:20.706401   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:10:20.706462   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:10:20.738514   62996 cri.go:89] found id: ""
	I0914 18:10:20.738541   62996 logs.go:276] 0 containers: []
	W0914 18:10:20.738549   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:10:20.738557   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:10:20.738576   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:10:20.775075   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:10:20.775105   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:10:20.825988   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:10:20.826026   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:10:20.839157   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:10:20.839194   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:10:20.915730   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:10:20.915750   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:10:20.915762   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:10:23.497427   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:10:23.511559   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:10:23.511633   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:10:23.546913   62996 cri.go:89] found id: ""
	I0914 18:10:23.546945   62996 logs.go:276] 0 containers: []
	W0914 18:10:23.546958   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:10:23.546969   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:10:23.547034   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:10:23.584438   62996 cri.go:89] found id: ""
	I0914 18:10:23.584457   62996 logs.go:276] 0 containers: []
	W0914 18:10:23.584463   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:10:23.584469   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:10:23.584517   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:10:23.618777   62996 cri.go:89] found id: ""
	I0914 18:10:23.618804   62996 logs.go:276] 0 containers: []
	W0914 18:10:23.618812   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:10:23.618817   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:10:23.618876   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:10:23.652197   62996 cri.go:89] found id: ""
	I0914 18:10:23.652225   62996 logs.go:276] 0 containers: []
	W0914 18:10:23.652236   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:10:23.652244   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:10:23.652304   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:10:23.687678   62996 cri.go:89] found id: ""
	I0914 18:10:23.687712   62996 logs.go:276] 0 containers: []
	W0914 18:10:23.687725   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:10:23.687733   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:10:23.687790   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:10:23.720884   62996 cri.go:89] found id: ""
	I0914 18:10:23.720918   62996 logs.go:276] 0 containers: []
	W0914 18:10:23.720929   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:10:23.720936   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:10:23.721004   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:10:23.753335   62996 cri.go:89] found id: ""
	I0914 18:10:23.753365   62996 logs.go:276] 0 containers: []
	W0914 18:10:23.753376   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:10:23.753384   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:10:23.753431   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:10:23.787177   62996 cri.go:89] found id: ""
	I0914 18:10:23.787209   62996 logs.go:276] 0 containers: []
	W0914 18:10:23.787230   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:10:23.787241   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:10:23.787254   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:10:23.864763   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:10:23.864802   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:10:23.903394   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:10:23.903424   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:10:23.952696   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:10:23.952734   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:10:23.967115   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:10:23.967142   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:10:24.035394   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:10:22.602815   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:24.603230   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:26.604416   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:26.082291   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:28.582007   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:26.501088   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:29.001400   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:26.536361   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:10:26.550666   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:10:26.550746   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:10:26.588940   62996 cri.go:89] found id: ""
	I0914 18:10:26.588974   62996 logs.go:276] 0 containers: []
	W0914 18:10:26.588988   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:10:26.588997   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:10:26.589064   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:10:26.627475   62996 cri.go:89] found id: ""
	I0914 18:10:26.627523   62996 logs.go:276] 0 containers: []
	W0914 18:10:26.627537   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:10:26.627546   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:10:26.627619   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:10:26.664995   62996 cri.go:89] found id: ""
	I0914 18:10:26.665021   62996 logs.go:276] 0 containers: []
	W0914 18:10:26.665029   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:10:26.665034   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:10:26.665087   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:10:26.699195   62996 cri.go:89] found id: ""
	I0914 18:10:26.699223   62996 logs.go:276] 0 containers: []
	W0914 18:10:26.699234   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:10:26.699241   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:10:26.699300   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:10:26.735746   62996 cri.go:89] found id: ""
	I0914 18:10:26.735781   62996 logs.go:276] 0 containers: []
	W0914 18:10:26.735790   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:10:26.735795   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:10:26.735857   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:10:26.772220   62996 cri.go:89] found id: ""
	I0914 18:10:26.772251   62996 logs.go:276] 0 containers: []
	W0914 18:10:26.772261   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:10:26.772270   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:10:26.772331   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:10:26.808301   62996 cri.go:89] found id: ""
	I0914 18:10:26.808330   62996 logs.go:276] 0 containers: []
	W0914 18:10:26.808339   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:10:26.808346   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:10:26.808412   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:10:26.844824   62996 cri.go:89] found id: ""
	I0914 18:10:26.844858   62996 logs.go:276] 0 containers: []
	W0914 18:10:26.844870   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:10:26.844880   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:10:26.844916   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:10:26.899960   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:10:26.899999   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:10:26.914413   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:10:26.914438   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:10:26.990599   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:10:26.990620   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:10:26.990632   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:10:27.067822   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:10:27.067872   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:10:29.610959   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:10:29.625456   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:10:29.625517   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:10:29.662963   62996 cri.go:89] found id: ""
	I0914 18:10:29.662990   62996 logs.go:276] 0 containers: []
	W0914 18:10:29.663002   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:10:29.663009   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:10:29.663078   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:10:29.702141   62996 cri.go:89] found id: ""
	I0914 18:10:29.702189   62996 logs.go:276] 0 containers: []
	W0914 18:10:29.702201   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:10:29.702208   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:10:29.702265   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:10:29.737559   62996 cri.go:89] found id: ""
	I0914 18:10:29.737584   62996 logs.go:276] 0 containers: []
	W0914 18:10:29.737592   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:10:29.737598   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:10:29.737644   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:10:29.773544   62996 cri.go:89] found id: ""
	I0914 18:10:29.773570   62996 logs.go:276] 0 containers: []
	W0914 18:10:29.773578   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:10:29.773586   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:10:29.773639   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:10:29.815355   62996 cri.go:89] found id: ""
	I0914 18:10:29.815401   62996 logs.go:276] 0 containers: []
	W0914 18:10:29.815414   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:10:29.815422   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:10:29.815490   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:10:29.855729   62996 cri.go:89] found id: ""
	I0914 18:10:29.855755   62996 logs.go:276] 0 containers: []
	W0914 18:10:29.855765   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:10:29.855772   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:10:29.855835   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:10:29.894023   62996 cri.go:89] found id: ""
	I0914 18:10:29.894048   62996 logs.go:276] 0 containers: []
	W0914 18:10:29.894056   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:10:29.894063   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:10:29.894120   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:10:29.928873   62996 cri.go:89] found id: ""
	I0914 18:10:29.928900   62996 logs.go:276] 0 containers: []
	W0914 18:10:29.928910   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:10:29.928921   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:10:29.928937   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:10:30.005879   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:10:30.005904   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:10:30.005917   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:10:30.087160   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:10:30.087196   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:10:30.126027   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:10:30.126058   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:10:30.178901   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:10:30.178941   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:10:28.604725   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:31.103833   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:30.582800   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:33.082884   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:31.001447   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:33.501525   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:32.692789   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:10:32.708884   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:10:32.708942   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:10:32.744684   62996 cri.go:89] found id: ""
	I0914 18:10:32.744711   62996 logs.go:276] 0 containers: []
	W0914 18:10:32.744722   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:10:32.744729   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:10:32.744789   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:10:32.778311   62996 cri.go:89] found id: ""
	I0914 18:10:32.778345   62996 logs.go:276] 0 containers: []
	W0914 18:10:32.778355   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:10:32.778362   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:10:32.778421   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:10:32.820122   62996 cri.go:89] found id: ""
	I0914 18:10:32.820150   62996 logs.go:276] 0 containers: []
	W0914 18:10:32.820158   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:10:32.820163   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:10:32.820213   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:10:32.856507   62996 cri.go:89] found id: ""
	I0914 18:10:32.856541   62996 logs.go:276] 0 containers: []
	W0914 18:10:32.856552   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:10:32.856559   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:10:32.856622   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:10:32.891891   62996 cri.go:89] found id: ""
	I0914 18:10:32.891922   62996 logs.go:276] 0 containers: []
	W0914 18:10:32.891934   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:10:32.891942   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:10:32.892001   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:10:32.936666   62996 cri.go:89] found id: ""
	I0914 18:10:32.936696   62996 logs.go:276] 0 containers: []
	W0914 18:10:32.936708   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:10:32.936715   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:10:32.936783   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:10:32.972287   62996 cri.go:89] found id: ""
	I0914 18:10:32.972321   62996 logs.go:276] 0 containers: []
	W0914 18:10:32.972333   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:10:32.972341   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:10:32.972406   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:10:33.028398   62996 cri.go:89] found id: ""
	I0914 18:10:33.028423   62996 logs.go:276] 0 containers: []
	W0914 18:10:33.028430   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:10:33.028438   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:10:33.028447   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:10:33.041604   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:10:33.041631   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:10:33.116278   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:10:33.116310   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:10:33.116325   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:10:33.194720   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:10:33.194755   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:10:33.235741   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:10:33.235778   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:10:33.603121   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:35.604573   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:35.083689   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:37.583721   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:36.000829   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:38.001022   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:40.002742   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:35.787601   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:10:35.801819   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:10:35.801895   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:10:35.837381   62996 cri.go:89] found id: ""
	I0914 18:10:35.837409   62996 logs.go:276] 0 containers: []
	W0914 18:10:35.837417   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:10:35.837423   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:10:35.837473   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:10:35.872876   62996 cri.go:89] found id: ""
	I0914 18:10:35.872907   62996 logs.go:276] 0 containers: []
	W0914 18:10:35.872915   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:10:35.872921   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:10:35.872972   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:10:35.908885   62996 cri.go:89] found id: ""
	I0914 18:10:35.908912   62996 logs.go:276] 0 containers: []
	W0914 18:10:35.908927   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:10:35.908932   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:10:35.908991   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:10:35.943358   62996 cri.go:89] found id: ""
	I0914 18:10:35.943386   62996 logs.go:276] 0 containers: []
	W0914 18:10:35.943395   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:10:35.943400   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:10:35.943450   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:10:35.978387   62996 cri.go:89] found id: ""
	I0914 18:10:35.978416   62996 logs.go:276] 0 containers: []
	W0914 18:10:35.978427   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:10:35.978434   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:10:35.978486   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:10:36.012836   62996 cri.go:89] found id: ""
	I0914 18:10:36.012863   62996 logs.go:276] 0 containers: []
	W0914 18:10:36.012874   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:10:36.012881   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:10:36.012931   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:10:36.048243   62996 cri.go:89] found id: ""
	I0914 18:10:36.048272   62996 logs.go:276] 0 containers: []
	W0914 18:10:36.048283   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:10:36.048290   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:10:36.048378   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:10:36.089415   62996 cri.go:89] found id: ""
	I0914 18:10:36.089449   62996 logs.go:276] 0 containers: []
	W0914 18:10:36.089460   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:10:36.089471   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:10:36.089484   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:10:36.141287   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:10:36.141324   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:10:36.154418   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:10:36.154444   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:10:36.228454   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:10:36.228483   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:10:36.228500   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:10:36.302020   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:10:36.302063   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:10:38.841946   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:10:38.855010   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:10:38.855072   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:10:38.890835   62996 cri.go:89] found id: ""
	I0914 18:10:38.890867   62996 logs.go:276] 0 containers: []
	W0914 18:10:38.890878   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:10:38.890886   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:10:38.890945   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:10:38.924675   62996 cri.go:89] found id: ""
	I0914 18:10:38.924700   62996 logs.go:276] 0 containers: []
	W0914 18:10:38.924708   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:10:38.924713   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:10:38.924761   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:10:38.959999   62996 cri.go:89] found id: ""
	I0914 18:10:38.960024   62996 logs.go:276] 0 containers: []
	W0914 18:10:38.960032   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:10:38.960038   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:10:38.960097   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:10:38.995718   62996 cri.go:89] found id: ""
	I0914 18:10:38.995747   62996 logs.go:276] 0 containers: []
	W0914 18:10:38.995755   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:10:38.995761   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:10:38.995810   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:10:39.031178   62996 cri.go:89] found id: ""
	I0914 18:10:39.031208   62996 logs.go:276] 0 containers: []
	W0914 18:10:39.031224   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:10:39.031232   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:10:39.031292   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:10:39.065511   62996 cri.go:89] found id: ""
	I0914 18:10:39.065540   62996 logs.go:276] 0 containers: []
	W0914 18:10:39.065560   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:10:39.065569   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:10:39.065628   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:10:39.103625   62996 cri.go:89] found id: ""
	I0914 18:10:39.103655   62996 logs.go:276] 0 containers: []
	W0914 18:10:39.103671   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:10:39.103678   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:10:39.103772   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:10:39.140140   62996 cri.go:89] found id: ""
	I0914 18:10:39.140169   62996 logs.go:276] 0 containers: []
	W0914 18:10:39.140179   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:10:39.140189   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:10:39.140205   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:10:39.154953   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:10:39.154980   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:10:39.226745   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:10:39.226778   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:10:39.226794   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:10:39.305268   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:10:39.305310   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:10:39.345363   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:10:39.345389   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:10:38.102910   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:40.103826   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:40.082907   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:42.083587   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:44.582457   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:42.500851   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:45.001069   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:41.897635   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:10:41.910895   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:10:41.910962   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:10:41.946302   62996 cri.go:89] found id: ""
	I0914 18:10:41.946327   62996 logs.go:276] 0 containers: []
	W0914 18:10:41.946338   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:10:41.946345   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:10:41.946405   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:10:41.983180   62996 cri.go:89] found id: ""
	I0914 18:10:41.983210   62996 logs.go:276] 0 containers: []
	W0914 18:10:41.983221   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:10:41.983231   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:10:41.983294   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:10:42.017923   62996 cri.go:89] found id: ""
	I0914 18:10:42.017946   62996 logs.go:276] 0 containers: []
	W0914 18:10:42.017954   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:10:42.017959   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:10:42.018006   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:10:42.052086   62996 cri.go:89] found id: ""
	I0914 18:10:42.052122   62996 logs.go:276] 0 containers: []
	W0914 18:10:42.052133   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:10:42.052140   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:10:42.052206   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:10:42.092000   62996 cri.go:89] found id: ""
	I0914 18:10:42.092029   62996 logs.go:276] 0 containers: []
	W0914 18:10:42.092040   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:10:42.092048   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:10:42.092114   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:10:42.130402   62996 cri.go:89] found id: ""
	I0914 18:10:42.130436   62996 logs.go:276] 0 containers: []
	W0914 18:10:42.130447   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:10:42.130455   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:10:42.130505   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:10:42.166614   62996 cri.go:89] found id: ""
	I0914 18:10:42.166639   62996 logs.go:276] 0 containers: []
	W0914 18:10:42.166647   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:10:42.166653   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:10:42.166704   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:10:42.199763   62996 cri.go:89] found id: ""
	I0914 18:10:42.199795   62996 logs.go:276] 0 containers: []
	W0914 18:10:42.199808   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:10:42.199820   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:10:42.199835   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:10:42.251564   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:10:42.251597   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:10:42.264771   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:10:42.264806   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:10:42.335441   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:10:42.335465   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:10:42.335489   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:10:42.417678   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:10:42.417715   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:10:44.956372   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:10:44.970643   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:10:44.970717   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:10:45.011625   62996 cri.go:89] found id: ""
	I0914 18:10:45.011659   62996 logs.go:276] 0 containers: []
	W0914 18:10:45.011671   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:10:45.011678   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:10:45.011738   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:10:45.047489   62996 cri.go:89] found id: ""
	I0914 18:10:45.047515   62996 logs.go:276] 0 containers: []
	W0914 18:10:45.047526   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:10:45.047541   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:10:45.047610   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:10:45.084909   62996 cri.go:89] found id: ""
	I0914 18:10:45.084935   62996 logs.go:276] 0 containers: []
	W0914 18:10:45.084957   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:10:45.084964   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:10:45.085035   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:10:45.120074   62996 cri.go:89] found id: ""
	I0914 18:10:45.120104   62996 logs.go:276] 0 containers: []
	W0914 18:10:45.120115   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:10:45.120123   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:10:45.120181   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:10:45.164010   62996 cri.go:89] found id: ""
	I0914 18:10:45.164039   62996 logs.go:276] 0 containers: []
	W0914 18:10:45.164050   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:10:45.164058   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:10:45.164128   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:10:45.209565   62996 cri.go:89] found id: ""
	I0914 18:10:45.209590   62996 logs.go:276] 0 containers: []
	W0914 18:10:45.209598   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:10:45.209604   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:10:45.209651   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:10:45.265484   62996 cri.go:89] found id: ""
	I0914 18:10:45.265513   62996 logs.go:276] 0 containers: []
	W0914 18:10:45.265521   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:10:45.265527   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:10:45.265593   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:10:45.300671   62996 cri.go:89] found id: ""
	I0914 18:10:45.300700   62996 logs.go:276] 0 containers: []
	W0914 18:10:45.300711   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:10:45.300722   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:10:45.300739   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:10:42.603017   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:45.104603   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:47.082010   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:49.082648   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:47.500917   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:50.001192   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:45.352657   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:10:45.352699   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:10:45.366347   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:10:45.366381   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:10:45.442993   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:10:45.443013   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:10:45.443024   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:10:45.523475   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:10:45.523522   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:10:48.062222   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:10:48.075764   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:10:48.075832   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:10:48.111836   62996 cri.go:89] found id: ""
	I0914 18:10:48.111864   62996 logs.go:276] 0 containers: []
	W0914 18:10:48.111876   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:10:48.111884   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:10:48.111942   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:10:48.144440   62996 cri.go:89] found id: ""
	I0914 18:10:48.144471   62996 logs.go:276] 0 containers: []
	W0914 18:10:48.144483   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:10:48.144490   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:10:48.144553   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:10:48.179694   62996 cri.go:89] found id: ""
	I0914 18:10:48.179724   62996 logs.go:276] 0 containers: []
	W0914 18:10:48.179732   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:10:48.179738   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:10:48.179799   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:10:48.217290   62996 cri.go:89] found id: ""
	I0914 18:10:48.217320   62996 logs.go:276] 0 containers: []
	W0914 18:10:48.217331   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:10:48.217337   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:10:48.217384   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:10:48.252071   62996 cri.go:89] found id: ""
	I0914 18:10:48.252098   62996 logs.go:276] 0 containers: []
	W0914 18:10:48.252105   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:10:48.252111   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:10:48.252172   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:10:48.285372   62996 cri.go:89] found id: ""
	I0914 18:10:48.285399   62996 logs.go:276] 0 containers: []
	W0914 18:10:48.285407   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:10:48.285414   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:10:48.285461   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:10:48.318015   62996 cri.go:89] found id: ""
	I0914 18:10:48.318040   62996 logs.go:276] 0 containers: []
	W0914 18:10:48.318048   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:10:48.318054   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:10:48.318099   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:10:48.350976   62996 cri.go:89] found id: ""
	I0914 18:10:48.351006   62996 logs.go:276] 0 containers: []
	W0914 18:10:48.351018   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:10:48.351027   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:10:48.351040   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:10:48.364707   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:10:48.364731   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:10:48.436438   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:10:48.436472   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:10:48.436488   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:10:48.517132   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:10:48.517165   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:10:48.555153   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:10:48.555182   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:10:47.603610   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:50.104612   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:51.083246   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:53.582120   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:52.001273   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:54.001308   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:51.108066   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:10:51.121176   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:10:51.121254   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:10:51.155641   62996 cri.go:89] found id: ""
	I0914 18:10:51.155675   62996 logs.go:276] 0 containers: []
	W0914 18:10:51.155687   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:10:51.155693   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:10:51.155744   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:10:51.189642   62996 cri.go:89] found id: ""
	I0914 18:10:51.189677   62996 logs.go:276] 0 containers: []
	W0914 18:10:51.189691   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:10:51.189698   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:10:51.189763   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:10:51.223337   62996 cri.go:89] found id: ""
	I0914 18:10:51.223365   62996 logs.go:276] 0 containers: []
	W0914 18:10:51.223375   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:10:51.223383   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:10:51.223446   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:10:51.259524   62996 cri.go:89] found id: ""
	I0914 18:10:51.259549   62996 logs.go:276] 0 containers: []
	W0914 18:10:51.259557   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:10:51.259568   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:10:51.259625   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:10:51.295307   62996 cri.go:89] found id: ""
	I0914 18:10:51.295336   62996 logs.go:276] 0 containers: []
	W0914 18:10:51.295347   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:10:51.295354   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:10:51.295419   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:10:51.330619   62996 cri.go:89] found id: ""
	I0914 18:10:51.330658   62996 logs.go:276] 0 containers: []
	W0914 18:10:51.330670   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:10:51.330677   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:10:51.330741   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:10:51.365146   62996 cri.go:89] found id: ""
	I0914 18:10:51.365178   62996 logs.go:276] 0 containers: []
	W0914 18:10:51.365191   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:10:51.365200   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:10:51.365263   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:10:51.403295   62996 cri.go:89] found id: ""
	I0914 18:10:51.403330   62996 logs.go:276] 0 containers: []
	W0914 18:10:51.403342   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:10:51.403353   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:10:51.403369   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:10:51.467426   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:10:51.467452   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:10:51.467471   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:10:51.552003   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:10:51.552037   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:10:51.591888   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:10:51.591921   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:10:51.645437   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:10:51.645472   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:10:54.160542   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:10:54.173965   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:10:54.174040   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:10:54.209242   62996 cri.go:89] found id: ""
	I0914 18:10:54.209270   62996 logs.go:276] 0 containers: []
	W0914 18:10:54.209281   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:10:54.209288   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:10:54.209365   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:10:54.242345   62996 cri.go:89] found id: ""
	I0914 18:10:54.242374   62996 logs.go:276] 0 containers: []
	W0914 18:10:54.242384   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:10:54.242392   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:10:54.242453   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:10:54.278677   62996 cri.go:89] found id: ""
	I0914 18:10:54.278707   62996 logs.go:276] 0 containers: []
	W0914 18:10:54.278718   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:10:54.278725   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:10:54.278793   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:10:54.314802   62996 cri.go:89] found id: ""
	I0914 18:10:54.314831   62996 logs.go:276] 0 containers: []
	W0914 18:10:54.314842   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:10:54.314849   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:10:54.314920   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:10:54.349075   62996 cri.go:89] found id: ""
	I0914 18:10:54.349100   62996 logs.go:276] 0 containers: []
	W0914 18:10:54.349120   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:10:54.349127   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:10:54.349189   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:10:54.382337   62996 cri.go:89] found id: ""
	I0914 18:10:54.382363   62996 logs.go:276] 0 containers: []
	W0914 18:10:54.382371   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:10:54.382376   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:10:54.382423   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:10:54.416613   62996 cri.go:89] found id: ""
	I0914 18:10:54.416640   62996 logs.go:276] 0 containers: []
	W0914 18:10:54.416649   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:10:54.416654   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:10:54.416701   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:10:54.449563   62996 cri.go:89] found id: ""
	I0914 18:10:54.449596   62996 logs.go:276] 0 containers: []
	W0914 18:10:54.449606   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:10:54.449617   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:10:54.449631   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:10:54.487454   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:10:54.487489   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:10:54.541679   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:10:54.541720   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:10:54.555267   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:10:54.555299   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:10:54.630280   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:10:54.630313   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:10:54.630323   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:10:52.603604   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:55.104734   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:55.582258   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:58.081905   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:56.002210   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:58.499961   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:57.215606   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:10:57.228469   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:10:57.228550   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:10:57.260643   62996 cri.go:89] found id: ""
	I0914 18:10:57.260675   62996 logs.go:276] 0 containers: []
	W0914 18:10:57.260684   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:10:57.260690   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:10:57.260750   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:10:57.294125   62996 cri.go:89] found id: ""
	I0914 18:10:57.294174   62996 logs.go:276] 0 containers: []
	W0914 18:10:57.294186   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:10:57.294196   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:10:57.294259   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:10:57.328078   62996 cri.go:89] found id: ""
	I0914 18:10:57.328101   62996 logs.go:276] 0 containers: []
	W0914 18:10:57.328108   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:10:57.328114   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:10:57.328173   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:10:57.362451   62996 cri.go:89] found id: ""
	I0914 18:10:57.362476   62996 logs.go:276] 0 containers: []
	W0914 18:10:57.362483   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:10:57.362489   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:10:57.362556   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:10:57.398273   62996 cri.go:89] found id: ""
	I0914 18:10:57.398298   62996 logs.go:276] 0 containers: []
	W0914 18:10:57.398306   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:10:57.398311   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:10:57.398374   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:10:57.431112   62996 cri.go:89] found id: ""
	I0914 18:10:57.431137   62996 logs.go:276] 0 containers: []
	W0914 18:10:57.431145   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:10:57.431151   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:10:57.431197   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:10:57.464930   62996 cri.go:89] found id: ""
	I0914 18:10:57.464956   62996 logs.go:276] 0 containers: []
	W0914 18:10:57.464966   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:10:57.464973   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:10:57.465033   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:10:57.501233   62996 cri.go:89] found id: ""
	I0914 18:10:57.501263   62996 logs.go:276] 0 containers: []
	W0914 18:10:57.501276   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:10:57.501287   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:10:57.501302   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:10:57.550798   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:10:57.550836   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:10:57.564238   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:10:57.564263   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:10:57.634387   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:10:57.634414   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:10:57.634424   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:10:57.714218   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:10:57.714253   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:11:00.251944   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:11:00.264817   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:11:00.264881   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:11:00.306613   62996 cri.go:89] found id: ""
	I0914 18:11:00.306641   62996 logs.go:276] 0 containers: []
	W0914 18:11:00.306651   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:11:00.306658   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:11:00.306717   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:11:00.340297   62996 cri.go:89] found id: ""
	I0914 18:11:00.340327   62996 logs.go:276] 0 containers: []
	W0914 18:11:00.340338   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:11:00.340346   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:11:00.340404   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:10:57.604025   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:00.104193   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:00.083208   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:02.582299   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:04.583803   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:00.500596   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:02.501405   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:04.501527   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:00.373553   62996 cri.go:89] found id: ""
	I0914 18:11:00.373594   62996 logs.go:276] 0 containers: []
	W0914 18:11:00.373603   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:11:00.373609   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:11:00.373657   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:11:00.407351   62996 cri.go:89] found id: ""
	I0914 18:11:00.407381   62996 logs.go:276] 0 containers: []
	W0914 18:11:00.407392   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:11:00.407400   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:11:00.407461   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:11:00.440976   62996 cri.go:89] found id: ""
	I0914 18:11:00.441005   62996 logs.go:276] 0 containers: []
	W0914 18:11:00.441016   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:11:00.441024   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:11:00.441085   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:11:00.478138   62996 cri.go:89] found id: ""
	I0914 18:11:00.478180   62996 logs.go:276] 0 containers: []
	W0914 18:11:00.478193   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:11:00.478201   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:11:00.478264   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:11:00.513861   62996 cri.go:89] found id: ""
	I0914 18:11:00.513885   62996 logs.go:276] 0 containers: []
	W0914 18:11:00.513897   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:11:00.513905   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:11:00.513955   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:11:00.547295   62996 cri.go:89] found id: ""
	I0914 18:11:00.547338   62996 logs.go:276] 0 containers: []
	W0914 18:11:00.547348   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:11:00.547357   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:11:00.547367   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:11:00.598108   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:11:00.598146   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:11:00.611751   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:11:00.611778   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:11:00.688767   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:11:00.688788   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:11:00.688803   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:11:00.771892   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:11:00.771929   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:11:03.310816   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:11:03.323773   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:11:03.323838   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:11:03.357873   62996 cri.go:89] found id: ""
	I0914 18:11:03.357910   62996 logs.go:276] 0 containers: []
	W0914 18:11:03.357922   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:11:03.357934   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:11:03.357995   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:11:03.394978   62996 cri.go:89] found id: ""
	I0914 18:11:03.395012   62996 logs.go:276] 0 containers: []
	W0914 18:11:03.395024   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:11:03.395032   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:11:03.395098   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:11:03.429699   62996 cri.go:89] found id: ""
	I0914 18:11:03.429725   62996 logs.go:276] 0 containers: []
	W0914 18:11:03.429734   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:11:03.429740   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:11:03.429794   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:11:03.462616   62996 cri.go:89] found id: ""
	I0914 18:11:03.462648   62996 logs.go:276] 0 containers: []
	W0914 18:11:03.462660   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:11:03.462692   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:11:03.462759   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:11:03.496464   62996 cri.go:89] found id: ""
	I0914 18:11:03.496495   62996 logs.go:276] 0 containers: []
	W0914 18:11:03.496506   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:11:03.496513   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:11:03.496573   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:11:03.529655   62996 cri.go:89] found id: ""
	I0914 18:11:03.529687   62996 logs.go:276] 0 containers: []
	W0914 18:11:03.529697   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:11:03.529704   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:11:03.529767   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:11:03.563025   62996 cri.go:89] found id: ""
	I0914 18:11:03.563055   62996 logs.go:276] 0 containers: []
	W0914 18:11:03.563064   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:11:03.563069   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:11:03.563123   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:11:03.604066   62996 cri.go:89] found id: ""
	I0914 18:11:03.604088   62996 logs.go:276] 0 containers: []
	W0914 18:11:03.604095   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:11:03.604103   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:11:03.604114   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:11:03.656607   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:11:03.656647   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:11:03.669974   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:11:03.670004   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:11:03.742295   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:11:03.742324   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:11:03.742343   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:11:03.817527   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:11:03.817566   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:11:02.602818   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:05.103061   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:07.083161   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:09.585702   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:06.999885   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:09.001611   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:06.355023   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:11:06.368376   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:11:06.368445   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:11:06.403876   62996 cri.go:89] found id: ""
	I0914 18:11:06.403904   62996 logs.go:276] 0 containers: []
	W0914 18:11:06.403916   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:11:06.403924   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:11:06.403997   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:11:06.438187   62996 cri.go:89] found id: ""
	I0914 18:11:06.438217   62996 logs.go:276] 0 containers: []
	W0914 18:11:06.438229   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:11:06.438236   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:11:06.438302   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:11:06.477599   62996 cri.go:89] found id: ""
	I0914 18:11:06.477628   62996 logs.go:276] 0 containers: []
	W0914 18:11:06.477639   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:11:06.477646   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:11:06.477718   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:11:06.514878   62996 cri.go:89] found id: ""
	I0914 18:11:06.514905   62996 logs.go:276] 0 containers: []
	W0914 18:11:06.514914   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:11:06.514920   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:11:06.514979   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:11:06.552228   62996 cri.go:89] found id: ""
	I0914 18:11:06.552260   62996 logs.go:276] 0 containers: []
	W0914 18:11:06.552272   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:11:06.552279   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:11:06.552346   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:11:06.594600   62996 cri.go:89] found id: ""
	I0914 18:11:06.594630   62996 logs.go:276] 0 containers: []
	W0914 18:11:06.594641   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:11:06.594649   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:11:06.594713   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:11:06.630977   62996 cri.go:89] found id: ""
	I0914 18:11:06.631017   62996 logs.go:276] 0 containers: []
	W0914 18:11:06.631029   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:11:06.631036   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:11:06.631095   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:11:06.666717   62996 cri.go:89] found id: ""
	I0914 18:11:06.666749   62996 logs.go:276] 0 containers: []
	W0914 18:11:06.666760   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:11:06.666771   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:11:06.666784   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:11:06.720438   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:11:06.720474   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:11:06.734264   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:11:06.734299   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:11:06.802999   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:11:06.803020   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:11:06.803039   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:11:06.881422   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:11:06.881462   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:11:09.420948   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:11:09.435498   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:11:09.435582   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:11:09.470441   62996 cri.go:89] found id: ""
	I0914 18:11:09.470473   62996 logs.go:276] 0 containers: []
	W0914 18:11:09.470485   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:11:09.470493   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:11:09.470568   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:11:09.506101   62996 cri.go:89] found id: ""
	I0914 18:11:09.506124   62996 logs.go:276] 0 containers: []
	W0914 18:11:09.506142   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:11:09.506147   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:11:09.506227   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:11:09.541518   62996 cri.go:89] found id: ""
	I0914 18:11:09.541545   62996 logs.go:276] 0 containers: []
	W0914 18:11:09.541553   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:11:09.541558   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:11:09.541618   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:11:09.582697   62996 cri.go:89] found id: ""
	I0914 18:11:09.582725   62996 logs.go:276] 0 containers: []
	W0914 18:11:09.582735   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:11:09.582743   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:11:09.582805   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:11:09.621060   62996 cri.go:89] found id: ""
	I0914 18:11:09.621088   62996 logs.go:276] 0 containers: []
	W0914 18:11:09.621097   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:11:09.621102   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:11:09.621161   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:11:09.657967   62996 cri.go:89] found id: ""
	I0914 18:11:09.657994   62996 logs.go:276] 0 containers: []
	W0914 18:11:09.658003   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:11:09.658008   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:11:09.658060   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:11:09.693397   62996 cri.go:89] found id: ""
	I0914 18:11:09.693432   62996 logs.go:276] 0 containers: []
	W0914 18:11:09.693444   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:11:09.693451   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:11:09.693505   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:11:09.730819   62996 cri.go:89] found id: ""
	I0914 18:11:09.730850   62996 logs.go:276] 0 containers: []
	W0914 18:11:09.730860   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:11:09.730871   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:11:09.730887   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:11:09.745106   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:11:09.745146   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:11:09.817032   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:11:09.817059   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:11:09.817085   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:11:09.897335   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:11:09.897383   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:11:09.939036   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:11:09.939081   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:11:07.603634   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:09.605513   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:12.082145   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:14.082616   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:11.500951   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:14.001238   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:12.493075   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:11:12.506832   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:11:12.506889   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:11:12.545417   62996 cri.go:89] found id: ""
	I0914 18:11:12.545448   62996 logs.go:276] 0 containers: []
	W0914 18:11:12.545456   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:11:12.545464   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:11:12.545516   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:11:12.580346   62996 cri.go:89] found id: ""
	I0914 18:11:12.580379   62996 logs.go:276] 0 containers: []
	W0914 18:11:12.580389   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:11:12.580397   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:11:12.580457   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:11:12.616540   62996 cri.go:89] found id: ""
	I0914 18:11:12.616570   62996 logs.go:276] 0 containers: []
	W0914 18:11:12.616577   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:11:12.616586   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:11:12.616644   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:11:12.649673   62996 cri.go:89] found id: ""
	I0914 18:11:12.649700   62996 logs.go:276] 0 containers: []
	W0914 18:11:12.649709   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:11:12.649714   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:11:12.649767   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:11:12.683840   62996 cri.go:89] found id: ""
	I0914 18:11:12.683868   62996 logs.go:276] 0 containers: []
	W0914 18:11:12.683879   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:11:12.683886   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:11:12.683946   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:11:12.716862   62996 cri.go:89] found id: ""
	I0914 18:11:12.716889   62996 logs.go:276] 0 containers: []
	W0914 18:11:12.716897   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:11:12.716903   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:11:12.716952   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:11:12.751364   62996 cri.go:89] found id: ""
	I0914 18:11:12.751395   62996 logs.go:276] 0 containers: []
	W0914 18:11:12.751406   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:11:12.751414   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:11:12.751471   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:11:12.786425   62996 cri.go:89] found id: ""
	I0914 18:11:12.786457   62996 logs.go:276] 0 containers: []
	W0914 18:11:12.786468   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:11:12.786477   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:11:12.786487   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:11:12.853890   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:11:12.853920   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:11:12.853936   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:11:12.938058   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:11:12.938107   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:11:12.985406   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:11:12.985441   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:11:13.039040   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:11:13.039077   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:11:12.103165   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:14.103338   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:16.103440   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:16.083173   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:18.582225   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:16.001344   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:18.501001   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:15.554110   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:11:15.567977   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:11:15.568051   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:11:15.604851   62996 cri.go:89] found id: ""
	I0914 18:11:15.604879   62996 logs.go:276] 0 containers: []
	W0914 18:11:15.604887   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:11:15.604892   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:11:15.604945   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:11:15.641180   62996 cri.go:89] found id: ""
	I0914 18:11:15.641209   62996 logs.go:276] 0 containers: []
	W0914 18:11:15.641221   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:11:15.641229   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:11:15.641324   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:11:15.680284   62996 cri.go:89] found id: ""
	I0914 18:11:15.680310   62996 logs.go:276] 0 containers: []
	W0914 18:11:15.680327   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:11:15.680334   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:11:15.680395   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:11:15.718118   62996 cri.go:89] found id: ""
	I0914 18:11:15.718152   62996 logs.go:276] 0 containers: []
	W0914 18:11:15.718173   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:11:15.718181   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:11:15.718237   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:11:15.753998   62996 cri.go:89] found id: ""
	I0914 18:11:15.754020   62996 logs.go:276] 0 containers: []
	W0914 18:11:15.754028   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:11:15.754033   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:11:15.754081   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:11:15.790026   62996 cri.go:89] found id: ""
	I0914 18:11:15.790066   62996 logs.go:276] 0 containers: []
	W0914 18:11:15.790084   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:11:15.790093   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:11:15.790179   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:11:15.828050   62996 cri.go:89] found id: ""
	I0914 18:11:15.828078   62996 logs.go:276] 0 containers: []
	W0914 18:11:15.828086   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:11:15.828094   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:11:15.828162   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:11:15.861289   62996 cri.go:89] found id: ""
	I0914 18:11:15.861322   62996 logs.go:276] 0 containers: []
	W0914 18:11:15.861330   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:11:15.861338   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:11:15.861348   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:11:15.875023   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:11:15.875054   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:11:15.943002   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:11:15.943025   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:11:15.943038   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:11:16.027747   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:11:16.027785   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:11:16.067097   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:11:16.067133   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:11:18.621376   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:11:18.634005   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:11:18.634093   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:11:18.667089   62996 cri.go:89] found id: ""
	I0914 18:11:18.667118   62996 logs.go:276] 0 containers: []
	W0914 18:11:18.667127   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:11:18.667132   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:11:18.667184   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:11:18.700518   62996 cri.go:89] found id: ""
	I0914 18:11:18.700547   62996 logs.go:276] 0 containers: []
	W0914 18:11:18.700563   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:11:18.700571   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:11:18.700643   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:11:18.733724   62996 cri.go:89] found id: ""
	I0914 18:11:18.733755   62996 logs.go:276] 0 containers: []
	W0914 18:11:18.733767   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:11:18.733778   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:11:18.733840   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:11:18.768696   62996 cri.go:89] found id: ""
	I0914 18:11:18.768739   62996 logs.go:276] 0 containers: []
	W0914 18:11:18.768750   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:11:18.768757   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:11:18.768816   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:11:18.803603   62996 cri.go:89] found id: ""
	I0914 18:11:18.803636   62996 logs.go:276] 0 containers: []
	W0914 18:11:18.803647   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:11:18.803653   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:11:18.803707   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:11:18.837019   62996 cri.go:89] found id: ""
	I0914 18:11:18.837044   62996 logs.go:276] 0 containers: []
	W0914 18:11:18.837052   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:11:18.837058   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:11:18.837107   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:11:18.871470   62996 cri.go:89] found id: ""
	I0914 18:11:18.871496   62996 logs.go:276] 0 containers: []
	W0914 18:11:18.871504   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:11:18.871515   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:11:18.871573   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:11:18.904439   62996 cri.go:89] found id: ""
	I0914 18:11:18.904474   62996 logs.go:276] 0 containers: []
	W0914 18:11:18.904485   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:11:18.904494   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:11:18.904504   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:11:18.978025   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:11:18.978065   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:11:19.031667   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:11:19.031709   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:11:19.083360   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:11:19.083398   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:11:19.097770   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:11:19.097796   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:11:19.167712   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:11:18.603529   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:20.607347   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:20.583176   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:23.082414   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:20.501464   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:23.000161   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:25.000597   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:21.668470   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:11:21.681917   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:11:21.681994   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:11:21.717243   62996 cri.go:89] found id: ""
	I0914 18:11:21.717272   62996 logs.go:276] 0 containers: []
	W0914 18:11:21.717281   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:11:21.717286   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:11:21.717341   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:11:21.748801   62996 cri.go:89] found id: ""
	I0914 18:11:21.748853   62996 logs.go:276] 0 containers: []
	W0914 18:11:21.748863   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:11:21.748871   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:11:21.748930   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:11:21.785146   62996 cri.go:89] found id: ""
	I0914 18:11:21.785171   62996 logs.go:276] 0 containers: []
	W0914 18:11:21.785180   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:11:21.785185   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:11:21.785242   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:11:21.819949   62996 cri.go:89] found id: ""
	I0914 18:11:21.819977   62996 logs.go:276] 0 containers: []
	W0914 18:11:21.819984   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:11:21.819990   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:11:21.820039   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:11:21.852418   62996 cri.go:89] found id: ""
	I0914 18:11:21.852451   62996 logs.go:276] 0 containers: []
	W0914 18:11:21.852461   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:11:21.852468   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:11:21.852535   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:11:21.890170   62996 cri.go:89] found id: ""
	I0914 18:11:21.890205   62996 logs.go:276] 0 containers: []
	W0914 18:11:21.890216   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:11:21.890223   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:11:21.890283   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:11:21.924386   62996 cri.go:89] found id: ""
	I0914 18:11:21.924420   62996 logs.go:276] 0 containers: []
	W0914 18:11:21.924432   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:11:21.924439   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:11:21.924505   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:11:21.960302   62996 cri.go:89] found id: ""
	I0914 18:11:21.960328   62996 logs.go:276] 0 containers: []
	W0914 18:11:21.960337   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:11:21.960346   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:11:21.960360   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:11:22.038804   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:11:22.038839   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:11:22.082411   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:11:22.082444   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:11:22.134306   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:11:22.134339   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:11:22.147891   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:11:22.147919   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:11:22.216582   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:11:24.716879   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:11:24.729436   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:11:24.729506   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:11:24.782796   62996 cri.go:89] found id: ""
	I0914 18:11:24.782822   62996 logs.go:276] 0 containers: []
	W0914 18:11:24.782833   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:11:24.782842   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:11:24.782897   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:11:24.819075   62996 cri.go:89] found id: ""
	I0914 18:11:24.819101   62996 logs.go:276] 0 containers: []
	W0914 18:11:24.819108   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:11:24.819113   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:11:24.819157   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:11:24.852976   62996 cri.go:89] found id: ""
	I0914 18:11:24.853003   62996 logs.go:276] 0 containers: []
	W0914 18:11:24.853013   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:11:24.853020   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:11:24.853083   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:11:24.888010   62996 cri.go:89] found id: ""
	I0914 18:11:24.888041   62996 logs.go:276] 0 containers: []
	W0914 18:11:24.888053   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:11:24.888061   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:11:24.888140   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:11:24.923467   62996 cri.go:89] found id: ""
	I0914 18:11:24.923500   62996 logs.go:276] 0 containers: []
	W0914 18:11:24.923514   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:11:24.923522   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:11:24.923575   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:11:24.961976   62996 cri.go:89] found id: ""
	I0914 18:11:24.962003   62996 logs.go:276] 0 containers: []
	W0914 18:11:24.962011   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:11:24.962018   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:11:24.962079   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:11:24.995831   62996 cri.go:89] found id: ""
	I0914 18:11:24.995854   62996 logs.go:276] 0 containers: []
	W0914 18:11:24.995862   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:11:24.995868   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:11:24.995929   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:11:25.034793   62996 cri.go:89] found id: ""
	I0914 18:11:25.034822   62996 logs.go:276] 0 containers: []
	W0914 18:11:25.034832   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:11:25.034840   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:11:25.034855   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:11:25.048500   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:11:25.048531   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:11:25.120313   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:11:25.120346   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:11:25.120361   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:11:25.200361   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:11:25.200395   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:11:25.238898   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:11:25.238928   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:11:23.103266   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:25.104091   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:25.082804   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:27.582345   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:29.582482   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:27.001813   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:29.500751   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:27.791098   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:11:27.803729   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:11:27.803785   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:11:27.840688   62996 cri.go:89] found id: ""
	I0914 18:11:27.840711   62996 logs.go:276] 0 containers: []
	W0914 18:11:27.840719   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:11:27.840725   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:11:27.840775   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:11:27.874108   62996 cri.go:89] found id: ""
	I0914 18:11:27.874140   62996 logs.go:276] 0 containers: []
	W0914 18:11:27.874151   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:11:27.874176   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:11:27.874241   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:11:27.909352   62996 cri.go:89] found id: ""
	I0914 18:11:27.909392   62996 logs.go:276] 0 containers: []
	W0914 18:11:27.909403   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:11:27.909410   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:11:27.909460   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:11:27.942751   62996 cri.go:89] found id: ""
	I0914 18:11:27.942777   62996 logs.go:276] 0 containers: []
	W0914 18:11:27.942786   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:11:27.942791   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:11:27.942852   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:11:27.977714   62996 cri.go:89] found id: ""
	I0914 18:11:27.977745   62996 logs.go:276] 0 containers: []
	W0914 18:11:27.977756   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:11:27.977764   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:11:27.977830   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:11:28.013681   62996 cri.go:89] found id: ""
	I0914 18:11:28.013711   62996 logs.go:276] 0 containers: []
	W0914 18:11:28.013722   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:11:28.013730   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:11:28.013791   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:11:28.047112   62996 cri.go:89] found id: ""
	I0914 18:11:28.047138   62996 logs.go:276] 0 containers: []
	W0914 18:11:28.047146   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:11:28.047152   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:11:28.047199   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:11:28.084290   62996 cri.go:89] found id: ""
	I0914 18:11:28.084317   62996 logs.go:276] 0 containers: []
	W0914 18:11:28.084331   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:11:28.084340   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:11:28.084351   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:11:28.097720   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:11:28.097756   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:11:28.172054   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:11:28.172074   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:11:28.172085   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:11:28.253611   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:11:28.253644   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:11:28.289904   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:11:28.289938   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:11:27.105655   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:29.602893   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:32.082229   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:34.082649   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:31.501202   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:34.001997   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:30.839215   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:11:30.851580   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:11:30.851654   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:11:30.891232   62996 cri.go:89] found id: ""
	I0914 18:11:30.891261   62996 logs.go:276] 0 containers: []
	W0914 18:11:30.891272   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:11:30.891279   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:11:30.891346   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:11:30.930144   62996 cri.go:89] found id: ""
	I0914 18:11:30.930187   62996 logs.go:276] 0 containers: []
	W0914 18:11:30.930197   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:11:30.930204   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:11:30.930265   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:11:30.965034   62996 cri.go:89] found id: ""
	I0914 18:11:30.965068   62996 logs.go:276] 0 containers: []
	W0914 18:11:30.965080   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:11:30.965087   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:11:30.965150   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:11:30.998927   62996 cri.go:89] found id: ""
	I0914 18:11:30.998955   62996 logs.go:276] 0 containers: []
	W0914 18:11:30.998966   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:11:30.998974   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:11:30.999039   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:11:31.033789   62996 cri.go:89] found id: ""
	I0914 18:11:31.033820   62996 logs.go:276] 0 containers: []
	W0914 18:11:31.033830   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:11:31.033838   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:11:31.033892   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:11:31.068988   62996 cri.go:89] found id: ""
	I0914 18:11:31.069020   62996 logs.go:276] 0 containers: []
	W0914 18:11:31.069029   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:11:31.069035   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:11:31.069085   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:11:31.105904   62996 cri.go:89] found id: ""
	I0914 18:11:31.105932   62996 logs.go:276] 0 containers: []
	W0914 18:11:31.105944   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:11:31.105951   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:11:31.106018   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:11:31.147560   62996 cri.go:89] found id: ""
	I0914 18:11:31.147593   62996 logs.go:276] 0 containers: []
	W0914 18:11:31.147606   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:11:31.147618   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:11:31.147633   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:11:31.237347   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:11:31.237373   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:11:31.237389   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:11:31.322978   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:11:31.323012   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:11:31.361464   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:11:31.361495   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:11:31.417255   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:11:31.417299   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:11:33.930962   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:11:33.944431   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:11:33.944514   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:11:33.979727   62996 cri.go:89] found id: ""
	I0914 18:11:33.979761   62996 logs.go:276] 0 containers: []
	W0914 18:11:33.979772   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:11:33.979779   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:11:33.979840   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:11:34.015069   62996 cri.go:89] found id: ""
	I0914 18:11:34.015100   62996 logs.go:276] 0 containers: []
	W0914 18:11:34.015111   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:11:34.015117   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:11:34.015168   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:11:34.049230   62996 cri.go:89] found id: ""
	I0914 18:11:34.049262   62996 logs.go:276] 0 containers: []
	W0914 18:11:34.049274   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:11:34.049282   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:11:34.049345   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:11:34.086175   62996 cri.go:89] found id: ""
	I0914 18:11:34.086205   62996 logs.go:276] 0 containers: []
	W0914 18:11:34.086216   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:11:34.086225   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:11:34.086286   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:11:34.123534   62996 cri.go:89] found id: ""
	I0914 18:11:34.123563   62996 logs.go:276] 0 containers: []
	W0914 18:11:34.123573   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:11:34.123581   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:11:34.123645   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:11:34.160782   62996 cri.go:89] found id: ""
	I0914 18:11:34.160812   62996 logs.go:276] 0 containers: []
	W0914 18:11:34.160822   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:11:34.160830   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:11:34.160891   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:11:34.193240   62996 cri.go:89] found id: ""
	I0914 18:11:34.193264   62996 logs.go:276] 0 containers: []
	W0914 18:11:34.193272   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:11:34.193278   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:11:34.193336   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:11:34.232788   62996 cri.go:89] found id: ""
	I0914 18:11:34.232816   62996 logs.go:276] 0 containers: []
	W0914 18:11:34.232827   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:11:34.232838   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:11:34.232851   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:11:34.284953   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:11:34.284993   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:11:34.299462   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:11:34.299491   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:11:34.370596   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:11:34.370623   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:11:34.370638   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:11:34.450082   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:11:34.450118   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:11:32.103194   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:34.103615   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:36.603139   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:36.083120   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:38.582197   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:36.500663   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:38.501005   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:36.991625   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:11:37.009170   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:11:37.009229   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:11:37.044035   62996 cri.go:89] found id: ""
	I0914 18:11:37.044058   62996 logs.go:276] 0 containers: []
	W0914 18:11:37.044066   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:11:37.044072   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:11:37.044126   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:11:37.076288   62996 cri.go:89] found id: ""
	I0914 18:11:37.076318   62996 logs.go:276] 0 containers: []
	W0914 18:11:37.076328   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:11:37.076336   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:11:37.076399   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:11:37.110509   62996 cri.go:89] found id: ""
	I0914 18:11:37.110533   62996 logs.go:276] 0 containers: []
	W0914 18:11:37.110541   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:11:37.110553   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:11:37.110603   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:11:37.143688   62996 cri.go:89] found id: ""
	I0914 18:11:37.143713   62996 logs.go:276] 0 containers: []
	W0914 18:11:37.143721   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:11:37.143726   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:11:37.143781   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:11:37.180802   62996 cri.go:89] found id: ""
	I0914 18:11:37.180828   62996 logs.go:276] 0 containers: []
	W0914 18:11:37.180839   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:11:37.180846   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:11:37.180907   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:11:37.214590   62996 cri.go:89] found id: ""
	I0914 18:11:37.214615   62996 logs.go:276] 0 containers: []
	W0914 18:11:37.214623   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:11:37.214628   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:11:37.214674   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:11:37.246039   62996 cri.go:89] found id: ""
	I0914 18:11:37.246067   62996 logs.go:276] 0 containers: []
	W0914 18:11:37.246078   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:11:37.246085   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:11:37.246152   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:11:37.278258   62996 cri.go:89] found id: ""
	I0914 18:11:37.278299   62996 logs.go:276] 0 containers: []
	W0914 18:11:37.278307   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:11:37.278315   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:11:37.278325   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:11:37.315788   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:11:37.315817   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:11:37.367286   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:11:37.367322   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:11:37.380863   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:11:37.380894   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:11:37.447925   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:11:37.447948   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:11:37.447959   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:11:40.025419   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:11:40.038279   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:11:40.038361   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:11:40.072986   62996 cri.go:89] found id: ""
	I0914 18:11:40.073021   62996 logs.go:276] 0 containers: []
	W0914 18:11:40.073033   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:11:40.073041   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:11:40.073102   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:11:40.107636   62996 cri.go:89] found id: ""
	I0914 18:11:40.107657   62996 logs.go:276] 0 containers: []
	W0914 18:11:40.107665   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:11:40.107670   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:11:40.107723   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:11:40.145308   62996 cri.go:89] found id: ""
	I0914 18:11:40.145347   62996 logs.go:276] 0 containers: []
	W0914 18:11:40.145359   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:11:40.145366   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:11:40.145412   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:11:40.182409   62996 cri.go:89] found id: ""
	I0914 18:11:40.182439   62996 logs.go:276] 0 containers: []
	W0914 18:11:40.182449   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:11:40.182457   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:11:40.182522   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:11:40.217621   62996 cri.go:89] found id: ""
	I0914 18:11:40.217655   62996 logs.go:276] 0 containers: []
	W0914 18:11:40.217667   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:11:40.217675   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:11:40.217738   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:11:40.253159   62996 cri.go:89] found id: ""
	I0914 18:11:40.253186   62996 logs.go:276] 0 containers: []
	W0914 18:11:40.253197   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:11:40.253205   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:11:40.253263   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:11:40.286808   62996 cri.go:89] found id: ""
	I0914 18:11:40.286838   62996 logs.go:276] 0 containers: []
	W0914 18:11:40.286847   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:11:40.286855   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:11:40.286910   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:11:40.324265   62996 cri.go:89] found id: ""
	I0914 18:11:40.324292   62996 logs.go:276] 0 containers: []
	W0914 18:11:40.324299   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:11:40.324307   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:11:40.324318   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:11:38.603823   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:41.102313   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:40.583132   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:43.082387   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:40.501996   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:43.000447   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:40.376962   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:11:40.376996   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:11:40.390564   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:11:40.390594   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:11:40.460934   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:11:40.460956   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:11:40.460967   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:11:40.537058   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:11:40.537099   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:11:43.075401   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:11:43.088488   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:11:43.088559   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:11:43.122777   62996 cri.go:89] found id: ""
	I0914 18:11:43.122802   62996 logs.go:276] 0 containers: []
	W0914 18:11:43.122811   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:11:43.122818   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:11:43.122878   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:11:43.155343   62996 cri.go:89] found id: ""
	I0914 18:11:43.155369   62996 logs.go:276] 0 containers: []
	W0914 18:11:43.155378   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:11:43.155383   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:11:43.155443   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:11:43.190350   62996 cri.go:89] found id: ""
	I0914 18:11:43.190379   62996 logs.go:276] 0 containers: []
	W0914 18:11:43.190390   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:11:43.190398   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:11:43.190460   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:11:43.222930   62996 cri.go:89] found id: ""
	I0914 18:11:43.222961   62996 logs.go:276] 0 containers: []
	W0914 18:11:43.222972   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:11:43.222979   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:11:43.223042   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:11:43.256931   62996 cri.go:89] found id: ""
	I0914 18:11:43.256959   62996 logs.go:276] 0 containers: []
	W0914 18:11:43.256971   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:11:43.256977   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:11:43.257044   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:11:43.287691   62996 cri.go:89] found id: ""
	I0914 18:11:43.287720   62996 logs.go:276] 0 containers: []
	W0914 18:11:43.287729   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:11:43.287734   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:11:43.287790   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:11:43.320633   62996 cri.go:89] found id: ""
	I0914 18:11:43.320658   62996 logs.go:276] 0 containers: []
	W0914 18:11:43.320666   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:11:43.320677   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:11:43.320738   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:11:43.354230   62996 cri.go:89] found id: ""
	I0914 18:11:43.354269   62996 logs.go:276] 0 containers: []
	W0914 18:11:43.354280   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:11:43.354291   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:11:43.354304   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:11:43.429256   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:11:43.429293   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:11:43.467929   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:11:43.467957   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:11:43.521266   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:11:43.521305   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:11:43.536471   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:11:43.536511   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:11:43.607588   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:11:43.103756   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:45.602971   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:45.082762   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:47.582353   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:49.584026   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:45.500451   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:47.501831   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:50.001778   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:46.108756   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:11:46.121231   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:11:46.121314   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:11:46.156499   62996 cri.go:89] found id: ""
	I0914 18:11:46.156528   62996 logs.go:276] 0 containers: []
	W0914 18:11:46.156537   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:11:46.156543   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:11:46.156591   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:11:46.192161   62996 cri.go:89] found id: ""
	I0914 18:11:46.192188   62996 logs.go:276] 0 containers: []
	W0914 18:11:46.192197   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:11:46.192203   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:11:46.192263   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:11:46.222784   62996 cri.go:89] found id: ""
	I0914 18:11:46.222816   62996 logs.go:276] 0 containers: []
	W0914 18:11:46.222826   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:11:46.222834   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:11:46.222894   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:11:46.261551   62996 cri.go:89] found id: ""
	I0914 18:11:46.261577   62996 logs.go:276] 0 containers: []
	W0914 18:11:46.261587   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:11:46.261594   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:11:46.261659   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:11:46.298263   62996 cri.go:89] found id: ""
	I0914 18:11:46.298293   62996 logs.go:276] 0 containers: []
	W0914 18:11:46.298303   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:11:46.298311   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:11:46.298387   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:11:46.333477   62996 cri.go:89] found id: ""
	I0914 18:11:46.333502   62996 logs.go:276] 0 containers: []
	W0914 18:11:46.333510   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:11:46.333516   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:11:46.333581   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:11:46.367975   62996 cri.go:89] found id: ""
	I0914 18:11:46.367998   62996 logs.go:276] 0 containers: []
	W0914 18:11:46.368005   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:11:46.368011   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:11:46.368063   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:11:46.402252   62996 cri.go:89] found id: ""
	I0914 18:11:46.402281   62996 logs.go:276] 0 containers: []
	W0914 18:11:46.402293   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:11:46.402310   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:11:46.402329   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:11:46.477212   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:11:46.477252   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:11:46.515542   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:11:46.515568   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:11:46.570108   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:11:46.570146   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:11:46.585989   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:11:46.586019   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:11:46.658769   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:11:49.159920   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:11:49.172748   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:11:49.172810   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:11:49.213555   62996 cri.go:89] found id: ""
	I0914 18:11:49.213585   62996 logs.go:276] 0 containers: []
	W0914 18:11:49.213595   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:11:49.213601   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:11:49.213660   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:11:49.246022   62996 cri.go:89] found id: ""
	I0914 18:11:49.246050   62996 logs.go:276] 0 containers: []
	W0914 18:11:49.246061   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:11:49.246068   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:11:49.246132   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:11:49.279131   62996 cri.go:89] found id: ""
	I0914 18:11:49.279157   62996 logs.go:276] 0 containers: []
	W0914 18:11:49.279167   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:11:49.279175   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:11:49.279236   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:11:49.313159   62996 cri.go:89] found id: ""
	I0914 18:11:49.313187   62996 logs.go:276] 0 containers: []
	W0914 18:11:49.313199   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:11:49.313207   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:11:49.313272   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:11:49.347837   62996 cri.go:89] found id: ""
	I0914 18:11:49.347861   62996 logs.go:276] 0 containers: []
	W0914 18:11:49.347870   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:11:49.347875   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:11:49.347932   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:11:49.381478   62996 cri.go:89] found id: ""
	I0914 18:11:49.381507   62996 logs.go:276] 0 containers: []
	W0914 18:11:49.381516   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:11:49.381522   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:11:49.381577   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:11:49.417197   62996 cri.go:89] found id: ""
	I0914 18:11:49.417224   62996 logs.go:276] 0 containers: []
	W0914 18:11:49.417238   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:11:49.417244   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:11:49.417313   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:11:49.450806   62996 cri.go:89] found id: ""
	I0914 18:11:49.450843   62996 logs.go:276] 0 containers: []
	W0914 18:11:49.450857   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:11:49.450870   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:11:49.450889   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:11:49.519573   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:11:49.519620   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:11:49.519639   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:11:49.595525   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:11:49.595565   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:11:49.633229   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:11:49.633259   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:11:49.688667   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:11:49.688710   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:11:47.605117   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:50.103023   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:52.082751   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:54.582016   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:52.501977   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:55.000564   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:52.206555   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:11:52.218920   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:11:52.218996   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:11:52.253986   62996 cri.go:89] found id: ""
	I0914 18:11:52.254010   62996 logs.go:276] 0 containers: []
	W0914 18:11:52.254018   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:11:52.254023   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:11:52.254070   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:11:52.286590   62996 cri.go:89] found id: ""
	I0914 18:11:52.286618   62996 logs.go:276] 0 containers: []
	W0914 18:11:52.286629   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:11:52.286636   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:11:52.286698   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:11:52.325419   62996 cri.go:89] found id: ""
	I0914 18:11:52.325454   62996 logs.go:276] 0 containers: []
	W0914 18:11:52.325464   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:11:52.325471   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:11:52.325533   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:11:52.363050   62996 cri.go:89] found id: ""
	I0914 18:11:52.363079   62996 logs.go:276] 0 containers: []
	W0914 18:11:52.363091   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:11:52.363098   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:11:52.363160   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:11:52.400107   62996 cri.go:89] found id: ""
	I0914 18:11:52.400142   62996 logs.go:276] 0 containers: []
	W0914 18:11:52.400153   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:11:52.400162   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:11:52.400229   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:11:52.435711   62996 cri.go:89] found id: ""
	I0914 18:11:52.435735   62996 logs.go:276] 0 containers: []
	W0914 18:11:52.435744   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:11:52.435752   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:11:52.435806   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:11:52.470761   62996 cri.go:89] found id: ""
	I0914 18:11:52.470789   62996 logs.go:276] 0 containers: []
	W0914 18:11:52.470800   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:11:52.470808   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:11:52.470875   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:11:52.505680   62996 cri.go:89] found id: ""
	I0914 18:11:52.505705   62996 logs.go:276] 0 containers: []
	W0914 18:11:52.505714   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:11:52.505725   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:11:52.505745   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:11:52.557577   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:11:52.557616   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:11:52.571785   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:11:52.571817   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:11:52.639759   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:11:52.639790   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:11:52.639805   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:11:52.727022   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:11:52.727072   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:11:55.266381   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:11:55.279300   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:11:55.279376   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:11:55.315414   62996 cri.go:89] found id: ""
	I0914 18:11:55.315455   62996 logs.go:276] 0 containers: []
	W0914 18:11:55.315463   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:11:55.315472   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:11:55.315539   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:11:52.603110   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:54.603267   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:56.582121   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:58.583277   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:57.001624   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:59.501328   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:55.350153   62996 cri.go:89] found id: ""
	I0914 18:11:55.350203   62996 logs.go:276] 0 containers: []
	W0914 18:11:55.350213   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:11:55.350218   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:11:55.350296   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:11:55.387403   62996 cri.go:89] found id: ""
	I0914 18:11:55.387437   62996 logs.go:276] 0 containers: []
	W0914 18:11:55.387459   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:11:55.387467   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:11:55.387522   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:11:55.424532   62996 cri.go:89] found id: ""
	I0914 18:11:55.424558   62996 logs.go:276] 0 containers: []
	W0914 18:11:55.424566   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:11:55.424575   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:11:55.424664   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:11:55.462423   62996 cri.go:89] found id: ""
	I0914 18:11:55.462458   62996 logs.go:276] 0 containers: []
	W0914 18:11:55.462468   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:11:55.462475   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:11:55.462536   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:11:55.496865   62996 cri.go:89] found id: ""
	I0914 18:11:55.496900   62996 logs.go:276] 0 containers: []
	W0914 18:11:55.496911   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:11:55.496921   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:11:55.496986   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:11:55.531524   62996 cri.go:89] found id: ""
	I0914 18:11:55.531566   62996 logs.go:276] 0 containers: []
	W0914 18:11:55.531577   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:11:55.531598   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:11:55.531663   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:11:55.566579   62996 cri.go:89] found id: ""
	I0914 18:11:55.566606   62996 logs.go:276] 0 containers: []
	W0914 18:11:55.566615   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:11:55.566623   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:11:55.566635   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:11:55.621074   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:11:55.621122   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:11:55.635805   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:11:55.635832   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:11:55.702346   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:11:55.702373   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:11:55.702387   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:11:55.778589   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:11:55.778639   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:11:58.317118   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:11:58.330312   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:11:58.330382   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:11:58.363550   62996 cri.go:89] found id: ""
	I0914 18:11:58.363587   62996 logs.go:276] 0 containers: []
	W0914 18:11:58.363598   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:11:58.363606   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:11:58.363669   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:11:58.397152   62996 cri.go:89] found id: ""
	I0914 18:11:58.397183   62996 logs.go:276] 0 containers: []
	W0914 18:11:58.397194   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:11:58.397201   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:11:58.397259   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:11:58.435076   62996 cri.go:89] found id: ""
	I0914 18:11:58.435102   62996 logs.go:276] 0 containers: []
	W0914 18:11:58.435111   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:11:58.435116   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:11:58.435184   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:11:58.471455   62996 cri.go:89] found id: ""
	I0914 18:11:58.471479   62996 logs.go:276] 0 containers: []
	W0914 18:11:58.471487   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:11:58.471493   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:11:58.471551   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:11:58.504545   62996 cri.go:89] found id: ""
	I0914 18:11:58.504586   62996 logs.go:276] 0 containers: []
	W0914 18:11:58.504596   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:11:58.504603   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:11:58.504662   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:11:58.539335   62996 cri.go:89] found id: ""
	I0914 18:11:58.539362   62996 logs.go:276] 0 containers: []
	W0914 18:11:58.539376   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:11:58.539383   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:11:58.539431   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:11:58.579707   62996 cri.go:89] found id: ""
	I0914 18:11:58.579737   62996 logs.go:276] 0 containers: []
	W0914 18:11:58.579747   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:11:58.579755   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:11:58.579814   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:11:58.614227   62996 cri.go:89] found id: ""
	I0914 18:11:58.614250   62996 logs.go:276] 0 containers: []
	W0914 18:11:58.614259   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:11:58.614266   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:11:58.614279   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:11:58.699846   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:11:58.699888   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:11:58.738513   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:11:58.738542   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:11:58.787858   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:11:58.787895   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:11:58.801103   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:11:58.801137   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:11:58.868291   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:11:57.102934   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:59.103345   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:01.604125   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:01.083045   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:03.582885   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:01.501890   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:04.001023   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:01.368810   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:12:01.381287   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:12:01.381359   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:12:01.414556   62996 cri.go:89] found id: ""
	I0914 18:12:01.414587   62996 logs.go:276] 0 containers: []
	W0914 18:12:01.414599   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:12:01.414611   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:12:01.414661   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:12:01.447765   62996 cri.go:89] found id: ""
	I0914 18:12:01.447795   62996 logs.go:276] 0 containers: []
	W0914 18:12:01.447806   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:12:01.447813   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:12:01.447875   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:12:01.481012   62996 cri.go:89] found id: ""
	I0914 18:12:01.481045   62996 logs.go:276] 0 containers: []
	W0914 18:12:01.481057   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:12:01.481065   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:12:01.481126   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:12:01.516999   62996 cri.go:89] found id: ""
	I0914 18:12:01.517024   62996 logs.go:276] 0 containers: []
	W0914 18:12:01.517031   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:12:01.517037   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:12:01.517088   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:12:01.555520   62996 cri.go:89] found id: ""
	I0914 18:12:01.555548   62996 logs.go:276] 0 containers: []
	W0914 18:12:01.555559   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:12:01.555566   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:12:01.555642   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:12:01.589581   62996 cri.go:89] found id: ""
	I0914 18:12:01.589606   62996 logs.go:276] 0 containers: []
	W0914 18:12:01.589616   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:12:01.589624   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:12:01.589691   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:12:01.623955   62996 cri.go:89] found id: ""
	I0914 18:12:01.623983   62996 logs.go:276] 0 containers: []
	W0914 18:12:01.623995   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:12:01.624002   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:12:01.624067   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:12:01.659136   62996 cri.go:89] found id: ""
	I0914 18:12:01.659166   62996 logs.go:276] 0 containers: []
	W0914 18:12:01.659177   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:12:01.659187   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:12:01.659206   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:12:01.711812   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:12:01.711849   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:12:01.724934   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:12:01.724968   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:12:01.793052   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:12:01.793079   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:12:01.793091   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:12:01.866761   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:12:01.866799   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:12:04.406435   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:12:04.419756   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:12:04.419818   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:12:04.456593   62996 cri.go:89] found id: ""
	I0914 18:12:04.456621   62996 logs.go:276] 0 containers: []
	W0914 18:12:04.456632   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:12:04.456639   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:12:04.456689   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:12:04.489281   62996 cri.go:89] found id: ""
	I0914 18:12:04.489314   62996 logs.go:276] 0 containers: []
	W0914 18:12:04.489326   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:12:04.489333   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:12:04.489399   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:12:04.525353   62996 cri.go:89] found id: ""
	I0914 18:12:04.525381   62996 logs.go:276] 0 containers: []
	W0914 18:12:04.525391   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:12:04.525398   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:12:04.525464   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:12:04.558495   62996 cri.go:89] found id: ""
	I0914 18:12:04.558520   62996 logs.go:276] 0 containers: []
	W0914 18:12:04.558531   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:12:04.558539   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:12:04.558598   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:12:04.594815   62996 cri.go:89] found id: ""
	I0914 18:12:04.594837   62996 logs.go:276] 0 containers: []
	W0914 18:12:04.594845   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:12:04.594851   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:12:04.594899   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:12:04.630198   62996 cri.go:89] found id: ""
	I0914 18:12:04.630224   62996 logs.go:276] 0 containers: []
	W0914 18:12:04.630232   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:12:04.630238   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:12:04.630294   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:12:04.665328   62996 cri.go:89] found id: ""
	I0914 18:12:04.665358   62996 logs.go:276] 0 containers: []
	W0914 18:12:04.665368   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:12:04.665373   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:12:04.665432   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:12:04.699778   62996 cri.go:89] found id: ""
	I0914 18:12:04.699801   62996 logs.go:276] 0 containers: []
	W0914 18:12:04.699809   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:12:04.699816   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:12:04.699877   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:12:04.750978   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:12:04.751022   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:12:04.764968   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:12:04.764998   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:12:04.839464   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:12:04.839494   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:12:04.839509   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:12:04.917939   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:12:04.917979   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:12:04.103388   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:06.103725   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:06.083003   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:08.581415   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:06.002052   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:08.500393   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:07.459389   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:12:07.472630   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:12:07.472691   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:12:07.507993   62996 cri.go:89] found id: ""
	I0914 18:12:07.508029   62996 logs.go:276] 0 containers: []
	W0914 18:12:07.508040   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:12:07.508047   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:12:07.508110   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:12:07.541083   62996 cri.go:89] found id: ""
	I0914 18:12:07.541108   62996 logs.go:276] 0 containers: []
	W0914 18:12:07.541116   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:12:07.541121   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:12:07.541184   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:12:07.574973   62996 cri.go:89] found id: ""
	I0914 18:12:07.574995   62996 logs.go:276] 0 containers: []
	W0914 18:12:07.575003   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:12:07.575008   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:12:07.575052   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:12:07.610166   62996 cri.go:89] found id: ""
	I0914 18:12:07.610189   62996 logs.go:276] 0 containers: []
	W0914 18:12:07.610196   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:12:07.610202   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:12:07.610247   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:12:07.643090   62996 cri.go:89] found id: ""
	I0914 18:12:07.643118   62996 logs.go:276] 0 containers: []
	W0914 18:12:07.643129   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:12:07.643140   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:12:07.643201   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:12:07.676788   62996 cri.go:89] found id: ""
	I0914 18:12:07.676814   62996 logs.go:276] 0 containers: []
	W0914 18:12:07.676825   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:12:07.676832   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:12:07.676895   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:12:07.714122   62996 cri.go:89] found id: ""
	I0914 18:12:07.714147   62996 logs.go:276] 0 containers: []
	W0914 18:12:07.714173   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:12:07.714179   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:12:07.714226   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:12:07.748168   62996 cri.go:89] found id: ""
	I0914 18:12:07.748193   62996 logs.go:276] 0 containers: []
	W0914 18:12:07.748204   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:12:07.748214   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:12:07.748230   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:12:07.784739   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:12:07.784766   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:12:07.833431   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:12:07.833467   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:12:07.846072   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:12:07.846100   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:12:07.912540   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:12:07.912560   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:12:07.912584   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:12:08.602880   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:10.604231   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:10.582647   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:13.082818   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:10.500953   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:13.001310   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:10.488543   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:12:10.502119   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:12:10.502203   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:12:10.535390   62996 cri.go:89] found id: ""
	I0914 18:12:10.535420   62996 logs.go:276] 0 containers: []
	W0914 18:12:10.535429   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:12:10.535435   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:12:10.535487   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:12:10.572013   62996 cri.go:89] found id: ""
	I0914 18:12:10.572044   62996 logs.go:276] 0 containers: []
	W0914 18:12:10.572052   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:12:10.572057   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:12:10.572105   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:12:10.613597   62996 cri.go:89] found id: ""
	I0914 18:12:10.613621   62996 logs.go:276] 0 containers: []
	W0914 18:12:10.613628   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:12:10.613634   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:12:10.613693   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:12:10.646086   62996 cri.go:89] found id: ""
	I0914 18:12:10.646116   62996 logs.go:276] 0 containers: []
	W0914 18:12:10.646127   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:12:10.646134   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:12:10.646219   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:12:10.679228   62996 cri.go:89] found id: ""
	I0914 18:12:10.679261   62996 logs.go:276] 0 containers: []
	W0914 18:12:10.679273   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:12:10.679281   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:12:10.679340   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:12:10.713321   62996 cri.go:89] found id: ""
	I0914 18:12:10.713350   62996 logs.go:276] 0 containers: []
	W0914 18:12:10.713359   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:12:10.713365   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:12:10.713413   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:12:10.757767   62996 cri.go:89] found id: ""
	I0914 18:12:10.757794   62996 logs.go:276] 0 containers: []
	W0914 18:12:10.757802   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:12:10.757809   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:12:10.757854   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:12:10.797709   62996 cri.go:89] found id: ""
	I0914 18:12:10.797731   62996 logs.go:276] 0 containers: []
	W0914 18:12:10.797739   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:12:10.797747   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:12:10.797757   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:12:10.848431   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:12:10.848474   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:12:10.862205   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:12:10.862239   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:12:10.935215   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:12:10.935242   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:12:10.935260   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:12:11.019021   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:12:11.019056   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:12:13.560773   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:12:13.574835   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:12:13.574899   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:12:13.613543   62996 cri.go:89] found id: ""
	I0914 18:12:13.613569   62996 logs.go:276] 0 containers: []
	W0914 18:12:13.613582   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:12:13.613587   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:12:13.613646   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:12:13.650721   62996 cri.go:89] found id: ""
	I0914 18:12:13.650755   62996 logs.go:276] 0 containers: []
	W0914 18:12:13.650767   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:12:13.650775   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:12:13.650836   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:12:13.684269   62996 cri.go:89] found id: ""
	I0914 18:12:13.684299   62996 logs.go:276] 0 containers: []
	W0914 18:12:13.684310   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:12:13.684317   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:12:13.684376   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:12:13.726440   62996 cri.go:89] found id: ""
	I0914 18:12:13.726474   62996 logs.go:276] 0 containers: []
	W0914 18:12:13.726486   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:12:13.726503   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:12:13.726567   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:12:13.760835   62996 cri.go:89] found id: ""
	I0914 18:12:13.760865   62996 logs.go:276] 0 containers: []
	W0914 18:12:13.760876   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:12:13.760884   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:12:13.760957   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:12:13.801341   62996 cri.go:89] found id: ""
	I0914 18:12:13.801375   62996 logs.go:276] 0 containers: []
	W0914 18:12:13.801386   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:12:13.801394   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:12:13.801456   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:12:13.834307   62996 cri.go:89] found id: ""
	I0914 18:12:13.834332   62996 logs.go:276] 0 containers: []
	W0914 18:12:13.834350   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:12:13.834357   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:12:13.834439   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:12:13.868838   62996 cri.go:89] found id: ""
	I0914 18:12:13.868871   62996 logs.go:276] 0 containers: []
	W0914 18:12:13.868880   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:12:13.868889   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:12:13.868900   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:12:13.919867   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:12:13.919906   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:12:13.933383   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:12:13.933423   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:12:14.010559   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:12:14.010592   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:12:14.010606   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:12:14.087876   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:12:14.087913   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:12:13.103254   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:15.103641   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:15.083238   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:17.582387   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:15.501029   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:17.505028   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:20.001929   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:16.630473   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:12:16.643114   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:12:16.643196   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:12:16.680922   62996 cri.go:89] found id: ""
	I0914 18:12:16.680954   62996 logs.go:276] 0 containers: []
	W0914 18:12:16.680962   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:12:16.680968   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:12:16.681015   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:12:16.715549   62996 cri.go:89] found id: ""
	I0914 18:12:16.715582   62996 logs.go:276] 0 containers: []
	W0914 18:12:16.715592   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:12:16.715598   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:12:16.715666   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:12:16.753928   62996 cri.go:89] found id: ""
	I0914 18:12:16.753951   62996 logs.go:276] 0 containers: []
	W0914 18:12:16.753962   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:12:16.753969   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:12:16.754033   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:12:16.787677   62996 cri.go:89] found id: ""
	I0914 18:12:16.787705   62996 logs.go:276] 0 containers: []
	W0914 18:12:16.787716   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:12:16.787723   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:12:16.787776   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:12:16.823638   62996 cri.go:89] found id: ""
	I0914 18:12:16.823667   62996 logs.go:276] 0 containers: []
	W0914 18:12:16.823678   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:12:16.823686   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:12:16.823748   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:12:16.860204   62996 cri.go:89] found id: ""
	I0914 18:12:16.860238   62996 logs.go:276] 0 containers: []
	W0914 18:12:16.860249   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:12:16.860257   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:12:16.860329   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:12:16.898802   62996 cri.go:89] found id: ""
	I0914 18:12:16.898827   62996 logs.go:276] 0 containers: []
	W0914 18:12:16.898837   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:12:16.898854   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:12:16.898941   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:12:16.932719   62996 cri.go:89] found id: ""
	I0914 18:12:16.932745   62996 logs.go:276] 0 containers: []
	W0914 18:12:16.932753   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:12:16.932762   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:12:16.932779   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:12:16.986217   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:12:16.986257   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:12:17.003243   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:12:17.003278   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:12:17.071374   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:12:17.071397   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:12:17.071409   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:12:17.152058   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:12:17.152112   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:12:19.717782   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:12:19.731122   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:12:19.731199   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:12:19.769042   62996 cri.go:89] found id: ""
	I0914 18:12:19.769070   62996 logs.go:276] 0 containers: []
	W0914 18:12:19.769079   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:12:19.769084   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:12:19.769154   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:12:19.804666   62996 cri.go:89] found id: ""
	I0914 18:12:19.804691   62996 logs.go:276] 0 containers: []
	W0914 18:12:19.804698   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:12:19.804704   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:12:19.804761   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:12:19.838705   62996 cri.go:89] found id: ""
	I0914 18:12:19.838729   62996 logs.go:276] 0 containers: []
	W0914 18:12:19.838738   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:12:19.838744   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:12:19.838790   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:12:19.873412   62996 cri.go:89] found id: ""
	I0914 18:12:19.873441   62996 logs.go:276] 0 containers: []
	W0914 18:12:19.873449   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:12:19.873455   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:12:19.873535   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:12:19.917706   62996 cri.go:89] found id: ""
	I0914 18:12:19.917734   62996 logs.go:276] 0 containers: []
	W0914 18:12:19.917746   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:12:19.917754   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:12:19.917813   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:12:19.956149   62996 cri.go:89] found id: ""
	I0914 18:12:19.956177   62996 logs.go:276] 0 containers: []
	W0914 18:12:19.956188   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:12:19.956196   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:12:19.956255   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:12:19.988903   62996 cri.go:89] found id: ""
	I0914 18:12:19.988926   62996 logs.go:276] 0 containers: []
	W0914 18:12:19.988934   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:12:19.988939   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:12:19.988988   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:12:20.023785   62996 cri.go:89] found id: ""
	I0914 18:12:20.023814   62996 logs.go:276] 0 containers: []
	W0914 18:12:20.023823   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:12:20.023833   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:12:20.023846   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:12:20.036891   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:12:20.036918   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:12:20.112397   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:12:20.112422   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:12:20.112437   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:12:20.195767   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:12:20.195801   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:12:20.235439   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:12:20.235467   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:12:17.103996   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:19.603109   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:21.603150   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:20.083547   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:22.586009   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:22.002367   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:24.500394   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:22.784765   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:12:22.799193   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:12:22.799267   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:12:22.840939   62996 cri.go:89] found id: ""
	I0914 18:12:22.840974   62996 logs.go:276] 0 containers: []
	W0914 18:12:22.840983   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:12:22.840990   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:12:22.841051   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:12:22.878920   62996 cri.go:89] found id: ""
	I0914 18:12:22.878951   62996 logs.go:276] 0 containers: []
	W0914 18:12:22.878962   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:12:22.878970   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:12:22.879021   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:12:22.926127   62996 cri.go:89] found id: ""
	I0914 18:12:22.926175   62996 logs.go:276] 0 containers: []
	W0914 18:12:22.926187   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:12:22.926195   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:12:22.926250   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:12:22.972041   62996 cri.go:89] found id: ""
	I0914 18:12:22.972068   62996 logs.go:276] 0 containers: []
	W0914 18:12:22.972076   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:12:22.972082   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:12:22.972137   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:12:23.012662   62996 cri.go:89] found id: ""
	I0914 18:12:23.012694   62996 logs.go:276] 0 containers: []
	W0914 18:12:23.012705   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:12:23.012712   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:12:23.012772   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:12:23.058923   62996 cri.go:89] found id: ""
	I0914 18:12:23.058950   62996 logs.go:276] 0 containers: []
	W0914 18:12:23.058958   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:12:23.058963   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:12:23.059011   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:12:23.098275   62996 cri.go:89] found id: ""
	I0914 18:12:23.098308   62996 logs.go:276] 0 containers: []
	W0914 18:12:23.098320   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:12:23.098327   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:12:23.098380   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:12:23.133498   62996 cri.go:89] found id: ""
	I0914 18:12:23.133525   62996 logs.go:276] 0 containers: []
	W0914 18:12:23.133534   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:12:23.133542   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:12:23.133554   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:12:23.201430   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:12:23.201456   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:12:23.201470   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:12:23.282388   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:12:23.282424   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:12:23.319896   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:12:23.319924   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:12:23.373629   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:12:23.373664   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:12:23.603351   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:26.103668   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:25.082824   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:27.582534   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:27.001617   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:29.002224   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:25.887183   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:12:25.901089   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:12:25.901168   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:12:25.934112   62996 cri.go:89] found id: ""
	I0914 18:12:25.934138   62996 logs.go:276] 0 containers: []
	W0914 18:12:25.934147   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:12:25.934153   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:12:25.934210   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:12:25.969202   62996 cri.go:89] found id: ""
	I0914 18:12:25.969228   62996 logs.go:276] 0 containers: []
	W0914 18:12:25.969236   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:12:25.969242   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:12:25.969300   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:12:26.005516   62996 cri.go:89] found id: ""
	I0914 18:12:26.005537   62996 logs.go:276] 0 containers: []
	W0914 18:12:26.005545   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:12:26.005551   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:12:26.005622   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:12:26.039162   62996 cri.go:89] found id: ""
	I0914 18:12:26.039189   62996 logs.go:276] 0 containers: []
	W0914 18:12:26.039199   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:12:26.039206   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:12:26.039266   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:12:26.073626   62996 cri.go:89] found id: ""
	I0914 18:12:26.073660   62996 logs.go:276] 0 containers: []
	W0914 18:12:26.073674   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:12:26.073682   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:12:26.073752   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:12:26.112057   62996 cri.go:89] found id: ""
	I0914 18:12:26.112086   62996 logs.go:276] 0 containers: []
	W0914 18:12:26.112097   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:12:26.112104   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:12:26.112168   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:12:26.145874   62996 cri.go:89] found id: ""
	I0914 18:12:26.145903   62996 logs.go:276] 0 containers: []
	W0914 18:12:26.145915   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:12:26.145923   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:12:26.145978   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:12:26.178959   62996 cri.go:89] found id: ""
	I0914 18:12:26.178989   62996 logs.go:276] 0 containers: []
	W0914 18:12:26.178997   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:12:26.179005   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:12:26.179018   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:12:26.251132   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:12:26.251156   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:12:26.251174   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:12:26.327488   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:12:26.327528   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:12:26.368444   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:12:26.368471   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:12:26.422676   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:12:26.422715   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:12:28.936784   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:12:28.960435   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:12:28.960515   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:12:29.012679   62996 cri.go:89] found id: ""
	I0914 18:12:29.012710   62996 logs.go:276] 0 containers: []
	W0914 18:12:29.012721   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:12:29.012729   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:12:29.012786   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:12:29.045058   62996 cri.go:89] found id: ""
	I0914 18:12:29.045091   62996 logs.go:276] 0 containers: []
	W0914 18:12:29.045102   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:12:29.045115   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:12:29.045180   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:12:29.079176   62996 cri.go:89] found id: ""
	I0914 18:12:29.079202   62996 logs.go:276] 0 containers: []
	W0914 18:12:29.079209   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:12:29.079216   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:12:29.079279   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:12:29.114288   62996 cri.go:89] found id: ""
	I0914 18:12:29.114317   62996 logs.go:276] 0 containers: []
	W0914 18:12:29.114337   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:12:29.114344   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:12:29.114404   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:12:29.147554   62996 cri.go:89] found id: ""
	I0914 18:12:29.147578   62996 logs.go:276] 0 containers: []
	W0914 18:12:29.147586   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:12:29.147592   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:12:29.147653   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:12:29.181739   62996 cri.go:89] found id: ""
	I0914 18:12:29.181767   62996 logs.go:276] 0 containers: []
	W0914 18:12:29.181775   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:12:29.181781   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:12:29.181825   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:12:29.220328   62996 cri.go:89] found id: ""
	I0914 18:12:29.220356   62996 logs.go:276] 0 containers: []
	W0914 18:12:29.220364   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:12:29.220373   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:12:29.220429   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:12:29.250900   62996 cri.go:89] found id: ""
	I0914 18:12:29.250929   62996 logs.go:276] 0 containers: []
	W0914 18:12:29.250941   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:12:29.250951   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:12:29.250966   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:12:29.287790   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:12:29.287820   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:12:29.338153   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:12:29.338194   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:12:29.351520   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:12:29.351547   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:12:29.421429   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:12:29.421457   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:12:29.421471   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:12:28.104044   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:30.602717   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:30.083027   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:32.083454   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:34.582698   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:31.002459   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:33.500924   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:31.997578   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:12:32.011256   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:12:32.011331   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:12:32.043761   62996 cri.go:89] found id: ""
	I0914 18:12:32.043793   62996 logs.go:276] 0 containers: []
	W0914 18:12:32.043801   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:12:32.043806   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:12:32.043859   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:12:32.076497   62996 cri.go:89] found id: ""
	I0914 18:12:32.076526   62996 logs.go:276] 0 containers: []
	W0914 18:12:32.076536   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:12:32.076543   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:12:32.076609   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:12:32.115059   62996 cri.go:89] found id: ""
	I0914 18:12:32.115084   62996 logs.go:276] 0 containers: []
	W0914 18:12:32.115094   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:12:32.115100   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:12:32.115159   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:12:32.153078   62996 cri.go:89] found id: ""
	I0914 18:12:32.153109   62996 logs.go:276] 0 containers: []
	W0914 18:12:32.153124   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:12:32.153130   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:12:32.153179   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:12:32.190539   62996 cri.go:89] found id: ""
	I0914 18:12:32.190621   62996 logs.go:276] 0 containers: []
	W0914 18:12:32.190638   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:12:32.190647   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:12:32.190700   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:12:32.231917   62996 cri.go:89] found id: ""
	I0914 18:12:32.231941   62996 logs.go:276] 0 containers: []
	W0914 18:12:32.231949   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:12:32.231955   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:12:32.232013   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:12:32.266197   62996 cri.go:89] found id: ""
	I0914 18:12:32.266227   62996 logs.go:276] 0 containers: []
	W0914 18:12:32.266238   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:12:32.266245   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:12:32.266312   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:12:32.299357   62996 cri.go:89] found id: ""
	I0914 18:12:32.299387   62996 logs.go:276] 0 containers: []
	W0914 18:12:32.299398   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:12:32.299409   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:12:32.299424   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:12:32.353225   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:12:32.353268   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:12:32.368228   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:12:32.368280   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:12:32.447802   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:12:32.447829   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:12:32.447847   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:12:32.523749   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:12:32.523788   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:12:35.063750   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:12:35.078487   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:12:35.078565   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:12:35.112949   62996 cri.go:89] found id: ""
	I0914 18:12:35.112994   62996 logs.go:276] 0 containers: []
	W0914 18:12:35.113008   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:12:35.113015   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:12:35.113068   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:12:35.146890   62996 cri.go:89] found id: ""
	I0914 18:12:35.146921   62996 logs.go:276] 0 containers: []
	W0914 18:12:35.146933   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:12:35.146941   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:12:35.147019   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:12:35.181077   62996 cri.go:89] found id: ""
	I0914 18:12:35.181106   62996 logs.go:276] 0 containers: []
	W0914 18:12:35.181116   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:12:35.181123   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:12:35.181194   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:12:35.214142   62996 cri.go:89] found id: ""
	I0914 18:12:35.214191   62996 logs.go:276] 0 containers: []
	W0914 18:12:35.214203   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:12:35.214215   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:12:35.214275   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:12:35.246615   62996 cri.go:89] found id: ""
	I0914 18:12:35.246644   62996 logs.go:276] 0 containers: []
	W0914 18:12:35.246655   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:12:35.246662   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:12:35.246722   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:12:35.278996   62996 cri.go:89] found id: ""
	I0914 18:12:35.279027   62996 logs.go:276] 0 containers: []
	W0914 18:12:35.279038   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:12:35.279047   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:12:35.279104   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:12:35.312612   62996 cri.go:89] found id: ""
	I0914 18:12:35.312641   62996 logs.go:276] 0 containers: []
	W0914 18:12:35.312650   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:12:35.312655   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:12:35.312711   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:12:32.603673   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:35.103528   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:37.081632   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:39.082269   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:35.501391   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:38.000592   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:40.001479   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:35.347717   62996 cri.go:89] found id: ""
	I0914 18:12:35.347741   62996 logs.go:276] 0 containers: []
	W0914 18:12:35.347749   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:12:35.347757   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:12:35.347767   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:12:35.389062   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:12:35.389090   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:12:35.437235   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:12:35.437277   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:12:35.452236   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:12:35.452275   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:12:35.523334   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:12:35.523371   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:12:35.523396   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:12:38.105613   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:12:38.119147   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:12:38.119214   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:12:38.158373   62996 cri.go:89] found id: ""
	I0914 18:12:38.158397   62996 logs.go:276] 0 containers: []
	W0914 18:12:38.158404   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:12:38.158410   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:12:38.158467   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:12:38.192376   62996 cri.go:89] found id: ""
	I0914 18:12:38.192409   62996 logs.go:276] 0 containers: []
	W0914 18:12:38.192421   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:12:38.192429   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:12:38.192490   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:12:38.230390   62996 cri.go:89] found id: ""
	I0914 18:12:38.230413   62996 logs.go:276] 0 containers: []
	W0914 18:12:38.230422   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:12:38.230427   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:12:38.230476   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:12:38.266608   62996 cri.go:89] found id: ""
	I0914 18:12:38.266634   62996 logs.go:276] 0 containers: []
	W0914 18:12:38.266642   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:12:38.266648   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:12:38.266704   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:12:38.299437   62996 cri.go:89] found id: ""
	I0914 18:12:38.299462   62996 logs.go:276] 0 containers: []
	W0914 18:12:38.299471   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:12:38.299477   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:12:38.299548   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:12:38.331092   62996 cri.go:89] found id: ""
	I0914 18:12:38.331119   62996 logs.go:276] 0 containers: []
	W0914 18:12:38.331128   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:12:38.331135   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:12:38.331194   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:12:38.364447   62996 cri.go:89] found id: ""
	I0914 18:12:38.364475   62996 logs.go:276] 0 containers: []
	W0914 18:12:38.364485   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:12:38.364491   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:12:38.364564   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:12:38.396977   62996 cri.go:89] found id: ""
	I0914 18:12:38.397001   62996 logs.go:276] 0 containers: []
	W0914 18:12:38.397011   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:12:38.397022   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:12:38.397036   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:12:38.477413   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:12:38.477449   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:12:38.515003   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:12:38.515031   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:12:38.567177   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:12:38.567222   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:12:38.580840   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:12:38.580876   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:12:38.654520   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:12:37.602537   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:39.603422   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:41.082861   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:43.583680   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:42.002259   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:44.500927   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:41.154728   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:12:41.167501   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:12:41.167578   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:12:41.200209   62996 cri.go:89] found id: ""
	I0914 18:12:41.200243   62996 logs.go:276] 0 containers: []
	W0914 18:12:41.200254   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:12:41.200260   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:12:41.200309   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:12:41.232386   62996 cri.go:89] found id: ""
	I0914 18:12:41.232415   62996 logs.go:276] 0 containers: []
	W0914 18:12:41.232425   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:12:41.232432   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:12:41.232515   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:12:41.268259   62996 cri.go:89] found id: ""
	I0914 18:12:41.268285   62996 logs.go:276] 0 containers: []
	W0914 18:12:41.268295   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:12:41.268303   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:12:41.268374   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:12:41.299952   62996 cri.go:89] found id: ""
	I0914 18:12:41.299984   62996 logs.go:276] 0 containers: []
	W0914 18:12:41.299992   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:12:41.299998   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:12:41.300055   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:12:41.331851   62996 cri.go:89] found id: ""
	I0914 18:12:41.331877   62996 logs.go:276] 0 containers: []
	W0914 18:12:41.331886   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:12:41.331892   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:12:41.331941   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:12:41.373747   62996 cri.go:89] found id: ""
	I0914 18:12:41.373778   62996 logs.go:276] 0 containers: []
	W0914 18:12:41.373789   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:12:41.373797   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:12:41.373847   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:12:41.410186   62996 cri.go:89] found id: ""
	I0914 18:12:41.410217   62996 logs.go:276] 0 containers: []
	W0914 18:12:41.410228   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:12:41.410235   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:12:41.410296   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:12:41.443926   62996 cri.go:89] found id: ""
	I0914 18:12:41.443961   62996 logs.go:276] 0 containers: []
	W0914 18:12:41.443972   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:12:41.443983   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:12:41.443998   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:12:41.457188   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:12:41.457226   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:12:41.525140   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:12:41.525165   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:12:41.525179   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:12:41.603829   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:12:41.603858   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:12:41.641462   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:12:41.641495   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:12:44.194009   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:12:44.207043   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:12:44.207112   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:12:44.240082   62996 cri.go:89] found id: ""
	I0914 18:12:44.240104   62996 logs.go:276] 0 containers: []
	W0914 18:12:44.240112   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:12:44.240117   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:12:44.240177   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:12:44.271608   62996 cri.go:89] found id: ""
	I0914 18:12:44.271642   62996 logs.go:276] 0 containers: []
	W0914 18:12:44.271653   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:12:44.271660   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:12:44.271721   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:12:44.308447   62996 cri.go:89] found id: ""
	I0914 18:12:44.308475   62996 logs.go:276] 0 containers: []
	W0914 18:12:44.308484   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:12:44.308490   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:12:44.308552   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:12:44.340399   62996 cri.go:89] found id: ""
	I0914 18:12:44.340430   62996 logs.go:276] 0 containers: []
	W0914 18:12:44.340440   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:12:44.340446   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:12:44.340502   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:12:44.374078   62996 cri.go:89] found id: ""
	I0914 18:12:44.374104   62996 logs.go:276] 0 containers: []
	W0914 18:12:44.374112   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:12:44.374118   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:12:44.374190   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:12:44.408933   62996 cri.go:89] found id: ""
	I0914 18:12:44.408963   62996 logs.go:276] 0 containers: []
	W0914 18:12:44.408974   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:12:44.408982   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:12:44.409040   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:12:44.444019   62996 cri.go:89] found id: ""
	I0914 18:12:44.444046   62996 logs.go:276] 0 containers: []
	W0914 18:12:44.444063   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:12:44.444070   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:12:44.444126   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:12:44.477033   62996 cri.go:89] found id: ""
	I0914 18:12:44.477058   62996 logs.go:276] 0 containers: []
	W0914 18:12:44.477066   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:12:44.477075   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:12:44.477086   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:12:44.530118   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:12:44.530151   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:12:44.543295   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:12:44.543327   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:12:44.614448   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:12:44.614474   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:12:44.614488   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:12:44.690708   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:12:44.690744   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:12:42.103521   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:44.603744   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:46.082955   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:48.576914   62554 pod_ready.go:82] duration metric: took 4m0.000963266s for pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace to be "Ready" ...
	E0914 18:12:48.576953   62554 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace to be "Ready" (will not retry!)
	I0914 18:12:48.576972   62554 pod_ready.go:39] duration metric: took 4m11.061091965s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 18:12:48.576996   62554 kubeadm.go:597] duration metric: took 4m18.578277603s to restartPrimaryControlPlane
	W0914 18:12:48.577052   62554 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0914 18:12:48.577082   62554 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0914 18:12:46.501278   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:49.001649   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:47.229658   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:12:47.242715   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:12:47.242785   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:12:47.278275   62996 cri.go:89] found id: ""
	I0914 18:12:47.278298   62996 logs.go:276] 0 containers: []
	W0914 18:12:47.278305   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:12:47.278311   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:12:47.278365   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:12:47.313954   62996 cri.go:89] found id: ""
	I0914 18:12:47.313977   62996 logs.go:276] 0 containers: []
	W0914 18:12:47.313985   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:12:47.313991   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:12:47.314045   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:12:47.350944   62996 cri.go:89] found id: ""
	I0914 18:12:47.350972   62996 logs.go:276] 0 containers: []
	W0914 18:12:47.350983   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:12:47.350990   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:12:47.351052   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:12:47.384810   62996 cri.go:89] found id: ""
	I0914 18:12:47.384838   62996 logs.go:276] 0 containers: []
	W0914 18:12:47.384850   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:12:47.384857   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:12:47.384918   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:12:47.420380   62996 cri.go:89] found id: ""
	I0914 18:12:47.420406   62996 logs.go:276] 0 containers: []
	W0914 18:12:47.420419   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:12:47.420425   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:12:47.420476   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:12:47.453967   62996 cri.go:89] found id: ""
	I0914 18:12:47.453995   62996 logs.go:276] 0 containers: []
	W0914 18:12:47.454003   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:12:47.454009   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:12:47.454060   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:12:47.488588   62996 cri.go:89] found id: ""
	I0914 18:12:47.488616   62996 logs.go:276] 0 containers: []
	W0914 18:12:47.488627   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:12:47.488633   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:12:47.488696   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:12:47.522970   62996 cri.go:89] found id: ""
	I0914 18:12:47.523004   62996 logs.go:276] 0 containers: []
	W0914 18:12:47.523015   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:12:47.523025   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:12:47.523039   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:12:47.575977   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:12:47.576026   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:12:47.590854   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:12:47.590884   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:12:47.662149   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:12:47.662200   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:12:47.662215   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:12:47.740447   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:12:47.740482   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:12:50.279512   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:12:50.292294   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:12:50.292377   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:12:50.330928   62996 cri.go:89] found id: ""
	I0914 18:12:50.330960   62996 logs.go:276] 0 containers: []
	W0914 18:12:50.330972   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:12:50.330980   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:12:50.331036   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:12:47.103834   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:49.104052   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:51.603479   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:51.500469   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:53.500885   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:50.363656   62996 cri.go:89] found id: ""
	I0914 18:12:50.363687   62996 logs.go:276] 0 containers: []
	W0914 18:12:50.363696   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:12:50.363702   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:12:50.363756   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:12:50.395071   62996 cri.go:89] found id: ""
	I0914 18:12:50.395096   62996 logs.go:276] 0 containers: []
	W0914 18:12:50.395107   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:12:50.395113   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:12:50.395172   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:12:50.428461   62996 cri.go:89] found id: ""
	I0914 18:12:50.428487   62996 logs.go:276] 0 containers: []
	W0914 18:12:50.428495   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:12:50.428502   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:12:50.428549   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:12:50.461059   62996 cri.go:89] found id: ""
	I0914 18:12:50.461089   62996 logs.go:276] 0 containers: []
	W0914 18:12:50.461098   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:12:50.461105   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:12:50.461155   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:12:50.495447   62996 cri.go:89] found id: ""
	I0914 18:12:50.495481   62996 logs.go:276] 0 containers: []
	W0914 18:12:50.495492   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:12:50.495500   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:12:50.495574   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:12:50.529535   62996 cri.go:89] found id: ""
	I0914 18:12:50.529563   62996 logs.go:276] 0 containers: []
	W0914 18:12:50.529573   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:12:50.529580   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:12:50.529640   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:12:50.564648   62996 cri.go:89] found id: ""
	I0914 18:12:50.564679   62996 logs.go:276] 0 containers: []
	W0914 18:12:50.564689   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:12:50.564699   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:12:50.564710   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:12:50.639039   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:12:50.639066   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:12:50.639081   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:12:50.715636   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:12:50.715675   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:12:50.752973   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:12:50.753002   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:12:50.804654   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:12:50.804692   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:12:53.319420   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:12:53.332322   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:12:53.332414   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:12:53.370250   62996 cri.go:89] found id: ""
	I0914 18:12:53.370287   62996 logs.go:276] 0 containers: []
	W0914 18:12:53.370298   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:12:53.370306   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:12:53.370359   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:12:53.405394   62996 cri.go:89] found id: ""
	I0914 18:12:53.405422   62996 logs.go:276] 0 containers: []
	W0914 18:12:53.405434   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:12:53.405442   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:12:53.405501   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:12:53.439653   62996 cri.go:89] found id: ""
	I0914 18:12:53.439684   62996 logs.go:276] 0 containers: []
	W0914 18:12:53.439693   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:12:53.439699   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:12:53.439747   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:12:53.472491   62996 cri.go:89] found id: ""
	I0914 18:12:53.472520   62996 logs.go:276] 0 containers: []
	W0914 18:12:53.472531   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:12:53.472537   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:12:53.472598   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:12:53.506837   62996 cri.go:89] found id: ""
	I0914 18:12:53.506862   62996 logs.go:276] 0 containers: []
	W0914 18:12:53.506870   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:12:53.506877   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:12:53.506940   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:12:53.538229   62996 cri.go:89] found id: ""
	I0914 18:12:53.538256   62996 logs.go:276] 0 containers: []
	W0914 18:12:53.538267   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:12:53.538274   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:12:53.538340   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:12:53.570628   62996 cri.go:89] found id: ""
	I0914 18:12:53.570654   62996 logs.go:276] 0 containers: []
	W0914 18:12:53.570665   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:12:53.570672   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:12:53.570736   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:12:53.606147   62996 cri.go:89] found id: ""
	I0914 18:12:53.606188   62996 logs.go:276] 0 containers: []
	W0914 18:12:53.606199   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:12:53.606210   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:12:53.606236   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:12:53.675807   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:12:53.675829   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:12:53.675844   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:12:53.758491   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:12:53.758530   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:12:53.796006   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:12:53.796038   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:12:53.844935   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:12:53.844972   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:12:53.604109   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:56.104639   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:56.360696   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:12:56.374916   62996 kubeadm.go:597] duration metric: took 4m2.856242026s to restartPrimaryControlPlane
	W0914 18:12:56.374982   62996 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0914 18:12:56.375003   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0914 18:12:57.043509   62996 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 18:12:57.059022   62996 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0914 18:12:57.070295   62996 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0914 18:12:57.080854   62996 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0914 18:12:57.080875   62996 kubeadm.go:157] found existing configuration files:
	
	I0914 18:12:57.080917   62996 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0914 18:12:57.091221   62996 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0914 18:12:57.091320   62996 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0914 18:12:57.102011   62996 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0914 18:12:57.111389   62996 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0914 18:12:57.111451   62996 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0914 18:12:57.120508   62996 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0914 18:12:57.129086   62996 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0914 18:12:57.129162   62996 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0914 18:12:57.138193   62996 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0914 18:12:57.146637   62996 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0914 18:12:57.146694   62996 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0914 18:12:57.155659   62996 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0914 18:12:57.230872   62996 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0914 18:12:57.230955   62996 kubeadm.go:310] [preflight] Running pre-flight checks
	I0914 18:12:57.369118   62996 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0914 18:12:57.369267   62996 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0914 18:12:57.369422   62996 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0914 18:12:57.560020   62996 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0914 18:12:57.561972   62996 out.go:235]   - Generating certificates and keys ...
	I0914 18:12:57.562086   62996 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0914 18:12:57.562180   62996 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0914 18:12:57.562311   62996 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0914 18:12:57.562370   62996 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0914 18:12:57.562426   62996 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0914 18:12:57.562473   62996 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0914 18:12:57.562562   62996 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0914 18:12:57.562654   62996 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0914 18:12:57.563036   62996 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0914 18:12:57.563429   62996 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0914 18:12:57.563514   62996 kubeadm.go:310] [certs] Using the existing "sa" key
	I0914 18:12:57.563592   62996 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0914 18:12:57.677534   62996 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0914 18:12:57.910852   62996 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0914 18:12:58.037495   62996 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0914 18:12:58.325552   62996 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0914 18:12:58.339574   62996 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0914 18:12:58.340671   62996 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0914 18:12:58.340740   62996 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0914 18:12:58.485582   62996 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0914 18:12:55.501202   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:57.501413   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:13:00.000020   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:58.488706   62996 out.go:235]   - Booting up control plane ...
	I0914 18:12:58.488863   62996 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0914 18:12:58.496924   62996 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0914 18:12:58.499125   62996 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0914 18:12:58.500762   62996 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0914 18:12:58.504049   62996 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0914 18:12:58.604461   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:13:01.102988   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:13:02.001195   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:13:04.001938   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:13:03.603700   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:13:06.103294   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:13:06.501564   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:13:09.002049   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:13:08.604408   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:13:11.103401   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:13:14.788734   62554 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.2116254s)
	I0914 18:13:14.788816   62554 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 18:13:14.810488   62554 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0914 18:13:14.827773   62554 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0914 18:13:14.846933   62554 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0914 18:13:14.846958   62554 kubeadm.go:157] found existing configuration files:
	
	I0914 18:13:14.847011   62554 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0914 18:13:14.859886   62554 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0914 18:13:14.859954   62554 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0914 18:13:14.882400   62554 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0914 18:13:14.896700   62554 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0914 18:13:14.896779   62554 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0914 18:13:14.908567   62554 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0914 18:13:14.920718   62554 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0914 18:13:14.920791   62554 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0914 18:13:14.930849   62554 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0914 18:13:14.940757   62554 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0914 18:13:14.940829   62554 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0914 18:13:14.950828   62554 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0914 18:13:15.000219   62554 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0914 18:13:15.000292   62554 kubeadm.go:310] [preflight] Running pre-flight checks
	I0914 18:13:15.116662   62554 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0914 18:13:15.116830   62554 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0914 18:13:15.116937   62554 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0914 18:13:15.128493   62554 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0914 18:13:11.002219   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:13:13.500397   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:13:15.130231   62554 out.go:235]   - Generating certificates and keys ...
	I0914 18:13:15.130322   62554 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0914 18:13:15.130412   62554 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0914 18:13:15.130513   62554 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0914 18:13:15.130642   62554 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0914 18:13:15.130762   62554 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0914 18:13:15.130842   62554 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0914 18:13:15.130927   62554 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0914 18:13:15.131020   62554 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0914 18:13:15.131131   62554 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0914 18:13:15.131235   62554 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0914 18:13:15.131325   62554 kubeadm.go:310] [certs] Using the existing "sa" key
	I0914 18:13:15.131417   62554 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0914 18:13:15.454691   62554 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0914 18:13:15.653046   62554 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0914 18:13:15.704029   62554 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0914 18:13:15.846280   62554 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0914 18:13:15.926881   62554 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0914 18:13:15.927633   62554 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0914 18:13:15.932596   62554 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0914 18:13:13.602971   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:13:15.603335   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:13:15.934499   62554 out.go:235]   - Booting up control plane ...
	I0914 18:13:15.934626   62554 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0914 18:13:15.934761   62554 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0914 18:13:15.934913   62554 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0914 18:13:15.952982   62554 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0914 18:13:15.961449   62554 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0914 18:13:15.961526   62554 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0914 18:13:16.102126   62554 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0914 18:13:16.102335   62554 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0914 18:13:16.604217   62554 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.082287ms
	I0914 18:13:16.604330   62554 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0914 18:13:15.501231   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:13:17.501427   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:13:19.501641   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:13:21.609408   62554 kubeadm.go:310] [api-check] The API server is healthy after 5.002255971s
	I0914 18:13:21.622798   62554 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0914 18:13:21.637103   62554 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0914 18:13:21.676498   62554 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0914 18:13:21.676739   62554 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-044534 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0914 18:13:21.697522   62554 kubeadm.go:310] [bootstrap-token] Using token: oo4rrp.xx4py1wjxiu1i6la
	I0914 18:13:17.604060   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:13:20.103115   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:13:21.699311   62554 out.go:235]   - Configuring RBAC rules ...
	I0914 18:13:21.699462   62554 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0914 18:13:21.711614   62554 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0914 18:13:21.721449   62554 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0914 18:13:21.727812   62554 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0914 18:13:21.733486   62554 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0914 18:13:21.747521   62554 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0914 18:13:22.014670   62554 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0914 18:13:22.463865   62554 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0914 18:13:23.016165   62554 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0914 18:13:23.016195   62554 kubeadm.go:310] 
	I0914 18:13:23.016257   62554 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0914 18:13:23.016265   62554 kubeadm.go:310] 
	I0914 18:13:23.016385   62554 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0914 18:13:23.016415   62554 kubeadm.go:310] 
	I0914 18:13:23.016456   62554 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0914 18:13:23.016542   62554 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0914 18:13:23.016627   62554 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0914 18:13:23.016637   62554 kubeadm.go:310] 
	I0914 18:13:23.016753   62554 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0914 18:13:23.016778   62554 kubeadm.go:310] 
	I0914 18:13:23.016850   62554 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0914 18:13:23.016860   62554 kubeadm.go:310] 
	I0914 18:13:23.016937   62554 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0914 18:13:23.017051   62554 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0914 18:13:23.017142   62554 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0914 18:13:23.017156   62554 kubeadm.go:310] 
	I0914 18:13:23.017284   62554 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0914 18:13:23.017403   62554 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0914 18:13:23.017419   62554 kubeadm.go:310] 
	I0914 18:13:23.017533   62554 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token oo4rrp.xx4py1wjxiu1i6la \
	I0914 18:13:23.017664   62554 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:30a8b6e98108730ec92a246ad3f4e746211e79f910581e46cd69c4cd78384600 \
	I0914 18:13:23.017700   62554 kubeadm.go:310] 	--control-plane 
	I0914 18:13:23.017710   62554 kubeadm.go:310] 
	I0914 18:13:23.017821   62554 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0914 18:13:23.017832   62554 kubeadm.go:310] 
	I0914 18:13:23.017944   62554 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token oo4rrp.xx4py1wjxiu1i6la \
	I0914 18:13:23.018104   62554 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:30a8b6e98108730ec92a246ad3f4e746211e79f910581e46cd69c4cd78384600 
	I0914 18:13:23.019098   62554 kubeadm.go:310] W0914 18:13:14.968906    2543 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0914 18:13:23.019512   62554 kubeadm.go:310] W0914 18:13:14.970621    2543 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0914 18:13:23.019672   62554 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0914 18:13:23.019690   62554 cni.go:84] Creating CNI manager for ""
	I0914 18:13:23.019704   62554 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0914 18:13:23.021459   62554 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0914 18:13:23.022517   62554 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0914 18:13:23.037352   62554 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0914 18:13:23.062037   62554 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0914 18:13:23.062132   62554 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 18:13:23.062202   62554 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-044534 minikube.k8s.io/updated_at=2024_09_14T18_13_23_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=fbeeb744274463b05401c917e5ab21bbaf5ef95a minikube.k8s.io/name=embed-certs-044534 minikube.k8s.io/primary=true
	I0914 18:13:23.089789   62554 ops.go:34] apiserver oom_adj: -16
	I0914 18:13:23.246478   62554 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 18:13:23.747419   62554 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 18:13:24.247388   62554 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 18:13:24.746913   62554 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 18:13:21.502222   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:13:24.001757   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:13:25.247445   62554 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 18:13:25.747417   62554 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 18:13:26.247440   62554 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 18:13:26.747262   62554 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 18:13:26.847454   62554 kubeadm.go:1113] duration metric: took 3.78538549s to wait for elevateKubeSystemPrivileges
	I0914 18:13:26.847496   62554 kubeadm.go:394] duration metric: took 4m56.896825398s to StartCluster
	I0914 18:13:26.847521   62554 settings.go:142] acquiring lock: {Name:mkee31cd57c3a91eff581c48ba7961460fb3e0b1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 18:13:26.847618   62554 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19643-8806/kubeconfig
	I0914 18:13:26.850148   62554 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19643-8806/kubeconfig: {Name:mk850f3e2f3c2ef1e81c795ae72e22e459e27140 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 18:13:26.850488   62554 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.126 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0914 18:13:26.850562   62554 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0914 18:13:26.850672   62554 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-044534"
	I0914 18:13:26.850690   62554 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-044534"
	W0914 18:13:26.850703   62554 addons.go:243] addon storage-provisioner should already be in state true
	I0914 18:13:26.850715   62554 addons.go:69] Setting default-storageclass=true in profile "embed-certs-044534"
	I0914 18:13:26.850734   62554 host.go:66] Checking if "embed-certs-044534" exists ...
	I0914 18:13:26.850753   62554 config.go:182] Loaded profile config "embed-certs-044534": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0914 18:13:26.850752   62554 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-044534"
	I0914 18:13:26.850716   62554 addons.go:69] Setting metrics-server=true in profile "embed-certs-044534"
	I0914 18:13:26.850844   62554 addons.go:234] Setting addon metrics-server=true in "embed-certs-044534"
	W0914 18:13:26.850860   62554 addons.go:243] addon metrics-server should already be in state true
	I0914 18:13:26.850898   62554 host.go:66] Checking if "embed-certs-044534" exists ...
	I0914 18:13:26.851174   62554 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 18:13:26.851204   62554 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 18:13:26.851214   62554 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 18:13:26.851235   62554 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 18:13:26.851250   62554 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 18:13:26.851273   62554 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 18:13:26.852030   62554 out.go:177] * Verifying Kubernetes components...
	I0914 18:13:26.853580   62554 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 18:13:26.868084   62554 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33879
	I0914 18:13:26.868135   62554 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44049
	I0914 18:13:26.868700   62554 main.go:141] libmachine: () Calling .GetVersion
	I0914 18:13:26.868787   62554 main.go:141] libmachine: () Calling .GetVersion
	I0914 18:13:26.869251   62554 main.go:141] libmachine: Using API Version  1
	I0914 18:13:26.869282   62554 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 18:13:26.869637   62554 main.go:141] libmachine: () Calling .GetMachineName
	I0914 18:13:26.869650   62554 main.go:141] libmachine: Using API Version  1
	I0914 18:13:26.869714   62554 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 18:13:26.870039   62554 main.go:141] libmachine: () Calling .GetMachineName
	I0914 18:13:26.870232   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetState
	I0914 18:13:26.870396   62554 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 18:13:26.870454   62554 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 18:13:26.871718   62554 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33295
	I0914 18:13:26.872337   62554 main.go:141] libmachine: () Calling .GetVersion
	I0914 18:13:26.872842   62554 main.go:141] libmachine: Using API Version  1
	I0914 18:13:26.872870   62554 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 18:13:26.873227   62554 main.go:141] libmachine: () Calling .GetMachineName
	I0914 18:13:26.873942   62554 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 18:13:26.873989   62554 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 18:13:26.874235   62554 addons.go:234] Setting addon default-storageclass=true in "embed-certs-044534"
	W0914 18:13:26.874257   62554 addons.go:243] addon default-storageclass should already be in state true
	I0914 18:13:26.874287   62554 host.go:66] Checking if "embed-certs-044534" exists ...
	I0914 18:13:26.874674   62554 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 18:13:26.874721   62554 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 18:13:26.887685   62554 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38275
	I0914 18:13:26.888211   62554 main.go:141] libmachine: () Calling .GetVersion
	I0914 18:13:26.888735   62554 main.go:141] libmachine: Using API Version  1
	I0914 18:13:26.888753   62554 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 18:13:26.889060   62554 main.go:141] libmachine: () Calling .GetMachineName
	I0914 18:13:26.889233   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetState
	I0914 18:13:26.891040   62554 main.go:141] libmachine: (embed-certs-044534) Calling .DriverName
	I0914 18:13:26.892012   62554 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41983
	I0914 18:13:26.892352   62554 main.go:141] libmachine: () Calling .GetVersion
	I0914 18:13:26.892798   62554 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 18:13:26.892812   62554 main.go:141] libmachine: Using API Version  1
	I0914 18:13:26.892845   62554 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 18:13:26.893321   62554 main.go:141] libmachine: () Calling .GetMachineName
	I0914 18:13:26.893987   62554 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 18:13:26.894040   62554 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 18:13:26.894059   62554 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0914 18:13:26.894078   62554 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0914 18:13:26.894102   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetSSHHostname
	I0914 18:13:26.897218   62554 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39143
	I0914 18:13:26.897776   62554 main.go:141] libmachine: () Calling .GetVersion
	I0914 18:13:26.897932   62554 main.go:141] libmachine: (embed-certs-044534) DBG | domain embed-certs-044534 has defined MAC address 52:54:00:f7:d3:8e in network mk-embed-certs-044534
	I0914 18:13:26.898631   62554 main.go:141] libmachine: (embed-certs-044534) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:d3:8e", ip: ""} in network mk-embed-certs-044534: {Iface:virbr3 ExpiryTime:2024-09-14 19:00:16 +0000 UTC Type:0 Mac:52:54:00:f7:d3:8e Iaid: IPaddr:192.168.50.126 Prefix:24 Hostname:embed-certs-044534 Clientid:01:52:54:00:f7:d3:8e}
	I0914 18:13:26.898669   62554 main.go:141] libmachine: (embed-certs-044534) DBG | domain embed-certs-044534 has defined IP address 192.168.50.126 and MAC address 52:54:00:f7:d3:8e in network mk-embed-certs-044534
	I0914 18:13:26.899315   62554 main.go:141] libmachine: Using API Version  1
	I0914 18:13:26.899382   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetSSHPort
	I0914 18:13:26.899395   62554 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 18:13:26.899557   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetSSHKeyPath
	I0914 18:13:26.899698   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetSSHUsername
	I0914 18:13:26.899873   62554 sshutil.go:53] new ssh client: &{IP:192.168.50.126 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/embed-certs-044534/id_rsa Username:docker}
	I0914 18:13:26.900433   62554 main.go:141] libmachine: () Calling .GetMachineName
	I0914 18:13:26.900668   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetState
	I0914 18:13:26.902863   62554 main.go:141] libmachine: (embed-certs-044534) Calling .DriverName
	I0914 18:13:26.904569   62554 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0914 18:13:22.104620   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:13:24.603793   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:13:26.604247   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:13:26.905708   62554 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0914 18:13:26.905729   62554 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0914 18:13:26.905755   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetSSHHostname
	I0914 18:13:26.910848   62554 main.go:141] libmachine: (embed-certs-044534) DBG | domain embed-certs-044534 has defined MAC address 52:54:00:f7:d3:8e in network mk-embed-certs-044534
	I0914 18:13:26.911333   62554 main.go:141] libmachine: (embed-certs-044534) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:d3:8e", ip: ""} in network mk-embed-certs-044534: {Iface:virbr3 ExpiryTime:2024-09-14 19:00:16 +0000 UTC Type:0 Mac:52:54:00:f7:d3:8e Iaid: IPaddr:192.168.50.126 Prefix:24 Hostname:embed-certs-044534 Clientid:01:52:54:00:f7:d3:8e}
	I0914 18:13:26.911430   62554 main.go:141] libmachine: (embed-certs-044534) DBG | domain embed-certs-044534 has defined IP address 192.168.50.126 and MAC address 52:54:00:f7:d3:8e in network mk-embed-certs-044534
	I0914 18:13:26.911568   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetSSHPort
	I0914 18:13:26.911840   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetSSHKeyPath
	I0914 18:13:26.912025   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetSSHUsername
	I0914 18:13:26.912238   62554 sshutil.go:53] new ssh client: &{IP:192.168.50.126 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/embed-certs-044534/id_rsa Username:docker}
	I0914 18:13:26.912625   62554 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33289
	I0914 18:13:26.913014   62554 main.go:141] libmachine: () Calling .GetVersion
	I0914 18:13:26.913653   62554 main.go:141] libmachine: Using API Version  1
	I0914 18:13:26.913668   62554 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 18:13:26.914116   62554 main.go:141] libmachine: () Calling .GetMachineName
	I0914 18:13:26.914342   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetState
	I0914 18:13:26.916119   62554 main.go:141] libmachine: (embed-certs-044534) Calling .DriverName
	I0914 18:13:26.916332   62554 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0914 18:13:26.916350   62554 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0914 18:13:26.916369   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetSSHHostname
	I0914 18:13:26.920129   62554 main.go:141] libmachine: (embed-certs-044534) DBG | domain embed-certs-044534 has defined MAC address 52:54:00:f7:d3:8e in network mk-embed-certs-044534
	I0914 18:13:26.920769   62554 main.go:141] libmachine: (embed-certs-044534) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:d3:8e", ip: ""} in network mk-embed-certs-044534: {Iface:virbr3 ExpiryTime:2024-09-14 19:00:16 +0000 UTC Type:0 Mac:52:54:00:f7:d3:8e Iaid: IPaddr:192.168.50.126 Prefix:24 Hostname:embed-certs-044534 Clientid:01:52:54:00:f7:d3:8e}
	I0914 18:13:26.920791   62554 main.go:141] libmachine: (embed-certs-044534) DBG | domain embed-certs-044534 has defined IP address 192.168.50.126 and MAC address 52:54:00:f7:d3:8e in network mk-embed-certs-044534
	I0914 18:13:26.920971   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetSSHPort
	I0914 18:13:26.921170   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetSSHKeyPath
	I0914 18:13:26.921291   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetSSHUsername
	I0914 18:13:26.921413   62554 sshutil.go:53] new ssh client: &{IP:192.168.50.126 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/embed-certs-044534/id_rsa Username:docker}
	I0914 18:13:27.055184   62554 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0914 18:13:27.072683   62554 node_ready.go:35] waiting up to 6m0s for node "embed-certs-044534" to be "Ready" ...
	I0914 18:13:27.084289   62554 node_ready.go:49] node "embed-certs-044534" has status "Ready":"True"
	I0914 18:13:27.084317   62554 node_ready.go:38] duration metric: took 11.599354ms for node "embed-certs-044534" to be "Ready" ...
	I0914 18:13:27.084326   62554 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 18:13:27.090428   62554 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-044534" in "kube-system" namespace to be "Ready" ...
	I0914 18:13:27.258854   62554 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0914 18:13:27.260576   62554 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0914 18:13:27.261092   62554 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0914 18:13:27.261115   62554 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0914 18:13:27.332882   62554 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0914 18:13:27.332914   62554 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0914 18:13:27.400159   62554 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0914 18:13:27.400193   62554 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0914 18:13:27.486731   62554 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0914 18:13:28.164139   62554 main.go:141] libmachine: Making call to close driver server
	I0914 18:13:28.164171   62554 main.go:141] libmachine: (embed-certs-044534) Calling .Close
	I0914 18:13:28.164215   62554 main.go:141] libmachine: Making call to close driver server
	I0914 18:13:28.164242   62554 main.go:141] libmachine: (embed-certs-044534) Calling .Close
	I0914 18:13:28.164581   62554 main.go:141] libmachine: Successfully made call to close driver server
	I0914 18:13:28.164593   62554 main.go:141] libmachine: Successfully made call to close driver server
	I0914 18:13:28.164596   62554 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 18:13:28.164597   62554 main.go:141] libmachine: (embed-certs-044534) DBG | Closing plugin on server side
	I0914 18:13:28.164608   62554 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 18:13:28.164569   62554 main.go:141] libmachine: (embed-certs-044534) DBG | Closing plugin on server side
	I0914 18:13:28.164619   62554 main.go:141] libmachine: Making call to close driver server
	I0914 18:13:28.164621   62554 main.go:141] libmachine: Making call to close driver server
	I0914 18:13:28.164627   62554 main.go:141] libmachine: (embed-certs-044534) Calling .Close
	I0914 18:13:28.164629   62554 main.go:141] libmachine: (embed-certs-044534) Calling .Close
	I0914 18:13:28.164874   62554 main.go:141] libmachine: Successfully made call to close driver server
	I0914 18:13:28.164897   62554 main.go:141] libmachine: (embed-certs-044534) DBG | Closing plugin on server side
	I0914 18:13:28.164911   62554 main.go:141] libmachine: (embed-certs-044534) DBG | Closing plugin on server side
	I0914 18:13:28.164902   62554 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 18:13:28.164929   62554 main.go:141] libmachine: Successfully made call to close driver server
	I0914 18:13:28.164941   62554 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 18:13:28.196171   62554 main.go:141] libmachine: Making call to close driver server
	I0914 18:13:28.196197   62554 main.go:141] libmachine: (embed-certs-044534) Calling .Close
	I0914 18:13:28.196530   62554 main.go:141] libmachine: Successfully made call to close driver server
	I0914 18:13:28.196590   62554 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 18:13:28.196634   62554 main.go:141] libmachine: (embed-certs-044534) DBG | Closing plugin on server side
	I0914 18:13:28.509915   62554 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.023114908s)
	I0914 18:13:28.509973   62554 main.go:141] libmachine: Making call to close driver server
	I0914 18:13:28.509989   62554 main.go:141] libmachine: (embed-certs-044534) Calling .Close
	I0914 18:13:28.510276   62554 main.go:141] libmachine: Successfully made call to close driver server
	I0914 18:13:28.510329   62554 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 18:13:28.510348   62554 main.go:141] libmachine: (embed-certs-044534) DBG | Closing plugin on server side
	I0914 18:13:28.510365   62554 main.go:141] libmachine: Making call to close driver server
	I0914 18:13:28.510374   62554 main.go:141] libmachine: (embed-certs-044534) Calling .Close
	I0914 18:13:28.510614   62554 main.go:141] libmachine: (embed-certs-044534) DBG | Closing plugin on server side
	I0914 18:13:28.510653   62554 main.go:141] libmachine: Successfully made call to close driver server
	I0914 18:13:28.510665   62554 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 18:13:28.510678   62554 addons.go:475] Verifying addon metrics-server=true in "embed-certs-044534"
	I0914 18:13:28.512283   62554 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0914 18:13:28.513593   62554 addons.go:510] duration metric: took 1.663035459s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0914 18:13:29.103964   62554 pod_ready.go:103] pod "etcd-embed-certs-044534" in "kube-system" namespace has status "Ready":"False"
	I0914 18:13:26.501135   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:13:28.502181   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:13:28.605176   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:13:31.102817   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:13:31.596452   62554 pod_ready.go:103] pod "etcd-embed-certs-044534" in "kube-system" namespace has status "Ready":"False"
	I0914 18:13:33.596699   62554 pod_ready.go:103] pod "etcd-embed-certs-044534" in "kube-system" namespace has status "Ready":"False"
	I0914 18:13:31.001070   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:13:32.001946   63448 pod_ready.go:82] duration metric: took 4m0.00767403s for pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace to be "Ready" ...
	E0914 18:13:32.001975   63448 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0914 18:13:32.001987   63448 pod_ready.go:39] duration metric: took 4m5.051544016s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 18:13:32.002004   63448 api_server.go:52] waiting for apiserver process to appear ...
	I0914 18:13:32.002037   63448 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:13:32.002093   63448 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:13:32.053241   63448 cri.go:89] found id: "6c532e45713d01c37f899bd4c9308e6c7fceb95accae13da1549ffb195e44bc4"
	I0914 18:13:32.053276   63448 cri.go:89] found id: ""
	I0914 18:13:32.053287   63448 logs.go:276] 1 containers: [6c532e45713d01c37f899bd4c9308e6c7fceb95accae13da1549ffb195e44bc4]
	I0914 18:13:32.053349   63448 ssh_runner.go:195] Run: which crictl
	I0914 18:13:32.057854   63448 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:13:32.057921   63448 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:13:32.099294   63448 cri.go:89] found id: "7fb6567a7b9f36f21430774a159db944e9060eac3c8fdf9abc50b9c56a2b0377"
	I0914 18:13:32.099318   63448 cri.go:89] found id: ""
	I0914 18:13:32.099328   63448 logs.go:276] 1 containers: [7fb6567a7b9f36f21430774a159db944e9060eac3c8fdf9abc50b9c56a2b0377]
	I0914 18:13:32.099375   63448 ssh_runner.go:195] Run: which crictl
	I0914 18:13:32.103674   63448 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:13:32.103745   63448 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:13:32.144190   63448 cri.go:89] found id: "02a31bf75666cfcbe85dcb42259279071420c3b26de20d6d667bcd8ffef77d86"
	I0914 18:13:32.144219   63448 cri.go:89] found id: ""
	I0914 18:13:32.144228   63448 logs.go:276] 1 containers: [02a31bf75666cfcbe85dcb42259279071420c3b26de20d6d667bcd8ffef77d86]
	I0914 18:13:32.144275   63448 ssh_runner.go:195] Run: which crictl
	I0914 18:13:32.148382   63448 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:13:32.148443   63448 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:13:32.185779   63448 cri.go:89] found id: "a390e6c0153550b206976ab4e172cf3236aac7e9e5671c332d8c0f8c4e567e0b"
	I0914 18:13:32.185807   63448 cri.go:89] found id: ""
	I0914 18:13:32.185814   63448 logs.go:276] 1 containers: [a390e6c0153550b206976ab4e172cf3236aac7e9e5671c332d8c0f8c4e567e0b]
	I0914 18:13:32.185864   63448 ssh_runner.go:195] Run: which crictl
	I0914 18:13:32.189478   63448 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:13:32.189545   63448 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:13:32.224657   63448 cri.go:89] found id: "a5c3b65e96ba854d6ceb4a824ba5076d43cff0b7a1978617b69271d5b88cca4d"
	I0914 18:13:32.224681   63448 cri.go:89] found id: ""
	I0914 18:13:32.224690   63448 logs.go:276] 1 containers: [a5c3b65e96ba854d6ceb4a824ba5076d43cff0b7a1978617b69271d5b88cca4d]
	I0914 18:13:32.224745   63448 ssh_runner.go:195] Run: which crictl
	I0914 18:13:32.228421   63448 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:13:32.228494   63448 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:13:32.262491   63448 cri.go:89] found id: "09627c963da76d1a34f891fa3edf6c8c82f652f7308541d6f9083f5888f4bf94"
	I0914 18:13:32.262513   63448 cri.go:89] found id: ""
	I0914 18:13:32.262519   63448 logs.go:276] 1 containers: [09627c963da76d1a34f891fa3edf6c8c82f652f7308541d6f9083f5888f4bf94]
	I0914 18:13:32.262579   63448 ssh_runner.go:195] Run: which crictl
	I0914 18:13:32.266135   63448 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:13:32.266213   63448 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:13:32.300085   63448 cri.go:89] found id: ""
	I0914 18:13:32.300111   63448 logs.go:276] 0 containers: []
	W0914 18:13:32.300119   63448 logs.go:278] No container was found matching "kindnet"
	I0914 18:13:32.300124   63448 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0914 18:13:32.300181   63448 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0914 18:13:32.335359   63448 cri.go:89] found id: "be0aa9c1761410495159b478552996da9670b728a9efd2985ec5cad6759e8277"
	I0914 18:13:32.335379   63448 cri.go:89] found id: "b33f92ef722c8a6bde80bdbd3ff62a9b4b31f0bf548eec9aaadd4593a101017e"
	I0914 18:13:32.335387   63448 cri.go:89] found id: ""
	I0914 18:13:32.335393   63448 logs.go:276] 2 containers: [be0aa9c1761410495159b478552996da9670b728a9efd2985ec5cad6759e8277 b33f92ef722c8a6bde80bdbd3ff62a9b4b31f0bf548eec9aaadd4593a101017e]
	I0914 18:13:32.335451   63448 ssh_runner.go:195] Run: which crictl
	I0914 18:13:32.339404   63448 ssh_runner.go:195] Run: which crictl
	I0914 18:13:32.343173   63448 logs.go:123] Gathering logs for kube-proxy [a5c3b65e96ba854d6ceb4a824ba5076d43cff0b7a1978617b69271d5b88cca4d] ...
	I0914 18:13:32.343203   63448 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a5c3b65e96ba854d6ceb4a824ba5076d43cff0b7a1978617b69271d5b88cca4d"
	I0914 18:13:32.378987   63448 logs.go:123] Gathering logs for storage-provisioner [b33f92ef722c8a6bde80bdbd3ff62a9b4b31f0bf548eec9aaadd4593a101017e] ...
	I0914 18:13:32.379016   63448 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b33f92ef722c8a6bde80bdbd3ff62a9b4b31f0bf548eec9aaadd4593a101017e"
	I0914 18:13:32.418829   63448 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:13:32.418855   63448 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:13:32.941046   63448 logs.go:123] Gathering logs for kube-apiserver [6c532e45713d01c37f899bd4c9308e6c7fceb95accae13da1549ffb195e44bc4] ...
	I0914 18:13:32.941102   63448 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6c532e45713d01c37f899bd4c9308e6c7fceb95accae13da1549ffb195e44bc4"
	I0914 18:13:32.998148   63448 logs.go:123] Gathering logs for coredns [02a31bf75666cfcbe85dcb42259279071420c3b26de20d6d667bcd8ffef77d86] ...
	I0914 18:13:32.998209   63448 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 02a31bf75666cfcbe85dcb42259279071420c3b26de20d6d667bcd8ffef77d86"
	I0914 18:13:33.041208   63448 logs.go:123] Gathering logs for kube-scheduler [a390e6c0153550b206976ab4e172cf3236aac7e9e5671c332d8c0f8c4e567e0b] ...
	I0914 18:13:33.041241   63448 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a390e6c0153550b206976ab4e172cf3236aac7e9e5671c332d8c0f8c4e567e0b"
	I0914 18:13:33.080774   63448 logs.go:123] Gathering logs for etcd [7fb6567a7b9f36f21430774a159db944e9060eac3c8fdf9abc50b9c56a2b0377] ...
	I0914 18:13:33.080806   63448 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7fb6567a7b9f36f21430774a159db944e9060eac3c8fdf9abc50b9c56a2b0377"
	I0914 18:13:33.130519   63448 logs.go:123] Gathering logs for kube-controller-manager [09627c963da76d1a34f891fa3edf6c8c82f652f7308541d6f9083f5888f4bf94] ...
	I0914 18:13:33.130552   63448 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 09627c963da76d1a34f891fa3edf6c8c82f652f7308541d6f9083f5888f4bf94"
	I0914 18:13:33.182751   63448 logs.go:123] Gathering logs for storage-provisioner [be0aa9c1761410495159b478552996da9670b728a9efd2985ec5cad6759e8277] ...
	I0914 18:13:33.182788   63448 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 be0aa9c1761410495159b478552996da9670b728a9efd2985ec5cad6759e8277"
	I0914 18:13:33.222008   63448 logs.go:123] Gathering logs for container status ...
	I0914 18:13:33.222053   63448 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:13:33.263100   63448 logs.go:123] Gathering logs for kubelet ...
	I0914 18:13:33.263137   63448 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:13:33.330307   63448 logs.go:123] Gathering logs for dmesg ...
	I0914 18:13:33.330343   63448 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:13:33.344658   63448 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:13:33.344687   63448 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 18:13:35.597157   62554 pod_ready.go:93] pod "etcd-embed-certs-044534" in "kube-system" namespace has status "Ready":"True"
	I0914 18:13:35.597179   62554 pod_ready.go:82] duration metric: took 8.50672651s for pod "etcd-embed-certs-044534" in "kube-system" namespace to be "Ready" ...
	I0914 18:13:35.597189   62554 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-044534" in "kube-system" namespace to be "Ready" ...
	I0914 18:13:36.604147   62554 pod_ready.go:93] pod "kube-apiserver-embed-certs-044534" in "kube-system" namespace has status "Ready":"True"
	I0914 18:13:36.604179   62554 pod_ready.go:82] duration metric: took 1.006982094s for pod "kube-apiserver-embed-certs-044534" in "kube-system" namespace to be "Ready" ...
	I0914 18:13:36.604192   62554 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-044534" in "kube-system" namespace to be "Ready" ...
	I0914 18:13:36.610278   62554 pod_ready.go:93] pod "kube-controller-manager-embed-certs-044534" in "kube-system" namespace has status "Ready":"True"
	I0914 18:13:36.610302   62554 pod_ready.go:82] duration metric: took 6.101843ms for pod "kube-controller-manager-embed-certs-044534" in "kube-system" namespace to be "Ready" ...
	I0914 18:13:36.610315   62554 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-044534" in "kube-system" namespace to be "Ready" ...
	I0914 18:13:36.615527   62554 pod_ready.go:93] pod "kube-scheduler-embed-certs-044534" in "kube-system" namespace has status "Ready":"True"
	I0914 18:13:36.615549   62554 pod_ready.go:82] duration metric: took 5.226206ms for pod "kube-scheduler-embed-certs-044534" in "kube-system" namespace to be "Ready" ...
	I0914 18:13:36.615559   62554 pod_ready.go:39] duration metric: took 9.531222215s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 18:13:36.615587   62554 api_server.go:52] waiting for apiserver process to appear ...
	I0914 18:13:36.615642   62554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:13:36.630381   62554 api_server.go:72] duration metric: took 9.779851335s to wait for apiserver process to appear ...
	I0914 18:13:36.630414   62554 api_server.go:88] waiting for apiserver healthz status ...
	I0914 18:13:36.630438   62554 api_server.go:253] Checking apiserver healthz at https://192.168.50.126:8443/healthz ...
	I0914 18:13:36.637559   62554 api_server.go:279] https://192.168.50.126:8443/healthz returned 200:
	ok
	I0914 18:13:36.639973   62554 api_server.go:141] control plane version: v1.31.1
	I0914 18:13:36.639999   62554 api_server.go:131] duration metric: took 9.577574ms to wait for apiserver health ...
	I0914 18:13:36.640006   62554 system_pods.go:43] waiting for kube-system pods to appear ...
	I0914 18:13:36.647412   62554 system_pods.go:59] 9 kube-system pods found
	I0914 18:13:36.647443   62554 system_pods.go:61] "coredns-7c65d6cfc9-67dsl" [480fe6ea-838d-4048-9893-947d43e7b5c9] Running
	I0914 18:13:36.647448   62554 system_pods.go:61] "coredns-7c65d6cfc9-9j6sv" [87c28a4b-015e-46b8-a462-9dc6ed06d914] Running
	I0914 18:13:36.647452   62554 system_pods.go:61] "etcd-embed-certs-044534" [a9533b38-298e-4435-aaf2-262bdf629832] Running
	I0914 18:13:36.647456   62554 system_pods.go:61] "kube-apiserver-embed-certs-044534" [28ee0ec6-8ede-447a-b4eb-1db9eaeb76fb] Running
	I0914 18:13:36.647459   62554 system_pods.go:61] "kube-controller-manager-embed-certs-044534" [0fd6fe49-8994-48de-8e57-a2420f12b47d] Running
	I0914 18:13:36.647463   62554 system_pods.go:61] "kube-proxy-26fx6" [1cb48201-6caf-4787-9e27-a55885a8ae2a] Running
	I0914 18:13:36.647465   62554 system_pods.go:61] "kube-scheduler-embed-certs-044534" [6c1535f8-267b-401c-a93e-0ac057c75047] Running
	I0914 18:13:36.647471   62554 system_pods.go:61] "metrics-server-6867b74b74-rrfnt" [a1deacaa-9b90-49ac-8b8b-0bd909b5f6e0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 18:13:36.647475   62554 system_pods.go:61] "storage-provisioner" [dec7a14c-b6f7-464f-86b3-5f7d8063d8e0] Running
	I0914 18:13:36.647483   62554 system_pods.go:74] duration metric: took 7.47115ms to wait for pod list to return data ...
	I0914 18:13:36.647490   62554 default_sa.go:34] waiting for default service account to be created ...
	I0914 18:13:36.650678   62554 default_sa.go:45] found service account: "default"
	I0914 18:13:36.650722   62554 default_sa.go:55] duration metric: took 3.225438ms for default service account to be created ...
	I0914 18:13:36.650733   62554 system_pods.go:116] waiting for k8s-apps to be running ...
	I0914 18:13:36.656461   62554 system_pods.go:86] 9 kube-system pods found
	I0914 18:13:36.656489   62554 system_pods.go:89] "coredns-7c65d6cfc9-67dsl" [480fe6ea-838d-4048-9893-947d43e7b5c9] Running
	I0914 18:13:36.656495   62554 system_pods.go:89] "coredns-7c65d6cfc9-9j6sv" [87c28a4b-015e-46b8-a462-9dc6ed06d914] Running
	I0914 18:13:36.656499   62554 system_pods.go:89] "etcd-embed-certs-044534" [a9533b38-298e-4435-aaf2-262bdf629832] Running
	I0914 18:13:36.656503   62554 system_pods.go:89] "kube-apiserver-embed-certs-044534" [28ee0ec6-8ede-447a-b4eb-1db9eaeb76fb] Running
	I0914 18:13:36.656507   62554 system_pods.go:89] "kube-controller-manager-embed-certs-044534" [0fd6fe49-8994-48de-8e57-a2420f12b47d] Running
	I0914 18:13:36.656512   62554 system_pods.go:89] "kube-proxy-26fx6" [1cb48201-6caf-4787-9e27-a55885a8ae2a] Running
	I0914 18:13:36.656516   62554 system_pods.go:89] "kube-scheduler-embed-certs-044534" [6c1535f8-267b-401c-a93e-0ac057c75047] Running
	I0914 18:13:36.656522   62554 system_pods.go:89] "metrics-server-6867b74b74-rrfnt" [a1deacaa-9b90-49ac-8b8b-0bd909b5f6e0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 18:13:36.656525   62554 system_pods.go:89] "storage-provisioner" [dec7a14c-b6f7-464f-86b3-5f7d8063d8e0] Running
	I0914 18:13:36.656534   62554 system_pods.go:126] duration metric: took 5.795433ms to wait for k8s-apps to be running ...
	I0914 18:13:36.656541   62554 system_svc.go:44] waiting for kubelet service to be running ....
	I0914 18:13:36.656586   62554 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 18:13:36.673166   62554 system_svc.go:56] duration metric: took 16.609444ms WaitForService to wait for kubelet
	I0914 18:13:36.673205   62554 kubeadm.go:582] duration metric: took 9.822681909s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0914 18:13:36.673227   62554 node_conditions.go:102] verifying NodePressure condition ...
	I0914 18:13:36.794984   62554 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0914 18:13:36.795013   62554 node_conditions.go:123] node cpu capacity is 2
	I0914 18:13:36.795024   62554 node_conditions.go:105] duration metric: took 121.79122ms to run NodePressure ...
	I0914 18:13:36.795038   62554 start.go:241] waiting for startup goroutines ...
	I0914 18:13:36.795047   62554 start.go:246] waiting for cluster config update ...
	I0914 18:13:36.795060   62554 start.go:255] writing updated cluster config ...
	I0914 18:13:36.795406   62554 ssh_runner.go:195] Run: rm -f paused
	I0914 18:13:36.847454   62554 start.go:600] kubectl: 1.31.0, cluster: 1.31.1 (minor skew: 0)
	I0914 18:13:36.849605   62554 out.go:177] * Done! kubectl is now configured to use "embed-certs-044534" cluster and "default" namespace by default
	I0914 18:13:33.105197   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:13:35.604458   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:13:35.989800   63448 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:13:36.006371   63448 api_server.go:72] duration metric: took 4m14.310539233s to wait for apiserver process to appear ...
	I0914 18:13:36.006405   63448 api_server.go:88] waiting for apiserver healthz status ...
	I0914 18:13:36.006446   63448 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:13:36.006508   63448 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:13:36.044973   63448 cri.go:89] found id: "6c532e45713d01c37f899bd4c9308e6c7fceb95accae13da1549ffb195e44bc4"
	I0914 18:13:36.044992   63448 cri.go:89] found id: ""
	I0914 18:13:36.045000   63448 logs.go:276] 1 containers: [6c532e45713d01c37f899bd4c9308e6c7fceb95accae13da1549ffb195e44bc4]
	I0914 18:13:36.045055   63448 ssh_runner.go:195] Run: which crictl
	I0914 18:13:36.049371   63448 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:13:36.049449   63448 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:13:36.097114   63448 cri.go:89] found id: "7fb6567a7b9f36f21430774a159db944e9060eac3c8fdf9abc50b9c56a2b0377"
	I0914 18:13:36.097139   63448 cri.go:89] found id: ""
	I0914 18:13:36.097148   63448 logs.go:276] 1 containers: [7fb6567a7b9f36f21430774a159db944e9060eac3c8fdf9abc50b9c56a2b0377]
	I0914 18:13:36.097212   63448 ssh_runner.go:195] Run: which crictl
	I0914 18:13:36.102084   63448 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:13:36.102153   63448 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:13:36.140640   63448 cri.go:89] found id: "02a31bf75666cfcbe85dcb42259279071420c3b26de20d6d667bcd8ffef77d86"
	I0914 18:13:36.140662   63448 cri.go:89] found id: ""
	I0914 18:13:36.140671   63448 logs.go:276] 1 containers: [02a31bf75666cfcbe85dcb42259279071420c3b26de20d6d667bcd8ffef77d86]
	I0914 18:13:36.140728   63448 ssh_runner.go:195] Run: which crictl
	I0914 18:13:36.144624   63448 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:13:36.144696   63448 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:13:36.179135   63448 cri.go:89] found id: "a390e6c0153550b206976ab4e172cf3236aac7e9e5671c332d8c0f8c4e567e0b"
	I0914 18:13:36.179156   63448 cri.go:89] found id: ""
	I0914 18:13:36.179163   63448 logs.go:276] 1 containers: [a390e6c0153550b206976ab4e172cf3236aac7e9e5671c332d8c0f8c4e567e0b]
	I0914 18:13:36.179216   63448 ssh_runner.go:195] Run: which crictl
	I0914 18:13:36.183050   63448 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:13:36.183110   63448 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:13:36.222739   63448 cri.go:89] found id: "a5c3b65e96ba854d6ceb4a824ba5076d43cff0b7a1978617b69271d5b88cca4d"
	I0914 18:13:36.222758   63448 cri.go:89] found id: ""
	I0914 18:13:36.222765   63448 logs.go:276] 1 containers: [a5c3b65e96ba854d6ceb4a824ba5076d43cff0b7a1978617b69271d5b88cca4d]
	I0914 18:13:36.222812   63448 ssh_runner.go:195] Run: which crictl
	I0914 18:13:36.226715   63448 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:13:36.226782   63448 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:13:36.261587   63448 cri.go:89] found id: "09627c963da76d1a34f891fa3edf6c8c82f652f7308541d6f9083f5888f4bf94"
	I0914 18:13:36.261610   63448 cri.go:89] found id: ""
	I0914 18:13:36.261617   63448 logs.go:276] 1 containers: [09627c963da76d1a34f891fa3edf6c8c82f652f7308541d6f9083f5888f4bf94]
	I0914 18:13:36.261664   63448 ssh_runner.go:195] Run: which crictl
	I0914 18:13:36.265541   63448 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:13:36.265614   63448 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:13:36.301521   63448 cri.go:89] found id: ""
	I0914 18:13:36.301546   63448 logs.go:276] 0 containers: []
	W0914 18:13:36.301554   63448 logs.go:278] No container was found matching "kindnet"
	I0914 18:13:36.301560   63448 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0914 18:13:36.301622   63448 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0914 18:13:36.335332   63448 cri.go:89] found id: "be0aa9c1761410495159b478552996da9670b728a9efd2985ec5cad6759e8277"
	I0914 18:13:36.335355   63448 cri.go:89] found id: "b33f92ef722c8a6bde80bdbd3ff62a9b4b31f0bf548eec9aaadd4593a101017e"
	I0914 18:13:36.335358   63448 cri.go:89] found id: ""
	I0914 18:13:36.335365   63448 logs.go:276] 2 containers: [be0aa9c1761410495159b478552996da9670b728a9efd2985ec5cad6759e8277 b33f92ef722c8a6bde80bdbd3ff62a9b4b31f0bf548eec9aaadd4593a101017e]
	I0914 18:13:36.335415   63448 ssh_runner.go:195] Run: which crictl
	I0914 18:13:36.339542   63448 ssh_runner.go:195] Run: which crictl
	I0914 18:13:36.343543   63448 logs.go:123] Gathering logs for etcd [7fb6567a7b9f36f21430774a159db944e9060eac3c8fdf9abc50b9c56a2b0377] ...
	I0914 18:13:36.343570   63448 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7fb6567a7b9f36f21430774a159db944e9060eac3c8fdf9abc50b9c56a2b0377"
	I0914 18:13:36.384224   63448 logs.go:123] Gathering logs for coredns [02a31bf75666cfcbe85dcb42259279071420c3b26de20d6d667bcd8ffef77d86] ...
	I0914 18:13:36.384259   63448 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 02a31bf75666cfcbe85dcb42259279071420c3b26de20d6d667bcd8ffef77d86"
	I0914 18:13:36.428010   63448 logs.go:123] Gathering logs for kube-scheduler [a390e6c0153550b206976ab4e172cf3236aac7e9e5671c332d8c0f8c4e567e0b] ...
	I0914 18:13:36.428041   63448 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a390e6c0153550b206976ab4e172cf3236aac7e9e5671c332d8c0f8c4e567e0b"
	I0914 18:13:36.469679   63448 logs.go:123] Gathering logs for storage-provisioner [be0aa9c1761410495159b478552996da9670b728a9efd2985ec5cad6759e8277] ...
	I0914 18:13:36.469708   63448 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 be0aa9c1761410495159b478552996da9670b728a9efd2985ec5cad6759e8277"
	I0914 18:13:36.507570   63448 logs.go:123] Gathering logs for storage-provisioner [b33f92ef722c8a6bde80bdbd3ff62a9b4b31f0bf548eec9aaadd4593a101017e] ...
	I0914 18:13:36.507597   63448 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b33f92ef722c8a6bde80bdbd3ff62a9b4b31f0bf548eec9aaadd4593a101017e"
	I0914 18:13:36.543300   63448 logs.go:123] Gathering logs for kubelet ...
	I0914 18:13:36.543335   63448 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:13:36.619060   63448 logs.go:123] Gathering logs for dmesg ...
	I0914 18:13:36.619084   63448 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:13:36.633542   63448 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:13:36.633572   63448 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 18:13:36.741334   63448 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:13:36.741370   63448 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:13:37.231208   63448 logs.go:123] Gathering logs for container status ...
	I0914 18:13:37.231255   63448 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:13:37.278835   63448 logs.go:123] Gathering logs for kube-apiserver [6c532e45713d01c37f899bd4c9308e6c7fceb95accae13da1549ffb195e44bc4] ...
	I0914 18:13:37.278863   63448 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6c532e45713d01c37f899bd4c9308e6c7fceb95accae13da1549ffb195e44bc4"
	I0914 18:13:37.320359   63448 logs.go:123] Gathering logs for kube-proxy [a5c3b65e96ba854d6ceb4a824ba5076d43cff0b7a1978617b69271d5b88cca4d] ...
	I0914 18:13:37.320399   63448 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a5c3b65e96ba854d6ceb4a824ba5076d43cff0b7a1978617b69271d5b88cca4d"
	I0914 18:13:37.357940   63448 logs.go:123] Gathering logs for kube-controller-manager [09627c963da76d1a34f891fa3edf6c8c82f652f7308541d6f9083f5888f4bf94] ...
	I0914 18:13:37.357974   63448 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 09627c963da76d1a34f891fa3edf6c8c82f652f7308541d6f9083f5888f4bf94"
	I0914 18:13:39.913586   63448 api_server.go:253] Checking apiserver healthz at https://192.168.61.38:8444/healthz ...
	I0914 18:13:39.917590   63448 api_server.go:279] https://192.168.61.38:8444/healthz returned 200:
	ok
	I0914 18:13:39.918633   63448 api_server.go:141] control plane version: v1.31.1
	I0914 18:13:39.918653   63448 api_server.go:131] duration metric: took 3.912241678s to wait for apiserver health ...
	I0914 18:13:39.918660   63448 system_pods.go:43] waiting for kube-system pods to appear ...
	I0914 18:13:39.918682   63448 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:13:39.918727   63448 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:13:39.961919   63448 cri.go:89] found id: "6c532e45713d01c37f899bd4c9308e6c7fceb95accae13da1549ffb195e44bc4"
	I0914 18:13:39.961947   63448 cri.go:89] found id: ""
	I0914 18:13:39.961956   63448 logs.go:276] 1 containers: [6c532e45713d01c37f899bd4c9308e6c7fceb95accae13da1549ffb195e44bc4]
	I0914 18:13:39.962012   63448 ssh_runner.go:195] Run: which crictl
	I0914 18:13:39.965756   63448 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:13:39.965838   63448 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:13:40.008044   63448 cri.go:89] found id: "7fb6567a7b9f36f21430774a159db944e9060eac3c8fdf9abc50b9c56a2b0377"
	I0914 18:13:40.008066   63448 cri.go:89] found id: ""
	I0914 18:13:40.008074   63448 logs.go:276] 1 containers: [7fb6567a7b9f36f21430774a159db944e9060eac3c8fdf9abc50b9c56a2b0377]
	I0914 18:13:40.008117   63448 ssh_runner.go:195] Run: which crictl
	I0914 18:13:40.012505   63448 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:13:40.012569   63448 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:13:40.059166   63448 cri.go:89] found id: "02a31bf75666cfcbe85dcb42259279071420c3b26de20d6d667bcd8ffef77d86"
	I0914 18:13:40.059194   63448 cri.go:89] found id: ""
	I0914 18:13:40.059204   63448 logs.go:276] 1 containers: [02a31bf75666cfcbe85dcb42259279071420c3b26de20d6d667bcd8ffef77d86]
	I0914 18:13:40.059267   63448 ssh_runner.go:195] Run: which crictl
	I0914 18:13:40.063135   63448 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:13:40.063197   63448 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:13:40.105220   63448 cri.go:89] found id: "a390e6c0153550b206976ab4e172cf3236aac7e9e5671c332d8c0f8c4e567e0b"
	I0914 18:13:40.105245   63448 cri.go:89] found id: ""
	I0914 18:13:40.105255   63448 logs.go:276] 1 containers: [a390e6c0153550b206976ab4e172cf3236aac7e9e5671c332d8c0f8c4e567e0b]
	I0914 18:13:40.105308   63448 ssh_runner.go:195] Run: which crictl
	I0914 18:13:40.109907   63448 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:13:40.109978   63448 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:13:40.146307   63448 cri.go:89] found id: "a5c3b65e96ba854d6ceb4a824ba5076d43cff0b7a1978617b69271d5b88cca4d"
	I0914 18:13:40.146337   63448 cri.go:89] found id: ""
	I0914 18:13:40.146349   63448 logs.go:276] 1 containers: [a5c3b65e96ba854d6ceb4a824ba5076d43cff0b7a1978617b69271d5b88cca4d]
	I0914 18:13:40.146396   63448 ssh_runner.go:195] Run: which crictl
	I0914 18:13:40.150369   63448 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:13:40.150436   63448 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:13:40.185274   63448 cri.go:89] found id: "09627c963da76d1a34f891fa3edf6c8c82f652f7308541d6f9083f5888f4bf94"
	I0914 18:13:40.185301   63448 cri.go:89] found id: ""
	I0914 18:13:40.185312   63448 logs.go:276] 1 containers: [09627c963da76d1a34f891fa3edf6c8c82f652f7308541d6f9083f5888f4bf94]
	I0914 18:13:40.185374   63448 ssh_runner.go:195] Run: which crictl
	I0914 18:13:40.189425   63448 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:13:40.189499   63448 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:13:40.223289   63448 cri.go:89] found id: ""
	I0914 18:13:40.223311   63448 logs.go:276] 0 containers: []
	W0914 18:13:40.223319   63448 logs.go:278] No container was found matching "kindnet"
	I0914 18:13:40.223324   63448 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0914 18:13:40.223369   63448 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0914 18:13:40.257779   63448 cri.go:89] found id: "be0aa9c1761410495159b478552996da9670b728a9efd2985ec5cad6759e8277"
	I0914 18:13:40.257805   63448 cri.go:89] found id: "b33f92ef722c8a6bde80bdbd3ff62a9b4b31f0bf548eec9aaadd4593a101017e"
	I0914 18:13:40.257811   63448 cri.go:89] found id: ""
	I0914 18:13:40.257820   63448 logs.go:276] 2 containers: [be0aa9c1761410495159b478552996da9670b728a9efd2985ec5cad6759e8277 b33f92ef722c8a6bde80bdbd3ff62a9b4b31f0bf548eec9aaadd4593a101017e]
	I0914 18:13:40.257880   63448 ssh_runner.go:195] Run: which crictl
	I0914 18:13:40.262388   63448 ssh_runner.go:195] Run: which crictl
	I0914 18:13:40.266233   63448 logs.go:123] Gathering logs for kube-apiserver [6c532e45713d01c37f899bd4c9308e6c7fceb95accae13da1549ffb195e44bc4] ...
	I0914 18:13:40.266258   63448 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6c532e45713d01c37f899bd4c9308e6c7fceb95accae13da1549ffb195e44bc4"
	I0914 18:13:38.505090   62996 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0914 18:13:38.505605   62996 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0914 18:13:38.505837   62996 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0914 18:13:38.105234   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:13:40.604049   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:13:40.310145   63448 logs.go:123] Gathering logs for etcd [7fb6567a7b9f36f21430774a159db944e9060eac3c8fdf9abc50b9c56a2b0377] ...
	I0914 18:13:40.310188   63448 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7fb6567a7b9f36f21430774a159db944e9060eac3c8fdf9abc50b9c56a2b0377"
	I0914 18:13:40.358651   63448 logs.go:123] Gathering logs for coredns [02a31bf75666cfcbe85dcb42259279071420c3b26de20d6d667bcd8ffef77d86] ...
	I0914 18:13:40.358686   63448 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 02a31bf75666cfcbe85dcb42259279071420c3b26de20d6d667bcd8ffef77d86"
	I0914 18:13:40.398107   63448 logs.go:123] Gathering logs for kube-scheduler [a390e6c0153550b206976ab4e172cf3236aac7e9e5671c332d8c0f8c4e567e0b] ...
	I0914 18:13:40.398144   63448 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a390e6c0153550b206976ab4e172cf3236aac7e9e5671c332d8c0f8c4e567e0b"
	I0914 18:13:40.450540   63448 logs.go:123] Gathering logs for dmesg ...
	I0914 18:13:40.450573   63448 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:13:40.465987   63448 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:13:40.466013   63448 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 18:13:40.573299   63448 logs.go:123] Gathering logs for kube-proxy [a5c3b65e96ba854d6ceb4a824ba5076d43cff0b7a1978617b69271d5b88cca4d] ...
	I0914 18:13:40.573333   63448 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a5c3b65e96ba854d6ceb4a824ba5076d43cff0b7a1978617b69271d5b88cca4d"
	I0914 18:13:40.618201   63448 logs.go:123] Gathering logs for kube-controller-manager [09627c963da76d1a34f891fa3edf6c8c82f652f7308541d6f9083f5888f4bf94] ...
	I0914 18:13:40.618247   63448 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 09627c963da76d1a34f891fa3edf6c8c82f652f7308541d6f9083f5888f4bf94"
	I0914 18:13:40.671259   63448 logs.go:123] Gathering logs for storage-provisioner [be0aa9c1761410495159b478552996da9670b728a9efd2985ec5cad6759e8277] ...
	I0914 18:13:40.671304   63448 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 be0aa9c1761410495159b478552996da9670b728a9efd2985ec5cad6759e8277"
	I0914 18:13:40.708455   63448 logs.go:123] Gathering logs for storage-provisioner [b33f92ef722c8a6bde80bdbd3ff62a9b4b31f0bf548eec9aaadd4593a101017e] ...
	I0914 18:13:40.708488   63448 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b33f92ef722c8a6bde80bdbd3ff62a9b4b31f0bf548eec9aaadd4593a101017e"
	I0914 18:13:40.746662   63448 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:13:40.746696   63448 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:13:41.108968   63448 logs.go:123] Gathering logs for container status ...
	I0914 18:13:41.109017   63448 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:13:41.150925   63448 logs.go:123] Gathering logs for kubelet ...
	I0914 18:13:41.150968   63448 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:13:43.725606   63448 system_pods.go:59] 8 kube-system pods found
	I0914 18:13:43.725642   63448 system_pods.go:61] "coredns-7c65d6cfc9-8v8s7" [896b4fde-d17e-43a3-b7c8-b710e2e70e2c] Running
	I0914 18:13:43.725650   63448 system_pods.go:61] "etcd-default-k8s-diff-port-243449" [9201493d-45db-44f4-948d-34e1d1ddee8f] Running
	I0914 18:13:43.725656   63448 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-243449" [052b85bf-0f3b-4ace-9301-99e53f91cfcf] Running
	I0914 18:13:43.725661   63448 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-243449" [51214b37-f5e8-4037-83ff-1fd09b93e008] Running
	I0914 18:13:43.725665   63448 system_pods.go:61] "kube-proxy-gbkqm" [4308aacf-ea0a-4bba-8598-85ffaf959b7e] Running
	I0914 18:13:43.725670   63448 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-243449" [b6eacf30-47ec-4f8d-968c-195661a2a732] Running
	I0914 18:13:43.725680   63448 system_pods.go:61] "metrics-server-6867b74b74-7v8dr" [90be95af-c779-4b31-b261-2c4020a34280] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 18:13:43.725687   63448 system_pods.go:61] "storage-provisioner" [2e814601-a19a-4848-bed5-d9a29ffb3b5d] Running
	I0914 18:13:43.725699   63448 system_pods.go:74] duration metric: took 3.807031642s to wait for pod list to return data ...
	I0914 18:13:43.725710   63448 default_sa.go:34] waiting for default service account to be created ...
	I0914 18:13:43.728384   63448 default_sa.go:45] found service account: "default"
	I0914 18:13:43.728409   63448 default_sa.go:55] duration metric: took 2.691817ms for default service account to be created ...
	I0914 18:13:43.728417   63448 system_pods.go:116] waiting for k8s-apps to be running ...
	I0914 18:13:43.732884   63448 system_pods.go:86] 8 kube-system pods found
	I0914 18:13:43.732913   63448 system_pods.go:89] "coredns-7c65d6cfc9-8v8s7" [896b4fde-d17e-43a3-b7c8-b710e2e70e2c] Running
	I0914 18:13:43.732918   63448 system_pods.go:89] "etcd-default-k8s-diff-port-243449" [9201493d-45db-44f4-948d-34e1d1ddee8f] Running
	I0914 18:13:43.732922   63448 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-243449" [052b85bf-0f3b-4ace-9301-99e53f91cfcf] Running
	I0914 18:13:43.732926   63448 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-243449" [51214b37-f5e8-4037-83ff-1fd09b93e008] Running
	I0914 18:13:43.732931   63448 system_pods.go:89] "kube-proxy-gbkqm" [4308aacf-ea0a-4bba-8598-85ffaf959b7e] Running
	I0914 18:13:43.732935   63448 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-243449" [b6eacf30-47ec-4f8d-968c-195661a2a732] Running
	I0914 18:13:43.732942   63448 system_pods.go:89] "metrics-server-6867b74b74-7v8dr" [90be95af-c779-4b31-b261-2c4020a34280] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 18:13:43.732947   63448 system_pods.go:89] "storage-provisioner" [2e814601-a19a-4848-bed5-d9a29ffb3b5d] Running
	I0914 18:13:43.732954   63448 system_pods.go:126] duration metric: took 4.531761ms to wait for k8s-apps to be running ...
	I0914 18:13:43.732960   63448 system_svc.go:44] waiting for kubelet service to be running ....
	I0914 18:13:43.733001   63448 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 18:13:43.749535   63448 system_svc.go:56] duration metric: took 16.566498ms WaitForService to wait for kubelet
	I0914 18:13:43.749567   63448 kubeadm.go:582] duration metric: took 4m22.053742257s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0914 18:13:43.749587   63448 node_conditions.go:102] verifying NodePressure condition ...
	I0914 18:13:43.752493   63448 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0914 18:13:43.752514   63448 node_conditions.go:123] node cpu capacity is 2
	I0914 18:13:43.752523   63448 node_conditions.go:105] duration metric: took 2.931821ms to run NodePressure ...
	I0914 18:13:43.752534   63448 start.go:241] waiting for startup goroutines ...
	I0914 18:13:43.752548   63448 start.go:246] waiting for cluster config update ...
	I0914 18:13:43.752560   63448 start.go:255] writing updated cluster config ...
	I0914 18:13:43.752815   63448 ssh_runner.go:195] Run: rm -f paused
	I0914 18:13:43.803181   63448 start.go:600] kubectl: 1.31.0, cluster: 1.31.1 (minor skew: 0)
	I0914 18:13:43.805150   63448 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-243449" cluster and "default" namespace by default
	I0914 18:13:43.506241   62996 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0914 18:13:43.506502   62996 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0914 18:13:43.103780   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:13:45.603666   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:13:47.603988   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:13:50.104811   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:13:53.506772   62996 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0914 18:13:53.506959   62996 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0914 18:13:52.604411   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:13:55.103339   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:13:57.103716   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:13:59.603423   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:14:00.097180   62207 pod_ready.go:82] duration metric: took 4m0.000345486s for pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace to be "Ready" ...
	E0914 18:14:00.097209   62207 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace to be "Ready" (will not retry!)
	I0914 18:14:00.097230   62207 pod_ready.go:39] duration metric: took 4m11.039838973s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 18:14:00.097260   62207 kubeadm.go:597] duration metric: took 4m18.345876583s to restartPrimaryControlPlane
	W0914 18:14:00.097328   62207 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0914 18:14:00.097360   62207 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0914 18:14:13.507627   62996 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0914 18:14:13.507840   62996 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0914 18:14:26.392001   62207 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.294613232s)
	I0914 18:14:26.392082   62207 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 18:14:26.410558   62207 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0914 18:14:26.421178   62207 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0914 18:14:26.430786   62207 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0914 18:14:26.430808   62207 kubeadm.go:157] found existing configuration files:
	
	I0914 18:14:26.430858   62207 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0914 18:14:26.440193   62207 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0914 18:14:26.440253   62207 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0914 18:14:26.449848   62207 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0914 18:14:26.459589   62207 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0914 18:14:26.459651   62207 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0914 18:14:26.469556   62207 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0914 18:14:26.478722   62207 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0914 18:14:26.478782   62207 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0914 18:14:26.488694   62207 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0914 18:14:26.498478   62207 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0914 18:14:26.498542   62207 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0914 18:14:26.509455   62207 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0914 18:14:26.552295   62207 kubeadm.go:310] W0914 18:14:26.530603    2977 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0914 18:14:26.552908   62207 kubeadm.go:310] W0914 18:14:26.531307    2977 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0914 18:14:26.665962   62207 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0914 18:14:35.406074   62207 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0914 18:14:35.406150   62207 kubeadm.go:310] [preflight] Running pre-flight checks
	I0914 18:14:35.406251   62207 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0914 18:14:35.406372   62207 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0914 18:14:35.406503   62207 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0914 18:14:35.406611   62207 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0914 18:14:35.408167   62207 out.go:235]   - Generating certificates and keys ...
	I0914 18:14:35.408257   62207 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0914 18:14:35.408337   62207 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0914 18:14:35.408451   62207 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0914 18:14:35.408550   62207 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0914 18:14:35.408655   62207 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0914 18:14:35.408733   62207 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0914 18:14:35.408823   62207 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0914 18:14:35.408916   62207 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0914 18:14:35.409022   62207 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0914 18:14:35.409133   62207 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0914 18:14:35.409176   62207 kubeadm.go:310] [certs] Using the existing "sa" key
	I0914 18:14:35.409225   62207 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0914 18:14:35.409269   62207 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0914 18:14:35.409328   62207 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0914 18:14:35.409374   62207 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0914 18:14:35.409440   62207 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0914 18:14:35.409507   62207 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0914 18:14:35.409633   62207 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0914 18:14:35.409734   62207 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0914 18:14:35.411984   62207 out.go:235]   - Booting up control plane ...
	I0914 18:14:35.412099   62207 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0914 18:14:35.412212   62207 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0914 18:14:35.412276   62207 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0914 18:14:35.412371   62207 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0914 18:14:35.412444   62207 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0914 18:14:35.412479   62207 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0914 18:14:35.412597   62207 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0914 18:14:35.412686   62207 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0914 18:14:35.412737   62207 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.002422188s
	I0914 18:14:35.412801   62207 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0914 18:14:35.412863   62207 kubeadm.go:310] [api-check] The API server is healthy after 5.002046359s
	I0914 18:14:35.412986   62207 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0914 18:14:35.413129   62207 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0914 18:14:35.413208   62207 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0914 18:14:35.413427   62207 kubeadm.go:310] [mark-control-plane] Marking the node no-preload-168587 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0914 18:14:35.413510   62207 kubeadm.go:310] [bootstrap-token] Using token: 2jk8ol.l80z6l7tm2nt4pl7
	I0914 18:14:35.414838   62207 out.go:235]   - Configuring RBAC rules ...
	I0914 18:14:35.414968   62207 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0914 18:14:35.415069   62207 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0914 18:14:35.415291   62207 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0914 18:14:35.415482   62207 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0914 18:14:35.415615   62207 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0914 18:14:35.415725   62207 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0914 18:14:35.415867   62207 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0914 18:14:35.415930   62207 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0914 18:14:35.415990   62207 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0914 18:14:35.415999   62207 kubeadm.go:310] 
	I0914 18:14:35.416077   62207 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0914 18:14:35.416086   62207 kubeadm.go:310] 
	I0914 18:14:35.416187   62207 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0914 18:14:35.416198   62207 kubeadm.go:310] 
	I0914 18:14:35.416232   62207 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0914 18:14:35.416314   62207 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0914 18:14:35.416388   62207 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0914 18:14:35.416397   62207 kubeadm.go:310] 
	I0914 18:14:35.416474   62207 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0914 18:14:35.416484   62207 kubeadm.go:310] 
	I0914 18:14:35.416525   62207 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0914 18:14:35.416529   62207 kubeadm.go:310] 
	I0914 18:14:35.416597   62207 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0914 18:14:35.416701   62207 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0914 18:14:35.416781   62207 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0914 18:14:35.416796   62207 kubeadm.go:310] 
	I0914 18:14:35.416899   62207 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0914 18:14:35.416998   62207 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0914 18:14:35.417007   62207 kubeadm.go:310] 
	I0914 18:14:35.417125   62207 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 2jk8ol.l80z6l7tm2nt4pl7 \
	I0914 18:14:35.417247   62207 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:30a8b6e98108730ec92a246ad3f4e746211e79f910581e46cd69c4cd78384600 \
	I0914 18:14:35.417272   62207 kubeadm.go:310] 	--control-plane 
	I0914 18:14:35.417276   62207 kubeadm.go:310] 
	I0914 18:14:35.417399   62207 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0914 18:14:35.417422   62207 kubeadm.go:310] 
	I0914 18:14:35.417530   62207 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 2jk8ol.l80z6l7tm2nt4pl7 \
	I0914 18:14:35.417686   62207 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:30a8b6e98108730ec92a246ad3f4e746211e79f910581e46cd69c4cd78384600 
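	For reference, the --discovery-token-ca-cert-hash printed in the join command above pins the cluster CA for joining nodes and can be recomputed from the CA certificate. A typical invocation (a sketch only; the CA path assumes minikube's usual certificate directory, which is not echoed at this point in the log) is:
	  openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	    | openssl rsa -pubin -outform der 2>/dev/null \
	    | openssl dgst -sha256 -hex | sed 's/^.* //'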
	I0914 18:14:35.417705   62207 cni.go:84] Creating CNI manager for ""
	I0914 18:14:35.417713   62207 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0914 18:14:35.420023   62207 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0914 18:14:35.421095   62207 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0914 18:14:35.432619   62207 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
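	The 496-byte payload copied to /etc/cni/net.d/1-k8s.conflist is not reproduced in the log. An illustrative bridge CNI conflist of the kind used for this step (field values are assumptions, not the file this run actually wrote) looks like:
	  {
	    "cniVersion": "0.3.1",
	    "name": "bridge",
	    "plugins": [
	      { "type": "bridge", "bridge": "bridge", "addIf": "true", "isDefaultGateway": true,
	        "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" } },
	      { "type": "portmap", "capabilities": { "portMappings": true } }
	    ]
	  }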
	I0914 18:14:35.451720   62207 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0914 18:14:35.451790   62207 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 18:14:35.451836   62207 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-168587 minikube.k8s.io/updated_at=2024_09_14T18_14_35_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=fbeeb744274463b05401c917e5ab21bbaf5ef95a minikube.k8s.io/name=no-preload-168587 minikube.k8s.io/primary=true
	I0914 18:14:35.654681   62207 ops.go:34] apiserver oom_adj: -16
	I0914 18:14:35.654714   62207 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 18:14:36.155376   62207 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 18:14:36.655468   62207 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 18:14:37.155741   62207 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 18:14:37.655416   62207 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 18:14:38.154935   62207 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 18:14:38.655465   62207 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 18:14:38.740860   62207 kubeadm.go:1113] duration metric: took 3.289121705s to wait for elevateKubeSystemPrivileges
	I0914 18:14:38.740912   62207 kubeadm.go:394] duration metric: took 4m57.036377829s to StartCluster
	I0914 18:14:38.740939   62207 settings.go:142] acquiring lock: {Name:mkee31cd57c3a91eff581c48ba7961460fb3e0b1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 18:14:38.741029   62207 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19643-8806/kubeconfig
	I0914 18:14:38.742754   62207 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19643-8806/kubeconfig: {Name:mk850f3e2f3c2ef1e81c795ae72e22e459e27140 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 18:14:38.742977   62207 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.38 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0914 18:14:38.743138   62207 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0914 18:14:38.743260   62207 addons.go:69] Setting storage-provisioner=true in profile "no-preload-168587"
	I0914 18:14:38.743271   62207 addons.go:69] Setting default-storageclass=true in profile "no-preload-168587"
	I0914 18:14:38.743282   62207 addons.go:234] Setting addon storage-provisioner=true in "no-preload-168587"
	I0914 18:14:38.743290   62207 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-168587"
	W0914 18:14:38.743295   62207 addons.go:243] addon storage-provisioner should already be in state true
	I0914 18:14:38.743306   62207 addons.go:69] Setting metrics-server=true in profile "no-preload-168587"
	I0914 18:14:38.743329   62207 host.go:66] Checking if "no-preload-168587" exists ...
	I0914 18:14:38.743334   62207 addons.go:234] Setting addon metrics-server=true in "no-preload-168587"
	I0914 18:14:38.743362   62207 config.go:182] Loaded profile config "no-preload-168587": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	W0914 18:14:38.743365   62207 addons.go:243] addon metrics-server should already be in state true
	I0914 18:14:38.743442   62207 host.go:66] Checking if "no-preload-168587" exists ...
	I0914 18:14:38.743775   62207 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 18:14:38.743814   62207 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 18:14:38.743843   62207 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 18:14:38.743821   62207 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 18:14:38.743775   62207 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 18:14:38.744070   62207 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 18:14:38.744427   62207 out.go:177] * Verifying Kubernetes components...
	I0914 18:14:38.745716   62207 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 18:14:38.760250   62207 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43795
	I0914 18:14:38.760329   62207 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41097
	I0914 18:14:38.760788   62207 main.go:141] libmachine: () Calling .GetVersion
	I0914 18:14:38.760810   62207 main.go:141] libmachine: () Calling .GetVersion
	I0914 18:14:38.761416   62207 main.go:141] libmachine: Using API Version  1
	I0914 18:14:38.761438   62207 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 18:14:38.761554   62207 main.go:141] libmachine: Using API Version  1
	I0914 18:14:38.761581   62207 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 18:14:38.761829   62207 main.go:141] libmachine: () Calling .GetMachineName
	I0914 18:14:38.761980   62207 main.go:141] libmachine: () Calling .GetMachineName
	I0914 18:14:38.762333   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetState
	I0914 18:14:38.762445   62207 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 18:14:38.762495   62207 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 18:14:38.763295   62207 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44059
	I0914 18:14:38.763767   62207 main.go:141] libmachine: () Calling .GetVersion
	I0914 18:14:38.764256   62207 main.go:141] libmachine: Using API Version  1
	I0914 18:14:38.764285   62207 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 18:14:38.764616   62207 main.go:141] libmachine: () Calling .GetMachineName
	I0914 18:14:38.765095   62207 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 18:14:38.765131   62207 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 18:14:38.765525   62207 addons.go:234] Setting addon default-storageclass=true in "no-preload-168587"
	W0914 18:14:38.765544   62207 addons.go:243] addon default-storageclass should already be in state true
	I0914 18:14:38.765568   62207 host.go:66] Checking if "no-preload-168587" exists ...
	I0914 18:14:38.765798   62207 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 18:14:38.765837   62207 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 18:14:38.782208   62207 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42659
	I0914 18:14:38.782527   62207 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41381
	I0914 18:14:38.782564   62207 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33835
	I0914 18:14:38.782679   62207 main.go:141] libmachine: () Calling .GetVersion
	I0914 18:14:38.782943   62207 main.go:141] libmachine: () Calling .GetVersion
	I0914 18:14:38.782973   62207 main.go:141] libmachine: () Calling .GetVersion
	I0914 18:14:38.783413   62207 main.go:141] libmachine: Using API Version  1
	I0914 18:14:38.783433   62207 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 18:14:38.783554   62207 main.go:141] libmachine: Using API Version  1
	I0914 18:14:38.783566   62207 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 18:14:38.783573   62207 main.go:141] libmachine: Using API Version  1
	I0914 18:14:38.783585   62207 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 18:14:38.783956   62207 main.go:141] libmachine: () Calling .GetMachineName
	I0914 18:14:38.783964   62207 main.go:141] libmachine: () Calling .GetMachineName
	I0914 18:14:38.784444   62207 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 18:14:38.784482   62207 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 18:14:38.784639   62207 main.go:141] libmachine: () Calling .GetMachineName
	I0914 18:14:38.784666   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetState
	I0914 18:14:38.784806   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetState
	I0914 18:14:38.786340   62207 main.go:141] libmachine: (no-preload-168587) Calling .DriverName
	I0914 18:14:38.786797   62207 main.go:141] libmachine: (no-preload-168587) Calling .DriverName
	I0914 18:14:38.788188   62207 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 18:14:38.788195   62207 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0914 18:14:38.789239   62207 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0914 18:14:38.789254   62207 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0914 18:14:38.789273   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetSSHHostname
	I0914 18:14:38.789338   62207 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0914 18:14:38.789347   62207 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0914 18:14:38.789358   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetSSHHostname
	I0914 18:14:38.792968   62207 main.go:141] libmachine: (no-preload-168587) DBG | domain no-preload-168587 has defined MAC address 52:54:00:4c:40:8a in network mk-no-preload-168587
	I0914 18:14:38.793521   62207 main.go:141] libmachine: (no-preload-168587) DBG | domain no-preload-168587 has defined MAC address 52:54:00:4c:40:8a in network mk-no-preload-168587
	I0914 18:14:38.793853   62207 main.go:141] libmachine: (no-preload-168587) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:40:8a", ip: ""} in network mk-no-preload-168587: {Iface:virbr2 ExpiryTime:2024-09-14 18:59:50 +0000 UTC Type:0 Mac:52:54:00:4c:40:8a Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:no-preload-168587 Clientid:01:52:54:00:4c:40:8a}
	I0914 18:14:38.793894   62207 main.go:141] libmachine: (no-preload-168587) DBG | domain no-preload-168587 has defined IP address 192.168.39.38 and MAC address 52:54:00:4c:40:8a in network mk-no-preload-168587
	I0914 18:14:38.794037   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetSSHPort
	I0914 18:14:38.794097   62207 main.go:141] libmachine: (no-preload-168587) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:40:8a", ip: ""} in network mk-no-preload-168587: {Iface:virbr2 ExpiryTime:2024-09-14 18:59:50 +0000 UTC Type:0 Mac:52:54:00:4c:40:8a Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:no-preload-168587 Clientid:01:52:54:00:4c:40:8a}
	I0914 18:14:38.794107   62207 main.go:141] libmachine: (no-preload-168587) DBG | domain no-preload-168587 has defined IP address 192.168.39.38 and MAC address 52:54:00:4c:40:8a in network mk-no-preload-168587
	I0914 18:14:38.794258   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetSSHKeyPath
	I0914 18:14:38.794351   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetSSHPort
	I0914 18:14:38.794499   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetSSHKeyPath
	I0914 18:14:38.794531   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetSSHUsername
	I0914 18:14:38.794635   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetSSHUsername
	I0914 18:14:38.794716   62207 sshutil.go:53] new ssh client: &{IP:192.168.39.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/no-preload-168587/id_rsa Username:docker}
	I0914 18:14:38.794777   62207 sshutil.go:53] new ssh client: &{IP:192.168.39.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/no-preload-168587/id_rsa Username:docker}
	I0914 18:14:38.827254   62207 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38643
	I0914 18:14:38.827852   62207 main.go:141] libmachine: () Calling .GetVersion
	I0914 18:14:38.828434   62207 main.go:141] libmachine: Using API Version  1
	I0914 18:14:38.828460   62207 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 18:14:38.828837   62207 main.go:141] libmachine: () Calling .GetMachineName
	I0914 18:14:38.829088   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetState
	I0914 18:14:38.830820   62207 main.go:141] libmachine: (no-preload-168587) Calling .DriverName
	I0914 18:14:38.831031   62207 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0914 18:14:38.831048   62207 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0914 18:14:38.831067   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetSSHHostname
	I0914 18:14:38.833822   62207 main.go:141] libmachine: (no-preload-168587) DBG | domain no-preload-168587 has defined MAC address 52:54:00:4c:40:8a in network mk-no-preload-168587
	I0914 18:14:38.834242   62207 main.go:141] libmachine: (no-preload-168587) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:40:8a", ip: ""} in network mk-no-preload-168587: {Iface:virbr2 ExpiryTime:2024-09-14 18:59:50 +0000 UTC Type:0 Mac:52:54:00:4c:40:8a Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:no-preload-168587 Clientid:01:52:54:00:4c:40:8a}
	I0914 18:14:38.834282   62207 main.go:141] libmachine: (no-preload-168587) DBG | domain no-preload-168587 has defined IP address 192.168.39.38 and MAC address 52:54:00:4c:40:8a in network mk-no-preload-168587
	I0914 18:14:38.834453   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetSSHPort
	I0914 18:14:38.834641   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetSSHKeyPath
	I0914 18:14:38.834794   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetSSHUsername
	I0914 18:14:38.834963   62207 sshutil.go:53] new ssh client: &{IP:192.168.39.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/no-preload-168587/id_rsa Username:docker}
	I0914 18:14:38.920627   62207 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0914 18:14:38.941951   62207 node_ready.go:35] waiting up to 6m0s for node "no-preload-168587" to be "Ready" ...
	I0914 18:14:38.973102   62207 node_ready.go:49] node "no-preload-168587" has status "Ready":"True"
	I0914 18:14:38.973124   62207 node_ready.go:38] duration metric: took 31.146661ms for node "no-preload-168587" to be "Ready" ...
	I0914 18:14:38.973132   62207 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 18:14:38.989712   62207 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-168587" in "kube-system" namespace to be "Ready" ...
	I0914 18:14:39.018196   62207 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0914 18:14:39.018223   62207 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0914 18:14:39.045691   62207 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0914 18:14:39.066249   62207 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0914 18:14:39.066277   62207 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0914 18:14:39.073017   62207 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0914 18:14:39.118360   62207 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0914 18:14:39.118385   62207 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0914 18:14:39.195268   62207 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0914 18:14:39.874924   62207 main.go:141] libmachine: Making call to close driver server
	I0914 18:14:39.874953   62207 main.go:141] libmachine: (no-preload-168587) Calling .Close
	I0914 18:14:39.874950   62207 main.go:141] libmachine: Making call to close driver server
	I0914 18:14:39.875004   62207 main.go:141] libmachine: (no-preload-168587) Calling .Close
	I0914 18:14:39.875398   62207 main.go:141] libmachine: (no-preload-168587) DBG | Closing plugin on server side
	I0914 18:14:39.875406   62207 main.go:141] libmachine: Successfully made call to close driver server
	I0914 18:14:39.875457   62207 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 18:14:39.875466   62207 main.go:141] libmachine: Making call to close driver server
	I0914 18:14:39.875476   62207 main.go:141] libmachine: (no-preload-168587) Calling .Close
	I0914 18:14:39.875406   62207 main.go:141] libmachine: (no-preload-168587) DBG | Closing plugin on server side
	I0914 18:14:39.875430   62207 main.go:141] libmachine: Successfully made call to close driver server
	I0914 18:14:39.875598   62207 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 18:14:39.875609   62207 main.go:141] libmachine: Making call to close driver server
	I0914 18:14:39.875631   62207 main.go:141] libmachine: (no-preload-168587) Calling .Close
	I0914 18:14:39.875914   62207 main.go:141] libmachine: (no-preload-168587) DBG | Closing plugin on server side
	I0914 18:14:39.875916   62207 main.go:141] libmachine: Successfully made call to close driver server
	I0914 18:14:39.875934   62207 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 18:14:39.875939   62207 main.go:141] libmachine: (no-preload-168587) DBG | Closing plugin on server side
	I0914 18:14:39.875959   62207 main.go:141] libmachine: Successfully made call to close driver server
	I0914 18:14:39.875966   62207 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 18:14:39.929860   62207 main.go:141] libmachine: Making call to close driver server
	I0914 18:14:39.929881   62207 main.go:141] libmachine: (no-preload-168587) Calling .Close
	I0914 18:14:39.930191   62207 main.go:141] libmachine: Successfully made call to close driver server
	I0914 18:14:39.930211   62207 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 18:14:40.139888   62207 main.go:141] libmachine: Making call to close driver server
	I0914 18:14:40.139918   62207 main.go:141] libmachine: (no-preload-168587) Calling .Close
	I0914 18:14:40.140256   62207 main.go:141] libmachine: Successfully made call to close driver server
	I0914 18:14:40.140273   62207 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 18:14:40.140282   62207 main.go:141] libmachine: Making call to close driver server
	I0914 18:14:40.140289   62207 main.go:141] libmachine: (no-preload-168587) Calling .Close
	I0914 18:14:40.140608   62207 main.go:141] libmachine: Successfully made call to close driver server
	I0914 18:14:40.140620   62207 main.go:141] libmachine: (no-preload-168587) DBG | Closing plugin on server side
	I0914 18:14:40.140630   62207 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 18:14:40.140646   62207 addons.go:475] Verifying addon metrics-server=true in "no-preload-168587"
	I0914 18:14:40.142461   62207 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0914 18:14:40.143818   62207 addons.go:510] duration metric: took 1.400695696s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
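	With the three addons enabled, their state can be checked against the new cluster with ordinary kubectl commands (an illustrative follow-up, not part of this run; the deployment name matches the metrics-server pod seen later in the log):
	  kubectl --context no-preload-168587 -n kube-system get deploy metrics-server
	  kubectl --context no-preload-168587 get storageclass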
	I0914 18:14:40.996599   62207 pod_ready.go:103] pod "etcd-no-preload-168587" in "kube-system" namespace has status "Ready":"False"
	I0914 18:14:43.498584   62207 pod_ready.go:103] pod "etcd-no-preload-168587" in "kube-system" namespace has status "Ready":"False"
	I0914 18:14:45.995938   62207 pod_ready.go:93] pod "etcd-no-preload-168587" in "kube-system" namespace has status "Ready":"True"
	I0914 18:14:45.995971   62207 pod_ready.go:82] duration metric: took 7.006220602s for pod "etcd-no-preload-168587" in "kube-system" namespace to be "Ready" ...
	I0914 18:14:45.995984   62207 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-168587" in "kube-system" namespace to be "Ready" ...
	I0914 18:14:46.000589   62207 pod_ready.go:93] pod "kube-apiserver-no-preload-168587" in "kube-system" namespace has status "Ready":"True"
	I0914 18:14:46.000609   62207 pod_ready.go:82] duration metric: took 4.618617ms for pod "kube-apiserver-no-preload-168587" in "kube-system" namespace to be "Ready" ...
	I0914 18:14:46.000620   62207 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-168587" in "kube-system" namespace to be "Ready" ...
	I0914 18:14:46.004865   62207 pod_ready.go:93] pod "kube-controller-manager-no-preload-168587" in "kube-system" namespace has status "Ready":"True"
	I0914 18:14:46.004886   62207 pod_ready.go:82] duration metric: took 4.259787ms for pod "kube-controller-manager-no-preload-168587" in "kube-system" namespace to be "Ready" ...
	I0914 18:14:46.004895   62207 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-xdj6b" in "kube-system" namespace to be "Ready" ...
	I0914 18:14:46.009225   62207 pod_ready.go:93] pod "kube-proxy-xdj6b" in "kube-system" namespace has status "Ready":"True"
	I0914 18:14:46.009243   62207 pod_ready.go:82] duration metric: took 4.343161ms for pod "kube-proxy-xdj6b" in "kube-system" namespace to be "Ready" ...
	I0914 18:14:46.009250   62207 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-168587" in "kube-system" namespace to be "Ready" ...
	I0914 18:14:46.013312   62207 pod_ready.go:93] pod "kube-scheduler-no-preload-168587" in "kube-system" namespace has status "Ready":"True"
	I0914 18:14:46.013330   62207 pod_ready.go:82] duration metric: took 4.073817ms for pod "kube-scheduler-no-preload-168587" in "kube-system" namespace to be "Ready" ...
	I0914 18:14:46.013337   62207 pod_ready.go:39] duration metric: took 7.040196066s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 18:14:46.013358   62207 api_server.go:52] waiting for apiserver process to appear ...
	I0914 18:14:46.013403   62207 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:14:46.029881   62207 api_server.go:72] duration metric: took 7.286871398s to wait for apiserver process to appear ...
	I0914 18:14:46.029912   62207 api_server.go:88] waiting for apiserver healthz status ...
	I0914 18:14:46.029937   62207 api_server.go:253] Checking apiserver healthz at https://192.168.39.38:8443/healthz ...
	I0914 18:14:46.034236   62207 api_server.go:279] https://192.168.39.38:8443/healthz returned 200:
	ok
	I0914 18:14:46.035287   62207 api_server.go:141] control plane version: v1.31.1
	I0914 18:14:46.035305   62207 api_server.go:131] duration metric: took 5.385499ms to wait for apiserver health ...
	I0914 18:14:46.035314   62207 system_pods.go:43] waiting for kube-system pods to appear ...
	I0914 18:14:46.196765   62207 system_pods.go:59] 9 kube-system pods found
	I0914 18:14:46.196796   62207 system_pods.go:61] "coredns-7c65d6cfc9-nzpdb" [acd2d488-301e-4d00-a17a-0e06ea5d9691] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0914 18:14:46.196804   62207 system_pods.go:61] "coredns-7c65d6cfc9-qrgr9" [31b611b3-d861-451f-8c17-30bed52994a0] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0914 18:14:46.196810   62207 system_pods.go:61] "etcd-no-preload-168587" [8d3dc146-eb39-4ed3-9409-63ea08fffd39] Running
	I0914 18:14:46.196816   62207 system_pods.go:61] "kube-apiserver-no-preload-168587" [9a68e520-b40c-42d2-9052-757c7c99a958] Running
	I0914 18:14:46.196821   62207 system_pods.go:61] "kube-controller-manager-no-preload-168587" [47d23e53-0f6b-45bd-8288-3101a35b1827] Running
	I0914 18:14:46.196824   62207 system_pods.go:61] "kube-proxy-xdj6b" [d3080090-4f40-49e1-9c3e-ccceb37cc952] Running
	I0914 18:14:46.196827   62207 system_pods.go:61] "kube-scheduler-no-preload-168587" [e7fe55f8-d203-4de2-9594-15f858453434] Running
	I0914 18:14:46.196832   62207 system_pods.go:61] "metrics-server-6867b74b74-cmcz4" [24cea6b3-a107-4110-ac29-88389b55bbdc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 18:14:46.196835   62207 system_pods.go:61] "storage-provisioner" [57b6d85d-fc04-42da-9452-3f24824b8377] Running
	I0914 18:14:46.196842   62207 system_pods.go:74] duration metric: took 161.510322ms to wait for pod list to return data ...
	I0914 18:14:46.196853   62207 default_sa.go:34] waiting for default service account to be created ...
	I0914 18:14:46.394399   62207 default_sa.go:45] found service account: "default"
	I0914 18:14:46.394428   62207 default_sa.go:55] duration metric: took 197.566762ms for default service account to be created ...
	I0914 18:14:46.394443   62207 system_pods.go:116] waiting for k8s-apps to be running ...
	I0914 18:14:46.596421   62207 system_pods.go:86] 9 kube-system pods found
	I0914 18:14:46.596454   62207 system_pods.go:89] "coredns-7c65d6cfc9-nzpdb" [acd2d488-301e-4d00-a17a-0e06ea5d9691] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0914 18:14:46.596462   62207 system_pods.go:89] "coredns-7c65d6cfc9-qrgr9" [31b611b3-d861-451f-8c17-30bed52994a0] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0914 18:14:46.596468   62207 system_pods.go:89] "etcd-no-preload-168587" [8d3dc146-eb39-4ed3-9409-63ea08fffd39] Running
	I0914 18:14:46.596473   62207 system_pods.go:89] "kube-apiserver-no-preload-168587" [9a68e520-b40c-42d2-9052-757c7c99a958] Running
	I0914 18:14:46.596477   62207 system_pods.go:89] "kube-controller-manager-no-preload-168587" [47d23e53-0f6b-45bd-8288-3101a35b1827] Running
	I0914 18:14:46.596480   62207 system_pods.go:89] "kube-proxy-xdj6b" [d3080090-4f40-49e1-9c3e-ccceb37cc952] Running
	I0914 18:14:46.596483   62207 system_pods.go:89] "kube-scheduler-no-preload-168587" [e7fe55f8-d203-4de2-9594-15f858453434] Running
	I0914 18:14:46.596502   62207 system_pods.go:89] "metrics-server-6867b74b74-cmcz4" [24cea6b3-a107-4110-ac29-88389b55bbdc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 18:14:46.596509   62207 system_pods.go:89] "storage-provisioner" [57b6d85d-fc04-42da-9452-3f24824b8377] Running
	I0914 18:14:46.596517   62207 system_pods.go:126] duration metric: took 202.067078ms to wait for k8s-apps to be running ...
	I0914 18:14:46.596527   62207 system_svc.go:44] waiting for kubelet service to be running ....
	I0914 18:14:46.596571   62207 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 18:14:46.611796   62207 system_svc.go:56] duration metric: took 15.259464ms WaitForService to wait for kubelet
	I0914 18:14:46.611837   62207 kubeadm.go:582] duration metric: took 7.868833105s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0914 18:14:46.611858   62207 node_conditions.go:102] verifying NodePressure condition ...
	I0914 18:14:46.794731   62207 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0914 18:14:46.794758   62207 node_conditions.go:123] node cpu capacity is 2
	I0914 18:14:46.794767   62207 node_conditions.go:105] duration metric: took 182.903835ms to run NodePressure ...
	I0914 18:14:46.794777   62207 start.go:241] waiting for startup goroutines ...
	I0914 18:14:46.794783   62207 start.go:246] waiting for cluster config update ...
	I0914 18:14:46.794793   62207 start.go:255] writing updated cluster config ...
	I0914 18:14:46.795051   62207 ssh_runner.go:195] Run: rm -f paused
	I0914 18:14:46.845803   62207 start.go:600] kubectl: 1.31.0, cluster: 1.31.1 (minor skew: 0)
	I0914 18:14:46.847399   62207 out.go:177] * Done! kubectl is now configured to use "no-preload-168587" cluster and "default" namespace by default
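	At this point the no-preload-168587 profile is fully started; the remaining lines come from a separate minikube process (the 62996-prefixed entries) bootstrapping a v1.20.0 cluster whose kubelet health check at localhost:10248 keeps failing. The context switch announced above can be confirmed with (illustrative):
	  kubectl config current-context
	  kubectl get nodes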
	I0914 18:14:53.509475   62996 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0914 18:14:53.509669   62996 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0914 18:14:53.509699   62996 kubeadm.go:310] 
	I0914 18:14:53.509778   62996 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0914 18:14:53.509838   62996 kubeadm.go:310] 		timed out waiting for the condition
	I0914 18:14:53.509849   62996 kubeadm.go:310] 
	I0914 18:14:53.509901   62996 kubeadm.go:310] 	This error is likely caused by:
	I0914 18:14:53.509966   62996 kubeadm.go:310] 		- The kubelet is not running
	I0914 18:14:53.510115   62996 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0914 18:14:53.510126   62996 kubeadm.go:310] 
	I0914 18:14:53.510293   62996 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0914 18:14:53.510346   62996 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0914 18:14:53.510386   62996 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0914 18:14:53.510394   62996 kubeadm.go:310] 
	I0914 18:14:53.510487   62996 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0914 18:14:53.510567   62996 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0914 18:14:53.510582   62996 kubeadm.go:310] 
	I0914 18:14:53.510758   62996 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0914 18:14:53.510852   62996 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0914 18:14:53.510953   62996 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0914 18:14:53.511074   62996 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0914 18:14:53.511085   62996 kubeadm.go:310] 
	I0914 18:14:53.511727   62996 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0914 18:14:53.511824   62996 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0914 18:14:53.511904   62996 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0914 18:14:53.512051   62996 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0914 18:14:53.512098   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0914 18:14:53.965324   62996 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 18:14:53.982028   62996 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0914 18:14:53.993640   62996 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0914 18:14:53.993674   62996 kubeadm.go:157] found existing configuration files:
	
	I0914 18:14:53.993745   62996 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0914 18:14:54.004600   62996 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0914 18:14:54.004669   62996 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0914 18:14:54.015315   62996 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0914 18:14:54.025727   62996 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0914 18:14:54.025795   62996 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0914 18:14:54.035619   62996 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0914 18:14:54.044936   62996 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0914 18:14:54.045003   62996 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0914 18:14:54.055091   62996 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0914 18:14:54.064576   62996 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0914 18:14:54.064630   62996 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
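	The four grep/rm pairs above implement a stale-kubeconfig sweep: any of the four files under /etc/kubernetes that does not reference https://control-plane.minikube.internal:8443 is deleted so kubeadm can regenerate it on the retry that follows. A compact shell equivalent (a sketch of the same check, not the code minikube runs) is:
	  for f in admin kubelet controller-manager scheduler; do
	    sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/${f}.conf" \
	      || sudo rm -f "/etc/kubernetes/${f}.conf"
	  done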
	I0914 18:14:54.074698   62996 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0914 18:14:54.143625   62996 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0914 18:14:54.143712   62996 kubeadm.go:310] [preflight] Running pre-flight checks
	I0914 18:14:54.289361   62996 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0914 18:14:54.289488   62996 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0914 18:14:54.289629   62996 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0914 18:14:54.479052   62996 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0914 18:14:54.481175   62996 out.go:235]   - Generating certificates and keys ...
	I0914 18:14:54.481284   62996 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0914 18:14:54.481391   62996 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0914 18:14:54.481469   62996 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0914 18:14:54.481522   62996 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0914 18:14:54.481585   62996 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0914 18:14:54.481631   62996 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0914 18:14:54.481685   62996 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0914 18:14:54.481737   62996 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0914 18:14:54.481829   62996 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0914 18:14:54.481926   62996 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0914 18:14:54.481977   62996 kubeadm.go:310] [certs] Using the existing "sa" key
	I0914 18:14:54.482063   62996 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0914 18:14:54.695002   62996 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0914 18:14:54.850598   62996 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0914 18:14:54.964590   62996 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0914 18:14:55.108047   62996 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0914 18:14:55.126530   62996 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0914 18:14:55.128690   62996 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0914 18:14:55.128760   62996 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0914 18:14:55.272139   62996 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0914 18:14:55.274365   62996 out.go:235]   - Booting up control plane ...
	I0914 18:14:55.274529   62996 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0914 18:14:55.279796   62996 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0914 18:14:55.281097   62996 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0914 18:14:55.281998   62996 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0914 18:14:55.285620   62996 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0914 18:15:35.288294   62996 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0914 18:15:35.288485   62996 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0914 18:15:35.288693   62996 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0914 18:15:40.289032   62996 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0914 18:15:40.289327   62996 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0914 18:15:50.289795   62996 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0914 18:15:50.290023   62996 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0914 18:16:10.291201   62996 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0914 18:16:10.291427   62996 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0914 18:16:50.292253   62996 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0914 18:16:50.292481   62996 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0914 18:16:50.292503   62996 kubeadm.go:310] 
	I0914 18:16:50.292554   62996 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0914 18:16:50.292606   62996 kubeadm.go:310] 		timed out waiting for the condition
	I0914 18:16:50.292615   62996 kubeadm.go:310] 
	I0914 18:16:50.292654   62996 kubeadm.go:310] 	This error is likely caused by:
	I0914 18:16:50.292685   62996 kubeadm.go:310] 		- The kubelet is not running
	I0914 18:16:50.292773   62996 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0914 18:16:50.292780   62996 kubeadm.go:310] 
	I0914 18:16:50.292912   62996 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0914 18:16:50.292953   62996 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0914 18:16:50.292993   62996 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0914 18:16:50.293022   62996 kubeadm.go:310] 
	I0914 18:16:50.293176   62996 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0914 18:16:50.293293   62996 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0914 18:16:50.293308   62996 kubeadm.go:310] 
	I0914 18:16:50.293470   62996 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0914 18:16:50.293602   62996 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0914 18:16:50.293709   62996 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0914 18:16:50.293810   62996 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0914 18:16:50.293830   62996 kubeadm.go:310] 
	I0914 18:16:50.294646   62996 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0914 18:16:50.294759   62996 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0914 18:16:50.294871   62996 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0914 18:16:50.294910   62996 kubeadm.go:394] duration metric: took 7m56.82551772s to StartCluster
	I0914 18:16:50.294961   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:16:50.295021   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:16:50.341859   62996 cri.go:89] found id: ""
	I0914 18:16:50.341894   62996 logs.go:276] 0 containers: []
	W0914 18:16:50.341908   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:16:50.341916   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:16:50.341983   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:16:50.380725   62996 cri.go:89] found id: ""
	I0914 18:16:50.380755   62996 logs.go:276] 0 containers: []
	W0914 18:16:50.380766   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:16:50.380773   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:16:50.380842   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:16:50.415978   62996 cri.go:89] found id: ""
	I0914 18:16:50.416003   62996 logs.go:276] 0 containers: []
	W0914 18:16:50.416012   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:16:50.416017   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:16:50.416065   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:16:50.452823   62996 cri.go:89] found id: ""
	I0914 18:16:50.452859   62996 logs.go:276] 0 containers: []
	W0914 18:16:50.452872   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:16:50.452882   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:16:50.452939   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:16:50.487240   62996 cri.go:89] found id: ""
	I0914 18:16:50.487272   62996 logs.go:276] 0 containers: []
	W0914 18:16:50.487283   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:16:50.487291   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:16:50.487353   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:16:50.520690   62996 cri.go:89] found id: ""
	I0914 18:16:50.520719   62996 logs.go:276] 0 containers: []
	W0914 18:16:50.520728   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:16:50.520735   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:16:50.520783   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:16:50.558150   62996 cri.go:89] found id: ""
	I0914 18:16:50.558191   62996 logs.go:276] 0 containers: []
	W0914 18:16:50.558200   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:16:50.558206   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:16:50.558266   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:16:50.595843   62996 cri.go:89] found id: ""
	I0914 18:16:50.595879   62996 logs.go:276] 0 containers: []
	W0914 18:16:50.595893   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:16:50.595905   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:16:50.595920   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:16:50.650623   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:16:50.650659   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:16:50.664991   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:16:50.665018   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:16:50.747876   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:16:50.747899   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:16:50.747915   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:16:50.849314   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:16:50.849354   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0914 18:16:50.889101   62996 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0914 18:16:50.889181   62996 out.go:270] * 
	W0914 18:16:50.889263   62996 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0914 18:16:50.889287   62996 out.go:270] * 
	W0914 18:16:50.890531   62996 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0914 18:16:50.893666   62996 out.go:201] 
	W0914 18:16:50.894916   62996 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0914 18:16:50.894958   62996 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0914 18:16:50.894991   62996 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0914 18:16:50.896591   62996 out.go:201] 
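
	The start failure above ends in K8S_KUBELET_NOT_RUNNING: kubeadm's wait-control-plane phase never sees a healthy kubelet answering on 127.0.0.1:10248, so no control-plane containers are ever found. As a hedged sketch only, the commands below restate the checks that the kubeadm output and the minikube suggestion themselves quote; the `minikube ssh` wrapper and the <profile> placeholder are additions for illustration and are not taken from this report.

		# open a shell inside the affected guest VM; <profile> is a placeholder profile name
		minikube ssh -p <profile>

		# inside the VM: check whether the kubelet is running and why it stopped
		sudo systemctl status kubelet
		sudo journalctl -xeu kubelet

		# list any control-plane containers CRI-O managed to start, then inspect one of them
		sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
		sudo crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID

		# retry with the cgroup driver the suggestion above points at, and capture logs for a GitHub issue
		minikube start -p <profile> --extra-config=kubelet.cgroup-driver=systemd
		minikube logs --file=logs.txt -p <profile>

	Apart from the placeholder profile name, these are the same journalctl, systemctl, crictl, and minikube invocations quoted in the failure output; they are listed here only so the suggested triage path can be followed without re-reading the repeated kubeadm dumps.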
	
	
	==> CRI-O <==
	Sep 14 18:22:45 default-k8s-diff-port-243449 crio[695]: time="2024-09-14 18:22:45.984716726Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=153a61d0-1359-4cd5-a490-8b6e333cdcf9 name=/runtime.v1.RuntimeService/Version
	Sep 14 18:22:45 default-k8s-diff-port-243449 crio[695]: time="2024-09-14 18:22:45.986790795Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e4b4dde9-f5fa-4e59-8348-a7a8782af282 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 14 18:22:45 default-k8s-diff-port-243449 crio[695]: time="2024-09-14 18:22:45.987327622Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726338165987298611,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e4b4dde9-f5fa-4e59-8348-a7a8782af282 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 14 18:22:45 default-k8s-diff-port-243449 crio[695]: time="2024-09-14 18:22:45.988049796Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=411b971d-8caf-42fa-abfe-a83d8703b10d name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 18:22:45 default-k8s-diff-port-243449 crio[695]: time="2024-09-14 18:22:45.988107310Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=411b971d-8caf-42fa-abfe-a83d8703b10d name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 18:22:45 default-k8s-diff-port-243449 crio[695]: time="2024-09-14 18:22:45.988308268Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:be0aa9c1761410495159b478552996da9670b728a9efd2985ec5cad6759e8277,PodSandboxId:e97ff06204d25546f83e01f0e32e0515cd39aacd8dfb182c7333fcb548a8dc63,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726337388273731160,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2e814601-a19a-4848-bed5-d9a29ffb3b5d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d38bdc2036d51c8e1266f8ec9d67b896ac39334fc9230073cc9692b6d4cc4ba8,PodSandboxId:5f11f2a59686989219bea6d342aaa6b2066beaaa8b9fb1012ef3accf0321c763,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1726337368646149694,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: fd23d806-4c0f-41fe-b5d0-4f5ee2f395f2,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02a31bf75666cfcbe85dcb42259279071420c3b26de20d6d667bcd8ffef77d86,PodSandboxId:65ce16275efd0d0f66d68f53237ee609f6658ca06ec7819baf35dd81d6aa6f8b,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726337365153990856,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-8v8s7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 896b4fde-d17e-43a3-b7c8-b710e2e70e2c,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\
"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b33f92ef722c8a6bde80bdbd3ff62a9b4b31f0bf548eec9aaadd4593a101017e,PodSandboxId:e97ff06204d25546f83e01f0e32e0515cd39aacd8dfb182c7333fcb548a8dc63,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726337357428003930,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 2e814601-a19a-4848-bed5-d9a29ffb3b5d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a5c3b65e96ba854d6ceb4a824ba5076d43cff0b7a1978617b69271d5b88cca4d,PodSandboxId:eafcf1e3a206737a4857e2484820954af00cac8e773a41f582b3a0947901d38d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726337357456834716,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gbkqm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4308aacf-ea0a-4bba-8598
-85ffaf959b7e,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7fb6567a7b9f36f21430774a159db944e9060eac3c8fdf9abc50b9c56a2b0377,PodSandboxId:24d93e1abe22063f7589090dc366060f160ca6781207e1f464897e6cc966085d,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726337353697170593,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-243449,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4de6eba14fda99aaa4a144ae5e6d52ec,},Annotations:map[
string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c532e45713d01c37f899bd4c9308e6c7fceb95accae13da1549ffb195e44bc4,PodSandboxId:662103c157493ef87ee240553f659322bef8401e12abbe9b1c5dc044a5a79696,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726337353689069860,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-243449,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c181fee58e194ba1e69efe4c4fb4841,},Annotations:map[st
ring]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:09627c963da76d1a34f891fa3edf6c8c82f652f7308541d6f9083f5888f4bf94,PodSandboxId:2d16ffab3061ac3b2945eb2607e6a8cec9877fab622b4a7a2da444779c004106,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726337353703011191,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-243449,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e467e9fb657a0ca4b355d6e3b1e3
2a7d,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a390e6c0153550b206976ab4e172cf3236aac7e9e5671c332d8c0f8c4e567e0b,PodSandboxId:f49c613c905737464d0e7690cb4171b24b253b5b11431a7908323c5b0e0f3a9b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726337353661910592,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-243449,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c5688fa5732dad3a9738f9b149e2c0
5f,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=411b971d-8caf-42fa-abfe-a83d8703b10d name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 18:22:46 default-k8s-diff-port-243449 crio[695]: time="2024-09-14 18:22:46.034713334Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f8a29f63-7bac-44d6-9ab2-a01eafe9871e name=/runtime.v1.RuntimeService/Version
	Sep 14 18:22:46 default-k8s-diff-port-243449 crio[695]: time="2024-09-14 18:22:46.034823098Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f8a29f63-7bac-44d6-9ab2-a01eafe9871e name=/runtime.v1.RuntimeService/Version
	Sep 14 18:22:46 default-k8s-diff-port-243449 crio[695]: time="2024-09-14 18:22:46.036190168Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=03aef094-f637-4ff3-9e26-97b71551fdf6 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 14 18:22:46 default-k8s-diff-port-243449 crio[695]: time="2024-09-14 18:22:46.036950073Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726338166036918667,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=03aef094-f637-4ff3-9e26-97b71551fdf6 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 14 18:22:46 default-k8s-diff-port-243449 crio[695]: time="2024-09-14 18:22:46.037727726Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b285c38a-5cf9-4cf1-8a8f-f1c9382a2df7 name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 18:22:46 default-k8s-diff-port-243449 crio[695]: time="2024-09-14 18:22:46.038063980Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b285c38a-5cf9-4cf1-8a8f-f1c9382a2df7 name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 18:22:46 default-k8s-diff-port-243449 crio[695]: time="2024-09-14 18:22:46.042605093Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:be0aa9c1761410495159b478552996da9670b728a9efd2985ec5cad6759e8277,PodSandboxId:e97ff06204d25546f83e01f0e32e0515cd39aacd8dfb182c7333fcb548a8dc63,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726337388273731160,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2e814601-a19a-4848-bed5-d9a29ffb3b5d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d38bdc2036d51c8e1266f8ec9d67b896ac39334fc9230073cc9692b6d4cc4ba8,PodSandboxId:5f11f2a59686989219bea6d342aaa6b2066beaaa8b9fb1012ef3accf0321c763,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1726337368646149694,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: fd23d806-4c0f-41fe-b5d0-4f5ee2f395f2,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02a31bf75666cfcbe85dcb42259279071420c3b26de20d6d667bcd8ffef77d86,PodSandboxId:65ce16275efd0d0f66d68f53237ee609f6658ca06ec7819baf35dd81d6aa6f8b,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726337365153990856,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-8v8s7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 896b4fde-d17e-43a3-b7c8-b710e2e70e2c,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\
"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b33f92ef722c8a6bde80bdbd3ff62a9b4b31f0bf548eec9aaadd4593a101017e,PodSandboxId:e97ff06204d25546f83e01f0e32e0515cd39aacd8dfb182c7333fcb548a8dc63,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726337357428003930,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 2e814601-a19a-4848-bed5-d9a29ffb3b5d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a5c3b65e96ba854d6ceb4a824ba5076d43cff0b7a1978617b69271d5b88cca4d,PodSandboxId:eafcf1e3a206737a4857e2484820954af00cac8e773a41f582b3a0947901d38d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726337357456834716,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gbkqm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4308aacf-ea0a-4bba-8598
-85ffaf959b7e,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7fb6567a7b9f36f21430774a159db944e9060eac3c8fdf9abc50b9c56a2b0377,PodSandboxId:24d93e1abe22063f7589090dc366060f160ca6781207e1f464897e6cc966085d,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726337353697170593,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-243449,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4de6eba14fda99aaa4a144ae5e6d52ec,},Annotations:map[
string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c532e45713d01c37f899bd4c9308e6c7fceb95accae13da1549ffb195e44bc4,PodSandboxId:662103c157493ef87ee240553f659322bef8401e12abbe9b1c5dc044a5a79696,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726337353689069860,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-243449,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c181fee58e194ba1e69efe4c4fb4841,},Annotations:map[st
ring]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:09627c963da76d1a34f891fa3edf6c8c82f652f7308541d6f9083f5888f4bf94,PodSandboxId:2d16ffab3061ac3b2945eb2607e6a8cec9877fab622b4a7a2da444779c004106,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726337353703011191,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-243449,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e467e9fb657a0ca4b355d6e3b1e3
2a7d,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a390e6c0153550b206976ab4e172cf3236aac7e9e5671c332d8c0f8c4e567e0b,PodSandboxId:f49c613c905737464d0e7690cb4171b24b253b5b11431a7908323c5b0e0f3a9b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726337353661910592,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-243449,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c5688fa5732dad3a9738f9b149e2c0
5f,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b285c38a-5cf9-4cf1-8a8f-f1c9382a2df7 name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 18:22:46 default-k8s-diff-port-243449 crio[695]: time="2024-09-14 18:22:46.053933955Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:&PodSandboxFilter{Id:,State:&PodSandboxStateValue{State:SANDBOX_READY,},LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=577f6751-c91b-4502-9d2e-668b4c5c73df name=/runtime.v1.RuntimeService/ListPodSandbox
	Sep 14 18:22:46 default-k8s-diff-port-243449 crio[695]: time="2024-09-14 18:22:46.054216688Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:5f11f2a59686989219bea6d342aaa6b2066beaaa8b9fb1012ef3accf0321c763,Metadata:&PodSandboxMetadata{Name:busybox,Uid:fd23d806-4c0f-41fe-b5d0-4f5ee2f395f2,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1726337364881426011,Labels:map[string]string{integration-test: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: fd23d806-4c0f-41fe-b5d0-4f5ee2f395f2,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-14T18:09:16.983257493Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:65ce16275efd0d0f66d68f53237ee609f6658ca06ec7819baf35dd81d6aa6f8b,Metadata:&PodSandboxMetadata{Name:coredns-7c65d6cfc9-8v8s7,Uid:896b4fde-d17e-43a3-b7c8-b710e2e70e2c,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:172633
7364880198239,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7c65d6cfc9-8v8s7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 896b4fde-d17e-43a3-b7c8-b710e2e70e2c,k8s-app: kube-dns,pod-template-hash: 7c65d6cfc9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-14T18:09:16.983246093Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:43d24a32041755d0abc8b39456930b1e371a2652c2dda1ac4bee267bbd238014,Metadata:&PodSandboxMetadata{Name:metrics-server-6867b74b74-7v8dr,Uid:90be95af-c779-4b31-b261-2c4020a34280,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1726337363082033135,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: metrics-server-6867b74b74-7v8dr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 90be95af-c779-4b31-b261-2c4020a34280,k8s-app: metrics-server,pod-template-hash: 6867b74b74,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-14
T18:09:16.983258551Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:e97ff06204d25546f83e01f0e32e0515cd39aacd8dfb182c7333fcb548a8dc63,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:2e814601-a19a-4848-bed5-d9a29ffb3b5d,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1726337357295533696,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2e814601-a19a-4848-bed5-d9a29ffb3b5d,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"g
cr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-09-14T18:09:16.983256228Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:eafcf1e3a206737a4857e2484820954af00cac8e773a41f582b3a0947901d38d,Metadata:&PodSandboxMetadata{Name:kube-proxy-gbkqm,Uid:4308aacf-ea0a-4bba-8598-85ffaf959b7e,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1726337357294701674,Labels:map[string]string{controller-revision-hash: 648b489c5b,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-gbkqm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4308aacf-ea0a-4bba-8598-85ffaf959b7e,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{ku
bernetes.io/config.seen: 2024-09-14T18:09:16.983253835Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:24d93e1abe22063f7589090dc366060f160ca6781207e1f464897e6cc966085d,Metadata:&PodSandboxMetadata{Name:etcd-default-k8s-diff-port-243449,Uid:4de6eba14fda99aaa4a144ae5e6d52ec,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1726337352512943118,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-default-k8s-diff-port-243449,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4de6eba14fda99aaa4a144ae5e6d52ec,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.61.38:2379,kubernetes.io/config.hash: 4de6eba14fda99aaa4a144ae5e6d52ec,kubernetes.io/config.seen: 2024-09-14T18:09:12.004503048Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:2d16ffab3061ac3b2945eb2607e6a8cec9877fab622b4a7a2da444779c004106,Metadata:&PodSandboxMetadata{Name:k
ube-controller-manager-default-k8s-diff-port-243449,Uid:e467e9fb657a0ca4b355d6e3b1e32a7d,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1726337352490780340,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-243449,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e467e9fb657a0ca4b355d6e3b1e32a7d,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: e467e9fb657a0ca4b355d6e3b1e32a7d,kubernetes.io/config.seen: 2024-09-14T18:09:11.981302992Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:f49c613c905737464d0e7690cb4171b24b253b5b11431a7908323c5b0e0f3a9b,Metadata:&PodSandboxMetadata{Name:kube-scheduler-default-k8s-diff-port-243449,Uid:c5688fa5732dad3a9738f9b149e2c05f,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1726337352480586517,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name:
POD,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-243449,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c5688fa5732dad3a9738f9b149e2c05f,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: c5688fa5732dad3a9738f9b149e2c05f,kubernetes.io/config.seen: 2024-09-14T18:09:11.981304306Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:662103c157493ef87ee240553f659322bef8401e12abbe9b1c5dc044a5a79696,Metadata:&PodSandboxMetadata{Name:kube-apiserver-default-k8s-diff-port-243449,Uid:7c181fee58e194ba1e69efe4c4fb4841,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1726337352474259963,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-243449,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c181fee58e194ba1e69efe4c4fb4841,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-ad
dress.endpoint: 192.168.61.38:8444,kubernetes.io/config.hash: 7c181fee58e194ba1e69efe4c4fb4841,kubernetes.io/config.seen: 2024-09-14T18:09:11.981297692Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=577f6751-c91b-4502-9d2e-668b4c5c73df name=/runtime.v1.RuntimeService/ListPodSandbox
	Sep 14 18:22:46 default-k8s-diff-port-243449 crio[695]: time="2024-09-14 18:22:46.055438383Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:&ContainerStateValue{State:CONTAINER_RUNNING,},PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=db235577-f80b-4e5f-a0c1-d4c3b1b3f202 name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 18:22:46 default-k8s-diff-port-243449 crio[695]: time="2024-09-14 18:22:46.056151832Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=db235577-f80b-4e5f-a0c1-d4c3b1b3f202 name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 18:22:46 default-k8s-diff-port-243449 crio[695]: time="2024-09-14 18:22:46.057173301Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:be0aa9c1761410495159b478552996da9670b728a9efd2985ec5cad6759e8277,PodSandboxId:e97ff06204d25546f83e01f0e32e0515cd39aacd8dfb182c7333fcb548a8dc63,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726337388273731160,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2e814601-a19a-4848-bed5-d9a29ffb3b5d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d38bdc2036d51c8e1266f8ec9d67b896ac39334fc9230073cc9692b6d4cc4ba8,PodSandboxId:5f11f2a59686989219bea6d342aaa6b2066beaaa8b9fb1012ef3accf0321c763,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1726337368646149694,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: fd23d806-4c0f-41fe-b5d0-4f5ee2f395f2,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02a31bf75666cfcbe85dcb42259279071420c3b26de20d6d667bcd8ffef77d86,PodSandboxId:65ce16275efd0d0f66d68f53237ee609f6658ca06ec7819baf35dd81d6aa6f8b,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726337365153990856,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-8v8s7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 896b4fde-d17e-43a3-b7c8-b710e2e70e2c,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\
"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a5c3b65e96ba854d6ceb4a824ba5076d43cff0b7a1978617b69271d5b88cca4d,PodSandboxId:eafcf1e3a206737a4857e2484820954af00cac8e773a41f582b3a0947901d38d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726337357456834716,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gbkqm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4308aacf-e
a0a-4bba-8598-85ffaf959b7e,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7fb6567a7b9f36f21430774a159db944e9060eac3c8fdf9abc50b9c56a2b0377,PodSandboxId:24d93e1abe22063f7589090dc366060f160ca6781207e1f464897e6cc966085d,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726337353697170593,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-243449,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4de6eba14fda99aaa4a144ae5e6d52ec,},Ann
otations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c532e45713d01c37f899bd4c9308e6c7fceb95accae13da1549ffb195e44bc4,PodSandboxId:662103c157493ef87ee240553f659322bef8401e12abbe9b1c5dc044a5a79696,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726337353689069860,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-243449,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c181fee58e194ba1e69efe4c4fb4841,},Annot
ations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:09627c963da76d1a34f891fa3edf6c8c82f652f7308541d6f9083f5888f4bf94,PodSandboxId:2d16ffab3061ac3b2945eb2607e6a8cec9877fab622b4a7a2da444779c004106,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726337353703011191,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-243449,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e467e9fb657a0ca
4b355d6e3b1e32a7d,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a390e6c0153550b206976ab4e172cf3236aac7e9e5671c332d8c0f8c4e567e0b,PodSandboxId:f49c613c905737464d0e7690cb4171b24b253b5b11431a7908323c5b0e0f3a9b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726337353661910592,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-243449,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c5688fa5732dad3a9
738f9b149e2c05f,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=db235577-f80b-4e5f-a0c1-d4c3b1b3f202 name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 18:22:46 default-k8s-diff-port-243449 crio[695]: time="2024-09-14 18:22:46.080849320Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=39653828-0d0d-4934-8d22-41209f60e4ca name=/runtime.v1.RuntimeService/Version
	Sep 14 18:22:46 default-k8s-diff-port-243449 crio[695]: time="2024-09-14 18:22:46.080926520Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=39653828-0d0d-4934-8d22-41209f60e4ca name=/runtime.v1.RuntimeService/Version
	Sep 14 18:22:46 default-k8s-diff-port-243449 crio[695]: time="2024-09-14 18:22:46.081744331Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c8b18abc-b4f0-4e02-a5d6-7252f2d038a3 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 14 18:22:46 default-k8s-diff-port-243449 crio[695]: time="2024-09-14 18:22:46.082142757Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726338166082121319,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c8b18abc-b4f0-4e02-a5d6-7252f2d038a3 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 14 18:22:46 default-k8s-diff-port-243449 crio[695]: time="2024-09-14 18:22:46.082587266Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1730e7ef-38dd-421a-ab09-ad09a05406f6 name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 18:22:46 default-k8s-diff-port-243449 crio[695]: time="2024-09-14 18:22:46.082636810Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1730e7ef-38dd-421a-ab09-ad09a05406f6 name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 18:22:46 default-k8s-diff-port-243449 crio[695]: time="2024-09-14 18:22:46.082867903Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:be0aa9c1761410495159b478552996da9670b728a9efd2985ec5cad6759e8277,PodSandboxId:e97ff06204d25546f83e01f0e32e0515cd39aacd8dfb182c7333fcb548a8dc63,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726337388273731160,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2e814601-a19a-4848-bed5-d9a29ffb3b5d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d38bdc2036d51c8e1266f8ec9d67b896ac39334fc9230073cc9692b6d4cc4ba8,PodSandboxId:5f11f2a59686989219bea6d342aaa6b2066beaaa8b9fb1012ef3accf0321c763,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1726337368646149694,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: fd23d806-4c0f-41fe-b5d0-4f5ee2f395f2,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02a31bf75666cfcbe85dcb42259279071420c3b26de20d6d667bcd8ffef77d86,PodSandboxId:65ce16275efd0d0f66d68f53237ee609f6658ca06ec7819baf35dd81d6aa6f8b,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726337365153990856,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-8v8s7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 896b4fde-d17e-43a3-b7c8-b710e2e70e2c,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\
"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b33f92ef722c8a6bde80bdbd3ff62a9b4b31f0bf548eec9aaadd4593a101017e,PodSandboxId:e97ff06204d25546f83e01f0e32e0515cd39aacd8dfb182c7333fcb548a8dc63,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726337357428003930,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 2e814601-a19a-4848-bed5-d9a29ffb3b5d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a5c3b65e96ba854d6ceb4a824ba5076d43cff0b7a1978617b69271d5b88cca4d,PodSandboxId:eafcf1e3a206737a4857e2484820954af00cac8e773a41f582b3a0947901d38d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726337357456834716,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gbkqm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4308aacf-ea0a-4bba-8598
-85ffaf959b7e,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7fb6567a7b9f36f21430774a159db944e9060eac3c8fdf9abc50b9c56a2b0377,PodSandboxId:24d93e1abe22063f7589090dc366060f160ca6781207e1f464897e6cc966085d,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726337353697170593,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-243449,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4de6eba14fda99aaa4a144ae5e6d52ec,},Annotations:map[
string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c532e45713d01c37f899bd4c9308e6c7fceb95accae13da1549ffb195e44bc4,PodSandboxId:662103c157493ef87ee240553f659322bef8401e12abbe9b1c5dc044a5a79696,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726337353689069860,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-243449,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c181fee58e194ba1e69efe4c4fb4841,},Annotations:map[st
ring]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:09627c963da76d1a34f891fa3edf6c8c82f652f7308541d6f9083f5888f4bf94,PodSandboxId:2d16ffab3061ac3b2945eb2607e6a8cec9877fab622b4a7a2da444779c004106,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726337353703011191,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-243449,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e467e9fb657a0ca4b355d6e3b1e3
2a7d,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a390e6c0153550b206976ab4e172cf3236aac7e9e5671c332d8c0f8c4e567e0b,PodSandboxId:f49c613c905737464d0e7690cb4171b24b253b5b11431a7908323c5b0e0f3a9b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726337353661910592,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-243449,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c5688fa5732dad3a9738f9b149e2c0
5f,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=1730e7ef-38dd-421a-ab09-ad09a05406f6 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	be0aa9c176141       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      12 minutes ago      Running             storage-provisioner       2                   e97ff06204d25       storage-provisioner
	d38bdc2036d51       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   13 minutes ago      Running             busybox                   1                   5f11f2a596869       busybox
	02a31bf75666c       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      13 minutes ago      Running             coredns                   1                   65ce16275efd0       coredns-7c65d6cfc9-8v8s7
	a5c3b65e96ba8       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      13 minutes ago      Running             kube-proxy                1                   eafcf1e3a2067       kube-proxy-gbkqm
	b33f92ef722c8       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      13 minutes ago      Exited              storage-provisioner       1                   e97ff06204d25       storage-provisioner
	09627c963da76       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      13 minutes ago      Running             kube-controller-manager   1                   2d16ffab3061a       kube-controller-manager-default-k8s-diff-port-243449
	7fb6567a7b9f3       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      13 minutes ago      Running             etcd                      1                   24d93e1abe220       etcd-default-k8s-diff-port-243449
	6c532e45713d0       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      13 minutes ago      Running             kube-apiserver            1                   662103c157493       kube-apiserver-default-k8s-diff-port-243449
	a390e6c015355       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      13 minutes ago      Running             kube-scheduler            1                   f49c613c90573       kube-scheduler-default-k8s-diff-port-243449
	
	
	==> coredns [02a31bf75666cfcbe85dcb42259279071420c3b26de20d6d667bcd8ffef77d86] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 7996ca7cabdb2fd3e37b0463c78d5a492f8d30690ee66a90ae7ff24c50d9d936a24d239b3a5946771521ff70c09a796ffaf6ef8abe5753fd1ad5af38b6cdbb7f
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:49398 - 34282 "HINFO IN 2491328004879093116.776769339588687849. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.017624764s
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-243449
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-243449
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fbeeb744274463b05401c917e5ab21bbaf5ef95a
	                    minikube.k8s.io/name=default-k8s-diff-port-243449
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_14T18_03_45_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 14 Sep 2024 18:03:42 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-243449
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 14 Sep 2024 18:22:41 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 14 Sep 2024 18:19:57 +0000   Sat, 14 Sep 2024 18:03:40 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 14 Sep 2024 18:19:57 +0000   Sat, 14 Sep 2024 18:03:40 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 14 Sep 2024 18:19:57 +0000   Sat, 14 Sep 2024 18:03:40 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 14 Sep 2024 18:19:57 +0000   Sat, 14 Sep 2024 18:09:26 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.38
	  Hostname:    default-k8s-diff-port-243449
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 fd101a054f1f4ca78ef4db25ca66f4da
	  System UUID:                fd101a05-4f1f-4ca7-8ef4-db25ca66f4da
	  Boot ID:                    12942388-bced-4bfe-8a04-b38a566e7b58
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 coredns-7c65d6cfc9-8v8s7                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     18m
	  kube-system                 etcd-default-k8s-diff-port-243449                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         19m
	  kube-system                 kube-apiserver-default-k8s-diff-port-243449             250m (12%)    0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-controller-manager-default-k8s-diff-port-243449    200m (10%)    0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-proxy-gbkqm                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 kube-scheduler-default-k8s-diff-port-243449             100m (5%)     0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 metrics-server-6867b74b74-7v8dr                         100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         18m
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 18m                kube-proxy       
	  Normal  Starting                 13m                kube-proxy       
	  Normal  NodeHasSufficientPID     19m (x7 over 19m)  kubelet          Node default-k8s-diff-port-243449 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    19m (x8 over 19m)  kubelet          Node default-k8s-diff-port-243449 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  19m (x8 over 19m)  kubelet          Node default-k8s-diff-port-243449 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  19m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 19m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  19m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  19m                kubelet          Node default-k8s-diff-port-243449 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    19m                kubelet          Node default-k8s-diff-port-243449 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     19m                kubelet          Node default-k8s-diff-port-243449 status is now: NodeHasSufficientPID
	  Normal  NodeReady                19m                kubelet          Node default-k8s-diff-port-243449 status is now: NodeReady
	  Normal  RegisteredNode           18m                node-controller  Node default-k8s-diff-port-243449 event: Registered Node default-k8s-diff-port-243449 in Controller
	  Normal  Starting                 13m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  13m (x8 over 13m)  kubelet          Node default-k8s-diff-port-243449 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m (x8 over 13m)  kubelet          Node default-k8s-diff-port-243449 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m (x7 over 13m)  kubelet          Node default-k8s-diff-port-243449 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           13m                node-controller  Node default-k8s-diff-port-243449 event: Registered Node default-k8s-diff-port-243449 in Controller
	
	
	==> dmesg <==
	[Sep14 18:08] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.055344] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.044178] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.979298] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.015370] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +2.349940] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Sep14 18:09] systemd-fstab-generator[618]: Ignoring "noauto" option for root device
	[  +0.135128] systemd-fstab-generator[630]: Ignoring "noauto" option for root device
	[  +0.181018] systemd-fstab-generator[644]: Ignoring "noauto" option for root device
	[  +0.132870] systemd-fstab-generator[656]: Ignoring "noauto" option for root device
	[  +0.302572] systemd-fstab-generator[686]: Ignoring "noauto" option for root device
	[  +4.117874] systemd-fstab-generator[775]: Ignoring "noauto" option for root device
	[  +2.013271] systemd-fstab-generator[895]: Ignoring "noauto" option for root device
	[  +0.068264] kauditd_printk_skb: 158 callbacks suppressed
	[  +5.536559] kauditd_printk_skb: 69 callbacks suppressed
	[  +4.405537] systemd-fstab-generator[1518]: Ignoring "noauto" option for root device
	[  +1.369927] kauditd_printk_skb: 64 callbacks suppressed
	[  +5.557021] kauditd_printk_skb: 44 callbacks suppressed
	
	
	==> etcd [7fb6567a7b9f36f21430774a159db944e9060eac3c8fdf9abc50b9c56a2b0377] <==
	{"level":"info","ts":"2024-09-14T18:09:14.273157Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-14T18:09:14.285209Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-09-14T18:09:14.285258Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.61.38:2380"}
	{"level":"info","ts":"2024-09-14T18:09:14.285365Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.61.38:2380"}
	{"level":"info","ts":"2024-09-14T18:09:14.285725Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"a85cda6b4b3fcaa2","initial-advertise-peer-urls":["https://192.168.61.38:2380"],"listen-peer-urls":["https://192.168.61.38:2380"],"advertise-client-urls":["https://192.168.61.38:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.61.38:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-09-14T18:09:14.285765Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-09-14T18:09:15.171558Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a85cda6b4b3fcaa2 is starting a new election at term 2"}
	{"level":"info","ts":"2024-09-14T18:09:15.171633Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a85cda6b4b3fcaa2 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-09-14T18:09:15.171664Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a85cda6b4b3fcaa2 received MsgPreVoteResp from a85cda6b4b3fcaa2 at term 2"}
	{"level":"info","ts":"2024-09-14T18:09:15.171683Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a85cda6b4b3fcaa2 became candidate at term 3"}
	{"level":"info","ts":"2024-09-14T18:09:15.171691Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a85cda6b4b3fcaa2 received MsgVoteResp from a85cda6b4b3fcaa2 at term 3"}
	{"level":"info","ts":"2024-09-14T18:09:15.171703Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a85cda6b4b3fcaa2 became leader at term 3"}
	{"level":"info","ts":"2024-09-14T18:09:15.171714Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: a85cda6b4b3fcaa2 elected leader a85cda6b4b3fcaa2 at term 3"}
	{"level":"info","ts":"2024-09-14T18:09:15.174389Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"a85cda6b4b3fcaa2","local-member-attributes":"{Name:default-k8s-diff-port-243449 ClientURLs:[https://192.168.61.38:2379]}","request-path":"/0/members/a85cda6b4b3fcaa2/attributes","cluster-id":"ac52cafbc0494bf3","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-14T18:09:15.174598Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-14T18:09:15.174688Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-14T18:09:15.174702Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-14T18:09:15.174760Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-14T18:09:15.176120Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-14T18:09:15.177382Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.61.38:2379"}
	{"level":"info","ts":"2024-09-14T18:09:15.176140Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-14T18:09:15.178139Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-14T18:19:15.216882Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":853}
	{"level":"info","ts":"2024-09-14T18:19:15.226311Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":853,"took":"8.850309ms","hash":1517727158,"current-db-size-bytes":2691072,"current-db-size":"2.7 MB","current-db-size-in-use-bytes":2691072,"current-db-size-in-use":"2.7 MB"}
	{"level":"info","ts":"2024-09-14T18:19:15.226463Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1517727158,"revision":853,"compact-revision":-1}
	
	
	==> kernel <==
	 18:22:46 up 13 min,  0 users,  load average: 0.17, 0.19, 0.17
	Linux default-k8s-diff-port-243449 5.10.207 #1 SMP Sat Sep 14 07:35:46 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [6c532e45713d01c37f899bd4c9308e6c7fceb95accae13da1549ffb195e44bc4] <==
	W0914 18:19:17.464903       1 handler_proxy.go:99] no RequestInfo found in the context
	E0914 18:19:17.464983       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0914 18:19:17.466149       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0914 18:19:17.466252       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0914 18:20:17.467422       1 handler_proxy.go:99] no RequestInfo found in the context
	E0914 18:20:17.467487       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W0914 18:20:17.467429       1 handler_proxy.go:99] no RequestInfo found in the context
	E0914 18:20:17.467562       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0914 18:20:17.468761       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0914 18:20:17.468818       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0914 18:22:17.469213       1 handler_proxy.go:99] no RequestInfo found in the context
	E0914 18:22:17.469582       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W0914 18:22:17.469252       1 handler_proxy.go:99] no RequestInfo found in the context
	E0914 18:22:17.469790       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0914 18:22:17.470888       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0914 18:22:17.470928       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [09627c963da76d1a34f891fa3edf6c8c82f652f7308541d6f9083f5888f4bf94] <==
	E0914 18:17:22.135385       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0914 18:17:22.547947       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0914 18:17:52.141078       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0914 18:17:52.555470       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0914 18:18:22.147331       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0914 18:18:22.564939       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0914 18:18:52.153270       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0914 18:18:52.573244       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0914 18:19:22.158980       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0914 18:19:22.583975       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0914 18:19:52.165294       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0914 18:19:52.591741       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0914 18:19:57.950420       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="default-k8s-diff-port-243449"
	E0914 18:20:22.171852       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0914 18:20:22.599057       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0914 18:20:24.070682       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="215.735µs"
	I0914 18:20:39.069503       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="311.993µs"
	E0914 18:20:52.177773       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0914 18:20:52.609577       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0914 18:21:22.184095       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0914 18:21:22.618156       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0914 18:21:52.190497       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0914 18:21:52.625827       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0914 18:22:22.197884       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0914 18:22:22.632719       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [a5c3b65e96ba854d6ceb4a824ba5076d43cff0b7a1978617b69271d5b88cca4d] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0914 18:09:17.739938       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0914 18:09:17.749668       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.61.38"]
	E0914 18:09:17.749828       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0914 18:09:17.781011       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0914 18:09:17.781041       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0914 18:09:17.781064       1 server_linux.go:169] "Using iptables Proxier"
	I0914 18:09:17.783295       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0914 18:09:17.783654       1 server.go:483] "Version info" version="v1.31.1"
	I0914 18:09:17.783703       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0914 18:09:17.785052       1 config.go:199] "Starting service config controller"
	I0914 18:09:17.785106       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0914 18:09:17.785168       1 config.go:105] "Starting endpoint slice config controller"
	I0914 18:09:17.785187       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0914 18:09:17.785776       1 config.go:328] "Starting node config controller"
	I0914 18:09:17.785918       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0914 18:09:17.885916       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0914 18:09:17.885952       1 shared_informer.go:320] Caches are synced for service config
	I0914 18:09:17.885966       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [a390e6c0153550b206976ab4e172cf3236aac7e9e5671c332d8c0f8c4e567e0b] <==
	I0914 18:09:14.630654       1 serving.go:386] Generated self-signed cert in-memory
	W0914 18:09:16.426610       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0914 18:09:16.426710       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0914 18:09:16.426746       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0914 18:09:16.426817       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0914 18:09:16.491673       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.1"
	I0914 18:09:16.491723       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0914 18:09:16.499152       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0914 18:09:16.499320       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0914 18:09:16.502992       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0914 18:09:16.502407       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0914 18:09:16.607208       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 14 18:21:34 default-k8s-diff-port-243449 kubelet[902]: E0914 18:21:34.054396     902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-7v8dr" podUID="90be95af-c779-4b31-b261-2c4020a34280"
	Sep 14 18:21:42 default-k8s-diff-port-243449 kubelet[902]: E0914 18:21:42.248179     902 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726338102247743969,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 18:21:42 default-k8s-diff-port-243449 kubelet[902]: E0914 18:21:42.248526     902 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726338102247743969,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 18:21:45 default-k8s-diff-port-243449 kubelet[902]: E0914 18:21:45.053698     902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-7v8dr" podUID="90be95af-c779-4b31-b261-2c4020a34280"
	Sep 14 18:21:52 default-k8s-diff-port-243449 kubelet[902]: E0914 18:21:52.250487     902 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726338112250090615,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 18:21:52 default-k8s-diff-port-243449 kubelet[902]: E0914 18:21:52.250526     902 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726338112250090615,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 18:22:00 default-k8s-diff-port-243449 kubelet[902]: E0914 18:22:00.055496     902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-7v8dr" podUID="90be95af-c779-4b31-b261-2c4020a34280"
	Sep 14 18:22:02 default-k8s-diff-port-243449 kubelet[902]: E0914 18:22:02.252413     902 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726338122251908226,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 18:22:02 default-k8s-diff-port-243449 kubelet[902]: E0914 18:22:02.252460     902 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726338122251908226,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 18:22:11 default-k8s-diff-port-243449 kubelet[902]: E0914 18:22:11.053925     902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-7v8dr" podUID="90be95af-c779-4b31-b261-2c4020a34280"
	Sep 14 18:22:12 default-k8s-diff-port-243449 kubelet[902]: E0914 18:22:12.067436     902 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 14 18:22:12 default-k8s-diff-port-243449 kubelet[902]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 14 18:22:12 default-k8s-diff-port-243449 kubelet[902]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 14 18:22:12 default-k8s-diff-port-243449 kubelet[902]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 14 18:22:12 default-k8s-diff-port-243449 kubelet[902]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 14 18:22:12 default-k8s-diff-port-243449 kubelet[902]: E0914 18:22:12.254638     902 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726338132254269022,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 18:22:12 default-k8s-diff-port-243449 kubelet[902]: E0914 18:22:12.254685     902 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726338132254269022,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 18:22:22 default-k8s-diff-port-243449 kubelet[902]: E0914 18:22:22.256965     902 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726338142256507680,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 18:22:22 default-k8s-diff-port-243449 kubelet[902]: E0914 18:22:22.257012     902 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726338142256507680,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 18:22:23 default-k8s-diff-port-243449 kubelet[902]: E0914 18:22:23.055112     902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-7v8dr" podUID="90be95af-c779-4b31-b261-2c4020a34280"
	Sep 14 18:22:32 default-k8s-diff-port-243449 kubelet[902]: E0914 18:22:32.260242     902 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726338152259632207,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 18:22:32 default-k8s-diff-port-243449 kubelet[902]: E0914 18:22:32.260303     902 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726338152259632207,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 18:22:38 default-k8s-diff-port-243449 kubelet[902]: E0914 18:22:38.055600     902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-7v8dr" podUID="90be95af-c779-4b31-b261-2c4020a34280"
	Sep 14 18:22:42 default-k8s-diff-port-243449 kubelet[902]: E0914 18:22:42.262247     902 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726338162261784200,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 18:22:42 default-k8s-diff-port-243449 kubelet[902]: E0914 18:22:42.262287     902 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726338162261784200,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [b33f92ef722c8a6bde80bdbd3ff62a9b4b31f0bf548eec9aaadd4593a101017e] <==
	I0914 18:09:17.674762       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0914 18:09:47.682923       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [be0aa9c1761410495159b478552996da9670b728a9efd2985ec5cad6759e8277] <==
	I0914 18:09:48.388316       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0914 18:09:48.403689       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0914 18:09:48.405275       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0914 18:10:05.807561       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0914 18:10:05.807905       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-243449_69ac6bff-7150-461e-8193-24eb67d1af3a!
	I0914 18:10:05.810676       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"66f83808-3ad1-43c7-89ed-fe5345d634d8", APIVersion:"v1", ResourceVersion:"636", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-243449_69ac6bff-7150-461e-8193-24eb67d1af3a became leader
	I0914 18:10:05.910464       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-243449_69ac6bff-7150-461e-8193-24eb67d1af3a!
	

                                                
                                                
-- /stdout --
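The kubelet log captured above repeatedly reports "failed to get HasDedicatedImageFs: missing image stats" even though an ImageFsInfoResponse with a mountpoint is present. As an illustrative sketch only (not part of the captured output; the binary path and profile name are reused from the log above), one way to see what the CRI runtime itself reports for image filesystem usage is to query it from inside the node:

	# ask CRI-O for its image filesystem stats over the CRI API
	out/minikube-linux-amd64 -p default-k8s-diff-port-243449 ssh -- sudo crictl imagefsinfo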
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-243449 -n default-k8s-diff-port-243449
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-diff-port-243449 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-6867b74b74-7v8dr
helpers_test.go:274: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context default-k8s-diff-port-243449 describe pod metrics-server-6867b74b74-7v8dr
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-243449 describe pod metrics-server-6867b74b74-7v8dr: exit status 1 (73.043394ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-6867b74b74-7v8dr" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context default-k8s-diff-port-243449 describe pod metrics-server-6867b74b74-7v8dr: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (544.30s)
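The failure above is a timeout while polling for pods labelled k8s-app=kubernetes-dashboard. As a rough manual equivalent (an illustrative sketch only, reusing the profile context named in the logs; these commands are not part of the test harness output), one could inspect the dashboard namespace directly:

	# list the pods the test was waiting for
	kubectl --context default-k8s-diff-port-243449 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard -o wide
	# recent events often show why a deployment never produced pods
	kubectl --context default-k8s-diff-port-243449 -n kubernetes-dashboard get events --sort-by=.lastTimestamp
	# confirm the dashboard addon is actually enabled for the profile
	out/minikube-linux-amd64 -p default-k8s-diff-port-243449 addons list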

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (544.44s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E0914 18:15:28.017247   16016 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/functional-764671/client.crt: no such file or directory" logger="UnhandledError"
E0914 18:16:45.625942   16016 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/addons-996992/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-168587 -n no-preload-168587
start_stop_delete_test.go:274: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-09-14 18:23:47.374101186 +0000 UTC m=+6001.895835171
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-168587 -n no-preload-168587
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-168587 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p no-preload-168587 logs -n 25: (2.252712632s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p stopped-upgrade-319416                              | stopped-upgrade-319416       | jenkins | v1.34.0 | 14 Sep 24 18:00 UTC | 14 Sep 24 18:00 UTC |
	| start   | -p embed-certs-044534                                  | embed-certs-044534           | jenkins | v1.34.0 | 14 Sep 24 18:00 UTC | 14 Sep 24 18:01 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-168587             | no-preload-168587            | jenkins | v1.34.0 | 14 Sep 24 18:00 UTC | 14 Sep 24 18:00 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-168587                                   | no-preload-168587            | jenkins | v1.34.0 | 14 Sep 24 18:00 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-044534            | embed-certs-044534           | jenkins | v1.34.0 | 14 Sep 24 18:01 UTC | 14 Sep 24 18:01 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-044534                                  | embed-certs-044534           | jenkins | v1.34.0 | 14 Sep 24 18:01 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-470019                           | kubernetes-upgrade-470019    | jenkins | v1.34.0 | 14 Sep 24 18:02 UTC | 14 Sep 24 18:02 UTC |
	| start   | -p kubernetes-upgrade-470019                           | kubernetes-upgrade-470019    | jenkins | v1.34.0 | 14 Sep 24 18:02 UTC | 14 Sep 24 18:02 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-470019                           | kubernetes-upgrade-470019    | jenkins | v1.34.0 | 14 Sep 24 18:02 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-470019                           | kubernetes-upgrade-470019    | jenkins | v1.34.0 | 14 Sep 24 18:02 UTC | 14 Sep 24 18:02 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-470019                           | kubernetes-upgrade-470019    | jenkins | v1.34.0 | 14 Sep 24 18:03 UTC | 14 Sep 24 18:03 UTC |
	| delete  | -p                                                     | disable-driver-mounts-444413 | jenkins | v1.34.0 | 14 Sep 24 18:03 UTC | 14 Sep 24 18:03 UTC |
	|         | disable-driver-mounts-444413                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-243449 | jenkins | v1.34.0 | 14 Sep 24 18:03 UTC | 14 Sep 24 18:03 UTC |
	|         | default-k8s-diff-port-243449                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-556121        | old-k8s-version-556121       | jenkins | v1.34.0 | 14 Sep 24 18:03 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-168587                  | no-preload-168587            | jenkins | v1.34.0 | 14 Sep 24 18:03 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-168587                                   | no-preload-168587            | jenkins | v1.34.0 | 14 Sep 24 18:03 UTC | 14 Sep 24 18:14 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-044534                 | embed-certs-044534           | jenkins | v1.34.0 | 14 Sep 24 18:03 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-044534                                  | embed-certs-044534           | jenkins | v1.34.0 | 14 Sep 24 18:04 UTC | 14 Sep 24 18:13 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-243449  | default-k8s-diff-port-243449 | jenkins | v1.34.0 | 14 Sep 24 18:04 UTC | 14 Sep 24 18:04 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-243449 | jenkins | v1.34.0 | 14 Sep 24 18:04 UTC |                     |
	|         | default-k8s-diff-port-243449                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-556121                              | old-k8s-version-556121       | jenkins | v1.34.0 | 14 Sep 24 18:05 UTC | 14 Sep 24 18:05 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-556121             | old-k8s-version-556121       | jenkins | v1.34.0 | 14 Sep 24 18:05 UTC | 14 Sep 24 18:05 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-556121                              | old-k8s-version-556121       | jenkins | v1.34.0 | 14 Sep 24 18:05 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-243449       | default-k8s-diff-port-243449 | jenkins | v1.34.0 | 14 Sep 24 18:06 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-243449 | jenkins | v1.34.0 | 14 Sep 24 18:06 UTC | 14 Sep 24 18:13 UTC |
	|         | default-k8s-diff-port-243449                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/14 18:06:40
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0914 18:06:40.299903   63448 out.go:345] Setting OutFile to fd 1 ...
	I0914 18:06:40.300039   63448 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 18:06:40.300049   63448 out.go:358] Setting ErrFile to fd 2...
	I0914 18:06:40.300054   63448 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 18:06:40.300240   63448 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19643-8806/.minikube/bin
	I0914 18:06:40.300801   63448 out.go:352] Setting JSON to false
	I0914 18:06:40.301779   63448 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":6544,"bootTime":1726330656,"procs":200,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0914 18:06:40.301879   63448 start.go:139] virtualization: kvm guest
	I0914 18:06:40.303963   63448 out.go:177] * [default-k8s-diff-port-243449] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0914 18:06:40.305394   63448 out.go:177]   - MINIKUBE_LOCATION=19643
	I0914 18:06:40.305429   63448 notify.go:220] Checking for updates...
	I0914 18:06:40.308148   63448 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0914 18:06:40.309226   63448 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19643-8806/kubeconfig
	I0914 18:06:40.310360   63448 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19643-8806/.minikube
	I0914 18:06:40.311509   63448 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0914 18:06:40.312543   63448 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0914 18:06:40.314418   63448 config.go:182] Loaded profile config "default-k8s-diff-port-243449": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0914 18:06:40.315063   63448 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 18:06:40.315154   63448 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 18:06:40.330033   63448 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44655
	I0914 18:06:40.330502   63448 main.go:141] libmachine: () Calling .GetVersion
	I0914 18:06:40.331014   63448 main.go:141] libmachine: Using API Version  1
	I0914 18:06:40.331035   63448 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 18:06:40.331372   63448 main.go:141] libmachine: () Calling .GetMachineName
	I0914 18:06:40.331519   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .DriverName
	I0914 18:06:40.331729   63448 driver.go:394] Setting default libvirt URI to qemu:///system
	I0914 18:06:40.332043   63448 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 18:06:40.332089   63448 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 18:06:40.346598   63448 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37837
	I0914 18:06:40.347021   63448 main.go:141] libmachine: () Calling .GetVersion
	I0914 18:06:40.347501   63448 main.go:141] libmachine: Using API Version  1
	I0914 18:06:40.347536   63448 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 18:06:40.347863   63448 main.go:141] libmachine: () Calling .GetMachineName
	I0914 18:06:40.348042   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .DriverName
	I0914 18:06:40.380416   63448 out.go:177] * Using the kvm2 driver based on existing profile
	I0914 18:06:40.381578   63448 start.go:297] selected driver: kvm2
	I0914 18:06:40.381589   63448 start.go:901] validating driver "kvm2" against &{Name:default-k8s-diff-port-243449 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19643/minikube-v1.34.0-1726281733-19643-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726281268-19643@sha256:cce8be0c1ac4e3d852132008ef1cc1dcf5b79f708d025db83f146ae65db32e8e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-243449 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.38 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0914 18:06:40.381693   63448 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0914 18:06:40.382390   63448 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 18:06:40.382478   63448 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19643-8806/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0914 18:06:40.397521   63448 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0914 18:06:40.397921   63448 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0914 18:06:40.397959   63448 cni.go:84] Creating CNI manager for ""
	I0914 18:06:40.398002   63448 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0914 18:06:40.398040   63448 start.go:340] cluster config:
	{Name:default-k8s-diff-port-243449 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19643/minikube-v1.34.0-1726281733-19643-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726281268-19643@sha256:cce8be0c1ac4e3d852132008ef1cc1dcf5b79f708d025db83f146ae65db32e8e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-243449 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.38 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0914 18:06:40.398145   63448 iso.go:125] acquiring lock: {Name:mk538f7a4abc7956f82511183088cbfac1e66ebb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 18:06:40.399920   63448 out.go:177] * Starting "default-k8s-diff-port-243449" primary control-plane node in "default-k8s-diff-port-243449" cluster
	I0914 18:06:39.170425   62207 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0914 18:06:40.400913   63448 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0914 18:06:40.400954   63448 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19643-8806/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0914 18:06:40.400966   63448 cache.go:56] Caching tarball of preloaded images
	I0914 18:06:40.401038   63448 preload.go:172] Found /home/jenkins/minikube-integration/19643-8806/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0914 18:06:40.401055   63448 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0914 18:06:40.401185   63448 profile.go:143] Saving config to /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/default-k8s-diff-port-243449/config.json ...
	I0914 18:06:40.401421   63448 start.go:360] acquireMachinesLock for default-k8s-diff-port-243449: {Name:mk26748ac63472c3f8b6f3848c12e76160c2970c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0914 18:06:45.250426   62207 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0914 18:06:48.322531   62207 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0914 18:06:54.402441   62207 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0914 18:06:57.474440   62207 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0914 18:07:03.554541   62207 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0914 18:07:06.626472   62207 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0914 18:07:12.706430   62207 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0914 18:07:15.778448   62207 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0914 18:07:21.858453   62207 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0914 18:07:24.930473   62207 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0914 18:07:31.010432   62207 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0914 18:07:34.082423   62207 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0914 18:07:40.162417   62207 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0914 18:07:43.234501   62207 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0914 18:07:49.314533   62207 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0914 18:07:52.386453   62207 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0914 18:07:58.466444   62207 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0914 18:08:01.538476   62207 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0914 18:08:04.546206   62554 start.go:364] duration metric: took 3m59.524513317s to acquireMachinesLock for "embed-certs-044534"
	I0914 18:08:04.546263   62554 start.go:96] Skipping create...Using existing machine configuration
	I0914 18:08:04.546275   62554 fix.go:54] fixHost starting: 
	I0914 18:08:04.546585   62554 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 18:08:04.546636   62554 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 18:08:04.562182   62554 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46087
	I0914 18:08:04.562704   62554 main.go:141] libmachine: () Calling .GetVersion
	I0914 18:08:04.563264   62554 main.go:141] libmachine: Using API Version  1
	I0914 18:08:04.563300   62554 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 18:08:04.563714   62554 main.go:141] libmachine: () Calling .GetMachineName
	I0914 18:08:04.563947   62554 main.go:141] libmachine: (embed-certs-044534) Calling .DriverName
	I0914 18:08:04.564131   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetState
	I0914 18:08:04.566043   62554 fix.go:112] recreateIfNeeded on embed-certs-044534: state=Stopped err=<nil>
	I0914 18:08:04.566073   62554 main.go:141] libmachine: (embed-certs-044534) Calling .DriverName
	W0914 18:08:04.566289   62554 fix.go:138] unexpected machine state, will restart: <nil>
	I0914 18:08:04.567993   62554 out.go:177] * Restarting existing kvm2 VM for "embed-certs-044534" ...
	I0914 18:08:04.570182   62554 main.go:141] libmachine: (embed-certs-044534) Calling .Start
	I0914 18:08:04.570431   62554 main.go:141] libmachine: (embed-certs-044534) Ensuring networks are active...
	I0914 18:08:04.571374   62554 main.go:141] libmachine: (embed-certs-044534) Ensuring network default is active
	I0914 18:08:04.571748   62554 main.go:141] libmachine: (embed-certs-044534) Ensuring network mk-embed-certs-044534 is active
	I0914 18:08:04.572124   62554 main.go:141] libmachine: (embed-certs-044534) Getting domain xml...
	I0914 18:08:04.572852   62554 main.go:141] libmachine: (embed-certs-044534) Creating domain...
	I0914 18:08:04.540924   62207 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0914 18:08:04.540957   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetMachineName
	I0914 18:08:04.541310   62207 buildroot.go:166] provisioning hostname "no-preload-168587"
	I0914 18:08:04.541335   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetMachineName
	I0914 18:08:04.541586   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetSSHHostname
	I0914 18:08:04.546055   62207 machine.go:96] duration metric: took 4m34.63489942s to provisionDockerMachine
	I0914 18:08:04.546096   62207 fix.go:56] duration metric: took 4m34.662932355s for fixHost
	I0914 18:08:04.546102   62207 start.go:83] releasing machines lock for "no-preload-168587", held for 4m34.66297244s
	W0914 18:08:04.546122   62207 start.go:714] error starting host: provision: host is not running
	W0914 18:08:04.546220   62207 out.go:270] ! StartHost failed, but will try again: provision: host is not running
	I0914 18:08:04.546231   62207 start.go:729] Will try again in 5 seconds ...
	I0914 18:08:05.812076   62554 main.go:141] libmachine: (embed-certs-044534) Waiting to get IP...
	I0914 18:08:05.812955   62554 main.go:141] libmachine: (embed-certs-044534) DBG | domain embed-certs-044534 has defined MAC address 52:54:00:f7:d3:8e in network mk-embed-certs-044534
	I0914 18:08:05.813302   62554 main.go:141] libmachine: (embed-certs-044534) DBG | unable to find current IP address of domain embed-certs-044534 in network mk-embed-certs-044534
	I0914 18:08:05.813380   62554 main.go:141] libmachine: (embed-certs-044534) DBG | I0914 18:08:05.813279   63779 retry.go:31] will retry after 298.8389ms: waiting for machine to come up
	I0914 18:08:06.114130   62554 main.go:141] libmachine: (embed-certs-044534) DBG | domain embed-certs-044534 has defined MAC address 52:54:00:f7:d3:8e in network mk-embed-certs-044534
	I0914 18:08:06.114575   62554 main.go:141] libmachine: (embed-certs-044534) DBG | unable to find current IP address of domain embed-certs-044534 in network mk-embed-certs-044534
	I0914 18:08:06.114604   62554 main.go:141] libmachine: (embed-certs-044534) DBG | I0914 18:08:06.114530   63779 retry.go:31] will retry after 359.694721ms: waiting for machine to come up
	I0914 18:08:06.476183   62554 main.go:141] libmachine: (embed-certs-044534) DBG | domain embed-certs-044534 has defined MAC address 52:54:00:f7:d3:8e in network mk-embed-certs-044534
	I0914 18:08:06.476801   62554 main.go:141] libmachine: (embed-certs-044534) DBG | unable to find current IP address of domain embed-certs-044534 in network mk-embed-certs-044534
	I0914 18:08:06.476828   62554 main.go:141] libmachine: (embed-certs-044534) DBG | I0914 18:08:06.476745   63779 retry.go:31] will retry after 425.650219ms: waiting for machine to come up
	I0914 18:08:06.904358   62554 main.go:141] libmachine: (embed-certs-044534) DBG | domain embed-certs-044534 has defined MAC address 52:54:00:f7:d3:8e in network mk-embed-certs-044534
	I0914 18:08:06.904794   62554 main.go:141] libmachine: (embed-certs-044534) DBG | unable to find current IP address of domain embed-certs-044534 in network mk-embed-certs-044534
	I0914 18:08:06.904816   62554 main.go:141] libmachine: (embed-certs-044534) DBG | I0914 18:08:06.904749   63779 retry.go:31] will retry after 433.157325ms: waiting for machine to come up
	I0914 18:08:07.339139   62554 main.go:141] libmachine: (embed-certs-044534) DBG | domain embed-certs-044534 has defined MAC address 52:54:00:f7:d3:8e in network mk-embed-certs-044534
	I0914 18:08:07.339578   62554 main.go:141] libmachine: (embed-certs-044534) DBG | unable to find current IP address of domain embed-certs-044534 in network mk-embed-certs-044534
	I0914 18:08:07.339602   62554 main.go:141] libmachine: (embed-certs-044534) DBG | I0914 18:08:07.339512   63779 retry.go:31] will retry after 547.817102ms: waiting for machine to come up
	I0914 18:08:07.889390   62554 main.go:141] libmachine: (embed-certs-044534) DBG | domain embed-certs-044534 has defined MAC address 52:54:00:f7:d3:8e in network mk-embed-certs-044534
	I0914 18:08:07.889888   62554 main.go:141] libmachine: (embed-certs-044534) DBG | unable to find current IP address of domain embed-certs-044534 in network mk-embed-certs-044534
	I0914 18:08:07.889993   62554 main.go:141] libmachine: (embed-certs-044534) DBG | I0914 18:08:07.889820   63779 retry.go:31] will retry after 603.749753ms: waiting for machine to come up
	I0914 18:08:08.495673   62554 main.go:141] libmachine: (embed-certs-044534) DBG | domain embed-certs-044534 has defined MAC address 52:54:00:f7:d3:8e in network mk-embed-certs-044534
	I0914 18:08:08.496047   62554 main.go:141] libmachine: (embed-certs-044534) DBG | unable to find current IP address of domain embed-certs-044534 in network mk-embed-certs-044534
	I0914 18:08:08.496076   62554 main.go:141] libmachine: (embed-certs-044534) DBG | I0914 18:08:08.495995   63779 retry.go:31] will retry after 831.027535ms: waiting for machine to come up
	I0914 18:08:09.329209   62554 main.go:141] libmachine: (embed-certs-044534) DBG | domain embed-certs-044534 has defined MAC address 52:54:00:f7:d3:8e in network mk-embed-certs-044534
	I0914 18:08:09.329622   62554 main.go:141] libmachine: (embed-certs-044534) DBG | unable to find current IP address of domain embed-certs-044534 in network mk-embed-certs-044534
	I0914 18:08:09.329643   62554 main.go:141] libmachine: (embed-certs-044534) DBG | I0914 18:08:09.329591   63779 retry.go:31] will retry after 1.429850518s: waiting for machine to come up
	I0914 18:08:09.548738   62207 start.go:360] acquireMachinesLock for no-preload-168587: {Name:mk26748ac63472c3f8b6f3848c12e76160c2970c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0914 18:08:10.761510   62554 main.go:141] libmachine: (embed-certs-044534) DBG | domain embed-certs-044534 has defined MAC address 52:54:00:f7:d3:8e in network mk-embed-certs-044534
	I0914 18:08:10.761884   62554 main.go:141] libmachine: (embed-certs-044534) DBG | unable to find current IP address of domain embed-certs-044534 in network mk-embed-certs-044534
	I0914 18:08:10.761915   62554 main.go:141] libmachine: (embed-certs-044534) DBG | I0914 18:08:10.761839   63779 retry.go:31] will retry after 1.146619754s: waiting for machine to come up
	I0914 18:08:11.910130   62554 main.go:141] libmachine: (embed-certs-044534) DBG | domain embed-certs-044534 has defined MAC address 52:54:00:f7:d3:8e in network mk-embed-certs-044534
	I0914 18:08:11.910542   62554 main.go:141] libmachine: (embed-certs-044534) DBG | unable to find current IP address of domain embed-certs-044534 in network mk-embed-certs-044534
	I0914 18:08:11.910568   62554 main.go:141] libmachine: (embed-certs-044534) DBG | I0914 18:08:11.910500   63779 retry.go:31] will retry after 1.582382319s: waiting for machine to come up
	I0914 18:08:13.495352   62554 main.go:141] libmachine: (embed-certs-044534) DBG | domain embed-certs-044534 has defined MAC address 52:54:00:f7:d3:8e in network mk-embed-certs-044534
	I0914 18:08:13.495852   62554 main.go:141] libmachine: (embed-certs-044534) DBG | unable to find current IP address of domain embed-certs-044534 in network mk-embed-certs-044534
	I0914 18:08:13.495872   62554 main.go:141] libmachine: (embed-certs-044534) DBG | I0914 18:08:13.495808   63779 retry.go:31] will retry after 2.117717335s: waiting for machine to come up
	I0914 18:08:15.615461   62554 main.go:141] libmachine: (embed-certs-044534) DBG | domain embed-certs-044534 has defined MAC address 52:54:00:f7:d3:8e in network mk-embed-certs-044534
	I0914 18:08:15.615896   62554 main.go:141] libmachine: (embed-certs-044534) DBG | unable to find current IP address of domain embed-certs-044534 in network mk-embed-certs-044534
	I0914 18:08:15.615918   62554 main.go:141] libmachine: (embed-certs-044534) DBG | I0914 18:08:15.615846   63779 retry.go:31] will retry after 3.071486865s: waiting for machine to come up
	I0914 18:08:18.691109   62554 main.go:141] libmachine: (embed-certs-044534) DBG | domain embed-certs-044534 has defined MAC address 52:54:00:f7:d3:8e in network mk-embed-certs-044534
	I0914 18:08:18.691572   62554 main.go:141] libmachine: (embed-certs-044534) DBG | unable to find current IP address of domain embed-certs-044534 in network mk-embed-certs-044534
	I0914 18:08:18.691605   62554 main.go:141] libmachine: (embed-certs-044534) DBG | I0914 18:08:18.691513   63779 retry.go:31] will retry after 4.250544955s: waiting for machine to come up
	I0914 18:08:24.143036   62996 start.go:364] duration metric: took 3m18.692107902s to acquireMachinesLock for "old-k8s-version-556121"
	I0914 18:08:24.143089   62996 start.go:96] Skipping create...Using existing machine configuration
	I0914 18:08:24.143094   62996 fix.go:54] fixHost starting: 
	I0914 18:08:24.143474   62996 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 18:08:24.143527   62996 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 18:08:24.160421   62996 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44345
	I0914 18:08:24.160864   62996 main.go:141] libmachine: () Calling .GetVersion
	I0914 18:08:24.161467   62996 main.go:141] libmachine: Using API Version  1
	I0914 18:08:24.161495   62996 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 18:08:24.161913   62996 main.go:141] libmachine: () Calling .GetMachineName
	I0914 18:08:24.162137   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .DriverName
	I0914 18:08:24.162322   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetState
	I0914 18:08:24.163974   62996 fix.go:112] recreateIfNeeded on old-k8s-version-556121: state=Stopped err=<nil>
	I0914 18:08:24.164020   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .DriverName
	W0914 18:08:24.164197   62996 fix.go:138] unexpected machine state, will restart: <nil>
	I0914 18:08:24.166624   62996 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-556121" ...
	I0914 18:08:22.946247   62554 main.go:141] libmachine: (embed-certs-044534) DBG | domain embed-certs-044534 has defined MAC address 52:54:00:f7:d3:8e in network mk-embed-certs-044534
	I0914 18:08:22.946662   62554 main.go:141] libmachine: (embed-certs-044534) DBG | domain embed-certs-044534 has current primary IP address 192.168.50.126 and MAC address 52:54:00:f7:d3:8e in network mk-embed-certs-044534
	I0914 18:08:22.946687   62554 main.go:141] libmachine: (embed-certs-044534) Found IP for machine: 192.168.50.126
	I0914 18:08:22.946700   62554 main.go:141] libmachine: (embed-certs-044534) Reserving static IP address...
	I0914 18:08:22.947052   62554 main.go:141] libmachine: (embed-certs-044534) DBG | found host DHCP lease matching {name: "embed-certs-044534", mac: "52:54:00:f7:d3:8e", ip: "192.168.50.126"} in network mk-embed-certs-044534: {Iface:virbr3 ExpiryTime:2024-09-14 19:00:16 +0000 UTC Type:0 Mac:52:54:00:f7:d3:8e Iaid: IPaddr:192.168.50.126 Prefix:24 Hostname:embed-certs-044534 Clientid:01:52:54:00:f7:d3:8e}
	I0914 18:08:22.947068   62554 main.go:141] libmachine: (embed-certs-044534) Reserved static IP address: 192.168.50.126
	I0914 18:08:22.947080   62554 main.go:141] libmachine: (embed-certs-044534) DBG | skip adding static IP to network mk-embed-certs-044534 - found existing host DHCP lease matching {name: "embed-certs-044534", mac: "52:54:00:f7:d3:8e", ip: "192.168.50.126"}
	I0914 18:08:22.947093   62554 main.go:141] libmachine: (embed-certs-044534) DBG | Getting to WaitForSSH function...
	I0914 18:08:22.947108   62554 main.go:141] libmachine: (embed-certs-044534) Waiting for SSH to be available...
	I0914 18:08:22.949354   62554 main.go:141] libmachine: (embed-certs-044534) DBG | domain embed-certs-044534 has defined MAC address 52:54:00:f7:d3:8e in network mk-embed-certs-044534
	I0914 18:08:22.949623   62554 main.go:141] libmachine: (embed-certs-044534) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:d3:8e", ip: ""} in network mk-embed-certs-044534: {Iface:virbr3 ExpiryTime:2024-09-14 19:00:16 +0000 UTC Type:0 Mac:52:54:00:f7:d3:8e Iaid: IPaddr:192.168.50.126 Prefix:24 Hostname:embed-certs-044534 Clientid:01:52:54:00:f7:d3:8e}
	I0914 18:08:22.949645   62554 main.go:141] libmachine: (embed-certs-044534) DBG | domain embed-certs-044534 has defined IP address 192.168.50.126 and MAC address 52:54:00:f7:d3:8e in network mk-embed-certs-044534
	I0914 18:08:22.949798   62554 main.go:141] libmachine: (embed-certs-044534) DBG | Using SSH client type: external
	I0914 18:08:22.949822   62554 main.go:141] libmachine: (embed-certs-044534) DBG | Using SSH private key: /home/jenkins/minikube-integration/19643-8806/.minikube/machines/embed-certs-044534/id_rsa (-rw-------)
	I0914 18:08:22.949886   62554 main.go:141] libmachine: (embed-certs-044534) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.126 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19643-8806/.minikube/machines/embed-certs-044534/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0914 18:08:22.949911   62554 main.go:141] libmachine: (embed-certs-044534) DBG | About to run SSH command:
	I0914 18:08:22.949926   62554 main.go:141] libmachine: (embed-certs-044534) DBG | exit 0
	I0914 18:08:23.074248   62554 main.go:141] libmachine: (embed-certs-044534) DBG | SSH cmd err, output: <nil>: 
	I0914 18:08:23.074559   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetConfigRaw
	I0914 18:08:23.075190   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetIP
	I0914 18:08:23.077682   62554 main.go:141] libmachine: (embed-certs-044534) DBG | domain embed-certs-044534 has defined MAC address 52:54:00:f7:d3:8e in network mk-embed-certs-044534
	I0914 18:08:23.078007   62554 main.go:141] libmachine: (embed-certs-044534) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:d3:8e", ip: ""} in network mk-embed-certs-044534: {Iface:virbr3 ExpiryTime:2024-09-14 19:00:16 +0000 UTC Type:0 Mac:52:54:00:f7:d3:8e Iaid: IPaddr:192.168.50.126 Prefix:24 Hostname:embed-certs-044534 Clientid:01:52:54:00:f7:d3:8e}
	I0914 18:08:23.078040   62554 main.go:141] libmachine: (embed-certs-044534) DBG | domain embed-certs-044534 has defined IP address 192.168.50.126 and MAC address 52:54:00:f7:d3:8e in network mk-embed-certs-044534
	I0914 18:08:23.078309   62554 profile.go:143] Saving config to /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/embed-certs-044534/config.json ...
	I0914 18:08:23.078494   62554 machine.go:93] provisionDockerMachine start ...
	I0914 18:08:23.078510   62554 main.go:141] libmachine: (embed-certs-044534) Calling .DriverName
	I0914 18:08:23.078723   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetSSHHostname
	I0914 18:08:23.081444   62554 main.go:141] libmachine: (embed-certs-044534) DBG | domain embed-certs-044534 has defined MAC address 52:54:00:f7:d3:8e in network mk-embed-certs-044534
	I0914 18:08:23.081846   62554 main.go:141] libmachine: (embed-certs-044534) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:d3:8e", ip: ""} in network mk-embed-certs-044534: {Iface:virbr3 ExpiryTime:2024-09-14 19:00:16 +0000 UTC Type:0 Mac:52:54:00:f7:d3:8e Iaid: IPaddr:192.168.50.126 Prefix:24 Hostname:embed-certs-044534 Clientid:01:52:54:00:f7:d3:8e}
	I0914 18:08:23.081891   62554 main.go:141] libmachine: (embed-certs-044534) DBG | domain embed-certs-044534 has defined IP address 192.168.50.126 and MAC address 52:54:00:f7:d3:8e in network mk-embed-certs-044534
	I0914 18:08:23.082026   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetSSHPort
	I0914 18:08:23.082209   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetSSHKeyPath
	I0914 18:08:23.082398   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetSSHKeyPath
	I0914 18:08:23.082573   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetSSHUsername
	I0914 18:08:23.082739   62554 main.go:141] libmachine: Using SSH client type: native
	I0914 18:08:23.082961   62554 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.50.126 22 <nil> <nil>}
	I0914 18:08:23.082984   62554 main.go:141] libmachine: About to run SSH command:
	hostname
	I0914 18:08:23.186143   62554 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0914 18:08:23.186193   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetMachineName
	I0914 18:08:23.186424   62554 buildroot.go:166] provisioning hostname "embed-certs-044534"
	I0914 18:08:23.186447   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetMachineName
	I0914 18:08:23.186622   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetSSHHostname
	I0914 18:08:23.189085   62554 main.go:141] libmachine: (embed-certs-044534) DBG | domain embed-certs-044534 has defined MAC address 52:54:00:f7:d3:8e in network mk-embed-certs-044534
	I0914 18:08:23.189453   62554 main.go:141] libmachine: (embed-certs-044534) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:d3:8e", ip: ""} in network mk-embed-certs-044534: {Iface:virbr3 ExpiryTime:2024-09-14 19:00:16 +0000 UTC Type:0 Mac:52:54:00:f7:d3:8e Iaid: IPaddr:192.168.50.126 Prefix:24 Hostname:embed-certs-044534 Clientid:01:52:54:00:f7:d3:8e}
	I0914 18:08:23.189482   62554 main.go:141] libmachine: (embed-certs-044534) DBG | domain embed-certs-044534 has defined IP address 192.168.50.126 and MAC address 52:54:00:f7:d3:8e in network mk-embed-certs-044534
	I0914 18:08:23.189615   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetSSHPort
	I0914 18:08:23.189802   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetSSHKeyPath
	I0914 18:08:23.190032   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetSSHKeyPath
	I0914 18:08:23.190168   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetSSHUsername
	I0914 18:08:23.190422   62554 main.go:141] libmachine: Using SSH client type: native
	I0914 18:08:23.190587   62554 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.50.126 22 <nil> <nil>}
	I0914 18:08:23.190601   62554 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-044534 && echo "embed-certs-044534" | sudo tee /etc/hostname
	I0914 18:08:23.307484   62554 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-044534
	
	I0914 18:08:23.307512   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetSSHHostname
	I0914 18:08:23.310220   62554 main.go:141] libmachine: (embed-certs-044534) DBG | domain embed-certs-044534 has defined MAC address 52:54:00:f7:d3:8e in network mk-embed-certs-044534
	I0914 18:08:23.310634   62554 main.go:141] libmachine: (embed-certs-044534) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:d3:8e", ip: ""} in network mk-embed-certs-044534: {Iface:virbr3 ExpiryTime:2024-09-14 19:00:16 +0000 UTC Type:0 Mac:52:54:00:f7:d3:8e Iaid: IPaddr:192.168.50.126 Prefix:24 Hostname:embed-certs-044534 Clientid:01:52:54:00:f7:d3:8e}
	I0914 18:08:23.310664   62554 main.go:141] libmachine: (embed-certs-044534) DBG | domain embed-certs-044534 has defined IP address 192.168.50.126 and MAC address 52:54:00:f7:d3:8e in network mk-embed-certs-044534
	I0914 18:08:23.310764   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetSSHPort
	I0914 18:08:23.310969   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetSSHKeyPath
	I0914 18:08:23.311206   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetSSHKeyPath
	I0914 18:08:23.311438   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetSSHUsername
	I0914 18:08:23.311594   62554 main.go:141] libmachine: Using SSH client type: native
	I0914 18:08:23.311802   62554 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.50.126 22 <nil> <nil>}
	I0914 18:08:23.311820   62554 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-044534' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-044534/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-044534' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0914 18:08:23.422574   62554 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0914 18:08:23.422603   62554 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19643-8806/.minikube CaCertPath:/home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19643-8806/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19643-8806/.minikube}
	I0914 18:08:23.422623   62554 buildroot.go:174] setting up certificates
	I0914 18:08:23.422634   62554 provision.go:84] configureAuth start
	I0914 18:08:23.422643   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetMachineName
	I0914 18:08:23.422905   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetIP
	I0914 18:08:23.426201   62554 main.go:141] libmachine: (embed-certs-044534) DBG | domain embed-certs-044534 has defined MAC address 52:54:00:f7:d3:8e in network mk-embed-certs-044534
	I0914 18:08:23.426557   62554 main.go:141] libmachine: (embed-certs-044534) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:d3:8e", ip: ""} in network mk-embed-certs-044534: {Iface:virbr3 ExpiryTime:2024-09-14 19:00:16 +0000 UTC Type:0 Mac:52:54:00:f7:d3:8e Iaid: IPaddr:192.168.50.126 Prefix:24 Hostname:embed-certs-044534 Clientid:01:52:54:00:f7:d3:8e}
	I0914 18:08:23.426584   62554 main.go:141] libmachine: (embed-certs-044534) DBG | domain embed-certs-044534 has defined IP address 192.168.50.126 and MAC address 52:54:00:f7:d3:8e in network mk-embed-certs-044534
	I0914 18:08:23.426745   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetSSHHostname
	I0914 18:08:23.428607   62554 main.go:141] libmachine: (embed-certs-044534) DBG | domain embed-certs-044534 has defined MAC address 52:54:00:f7:d3:8e in network mk-embed-certs-044534
	I0914 18:08:23.428985   62554 main.go:141] libmachine: (embed-certs-044534) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:d3:8e", ip: ""} in network mk-embed-certs-044534: {Iface:virbr3 ExpiryTime:2024-09-14 19:00:16 +0000 UTC Type:0 Mac:52:54:00:f7:d3:8e Iaid: IPaddr:192.168.50.126 Prefix:24 Hostname:embed-certs-044534 Clientid:01:52:54:00:f7:d3:8e}
	I0914 18:08:23.429016   62554 main.go:141] libmachine: (embed-certs-044534) DBG | domain embed-certs-044534 has defined IP address 192.168.50.126 and MAC address 52:54:00:f7:d3:8e in network mk-embed-certs-044534
	I0914 18:08:23.429138   62554 provision.go:143] copyHostCerts
	I0914 18:08:23.429198   62554 exec_runner.go:144] found /home/jenkins/minikube-integration/19643-8806/.minikube/ca.pem, removing ...
	I0914 18:08:23.429211   62554 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19643-8806/.minikube/ca.pem
	I0914 18:08:23.429295   62554 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19643-8806/.minikube/ca.pem (1082 bytes)
	I0914 18:08:23.429437   62554 exec_runner.go:144] found /home/jenkins/minikube-integration/19643-8806/.minikube/cert.pem, removing ...
	I0914 18:08:23.429452   62554 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19643-8806/.minikube/cert.pem
	I0914 18:08:23.429498   62554 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19643-8806/.minikube/cert.pem (1123 bytes)
	I0914 18:08:23.429592   62554 exec_runner.go:144] found /home/jenkins/minikube-integration/19643-8806/.minikube/key.pem, removing ...
	I0914 18:08:23.429600   62554 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19643-8806/.minikube/key.pem
	I0914 18:08:23.429626   62554 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19643-8806/.minikube/key.pem (1675 bytes)
	I0914 18:08:23.429680   62554 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19643-8806/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca-key.pem org=jenkins.embed-certs-044534 san=[127.0.0.1 192.168.50.126 embed-certs-044534 localhost minikube]
	I0914 18:08:23.538590   62554 provision.go:177] copyRemoteCerts
	I0914 18:08:23.538662   62554 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0914 18:08:23.538689   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetSSHHostname
	I0914 18:08:23.541366   62554 main.go:141] libmachine: (embed-certs-044534) DBG | domain embed-certs-044534 has defined MAC address 52:54:00:f7:d3:8e in network mk-embed-certs-044534
	I0914 18:08:23.541723   62554 main.go:141] libmachine: (embed-certs-044534) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:d3:8e", ip: ""} in network mk-embed-certs-044534: {Iface:virbr3 ExpiryTime:2024-09-14 19:00:16 +0000 UTC Type:0 Mac:52:54:00:f7:d3:8e Iaid: IPaddr:192.168.50.126 Prefix:24 Hostname:embed-certs-044534 Clientid:01:52:54:00:f7:d3:8e}
	I0914 18:08:23.541746   62554 main.go:141] libmachine: (embed-certs-044534) DBG | domain embed-certs-044534 has defined IP address 192.168.50.126 and MAC address 52:54:00:f7:d3:8e in network mk-embed-certs-044534
	I0914 18:08:23.541938   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetSSHPort
	I0914 18:08:23.542120   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetSSHKeyPath
	I0914 18:08:23.542303   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetSSHUsername
	I0914 18:08:23.542413   62554 sshutil.go:53] new ssh client: &{IP:192.168.50.126 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/embed-certs-044534/id_rsa Username:docker}
	I0914 18:08:23.623698   62554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0914 18:08:23.647378   62554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0914 18:08:23.671327   62554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0914 18:08:23.694570   62554 provision.go:87] duration metric: took 271.923979ms to configureAuth
	I0914 18:08:23.694598   62554 buildroot.go:189] setting minikube options for container-runtime
	I0914 18:08:23.694779   62554 config.go:182] Loaded profile config "embed-certs-044534": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0914 18:08:23.694868   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetSSHHostname
	I0914 18:08:23.697467   62554 main.go:141] libmachine: (embed-certs-044534) DBG | domain embed-certs-044534 has defined MAC address 52:54:00:f7:d3:8e in network mk-embed-certs-044534
	I0914 18:08:23.697828   62554 main.go:141] libmachine: (embed-certs-044534) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:d3:8e", ip: ""} in network mk-embed-certs-044534: {Iface:virbr3 ExpiryTime:2024-09-14 19:00:16 +0000 UTC Type:0 Mac:52:54:00:f7:d3:8e Iaid: IPaddr:192.168.50.126 Prefix:24 Hostname:embed-certs-044534 Clientid:01:52:54:00:f7:d3:8e}
	I0914 18:08:23.697862   62554 main.go:141] libmachine: (embed-certs-044534) DBG | domain embed-certs-044534 has defined IP address 192.168.50.126 and MAC address 52:54:00:f7:d3:8e in network mk-embed-certs-044534
	I0914 18:08:23.698042   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetSSHPort
	I0914 18:08:23.698249   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetSSHKeyPath
	I0914 18:08:23.698421   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetSSHKeyPath
	I0914 18:08:23.698571   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetSSHUsername
	I0914 18:08:23.698692   62554 main.go:141] libmachine: Using SSH client type: native
	I0914 18:08:23.698945   62554 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.50.126 22 <nil> <nil>}
	I0914 18:08:23.698963   62554 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0914 18:08:23.911661   62554 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0914 18:08:23.911697   62554 machine.go:96] duration metric: took 833.189197ms to provisionDockerMachine
	I0914 18:08:23.911712   62554 start.go:293] postStartSetup for "embed-certs-044534" (driver="kvm2")
	I0914 18:08:23.911726   62554 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0914 18:08:23.911751   62554 main.go:141] libmachine: (embed-certs-044534) Calling .DriverName
	I0914 18:08:23.912134   62554 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0914 18:08:23.912169   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetSSHHostname
	I0914 18:08:23.914579   62554 main.go:141] libmachine: (embed-certs-044534) DBG | domain embed-certs-044534 has defined MAC address 52:54:00:f7:d3:8e in network mk-embed-certs-044534
	I0914 18:08:23.914974   62554 main.go:141] libmachine: (embed-certs-044534) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:d3:8e", ip: ""} in network mk-embed-certs-044534: {Iface:virbr3 ExpiryTime:2024-09-14 19:00:16 +0000 UTC Type:0 Mac:52:54:00:f7:d3:8e Iaid: IPaddr:192.168.50.126 Prefix:24 Hostname:embed-certs-044534 Clientid:01:52:54:00:f7:d3:8e}
	I0914 18:08:23.915011   62554 main.go:141] libmachine: (embed-certs-044534) DBG | domain embed-certs-044534 has defined IP address 192.168.50.126 and MAC address 52:54:00:f7:d3:8e in network mk-embed-certs-044534
	I0914 18:08:23.915121   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetSSHPort
	I0914 18:08:23.915322   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetSSHKeyPath
	I0914 18:08:23.915582   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetSSHUsername
	I0914 18:08:23.915710   62554 sshutil.go:53] new ssh client: &{IP:192.168.50.126 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/embed-certs-044534/id_rsa Username:docker}
	I0914 18:08:23.996910   62554 ssh_runner.go:195] Run: cat /etc/os-release
	I0914 18:08:24.000900   62554 info.go:137] Remote host: Buildroot 2023.02.9
	I0914 18:08:24.000926   62554 filesync.go:126] Scanning /home/jenkins/minikube-integration/19643-8806/.minikube/addons for local assets ...
	I0914 18:08:24.000998   62554 filesync.go:126] Scanning /home/jenkins/minikube-integration/19643-8806/.minikube/files for local assets ...
	I0914 18:08:24.001099   62554 filesync.go:149] local asset: /home/jenkins/minikube-integration/19643-8806/.minikube/files/etc/ssl/certs/160162.pem -> 160162.pem in /etc/ssl/certs
	I0914 18:08:24.001222   62554 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0914 18:08:24.010496   62554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/files/etc/ssl/certs/160162.pem --> /etc/ssl/certs/160162.pem (1708 bytes)
	I0914 18:08:24.033377   62554 start.go:296] duration metric: took 121.65145ms for postStartSetup
	I0914 18:08:24.033414   62554 fix.go:56] duration metric: took 19.487140172s for fixHost
	I0914 18:08:24.033434   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetSSHHostname
	I0914 18:08:24.036188   62554 main.go:141] libmachine: (embed-certs-044534) DBG | domain embed-certs-044534 has defined MAC address 52:54:00:f7:d3:8e in network mk-embed-certs-044534
	I0914 18:08:24.036494   62554 main.go:141] libmachine: (embed-certs-044534) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:d3:8e", ip: ""} in network mk-embed-certs-044534: {Iface:virbr3 ExpiryTime:2024-09-14 19:00:16 +0000 UTC Type:0 Mac:52:54:00:f7:d3:8e Iaid: IPaddr:192.168.50.126 Prefix:24 Hostname:embed-certs-044534 Clientid:01:52:54:00:f7:d3:8e}
	I0914 18:08:24.036524   62554 main.go:141] libmachine: (embed-certs-044534) DBG | domain embed-certs-044534 has defined IP address 192.168.50.126 and MAC address 52:54:00:f7:d3:8e in network mk-embed-certs-044534
	I0914 18:08:24.036672   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetSSHPort
	I0914 18:08:24.036886   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetSSHKeyPath
	I0914 18:08:24.037082   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetSSHKeyPath
	I0914 18:08:24.037216   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetSSHUsername
	I0914 18:08:24.037375   62554 main.go:141] libmachine: Using SSH client type: native
	I0914 18:08:24.037542   62554 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.50.126 22 <nil> <nil>}
	I0914 18:08:24.037554   62554 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0914 18:08:24.142822   62554 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726337304.118879777
	
	I0914 18:08:24.142851   62554 fix.go:216] guest clock: 1726337304.118879777
	I0914 18:08:24.142862   62554 fix.go:229] Guest: 2024-09-14 18:08:24.118879777 +0000 UTC Remote: 2024-09-14 18:08:24.03341777 +0000 UTC m=+259.160200473 (delta=85.462007ms)
	I0914 18:08:24.142936   62554 fix.go:200] guest clock delta is within tolerance: 85.462007ms
	I0914 18:08:24.142960   62554 start.go:83] releasing machines lock for "embed-certs-044534", held for 19.596720856s
	I0914 18:08:24.142992   62554 main.go:141] libmachine: (embed-certs-044534) Calling .DriverName
	I0914 18:08:24.143262   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetIP
	I0914 18:08:24.146122   62554 main.go:141] libmachine: (embed-certs-044534) DBG | domain embed-certs-044534 has defined MAC address 52:54:00:f7:d3:8e in network mk-embed-certs-044534
	I0914 18:08:24.146501   62554 main.go:141] libmachine: (embed-certs-044534) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:d3:8e", ip: ""} in network mk-embed-certs-044534: {Iface:virbr3 ExpiryTime:2024-09-14 19:00:16 +0000 UTC Type:0 Mac:52:54:00:f7:d3:8e Iaid: IPaddr:192.168.50.126 Prefix:24 Hostname:embed-certs-044534 Clientid:01:52:54:00:f7:d3:8e}
	I0914 18:08:24.146537   62554 main.go:141] libmachine: (embed-certs-044534) DBG | domain embed-certs-044534 has defined IP address 192.168.50.126 and MAC address 52:54:00:f7:d3:8e in network mk-embed-certs-044534
	I0914 18:08:24.146711   62554 main.go:141] libmachine: (embed-certs-044534) Calling .DriverName
	I0914 18:08:24.147204   62554 main.go:141] libmachine: (embed-certs-044534) Calling .DriverName
	I0914 18:08:24.147430   62554 main.go:141] libmachine: (embed-certs-044534) Calling .DriverName
	I0914 18:08:24.147532   62554 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0914 18:08:24.147589   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetSSHHostname
	I0914 18:08:24.147813   62554 ssh_runner.go:195] Run: cat /version.json
	I0914 18:08:24.147839   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetSSHHostname
	I0914 18:08:24.150691   62554 main.go:141] libmachine: (embed-certs-044534) DBG | domain embed-certs-044534 has defined MAC address 52:54:00:f7:d3:8e in network mk-embed-certs-044534
	I0914 18:08:24.150736   62554 main.go:141] libmachine: (embed-certs-044534) DBG | domain embed-certs-044534 has defined MAC address 52:54:00:f7:d3:8e in network mk-embed-certs-044534
	I0914 18:08:24.151012   62554 main.go:141] libmachine: (embed-certs-044534) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:d3:8e", ip: ""} in network mk-embed-certs-044534: {Iface:virbr3 ExpiryTime:2024-09-14 19:00:16 +0000 UTC Type:0 Mac:52:54:00:f7:d3:8e Iaid: IPaddr:192.168.50.126 Prefix:24 Hostname:embed-certs-044534 Clientid:01:52:54:00:f7:d3:8e}
	I0914 18:08:24.151056   62554 main.go:141] libmachine: (embed-certs-044534) DBG | domain embed-certs-044534 has defined IP address 192.168.50.126 and MAC address 52:54:00:f7:d3:8e in network mk-embed-certs-044534
	I0914 18:08:24.151149   62554 main.go:141] libmachine: (embed-certs-044534) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:d3:8e", ip: ""} in network mk-embed-certs-044534: {Iface:virbr3 ExpiryTime:2024-09-14 19:00:16 +0000 UTC Type:0 Mac:52:54:00:f7:d3:8e Iaid: IPaddr:192.168.50.126 Prefix:24 Hostname:embed-certs-044534 Clientid:01:52:54:00:f7:d3:8e}
	I0914 18:08:24.151179   62554 main.go:141] libmachine: (embed-certs-044534) DBG | domain embed-certs-044534 has defined IP address 192.168.50.126 and MAC address 52:54:00:f7:d3:8e in network mk-embed-certs-044534
	I0914 18:08:24.151431   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetSSHPort
	I0914 18:08:24.151468   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetSSHPort
	I0914 18:08:24.151586   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetSSHKeyPath
	I0914 18:08:24.151751   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetSSHKeyPath
	I0914 18:08:24.151772   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetSSHUsername
	I0914 18:08:24.151938   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetSSHUsername
	I0914 18:08:24.151944   62554 sshutil.go:53] new ssh client: &{IP:192.168.50.126 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/embed-certs-044534/id_rsa Username:docker}
	I0914 18:08:24.152034   62554 sshutil.go:53] new ssh client: &{IP:192.168.50.126 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/embed-certs-044534/id_rsa Username:docker}
	I0914 18:08:24.256821   62554 ssh_runner.go:195] Run: systemctl --version
	I0914 18:08:24.263249   62554 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0914 18:08:24.411996   62554 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0914 18:08:24.418685   62554 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0914 18:08:24.418759   62554 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0914 18:08:24.434541   62554 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0914 18:08:24.434569   62554 start.go:495] detecting cgroup driver to use...
	I0914 18:08:24.434655   62554 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0914 18:08:24.452550   62554 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0914 18:08:24.467548   62554 docker.go:217] disabling cri-docker service (if available) ...
	I0914 18:08:24.467602   62554 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0914 18:08:24.482556   62554 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0914 18:08:24.497198   62554 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0914 18:08:24.625300   62554 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0914 18:08:24.805163   62554 docker.go:233] disabling docker service ...
	I0914 18:08:24.805248   62554 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0914 18:08:24.821164   62554 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0914 18:08:24.834886   62554 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0914 18:08:24.167885   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .Start
	I0914 18:08:24.168096   62996 main.go:141] libmachine: (old-k8s-version-556121) Ensuring networks are active...
	I0914 18:08:24.169086   62996 main.go:141] libmachine: (old-k8s-version-556121) Ensuring network default is active
	I0914 18:08:24.169493   62996 main.go:141] libmachine: (old-k8s-version-556121) Ensuring network mk-old-k8s-version-556121 is active
	I0914 18:08:24.170025   62996 main.go:141] libmachine: (old-k8s-version-556121) Getting domain xml...
	I0914 18:08:24.170619   62996 main.go:141] libmachine: (old-k8s-version-556121) Creating domain...
	I0914 18:08:24.963694   62554 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0914 18:08:25.081720   62554 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0914 18:08:25.097176   62554 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0914 18:08:25.116611   62554 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0914 18:08:25.116677   62554 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 18:08:25.129500   62554 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0914 18:08:25.129586   62554 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 18:08:25.140281   62554 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 18:08:25.150925   62554 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 18:08:25.166139   62554 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0914 18:08:25.177340   62554 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 18:08:25.187662   62554 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 18:08:25.207019   62554 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 18:08:25.217207   62554 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0914 18:08:25.226988   62554 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0914 18:08:25.227065   62554 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0914 18:08:25.248357   62554 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0914 18:08:25.258467   62554 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 18:08:25.375359   62554 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0914 18:08:25.470389   62554 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0914 18:08:25.470470   62554 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0914 18:08:25.475526   62554 start.go:563] Will wait 60s for crictl version
	I0914 18:08:25.475589   62554 ssh_runner.go:195] Run: which crictl
	I0914 18:08:25.479131   62554 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0914 18:08:25.530371   62554 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0914 18:08:25.530461   62554 ssh_runner.go:195] Run: crio --version
	I0914 18:08:25.557035   62554 ssh_runner.go:195] Run: crio --version
	I0914 18:08:25.586883   62554 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0914 18:08:25.588117   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetIP
	I0914 18:08:25.591212   62554 main.go:141] libmachine: (embed-certs-044534) DBG | domain embed-certs-044534 has defined MAC address 52:54:00:f7:d3:8e in network mk-embed-certs-044534
	I0914 18:08:25.591600   62554 main.go:141] libmachine: (embed-certs-044534) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:d3:8e", ip: ""} in network mk-embed-certs-044534: {Iface:virbr3 ExpiryTime:2024-09-14 19:00:16 +0000 UTC Type:0 Mac:52:54:00:f7:d3:8e Iaid: IPaddr:192.168.50.126 Prefix:24 Hostname:embed-certs-044534 Clientid:01:52:54:00:f7:d3:8e}
	I0914 18:08:25.591628   62554 main.go:141] libmachine: (embed-certs-044534) DBG | domain embed-certs-044534 has defined IP address 192.168.50.126 and MAC address 52:54:00:f7:d3:8e in network mk-embed-certs-044534
	I0914 18:08:25.591816   62554 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0914 18:08:25.595706   62554 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0914 18:08:25.608009   62554 kubeadm.go:883] updating cluster {Name:embed-certs-044534 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19643/minikube-v1.34.0-1726281733-19643-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726281268-19643@sha256:cce8be0c1ac4e3d852132008ef1cc1dcf5b79f708d025db83f146ae65db32e8e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-044534 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.126 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0914 18:08:25.608141   62554 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0914 18:08:25.608194   62554 ssh_runner.go:195] Run: sudo crictl images --output json
	I0914 18:08:25.643422   62554 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0914 18:08:25.643515   62554 ssh_runner.go:195] Run: which lz4
	I0914 18:08:25.647471   62554 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0914 18:08:25.651573   62554 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0914 18:08:25.651607   62554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I0914 18:08:26.985357   62554 crio.go:462] duration metric: took 1.337911722s to copy over tarball
	I0914 18:08:26.985437   62554 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0914 18:08:29.111492   62554 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.126022567s)
	I0914 18:08:29.111524   62554 crio.go:469] duration metric: took 2.12613646s to extract the tarball
	I0914 18:08:29.111533   62554 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0914 18:08:29.148426   62554 ssh_runner.go:195] Run: sudo crictl images --output json
	I0914 18:08:29.190595   62554 crio.go:514] all images are preloaded for cri-o runtime.
	I0914 18:08:29.190620   62554 cache_images.go:84] Images are preloaded, skipping loading
	I0914 18:08:29.190628   62554 kubeadm.go:934] updating node { 192.168.50.126 8443 v1.31.1 crio true true} ...
	I0914 18:08:29.190751   62554 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-044534 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.126
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:embed-certs-044534 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0914 18:08:29.190823   62554 ssh_runner.go:195] Run: crio config
	I0914 18:08:29.234785   62554 cni.go:84] Creating CNI manager for ""
	I0914 18:08:29.234808   62554 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0914 18:08:29.234818   62554 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0914 18:08:29.234871   62554 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.126 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-044534 NodeName:embed-certs-044534 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.126"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.126 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0914 18:08:29.234996   62554 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.126
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-044534"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.126
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.126"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0914 18:08:29.235054   62554 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0914 18:08:29.244554   62554 binaries.go:44] Found k8s binaries, skipping transfer
	I0914 18:08:29.244631   62554 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0914 18:08:29.253622   62554 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0914 18:08:29.270046   62554 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0914 18:08:29.285751   62554 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
	I0914 18:08:29.303567   62554 ssh_runner.go:195] Run: grep 192.168.50.126	control-plane.minikube.internal$ /etc/hosts
	I0914 18:08:29.307335   62554 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.126	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0914 18:08:29.319510   62554 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 18:08:29.442649   62554 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0914 18:08:29.459657   62554 certs.go:68] Setting up /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/embed-certs-044534 for IP: 192.168.50.126
	I0914 18:08:29.459687   62554 certs.go:194] generating shared ca certs ...
	I0914 18:08:29.459709   62554 certs.go:226] acquiring lock for ca certs: {Name:mkb663a3180967f5f94f0c355b2cd55067394331 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 18:08:29.459908   62554 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19643-8806/.minikube/ca.key
	I0914 18:08:29.459976   62554 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19643-8806/.minikube/proxy-client-ca.key
	I0914 18:08:29.459995   62554 certs.go:256] generating profile certs ...
	I0914 18:08:29.460166   62554 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/embed-certs-044534/client.key
	I0914 18:08:29.460247   62554 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/embed-certs-044534/apiserver.key.15c978c5
	I0914 18:08:29.460301   62554 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/embed-certs-044534/proxy-client.key
	I0914 18:08:29.460447   62554 certs.go:484] found cert: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/16016.pem (1338 bytes)
	W0914 18:08:29.460491   62554 certs.go:480] ignoring /home/jenkins/minikube-integration/19643-8806/.minikube/certs/16016_empty.pem, impossibly tiny 0 bytes
	I0914 18:08:29.460505   62554 certs.go:484] found cert: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca-key.pem (1679 bytes)
	I0914 18:08:29.460537   62554 certs.go:484] found cert: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca.pem (1082 bytes)
	I0914 18:08:29.460581   62554 certs.go:484] found cert: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/cert.pem (1123 bytes)
	I0914 18:08:29.460605   62554 certs.go:484] found cert: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/key.pem (1675 bytes)
	I0914 18:08:29.460649   62554 certs.go:484] found cert: /home/jenkins/minikube-integration/19643-8806/.minikube/files/etc/ssl/certs/160162.pem (1708 bytes)
	I0914 18:08:29.461415   62554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0914 18:08:29.501260   62554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0914 18:08:29.531940   62554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0914 18:08:29.577959   62554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0914 18:08:29.604067   62554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/embed-certs-044534/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0914 18:08:29.635335   62554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/embed-certs-044534/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0914 18:08:29.658841   62554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/embed-certs-044534/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0914 18:08:29.684149   62554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/embed-certs-044534/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0914 18:08:29.709354   62554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0914 18:08:29.733812   62554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/certs/16016.pem --> /usr/share/ca-certificates/16016.pem (1338 bytes)
	I0914 18:08:29.758427   62554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/files/etc/ssl/certs/160162.pem --> /usr/share/ca-certificates/160162.pem (1708 bytes)
	I0914 18:08:29.783599   62554 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0914 18:08:29.802188   62554 ssh_runner.go:195] Run: openssl version
	I0914 18:08:29.808277   62554 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0914 18:08:29.821167   62554 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0914 18:08:29.825911   62554 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 14 16:44 /usr/share/ca-certificates/minikubeCA.pem
	I0914 18:08:29.825978   62554 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0914 18:08:29.832160   62554 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0914 18:08:29.844395   62554 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16016.pem && ln -fs /usr/share/ca-certificates/16016.pem /etc/ssl/certs/16016.pem"
	I0914 18:08:29.856943   62554 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16016.pem
	I0914 18:08:29.861671   62554 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 14 17:01 /usr/share/ca-certificates/16016.pem
	I0914 18:08:29.861730   62554 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16016.pem
	I0914 18:08:29.867506   62554 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16016.pem /etc/ssl/certs/51391683.0"
	I0914 18:08:29.878004   62554 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/160162.pem && ln -fs /usr/share/ca-certificates/160162.pem /etc/ssl/certs/160162.pem"
	I0914 18:08:29.890322   62554 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/160162.pem
	I0914 18:08:29.894985   62554 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 14 17:01 /usr/share/ca-certificates/160162.pem
	I0914 18:08:29.895053   62554 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/160162.pem
	I0914 18:08:29.900837   62554 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/160162.pem /etc/ssl/certs/3ec20f2e.0"
	I0914 18:08:25.409780   62996 main.go:141] libmachine: (old-k8s-version-556121) Waiting to get IP...
	I0914 18:08:25.410880   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 18:08:25.411287   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | unable to find current IP address of domain old-k8s-version-556121 in network mk-old-k8s-version-556121
	I0914 18:08:25.411359   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | I0914 18:08:25.411268   63916 retry.go:31] will retry after 190.165859ms: waiting for machine to come up
	I0914 18:08:25.602661   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 18:08:25.603210   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | unable to find current IP address of domain old-k8s-version-556121 in network mk-old-k8s-version-556121
	I0914 18:08:25.603235   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | I0914 18:08:25.603161   63916 retry.go:31] will retry after 274.368109ms: waiting for machine to come up
	I0914 18:08:25.879976   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 18:08:25.880476   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | unable to find current IP address of domain old-k8s-version-556121 in network mk-old-k8s-version-556121
	I0914 18:08:25.880509   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | I0914 18:08:25.880412   63916 retry.go:31] will retry after 476.865698ms: waiting for machine to come up
	I0914 18:08:26.359279   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 18:08:26.359815   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | unable to find current IP address of domain old-k8s-version-556121 in network mk-old-k8s-version-556121
	I0914 18:08:26.359845   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | I0914 18:08:26.359775   63916 retry.go:31] will retry after 474.163339ms: waiting for machine to come up
	I0914 18:08:26.835268   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 18:08:26.835953   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | unable to find current IP address of domain old-k8s-version-556121 in network mk-old-k8s-version-556121
	I0914 18:08:26.835983   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | I0914 18:08:26.835914   63916 retry.go:31] will retry after 567.661702ms: waiting for machine to come up
	I0914 18:08:27.404884   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 18:08:27.405341   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | unable to find current IP address of domain old-k8s-version-556121 in network mk-old-k8s-version-556121
	I0914 18:08:27.405370   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | I0914 18:08:27.405297   63916 retry.go:31] will retry after 852.429203ms: waiting for machine to come up
	I0914 18:08:28.259542   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 18:08:28.260217   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | unable to find current IP address of domain old-k8s-version-556121 in network mk-old-k8s-version-556121
	I0914 18:08:28.260243   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | I0914 18:08:28.260154   63916 retry.go:31] will retry after 1.085703288s: waiting for machine to come up
	I0914 18:08:29.347849   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 18:08:29.348268   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | unable to find current IP address of domain old-k8s-version-556121 in network mk-old-k8s-version-556121
	I0914 18:08:29.348289   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | I0914 18:08:29.348235   63916 retry.go:31] will retry after 1.387665735s: waiting for machine to come up
	I0914 18:08:29.911102   62554 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0914 18:08:29.915546   62554 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0914 18:08:29.921470   62554 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0914 18:08:29.927238   62554 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0914 18:08:29.933122   62554 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0914 18:08:29.938829   62554 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0914 18:08:29.944811   62554 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
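
	The six `openssl x509 ... -checkend 86400` runs above verify that each control-plane certificate remains valid for at least the next 24 hours (86400 seconds) before the existing cluster state is reused. A minimal Go sketch of the same check, assuming a PEM-encoded certificate on disk (the path below is taken from the first check above; the helper name is illustrative, not minikube's code):

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	// expiresWithin reports whether the PEM certificate at path expires within the given window,
	// mirroring `openssl x509 -noout -in <path> -checkend <seconds>`.
	func expiresWithin(path string, window time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("no PEM block found in %s", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(window).After(cert.NotAfter), nil
	}

	func main() {
		// Path taken from the log above; the 24h window matches -checkend 86400.
		soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		if soon {
			fmt.Println("certificate expires within 24h")
		} else {
			fmt.Println("certificate is valid for at least another 24h")
		}
	}
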
	I0914 18:08:29.950679   62554 kubeadm.go:392] StartCluster: {Name:embed-certs-044534 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19643/minikube-v1.34.0-1726281733-19643-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726281268-19643@sha256:cce8be0c1ac4e3d852132008ef1cc1dcf5b79f708d025db83f146ae65db32e8e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-044534 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.126 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0914 18:08:29.950762   62554 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0914 18:08:29.950866   62554 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0914 18:08:29.987553   62554 cri.go:89] found id: ""
	I0914 18:08:29.987626   62554 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0914 18:08:29.998690   62554 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0914 18:08:29.998713   62554 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0914 18:08:29.998765   62554 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0914 18:08:30.009411   62554 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0914 18:08:30.010804   62554 kubeconfig.go:125] found "embed-certs-044534" server: "https://192.168.50.126:8443"
	I0914 18:08:30.013635   62554 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0914 18:08:30.023903   62554 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.126
	I0914 18:08:30.023937   62554 kubeadm.go:1160] stopping kube-system containers ...
	I0914 18:08:30.023951   62554 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0914 18:08:30.024017   62554 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0914 18:08:30.067767   62554 cri.go:89] found id: ""
	I0914 18:08:30.067842   62554 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0914 18:08:30.087326   62554 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0914 18:08:30.098162   62554 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0914 18:08:30.098180   62554 kubeadm.go:157] found existing configuration files:
	
	I0914 18:08:30.098218   62554 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0914 18:08:30.108239   62554 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0914 18:08:30.108296   62554 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0914 18:08:30.118913   62554 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0914 18:08:30.129091   62554 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0914 18:08:30.129172   62554 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0914 18:08:30.139658   62554 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0914 18:08:30.148838   62554 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0914 18:08:30.148923   62554 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0914 18:08:30.158386   62554 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0914 18:08:30.167282   62554 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0914 18:08:30.167354   62554 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
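	(Annotation: the grep/rm pairs above, logged from kubeadm.go:163, follow one rule: each kubeconfig under /etc/kubernetes is kept only if it already references the expected control-plane endpoint, otherwise it is deleted so the following "kubeadm init phase kubeconfig" run regenerates it. A minimal Go sketch of that pattern, under the assumption that the function name cleanStaleKubeconfigs and the hard-coded file list are illustrative rather than minikube's actual code:)

```go
// Illustrative sketch only: mirror the grep/rm pairs from the log above.
package main

import (
	"fmt"
	"os/exec"
)

func cleanStaleKubeconfigs(endpoint string) error {
	kubeconfigFiles := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range kubeconfigFiles {
		// A non-zero grep exit (status 2 in the log, because the files do not
		// exist) means the file cannot be reused for this endpoint, so drop it
		// and let kubeadm regenerate it.
		if err := exec.Command("sudo", "grep", endpoint, f).Run(); err != nil {
			fmt.Printf("%q not found in %s - removing\n", endpoint, f)
			if err := exec.Command("sudo", "rm", "-f", f).Run(); err != nil {
				return err
			}
		}
	}
	return nil
}

func main() {
	_ = cleanStaleKubeconfigs("https://control-plane.minikube.internal:8443")
}
```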
	I0914 18:08:30.176443   62554 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0914 18:08:30.185476   62554 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 18:08:30.310603   62554 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 18:08:31.243123   62554 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0914 18:08:31.457657   62554 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 18:08:31.531992   62554 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0914 18:08:31.625580   62554 api_server.go:52] waiting for apiserver process to appear ...
	I0914 18:08:31.625683   62554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:08:32.125744   62554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:08:32.626056   62554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:08:33.126817   62554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:08:33.146478   62554 api_server.go:72] duration metric: took 1.520896575s to wait for apiserver process to appear ...
	I0914 18:08:33.146517   62554 api_server.go:88] waiting for apiserver healthz status ...
	I0914 18:08:33.146543   62554 api_server.go:253] Checking apiserver healthz at https://192.168.50.126:8443/healthz ...
	I0914 18:08:33.147106   62554 api_server.go:269] stopped: https://192.168.50.126:8443/healthz: Get "https://192.168.50.126:8443/healthz": dial tcp 192.168.50.126:8443: connect: connection refused
	I0914 18:08:33.646672   62554 api_server.go:253] Checking apiserver healthz at https://192.168.50.126:8443/healthz ...
	I0914 18:08:30.737338   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 18:08:30.737792   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | unable to find current IP address of domain old-k8s-version-556121 in network mk-old-k8s-version-556121
	I0914 18:08:30.737844   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | I0914 18:08:30.737738   63916 retry.go:31] will retry after 1.803773185s: waiting for machine to come up
	I0914 18:08:32.543684   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 18:08:32.544156   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | unable to find current IP address of domain old-k8s-version-556121 in network mk-old-k8s-version-556121
	I0914 18:08:32.544182   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | I0914 18:08:32.544107   63916 retry.go:31] will retry after 1.828120666s: waiting for machine to come up
	I0914 18:08:34.373701   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 18:08:34.374182   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | unable to find current IP address of domain old-k8s-version-556121 in network mk-old-k8s-version-556121
	I0914 18:08:34.374211   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | I0914 18:08:34.374120   63916 retry.go:31] will retry after 2.720782735s: waiting for machine to come up
	I0914 18:08:35.687169   62554 api_server.go:279] https://192.168.50.126:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0914 18:08:35.687200   62554 api_server.go:103] status: https://192.168.50.126:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0914 18:08:35.687221   62554 api_server.go:253] Checking apiserver healthz at https://192.168.50.126:8443/healthz ...
	I0914 18:08:35.737352   62554 api_server.go:279] https://192.168.50.126:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0914 18:08:35.737410   62554 api_server.go:103] status: https://192.168.50.126:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0914 18:08:36.146777   62554 api_server.go:253] Checking apiserver healthz at https://192.168.50.126:8443/healthz ...
	I0914 18:08:36.151156   62554 api_server.go:279] https://192.168.50.126:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0914 18:08:36.151185   62554 api_server.go:103] status: https://192.168.50.126:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0914 18:08:36.647380   62554 api_server.go:253] Checking apiserver healthz at https://192.168.50.126:8443/healthz ...
	I0914 18:08:36.655444   62554 api_server.go:279] https://192.168.50.126:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0914 18:08:36.655477   62554 api_server.go:103] status: https://192.168.50.126:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0914 18:08:37.146971   62554 api_server.go:253] Checking apiserver healthz at https://192.168.50.126:8443/healthz ...
	I0914 18:08:37.151233   62554 api_server.go:279] https://192.168.50.126:8443/healthz returned 200:
	ok
	I0914 18:08:37.160642   62554 api_server.go:141] control plane version: v1.31.1
	I0914 18:08:37.160671   62554 api_server.go:131] duration metric: took 4.014146932s to wait for apiserver health ...
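	(Annotation: the healthz sequence above shows the usual progression after a control-plane restart: connection refused while the apiserver binds, then 403 because the anonymous probe is rejected while RBAC bootstraps, then 500 while poststarthooks such as rbac/bootstrap-roles and scheduling/bootstrap-system-priority-classes finish, and finally 200 "ok". A minimal Go sketch of such a polling loop; it is not minikube's api_server.go, and skipping TLS verification for the probe is an assumption made only for this sketch:)

```go
// Illustrative sketch only: poll /healthz until it returns 200 "ok".
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 2 * time.Second,
		// The apiserver serving certificate is not trusted by this host, so the
		// probe skips verification (assumption for the sketch, not a general rule).
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK && string(body) == "ok" {
				return nil
			}
			// 403 and 500 are treated as retryable, as in the log above.
			fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver did not become healthy within %s", timeout)
}

func main() {
	_ = waitForHealthz("https://192.168.50.126:8443/healthz", 4*time.Minute)
}
```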
	I0914 18:08:37.160679   62554 cni.go:84] Creating CNI manager for ""
	I0914 18:08:37.160686   62554 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0914 18:08:37.162836   62554 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0914 18:08:37.164378   62554 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0914 18:08:37.183377   62554 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0914 18:08:37.210701   62554 system_pods.go:43] waiting for kube-system pods to appear ...
	I0914 18:08:37.222258   62554 system_pods.go:59] 8 kube-system pods found
	I0914 18:08:37.222304   62554 system_pods.go:61] "coredns-7c65d6cfc9-59dm5" [55e67ff8-cf54-41fc-af46-160085787f69] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0914 18:08:37.222316   62554 system_pods.go:61] "etcd-embed-certs-044534" [932ca8e3-a777-4bb3-bdc2-6c1f1d293d4a] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0914 18:08:37.222331   62554 system_pods.go:61] "kube-apiserver-embed-certs-044534" [f71e6720-c32c-426f-8620-b56eadf5e33b] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0914 18:08:37.222351   62554 system_pods.go:61] "kube-controller-manager-embed-certs-044534" [b93c261f-303f-43bb-8b33-4f97dc287809] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0914 18:08:37.222359   62554 system_pods.go:61] "kube-proxy-nkdth" [3762b613-c50f-4ba9-af52-371b139f9b6e] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0914 18:08:37.222368   62554 system_pods.go:61] "kube-scheduler-embed-certs-044534" [65da2ca2-0405-4726-a2dc-dd13519c336a] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0914 18:08:37.222377   62554 system_pods.go:61] "metrics-server-6867b74b74-stwfz" [ccc73057-4710-4e41-b643-d793d9b01175] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 18:08:37.222393   62554 system_pods.go:61] "storage-provisioner" [660fd3e3-ce57-4275-9fe1-bcceba75d8a0] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0914 18:08:37.222405   62554 system_pods.go:74] duration metric: took 11.676128ms to wait for pod list to return data ...
	I0914 18:08:37.222420   62554 node_conditions.go:102] verifying NodePressure condition ...
	I0914 18:08:37.227047   62554 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0914 18:08:37.227087   62554 node_conditions.go:123] node cpu capacity is 2
	I0914 18:08:37.227104   62554 node_conditions.go:105] duration metric: took 4.678826ms to run NodePressure ...
	I0914 18:08:37.227124   62554 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 18:08:37.510868   62554 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0914 18:08:37.515839   62554 kubeadm.go:739] kubelet initialised
	I0914 18:08:37.515863   62554 kubeadm.go:740] duration metric: took 4.967389ms waiting for restarted kubelet to initialise ...
	I0914 18:08:37.515871   62554 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 18:08:37.520412   62554 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-59dm5" in "kube-system" namespace to be "Ready" ...
	I0914 18:08:39.528469   62554 pod_ready.go:103] pod "coredns-7c65d6cfc9-59dm5" in "kube-system" namespace has status "Ready":"False"
	I0914 18:08:37.097976   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 18:08:37.098462   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | unable to find current IP address of domain old-k8s-version-556121 in network mk-old-k8s-version-556121
	I0914 18:08:37.098499   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | I0914 18:08:37.098402   63916 retry.go:31] will retry after 2.748765758s: waiting for machine to come up
	I0914 18:08:39.849058   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 18:08:39.849634   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | unable to find current IP address of domain old-k8s-version-556121 in network mk-old-k8s-version-556121
	I0914 18:08:39.849665   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | I0914 18:08:39.849559   63916 retry.go:31] will retry after 3.687679512s: waiting for machine to come up
	I0914 18:08:42.028017   62554 pod_ready.go:103] pod "coredns-7c65d6cfc9-59dm5" in "kube-system" namespace has status "Ready":"False"
	I0914 18:08:44.526502   62554 pod_ready.go:103] pod "coredns-7c65d6cfc9-59dm5" in "kube-system" namespace has status "Ready":"False"
	I0914 18:08:45.103061   63448 start.go:364] duration metric: took 2m4.701591278s to acquireMachinesLock for "default-k8s-diff-port-243449"
	I0914 18:08:45.103116   63448 start.go:96] Skipping create...Using existing machine configuration
	I0914 18:08:45.103124   63448 fix.go:54] fixHost starting: 
	I0914 18:08:45.103555   63448 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 18:08:45.103626   63448 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 18:08:45.120496   63448 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34337
	I0914 18:08:45.121098   63448 main.go:141] libmachine: () Calling .GetVersion
	I0914 18:08:45.122023   63448 main.go:141] libmachine: Using API Version  1
	I0914 18:08:45.122050   63448 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 18:08:45.122440   63448 main.go:141] libmachine: () Calling .GetMachineName
	I0914 18:08:45.122631   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .DriverName
	I0914 18:08:45.122792   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetState
	I0914 18:08:45.124473   63448 fix.go:112] recreateIfNeeded on default-k8s-diff-port-243449: state=Stopped err=<nil>
	I0914 18:08:45.124500   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .DriverName
	W0914 18:08:45.124633   63448 fix.go:138] unexpected machine state, will restart: <nil>
	I0914 18:08:45.126255   63448 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-243449" ...
	I0914 18:08:45.127296   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .Start
	I0914 18:08:45.127469   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Ensuring networks are active...
	I0914 18:08:45.128415   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Ensuring network default is active
	I0914 18:08:45.128823   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Ensuring network mk-default-k8s-diff-port-243449 is active
	I0914 18:08:45.129257   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Getting domain xml...
	I0914 18:08:45.130055   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Creating domain...
	I0914 18:08:43.541607   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 18:08:43.542188   62996 main.go:141] libmachine: (old-k8s-version-556121) Found IP for machine: 192.168.83.80
	I0914 18:08:43.542220   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has current primary IP address 192.168.83.80 and MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 18:08:43.542230   62996 main.go:141] libmachine: (old-k8s-version-556121) Reserving static IP address...
	I0914 18:08:43.542686   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | found host DHCP lease matching {name: "old-k8s-version-556121", mac: "52:54:00:76:25:ab", ip: "192.168.83.80"} in network mk-old-k8s-version-556121: {Iface:virbr1 ExpiryTime:2024-09-14 19:08:35 +0000 UTC Type:0 Mac:52:54:00:76:25:ab Iaid: IPaddr:192.168.83.80 Prefix:24 Hostname:old-k8s-version-556121 Clientid:01:52:54:00:76:25:ab}
	I0914 18:08:43.542711   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | skip adding static IP to network mk-old-k8s-version-556121 - found existing host DHCP lease matching {name: "old-k8s-version-556121", mac: "52:54:00:76:25:ab", ip: "192.168.83.80"}
	I0914 18:08:43.542728   62996 main.go:141] libmachine: (old-k8s-version-556121) Reserved static IP address: 192.168.83.80
	I0914 18:08:43.542748   62996 main.go:141] libmachine: (old-k8s-version-556121) Waiting for SSH to be available...
	I0914 18:08:43.542770   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | Getting to WaitForSSH function...
	I0914 18:08:43.545361   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 18:08:43.545798   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:25:ab", ip: ""} in network mk-old-k8s-version-556121: {Iface:virbr1 ExpiryTime:2024-09-14 19:08:35 +0000 UTC Type:0 Mac:52:54:00:76:25:ab Iaid: IPaddr:192.168.83.80 Prefix:24 Hostname:old-k8s-version-556121 Clientid:01:52:54:00:76:25:ab}
	I0914 18:08:43.545828   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined IP address 192.168.83.80 and MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 18:08:43.545984   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | Using SSH client type: external
	I0914 18:08:43.546021   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | Using SSH private key: /home/jenkins/minikube-integration/19643-8806/.minikube/machines/old-k8s-version-556121/id_rsa (-rw-------)
	I0914 18:08:43.546067   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.83.80 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19643-8806/.minikube/machines/old-k8s-version-556121/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0914 18:08:43.546091   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | About to run SSH command:
	I0914 18:08:43.546109   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | exit 0
	I0914 18:08:43.686605   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | SSH cmd err, output: <nil>: 
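	(Annotation: the "Waiting for SSH" phase above amounts to retrying an external `ssh ... exit 0` with host-key checking disabled until it exits 0, which proves sshd inside the freshly booted VM accepts connections. A small Go sketch of that probe; sshReachable and the bounded retry loop are illustrative names and choices, not minikube's actual implementation:)

```go
// Illustrative sketch only: retry "ssh docker@<ip> exit 0" until it succeeds.
package main

import (
	"os/exec"
	"time"
)

func sshReachable(ip, keyPath string) bool {
	cmd := exec.Command("ssh",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "ConnectTimeout=10",
		"-i", keyPath,
		"docker@"+ip, "exit 0")
	// A nil error means the remote command exited 0, i.e. sshd is up.
	return cmd.Run() == nil
}

func main() {
	key := "/home/jenkins/minikube-integration/19643-8806/.minikube/machines/old-k8s-version-556121/id_rsa"
	for i := 0; i < 30 && !sshReachable("192.168.83.80", key); i++ {
		time.Sleep(2 * time.Second)
	}
}
```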
	I0914 18:08:43.687033   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetConfigRaw
	I0914 18:08:43.750102   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetIP
	I0914 18:08:43.753303   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 18:08:43.753653   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:25:ab", ip: ""} in network mk-old-k8s-version-556121: {Iface:virbr1 ExpiryTime:2024-09-14 19:08:35 +0000 UTC Type:0 Mac:52:54:00:76:25:ab Iaid: IPaddr:192.168.83.80 Prefix:24 Hostname:old-k8s-version-556121 Clientid:01:52:54:00:76:25:ab}
	I0914 18:08:43.753696   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined IP address 192.168.83.80 and MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 18:08:43.754107   62996 profile.go:143] Saving config to /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/old-k8s-version-556121/config.json ...
	I0914 18:08:43.802426   62996 machine.go:93] provisionDockerMachine start ...
	I0914 18:08:43.802497   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .DriverName
	I0914 18:08:43.802858   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHHostname
	I0914 18:08:43.805944   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 18:08:43.806303   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:25:ab", ip: ""} in network mk-old-k8s-version-556121: {Iface:virbr1 ExpiryTime:2024-09-14 19:08:35 +0000 UTC Type:0 Mac:52:54:00:76:25:ab Iaid: IPaddr:192.168.83.80 Prefix:24 Hostname:old-k8s-version-556121 Clientid:01:52:54:00:76:25:ab}
	I0914 18:08:43.806346   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined IP address 192.168.83.80 and MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 18:08:43.806722   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHPort
	I0914 18:08:43.806951   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHKeyPath
	I0914 18:08:43.807130   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHKeyPath
	I0914 18:08:43.807298   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHUsername
	I0914 18:08:43.807469   62996 main.go:141] libmachine: Using SSH client type: native
	I0914 18:08:43.807687   62996 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.83.80 22 <nil> <nil>}
	I0914 18:08:43.807700   62996 main.go:141] libmachine: About to run SSH command:
	hostname
	I0914 18:08:43.906427   62996 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0914 18:08:43.906467   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetMachineName
	I0914 18:08:43.906725   62996 buildroot.go:166] provisioning hostname "old-k8s-version-556121"
	I0914 18:08:43.906787   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetMachineName
	I0914 18:08:43.906978   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHHostname
	I0914 18:08:43.909891   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 18:08:43.910262   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:25:ab", ip: ""} in network mk-old-k8s-version-556121: {Iface:virbr1 ExpiryTime:2024-09-14 19:08:35 +0000 UTC Type:0 Mac:52:54:00:76:25:ab Iaid: IPaddr:192.168.83.80 Prefix:24 Hostname:old-k8s-version-556121 Clientid:01:52:54:00:76:25:ab}
	I0914 18:08:43.910295   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined IP address 192.168.83.80 and MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 18:08:43.910545   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHPort
	I0914 18:08:43.910771   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHKeyPath
	I0914 18:08:43.910908   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHKeyPath
	I0914 18:08:43.911062   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHUsername
	I0914 18:08:43.911221   62996 main.go:141] libmachine: Using SSH client type: native
	I0914 18:08:43.911418   62996 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.83.80 22 <nil> <nil>}
	I0914 18:08:43.911430   62996 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-556121 && echo "old-k8s-version-556121" | sudo tee /etc/hostname
	I0914 18:08:44.028748   62996 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-556121
	
	I0914 18:08:44.028774   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHHostname
	I0914 18:08:44.031512   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 18:08:44.031824   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:25:ab", ip: ""} in network mk-old-k8s-version-556121: {Iface:virbr1 ExpiryTime:2024-09-14 19:08:35 +0000 UTC Type:0 Mac:52:54:00:76:25:ab Iaid: IPaddr:192.168.83.80 Prefix:24 Hostname:old-k8s-version-556121 Clientid:01:52:54:00:76:25:ab}
	I0914 18:08:44.031848   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined IP address 192.168.83.80 and MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 18:08:44.032009   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHPort
	I0914 18:08:44.032145   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHKeyPath
	I0914 18:08:44.032311   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHKeyPath
	I0914 18:08:44.032445   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHUsername
	I0914 18:08:44.032583   62996 main.go:141] libmachine: Using SSH client type: native
	I0914 18:08:44.032792   62996 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.83.80 22 <nil> <nil>}
	I0914 18:08:44.032809   62996 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-556121' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-556121/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-556121' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0914 18:08:44.140041   62996 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0914 18:08:44.140068   62996 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19643-8806/.minikube CaCertPath:/home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19643-8806/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19643-8806/.minikube}
	I0914 18:08:44.140094   62996 buildroot.go:174] setting up certificates
	I0914 18:08:44.140103   62996 provision.go:84] configureAuth start
	I0914 18:08:44.140111   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetMachineName
	I0914 18:08:44.140439   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetIP
	I0914 18:08:44.143050   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 18:08:44.143454   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:25:ab", ip: ""} in network mk-old-k8s-version-556121: {Iface:virbr1 ExpiryTime:2024-09-14 19:08:35 +0000 UTC Type:0 Mac:52:54:00:76:25:ab Iaid: IPaddr:192.168.83.80 Prefix:24 Hostname:old-k8s-version-556121 Clientid:01:52:54:00:76:25:ab}
	I0914 18:08:44.143492   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined IP address 192.168.83.80 and MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 18:08:44.143678   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHHostname
	I0914 18:08:44.146487   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 18:08:44.146947   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:25:ab", ip: ""} in network mk-old-k8s-version-556121: {Iface:virbr1 ExpiryTime:2024-09-14 19:08:35 +0000 UTC Type:0 Mac:52:54:00:76:25:ab Iaid: IPaddr:192.168.83.80 Prefix:24 Hostname:old-k8s-version-556121 Clientid:01:52:54:00:76:25:ab}
	I0914 18:08:44.146971   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined IP address 192.168.83.80 and MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 18:08:44.147147   62996 provision.go:143] copyHostCerts
	I0914 18:08:44.147213   62996 exec_runner.go:144] found /home/jenkins/minikube-integration/19643-8806/.minikube/ca.pem, removing ...
	I0914 18:08:44.147224   62996 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19643-8806/.minikube/ca.pem
	I0914 18:08:44.147287   62996 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19643-8806/.minikube/ca.pem (1082 bytes)
	I0914 18:08:44.147440   62996 exec_runner.go:144] found /home/jenkins/minikube-integration/19643-8806/.minikube/cert.pem, removing ...
	I0914 18:08:44.147450   62996 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19643-8806/.minikube/cert.pem
	I0914 18:08:44.147475   62996 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19643-8806/.minikube/cert.pem (1123 bytes)
	I0914 18:08:44.147530   62996 exec_runner.go:144] found /home/jenkins/minikube-integration/19643-8806/.minikube/key.pem, removing ...
	I0914 18:08:44.147538   62996 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19643-8806/.minikube/key.pem
	I0914 18:08:44.147558   62996 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19643-8806/.minikube/key.pem (1675 bytes)
	I0914 18:08:44.147613   62996 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19643-8806/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-556121 san=[127.0.0.1 192.168.83.80 localhost minikube old-k8s-version-556121]
	I0914 18:08:44.500305   62996 provision.go:177] copyRemoteCerts
	I0914 18:08:44.500395   62996 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0914 18:08:44.500430   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHHostname
	I0914 18:08:44.503376   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 18:08:44.503790   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:25:ab", ip: ""} in network mk-old-k8s-version-556121: {Iface:virbr1 ExpiryTime:2024-09-14 19:08:35 +0000 UTC Type:0 Mac:52:54:00:76:25:ab Iaid: IPaddr:192.168.83.80 Prefix:24 Hostname:old-k8s-version-556121 Clientid:01:52:54:00:76:25:ab}
	I0914 18:08:44.503828   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined IP address 192.168.83.80 and MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 18:08:44.503972   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHPort
	I0914 18:08:44.504194   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHKeyPath
	I0914 18:08:44.504352   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHUsername
	I0914 18:08:44.504531   62996 sshutil.go:53] new ssh client: &{IP:192.168.83.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/old-k8s-version-556121/id_rsa Username:docker}
	I0914 18:08:44.584362   62996 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0914 18:08:44.607734   62996 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0914 18:08:44.630267   62996 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0914 18:08:44.653997   62996 provision.go:87] duration metric: took 513.857804ms to configureAuth
	I0914 18:08:44.654029   62996 buildroot.go:189] setting minikube options for container-runtime
	I0914 18:08:44.654259   62996 config.go:182] Loaded profile config "old-k8s-version-556121": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0914 18:08:44.654338   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHHostname
	I0914 18:08:44.657020   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 18:08:44.657416   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:25:ab", ip: ""} in network mk-old-k8s-version-556121: {Iface:virbr1 ExpiryTime:2024-09-14 19:08:35 +0000 UTC Type:0 Mac:52:54:00:76:25:ab Iaid: IPaddr:192.168.83.80 Prefix:24 Hostname:old-k8s-version-556121 Clientid:01:52:54:00:76:25:ab}
	I0914 18:08:44.657442   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined IP address 192.168.83.80 and MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 18:08:44.657676   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHPort
	I0914 18:08:44.657884   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHKeyPath
	I0914 18:08:44.658047   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHKeyPath
	I0914 18:08:44.658228   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHUsername
	I0914 18:08:44.658382   62996 main.go:141] libmachine: Using SSH client type: native
	I0914 18:08:44.658584   62996 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.83.80 22 <nil> <nil>}
	I0914 18:08:44.658602   62996 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0914 18:08:44.877074   62996 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0914 18:08:44.877103   62996 machine.go:96] duration metric: took 1.074648772s to provisionDockerMachine
	I0914 18:08:44.877117   62996 start.go:293] postStartSetup for "old-k8s-version-556121" (driver="kvm2")
	I0914 18:08:44.877128   62996 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0914 18:08:44.877155   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .DriverName
	I0914 18:08:44.877491   62996 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0914 18:08:44.877522   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHHostname
	I0914 18:08:44.880792   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 18:08:44.881167   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:25:ab", ip: ""} in network mk-old-k8s-version-556121: {Iface:virbr1 ExpiryTime:2024-09-14 19:08:35 +0000 UTC Type:0 Mac:52:54:00:76:25:ab Iaid: IPaddr:192.168.83.80 Prefix:24 Hostname:old-k8s-version-556121 Clientid:01:52:54:00:76:25:ab}
	I0914 18:08:44.881197   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined IP address 192.168.83.80 and MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 18:08:44.881472   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHPort
	I0914 18:08:44.881693   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHKeyPath
	I0914 18:08:44.881853   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHUsername
	I0914 18:08:44.881984   62996 sshutil.go:53] new ssh client: &{IP:192.168.83.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/old-k8s-version-556121/id_rsa Username:docker}
	I0914 18:08:44.961211   62996 ssh_runner.go:195] Run: cat /etc/os-release
	I0914 18:08:44.965472   62996 info.go:137] Remote host: Buildroot 2023.02.9
	I0914 18:08:44.965507   62996 filesync.go:126] Scanning /home/jenkins/minikube-integration/19643-8806/.minikube/addons for local assets ...
	I0914 18:08:44.965583   62996 filesync.go:126] Scanning /home/jenkins/minikube-integration/19643-8806/.minikube/files for local assets ...
	I0914 18:08:44.965671   62996 filesync.go:149] local asset: /home/jenkins/minikube-integration/19643-8806/.minikube/files/etc/ssl/certs/160162.pem -> 160162.pem in /etc/ssl/certs
	I0914 18:08:44.965765   62996 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0914 18:08:44.975476   62996 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/files/etc/ssl/certs/160162.pem --> /etc/ssl/certs/160162.pem (1708 bytes)
	I0914 18:08:45.000248   62996 start.go:296] duration metric: took 123.115178ms for postStartSetup
	I0914 18:08:45.000299   62996 fix.go:56] duration metric: took 20.85719914s for fixHost
	I0914 18:08:45.000326   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHHostname
	I0914 18:08:45.002894   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 18:08:45.003216   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:25:ab", ip: ""} in network mk-old-k8s-version-556121: {Iface:virbr1 ExpiryTime:2024-09-14 19:08:35 +0000 UTC Type:0 Mac:52:54:00:76:25:ab Iaid: IPaddr:192.168.83.80 Prefix:24 Hostname:old-k8s-version-556121 Clientid:01:52:54:00:76:25:ab}
	I0914 18:08:45.003247   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined IP address 192.168.83.80 and MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 18:08:45.003407   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHPort
	I0914 18:08:45.003585   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHKeyPath
	I0914 18:08:45.003749   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHKeyPath
	I0914 18:08:45.003880   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHUsername
	I0914 18:08:45.004041   62996 main.go:141] libmachine: Using SSH client type: native
	I0914 18:08:45.004211   62996 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.83.80 22 <nil> <nil>}
	I0914 18:08:45.004221   62996 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0914 18:08:45.102905   62996 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726337325.064071007
	
	I0914 18:08:45.102933   62996 fix.go:216] guest clock: 1726337325.064071007
	I0914 18:08:45.102944   62996 fix.go:229] Guest: 2024-09-14 18:08:45.064071007 +0000 UTC Remote: 2024-09-14 18:08:45.000305051 +0000 UTC m=+219.697616364 (delta=63.765956ms)
	I0914 18:08:45.102967   62996 fix.go:200] guest clock delta is within tolerance: 63.765956ms
	I0914 18:08:45.102973   62996 start.go:83] releasing machines lock for "old-k8s-version-556121", held for 20.959903428s
	I0914 18:08:45.102999   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .DriverName
	I0914 18:08:45.103277   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetIP
	I0914 18:08:45.105995   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 18:08:45.106435   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:25:ab", ip: ""} in network mk-old-k8s-version-556121: {Iface:virbr1 ExpiryTime:2024-09-14 19:08:35 +0000 UTC Type:0 Mac:52:54:00:76:25:ab Iaid: IPaddr:192.168.83.80 Prefix:24 Hostname:old-k8s-version-556121 Clientid:01:52:54:00:76:25:ab}
	I0914 18:08:45.106463   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined IP address 192.168.83.80 and MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 18:08:45.106684   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .DriverName
	I0914 18:08:45.107224   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .DriverName
	I0914 18:08:45.107415   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .DriverName
	I0914 18:08:45.107506   62996 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0914 18:08:45.107556   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHHostname
	I0914 18:08:45.107675   62996 ssh_runner.go:195] Run: cat /version.json
	I0914 18:08:45.107699   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHHostname
	I0914 18:08:45.110528   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 18:08:45.110558   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 18:08:45.110917   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:25:ab", ip: ""} in network mk-old-k8s-version-556121: {Iface:virbr1 ExpiryTime:2024-09-14 19:08:35 +0000 UTC Type:0 Mac:52:54:00:76:25:ab Iaid: IPaddr:192.168.83.80 Prefix:24 Hostname:old-k8s-version-556121 Clientid:01:52:54:00:76:25:ab}
	I0914 18:08:45.110947   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:25:ab", ip: ""} in network mk-old-k8s-version-556121: {Iface:virbr1 ExpiryTime:2024-09-14 19:08:35 +0000 UTC Type:0 Mac:52:54:00:76:25:ab Iaid: IPaddr:192.168.83.80 Prefix:24 Hostname:old-k8s-version-556121 Clientid:01:52:54:00:76:25:ab}
	I0914 18:08:45.110969   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined IP address 192.168.83.80 and MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 18:08:45.111062   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined IP address 192.168.83.80 and MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 18:08:45.111157   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHPort
	I0914 18:08:45.111388   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHKeyPath
	I0914 18:08:45.111407   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHPort
	I0914 18:08:45.111564   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHUsername
	I0914 18:08:45.111582   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHKeyPath
	I0914 18:08:45.111716   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHUsername
	I0914 18:08:45.111758   62996 sshutil.go:53] new ssh client: &{IP:192.168.83.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/old-k8s-version-556121/id_rsa Username:docker}
	I0914 18:08:45.111829   62996 sshutil.go:53] new ssh client: &{IP:192.168.83.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/old-k8s-version-556121/id_rsa Username:docker}
	I0914 18:08:45.187315   62996 ssh_runner.go:195] Run: systemctl --version
	I0914 18:08:45.222737   62996 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0914 18:08:45.372449   62996 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0914 18:08:45.378337   62996 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0914 18:08:45.378395   62996 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0914 18:08:45.396041   62996 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0914 18:08:45.396072   62996 start.go:495] detecting cgroup driver to use...
	I0914 18:08:45.396148   62996 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0914 18:08:45.413530   62996 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0914 18:08:45.428876   62996 docker.go:217] disabling cri-docker service (if available) ...
	I0914 18:08:45.428950   62996 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0914 18:08:45.444066   62996 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0914 18:08:45.458976   62996 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0914 18:08:45.591808   62996 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0914 18:08:45.737299   62996 docker.go:233] disabling docker service ...
	I0914 18:08:45.737382   62996 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0914 18:08:45.752471   62996 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0914 18:08:45.770192   62996 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0914 18:08:45.923691   62996 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0914 18:08:46.054919   62996 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0914 18:08:46.068923   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0914 18:08:46.089366   62996 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0914 18:08:46.089441   62996 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 18:08:46.100025   62996 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0914 18:08:46.100100   62996 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 18:08:46.111015   62996 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 18:08:46.123133   62996 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 18:08:46.135582   62996 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0914 18:08:46.146937   62996 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0914 18:08:46.158542   62996 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0914 18:08:46.158618   62996 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0914 18:08:46.178181   62996 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0914 18:08:46.188291   62996 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 18:08:46.316875   62996 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0914 18:08:46.407391   62996 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0914 18:08:46.407470   62996 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0914 18:08:46.412103   62996 start.go:563] Will wait 60s for crictl version
	I0914 18:08:46.412164   62996 ssh_runner.go:195] Run: which crictl
	I0914 18:08:46.415903   62996 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0914 18:08:46.457124   62996 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0914 18:08:46.457224   62996 ssh_runner.go:195] Run: crio --version
	I0914 18:08:46.485380   62996 ssh_runner.go:195] Run: crio --version
	I0914 18:08:46.513525   62996 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
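Condensed from the ssh_runner calls above, the host-side container-runtime preparation for this node amounts roughly to the shell below (a sketch only; minikube drives these over SSH with its own ordering and error handling, and the pause image and cgroup values are the ones chosen in the log):

    # stop and mask the competing runtimes, then point crictl at CRI-O
    sudo systemctl stop -f containerd cri-docker.socket cri-docker.service docker.socket docker.service
    sudo systemctl disable cri-docker.socket docker.socket
    sudo systemctl mask cri-docker.service docker.service
    printf 'runtime-endpoint: unix:///var/run/crio/crio.sock\n' | sudo tee /etc/crictl.yaml
    # pause image and cgroup driver as set in the log (pause:3.2, cgroupfs, conmon in the pod cgroup)
    sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf
    # bridge netfilter was unavailable until br_netfilter was loaded; forwarding is enabled explicitly
    sudo modprobe br_netfilter
    sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'
    sudo systemctl daemon-reload && sudo systemctl restart crio
    sudo /usr/bin/crictl version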
	I0914 18:08:46.027201   62554 pod_ready.go:93] pod "coredns-7c65d6cfc9-59dm5" in "kube-system" namespace has status "Ready":"True"
	I0914 18:08:46.027223   62554 pod_ready.go:82] duration metric: took 8.506784658s for pod "coredns-7c65d6cfc9-59dm5" in "kube-system" namespace to be "Ready" ...
	I0914 18:08:46.027232   62554 pod_ready.go:79] waiting up to 4m0s for pod "etcd-embed-certs-044534" in "kube-system" namespace to be "Ready" ...
	I0914 18:08:47.043468   62554 pod_ready.go:93] pod "etcd-embed-certs-044534" in "kube-system" namespace has status "Ready":"True"
	I0914 18:08:47.043499   62554 pod_ready.go:82] duration metric: took 1.016259668s for pod "etcd-embed-certs-044534" in "kube-system" namespace to be "Ready" ...
	I0914 18:08:47.043513   62554 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-embed-certs-044534" in "kube-system" namespace to be "Ready" ...
	I0914 18:08:47.050825   62554 pod_ready.go:93] pod "kube-apiserver-embed-certs-044534" in "kube-system" namespace has status "Ready":"True"
	I0914 18:08:47.050853   62554 pod_ready.go:82] duration metric: took 7.332421ms for pod "kube-apiserver-embed-certs-044534" in "kube-system" namespace to be "Ready" ...
	I0914 18:08:47.050869   62554 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-044534" in "kube-system" namespace to be "Ready" ...
	I0914 18:08:47.561389   62554 pod_ready.go:93] pod "kube-controller-manager-embed-certs-044534" in "kube-system" namespace has status "Ready":"True"
	I0914 18:08:47.561419   62554 pod_ready.go:82] duration metric: took 510.541663ms for pod "kube-controller-manager-embed-certs-044534" in "kube-system" namespace to be "Ready" ...
	I0914 18:08:47.561434   62554 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-nkdth" in "kube-system" namespace to be "Ready" ...
	I0914 18:08:47.568265   62554 pod_ready.go:93] pod "kube-proxy-nkdth" in "kube-system" namespace has status "Ready":"True"
	I0914 18:08:47.568298   62554 pod_ready.go:82] duration metric: took 6.854878ms for pod "kube-proxy-nkdth" in "kube-system" namespace to be "Ready" ...
	I0914 18:08:47.568312   62554 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-embed-certs-044534" in "kube-system" namespace to be "Ready" ...
	I0914 18:08:48.575898   62554 pod_ready.go:93] pod "kube-scheduler-embed-certs-044534" in "kube-system" namespace has status "Ready":"True"
	I0914 18:08:48.575924   62554 pod_ready.go:82] duration metric: took 1.00760412s for pod "kube-scheduler-embed-certs-044534" in "kube-system" namespace to be "Ready" ...
	I0914 18:08:48.575934   62554 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace to be "Ready" ...
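The readiness gates above are checked through the Kubernetes API by pod_ready.go; by hand, the same Ready condition on the embed-certs-044534 control-plane pods could be inspected with kubectl roughly as follows (illustrative only, assuming the profile name doubles as the kubeconfig context):

    # show the Ready condition the test is polling for
    kubectl --context embed-certs-044534 -n kube-system get pod kube-scheduler-embed-certs-044534 \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
    # or block until it flips, mirroring the 4m0s wait window in the log
    kubectl --context embed-certs-044534 -n kube-system wait --for=condition=Ready \
      pod/metrics-server-6867b74b74-stwfz --timeout=4m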
	I0914 18:08:46.464001   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Waiting to get IP...
	I0914 18:08:46.465004   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | domain default-k8s-diff-port-243449 has defined MAC address 52:54:00:6e:0b:a7 in network mk-default-k8s-diff-port-243449
	I0914 18:08:46.465408   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | unable to find current IP address of domain default-k8s-diff-port-243449 in network mk-default-k8s-diff-port-243449
	I0914 18:08:46.465512   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | I0914 18:08:46.465391   64066 retry.go:31] will retry after 283.185405ms: waiting for machine to come up
	I0914 18:08:46.751155   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | domain default-k8s-diff-port-243449 has defined MAC address 52:54:00:6e:0b:a7 in network mk-default-k8s-diff-port-243449
	I0914 18:08:46.751669   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | unable to find current IP address of domain default-k8s-diff-port-243449 in network mk-default-k8s-diff-port-243449
	I0914 18:08:46.751697   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | I0914 18:08:46.751622   64066 retry.go:31] will retry after 307.273139ms: waiting for machine to come up
	I0914 18:08:47.060812   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | domain default-k8s-diff-port-243449 has defined MAC address 52:54:00:6e:0b:a7 in network mk-default-k8s-diff-port-243449
	I0914 18:08:47.061855   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | unable to find current IP address of domain default-k8s-diff-port-243449 in network mk-default-k8s-diff-port-243449
	I0914 18:08:47.061889   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | I0914 18:08:47.061749   64066 retry.go:31] will retry after 420.077307ms: waiting for machine to come up
	I0914 18:08:47.483188   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | domain default-k8s-diff-port-243449 has defined MAC address 52:54:00:6e:0b:a7 in network mk-default-k8s-diff-port-243449
	I0914 18:08:47.483611   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | unable to find current IP address of domain default-k8s-diff-port-243449 in network mk-default-k8s-diff-port-243449
	I0914 18:08:47.483656   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | I0914 18:08:47.483567   64066 retry.go:31] will retry after 562.15435ms: waiting for machine to come up
	I0914 18:08:48.047428   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | domain default-k8s-diff-port-243449 has defined MAC address 52:54:00:6e:0b:a7 in network mk-default-k8s-diff-port-243449
	I0914 18:08:48.047944   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | unable to find current IP address of domain default-k8s-diff-port-243449 in network mk-default-k8s-diff-port-243449
	I0914 18:08:48.047971   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | I0914 18:08:48.047867   64066 retry.go:31] will retry after 744.523152ms: waiting for machine to come up
	I0914 18:08:48.793959   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | domain default-k8s-diff-port-243449 has defined MAC address 52:54:00:6e:0b:a7 in network mk-default-k8s-diff-port-243449
	I0914 18:08:48.794449   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | unable to find current IP address of domain default-k8s-diff-port-243449 in network mk-default-k8s-diff-port-243449
	I0914 18:08:48.794492   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | I0914 18:08:48.794393   64066 retry.go:31] will retry after 813.631617ms: waiting for machine to come up
	I0914 18:08:49.609483   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | domain default-k8s-diff-port-243449 has defined MAC address 52:54:00:6e:0b:a7 in network mk-default-k8s-diff-port-243449
	I0914 18:08:49.609946   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | unable to find current IP address of domain default-k8s-diff-port-243449 in network mk-default-k8s-diff-port-243449
	I0914 18:08:49.609969   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | I0914 18:08:49.609904   64066 retry.go:31] will retry after 941.244861ms: waiting for machine to come up
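While the kvm2 driver polls libvirt for the machine's DHCP lease above, the same state can be inspected by hand with virsh (illustrative; the network and domain names are taken from the log):

    # list leases handed out on the minikube-managed network
    virsh --connect qemu:///system net-dhcp-leases mk-default-k8s-diff-port-243449
    # confirm the domain's interface really carries MAC 52:54:00:6e:0b:a7
    virsh --connect qemu:///system domiflist default-k8s-diff-port-243449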
	I0914 18:08:46.515031   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetIP
	I0914 18:08:46.517851   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 18:08:46.518301   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:25:ab", ip: ""} in network mk-old-k8s-version-556121: {Iface:virbr1 ExpiryTime:2024-09-14 19:08:35 +0000 UTC Type:0 Mac:52:54:00:76:25:ab Iaid: IPaddr:192.168.83.80 Prefix:24 Hostname:old-k8s-version-556121 Clientid:01:52:54:00:76:25:ab}
	I0914 18:08:46.518329   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined IP address 192.168.83.80 and MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 18:08:46.518560   62996 ssh_runner.go:195] Run: grep 192.168.83.1	host.minikube.internal$ /etc/hosts
	I0914 18:08:46.522559   62996 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.83.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0914 18:08:46.536122   62996 kubeadm.go:883] updating cluster {Name:old-k8s-version-556121 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19643/minikube-v1.34.0-1726281733-19643-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726281268-19643@sha256:cce8be0c1ac4e3d852132008ef1cc1dcf5b79f708d025db83f146ae65db32e8e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-556121 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.83.80 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0914 18:08:46.536233   62996 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0914 18:08:46.536272   62996 ssh_runner.go:195] Run: sudo crictl images --output json
	I0914 18:08:46.582326   62996 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0914 18:08:46.582385   62996 ssh_runner.go:195] Run: which lz4
	I0914 18:08:46.586381   62996 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0914 18:08:46.590252   62996 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0914 18:08:46.590302   62996 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0914 18:08:48.262036   62996 crio.go:462] duration metric: took 1.6757003s to copy over tarball
	I0914 18:08:48.262113   62996 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
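The preload handling above boils down to: check for the tarball on the guest, copy the cached one over if it is missing, and unpack it into /var. A rough manual equivalent, assuming root SSH access to the node at 192.168.83.80:

    # absent on a fresh VM, so the ~473 MB cached tarball is transferred
    stat -c "%s %y" /preloaded.tar.lz4
    scp /home/jenkins/minikube-integration/19643-8806/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 \
        root@192.168.83.80:/preloaded.tar.lz4
    # unpack the image layers (preserving xattrs) straight into the container storage under /var
    sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
    sudo rm /preloaded.tar.lz4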
	I0914 18:08:50.583860   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:08:52.826559   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:08:50.553210   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | domain default-k8s-diff-port-243449 has defined MAC address 52:54:00:6e:0b:a7 in network mk-default-k8s-diff-port-243449
	I0914 18:08:50.553735   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | unable to find current IP address of domain default-k8s-diff-port-243449 in network mk-default-k8s-diff-port-243449
	I0914 18:08:50.553764   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | I0914 18:08:50.553671   64066 retry.go:31] will retry after 1.107692241s: waiting for machine to come up
	I0914 18:08:51.663218   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | domain default-k8s-diff-port-243449 has defined MAC address 52:54:00:6e:0b:a7 in network mk-default-k8s-diff-port-243449
	I0914 18:08:51.663723   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | unable to find current IP address of domain default-k8s-diff-port-243449 in network mk-default-k8s-diff-port-243449
	I0914 18:08:51.663753   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | I0914 18:08:51.663681   64066 retry.go:31] will retry after 1.357435642s: waiting for machine to come up
	I0914 18:08:53.022246   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | domain default-k8s-diff-port-243449 has defined MAC address 52:54:00:6e:0b:a7 in network mk-default-k8s-diff-port-243449
	I0914 18:08:53.022695   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | unable to find current IP address of domain default-k8s-diff-port-243449 in network mk-default-k8s-diff-port-243449
	I0914 18:08:53.022726   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | I0914 18:08:53.022628   64066 retry.go:31] will retry after 2.045434586s: waiting for machine to come up
	I0914 18:08:55.070946   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | domain default-k8s-diff-port-243449 has defined MAC address 52:54:00:6e:0b:a7 in network mk-default-k8s-diff-port-243449
	I0914 18:08:55.071420   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | unable to find current IP address of domain default-k8s-diff-port-243449 in network mk-default-k8s-diff-port-243449
	I0914 18:08:55.071450   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | I0914 18:08:55.071362   64066 retry.go:31] will retry after 2.084823885s: waiting for machine to come up
	I0914 18:08:51.259991   62996 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.997823346s)
	I0914 18:08:51.260027   62996 crio.go:469] duration metric: took 2.997963105s to extract the tarball
	I0914 18:08:51.260037   62996 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0914 18:08:51.303210   62996 ssh_runner.go:195] Run: sudo crictl images --output json
	I0914 18:08:51.337655   62996 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0914 18:08:51.337685   62996 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0914 18:08:51.337793   62996 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0914 18:08:51.337910   62996 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0914 18:08:51.337941   62996 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0914 18:08:51.337950   62996 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0914 18:08:51.337800   62996 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0914 18:08:51.337803   62996 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0914 18:08:51.337791   62996 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 18:08:51.337823   62996 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0914 18:08:51.339846   62996 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0914 18:08:51.339855   62996 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0914 18:08:51.339875   62996 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0914 18:08:51.339865   62996 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 18:08:51.339901   62996 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0914 18:08:51.339935   62996 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0914 18:08:51.339958   62996 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0914 18:08:51.339949   62996 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0914 18:08:51.528665   62996 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0914 18:08:51.570817   62996 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0914 18:08:51.575861   62996 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0914 18:08:51.575917   62996 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0914 18:08:51.575968   62996 ssh_runner.go:195] Run: which crictl
	I0914 18:08:51.576612   62996 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0914 18:08:51.577804   62996 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0914 18:08:51.578496   62996 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0914 18:08:51.581833   62996 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0914 18:08:51.613046   62996 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0914 18:08:51.724554   62996 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0914 18:08:51.724608   62996 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0914 18:08:51.724611   62996 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0914 18:08:51.724713   62996 ssh_runner.go:195] Run: which crictl
	I0914 18:08:51.757578   62996 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0914 18:08:51.757628   62996 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0914 18:08:51.757677   62996 ssh_runner.go:195] Run: which crictl
	I0914 18:08:51.772578   62996 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0914 18:08:51.772597   62996 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0914 18:08:51.772629   62996 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0914 18:08:51.772634   62996 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0914 18:08:51.772659   62996 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0914 18:08:51.772690   62996 ssh_runner.go:195] Run: which crictl
	I0914 18:08:51.772704   62996 ssh_runner.go:195] Run: which crictl
	I0914 18:08:51.772633   62996 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0914 18:08:51.772748   62996 ssh_runner.go:195] Run: which crictl
	I0914 18:08:51.790305   62996 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0914 18:08:51.790442   62996 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0914 18:08:51.790492   62996 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0914 18:08:51.790534   62996 ssh_runner.go:195] Run: which crictl
	I0914 18:08:51.799286   62996 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0914 18:08:51.799338   62996 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0914 18:08:51.799395   62996 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0914 18:08:51.799446   62996 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0914 18:08:51.799486   62996 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0914 18:08:51.937830   62996 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0914 18:08:51.937839   62996 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0914 18:08:51.937918   62996 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0914 18:08:51.940605   62996 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0914 18:08:51.940670   62996 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0914 18:08:51.940723   62996 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0914 18:08:51.962218   62996 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0914 18:08:52.063106   62996 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0914 18:08:52.112424   62996 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0914 18:08:52.112498   62996 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0914 18:08:52.112521   62996 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0914 18:08:52.112602   62996 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19643-8806/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0914 18:08:52.112608   62996 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0914 18:08:52.112737   62996 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0914 18:08:52.149523   62996 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19643-8806/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0914 18:08:52.230998   62996 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0914 18:08:52.231015   62996 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19643-8806/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0914 18:08:52.234715   62996 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19643-8806/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0914 18:08:52.234737   62996 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19643-8806/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0914 18:08:52.234813   62996 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19643-8806/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0914 18:08:52.268145   62996 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19643-8806/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0914 18:08:52.500688   62996 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 18:08:52.641559   62996 cache_images.go:92] duration metric: took 1.303851383s to LoadCachedImages
	W0914 18:08:52.641671   62996 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19643-8806/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0: no such file or directory
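For each of the eight v1.20.0 images the flow above is the same: ask podman what the runtime has, remove the tag whose hash does not match, then try to load the copy from the local image cache. For kube-proxy that looks roughly like the sketch below, and the last step is exactly what failed here because the cached file was never present on the build host:

    # the hash in the runtime does not match the expected one, so the tag is dropped
    sudo podman image inspect --format '{{.Id}}' registry.k8s.io/kube-proxy:v1.20.0
    sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
    # the reload source is missing, hence "Unable to load cached images"
    ls -l /home/jenkins/minikube-integration/19643-8806/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0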
	I0914 18:08:52.641690   62996 kubeadm.go:934] updating node { 192.168.83.80 8443 v1.20.0 crio true true} ...
	I0914 18:08:52.641822   62996 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-556121 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.83.80
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-556121 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0914 18:08:52.641918   62996 ssh_runner.go:195] Run: crio config
	I0914 18:08:52.691852   62996 cni.go:84] Creating CNI manager for ""
	I0914 18:08:52.691878   62996 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0914 18:08:52.691888   62996 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0914 18:08:52.691906   62996 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.83.80 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-556121 NodeName:old-k8s-version-556121 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.83.80"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.83.80 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0914 18:08:52.692037   62996 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.83.80
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-556121"
	  kubeletExtraArgs:
	    node-ip: 192.168.83.80
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.83.80"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0914 18:08:52.692122   62996 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0914 18:08:52.701735   62996 binaries.go:44] Found k8s binaries, skipping transfer
	I0914 18:08:52.701810   62996 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0914 18:08:52.711224   62996 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0914 18:08:52.728991   62996 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0914 18:08:52.746689   62996 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
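At this point the kubelet drop-in, the kubelet unit and the kubeadm config rendered above have been written to the node. A quick consistency check one could run by hand (a sketch; the file paths are the ones from the log) is that the cgroup driver configured for CRI-O earlier matches the KubeletConfiguration:

    # both should report cgroupfs, otherwise the kubelet will not drive CRI-O cleanly
    sudo crio config | grep cgroup_manager
    grep cgroupDriver /var/tmp/minikube/kubeadm.yaml.new
    # the kubelet flags shown in the log end up in this drop-in
    cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf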
	I0914 18:08:52.765724   62996 ssh_runner.go:195] Run: grep 192.168.83.80	control-plane.minikube.internal$ /etc/hosts
	I0914 18:08:52.769968   62996 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.83.80	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0914 18:08:52.782728   62996 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 18:08:52.910650   62996 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0914 18:08:52.927202   62996 certs.go:68] Setting up /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/old-k8s-version-556121 for IP: 192.168.83.80
	I0914 18:08:52.927226   62996 certs.go:194] generating shared ca certs ...
	I0914 18:08:52.927247   62996 certs.go:226] acquiring lock for ca certs: {Name:mkb663a3180967f5f94f0c355b2cd55067394331 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 18:08:52.927426   62996 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19643-8806/.minikube/ca.key
	I0914 18:08:52.927478   62996 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19643-8806/.minikube/proxy-client-ca.key
	I0914 18:08:52.927488   62996 certs.go:256] generating profile certs ...
	I0914 18:08:52.927584   62996 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/old-k8s-version-556121/client.key
	I0914 18:08:52.927642   62996 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/old-k8s-version-556121/apiserver.key.faf839ab
	I0914 18:08:52.927706   62996 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/old-k8s-version-556121/proxy-client.key
	I0914 18:08:52.927873   62996 certs.go:484] found cert: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/16016.pem (1338 bytes)
	W0914 18:08:52.927906   62996 certs.go:480] ignoring /home/jenkins/minikube-integration/19643-8806/.minikube/certs/16016_empty.pem, impossibly tiny 0 bytes
	I0914 18:08:52.927916   62996 certs.go:484] found cert: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca-key.pem (1679 bytes)
	I0914 18:08:52.927938   62996 certs.go:484] found cert: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca.pem (1082 bytes)
	I0914 18:08:52.927960   62996 certs.go:484] found cert: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/cert.pem (1123 bytes)
	I0914 18:08:52.927982   62996 certs.go:484] found cert: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/key.pem (1675 bytes)
	I0914 18:08:52.928018   62996 certs.go:484] found cert: /home/jenkins/minikube-integration/19643-8806/.minikube/files/etc/ssl/certs/160162.pem (1708 bytes)
	I0914 18:08:52.928623   62996 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0914 18:08:52.991610   62996 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0914 18:08:53.017660   62996 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0914 18:08:53.044552   62996 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0914 18:08:53.073612   62996 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/old-k8s-version-556121/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0914 18:08:53.125813   62996 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/old-k8s-version-556121/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0914 18:08:53.157202   62996 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/old-k8s-version-556121/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0914 18:08:53.201480   62996 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/old-k8s-version-556121/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0914 18:08:53.226725   62996 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/files/etc/ssl/certs/160162.pem --> /usr/share/ca-certificates/160162.pem (1708 bytes)
	I0914 18:08:53.250793   62996 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0914 18:08:53.275519   62996 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/certs/16016.pem --> /usr/share/ca-certificates/16016.pem (1338 bytes)
	I0914 18:08:53.300545   62996 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0914 18:08:53.317709   62996 ssh_runner.go:195] Run: openssl version
	I0914 18:08:53.323602   62996 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/160162.pem && ln -fs /usr/share/ca-certificates/160162.pem /etc/ssl/certs/160162.pem"
	I0914 18:08:53.335011   62996 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/160162.pem
	I0914 18:08:53.339838   62996 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 14 17:01 /usr/share/ca-certificates/160162.pem
	I0914 18:08:53.339909   62996 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/160162.pem
	I0914 18:08:53.346100   62996 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/160162.pem /etc/ssl/certs/3ec20f2e.0"
	I0914 18:08:53.359186   62996 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0914 18:08:53.370507   62996 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0914 18:08:53.375153   62996 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 14 16:44 /usr/share/ca-certificates/minikubeCA.pem
	I0914 18:08:53.375223   62996 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0914 18:08:53.380939   62996 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0914 18:08:53.392163   62996 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16016.pem && ln -fs /usr/share/ca-certificates/16016.pem /etc/ssl/certs/16016.pem"
	I0914 18:08:53.404356   62996 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16016.pem
	I0914 18:08:53.409052   62996 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 14 17:01 /usr/share/ca-certificates/16016.pem
	I0914 18:08:53.409134   62996 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16016.pem
	I0914 18:08:53.415280   62996 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16016.pem /etc/ssl/certs/51391683.0"
	I0914 18:08:53.426864   62996 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0914 18:08:53.431690   62996 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0914 18:08:53.437920   62996 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0914 18:08:53.444244   62996 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0914 18:08:53.450762   62996 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0914 18:08:53.457107   62996 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0914 18:08:53.463041   62996 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
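The certificate plumbing above does two things: hook the extra CAs into the hashed OpenSSL trust store, and confirm that none of the cluster certificates expires within the next 24 hours (-checkend 86400). By hand that is roughly:

    # compute the subject hash and create the c_rehash-style symlink the log sets up
    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem     # prints b5213941
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
    # non-zero exit if the certificate expires within the next 86400 seconds
    sudo openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400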
	I0914 18:08:53.469401   62996 kubeadm.go:392] StartCluster: {Name:old-k8s-version-556121 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19643/minikube-v1.34.0-1726281733-19643-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726281268-19643@sha256:cce8be0c1ac4e3d852132008ef1cc1dcf5b79f708d025db83f146ae65db32e8e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-556121 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.83.80 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0914 18:08:53.469509   62996 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0914 18:08:53.469568   62996 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0914 18:08:53.508602   62996 cri.go:89] found id: ""
	I0914 18:08:53.508668   62996 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0914 18:08:53.518645   62996 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0914 18:08:53.518666   62996 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0914 18:08:53.518719   62996 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0914 18:08:53.530459   62996 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0914 18:08:53.531439   62996 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-556121" does not appear in /home/jenkins/minikube-integration/19643-8806/kubeconfig
	I0914 18:08:53.532109   62996 kubeconfig.go:62] /home/jenkins/minikube-integration/19643-8806/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-556121" cluster setting kubeconfig missing "old-k8s-version-556121" context setting]
	I0914 18:08:53.532952   62996 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19643-8806/kubeconfig: {Name:mk850f3e2f3c2ef1e81c795ae72e22e459e27140 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 18:08:53.611765   62996 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0914 18:08:53.622817   62996 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.83.80
	I0914 18:08:53.622854   62996 kubeadm.go:1160] stopping kube-system containers ...
	I0914 18:08:53.622866   62996 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0914 18:08:53.622919   62996 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0914 18:08:53.659041   62996 cri.go:89] found id: ""
	I0914 18:08:53.659191   62996 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0914 18:08:53.680543   62996 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0914 18:08:53.693835   62996 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0914 18:08:53.693854   62996 kubeadm.go:157] found existing configuration files:
	
	I0914 18:08:53.693907   62996 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0914 18:08:53.704221   62996 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0914 18:08:53.704300   62996 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0914 18:08:53.713947   62996 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0914 18:08:53.722981   62996 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0914 18:08:53.723056   62996 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0914 18:08:53.733059   62996 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0914 18:08:53.742233   62996 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0914 18:08:53.742305   62996 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0914 18:08:53.752182   62996 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0914 18:08:53.761890   62996 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0914 18:08:53.761965   62996 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0914 18:08:53.771448   62996 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0914 18:08:53.781385   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 18:08:53.911483   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 18:08:55.084673   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:08:57.582709   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:08:59.583340   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:08:57.158301   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | domain default-k8s-diff-port-243449 has defined MAC address 52:54:00:6e:0b:a7 in network mk-default-k8s-diff-port-243449
	I0914 18:08:57.158679   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | unable to find current IP address of domain default-k8s-diff-port-243449 in network mk-default-k8s-diff-port-243449
	I0914 18:08:57.158705   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | I0914 18:08:57.158640   64066 retry.go:31] will retry after 2.492994369s: waiting for machine to come up
	I0914 18:08:59.654137   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | domain default-k8s-diff-port-243449 has defined MAC address 52:54:00:6e:0b:a7 in network mk-default-k8s-diff-port-243449
	I0914 18:08:59.654550   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | unable to find current IP address of domain default-k8s-diff-port-243449 in network mk-default-k8s-diff-port-243449
	I0914 18:08:59.654585   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | I0914 18:08:59.654496   64066 retry.go:31] will retry after 3.393327124s: waiting for machine to come up
	I0914 18:08:55.409007   62996 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.497486764s)
	I0914 18:08:55.409041   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0914 18:08:55.640260   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 18:08:55.761785   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0914 18:08:55.873260   62996 api_server.go:52] waiting for apiserver process to appear ...
	I0914 18:08:55.873350   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:08:56.373512   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:08:56.874440   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:08:57.374464   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:08:57.874099   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:08:58.374014   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:08:58.873763   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:08:59.373845   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:08:59.873929   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
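Because no kube-system containers and no kubeconfigs were found, restartPrimaryControlPlane falls through to regenerating everything with the bundled kubeadm and then polls for the apiserver process. Condensed from the commands above (a sketch; KBIN is just shorthand for the binaries path shown in the log):

    KBIN=/var/lib/minikube/binaries/v1.20.0
    sudo env PATH="$KBIN:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml
    sudo env PATH="$KBIN:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml
    sudo env PATH="$KBIN:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml
    sudo env PATH="$KBIN:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml
    sudo env PATH="$KBIN:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml
    # the wait loop above repeats this roughly every 500ms until a kube-apiserver process shows up
    sudo pgrep -xnf 'kube-apiserver.*minikube.*'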
	I0914 18:09:04.466791   62207 start.go:364] duration metric: took 54.917996405s to acquireMachinesLock for "no-preload-168587"
	I0914 18:09:04.466845   62207 start.go:96] Skipping create...Using existing machine configuration
	I0914 18:09:04.466863   62207 fix.go:54] fixHost starting: 
	I0914 18:09:04.467265   62207 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 18:09:04.467303   62207 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 18:09:04.485295   62207 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41403
	I0914 18:09:04.485680   62207 main.go:141] libmachine: () Calling .GetVersion
	I0914 18:09:04.486195   62207 main.go:141] libmachine: Using API Version  1
	I0914 18:09:04.486221   62207 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 18:09:04.486625   62207 main.go:141] libmachine: () Calling .GetMachineName
	I0914 18:09:04.486825   62207 main.go:141] libmachine: (no-preload-168587) Calling .DriverName
	I0914 18:09:04.486985   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetState
	I0914 18:09:04.488546   62207 fix.go:112] recreateIfNeeded on no-preload-168587: state=Stopped err=<nil>
	I0914 18:09:04.488584   62207 main.go:141] libmachine: (no-preload-168587) Calling .DriverName
	W0914 18:09:04.488749   62207 fix.go:138] unexpected machine state, will restart: <nil>
	I0914 18:09:04.491638   62207 out.go:177] * Restarting existing kvm2 VM for "no-preload-168587" ...
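fixHost found the no-preload-168587 domain stopped, so the kvm2 driver boots it again through libvirt. Checking and starting it manually would look roughly like this (illustrative only; the driver goes through its plugin RPC, not virsh):

    virsh --connect qemu:///system domstate no-preload-168587    # reports "shut off" before the restart
    virsh --connect qemu:///system start no-preload-168587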
	I0914 18:09:02.082684   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:09:04.582135   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:09:03.051442   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | domain default-k8s-diff-port-243449 has defined MAC address 52:54:00:6e:0b:a7 in network mk-default-k8s-diff-port-243449
	I0914 18:09:03.051882   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | domain default-k8s-diff-port-243449 has current primary IP address 192.168.61.38 and MAC address 52:54:00:6e:0b:a7 in network mk-default-k8s-diff-port-243449
	I0914 18:09:03.051904   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Found IP for machine: 192.168.61.38
	I0914 18:09:03.051946   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Reserving static IP address...
	I0914 18:09:03.052245   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-243449", mac: "52:54:00:6e:0b:a7", ip: "192.168.61.38"} in network mk-default-k8s-diff-port-243449: {Iface:virbr4 ExpiryTime:2024-09-14 19:08:56 +0000 UTC Type:0 Mac:52:54:00:6e:0b:a7 Iaid: IPaddr:192.168.61.38 Prefix:24 Hostname:default-k8s-diff-port-243449 Clientid:01:52:54:00:6e:0b:a7}
	I0914 18:09:03.052269   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | skip adding static IP to network mk-default-k8s-diff-port-243449 - found existing host DHCP lease matching {name: "default-k8s-diff-port-243449", mac: "52:54:00:6e:0b:a7", ip: "192.168.61.38"}
	I0914 18:09:03.052280   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Reserved static IP address: 192.168.61.38
	I0914 18:09:03.052289   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Waiting for SSH to be available...
	I0914 18:09:03.052306   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | Getting to WaitForSSH function...
	I0914 18:09:03.054154   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | domain default-k8s-diff-port-243449 has defined MAC address 52:54:00:6e:0b:a7 in network mk-default-k8s-diff-port-243449
	I0914 18:09:03.054555   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:0b:a7", ip: ""} in network mk-default-k8s-diff-port-243449: {Iface:virbr4 ExpiryTime:2024-09-14 19:08:56 +0000 UTC Type:0 Mac:52:54:00:6e:0b:a7 Iaid: IPaddr:192.168.61.38 Prefix:24 Hostname:default-k8s-diff-port-243449 Clientid:01:52:54:00:6e:0b:a7}
	I0914 18:09:03.054596   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | domain default-k8s-diff-port-243449 has defined IP address 192.168.61.38 and MAC address 52:54:00:6e:0b:a7 in network mk-default-k8s-diff-port-243449
	I0914 18:09:03.054745   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | Using SSH client type: external
	I0914 18:09:03.054777   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | Using SSH private key: /home/jenkins/minikube-integration/19643-8806/.minikube/machines/default-k8s-diff-port-243449/id_rsa (-rw-------)
	I0914 18:09:03.054813   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.38 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19643-8806/.minikube/machines/default-k8s-diff-port-243449/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0914 18:09:03.054828   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | About to run SSH command:
	I0914 18:09:03.054841   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | exit 0
	I0914 18:09:03.178065   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | SSH cmd err, output: <nil>: 
	I0914 18:09:03.178576   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetConfigRaw
	I0914 18:09:03.179198   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetIP
	I0914 18:09:03.181829   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | domain default-k8s-diff-port-243449 has defined MAC address 52:54:00:6e:0b:a7 in network mk-default-k8s-diff-port-243449
	I0914 18:09:03.182220   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:0b:a7", ip: ""} in network mk-default-k8s-diff-port-243449: {Iface:virbr4 ExpiryTime:2024-09-14 19:08:56 +0000 UTC Type:0 Mac:52:54:00:6e:0b:a7 Iaid: IPaddr:192.168.61.38 Prefix:24 Hostname:default-k8s-diff-port-243449 Clientid:01:52:54:00:6e:0b:a7}
	I0914 18:09:03.182242   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | domain default-k8s-diff-port-243449 has defined IP address 192.168.61.38 and MAC address 52:54:00:6e:0b:a7 in network mk-default-k8s-diff-port-243449
	I0914 18:09:03.182541   63448 profile.go:143] Saving config to /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/default-k8s-diff-port-243449/config.json ...
	I0914 18:09:03.182773   63448 machine.go:93] provisionDockerMachine start ...
	I0914 18:09:03.182796   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .DriverName
	I0914 18:09:03.182992   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetSSHHostname
	I0914 18:09:03.185635   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | domain default-k8s-diff-port-243449 has defined MAC address 52:54:00:6e:0b:a7 in network mk-default-k8s-diff-port-243449
	I0914 18:09:03.186027   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:0b:a7", ip: ""} in network mk-default-k8s-diff-port-243449: {Iface:virbr4 ExpiryTime:2024-09-14 19:08:56 +0000 UTC Type:0 Mac:52:54:00:6e:0b:a7 Iaid: IPaddr:192.168.61.38 Prefix:24 Hostname:default-k8s-diff-port-243449 Clientid:01:52:54:00:6e:0b:a7}
	I0914 18:09:03.186056   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | domain default-k8s-diff-port-243449 has defined IP address 192.168.61.38 and MAC address 52:54:00:6e:0b:a7 in network mk-default-k8s-diff-port-243449
	I0914 18:09:03.186213   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetSSHPort
	I0914 18:09:03.186416   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetSSHKeyPath
	I0914 18:09:03.186602   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetSSHKeyPath
	I0914 18:09:03.186756   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetSSHUsername
	I0914 18:09:03.186882   63448 main.go:141] libmachine: Using SSH client type: native
	I0914 18:09:03.187123   63448 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.61.38 22 <nil> <nil>}
	I0914 18:09:03.187137   63448 main.go:141] libmachine: About to run SSH command:
	hostname
	I0914 18:09:03.290288   63448 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0914 18:09:03.290332   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetMachineName
	I0914 18:09:03.290592   63448 buildroot.go:166] provisioning hostname "default-k8s-diff-port-243449"
	I0914 18:09:03.290620   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetMachineName
	I0914 18:09:03.290779   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetSSHHostname
	I0914 18:09:03.293587   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | domain default-k8s-diff-port-243449 has defined MAC address 52:54:00:6e:0b:a7 in network mk-default-k8s-diff-port-243449
	I0914 18:09:03.293981   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:0b:a7", ip: ""} in network mk-default-k8s-diff-port-243449: {Iface:virbr4 ExpiryTime:2024-09-14 19:08:56 +0000 UTC Type:0 Mac:52:54:00:6e:0b:a7 Iaid: IPaddr:192.168.61.38 Prefix:24 Hostname:default-k8s-diff-port-243449 Clientid:01:52:54:00:6e:0b:a7}
	I0914 18:09:03.294012   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | domain default-k8s-diff-port-243449 has defined IP address 192.168.61.38 and MAC address 52:54:00:6e:0b:a7 in network mk-default-k8s-diff-port-243449
	I0914 18:09:03.294120   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetSSHPort
	I0914 18:09:03.294307   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetSSHKeyPath
	I0914 18:09:03.294450   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetSSHKeyPath
	I0914 18:09:03.294576   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetSSHUsername
	I0914 18:09:03.294708   63448 main.go:141] libmachine: Using SSH client type: native
	I0914 18:09:03.294926   63448 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.61.38 22 <nil> <nil>}
	I0914 18:09:03.294944   63448 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-243449 && echo "default-k8s-diff-port-243449" | sudo tee /etc/hostname
	I0914 18:09:03.418148   63448 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-243449
	
	I0914 18:09:03.418198   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetSSHHostname
	I0914 18:09:03.421059   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | domain default-k8s-diff-port-243449 has defined MAC address 52:54:00:6e:0b:a7 in network mk-default-k8s-diff-port-243449
	I0914 18:09:03.421501   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:0b:a7", ip: ""} in network mk-default-k8s-diff-port-243449: {Iface:virbr4 ExpiryTime:2024-09-14 19:08:56 +0000 UTC Type:0 Mac:52:54:00:6e:0b:a7 Iaid: IPaddr:192.168.61.38 Prefix:24 Hostname:default-k8s-diff-port-243449 Clientid:01:52:54:00:6e:0b:a7}
	I0914 18:09:03.421536   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | domain default-k8s-diff-port-243449 has defined IP address 192.168.61.38 and MAC address 52:54:00:6e:0b:a7 in network mk-default-k8s-diff-port-243449
	I0914 18:09:03.421733   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetSSHPort
	I0914 18:09:03.421925   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetSSHKeyPath
	I0914 18:09:03.422075   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetSSHKeyPath
	I0914 18:09:03.422243   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetSSHUsername
	I0914 18:09:03.422394   63448 main.go:141] libmachine: Using SSH client type: native
	I0914 18:09:03.422581   63448 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.61.38 22 <nil> <nil>}
	I0914 18:09:03.422609   63448 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-243449' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-243449/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-243449' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0914 18:09:03.538785   63448 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0914 18:09:03.538812   63448 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19643-8806/.minikube CaCertPath:/home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19643-8806/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19643-8806/.minikube}
	I0914 18:09:03.538851   63448 buildroot.go:174] setting up certificates
	I0914 18:09:03.538866   63448 provision.go:84] configureAuth start
	I0914 18:09:03.538875   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetMachineName
	I0914 18:09:03.539230   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetIP
	I0914 18:09:03.541811   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | domain default-k8s-diff-port-243449 has defined MAC address 52:54:00:6e:0b:a7 in network mk-default-k8s-diff-port-243449
	I0914 18:09:03.542129   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:0b:a7", ip: ""} in network mk-default-k8s-diff-port-243449: {Iface:virbr4 ExpiryTime:2024-09-14 19:08:56 +0000 UTC Type:0 Mac:52:54:00:6e:0b:a7 Iaid: IPaddr:192.168.61.38 Prefix:24 Hostname:default-k8s-diff-port-243449 Clientid:01:52:54:00:6e:0b:a7}
	I0914 18:09:03.542183   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | domain default-k8s-diff-port-243449 has defined IP address 192.168.61.38 and MAC address 52:54:00:6e:0b:a7 in network mk-default-k8s-diff-port-243449
	I0914 18:09:03.542393   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetSSHHostname
	I0914 18:09:03.544635   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | domain default-k8s-diff-port-243449 has defined MAC address 52:54:00:6e:0b:a7 in network mk-default-k8s-diff-port-243449
	I0914 18:09:03.544933   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:0b:a7", ip: ""} in network mk-default-k8s-diff-port-243449: {Iface:virbr4 ExpiryTime:2024-09-14 19:08:56 +0000 UTC Type:0 Mac:52:54:00:6e:0b:a7 Iaid: IPaddr:192.168.61.38 Prefix:24 Hostname:default-k8s-diff-port-243449 Clientid:01:52:54:00:6e:0b:a7}
	I0914 18:09:03.544969   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | domain default-k8s-diff-port-243449 has defined IP address 192.168.61.38 and MAC address 52:54:00:6e:0b:a7 in network mk-default-k8s-diff-port-243449
	I0914 18:09:03.545099   63448 provision.go:143] copyHostCerts
	I0914 18:09:03.545156   63448 exec_runner.go:144] found /home/jenkins/minikube-integration/19643-8806/.minikube/ca.pem, removing ...
	I0914 18:09:03.545167   63448 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19643-8806/.minikube/ca.pem
	I0914 18:09:03.545239   63448 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19643-8806/.minikube/ca.pem (1082 bytes)
	I0914 18:09:03.545362   63448 exec_runner.go:144] found /home/jenkins/minikube-integration/19643-8806/.minikube/cert.pem, removing ...
	I0914 18:09:03.545374   63448 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19643-8806/.minikube/cert.pem
	I0914 18:09:03.545410   63448 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19643-8806/.minikube/cert.pem (1123 bytes)
	I0914 18:09:03.545489   63448 exec_runner.go:144] found /home/jenkins/minikube-integration/19643-8806/.minikube/key.pem, removing ...
	I0914 18:09:03.545498   63448 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19643-8806/.minikube/key.pem
	I0914 18:09:03.545533   63448 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19643-8806/.minikube/key.pem (1675 bytes)
	I0914 18:09:03.545619   63448 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19643-8806/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-243449 san=[127.0.0.1 192.168.61.38 default-k8s-diff-port-243449 localhost minikube]
	I0914 18:09:03.858341   63448 provision.go:177] copyRemoteCerts
	I0914 18:09:03.858415   63448 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0914 18:09:03.858453   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetSSHHostname
	I0914 18:09:03.861376   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | domain default-k8s-diff-port-243449 has defined MAC address 52:54:00:6e:0b:a7 in network mk-default-k8s-diff-port-243449
	I0914 18:09:03.861659   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:0b:a7", ip: ""} in network mk-default-k8s-diff-port-243449: {Iface:virbr4 ExpiryTime:2024-09-14 19:08:56 +0000 UTC Type:0 Mac:52:54:00:6e:0b:a7 Iaid: IPaddr:192.168.61.38 Prefix:24 Hostname:default-k8s-diff-port-243449 Clientid:01:52:54:00:6e:0b:a7}
	I0914 18:09:03.861687   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | domain default-k8s-diff-port-243449 has defined IP address 192.168.61.38 and MAC address 52:54:00:6e:0b:a7 in network mk-default-k8s-diff-port-243449
	I0914 18:09:03.861863   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetSSHPort
	I0914 18:09:03.862062   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetSSHKeyPath
	I0914 18:09:03.862231   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetSSHUsername
	I0914 18:09:03.862359   63448 sshutil.go:53] new ssh client: &{IP:192.168.61.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/default-k8s-diff-port-243449/id_rsa Username:docker}
	I0914 18:09:03.944043   63448 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0914 18:09:03.968175   63448 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0914 18:09:03.990621   63448 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0914 18:09:04.012163   63448 provision.go:87] duration metric: took 473.28607ms to configureAuth
	I0914 18:09:04.012190   63448 buildroot.go:189] setting minikube options for container-runtime
	I0914 18:09:04.012364   63448 config.go:182] Loaded profile config "default-k8s-diff-port-243449": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0914 18:09:04.012431   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetSSHHostname
	I0914 18:09:04.015021   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | domain default-k8s-diff-port-243449 has defined MAC address 52:54:00:6e:0b:a7 in network mk-default-k8s-diff-port-243449
	I0914 18:09:04.015505   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:0b:a7", ip: ""} in network mk-default-k8s-diff-port-243449: {Iface:virbr4 ExpiryTime:2024-09-14 19:08:56 +0000 UTC Type:0 Mac:52:54:00:6e:0b:a7 Iaid: IPaddr:192.168.61.38 Prefix:24 Hostname:default-k8s-diff-port-243449 Clientid:01:52:54:00:6e:0b:a7}
	I0914 18:09:04.015553   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | domain default-k8s-diff-port-243449 has defined IP address 192.168.61.38 and MAC address 52:54:00:6e:0b:a7 in network mk-default-k8s-diff-port-243449
	I0914 18:09:04.015693   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetSSHPort
	I0914 18:09:04.015866   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetSSHKeyPath
	I0914 18:09:04.016035   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetSSHKeyPath
	I0914 18:09:04.016157   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetSSHUsername
	I0914 18:09:04.016277   63448 main.go:141] libmachine: Using SSH client type: native
	I0914 18:09:04.016479   63448 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.61.38 22 <nil> <nil>}
	I0914 18:09:04.016511   63448 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0914 18:09:04.234672   63448 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0914 18:09:04.234697   63448 machine.go:96] duration metric: took 1.051909541s to provisionDockerMachine
	I0914 18:09:04.234710   63448 start.go:293] postStartSetup for "default-k8s-diff-port-243449" (driver="kvm2")
	I0914 18:09:04.234721   63448 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0914 18:09:04.234766   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .DriverName
	I0914 18:09:04.235108   63448 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0914 18:09:04.235139   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetSSHHostname
	I0914 18:09:04.237583   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | domain default-k8s-diff-port-243449 has defined MAC address 52:54:00:6e:0b:a7 in network mk-default-k8s-diff-port-243449
	I0914 18:09:04.237964   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:0b:a7", ip: ""} in network mk-default-k8s-diff-port-243449: {Iface:virbr4 ExpiryTime:2024-09-14 19:08:56 +0000 UTC Type:0 Mac:52:54:00:6e:0b:a7 Iaid: IPaddr:192.168.61.38 Prefix:24 Hostname:default-k8s-diff-port-243449 Clientid:01:52:54:00:6e:0b:a7}
	I0914 18:09:04.237997   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | domain default-k8s-diff-port-243449 has defined IP address 192.168.61.38 and MAC address 52:54:00:6e:0b:a7 in network mk-default-k8s-diff-port-243449
	I0914 18:09:04.238237   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetSSHPort
	I0914 18:09:04.238491   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetSSHKeyPath
	I0914 18:09:04.238667   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetSSHUsername
	I0914 18:09:04.238798   63448 sshutil.go:53] new ssh client: &{IP:192.168.61.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/default-k8s-diff-port-243449/id_rsa Username:docker}
	I0914 18:09:04.320785   63448 ssh_runner.go:195] Run: cat /etc/os-release
	I0914 18:09:04.324837   63448 info.go:137] Remote host: Buildroot 2023.02.9
	I0914 18:09:04.324863   63448 filesync.go:126] Scanning /home/jenkins/minikube-integration/19643-8806/.minikube/addons for local assets ...
	I0914 18:09:04.324920   63448 filesync.go:126] Scanning /home/jenkins/minikube-integration/19643-8806/.minikube/files for local assets ...
	I0914 18:09:04.325001   63448 filesync.go:149] local asset: /home/jenkins/minikube-integration/19643-8806/.minikube/files/etc/ssl/certs/160162.pem -> 160162.pem in /etc/ssl/certs
	I0914 18:09:04.325091   63448 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0914 18:09:04.334235   63448 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/files/etc/ssl/certs/160162.pem --> /etc/ssl/certs/160162.pem (1708 bytes)
	I0914 18:09:04.357310   63448 start.go:296] duration metric: took 122.582935ms for postStartSetup
	I0914 18:09:04.357352   63448 fix.go:56] duration metric: took 19.25422843s for fixHost
	I0914 18:09:04.357373   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetSSHHostname
	I0914 18:09:04.360190   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | domain default-k8s-diff-port-243449 has defined MAC address 52:54:00:6e:0b:a7 in network mk-default-k8s-diff-port-243449
	I0914 18:09:04.360574   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:0b:a7", ip: ""} in network mk-default-k8s-diff-port-243449: {Iface:virbr4 ExpiryTime:2024-09-14 19:08:56 +0000 UTC Type:0 Mac:52:54:00:6e:0b:a7 Iaid: IPaddr:192.168.61.38 Prefix:24 Hostname:default-k8s-diff-port-243449 Clientid:01:52:54:00:6e:0b:a7}
	I0914 18:09:04.360601   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | domain default-k8s-diff-port-243449 has defined IP address 192.168.61.38 and MAC address 52:54:00:6e:0b:a7 in network mk-default-k8s-diff-port-243449
	I0914 18:09:04.360774   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetSSHPort
	I0914 18:09:04.360973   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetSSHKeyPath
	I0914 18:09:04.361163   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetSSHKeyPath
	I0914 18:09:04.361291   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetSSHUsername
	I0914 18:09:04.361479   63448 main.go:141] libmachine: Using SSH client type: native
	I0914 18:09:04.361658   63448 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.61.38 22 <nil> <nil>}
	I0914 18:09:04.361667   63448 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0914 18:09:04.466610   63448 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726337344.436836920
	
	I0914 18:09:04.466654   63448 fix.go:216] guest clock: 1726337344.436836920
	I0914 18:09:04.466665   63448 fix.go:229] Guest: 2024-09-14 18:09:04.43683692 +0000 UTC Remote: 2024-09-14 18:09:04.357356624 +0000 UTC m=+144.091633354 (delta=79.480296ms)
	I0914 18:09:04.466691   63448 fix.go:200] guest clock delta is within tolerance: 79.480296ms
	I0914 18:09:04.466702   63448 start.go:83] releasing machines lock for "default-k8s-diff-port-243449", held for 19.363604776s
	I0914 18:09:04.466737   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .DriverName
	I0914 18:09:04.466992   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetIP
	I0914 18:09:04.469873   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | domain default-k8s-diff-port-243449 has defined MAC address 52:54:00:6e:0b:a7 in network mk-default-k8s-diff-port-243449
	I0914 18:09:04.470148   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:0b:a7", ip: ""} in network mk-default-k8s-diff-port-243449: {Iface:virbr4 ExpiryTime:2024-09-14 19:08:56 +0000 UTC Type:0 Mac:52:54:00:6e:0b:a7 Iaid: IPaddr:192.168.61.38 Prefix:24 Hostname:default-k8s-diff-port-243449 Clientid:01:52:54:00:6e:0b:a7}
	I0914 18:09:04.470198   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | domain default-k8s-diff-port-243449 has defined IP address 192.168.61.38 and MAC address 52:54:00:6e:0b:a7 in network mk-default-k8s-diff-port-243449
	I0914 18:09:04.470364   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .DriverName
	I0914 18:09:04.470877   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .DriverName
	I0914 18:09:04.471098   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .DriverName
	I0914 18:09:04.471215   63448 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0914 18:09:04.471270   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetSSHHostname
	I0914 18:09:04.471322   63448 ssh_runner.go:195] Run: cat /version.json
	I0914 18:09:04.471346   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetSSHHostname
	I0914 18:09:04.474023   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | domain default-k8s-diff-port-243449 has defined MAC address 52:54:00:6e:0b:a7 in network mk-default-k8s-diff-port-243449
	I0914 18:09:04.474144   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | domain default-k8s-diff-port-243449 has defined MAC address 52:54:00:6e:0b:a7 in network mk-default-k8s-diff-port-243449
	I0914 18:09:04.474345   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:0b:a7", ip: ""} in network mk-default-k8s-diff-port-243449: {Iface:virbr4 ExpiryTime:2024-09-14 19:08:56 +0000 UTC Type:0 Mac:52:54:00:6e:0b:a7 Iaid: IPaddr:192.168.61.38 Prefix:24 Hostname:default-k8s-diff-port-243449 Clientid:01:52:54:00:6e:0b:a7}
	I0914 18:09:04.474374   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | domain default-k8s-diff-port-243449 has defined IP address 192.168.61.38 and MAC address 52:54:00:6e:0b:a7 in network mk-default-k8s-diff-port-243449
	I0914 18:09:04.474471   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetSSHPort
	I0914 18:09:04.474616   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:0b:a7", ip: ""} in network mk-default-k8s-diff-port-243449: {Iface:virbr4 ExpiryTime:2024-09-14 19:08:56 +0000 UTC Type:0 Mac:52:54:00:6e:0b:a7 Iaid: IPaddr:192.168.61.38 Prefix:24 Hostname:default-k8s-diff-port-243449 Clientid:01:52:54:00:6e:0b:a7}
	I0914 18:09:04.474631   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetSSHKeyPath
	I0914 18:09:04.474637   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | domain default-k8s-diff-port-243449 has defined IP address 192.168.61.38 and MAC address 52:54:00:6e:0b:a7 in network mk-default-k8s-diff-port-243449
	I0914 18:09:04.474812   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetSSHUsername
	I0914 18:09:04.474816   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetSSHPort
	I0914 18:09:04.474996   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetSSHKeyPath
	I0914 18:09:04.474987   63448 sshutil.go:53] new ssh client: &{IP:192.168.61.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/default-k8s-diff-port-243449/id_rsa Username:docker}
	I0914 18:09:04.475136   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetSSHUsername
	I0914 18:09:04.475274   63448 sshutil.go:53] new ssh client: &{IP:192.168.61.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/default-k8s-diff-port-243449/id_rsa Username:docker}
	I0914 18:09:04.587233   63448 ssh_runner.go:195] Run: systemctl --version
	I0914 18:09:04.593065   63448 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0914 18:09:04.738721   63448 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0914 18:09:04.745472   63448 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0914 18:09:04.745539   63448 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0914 18:09:04.765742   63448 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0914 18:09:04.765806   63448 start.go:495] detecting cgroup driver to use...
	I0914 18:09:04.765909   63448 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0914 18:09:04.782234   63448 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0914 18:09:04.797259   63448 docker.go:217] disabling cri-docker service (if available) ...
	I0914 18:09:04.797322   63448 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0914 18:09:04.811794   63448 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0914 18:09:04.826487   63448 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0914 18:09:04.953417   63448 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0914 18:09:05.102410   63448 docker.go:233] disabling docker service ...
	I0914 18:09:05.102491   63448 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0914 18:09:05.117443   63448 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0914 18:09:05.131147   63448 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0914 18:09:05.278483   63448 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0914 18:09:00.373968   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:00.874316   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:01.373792   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:01.873684   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:02.373524   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:02.874399   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:03.373728   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:03.874267   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:04.374018   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:04.873685   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:05.401195   63448 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0914 18:09:05.415794   63448 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0914 18:09:05.434594   63448 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0914 18:09:05.434660   63448 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 18:09:05.445566   63448 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0914 18:09:05.445643   63448 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 18:09:05.456690   63448 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 18:09:05.468044   63448 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 18:09:05.479719   63448 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0914 18:09:05.491019   63448 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 18:09:05.501739   63448 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 18:09:05.520582   63448 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 18:09:05.531469   63448 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0914 18:09:05.541741   63448 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0914 18:09:05.541809   63448 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0914 18:09:05.561648   63448 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0914 18:09:05.571882   63448 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 18:09:05.706592   63448 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0914 18:09:05.811522   63448 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0914 18:09:05.811599   63448 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0914 18:09:05.816676   63448 start.go:563] Will wait 60s for crictl version
	I0914 18:09:05.816745   63448 ssh_runner.go:195] Run: which crictl
	I0914 18:09:05.820367   63448 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0914 18:09:05.862564   63448 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0914 18:09:05.862637   63448 ssh_runner.go:195] Run: crio --version
	I0914 18:09:05.893106   63448 ssh_runner.go:195] Run: crio --version
	I0914 18:09:05.927136   63448 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0914 18:09:04.492847   62207 main.go:141] libmachine: (no-preload-168587) Calling .Start
	I0914 18:09:04.493070   62207 main.go:141] libmachine: (no-preload-168587) Ensuring networks are active...
	I0914 18:09:04.493844   62207 main.go:141] libmachine: (no-preload-168587) Ensuring network default is active
	I0914 18:09:04.494193   62207 main.go:141] libmachine: (no-preload-168587) Ensuring network mk-no-preload-168587 is active
	I0914 18:09:04.494614   62207 main.go:141] libmachine: (no-preload-168587) Getting domain xml...
	I0914 18:09:04.495434   62207 main.go:141] libmachine: (no-preload-168587) Creating domain...
	I0914 18:09:05.801470   62207 main.go:141] libmachine: (no-preload-168587) Waiting to get IP...
	I0914 18:09:05.802621   62207 main.go:141] libmachine: (no-preload-168587) DBG | domain no-preload-168587 has defined MAC address 52:54:00:4c:40:8a in network mk-no-preload-168587
	I0914 18:09:05.803215   62207 main.go:141] libmachine: (no-preload-168587) DBG | unable to find current IP address of domain no-preload-168587 in network mk-no-preload-168587
	I0914 18:09:05.803351   62207 main.go:141] libmachine: (no-preload-168587) DBG | I0914 18:09:05.803229   64231 retry.go:31] will retry after 206.528002ms: waiting for machine to come up
	I0914 18:09:06.011556   62207 main.go:141] libmachine: (no-preload-168587) DBG | domain no-preload-168587 has defined MAC address 52:54:00:4c:40:8a in network mk-no-preload-168587
	I0914 18:09:06.012027   62207 main.go:141] libmachine: (no-preload-168587) DBG | unable to find current IP address of domain no-preload-168587 in network mk-no-preload-168587
	I0914 18:09:06.012063   62207 main.go:141] libmachine: (no-preload-168587) DBG | I0914 18:09:06.011977   64231 retry.go:31] will retry after 252.283679ms: waiting for machine to come up
	I0914 18:09:06.266621   62207 main.go:141] libmachine: (no-preload-168587) DBG | domain no-preload-168587 has defined MAC address 52:54:00:4c:40:8a in network mk-no-preload-168587
	I0914 18:09:06.267145   62207 main.go:141] libmachine: (no-preload-168587) DBG | unable to find current IP address of domain no-preload-168587 in network mk-no-preload-168587
	I0914 18:09:06.267178   62207 main.go:141] libmachine: (no-preload-168587) DBG | I0914 18:09:06.267093   64231 retry.go:31] will retry after 376.426781ms: waiting for machine to come up
	I0914 18:09:06.644639   62207 main.go:141] libmachine: (no-preload-168587) DBG | domain no-preload-168587 has defined MAC address 52:54:00:4c:40:8a in network mk-no-preload-168587
	I0914 18:09:06.645212   62207 main.go:141] libmachine: (no-preload-168587) DBG | unable to find current IP address of domain no-preload-168587 in network mk-no-preload-168587
	I0914 18:09:06.645245   62207 main.go:141] libmachine: (no-preload-168587) DBG | I0914 18:09:06.645161   64231 retry.go:31] will retry after 518.904946ms: waiting for machine to come up
	I0914 18:09:06.584604   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:09:09.085179   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:09:05.928171   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetIP
	I0914 18:09:05.931131   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | domain default-k8s-diff-port-243449 has defined MAC address 52:54:00:6e:0b:a7 in network mk-default-k8s-diff-port-243449
	I0914 18:09:05.931584   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:0b:a7", ip: ""} in network mk-default-k8s-diff-port-243449: {Iface:virbr4 ExpiryTime:2024-09-14 19:08:56 +0000 UTC Type:0 Mac:52:54:00:6e:0b:a7 Iaid: IPaddr:192.168.61.38 Prefix:24 Hostname:default-k8s-diff-port-243449 Clientid:01:52:54:00:6e:0b:a7}
	I0914 18:09:05.931662   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | domain default-k8s-diff-port-243449 has defined IP address 192.168.61.38 and MAC address 52:54:00:6e:0b:a7 in network mk-default-k8s-diff-port-243449
	I0914 18:09:05.931826   63448 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0914 18:09:05.935729   63448 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0914 18:09:05.947741   63448 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-243449 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19643/minikube-v1.34.0-1726281733-19643-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726281268-19643@sha256:cce8be0c1ac4e3d852132008ef1cc1dcf5b79f708d025db83f146ae65db32e8e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-243449 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.38 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0914 18:09:05.947872   63448 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0914 18:09:05.947935   63448 ssh_runner.go:195] Run: sudo crictl images --output json
	I0914 18:09:05.984371   63448 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0914 18:09:05.984473   63448 ssh_runner.go:195] Run: which lz4
	I0914 18:09:05.988311   63448 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0914 18:09:05.992088   63448 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0914 18:09:05.992123   63448 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I0914 18:09:07.311157   63448 crio.go:462] duration metric: took 1.322885925s to copy over tarball
	I0914 18:09:07.311297   63448 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0914 18:09:09.472639   63448 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.161311106s)
	I0914 18:09:09.472663   63448 crio.go:469] duration metric: took 2.161473132s to extract the tarball
	I0914 18:09:09.472670   63448 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0914 18:09:09.508740   63448 ssh_runner.go:195] Run: sudo crictl images --output json
	I0914 18:09:09.554508   63448 crio.go:514] all images are preloaded for cri-o runtime.
	I0914 18:09:09.554533   63448 cache_images.go:84] Images are preloaded, skipping loading
	I0914 18:09:09.554548   63448 kubeadm.go:934] updating node { 192.168.61.38 8444 v1.31.1 crio true true} ...
	I0914 18:09:09.554657   63448 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-243449 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.38
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-243449 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0914 18:09:09.554722   63448 ssh_runner.go:195] Run: crio config
	I0914 18:09:09.603693   63448 cni.go:84] Creating CNI manager for ""
	I0914 18:09:09.603715   63448 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0914 18:09:09.603727   63448 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0914 18:09:09.603745   63448 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.38 APIServerPort:8444 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-243449 NodeName:default-k8s-diff-port-243449 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.38"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.38 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0914 18:09:09.603879   63448 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.38
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-243449"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.38
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.38"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0914 18:09:09.603935   63448 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0914 18:09:09.613786   63448 binaries.go:44] Found k8s binaries, skipping transfer
	I0914 18:09:09.613863   63448 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0914 18:09:09.623172   63448 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (327 bytes)
	I0914 18:09:09.641437   63448 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0914 18:09:09.657677   63448 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2169 bytes)
	I0914 18:09:09.675042   63448 ssh_runner.go:195] Run: grep 192.168.61.38	control-plane.minikube.internal$ /etc/hosts
	I0914 18:09:09.678885   63448 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.38	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0914 18:09:09.694466   63448 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 18:09:09.823504   63448 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0914 18:09:09.840638   63448 certs.go:68] Setting up /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/default-k8s-diff-port-243449 for IP: 192.168.61.38
	I0914 18:09:09.840658   63448 certs.go:194] generating shared ca certs ...
	I0914 18:09:09.840677   63448 certs.go:226] acquiring lock for ca certs: {Name:mkb663a3180967f5f94f0c355b2cd55067394331 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 18:09:09.840827   63448 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19643-8806/.minikube/ca.key
	I0914 18:09:09.840869   63448 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19643-8806/.minikube/proxy-client-ca.key
	I0914 18:09:09.840879   63448 certs.go:256] generating profile certs ...
	I0914 18:09:09.841046   63448 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/default-k8s-diff-port-243449/client.key
	I0914 18:09:09.841147   63448 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/default-k8s-diff-port-243449/apiserver.key.68770133
	I0914 18:09:09.841231   63448 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/default-k8s-diff-port-243449/proxy-client.key
	I0914 18:09:09.841342   63448 certs.go:484] found cert: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/16016.pem (1338 bytes)
	W0914 18:09:09.841370   63448 certs.go:480] ignoring /home/jenkins/minikube-integration/19643-8806/.minikube/certs/16016_empty.pem, impossibly tiny 0 bytes
	I0914 18:09:09.841377   63448 certs.go:484] found cert: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca-key.pem (1679 bytes)
	I0914 18:09:09.841398   63448 certs.go:484] found cert: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca.pem (1082 bytes)
	I0914 18:09:09.841425   63448 certs.go:484] found cert: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/cert.pem (1123 bytes)
	I0914 18:09:09.841447   63448 certs.go:484] found cert: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/key.pem (1675 bytes)
	I0914 18:09:09.841499   63448 certs.go:484] found cert: /home/jenkins/minikube-integration/19643-8806/.minikube/files/etc/ssl/certs/160162.pem (1708 bytes)
	I0914 18:09:09.842211   63448 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0914 18:09:09.883406   63448 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0914 18:09:09.914134   63448 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0914 18:09:09.941343   63448 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0914 18:09:09.990870   63448 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/default-k8s-diff-port-243449/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0914 18:09:10.040821   63448 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/default-k8s-diff-port-243449/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0914 18:09:10.065238   63448 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/default-k8s-diff-port-243449/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0914 18:09:10.089901   63448 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/default-k8s-diff-port-243449/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0914 18:09:10.114440   63448 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/files/etc/ssl/certs/160162.pem --> /usr/share/ca-certificates/160162.pem (1708 bytes)
	I0914 18:09:10.138963   63448 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0914 18:09:10.162828   63448 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/certs/16016.pem --> /usr/share/ca-certificates/16016.pem (1338 bytes)
	I0914 18:09:10.185702   63448 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0914 18:09:10.201251   63448 ssh_runner.go:195] Run: openssl version
	I0914 18:09:10.206904   63448 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/160162.pem && ln -fs /usr/share/ca-certificates/160162.pem /etc/ssl/certs/160162.pem"
	I0914 18:09:10.216966   63448 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/160162.pem
	I0914 18:09:10.221437   63448 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 14 17:01 /usr/share/ca-certificates/160162.pem
	I0914 18:09:10.221506   63448 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/160162.pem
	I0914 18:09:10.227033   63448 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/160162.pem /etc/ssl/certs/3ec20f2e.0"
	I0914 18:09:10.237039   63448 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0914 18:09:10.247244   63448 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0914 18:09:10.251434   63448 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 14 16:44 /usr/share/ca-certificates/minikubeCA.pem
	I0914 18:09:10.251494   63448 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0914 18:09:10.257187   63448 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0914 18:09:10.267490   63448 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16016.pem && ln -fs /usr/share/ca-certificates/16016.pem /etc/ssl/certs/16016.pem"
	I0914 18:09:10.277622   63448 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16016.pem
	I0914 18:09:10.281705   63448 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 14 17:01 /usr/share/ca-certificates/16016.pem
	I0914 18:09:10.281789   63448 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16016.pem
	I0914 18:09:10.287013   63448 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16016.pem /etc/ssl/certs/51391683.0"
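	(The hash/symlink steps above install each CA certificate under /etc/ssl/certs/<subject-hash>.0 so the system trust store can resolve it. A minimal Go sketch of the same idea, shelling out to openssl exactly as the log does — the paths are illustrative, not minikube's actual code:

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	// installCA links a CA certificate into /etc/ssl/certs under its
	// openssl subject hash, mirroring the "openssl x509 -hash" + "ln -fs"
	// steps logged above. Paths are illustrative.
	func installCA(certPath string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
		if err != nil {
			return fmt.Errorf("hashing %s: %w", certPath, err)
		}
		hash := strings.TrimSpace(string(out))
		link := filepath.Join("/etc/ssl/certs", hash+".0")
		// Replace any stale link; equivalent to "ln -fs".
		_ = os.Remove(link)
		return os.Symlink(certPath, link)
	}

	func main() {
		if err := installCA("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}
	)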
	I0914 18:09:10.296942   63448 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0914 18:09:05.374034   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:05.873992   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:06.374407   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:06.873737   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:07.373665   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:07.874486   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:08.374017   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:08.874365   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:09.374221   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:09.874108   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:07.165576   62207 main.go:141] libmachine: (no-preload-168587) DBG | domain no-preload-168587 has defined MAC address 52:54:00:4c:40:8a in network mk-no-preload-168587
	I0914 18:09:07.166187   62207 main.go:141] libmachine: (no-preload-168587) DBG | unable to find current IP address of domain no-preload-168587 in network mk-no-preload-168587
	I0914 18:09:07.166219   62207 main.go:141] libmachine: (no-preload-168587) DBG | I0914 18:09:07.166125   64231 retry.go:31] will retry after 631.376012ms: waiting for machine to come up
	I0914 18:09:07.798978   62207 main.go:141] libmachine: (no-preload-168587) DBG | domain no-preload-168587 has defined MAC address 52:54:00:4c:40:8a in network mk-no-preload-168587
	I0914 18:09:07.799450   62207 main.go:141] libmachine: (no-preload-168587) DBG | unable to find current IP address of domain no-preload-168587 in network mk-no-preload-168587
	I0914 18:09:07.799478   62207 main.go:141] libmachine: (no-preload-168587) DBG | I0914 18:09:07.799404   64231 retry.go:31] will retry after 668.764795ms: waiting for machine to come up
	I0914 18:09:08.470207   62207 main.go:141] libmachine: (no-preload-168587) DBG | domain no-preload-168587 has defined MAC address 52:54:00:4c:40:8a in network mk-no-preload-168587
	I0914 18:09:08.470613   62207 main.go:141] libmachine: (no-preload-168587) DBG | unable to find current IP address of domain no-preload-168587 in network mk-no-preload-168587
	I0914 18:09:08.470640   62207 main.go:141] libmachine: (no-preload-168587) DBG | I0914 18:09:08.470559   64231 retry.go:31] will retry after 943.595216ms: waiting for machine to come up
	I0914 18:09:09.415274   62207 main.go:141] libmachine: (no-preload-168587) DBG | domain no-preload-168587 has defined MAC address 52:54:00:4c:40:8a in network mk-no-preload-168587
	I0914 18:09:09.415721   62207 main.go:141] libmachine: (no-preload-168587) DBG | unable to find current IP address of domain no-preload-168587 in network mk-no-preload-168587
	I0914 18:09:09.415751   62207 main.go:141] libmachine: (no-preload-168587) DBG | I0914 18:09:09.415675   64231 retry.go:31] will retry after 956.638818ms: waiting for machine to come up
	I0914 18:09:10.374297   62207 main.go:141] libmachine: (no-preload-168587) DBG | domain no-preload-168587 has defined MAC address 52:54:00:4c:40:8a in network mk-no-preload-168587
	I0914 18:09:10.374875   62207 main.go:141] libmachine: (no-preload-168587) DBG | unable to find current IP address of domain no-preload-168587 in network mk-no-preload-168587
	I0914 18:09:10.374902   62207 main.go:141] libmachine: (no-preload-168587) DBG | I0914 18:09:10.374822   64231 retry.go:31] will retry after 1.703915418s: waiting for machine to come up
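	(The libmachine DBG lines above show the driver repeatedly asking libvirt for the VM's DHCP lease and retrying with a growing, jittered delay. A rough Go sketch of that retry loop — lookupIP is a stand-in, and the exact backoff schedule is an assumption, not minikube's retry.go:

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	var errNoIP = errors.New("unable to find current IP address")

	// lookupIP stands in for querying the libvirt network for the domain's
	// DHCP lease; here it always fails so the retry path is exercised.
	func lookupIP(domain string) (string, error) { return "", errNoIP }

	func waitForIP(domain string, attempts int) (string, error) {
		delay := 500 * time.Millisecond
		for i := 0; i < attempts; i++ {
			if ip, err := lookupIP(domain); err == nil {
				return ip, nil
			}
			// Grow the delay and add jitter, like the "will retry after ..." waits above.
			wait := delay + time.Duration(rand.Int63n(int64(delay)))
			fmt.Printf("will retry after %v: waiting for machine to come up\n", wait)
			time.Sleep(wait)
			delay *= 2
		}
		return "", errNoIP
	}

	func main() {
		if _, err := waitForIP("no-preload-168587", 3); err != nil {
			fmt.Println("gave up:", err)
		}
	}
	)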
	I0914 18:09:11.583370   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:09:14.082919   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:09:10.301352   63448 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0914 18:09:10.307276   63448 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0914 18:09:10.313391   63448 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0914 18:09:10.319883   63448 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0914 18:09:10.325671   63448 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0914 18:09:10.331445   63448 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
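	(The "-checkend 86400" runs above only ask whether each control-plane certificate is still valid for at least the next 24 hours. An equivalent check written directly against crypto/x509 — the file path is illustrative:

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"errors"
		"fmt"
		"os"
		"time"
	)

	// expiresWithin reports whether the PEM certificate at path expires
	// within d, the same question "openssl x509 -checkend" answers.
	func expiresWithin(path string, d time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, errors.New("no PEM block found")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(d).After(cert.NotAfter), nil
	}

	func main() {
		expiring, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			return
		}
		fmt.Println("expires within 24h:", expiring)
	}
	)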
	I0914 18:09:10.336855   63448 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-243449 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19643/minikube-v1.34.0-1726281733-19643-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726281268-19643@sha256:cce8be0c1ac4e3d852132008ef1cc1dcf5b79f708d025db83f146ae65db32e8e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-243449 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.38 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0914 18:09:10.336953   63448 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0914 18:09:10.337019   63448 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0914 18:09:10.372899   63448 cri.go:89] found id: ""
	I0914 18:09:10.372988   63448 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0914 18:09:10.386897   63448 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0914 18:09:10.386920   63448 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0914 18:09:10.386978   63448 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0914 18:09:10.399165   63448 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0914 18:09:10.400212   63448 kubeconfig.go:125] found "default-k8s-diff-port-243449" server: "https://192.168.61.38:8444"
	I0914 18:09:10.402449   63448 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0914 18:09:10.414129   63448 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.38
	I0914 18:09:10.414192   63448 kubeadm.go:1160] stopping kube-system containers ...
	I0914 18:09:10.414207   63448 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0914 18:09:10.414276   63448 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0914 18:09:10.454549   63448 cri.go:89] found id: ""
	I0914 18:09:10.454627   63448 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0914 18:09:10.472261   63448 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0914 18:09:10.481693   63448 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0914 18:09:10.481724   63448 kubeadm.go:157] found existing configuration files:
	
	I0914 18:09:10.481772   63448 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0914 18:09:10.492205   63448 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0914 18:09:10.492283   63448 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0914 18:09:10.502923   63448 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0914 18:09:10.511620   63448 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0914 18:09:10.511688   63448 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0914 18:09:10.520978   63448 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0914 18:09:10.529590   63448 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0914 18:09:10.529652   63448 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0914 18:09:10.538602   63448 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0914 18:09:10.546968   63448 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0914 18:09:10.547037   63448 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0914 18:09:10.556280   63448 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0914 18:09:10.565471   63448 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 18:09:10.670297   63448 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 18:09:11.611646   63448 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0914 18:09:11.858308   63448 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 18:09:11.942761   63448 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0914 18:09:12.018144   63448 api_server.go:52] waiting for apiserver process to appear ...
	I0914 18:09:12.018251   63448 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:12.518933   63448 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:13.019098   63448 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:13.518297   63448 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:14.018327   63448 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:14.033874   63448 api_server.go:72] duration metric: took 2.015718891s to wait for apiserver process to appear ...
	I0914 18:09:14.033902   63448 api_server.go:88] waiting for apiserver healthz status ...
	I0914 18:09:14.033926   63448 api_server.go:253] Checking apiserver healthz at https://192.168.61.38:8444/healthz ...
	I0914 18:09:14.034534   63448 api_server.go:269] stopped: https://192.168.61.38:8444/healthz: Get "https://192.168.61.38:8444/healthz": dial tcp 192.168.61.38:8444: connect: connection refused
	I0914 18:09:14.534065   63448 api_server.go:253] Checking apiserver healthz at https://192.168.61.38:8444/healthz ...
	I0914 18:09:10.373394   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:10.873498   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:11.373841   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:11.873492   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:12.374179   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:12.873586   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:13.374405   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:13.873518   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:14.374018   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:14.873905   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:12.080547   62207 main.go:141] libmachine: (no-preload-168587) DBG | domain no-preload-168587 has defined MAC address 52:54:00:4c:40:8a in network mk-no-preload-168587
	I0914 18:09:12.081149   62207 main.go:141] libmachine: (no-preload-168587) DBG | unable to find current IP address of domain no-preload-168587 in network mk-no-preload-168587
	I0914 18:09:12.081174   62207 main.go:141] libmachine: (no-preload-168587) DBG | I0914 18:09:12.081095   64231 retry.go:31] will retry after 1.634645735s: waiting for machine to come up
	I0914 18:09:13.717239   62207 main.go:141] libmachine: (no-preload-168587) DBG | domain no-preload-168587 has defined MAC address 52:54:00:4c:40:8a in network mk-no-preload-168587
	I0914 18:09:13.717787   62207 main.go:141] libmachine: (no-preload-168587) DBG | unable to find current IP address of domain no-preload-168587 in network mk-no-preload-168587
	I0914 18:09:13.717821   62207 main.go:141] libmachine: (no-preload-168587) DBG | I0914 18:09:13.717731   64231 retry.go:31] will retry after 2.524549426s: waiting for machine to come up
	I0914 18:09:16.244729   62207 main.go:141] libmachine: (no-preload-168587) DBG | domain no-preload-168587 has defined MAC address 52:54:00:4c:40:8a in network mk-no-preload-168587
	I0914 18:09:16.245132   62207 main.go:141] libmachine: (no-preload-168587) DBG | unable to find current IP address of domain no-preload-168587 in network mk-no-preload-168587
	I0914 18:09:16.245162   62207 main.go:141] libmachine: (no-preload-168587) DBG | I0914 18:09:16.245072   64231 retry.go:31] will retry after 2.539965892s: waiting for machine to come up
	I0914 18:09:16.083603   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:09:18.581965   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:09:16.427071   63448 api_server.go:279] https://192.168.61.38:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0914 18:09:16.427109   63448 api_server.go:103] status: https://192.168.61.38:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0914 18:09:16.427156   63448 api_server.go:253] Checking apiserver healthz at https://192.168.61.38:8444/healthz ...
	I0914 18:09:16.440812   63448 api_server.go:279] https://192.168.61.38:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0914 18:09:16.440848   63448 api_server.go:103] status: https://192.168.61.38:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0914 18:09:16.534060   63448 api_server.go:253] Checking apiserver healthz at https://192.168.61.38:8444/healthz ...
	I0914 18:09:16.593356   63448 api_server.go:279] https://192.168.61.38:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0914 18:09:16.593412   63448 api_server.go:103] status: https://192.168.61.38:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0914 18:09:17.034545   63448 api_server.go:253] Checking apiserver healthz at https://192.168.61.38:8444/healthz ...
	I0914 18:09:17.039094   63448 api_server.go:279] https://192.168.61.38:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0914 18:09:17.039131   63448 api_server.go:103] status: https://192.168.61.38:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0914 18:09:17.534668   63448 api_server.go:253] Checking apiserver healthz at https://192.168.61.38:8444/healthz ...
	I0914 18:09:17.543018   63448 api_server.go:279] https://192.168.61.38:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0914 18:09:17.543053   63448 api_server.go:103] status: https://192.168.61.38:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0914 18:09:18.034612   63448 api_server.go:253] Checking apiserver healthz at https://192.168.61.38:8444/healthz ...
	I0914 18:09:18.039042   63448 api_server.go:279] https://192.168.61.38:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0914 18:09:18.039071   63448 api_server.go:103] status: https://192.168.61.38:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0914 18:09:18.534675   63448 api_server.go:253] Checking apiserver healthz at https://192.168.61.38:8444/healthz ...
	I0914 18:09:18.540612   63448 api_server.go:279] https://192.168.61.38:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0914 18:09:18.540637   63448 api_server.go:103] status: https://192.168.61.38:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0914 18:09:19.034196   63448 api_server.go:253] Checking apiserver healthz at https://192.168.61.38:8444/healthz ...
	I0914 18:09:19.040397   63448 api_server.go:279] https://192.168.61.38:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0914 18:09:19.040429   63448 api_server.go:103] status: https://192.168.61.38:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0914 18:09:19.535035   63448 api_server.go:253] Checking apiserver healthz at https://192.168.61.38:8444/healthz ...
	I0914 18:09:19.540910   63448 api_server.go:279] https://192.168.61.38:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0914 18:09:19.540940   63448 api_server.go:103] status: https://192.168.61.38:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0914 18:09:20.034275   63448 api_server.go:253] Checking apiserver healthz at https://192.168.61.38:8444/healthz ...
	I0914 18:09:20.038541   63448 api_server.go:279] https://192.168.61.38:8444/healthz returned 200:
	ok
	I0914 18:09:20.044704   63448 api_server.go:141] control plane version: v1.31.1
	I0914 18:09:20.044734   63448 api_server.go:131] duration metric: took 6.010822563s to wait for apiserver health ...
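	(The api_server.go:253/279 lines above come from minikube polling the apiserver's /healthz endpoint roughly every 500ms and treating 403 and 500 responses as "not ready yet" until it sees a 200. A simplified sketch of such a poll loop — it omits the client certificates minikube actually presents, so TLS verification is disabled here purely for illustration:

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	// waitHealthz polls url until it returns 200 OK or the deadline passes.
	// Real minikube authenticates with client certs; this sketch does not.
	func waitHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout: 2 * time.Second,
			Transport: &http.Transport{
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil
				}
				fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("apiserver never became healthy at %s", url)
	}

	func main() {
		if err := waitHealthz("https://192.168.61.38:8444/healthz", time.Minute); err != nil {
			fmt.Println(err)
		}
	}
	)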
	I0914 18:09:20.044744   63448 cni.go:84] Creating CNI manager for ""
	I0914 18:09:20.044752   63448 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0914 18:09:20.046616   63448 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0914 18:09:20.047724   63448 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0914 18:09:20.058152   63448 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0914 18:09:20.077880   63448 system_pods.go:43] waiting for kube-system pods to appear ...
	I0914 18:09:20.090089   63448 system_pods.go:59] 8 kube-system pods found
	I0914 18:09:20.090135   63448 system_pods.go:61] "coredns-7c65d6cfc9-8v8s7" [896b4fde-d17e-43a3-b7c8-b710e2e70e2c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0914 18:09:20.090148   63448 system_pods.go:61] "etcd-default-k8s-diff-port-243449" [9201493d-45db-44f4-948d-34e1d1ddee8f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0914 18:09:20.090178   63448 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-243449" [052b85bf-0f3b-4ace-9301-99e53f91cfcf] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0914 18:09:20.090192   63448 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-243449" [51214b37-f5e8-4037-83ff-1fd09b93e008] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0914 18:09:20.090199   63448 system_pods.go:61] "kube-proxy-gbkqm" [4308aacf-ea0a-4bba-8598-85ffaf959b7e] Running
	I0914 18:09:20.090210   63448 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-243449" [b6eacf30-47ec-4f8d-968c-195661a2a732] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0914 18:09:20.090219   63448 system_pods.go:61] "metrics-server-6867b74b74-7v8dr" [90be95af-c779-4b31-b261-2c4020a34280] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 18:09:20.090226   63448 system_pods.go:61] "storage-provisioner" [2e814601-a19a-4848-bed5-d9a29ffb3b5d] Running
	I0914 18:09:20.090236   63448 system_pods.go:74] duration metric: took 12.327834ms to wait for pod list to return data ...
	I0914 18:09:20.090248   63448 node_conditions.go:102] verifying NodePressure condition ...
	I0914 18:09:20.094429   63448 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0914 18:09:20.094455   63448 node_conditions.go:123] node cpu capacity is 2
	I0914 18:09:20.094468   63448 node_conditions.go:105] duration metric: took 4.21448ms to run NodePressure ...
	I0914 18:09:20.094486   63448 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 18:09:15.374447   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:15.873830   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:16.373497   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:16.874326   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:17.373994   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:17.873394   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:18.373596   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:18.874350   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:19.374434   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:19.873774   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:20.357111   63448 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0914 18:09:20.361447   63448 kubeadm.go:739] kubelet initialised
	I0914 18:09:20.361469   63448 kubeadm.go:740] duration metric: took 4.331134ms waiting for restarted kubelet to initialise ...
	I0914 18:09:20.361479   63448 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 18:09:20.367027   63448 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-8v8s7" in "kube-system" namespace to be "Ready" ...
	I0914 18:09:20.371669   63448 pod_ready.go:98] node "default-k8s-diff-port-243449" hosting pod "coredns-7c65d6cfc9-8v8s7" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-243449" has status "Ready":"False"
	I0914 18:09:20.371697   63448 pod_ready.go:82] duration metric: took 4.644689ms for pod "coredns-7c65d6cfc9-8v8s7" in "kube-system" namespace to be "Ready" ...
	E0914 18:09:20.371706   63448 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-243449" hosting pod "coredns-7c65d6cfc9-8v8s7" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-243449" has status "Ready":"False"
	I0914 18:09:20.371714   63448 pod_ready.go:79] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-243449" in "kube-system" namespace to be "Ready" ...
	I0914 18:09:20.376461   63448 pod_ready.go:98] node "default-k8s-diff-port-243449" hosting pod "etcd-default-k8s-diff-port-243449" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-243449" has status "Ready":"False"
	I0914 18:09:20.376486   63448 pod_ready.go:82] duration metric: took 4.764316ms for pod "etcd-default-k8s-diff-port-243449" in "kube-system" namespace to be "Ready" ...
	E0914 18:09:20.376497   63448 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-243449" hosting pod "etcd-default-k8s-diff-port-243449" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-243449" has status "Ready":"False"
	I0914 18:09:20.376506   63448 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-243449" in "kube-system" namespace to be "Ready" ...
	I0914 18:09:20.380607   63448 pod_ready.go:98] node "default-k8s-diff-port-243449" hosting pod "kube-apiserver-default-k8s-diff-port-243449" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-243449" has status "Ready":"False"
	I0914 18:09:20.380632   63448 pod_ready.go:82] duration metric: took 4.117696ms for pod "kube-apiserver-default-k8s-diff-port-243449" in "kube-system" namespace to be "Ready" ...
	E0914 18:09:20.380642   63448 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-243449" hosting pod "kube-apiserver-default-k8s-diff-port-243449" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-243449" has status "Ready":"False"
	I0914 18:09:20.380649   63448 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-243449" in "kube-system" namespace to be "Ready" ...
	I0914 18:09:20.481883   63448 pod_ready.go:98] node "default-k8s-diff-port-243449" hosting pod "kube-controller-manager-default-k8s-diff-port-243449" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-243449" has status "Ready":"False"
	I0914 18:09:20.481920   63448 pod_ready.go:82] duration metric: took 101.262101ms for pod "kube-controller-manager-default-k8s-diff-port-243449" in "kube-system" namespace to be "Ready" ...
	E0914 18:09:20.481935   63448 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-243449" hosting pod "kube-controller-manager-default-k8s-diff-port-243449" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-243449" has status "Ready":"False"
	I0914 18:09:20.481965   63448 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-gbkqm" in "kube-system" namespace to be "Ready" ...
	I0914 18:09:20.881501   63448 pod_ready.go:98] node "default-k8s-diff-port-243449" hosting pod "kube-proxy-gbkqm" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-243449" has status "Ready":"False"
	I0914 18:09:20.881541   63448 pod_ready.go:82] duration metric: took 399.559576ms for pod "kube-proxy-gbkqm" in "kube-system" namespace to be "Ready" ...
	E0914 18:09:20.881556   63448 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-243449" hosting pod "kube-proxy-gbkqm" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-243449" has status "Ready":"False"
	I0914 18:09:20.881566   63448 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-243449" in "kube-system" namespace to be "Ready" ...
	I0914 18:09:21.282414   63448 pod_ready.go:98] node "default-k8s-diff-port-243449" hosting pod "kube-scheduler-default-k8s-diff-port-243449" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-243449" has status "Ready":"False"
	I0914 18:09:21.282446   63448 pod_ready.go:82] duration metric: took 400.860884ms for pod "kube-scheduler-default-k8s-diff-port-243449" in "kube-system" namespace to be "Ready" ...
	E0914 18:09:21.282463   63448 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-243449" hosting pod "kube-scheduler-default-k8s-diff-port-243449" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-243449" has status "Ready":"False"
	I0914 18:09:21.282472   63448 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace to be "Ready" ...
	I0914 18:09:21.681717   63448 pod_ready.go:98] node "default-k8s-diff-port-243449" hosting pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-243449" has status "Ready":"False"
	I0914 18:09:21.681757   63448 pod_ready.go:82] duration metric: took 399.273892ms for pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace to be "Ready" ...
	E0914 18:09:21.681773   63448 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-243449" hosting pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-243449" has status "Ready":"False"
	I0914 18:09:21.681783   63448 pod_ready.go:39] duration metric: took 1.320292845s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
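	(The pod_ready.go lines above wait for each system-critical pod to report the Ready condition, skipping the wait while the node itself still reports Ready=False. A trimmed, client-go-style check of just the pod condition — the clientset wiring is omitted, and the main function only feeds in a dummy Pod to show the call:

	package main

	import (
		"fmt"

		corev1 "k8s.io/api/core/v1"
	)

	// podIsReady mirrors the condition the pod_ready.go waits above look for:
	// the PodReady condition must be present and True.
	func podIsReady(pod *corev1.Pod) bool {
		for _, cond := range pod.Status.Conditions {
			if cond.Type == corev1.PodReady {
				return cond.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		pod := &corev1.Pod{}
		pod.Status.Conditions = []corev1.PodCondition{
			{Type: corev1.PodReady, Status: corev1.ConditionFalse},
		}
		fmt.Println("ready:", podIsReady(pod)) // ready: false
	}
	)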
	I0914 18:09:21.681825   63448 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0914 18:09:21.693644   63448 ops.go:34] apiserver oom_adj: -16
	I0914 18:09:21.693682   63448 kubeadm.go:597] duration metric: took 11.306754096s to restartPrimaryControlPlane
	I0914 18:09:21.693696   63448 kubeadm.go:394] duration metric: took 11.356851178s to StartCluster
	I0914 18:09:21.693719   63448 settings.go:142] acquiring lock: {Name:mkee31cd57c3a91eff581c48ba7961460fb3e0b1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 18:09:21.693820   63448 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19643-8806/kubeconfig
	I0914 18:09:21.695521   63448 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19643-8806/kubeconfig: {Name:mk850f3e2f3c2ef1e81c795ae72e22e459e27140 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 18:09:21.695793   63448 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.38 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0914 18:09:21.695903   63448 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0914 18:09:21.695982   63448 config.go:182] Loaded profile config "default-k8s-diff-port-243449": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0914 18:09:21.696003   63448 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-243449"
	I0914 18:09:21.696021   63448 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-243449"
	I0914 18:09:21.696029   63448 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-243449"
	W0914 18:09:21.696041   63448 addons.go:243] addon storage-provisioner should already be in state true
	I0914 18:09:21.696044   63448 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-243449"
	I0914 18:09:21.696063   63448 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-243449"
	I0914 18:09:21.696094   63448 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-243449"
	W0914 18:09:21.696108   63448 addons.go:243] addon metrics-server should already be in state true
	I0914 18:09:21.696149   63448 host.go:66] Checking if "default-k8s-diff-port-243449" exists ...
	I0914 18:09:21.696074   63448 host.go:66] Checking if "default-k8s-diff-port-243449" exists ...
	I0914 18:09:21.696411   63448 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 18:09:21.696455   63448 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 18:09:21.696543   63448 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 18:09:21.696562   63448 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 18:09:21.696693   63448 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 18:09:21.696735   63448 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 18:09:21.697719   63448 out.go:177] * Verifying Kubernetes components...
	I0914 18:09:21.699171   63448 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 18:09:21.712479   63448 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36733
	I0914 18:09:21.712563   63448 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40265
	I0914 18:09:21.713050   63448 main.go:141] libmachine: () Calling .GetVersion
	I0914 18:09:21.713065   63448 main.go:141] libmachine: () Calling .GetVersion
	I0914 18:09:21.713585   63448 main.go:141] libmachine: Using API Version  1
	I0914 18:09:21.713601   63448 main.go:141] libmachine: Using API Version  1
	I0914 18:09:21.713613   63448 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 18:09:21.713633   63448 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 18:09:21.713940   63448 main.go:141] libmachine: () Calling .GetMachineName
	I0914 18:09:21.714122   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetState
	I0914 18:09:21.714135   63448 main.go:141] libmachine: () Calling .GetMachineName
	I0914 18:09:21.714737   63448 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 18:09:21.714789   63448 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 18:09:21.716503   63448 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33627
	I0914 18:09:21.716977   63448 main.go:141] libmachine: () Calling .GetVersion
	I0914 18:09:21.717490   63448 main.go:141] libmachine: Using API Version  1
	I0914 18:09:21.717514   63448 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 18:09:21.717872   63448 main.go:141] libmachine: () Calling .GetMachineName
	I0914 18:09:21.718055   63448 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-243449"
	W0914 18:09:21.718075   63448 addons.go:243] addon default-storageclass should already be in state true
	I0914 18:09:21.718105   63448 host.go:66] Checking if "default-k8s-diff-port-243449" exists ...
	I0914 18:09:21.718432   63448 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 18:09:21.718484   63448 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 18:09:21.718491   63448 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 18:09:21.718527   63448 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 18:09:21.737248   63448 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45909
	I0914 18:09:21.738874   63448 main.go:141] libmachine: () Calling .GetVersion
	I0914 18:09:21.739437   63448 main.go:141] libmachine: Using API Version  1
	I0914 18:09:21.739460   63448 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 18:09:21.739865   63448 main.go:141] libmachine: () Calling .GetMachineName
	I0914 18:09:21.740121   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetState
	I0914 18:09:21.742251   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .DriverName
	I0914 18:09:21.744281   63448 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 18:09:21.745631   63448 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0914 18:09:21.745656   63448 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0914 18:09:21.745682   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetSSHHostname
	I0914 18:09:21.749856   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | domain default-k8s-diff-port-243449 has defined MAC address 52:54:00:6e:0b:a7 in network mk-default-k8s-diff-port-243449
	I0914 18:09:21.750398   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:0b:a7", ip: ""} in network mk-default-k8s-diff-port-243449: {Iface:virbr4 ExpiryTime:2024-09-14 19:08:56 +0000 UTC Type:0 Mac:52:54:00:6e:0b:a7 Iaid: IPaddr:192.168.61.38 Prefix:24 Hostname:default-k8s-diff-port-243449 Clientid:01:52:54:00:6e:0b:a7}
	I0914 18:09:21.750424   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | domain default-k8s-diff-port-243449 has defined IP address 192.168.61.38 and MAC address 52:54:00:6e:0b:a7 in network mk-default-k8s-diff-port-243449
	I0914 18:09:21.750659   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetSSHPort
	I0914 18:09:21.750886   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetSSHKeyPath
	I0914 18:09:21.751029   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetSSHUsername
	I0914 18:09:21.751187   63448 sshutil.go:53] new ssh client: &{IP:192.168.61.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/default-k8s-diff-port-243449/id_rsa Username:docker}
	I0914 18:09:21.756605   63448 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33055
	I0914 18:09:21.756825   63448 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39257
	I0914 18:09:21.757040   63448 main.go:141] libmachine: () Calling .GetVersion
	I0914 18:09:21.757293   63448 main.go:141] libmachine: () Calling .GetVersion
	I0914 18:09:21.757562   63448 main.go:141] libmachine: Using API Version  1
	I0914 18:09:21.757588   63448 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 18:09:21.758058   63448 main.go:141] libmachine: () Calling .GetMachineName
	I0914 18:09:21.758301   63448 main.go:141] libmachine: Using API Version  1
	I0914 18:09:21.758322   63448 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 18:09:21.758325   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetState
	I0914 18:09:21.758717   63448 main.go:141] libmachine: () Calling .GetMachineName
	I0914 18:09:21.759300   63448 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 18:09:21.759342   63448 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 18:09:21.760557   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .DriverName
	I0914 18:09:21.762845   63448 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0914 18:09:18.787883   62207 main.go:141] libmachine: (no-preload-168587) DBG | domain no-preload-168587 has defined MAC address 52:54:00:4c:40:8a in network mk-no-preload-168587
	I0914 18:09:18.788270   62207 main.go:141] libmachine: (no-preload-168587) DBG | unable to find current IP address of domain no-preload-168587 in network mk-no-preload-168587
	I0914 18:09:18.788298   62207 main.go:141] libmachine: (no-preload-168587) DBG | I0914 18:09:18.788225   64231 retry.go:31] will retry after 4.53698887s: waiting for machine to come up
	I0914 18:09:21.764071   63448 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0914 18:09:21.764092   63448 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0914 18:09:21.764116   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetSSHHostname
	I0914 18:09:21.767725   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | domain default-k8s-diff-port-243449 has defined MAC address 52:54:00:6e:0b:a7 in network mk-default-k8s-diff-port-243449
	I0914 18:09:21.768255   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:0b:a7", ip: ""} in network mk-default-k8s-diff-port-243449: {Iface:virbr4 ExpiryTime:2024-09-14 19:08:56 +0000 UTC Type:0 Mac:52:54:00:6e:0b:a7 Iaid: IPaddr:192.168.61.38 Prefix:24 Hostname:default-k8s-diff-port-243449 Clientid:01:52:54:00:6e:0b:a7}
	I0914 18:09:21.768367   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | domain default-k8s-diff-port-243449 has defined IP address 192.168.61.38 and MAC address 52:54:00:6e:0b:a7 in network mk-default-k8s-diff-port-243449
	I0914 18:09:21.768503   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetSSHPort
	I0914 18:09:21.768681   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetSSHKeyPath
	I0914 18:09:21.768856   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetSSHUsername
	I0914 18:09:21.769030   63448 sshutil.go:53] new ssh client: &{IP:192.168.61.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/default-k8s-diff-port-243449/id_rsa Username:docker}
	I0914 18:09:21.776783   63448 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38967
	I0914 18:09:21.777226   63448 main.go:141] libmachine: () Calling .GetVersion
	I0914 18:09:21.777736   63448 main.go:141] libmachine: Using API Version  1
	I0914 18:09:21.777754   63448 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 18:09:21.778113   63448 main.go:141] libmachine: () Calling .GetMachineName
	I0914 18:09:21.778345   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetState
	I0914 18:09:21.780215   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .DriverName
	I0914 18:09:21.780421   63448 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0914 18:09:21.780436   63448 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0914 18:09:21.780458   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetSSHHostname
	I0914 18:09:21.783243   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | domain default-k8s-diff-port-243449 has defined MAC address 52:54:00:6e:0b:a7 in network mk-default-k8s-diff-port-243449
	I0914 18:09:21.783671   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:0b:a7", ip: ""} in network mk-default-k8s-diff-port-243449: {Iface:virbr4 ExpiryTime:2024-09-14 19:08:56 +0000 UTC Type:0 Mac:52:54:00:6e:0b:a7 Iaid: IPaddr:192.168.61.38 Prefix:24 Hostname:default-k8s-diff-port-243449 Clientid:01:52:54:00:6e:0b:a7}
	I0914 18:09:21.783698   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | domain default-k8s-diff-port-243449 has defined IP address 192.168.61.38 and MAC address 52:54:00:6e:0b:a7 in network mk-default-k8s-diff-port-243449
	I0914 18:09:21.783857   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetSSHPort
	I0914 18:09:21.784023   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetSSHKeyPath
	I0914 18:09:21.784138   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetSSHUsername
	I0914 18:09:21.784256   63448 sshutil.go:53] new ssh client: &{IP:192.168.61.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/default-k8s-diff-port-243449/id_rsa Username:docker}
	I0914 18:09:21.919649   63448 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0914 18:09:21.945515   63448 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-243449" to be "Ready" ...
	I0914 18:09:22.020487   63448 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0914 18:09:22.020509   63448 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0914 18:09:22.041265   63448 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0914 18:09:22.072169   63448 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0914 18:09:22.072199   63448 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0914 18:09:22.112117   63448 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0914 18:09:22.112148   63448 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0914 18:09:22.146636   63448 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0914 18:09:22.162248   63448 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0914 18:09:22.520416   63448 main.go:141] libmachine: Making call to close driver server
	I0914 18:09:22.520448   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .Close
	I0914 18:09:22.520793   63448 main.go:141] libmachine: Successfully made call to close driver server
	I0914 18:09:22.520815   63448 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 18:09:22.520831   63448 main.go:141] libmachine: Making call to close driver server
	I0914 18:09:22.520833   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | Closing plugin on server side
	I0914 18:09:22.520840   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .Close
	I0914 18:09:22.521074   63448 main.go:141] libmachine: Successfully made call to close driver server
	I0914 18:09:22.521119   63448 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 18:09:22.527992   63448 main.go:141] libmachine: Making call to close driver server
	I0914 18:09:22.528030   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .Close
	I0914 18:09:22.528578   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | Closing plugin on server side
	I0914 18:09:22.528581   63448 main.go:141] libmachine: Successfully made call to close driver server
	I0914 18:09:22.528605   63448 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 18:09:23.246463   63448 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.084175525s)
	I0914 18:09:23.246520   63448 main.go:141] libmachine: Making call to close driver server
	I0914 18:09:23.246535   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .Close
	I0914 18:09:23.246564   63448 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.099889297s)
	I0914 18:09:23.246609   63448 main.go:141] libmachine: Making call to close driver server
	I0914 18:09:23.246621   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .Close
	I0914 18:09:23.246835   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | Closing plugin on server side
	I0914 18:09:23.246876   63448 main.go:141] libmachine: Successfully made call to close driver server
	I0914 18:09:23.246888   63448 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 18:09:23.246897   63448 main.go:141] libmachine: Making call to close driver server
	I0914 18:09:23.246910   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .Close
	I0914 18:09:23.246944   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | Closing plugin on server side
	I0914 18:09:23.246958   63448 main.go:141] libmachine: Successfully made call to close driver server
	I0914 18:09:23.247002   63448 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 18:09:23.247021   63448 main.go:141] libmachine: Making call to close driver server
	I0914 18:09:23.247046   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .Close
	I0914 18:09:23.247156   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | Closing plugin on server side
	I0914 18:09:23.247192   63448 main.go:141] libmachine: Successfully made call to close driver server
	I0914 18:09:23.247227   63448 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 18:09:23.247241   63448 main.go:141] libmachine: Successfully made call to close driver server
	I0914 18:09:23.247260   63448 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 18:09:23.247241   63448 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-243449"
	I0914 18:09:23.250385   63448 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0914 18:09:20.583600   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:09:23.083187   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:09:23.251609   63448 addons.go:510] duration metric: took 1.555716144s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I0914 18:09:23.949715   63448 node_ready.go:53] node "default-k8s-diff-port-243449" has status "Ready":"False"
	I0914 18:09:20.374108   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:20.874167   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:21.374108   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:21.873539   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:22.374451   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:22.874481   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:23.374533   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:23.873433   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:24.374284   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:24.873466   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:23.327287   62207 main.go:141] libmachine: (no-preload-168587) DBG | domain no-preload-168587 has defined MAC address 52:54:00:4c:40:8a in network mk-no-preload-168587
	I0914 18:09:23.327775   62207 main.go:141] libmachine: (no-preload-168587) DBG | domain no-preload-168587 has current primary IP address 192.168.39.38 and MAC address 52:54:00:4c:40:8a in network mk-no-preload-168587
	I0914 18:09:23.327803   62207 main.go:141] libmachine: (no-preload-168587) Found IP for machine: 192.168.39.38
	I0914 18:09:23.327822   62207 main.go:141] libmachine: (no-preload-168587) Reserving static IP address...
	I0914 18:09:23.328197   62207 main.go:141] libmachine: (no-preload-168587) DBG | found host DHCP lease matching {name: "no-preload-168587", mac: "52:54:00:4c:40:8a", ip: "192.168.39.38"} in network mk-no-preload-168587: {Iface:virbr2 ExpiryTime:2024-09-14 18:59:50 +0000 UTC Type:0 Mac:52:54:00:4c:40:8a Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:no-preload-168587 Clientid:01:52:54:00:4c:40:8a}
	I0914 18:09:23.328221   62207 main.go:141] libmachine: (no-preload-168587) Reserved static IP address: 192.168.39.38
	I0914 18:09:23.328264   62207 main.go:141] libmachine: (no-preload-168587) DBG | skip adding static IP to network mk-no-preload-168587 - found existing host DHCP lease matching {name: "no-preload-168587", mac: "52:54:00:4c:40:8a", ip: "192.168.39.38"}
	I0914 18:09:23.328283   62207 main.go:141] libmachine: (no-preload-168587) DBG | Getting to WaitForSSH function...
	I0914 18:09:23.328295   62207 main.go:141] libmachine: (no-preload-168587) Waiting for SSH to be available...
	I0914 18:09:23.330598   62207 main.go:141] libmachine: (no-preload-168587) DBG | domain no-preload-168587 has defined MAC address 52:54:00:4c:40:8a in network mk-no-preload-168587
	I0914 18:09:23.330954   62207 main.go:141] libmachine: (no-preload-168587) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:40:8a", ip: ""} in network mk-no-preload-168587: {Iface:virbr2 ExpiryTime:2024-09-14 18:59:50 +0000 UTC Type:0 Mac:52:54:00:4c:40:8a Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:no-preload-168587 Clientid:01:52:54:00:4c:40:8a}
	I0914 18:09:23.330985   62207 main.go:141] libmachine: (no-preload-168587) DBG | domain no-preload-168587 has defined IP address 192.168.39.38 and MAC address 52:54:00:4c:40:8a in network mk-no-preload-168587
	I0914 18:09:23.331105   62207 main.go:141] libmachine: (no-preload-168587) DBG | Using SSH client type: external
	I0914 18:09:23.331132   62207 main.go:141] libmachine: (no-preload-168587) DBG | Using SSH private key: /home/jenkins/minikube-integration/19643-8806/.minikube/machines/no-preload-168587/id_rsa (-rw-------)
	I0914 18:09:23.331184   62207 main.go:141] libmachine: (no-preload-168587) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.38 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19643-8806/.minikube/machines/no-preload-168587/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0914 18:09:23.331196   62207 main.go:141] libmachine: (no-preload-168587) DBG | About to run SSH command:
	I0914 18:09:23.331208   62207 main.go:141] libmachine: (no-preload-168587) DBG | exit 0
	I0914 18:09:23.454525   62207 main.go:141] libmachine: (no-preload-168587) DBG | SSH cmd err, output: <nil>: 
	I0914 18:09:23.454883   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetConfigRaw
	I0914 18:09:23.455505   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetIP
	I0914 18:09:23.457696   62207 main.go:141] libmachine: (no-preload-168587) DBG | domain no-preload-168587 has defined MAC address 52:54:00:4c:40:8a in network mk-no-preload-168587
	I0914 18:09:23.458030   62207 main.go:141] libmachine: (no-preload-168587) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:40:8a", ip: ""} in network mk-no-preload-168587: {Iface:virbr2 ExpiryTime:2024-09-14 18:59:50 +0000 UTC Type:0 Mac:52:54:00:4c:40:8a Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:no-preload-168587 Clientid:01:52:54:00:4c:40:8a}
	I0914 18:09:23.458069   62207 main.go:141] libmachine: (no-preload-168587) DBG | domain no-preload-168587 has defined IP address 192.168.39.38 and MAC address 52:54:00:4c:40:8a in network mk-no-preload-168587
	I0914 18:09:23.458372   62207 profile.go:143] Saving config to /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/no-preload-168587/config.json ...
	I0914 18:09:23.458611   62207 machine.go:93] provisionDockerMachine start ...
	I0914 18:09:23.458633   62207 main.go:141] libmachine: (no-preload-168587) Calling .DriverName
	I0914 18:09:23.458828   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetSSHHostname
	I0914 18:09:23.461199   62207 main.go:141] libmachine: (no-preload-168587) DBG | domain no-preload-168587 has defined MAC address 52:54:00:4c:40:8a in network mk-no-preload-168587
	I0914 18:09:23.461540   62207 main.go:141] libmachine: (no-preload-168587) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:40:8a", ip: ""} in network mk-no-preload-168587: {Iface:virbr2 ExpiryTime:2024-09-14 18:59:50 +0000 UTC Type:0 Mac:52:54:00:4c:40:8a Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:no-preload-168587 Clientid:01:52:54:00:4c:40:8a}
	I0914 18:09:23.461576   62207 main.go:141] libmachine: (no-preload-168587) DBG | domain no-preload-168587 has defined IP address 192.168.39.38 and MAC address 52:54:00:4c:40:8a in network mk-no-preload-168587
	I0914 18:09:23.461705   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetSSHPort
	I0914 18:09:23.461895   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetSSHKeyPath
	I0914 18:09:23.462006   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetSSHKeyPath
	I0914 18:09:23.462153   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetSSHUsername
	I0914 18:09:23.462314   62207 main.go:141] libmachine: Using SSH client type: native
	I0914 18:09:23.462477   62207 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.38 22 <nil> <nil>}
	I0914 18:09:23.462488   62207 main.go:141] libmachine: About to run SSH command:
	hostname
	I0914 18:09:23.566278   62207 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0914 18:09:23.566310   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetMachineName
	I0914 18:09:23.566559   62207 buildroot.go:166] provisioning hostname "no-preload-168587"
	I0914 18:09:23.566581   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetMachineName
	I0914 18:09:23.566742   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetSSHHostname
	I0914 18:09:23.569254   62207 main.go:141] libmachine: (no-preload-168587) DBG | domain no-preload-168587 has defined MAC address 52:54:00:4c:40:8a in network mk-no-preload-168587
	I0914 18:09:23.569590   62207 main.go:141] libmachine: (no-preload-168587) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:40:8a", ip: ""} in network mk-no-preload-168587: {Iface:virbr2 ExpiryTime:2024-09-14 18:59:50 +0000 UTC Type:0 Mac:52:54:00:4c:40:8a Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:no-preload-168587 Clientid:01:52:54:00:4c:40:8a}
	I0914 18:09:23.569617   62207 main.go:141] libmachine: (no-preload-168587) DBG | domain no-preload-168587 has defined IP address 192.168.39.38 and MAC address 52:54:00:4c:40:8a in network mk-no-preload-168587
	I0914 18:09:23.569713   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetSSHPort
	I0914 18:09:23.569888   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetSSHKeyPath
	I0914 18:09:23.570009   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetSSHKeyPath
	I0914 18:09:23.570174   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetSSHUsername
	I0914 18:09:23.570344   62207 main.go:141] libmachine: Using SSH client type: native
	I0914 18:09:23.570556   62207 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.38 22 <nil> <nil>}
	I0914 18:09:23.570575   62207 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-168587 && echo "no-preload-168587" | sudo tee /etc/hostname
	I0914 18:09:23.687805   62207 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-168587
	
	I0914 18:09:23.687848   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetSSHHostname
	I0914 18:09:23.690447   62207 main.go:141] libmachine: (no-preload-168587) DBG | domain no-preload-168587 has defined MAC address 52:54:00:4c:40:8a in network mk-no-preload-168587
	I0914 18:09:23.690787   62207 main.go:141] libmachine: (no-preload-168587) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:40:8a", ip: ""} in network mk-no-preload-168587: {Iface:virbr2 ExpiryTime:2024-09-14 18:59:50 +0000 UTC Type:0 Mac:52:54:00:4c:40:8a Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:no-preload-168587 Clientid:01:52:54:00:4c:40:8a}
	I0914 18:09:23.690824   62207 main.go:141] libmachine: (no-preload-168587) DBG | domain no-preload-168587 has defined IP address 192.168.39.38 and MAC address 52:54:00:4c:40:8a in network mk-no-preload-168587
	I0914 18:09:23.690955   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetSSHPort
	I0914 18:09:23.691135   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetSSHKeyPath
	I0914 18:09:23.691279   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetSSHKeyPath
	I0914 18:09:23.691427   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetSSHUsername
	I0914 18:09:23.691590   62207 main.go:141] libmachine: Using SSH client type: native
	I0914 18:09:23.691768   62207 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.38 22 <nil> <nil>}
	I0914 18:09:23.691790   62207 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-168587' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-168587/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-168587' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0914 18:09:23.805502   62207 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0914 18:09:23.805527   62207 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19643-8806/.minikube CaCertPath:/home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19643-8806/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19643-8806/.minikube}
	I0914 18:09:23.805545   62207 buildroot.go:174] setting up certificates
	I0914 18:09:23.805553   62207 provision.go:84] configureAuth start
	I0914 18:09:23.805561   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetMachineName
	I0914 18:09:23.805798   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetIP
	I0914 18:09:23.808306   62207 main.go:141] libmachine: (no-preload-168587) DBG | domain no-preload-168587 has defined MAC address 52:54:00:4c:40:8a in network mk-no-preload-168587
	I0914 18:09:23.808643   62207 main.go:141] libmachine: (no-preload-168587) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:40:8a", ip: ""} in network mk-no-preload-168587: {Iface:virbr2 ExpiryTime:2024-09-14 18:59:50 +0000 UTC Type:0 Mac:52:54:00:4c:40:8a Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:no-preload-168587 Clientid:01:52:54:00:4c:40:8a}
	I0914 18:09:23.808668   62207 main.go:141] libmachine: (no-preload-168587) DBG | domain no-preload-168587 has defined IP address 192.168.39.38 and MAC address 52:54:00:4c:40:8a in network mk-no-preload-168587
	I0914 18:09:23.808819   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetSSHHostname
	I0914 18:09:23.811055   62207 main.go:141] libmachine: (no-preload-168587) DBG | domain no-preload-168587 has defined MAC address 52:54:00:4c:40:8a in network mk-no-preload-168587
	I0914 18:09:23.811374   62207 main.go:141] libmachine: (no-preload-168587) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:40:8a", ip: ""} in network mk-no-preload-168587: {Iface:virbr2 ExpiryTime:2024-09-14 18:59:50 +0000 UTC Type:0 Mac:52:54:00:4c:40:8a Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:no-preload-168587 Clientid:01:52:54:00:4c:40:8a}
	I0914 18:09:23.811401   62207 main.go:141] libmachine: (no-preload-168587) DBG | domain no-preload-168587 has defined IP address 192.168.39.38 and MAC address 52:54:00:4c:40:8a in network mk-no-preload-168587
	I0914 18:09:23.811586   62207 provision.go:143] copyHostCerts
	I0914 18:09:23.811647   62207 exec_runner.go:144] found /home/jenkins/minikube-integration/19643-8806/.minikube/ca.pem, removing ...
	I0914 18:09:23.811657   62207 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19643-8806/.minikube/ca.pem
	I0914 18:09:23.811712   62207 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19643-8806/.minikube/ca.pem (1082 bytes)
	I0914 18:09:23.811800   62207 exec_runner.go:144] found /home/jenkins/minikube-integration/19643-8806/.minikube/cert.pem, removing ...
	I0914 18:09:23.811808   62207 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19643-8806/.minikube/cert.pem
	I0914 18:09:23.811829   62207 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19643-8806/.minikube/cert.pem (1123 bytes)
	I0914 18:09:23.811880   62207 exec_runner.go:144] found /home/jenkins/minikube-integration/19643-8806/.minikube/key.pem, removing ...
	I0914 18:09:23.811887   62207 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19643-8806/.minikube/key.pem
	I0914 18:09:23.811908   62207 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19643-8806/.minikube/key.pem (1675 bytes)
	I0914 18:09:23.811956   62207 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19643-8806/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca-key.pem org=jenkins.no-preload-168587 san=[127.0.0.1 192.168.39.38 localhost minikube no-preload-168587]
	I0914 18:09:24.051868   62207 provision.go:177] copyRemoteCerts
	I0914 18:09:24.051936   62207 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0914 18:09:24.051958   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetSSHHostname
	I0914 18:09:24.054842   62207 main.go:141] libmachine: (no-preload-168587) DBG | domain no-preload-168587 has defined MAC address 52:54:00:4c:40:8a in network mk-no-preload-168587
	I0914 18:09:24.055107   62207 main.go:141] libmachine: (no-preload-168587) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:40:8a", ip: ""} in network mk-no-preload-168587: {Iface:virbr2 ExpiryTime:2024-09-14 18:59:50 +0000 UTC Type:0 Mac:52:54:00:4c:40:8a Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:no-preload-168587 Clientid:01:52:54:00:4c:40:8a}
	I0914 18:09:24.055138   62207 main.go:141] libmachine: (no-preload-168587) DBG | domain no-preload-168587 has defined IP address 192.168.39.38 and MAC address 52:54:00:4c:40:8a in network mk-no-preload-168587
	I0914 18:09:24.055321   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetSSHPort
	I0914 18:09:24.055514   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetSSHKeyPath
	I0914 18:09:24.055664   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetSSHUsername
	I0914 18:09:24.055804   62207 sshutil.go:53] new ssh client: &{IP:192.168.39.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/no-preload-168587/id_rsa Username:docker}
	I0914 18:09:24.140378   62207 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0914 18:09:24.168422   62207 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0914 18:09:24.194540   62207 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0914 18:09:24.217910   62207 provision.go:87] duration metric: took 412.343545ms to configureAuth
	I0914 18:09:24.217942   62207 buildroot.go:189] setting minikube options for container-runtime
	I0914 18:09:24.218180   62207 config.go:182] Loaded profile config "no-preload-168587": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0914 18:09:24.218255   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetSSHHostname
	I0914 18:09:24.220788   62207 main.go:141] libmachine: (no-preload-168587) DBG | domain no-preload-168587 has defined MAC address 52:54:00:4c:40:8a in network mk-no-preload-168587
	I0914 18:09:24.221216   62207 main.go:141] libmachine: (no-preload-168587) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:40:8a", ip: ""} in network mk-no-preload-168587: {Iface:virbr2 ExpiryTime:2024-09-14 18:59:50 +0000 UTC Type:0 Mac:52:54:00:4c:40:8a Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:no-preload-168587 Clientid:01:52:54:00:4c:40:8a}
	I0914 18:09:24.221259   62207 main.go:141] libmachine: (no-preload-168587) DBG | domain no-preload-168587 has defined IP address 192.168.39.38 and MAC address 52:54:00:4c:40:8a in network mk-no-preload-168587
	I0914 18:09:24.221408   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetSSHPort
	I0914 18:09:24.221678   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetSSHKeyPath
	I0914 18:09:24.221842   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetSSHKeyPath
	I0914 18:09:24.222033   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetSSHUsername
	I0914 18:09:24.222218   62207 main.go:141] libmachine: Using SSH client type: native
	I0914 18:09:24.222399   62207 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.38 22 <nil> <nil>}
	I0914 18:09:24.222417   62207 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0914 18:09:24.433203   62207 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0914 18:09:24.433230   62207 machine.go:96] duration metric: took 974.605605ms to provisionDockerMachine
	I0914 18:09:24.433241   62207 start.go:293] postStartSetup for "no-preload-168587" (driver="kvm2")
	I0914 18:09:24.433253   62207 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0914 18:09:24.433282   62207 main.go:141] libmachine: (no-preload-168587) Calling .DriverName
	I0914 18:09:24.433595   62207 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0914 18:09:24.433625   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetSSHHostname
	I0914 18:09:24.436247   62207 main.go:141] libmachine: (no-preload-168587) DBG | domain no-preload-168587 has defined MAC address 52:54:00:4c:40:8a in network mk-no-preload-168587
	I0914 18:09:24.436710   62207 main.go:141] libmachine: (no-preload-168587) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:40:8a", ip: ""} in network mk-no-preload-168587: {Iface:virbr2 ExpiryTime:2024-09-14 18:59:50 +0000 UTC Type:0 Mac:52:54:00:4c:40:8a Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:no-preload-168587 Clientid:01:52:54:00:4c:40:8a}
	I0914 18:09:24.436746   62207 main.go:141] libmachine: (no-preload-168587) DBG | domain no-preload-168587 has defined IP address 192.168.39.38 and MAC address 52:54:00:4c:40:8a in network mk-no-preload-168587
	I0914 18:09:24.436855   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetSSHPort
	I0914 18:09:24.437015   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetSSHKeyPath
	I0914 18:09:24.437189   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetSSHUsername
	I0914 18:09:24.437305   62207 sshutil.go:53] new ssh client: &{IP:192.168.39.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/no-preload-168587/id_rsa Username:docker}
	I0914 18:09:24.516493   62207 ssh_runner.go:195] Run: cat /etc/os-release
	I0914 18:09:24.520486   62207 info.go:137] Remote host: Buildroot 2023.02.9
	I0914 18:09:24.520518   62207 filesync.go:126] Scanning /home/jenkins/minikube-integration/19643-8806/.minikube/addons for local assets ...
	I0914 18:09:24.520612   62207 filesync.go:126] Scanning /home/jenkins/minikube-integration/19643-8806/.minikube/files for local assets ...
	I0914 18:09:24.520687   62207 filesync.go:149] local asset: /home/jenkins/minikube-integration/19643-8806/.minikube/files/etc/ssl/certs/160162.pem -> 160162.pem in /etc/ssl/certs
	I0914 18:09:24.520775   62207 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0914 18:09:24.530274   62207 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/files/etc/ssl/certs/160162.pem --> /etc/ssl/certs/160162.pem (1708 bytes)
	I0914 18:09:24.553381   62207 start.go:296] duration metric: took 120.123302ms for postStartSetup
	I0914 18:09:24.553422   62207 fix.go:56] duration metric: took 20.086564499s for fixHost
	I0914 18:09:24.553445   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetSSHHostname
	I0914 18:09:24.555832   62207 main.go:141] libmachine: (no-preload-168587) DBG | domain no-preload-168587 has defined MAC address 52:54:00:4c:40:8a in network mk-no-preload-168587
	I0914 18:09:24.556100   62207 main.go:141] libmachine: (no-preload-168587) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:40:8a", ip: ""} in network mk-no-preload-168587: {Iface:virbr2 ExpiryTime:2024-09-14 18:59:50 +0000 UTC Type:0 Mac:52:54:00:4c:40:8a Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:no-preload-168587 Clientid:01:52:54:00:4c:40:8a}
	I0914 18:09:24.556133   62207 main.go:141] libmachine: (no-preload-168587) DBG | domain no-preload-168587 has defined IP address 192.168.39.38 and MAC address 52:54:00:4c:40:8a in network mk-no-preload-168587
	I0914 18:09:24.556376   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetSSHPort
	I0914 18:09:24.556605   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetSSHKeyPath
	I0914 18:09:24.556772   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetSSHKeyPath
	I0914 18:09:24.556922   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetSSHUsername
	I0914 18:09:24.557062   62207 main.go:141] libmachine: Using SSH client type: native
	I0914 18:09:24.557275   62207 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.38 22 <nil> <nil>}
	I0914 18:09:24.557285   62207 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0914 18:09:24.659101   62207 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726337364.632455119
	
	I0914 18:09:24.659128   62207 fix.go:216] guest clock: 1726337364.632455119
	I0914 18:09:24.659139   62207 fix.go:229] Guest: 2024-09-14 18:09:24.632455119 +0000 UTC Remote: 2024-09-14 18:09:24.553426386 +0000 UTC m=+357.567907862 (delta=79.028733ms)
	I0914 18:09:24.659165   62207 fix.go:200] guest clock delta is within tolerance: 79.028733ms
	I0914 18:09:24.659171   62207 start.go:83] releasing machines lock for "no-preload-168587", held for 20.192350446s
	I0914 18:09:24.659209   62207 main.go:141] libmachine: (no-preload-168587) Calling .DriverName
	I0914 18:09:24.659445   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetIP
	I0914 18:09:24.662626   62207 main.go:141] libmachine: (no-preload-168587) DBG | domain no-preload-168587 has defined MAC address 52:54:00:4c:40:8a in network mk-no-preload-168587
	I0914 18:09:24.663051   62207 main.go:141] libmachine: (no-preload-168587) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:40:8a", ip: ""} in network mk-no-preload-168587: {Iface:virbr2 ExpiryTime:2024-09-14 18:59:50 +0000 UTC Type:0 Mac:52:54:00:4c:40:8a Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:no-preload-168587 Clientid:01:52:54:00:4c:40:8a}
	I0914 18:09:24.663082   62207 main.go:141] libmachine: (no-preload-168587) DBG | domain no-preload-168587 has defined IP address 192.168.39.38 and MAC address 52:54:00:4c:40:8a in network mk-no-preload-168587
	I0914 18:09:24.663225   62207 main.go:141] libmachine: (no-preload-168587) Calling .DriverName
	I0914 18:09:24.663802   62207 main.go:141] libmachine: (no-preload-168587) Calling .DriverName
	I0914 18:09:24.663972   62207 main.go:141] libmachine: (no-preload-168587) Calling .DriverName
	I0914 18:09:24.664063   62207 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0914 18:09:24.664114   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetSSHHostname
	I0914 18:09:24.664195   62207 ssh_runner.go:195] Run: cat /version.json
	I0914 18:09:24.664221   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetSSHHostname
	I0914 18:09:24.666971   62207 main.go:141] libmachine: (no-preload-168587) DBG | domain no-preload-168587 has defined MAC address 52:54:00:4c:40:8a in network mk-no-preload-168587
	I0914 18:09:24.667255   62207 main.go:141] libmachine: (no-preload-168587) DBG | domain no-preload-168587 has defined MAC address 52:54:00:4c:40:8a in network mk-no-preload-168587
	I0914 18:09:24.667398   62207 main.go:141] libmachine: (no-preload-168587) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:40:8a", ip: ""} in network mk-no-preload-168587: {Iface:virbr2 ExpiryTime:2024-09-14 18:59:50 +0000 UTC Type:0 Mac:52:54:00:4c:40:8a Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:no-preload-168587 Clientid:01:52:54:00:4c:40:8a}
	I0914 18:09:24.667433   62207 main.go:141] libmachine: (no-preload-168587) DBG | domain no-preload-168587 has defined IP address 192.168.39.38 and MAC address 52:54:00:4c:40:8a in network mk-no-preload-168587
	I0914 18:09:24.667555   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetSSHPort
	I0914 18:09:24.667753   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetSSHKeyPath
	I0914 18:09:24.667787   62207 main.go:141] libmachine: (no-preload-168587) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:40:8a", ip: ""} in network mk-no-preload-168587: {Iface:virbr2 ExpiryTime:2024-09-14 18:59:50 +0000 UTC Type:0 Mac:52:54:00:4c:40:8a Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:no-preload-168587 Clientid:01:52:54:00:4c:40:8a}
	I0914 18:09:24.667816   62207 main.go:141] libmachine: (no-preload-168587) DBG | domain no-preload-168587 has defined IP address 192.168.39.38 and MAC address 52:54:00:4c:40:8a in network mk-no-preload-168587
	I0914 18:09:24.667913   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetSSHUsername
	I0914 18:09:24.667988   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetSSHPort
	I0914 18:09:24.668058   62207 sshutil.go:53] new ssh client: &{IP:192.168.39.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/no-preload-168587/id_rsa Username:docker}
	I0914 18:09:24.668109   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetSSHKeyPath
	I0914 18:09:24.668236   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetSSHUsername
	I0914 18:09:24.668356   62207 sshutil.go:53] new ssh client: &{IP:192.168.39.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/no-preload-168587/id_rsa Username:docker}
	I0914 18:09:24.743805   62207 ssh_runner.go:195] Run: systemctl --version
	I0914 18:09:24.776583   62207 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0914 18:09:24.924635   62207 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0914 18:09:24.930891   62207 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0914 18:09:24.930979   62207 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0914 18:09:24.952228   62207 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0914 18:09:24.952258   62207 start.go:495] detecting cgroup driver to use...
	I0914 18:09:24.952344   62207 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0914 18:09:24.967770   62207 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0914 18:09:24.983218   62207 docker.go:217] disabling cri-docker service (if available) ...
	I0914 18:09:24.983280   62207 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0914 18:09:24.997311   62207 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0914 18:09:25.011736   62207 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0914 18:09:25.135920   62207 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0914 18:09:25.323727   62207 docker.go:233] disabling docker service ...
	I0914 18:09:25.323793   62207 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0914 18:09:25.341243   62207 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0914 18:09:25.358703   62207 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0914 18:09:25.495826   62207 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0914 18:09:25.621684   62207 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0914 18:09:25.637386   62207 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0914 18:09:25.655826   62207 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0914 18:09:25.655947   62207 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 18:09:25.669204   62207 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0914 18:09:25.669266   62207 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 18:09:25.680265   62207 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 18:09:25.690860   62207 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 18:09:25.702002   62207 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0914 18:09:25.713256   62207 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 18:09:25.724125   62207 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 18:09:25.742195   62207 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 18:09:25.752680   62207 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0914 18:09:25.762842   62207 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0914 18:09:25.762920   62207 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0914 18:09:25.775680   62207 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0914 18:09:25.785190   62207 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 18:09:25.907175   62207 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0914 18:09:25.995654   62207 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0914 18:09:25.995731   62207 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0914 18:09:26.000829   62207 start.go:563] Will wait 60s for crictl version
	I0914 18:09:26.000896   62207 ssh_runner.go:195] Run: which crictl
	I0914 18:09:26.004522   62207 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0914 18:09:26.041674   62207 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0914 18:09:26.041745   62207 ssh_runner.go:195] Run: crio --version
	I0914 18:09:26.069091   62207 ssh_runner.go:195] Run: crio --version
	I0914 18:09:26.107475   62207 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0914 18:09:26.108650   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetIP
	I0914 18:09:26.111782   62207 main.go:141] libmachine: (no-preload-168587) DBG | domain no-preload-168587 has defined MAC address 52:54:00:4c:40:8a in network mk-no-preload-168587
	I0914 18:09:26.112110   62207 main.go:141] libmachine: (no-preload-168587) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:40:8a", ip: ""} in network mk-no-preload-168587: {Iface:virbr2 ExpiryTime:2024-09-14 18:59:50 +0000 UTC Type:0 Mac:52:54:00:4c:40:8a Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:no-preload-168587 Clientid:01:52:54:00:4c:40:8a}
	I0914 18:09:26.112139   62207 main.go:141] libmachine: (no-preload-168587) DBG | domain no-preload-168587 has defined IP address 192.168.39.38 and MAC address 52:54:00:4c:40:8a in network mk-no-preload-168587
	I0914 18:09:26.112279   62207 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0914 18:09:26.116339   62207 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0914 18:09:26.128616   62207 kubeadm.go:883] updating cluster {Name:no-preload-168587 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19643/minikube-v1.34.0-1726281733-19643-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726281268-19643@sha256:cce8be0c1ac4e3d852132008ef1cc1dcf5b79f708d025db83f146ae65db32e8e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-168587 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.38 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0914 18:09:26.128755   62207 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0914 18:09:26.128796   62207 ssh_runner.go:195] Run: sudo crictl images --output json
	I0914 18:09:26.165175   62207 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0914 18:09:26.165197   62207 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.1 registry.k8s.io/kube-controller-manager:v1.31.1 registry.k8s.io/kube-scheduler:v1.31.1 registry.k8s.io/kube-proxy:v1.31.1 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.15-0 registry.k8s.io/coredns/coredns:v1.11.3 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0914 18:09:26.165282   62207 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.1
	I0914 18:09:26.165301   62207 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I0914 18:09:26.165302   62207 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I0914 18:09:26.165276   62207 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 18:09:26.165346   62207 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.1
	I0914 18:09:26.165309   62207 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.3
	I0914 18:09:26.165443   62207 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.1
	I0914 18:09:26.165451   62207 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.1
	I0914 18:09:26.166853   62207 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0914 18:09:26.166858   62207 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.1
	I0914 18:09:26.166864   62207 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.1
	I0914 18:09:26.166873   62207 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I0914 18:09:26.166852   62207 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 18:09:26.166852   62207 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.3: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.3
	I0914 18:09:26.166911   62207 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.1
	I0914 18:09:26.166928   62207 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.1
	I0914 18:09:26.366393   62207 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.1
	I0914 18:09:26.398127   62207 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I0914 18:09:26.401173   62207 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.1
	I0914 18:09:26.405861   62207 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.1" does not exist at hash "175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1" in container runtime
	I0914 18:09:26.405910   62207 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.1
	I0914 18:09:26.405982   62207 ssh_runner.go:195] Run: which crictl
	I0914 18:09:26.410513   62207 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.15-0
	I0914 18:09:26.411414   62207 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.3
	I0914 18:09:26.416692   62207 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.1
	I0914 18:09:26.417710   62207 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.1
	I0914 18:09:26.643066   62207 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.1" does not exist at hash "9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b" in container runtime
	I0914 18:09:26.643114   62207 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.1
	I0914 18:09:26.643177   62207 ssh_runner.go:195] Run: which crictl
	I0914 18:09:26.643195   62207 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I0914 18:09:26.643242   62207 cache_images.go:116] "registry.k8s.io/etcd:3.5.15-0" needs transfer: "registry.k8s.io/etcd:3.5.15-0" does not exist at hash "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" in container runtime
	I0914 18:09:26.643278   62207 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.3" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.3" does not exist at hash "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6" in container runtime
	I0914 18:09:26.643293   62207 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.1" does not exist at hash "6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee" in container runtime
	I0914 18:09:26.643282   62207 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.15-0
	I0914 18:09:26.643307   62207 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.3
	I0914 18:09:26.643323   62207 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.1
	I0914 18:09:26.643328   62207 ssh_runner.go:195] Run: which crictl
	I0914 18:09:26.643351   62207 ssh_runner.go:195] Run: which crictl
	I0914 18:09:26.643366   62207 ssh_runner.go:195] Run: which crictl
	I0914 18:09:26.643386   62207 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.1" needs transfer: "registry.k8s.io/kube-proxy:v1.31.1" does not exist at hash "60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561" in container runtime
	I0914 18:09:26.643412   62207 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.1
	I0914 18:09:26.643436   62207 ssh_runner.go:195] Run: which crictl
	I0914 18:09:26.654984   62207 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I0914 18:09:26.655016   62207 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I0914 18:09:26.655035   62207 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0914 18:09:26.655016   62207 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I0914 18:09:26.733881   62207 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I0914 18:09:26.733967   62207 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I0914 18:09:26.769624   62207 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I0914 18:09:26.778708   62207 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0914 18:09:26.778836   62207 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I0914 18:09:26.778855   62207 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I0914 18:09:26.821344   62207 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I0914 18:09:26.821358   62207 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I0914 18:09:26.899012   62207 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I0914 18:09:26.906693   62207 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0914 18:09:26.909875   62207 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I0914 18:09:26.916458   62207 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I0914 18:09:26.944355   62207 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I0914 18:09:26.949250   62207 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19643-8806/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.1
	I0914 18:09:26.949400   62207 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I0914 18:09:25.582447   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:09:28.084142   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:09:25.949851   63448 node_ready.go:53] node "default-k8s-diff-port-243449" has status "Ready":"False"
	I0914 18:09:26.950390   63448 node_ready.go:49] node "default-k8s-diff-port-243449" has status "Ready":"True"
	I0914 18:09:26.950418   63448 node_ready.go:38] duration metric: took 5.004868966s for node "default-k8s-diff-port-243449" to be "Ready" ...
	I0914 18:09:26.950430   63448 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 18:09:26.956875   63448 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-8v8s7" in "kube-system" namespace to be "Ready" ...
	I0914 18:09:26.963909   63448 pod_ready.go:93] pod "coredns-7c65d6cfc9-8v8s7" in "kube-system" namespace has status "Ready":"True"
	I0914 18:09:26.963935   63448 pod_ready.go:82] duration metric: took 7.027533ms for pod "coredns-7c65d6cfc9-8v8s7" in "kube-system" namespace to be "Ready" ...
	I0914 18:09:26.963945   63448 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-243449" in "kube-system" namespace to be "Ready" ...
	I0914 18:09:28.971297   63448 pod_ready.go:93] pod "etcd-default-k8s-diff-port-243449" in "kube-system" namespace has status "Ready":"True"
	I0914 18:09:28.971327   63448 pod_ready.go:82] duration metric: took 2.007374825s for pod "etcd-default-k8s-diff-port-243449" in "kube-system" namespace to be "Ready" ...
	I0914 18:09:28.971340   63448 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-243449" in "kube-system" namespace to be "Ready" ...
	I0914 18:09:28.977510   63448 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-243449" in "kube-system" namespace has status "Ready":"True"
	I0914 18:09:28.977535   63448 pod_ready.go:82] duration metric: took 6.18573ms for pod "kube-apiserver-default-k8s-diff-port-243449" in "kube-system" namespace to be "Ready" ...
	I0914 18:09:28.977557   63448 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-243449" in "kube-system" namespace to be "Ready" ...
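The node_ready/pod_ready helpers in this stretch poll the standard Ready condition through the API. minikube does this with client-go, but an equivalent manual probe (context and object names taken from this run, jsonpath expression purely illustrative) would be:

	kubectl --context default-k8s-diff-port-243449 get node default-k8s-diff-port-243449 \
	  -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
	kubectl --context default-k8s-diff-port-243449 -n kube-system get pod etcd-default-k8s-diff-port-243449 \
	  -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'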
	I0914 18:09:25.374144   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:25.874109   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:26.374422   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:26.873444   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:27.373615   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:27.873395   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:28.373886   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:28.873510   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:29.374027   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:29.873502   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:27.035840   62207 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19643-8806/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.1
	I0914 18:09:27.035956   62207 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.1
	I0914 18:09:27.040828   62207 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19643-8806/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3
	I0914 18:09:27.040939   62207 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19643-8806/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I0914 18:09:27.040941   62207 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.3
	I0914 18:09:27.041026   62207 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0
	I0914 18:09:27.048278   62207 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19643-8806/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.1
	I0914 18:09:27.048345   62207 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19643-8806/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.1
	I0914 18:09:27.048388   62207 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.1
	I0914 18:09:27.048390   62207 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.1 (exists)
	I0914 18:09:27.048446   62207 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I0914 18:09:27.048423   62207 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.1 (exists)
	I0914 18:09:27.048482   62207 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I0914 18:09:27.048431   62207 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.1
	I0914 18:09:27.052221   62207 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.15-0 (exists)
	I0914 18:09:27.052401   62207 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.1 (exists)
	I0914 18:09:27.052585   62207 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.3 (exists)
	I0914 18:09:27.330779   62207 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 18:09:29.721998   62207 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.1: (2.673483443s)
	I0914 18:09:29.722035   62207 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19643-8806/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.1 from cache
	I0914 18:09:29.722064   62207 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.1
	I0914 18:09:29.722076   62207 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.1: (2.673496811s)
	I0914 18:09:29.722112   62207 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.1 (exists)
	I0914 18:09:29.722112   62207 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.1
	I0914 18:09:29.722194   62207 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (2.391387893s)
	I0914 18:09:29.722236   62207 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0914 18:09:29.722257   62207 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 18:09:29.722297   62207 ssh_runner.go:195] Run: which crictl
	I0914 18:09:31.485714   62207 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.1: (1.76356866s)
	I0914 18:09:31.485744   62207 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19643-8806/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.1 from cache
	I0914 18:09:31.485764   62207 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.15-0
	I0914 18:09:31.485817   62207 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0
	I0914 18:09:31.485820   62207 ssh_runner.go:235] Completed: which crictl: (1.763506603s)
	I0914 18:09:31.485862   62207 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 18:09:30.583013   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:09:33.083597   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:09:30.985230   63448 pod_ready.go:103] pod "kube-controller-manager-default-k8s-diff-port-243449" in "kube-system" namespace has status "Ready":"False"
	I0914 18:09:31.984182   63448 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-243449" in "kube-system" namespace has status "Ready":"True"
	I0914 18:09:31.984203   63448 pod_ready.go:82] duration metric: took 3.006637599s for pod "kube-controller-manager-default-k8s-diff-port-243449" in "kube-system" namespace to be "Ready" ...
	I0914 18:09:31.984212   63448 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-gbkqm" in "kube-system" namespace to be "Ready" ...
	I0914 18:09:31.989786   63448 pod_ready.go:93] pod "kube-proxy-gbkqm" in "kube-system" namespace has status "Ready":"True"
	I0914 18:09:31.989812   63448 pod_ready.go:82] duration metric: took 5.592466ms for pod "kube-proxy-gbkqm" in "kube-system" namespace to be "Ready" ...
	I0914 18:09:31.989823   63448 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-243449" in "kube-system" namespace to be "Ready" ...
	I0914 18:09:31.994224   63448 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-243449" in "kube-system" namespace has status "Ready":"True"
	I0914 18:09:31.994246   63448 pod_ready.go:82] duration metric: took 4.414059ms for pod "kube-scheduler-default-k8s-diff-port-243449" in "kube-system" namespace to be "Ready" ...
	I0914 18:09:31.994258   63448 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace to be "Ready" ...
	I0914 18:09:34.001035   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:09:30.373878   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:30.874351   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:31.373651   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:31.873914   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:32.373522   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:32.874439   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:33.373991   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:33.874056   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:34.373566   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:34.874140   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:34.781678   62207 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (3.295763296s)
	I0914 18:09:34.781783   62207 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 18:09:34.781814   62207 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0: (3.295968995s)
	I0914 18:09:34.781840   62207 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19643-8806/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 from cache
	I0914 18:09:34.781868   62207 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.1
	I0914 18:09:34.781900   62207 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.1
	I0914 18:09:36.744459   62207 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.962646981s)
	I0914 18:09:36.744514   62207 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.1: (1.962587733s)
	I0914 18:09:36.744551   62207 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19643-8806/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.1 from cache
	I0914 18:09:36.744576   62207 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 18:09:36.744590   62207 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.3
	I0914 18:09:36.744658   62207 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3
	I0914 18:09:35.582596   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:09:38.083260   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:09:36.002284   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:09:38.002962   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:09:35.374151   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:35.873725   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:36.373500   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:36.873617   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:37.373826   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:37.874068   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:38.373459   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:38.873666   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:39.373936   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:39.873551   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:38.848091   62207 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3: (2.103407014s)
	I0914 18:09:38.848126   62207 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19643-8806/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 from cache
	I0914 18:09:38.848152   62207 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.1
	I0914 18:09:38.848217   62207 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.1
	I0914 18:09:38.848153   62207 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.103554199s)
	I0914 18:09:38.848283   62207 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19643-8806/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0914 18:09:38.848368   62207 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0914 18:09:40.307247   62207 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.1: (1.459002378s)
	I0914 18:09:40.307287   62207 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19643-8806/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.1 from cache
	I0914 18:09:40.307269   62207 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.458886581s)
	I0914 18:09:40.307327   62207 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0914 18:09:40.307334   62207 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0914 18:09:40.307382   62207 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0914 18:09:40.958177   62207 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19643-8806/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0914 18:09:40.958222   62207 cache_images.go:123] Successfully loaded all cached images
	I0914 18:09:40.958228   62207 cache_images.go:92] duration metric: took 14.793018447s to LoadCachedImages
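The LoadCachedImages sequence above reduces, per image, to: probe CRI-O's store with podman, drop whatever stale copy crictl can still see, then stream the cached tarball back in. A hand-run equivalent for one of the images from this log (a sketch of the flow, not the exact minikube code path, which also compares image hashes):

	sudo podman image inspect --format '{{.Id}}' registry.k8s.io/kube-proxy:v1.31.1 \
	  || { sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1; \
	       sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.1; }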
	I0914 18:09:40.958241   62207 kubeadm.go:934] updating node { 192.168.39.38 8443 v1.31.1 crio true true} ...
	I0914 18:09:40.958347   62207 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-168587 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.38
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:no-preload-168587 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0914 18:09:40.958415   62207 ssh_runner.go:195] Run: crio config
	I0914 18:09:41.003620   62207 cni.go:84] Creating CNI manager for ""
	I0914 18:09:41.003643   62207 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0914 18:09:41.003653   62207 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0914 18:09:41.003674   62207 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.38 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-168587 NodeName:no-preload-168587 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.38"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.38 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0914 18:09:41.003850   62207 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.38
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-168587"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.38
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.38"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0914 18:09:41.003920   62207 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0914 18:09:41.014462   62207 binaries.go:44] Found k8s binaries, skipping transfer
	I0914 18:09:41.014541   62207 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0914 18:09:41.023964   62207 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0914 18:09:41.040206   62207 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0914 18:09:41.055630   62207 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2158 bytes)
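The kubeadm config rendered above has just been written out to /var/tmp/minikube/kubeadm.yaml.new for the init phases further down. If the kubeadm build in use ships the validate subcommand (an assumption; this log never exercises it), the file could be checked offline first:

	sudo /var/lib/minikube/binaries/v1.31.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new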
	I0914 18:09:41.072881   62207 ssh_runner.go:195] Run: grep 192.168.39.38	control-plane.minikube.internal$ /etc/hosts
	I0914 18:09:41.076449   62207 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.38	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0914 18:09:41.090075   62207 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 18:09:41.210405   62207 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0914 18:09:41.228173   62207 certs.go:68] Setting up /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/no-preload-168587 for IP: 192.168.39.38
	I0914 18:09:41.228197   62207 certs.go:194] generating shared ca certs ...
	I0914 18:09:41.228213   62207 certs.go:226] acquiring lock for ca certs: {Name:mkb663a3180967f5f94f0c355b2cd55067394331 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 18:09:41.228383   62207 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19643-8806/.minikube/ca.key
	I0914 18:09:41.228443   62207 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19643-8806/.minikube/proxy-client-ca.key
	I0914 18:09:41.228457   62207 certs.go:256] generating profile certs ...
	I0914 18:09:41.228586   62207 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/no-preload-168587/client.key
	I0914 18:09:41.228667   62207 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/no-preload-168587/apiserver.key.d11ec263
	I0914 18:09:41.228731   62207 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/no-preload-168587/proxy-client.key
	I0914 18:09:41.228889   62207 certs.go:484] found cert: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/16016.pem (1338 bytes)
	W0914 18:09:41.228932   62207 certs.go:480] ignoring /home/jenkins/minikube-integration/19643-8806/.minikube/certs/16016_empty.pem, impossibly tiny 0 bytes
	I0914 18:09:41.228944   62207 certs.go:484] found cert: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca-key.pem (1679 bytes)
	I0914 18:09:41.228976   62207 certs.go:484] found cert: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca.pem (1082 bytes)
	I0914 18:09:41.229008   62207 certs.go:484] found cert: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/cert.pem (1123 bytes)
	I0914 18:09:41.229045   62207 certs.go:484] found cert: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/key.pem (1675 bytes)
	I0914 18:09:41.229102   62207 certs.go:484] found cert: /home/jenkins/minikube-integration/19643-8806/.minikube/files/etc/ssl/certs/160162.pem (1708 bytes)
	I0914 18:09:41.229913   62207 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0914 18:09:41.259871   62207 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0914 18:09:41.286359   62207 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0914 18:09:41.315410   62207 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0914 18:09:41.345541   62207 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/no-preload-168587/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0914 18:09:41.380128   62207 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/no-preload-168587/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0914 18:09:41.411130   62207 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/no-preload-168587/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0914 18:09:41.442136   62207 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/no-preload-168587/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0914 18:09:41.464823   62207 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0914 18:09:41.488153   62207 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/certs/16016.pem --> /usr/share/ca-certificates/16016.pem (1338 bytes)
	I0914 18:09:41.513788   62207 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/files/etc/ssl/certs/160162.pem --> /usr/share/ca-certificates/160162.pem (1708 bytes)
	I0914 18:09:41.537256   62207 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0914 18:09:41.553550   62207 ssh_runner.go:195] Run: openssl version
	I0914 18:09:41.559366   62207 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/160162.pem && ln -fs /usr/share/ca-certificates/160162.pem /etc/ssl/certs/160162.pem"
	I0914 18:09:41.571498   62207 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/160162.pem
	I0914 18:09:41.576889   62207 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 14 17:01 /usr/share/ca-certificates/160162.pem
	I0914 18:09:41.576947   62207 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/160162.pem
	I0914 18:09:41.583651   62207 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/160162.pem /etc/ssl/certs/3ec20f2e.0"
	I0914 18:09:41.594743   62207 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0914 18:09:41.605811   62207 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0914 18:09:41.610034   62207 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 14 16:44 /usr/share/ca-certificates/minikubeCA.pem
	I0914 18:09:41.610103   62207 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0914 18:09:41.615810   62207 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0914 18:09:41.627145   62207 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16016.pem && ln -fs /usr/share/ca-certificates/16016.pem /etc/ssl/certs/16016.pem"
	I0914 18:09:41.639956   62207 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16016.pem
	I0914 18:09:41.644647   62207 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 14 17:01 /usr/share/ca-certificates/16016.pem
	I0914 18:09:41.644705   62207 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16016.pem
	I0914 18:09:41.650281   62207 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16016.pem /etc/ssl/certs/51391683.0"
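The three ln -fs blocks above follow the usual OpenSSL c_rehash convention: the symlink name is the certificate's subject hash plus a ".0" suffix, which is where values like b5213941.0, 3ec20f2e.0 and 51391683.0 come from. Reproduced by hand for the CA from this run:

	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)   # prints e.g. b5213941
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"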
	I0914 18:09:41.662354   62207 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0914 18:09:41.667150   62207 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0914 18:09:41.673263   62207 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0914 18:09:41.680660   62207 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0914 18:09:41.687283   62207 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0914 18:09:41.693256   62207 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0914 18:09:41.698969   62207 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
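The six openssl runs just above are 24-hour expiry checks: -checkend 86400 exits 0 only if the certificate stays valid for at least that many seconds, which is what lets minikube skip regenerating them here. For example:

	openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 \
	  && echo "good for another 24h" || echo "expiring (or unreadable) - regenerate"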
	I0914 18:09:41.704543   62207 kubeadm.go:392] StartCluster: {Name:no-preload-168587 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19643/minikube-v1.34.0-1726281733-19643-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726281268-19643@sha256:cce8be0c1ac4e3d852132008ef1cc1dcf5b79f708d025db83f146ae65db32e8e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-168587 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.38 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0914 18:09:41.704671   62207 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0914 18:09:41.704750   62207 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0914 18:09:41.741255   62207 cri.go:89] found id: ""
	I0914 18:09:41.741354   62207 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0914 18:09:41.751360   62207 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0914 18:09:41.751377   62207 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0914 18:09:41.751417   62207 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0914 18:09:41.761492   62207 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0914 18:09:41.762591   62207 kubeconfig.go:125] found "no-preload-168587" server: "https://192.168.39.38:8443"
	I0914 18:09:41.764876   62207 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0914 18:09:41.774868   62207 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.38
	I0914 18:09:41.774901   62207 kubeadm.go:1160] stopping kube-system containers ...
	I0914 18:09:41.774913   62207 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0914 18:09:41.774969   62207 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0914 18:09:41.810189   62207 cri.go:89] found id: ""
	I0914 18:09:41.810248   62207 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0914 18:09:41.827903   62207 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0914 18:09:41.837504   62207 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0914 18:09:41.837532   62207 kubeadm.go:157] found existing configuration files:
	
	I0914 18:09:41.837585   62207 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0914 18:09:41.846260   62207 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0914 18:09:41.846322   62207 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0914 18:09:41.855350   62207 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0914 18:09:41.864096   62207 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0914 18:09:41.864153   62207 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0914 18:09:41.874772   62207 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0914 18:09:41.885427   62207 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0914 18:09:41.885502   62207 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0914 18:09:41.897121   62207 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0914 18:09:41.906955   62207 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0914 18:09:41.907020   62207 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0914 18:09:41.918253   62207 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0914 18:09:41.930134   62207 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 18:09:40.084800   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:09:42.581757   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:09:44.583611   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:09:40.502272   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:09:43.001471   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:09:40.374231   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:40.873955   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:41.374306   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:41.873511   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:42.373419   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:42.874077   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:43.374329   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:43.873782   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:44.373478   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:44.874120   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:42.054830   62207 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 18:09:42.754174   62207 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0914 18:09:42.973037   62207 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 18:09:43.043041   62207 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0914 18:09:43.119704   62207 api_server.go:52] waiting for apiserver process to appear ...
	I0914 18:09:43.119805   62207 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:43.620541   62207 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:44.120849   62207 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:44.139382   62207 api_server.go:72] duration metric: took 1.019679094s to wait for apiserver process to appear ...
	I0914 18:09:44.139406   62207 api_server.go:88] waiting for apiserver healthz status ...
	I0914 18:09:44.139424   62207 api_server.go:253] Checking apiserver healthz at https://192.168.39.38:8443/healthz ...
	I0914 18:09:44.139876   62207 api_server.go:269] stopped: https://192.168.39.38:8443/healthz: Get "https://192.168.39.38:8443/healthz": dial tcp 192.168.39.38:8443: connect: connection refused
	I0914 18:09:44.639981   62207 api_server.go:253] Checking apiserver healthz at https://192.168.39.38:8443/healthz ...
	I0914 18:09:47.262096   62207 api_server.go:279] https://192.168.39.38:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0914 18:09:47.262132   62207 api_server.go:103] status: https://192.168.39.38:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0914 18:09:47.262151   62207 api_server.go:253] Checking apiserver healthz at https://192.168.39.38:8443/healthz ...
	I0914 18:09:47.280626   62207 api_server.go:279] https://192.168.39.38:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0914 18:09:47.280652   62207 api_server.go:103] status: https://192.168.39.38:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0914 18:09:47.640152   62207 api_server.go:253] Checking apiserver healthz at https://192.168.39.38:8443/healthz ...
	I0914 18:09:47.646640   62207 api_server.go:279] https://192.168.39.38:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0914 18:09:47.646676   62207 api_server.go:103] status: https://192.168.39.38:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0914 18:09:48.140256   62207 api_server.go:253] Checking apiserver healthz at https://192.168.39.38:8443/healthz ...
	I0914 18:09:48.145520   62207 api_server.go:279] https://192.168.39.38:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0914 18:09:48.145557   62207 api_server.go:103] status: https://192.168.39.38:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0914 18:09:48.640147   62207 api_server.go:253] Checking apiserver healthz at https://192.168.39.38:8443/healthz ...
	I0914 18:09:48.645032   62207 api_server.go:279] https://192.168.39.38:8443/healthz returned 200:
	ok
	I0914 18:09:48.654567   62207 api_server.go:141] control plane version: v1.31.1
	I0914 18:09:48.654600   62207 api_server.go:131] duration metric: took 4.515188826s to wait for apiserver health ...
	I0914 18:09:48.654609   62207 cni.go:84] Creating CNI manager for ""
	I0914 18:09:48.654615   62207 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0914 18:09:48.656828   62207 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
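	The polling loop above (api_server.go) keeps probing https://192.168.39.38:8443/healthz, tolerating connection-refused, 403 (before the RBAC bootstrap hook finishes) and 500 (while post-start hooks are still running) until the endpoint returns 200. A minimal standalone sketch of such a probe loop follows; it assumes anonymous HTTPS access with certificate verification disabled and illustrative interval/timeout values, whereas minikube's real checker authenticates with the cluster's client certificates and has its own backoff.

	// healthz_probe.go - minimal sketch of the /healthz polling seen above.
	// Assumptions: anonymous HTTPS with certificate verification disabled;
	// endpoint, interval and timeout values are illustrative only.
	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	// waitForHealthz polls url until it returns HTTP 200 or the timeout expires.
	// Non-200 bodies (403 before RBAC bootstrap, 500 while post-start hooks run)
	// are printed, mirroring the log output above.
	func waitForHealthz(url string, interval, timeout time.Duration) error {
		client := &http.Client{
			Timeout: 5 * time.Second,
			Transport: &http.Transport{
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // sketch only
			},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err != nil {
				fmt.Printf("stopped: %v\n", err) // e.g. connection refused while the apiserver restarts
			} else {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					fmt.Printf("%s returned 200: %s\n", url, body)
					return nil
				}
				fmt.Printf("%s returned %d:\n%s\n", url, resp.StatusCode, body)
			}
			time.Sleep(interval)
		}
		return fmt.Errorf("apiserver did not become healthy within %s", timeout)
	}

	func main() {
		if err := waitForHealthz("https://192.168.39.38:8443/healthz", 500*time.Millisecond, 4*time.Minute); err != nil {
			fmt.Println(err)
		}
	}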
	I0914 18:09:47.082431   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:09:49.582001   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:09:45.500938   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:09:48.002332   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:09:45.374173   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:45.873537   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:46.373462   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:46.874196   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:47.374297   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:47.874112   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:48.373627   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:48.873473   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:49.374289   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:49.873411   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:48.658151   62207 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0914 18:09:48.692232   62207 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
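	The two steps above create /etc/cni/net.d and copy a 496-byte 1-k8s.conflist onto the node. As a rough illustration only, the sketch below writes a generic bridge+portmap conflist of that kind; the JSON content, subnet and plugin options are assumptions, not the actual file minikube transfers over SSH.

	// write_conflist.go - illustrative only: writes a generic bridge CNI conflist
	// to /etc/cni/net.d/1-k8s.conflist. The JSON below is a typical bridge+portmap
	// configuration, NOT the exact 496-byte file copied in the log above.
	package main

	import (
		"log"
		"os"
		"path/filepath"
	)

	const conflist = `{
	  "cniVersion": "0.4.0",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "isGateway": true,
	      "ipMasq": true,
	      "ipam": {
	        "type": "host-local",
	        "subnet": "10.244.0.0/16"
	      }
	    },
	    {
	      "type": "portmap",
	      "capabilities": {"portMappings": true}
	    }
	  ]
	}
	`

	func main() {
		dir := "/etc/cni/net.d"
		if err := os.MkdirAll(dir, 0o755); err != nil { // mirrors: sudo mkdir -p /etc/cni/net.d
			log.Fatal(err)
		}
		if err := os.WriteFile(filepath.Join(dir, "1-k8s.conflist"), []byte(conflist), 0o644); err != nil {
			log.Fatal(err)
		}
		log.Println("wrote bridge CNI conflist")
	}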
	I0914 18:09:48.734461   62207 system_pods.go:43] waiting for kube-system pods to appear ...
	I0914 18:09:48.746689   62207 system_pods.go:59] 8 kube-system pods found
	I0914 18:09:48.746723   62207 system_pods.go:61] "coredns-7c65d6cfc9-mwhvh" [38800077-a7ff-4c8c-8375-4efac2ae40b8] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0914 18:09:48.746733   62207 system_pods.go:61] "etcd-no-preload-168587" [bdb166bb-8c07-448c-a97c-2146e84f139b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0914 18:09:48.746744   62207 system_pods.go:61] "kube-apiserver-no-preload-168587" [8ad59d56-cb86-4028-bf16-3733eb32ad8d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0914 18:09:48.746752   62207 system_pods.go:61] "kube-controller-manager-no-preload-168587" [fd66d0aa-cc35-4330-aa6b-571dbeaa6490] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0914 18:09:48.746761   62207 system_pods.go:61] "kube-proxy-lvp9h" [75c154d8-c76d-49eb-9497-dd17199e9d20] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0914 18:09:48.746771   62207 system_pods.go:61] "kube-scheduler-no-preload-168587" [858c948b-9025-48ab-907a-5b69aefbb24c] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0914 18:09:48.746782   62207 system_pods.go:61] "metrics-server-6867b74b74-n276z" [69e25ed4-dc8e-4c68-955e-e7226d066ac4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 18:09:48.746790   62207 system_pods.go:61] "storage-provisioner" [41c92694-2d3a-4025-8e28-ddea7b9b9c5b] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0914 18:09:48.746801   62207 system_pods.go:74] duration metric: took 12.315296ms to wait for pod list to return data ...
	I0914 18:09:48.746811   62207 node_conditions.go:102] verifying NodePressure condition ...
	I0914 18:09:48.751399   62207 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0914 18:09:48.751428   62207 node_conditions.go:123] node cpu capacity is 2
	I0914 18:09:48.751440   62207 node_conditions.go:105] duration metric: took 4.625335ms to run NodePressure ...
	I0914 18:09:48.751461   62207 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 18:09:49.051211   62207 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0914 18:09:49.057333   62207 kubeadm.go:739] kubelet initialised
	I0914 18:09:49.057366   62207 kubeadm.go:740] duration metric: took 6.124032ms waiting for restarted kubelet to initialise ...
	I0914 18:09:49.057379   62207 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 18:09:49.062570   62207 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-mwhvh" in "kube-system" namespace to be "Ready" ...
	I0914 18:09:51.069219   62207 pod_ready.go:103] pod "coredns-7c65d6cfc9-mwhvh" in "kube-system" namespace has status "Ready":"False"
	I0914 18:09:51.588043   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:09:54.082931   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:09:50.499759   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:09:52.502450   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:09:55.000767   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:09:50.374229   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:50.873429   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:51.373547   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:51.874090   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:52.373513   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:52.874222   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:53.374123   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:53.873893   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:54.373451   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:54.873583   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:53.069338   62207 pod_ready.go:103] pod "coredns-7c65d6cfc9-mwhvh" in "kube-system" namespace has status "Ready":"False"
	I0914 18:09:53.570290   62207 pod_ready.go:93] pod "coredns-7c65d6cfc9-mwhvh" in "kube-system" namespace has status "Ready":"True"
	I0914 18:09:53.570323   62207 pod_ready.go:82] duration metric: took 4.507716999s for pod "coredns-7c65d6cfc9-mwhvh" in "kube-system" namespace to be "Ready" ...
	I0914 18:09:53.570333   62207 pod_ready.go:79] waiting up to 4m0s for pod "etcd-no-preload-168587" in "kube-system" namespace to be "Ready" ...
	I0914 18:09:55.577317   62207 pod_ready.go:103] pod "etcd-no-preload-168587" in "kube-system" namespace has status "Ready":"False"
	I0914 18:09:56.581937   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:09:58.583632   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:09:57.000913   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:09:59.001429   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:09:55.374078   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:55.873810   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:09:55.873965   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:09:55.913981   62996 cri.go:89] found id: ""
	I0914 18:09:55.914011   62996 logs.go:276] 0 containers: []
	W0914 18:09:55.914023   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:09:55.914030   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:09:55.914091   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:09:55.948423   62996 cri.go:89] found id: ""
	I0914 18:09:55.948459   62996 logs.go:276] 0 containers: []
	W0914 18:09:55.948467   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:09:55.948472   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:09:55.948530   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:09:55.986470   62996 cri.go:89] found id: ""
	I0914 18:09:55.986507   62996 logs.go:276] 0 containers: []
	W0914 18:09:55.986520   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:09:55.986530   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:09:55.986598   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:09:56.022172   62996 cri.go:89] found id: ""
	I0914 18:09:56.022200   62996 logs.go:276] 0 containers: []
	W0914 18:09:56.022214   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:09:56.022220   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:09:56.022267   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:09:56.065503   62996 cri.go:89] found id: ""
	I0914 18:09:56.065552   62996 logs.go:276] 0 containers: []
	W0914 18:09:56.065564   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:09:56.065572   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:09:56.065632   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:09:56.101043   62996 cri.go:89] found id: ""
	I0914 18:09:56.101072   62996 logs.go:276] 0 containers: []
	W0914 18:09:56.101082   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:09:56.101089   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:09:56.101156   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:09:56.133820   62996 cri.go:89] found id: ""
	I0914 18:09:56.133852   62996 logs.go:276] 0 containers: []
	W0914 18:09:56.133864   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:09:56.133872   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:09:56.133925   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:09:56.172334   62996 cri.go:89] found id: ""
	I0914 18:09:56.172358   62996 logs.go:276] 0 containers: []
	W0914 18:09:56.172369   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:09:56.172380   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:09:56.172398   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:09:56.186476   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:09:56.186513   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:09:56.308336   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:09:56.308366   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:09:56.308388   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:09:56.386374   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:09:56.386410   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:09:56.426333   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:09:56.426360   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
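	Each log-gathering cycle above shells out (via minikube's ssh_runner) to commands such as sudo crictl ps -a --quiet --name=<component>, journalctl -u kubelet / -u crio and dmesg, and reports "0 containers" because the control plane has not come back yet. A minimal local sketch of the container-discovery step, using os/exec instead of SSH, could look like this; the helper name and local execution are assumptions for illustration.

	// list_containers.go - local sketch of the container discovery step logged
	// above; minikube runs these commands on the guest VM over SSH, here they
	// are executed locally purely for illustration.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// listContainerIDs returns the IDs of all containers (any state) whose name
	// matches the given component, mirroring:
	//   sudo crictl ps -a --quiet --name=<component>
	func listContainerIDs(component string) ([]string, error) {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+component).Output()
		if err != nil {
			return nil, err
		}
		return strings.Fields(string(out)), nil
	}

	func main() {
		for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
			"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard"} {
			ids, err := listContainerIDs(c)
			if err != nil {
				fmt.Printf("listing %q failed: %v\n", c, err)
				continue
			}
			fmt.Printf("%d containers found matching %q: %v\n", len(ids), c, ids)
		}
	}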
	I0914 18:09:58.978306   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:58.991093   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:09:58.991175   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:09:59.029861   62996 cri.go:89] found id: ""
	I0914 18:09:59.029890   62996 logs.go:276] 0 containers: []
	W0914 18:09:59.029899   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:09:59.029905   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:09:59.029962   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:09:59.067744   62996 cri.go:89] found id: ""
	I0914 18:09:59.067772   62996 logs.go:276] 0 containers: []
	W0914 18:09:59.067783   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:09:59.067791   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:09:59.067973   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:09:59.105666   62996 cri.go:89] found id: ""
	I0914 18:09:59.105695   62996 logs.go:276] 0 containers: []
	W0914 18:09:59.105707   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:09:59.105714   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:09:59.105796   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:09:59.153884   62996 cri.go:89] found id: ""
	I0914 18:09:59.153916   62996 logs.go:276] 0 containers: []
	W0914 18:09:59.153929   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:09:59.153937   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:09:59.154007   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:09:59.191462   62996 cri.go:89] found id: ""
	I0914 18:09:59.191492   62996 logs.go:276] 0 containers: []
	W0914 18:09:59.191503   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:09:59.191509   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:09:59.191574   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:09:59.246299   62996 cri.go:89] found id: ""
	I0914 18:09:59.246326   62996 logs.go:276] 0 containers: []
	W0914 18:09:59.246336   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:09:59.246357   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:09:59.246413   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:09:59.292821   62996 cri.go:89] found id: ""
	I0914 18:09:59.292847   62996 logs.go:276] 0 containers: []
	W0914 18:09:59.292856   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:09:59.292862   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:09:59.292918   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:09:59.334130   62996 cri.go:89] found id: ""
	I0914 18:09:59.334176   62996 logs.go:276] 0 containers: []
	W0914 18:09:59.334187   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:09:59.334198   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:09:59.334211   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:09:59.386847   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:09:59.386884   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:09:59.400163   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:09:59.400193   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:09:59.476375   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:09:59.476400   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:09:59.476416   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:09:59.554564   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:09:59.554599   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:09:57.578803   62207 pod_ready.go:103] pod "etcd-no-preload-168587" in "kube-system" namespace has status "Ready":"False"
	I0914 18:09:59.576525   62207 pod_ready.go:93] pod "etcd-no-preload-168587" in "kube-system" namespace has status "Ready":"True"
	I0914 18:09:59.576547   62207 pod_ready.go:82] duration metric: took 6.006207927s for pod "etcd-no-preload-168587" in "kube-system" namespace to be "Ready" ...
	I0914 18:09:59.576556   62207 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-no-preload-168587" in "kube-system" namespace to be "Ready" ...
	I0914 18:10:00.084027   62207 pod_ready.go:93] pod "kube-apiserver-no-preload-168587" in "kube-system" namespace has status "Ready":"True"
	I0914 18:10:00.084054   62207 pod_ready.go:82] duration metric: took 507.490867ms for pod "kube-apiserver-no-preload-168587" in "kube-system" namespace to be "Ready" ...
	I0914 18:10:00.084067   62207 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-no-preload-168587" in "kube-system" namespace to be "Ready" ...
	I0914 18:10:00.089044   62207 pod_ready.go:93] pod "kube-controller-manager-no-preload-168587" in "kube-system" namespace has status "Ready":"True"
	I0914 18:10:00.089068   62207 pod_ready.go:82] duration metric: took 4.991847ms for pod "kube-controller-manager-no-preload-168587" in "kube-system" namespace to be "Ready" ...
	I0914 18:10:00.089079   62207 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-lvp9h" in "kube-system" namespace to be "Ready" ...
	I0914 18:10:00.093160   62207 pod_ready.go:93] pod "kube-proxy-lvp9h" in "kube-system" namespace has status "Ready":"True"
	I0914 18:10:00.093179   62207 pod_ready.go:82] duration metric: took 4.093257ms for pod "kube-proxy-lvp9h" in "kube-system" namespace to be "Ready" ...
	I0914 18:10:00.093198   62207 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-no-preload-168587" in "kube-system" namespace to be "Ready" ...
	I0914 18:10:00.096786   62207 pod_ready.go:93] pod "kube-scheduler-no-preload-168587" in "kube-system" namespace has status "Ready":"True"
	I0914 18:10:00.096800   62207 pod_ready.go:82] duration metric: took 3.594996ms for pod "kube-scheduler-no-preload-168587" in "kube-system" namespace to be "Ready" ...
	I0914 18:10:00.096807   62207 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace to be "Ready" ...
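	The pod_ready.go lines above wait, pod by pod, for the Ready condition to flip to True before moving on to the next control-plane component (and keep reporting "Ready":"False" for the metrics-server pods). A hedged client-go sketch of that kind of wait follows; the kubeconfig path, poll interval and helper name are illustrative, not minikube's implementation, while the namespace and pod name reuse values from the log.

	// pod_ready_sketch.go - illustrative client-go version of the "waiting up to
	// 4m0s for pod ... to be Ready" loops above.
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// waitPodReady polls until the named pod reports the Ready condition as True.
	func waitPodReady(ctx context.Context, cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
		return wait.PollUntilContextTimeout(ctx, 2*time.Second, timeout, true,
			func(ctx context.Context) (bool, error) {
				pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
				if err != nil {
					return false, nil // keep retrying while the apiserver settles
				}
				for _, cond := range pod.Status.Conditions {
					if cond.Type == corev1.PodReady {
						return cond.Status == corev1.ConditionTrue, nil
					}
				}
				return false, nil
			})
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig") // path assumed for the sketch
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		err = waitPodReady(context.Background(), cs, "kube-system", "coredns-7c65d6cfc9-mwhvh", 4*time.Minute)
		fmt.Println("ready:", err == nil, err)
	}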
	I0914 18:10:01.082601   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:03.581290   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:01.502864   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:04.001645   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:02.095079   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:10:02.108933   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:10:02.109003   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:10:02.141838   62996 cri.go:89] found id: ""
	I0914 18:10:02.141861   62996 logs.go:276] 0 containers: []
	W0914 18:10:02.141869   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:10:02.141875   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:10:02.141934   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:10:02.176437   62996 cri.go:89] found id: ""
	I0914 18:10:02.176460   62996 logs.go:276] 0 containers: []
	W0914 18:10:02.176467   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:10:02.176472   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:10:02.176516   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:10:02.210341   62996 cri.go:89] found id: ""
	I0914 18:10:02.210369   62996 logs.go:276] 0 containers: []
	W0914 18:10:02.210381   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:10:02.210388   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:10:02.210434   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:10:02.243343   62996 cri.go:89] found id: ""
	I0914 18:10:02.243373   62996 logs.go:276] 0 containers: []
	W0914 18:10:02.243384   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:10:02.243391   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:10:02.243461   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:10:02.276630   62996 cri.go:89] found id: ""
	I0914 18:10:02.276657   62996 logs.go:276] 0 containers: []
	W0914 18:10:02.276668   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:10:02.276675   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:10:02.276736   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:10:02.311626   62996 cri.go:89] found id: ""
	I0914 18:10:02.311659   62996 logs.go:276] 0 containers: []
	W0914 18:10:02.311674   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:10:02.311682   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:10:02.311748   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:10:02.345868   62996 cri.go:89] found id: ""
	I0914 18:10:02.345892   62996 logs.go:276] 0 containers: []
	W0914 18:10:02.345901   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:10:02.345908   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:10:02.345966   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:10:02.380111   62996 cri.go:89] found id: ""
	I0914 18:10:02.380139   62996 logs.go:276] 0 containers: []
	W0914 18:10:02.380147   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:10:02.380156   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:10:02.380167   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:10:02.421061   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:10:02.421094   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:10:02.474596   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:10:02.474633   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:10:02.487460   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:10:02.487491   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:10:02.554178   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:10:02.554206   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:10:02.554218   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:10:05.138863   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:10:05.152233   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:10:05.152299   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:10:05.187891   62996 cri.go:89] found id: ""
	I0914 18:10:05.187918   62996 logs.go:276] 0 containers: []
	W0914 18:10:05.187929   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:10:05.187936   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:10:05.188000   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:10:05.231634   62996 cri.go:89] found id: ""
	I0914 18:10:05.231667   62996 logs.go:276] 0 containers: []
	W0914 18:10:05.231679   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:10:05.231686   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:10:05.231748   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:10:05.273445   62996 cri.go:89] found id: ""
	I0914 18:10:05.273469   62996 logs.go:276] 0 containers: []
	W0914 18:10:05.273478   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:10:05.273492   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:10:05.273551   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:10:05.308168   62996 cri.go:89] found id: ""
	I0914 18:10:05.308205   62996 logs.go:276] 0 containers: []
	W0914 18:10:05.308216   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:10:05.308224   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:10:05.308285   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:10:02.103118   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:04.103451   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:06.603049   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:05.582900   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:08.083020   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:06.500670   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:08.500752   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:05.343292   62996 cri.go:89] found id: ""
	I0914 18:10:05.343325   62996 logs.go:276] 0 containers: []
	W0914 18:10:05.343336   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:10:05.343343   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:10:05.343404   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:10:05.380420   62996 cri.go:89] found id: ""
	I0914 18:10:05.380445   62996 logs.go:276] 0 containers: []
	W0914 18:10:05.380452   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:10:05.380458   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:10:05.380503   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:10:05.415585   62996 cri.go:89] found id: ""
	I0914 18:10:05.415609   62996 logs.go:276] 0 containers: []
	W0914 18:10:05.415617   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:10:05.415623   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:10:05.415687   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:10:05.457170   62996 cri.go:89] found id: ""
	I0914 18:10:05.457198   62996 logs.go:276] 0 containers: []
	W0914 18:10:05.457208   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:10:05.457219   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:10:05.457234   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:10:05.495647   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:10:05.495681   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:10:05.543775   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:10:05.543813   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:10:05.556717   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:10:05.556750   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:10:05.624690   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:10:05.624713   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:10:05.624728   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:10:08.205292   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:10:08.217720   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:10:08.217786   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:10:08.250560   62996 cri.go:89] found id: ""
	I0914 18:10:08.250590   62996 logs.go:276] 0 containers: []
	W0914 18:10:08.250598   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:10:08.250604   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:10:08.250669   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:10:08.282085   62996 cri.go:89] found id: ""
	I0914 18:10:08.282115   62996 logs.go:276] 0 containers: []
	W0914 18:10:08.282123   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:10:08.282129   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:10:08.282202   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:10:08.314350   62996 cri.go:89] found id: ""
	I0914 18:10:08.314379   62996 logs.go:276] 0 containers: []
	W0914 18:10:08.314391   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:10:08.314398   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:10:08.314461   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:10:08.347672   62996 cri.go:89] found id: ""
	I0914 18:10:08.347703   62996 logs.go:276] 0 containers: []
	W0914 18:10:08.347714   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:10:08.347721   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:10:08.347780   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:10:08.385583   62996 cri.go:89] found id: ""
	I0914 18:10:08.385616   62996 logs.go:276] 0 containers: []
	W0914 18:10:08.385628   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:10:08.385636   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:10:08.385717   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:10:08.421135   62996 cri.go:89] found id: ""
	I0914 18:10:08.421166   62996 logs.go:276] 0 containers: []
	W0914 18:10:08.421176   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:10:08.421184   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:10:08.421242   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:10:08.456784   62996 cri.go:89] found id: ""
	I0914 18:10:08.456811   62996 logs.go:276] 0 containers: []
	W0914 18:10:08.456821   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:10:08.456828   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:10:08.456890   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:10:08.491658   62996 cri.go:89] found id: ""
	I0914 18:10:08.491690   62996 logs.go:276] 0 containers: []
	W0914 18:10:08.491698   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:10:08.491707   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:10:08.491718   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:10:08.544008   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:10:08.544045   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:10:08.557780   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:10:08.557813   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:10:08.631319   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:10:08.631354   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:10:08.631371   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:10:08.709845   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:10:08.709882   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:10:08.604603   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:11.103035   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:10.581739   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:12.582523   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:14.582676   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:11.000857   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:13.000915   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:15.001474   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:11.248034   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:10:11.261403   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:10:11.261471   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:10:11.294260   62996 cri.go:89] found id: ""
	I0914 18:10:11.294287   62996 logs.go:276] 0 containers: []
	W0914 18:10:11.294298   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:10:11.294305   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:10:11.294376   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:10:11.326784   62996 cri.go:89] found id: ""
	I0914 18:10:11.326811   62996 logs.go:276] 0 containers: []
	W0914 18:10:11.326822   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:10:11.326829   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:10:11.326878   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:10:11.359209   62996 cri.go:89] found id: ""
	I0914 18:10:11.359234   62996 logs.go:276] 0 containers: []
	W0914 18:10:11.359242   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:10:11.359247   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:10:11.359316   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:10:11.393800   62996 cri.go:89] found id: ""
	I0914 18:10:11.393828   62996 logs.go:276] 0 containers: []
	W0914 18:10:11.393836   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:10:11.393842   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:10:11.393889   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:10:11.425772   62996 cri.go:89] found id: ""
	I0914 18:10:11.425798   62996 logs.go:276] 0 containers: []
	W0914 18:10:11.425808   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:10:11.425815   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:10:11.425877   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:10:11.464139   62996 cri.go:89] found id: ""
	I0914 18:10:11.464165   62996 logs.go:276] 0 containers: []
	W0914 18:10:11.464174   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:10:11.464180   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:10:11.464230   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:10:11.498822   62996 cri.go:89] found id: ""
	I0914 18:10:11.498848   62996 logs.go:276] 0 containers: []
	W0914 18:10:11.498859   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:10:11.498869   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:10:11.498925   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:10:11.532591   62996 cri.go:89] found id: ""
	I0914 18:10:11.532623   62996 logs.go:276] 0 containers: []
	W0914 18:10:11.532634   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:10:11.532646   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:10:11.532660   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:10:11.608873   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:10:11.608892   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:10:11.608903   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:10:11.684622   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:10:11.684663   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:10:11.726639   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:10:11.726667   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:10:11.780380   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:10:11.780415   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:10:14.294514   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:10:14.308716   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:10:14.308779   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:10:14.348399   62996 cri.go:89] found id: ""
	I0914 18:10:14.348423   62996 logs.go:276] 0 containers: []
	W0914 18:10:14.348431   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:10:14.348437   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:10:14.348485   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:10:14.387040   62996 cri.go:89] found id: ""
	I0914 18:10:14.387071   62996 logs.go:276] 0 containers: []
	W0914 18:10:14.387082   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:10:14.387088   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:10:14.387144   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:10:14.424704   62996 cri.go:89] found id: ""
	I0914 18:10:14.424733   62996 logs.go:276] 0 containers: []
	W0914 18:10:14.424741   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:10:14.424746   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:10:14.424793   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:10:14.464395   62996 cri.go:89] found id: ""
	I0914 18:10:14.464431   62996 logs.go:276] 0 containers: []
	W0914 18:10:14.464442   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:10:14.464450   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:10:14.464511   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:10:14.495895   62996 cri.go:89] found id: ""
	I0914 18:10:14.495921   62996 logs.go:276] 0 containers: []
	W0914 18:10:14.495931   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:10:14.495938   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:10:14.496001   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:10:14.532877   62996 cri.go:89] found id: ""
	I0914 18:10:14.532904   62996 logs.go:276] 0 containers: []
	W0914 18:10:14.532914   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:10:14.532921   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:10:14.532987   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:10:14.568381   62996 cri.go:89] found id: ""
	I0914 18:10:14.568408   62996 logs.go:276] 0 containers: []
	W0914 18:10:14.568423   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:10:14.568430   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:10:14.568491   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:10:14.603867   62996 cri.go:89] found id: ""
	I0914 18:10:14.603897   62996 logs.go:276] 0 containers: []
	W0914 18:10:14.603908   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:10:14.603917   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:10:14.603933   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:10:14.616681   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:10:14.616705   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:10:14.687817   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:10:14.687852   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:10:14.687866   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:10:14.761672   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:10:14.761714   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:10:14.802676   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:10:14.802705   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:10:13.103818   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:15.602921   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:17.082737   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:19.082771   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:17.501947   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:20.000464   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:17.353218   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:10:17.366139   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:10:17.366224   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:10:17.404478   62996 cri.go:89] found id: ""
	I0914 18:10:17.404511   62996 logs.go:276] 0 containers: []
	W0914 18:10:17.404522   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:10:17.404530   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:10:17.404608   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:10:17.437553   62996 cri.go:89] found id: ""
	I0914 18:10:17.437579   62996 logs.go:276] 0 containers: []
	W0914 18:10:17.437588   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:10:17.437593   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:10:17.437648   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:10:17.473815   62996 cri.go:89] found id: ""
	I0914 18:10:17.473842   62996 logs.go:276] 0 containers: []
	W0914 18:10:17.473850   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:10:17.473855   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:10:17.473919   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:10:17.518593   62996 cri.go:89] found id: ""
	I0914 18:10:17.518617   62996 logs.go:276] 0 containers: []
	W0914 18:10:17.518625   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:10:17.518631   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:10:17.518679   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:10:17.554631   62996 cri.go:89] found id: ""
	I0914 18:10:17.554663   62996 logs.go:276] 0 containers: []
	W0914 18:10:17.554675   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:10:17.554682   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:10:17.554742   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:10:17.591485   62996 cri.go:89] found id: ""
	I0914 18:10:17.591512   62996 logs.go:276] 0 containers: []
	W0914 18:10:17.591520   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:10:17.591525   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:10:17.591582   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:10:17.629883   62996 cri.go:89] found id: ""
	I0914 18:10:17.629910   62996 logs.go:276] 0 containers: []
	W0914 18:10:17.629918   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:10:17.629925   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:10:17.629973   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:10:17.670639   62996 cri.go:89] found id: ""
	I0914 18:10:17.670666   62996 logs.go:276] 0 containers: []
	W0914 18:10:17.670677   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:10:17.670688   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:10:17.670700   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:10:17.725056   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:10:17.725095   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:10:17.738236   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:10:17.738267   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:10:17.812931   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:10:17.812963   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:10:17.812978   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:10:17.896394   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:10:17.896426   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:10:18.102598   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:20.104053   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:21.085272   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:23.583185   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:22.001396   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:24.500424   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:20.434465   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:10:20.448801   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:10:20.448878   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:10:20.482909   62996 cri.go:89] found id: ""
	I0914 18:10:20.482937   62996 logs.go:276] 0 containers: []
	W0914 18:10:20.482949   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:10:20.482956   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:10:20.483017   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:10:20.516865   62996 cri.go:89] found id: ""
	I0914 18:10:20.516888   62996 logs.go:276] 0 containers: []
	W0914 18:10:20.516896   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:10:20.516902   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:10:20.516961   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:10:20.556131   62996 cri.go:89] found id: ""
	I0914 18:10:20.556164   62996 logs.go:276] 0 containers: []
	W0914 18:10:20.556174   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:10:20.556182   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:10:20.556246   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:10:20.594755   62996 cri.go:89] found id: ""
	I0914 18:10:20.594779   62996 logs.go:276] 0 containers: []
	W0914 18:10:20.594787   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:10:20.594795   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:10:20.594841   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:10:20.630259   62996 cri.go:89] found id: ""
	I0914 18:10:20.630290   62996 logs.go:276] 0 containers: []
	W0914 18:10:20.630300   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:10:20.630307   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:10:20.630379   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:10:20.667721   62996 cri.go:89] found id: ""
	I0914 18:10:20.667754   62996 logs.go:276] 0 containers: []
	W0914 18:10:20.667763   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:10:20.667769   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:10:20.667826   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:10:20.706358   62996 cri.go:89] found id: ""
	I0914 18:10:20.706387   62996 logs.go:276] 0 containers: []
	W0914 18:10:20.706396   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:10:20.706401   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:10:20.706462   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:10:20.738514   62996 cri.go:89] found id: ""
	I0914 18:10:20.738541   62996 logs.go:276] 0 containers: []
	W0914 18:10:20.738549   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:10:20.738557   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:10:20.738576   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:10:20.775075   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:10:20.775105   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:10:20.825988   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:10:20.826026   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:10:20.839157   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:10:20.839194   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:10:20.915730   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:10:20.915750   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:10:20.915762   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:10:23.497427   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:10:23.511559   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:10:23.511633   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:10:23.546913   62996 cri.go:89] found id: ""
	I0914 18:10:23.546945   62996 logs.go:276] 0 containers: []
	W0914 18:10:23.546958   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:10:23.546969   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:10:23.547034   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:10:23.584438   62996 cri.go:89] found id: ""
	I0914 18:10:23.584457   62996 logs.go:276] 0 containers: []
	W0914 18:10:23.584463   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:10:23.584469   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:10:23.584517   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:10:23.618777   62996 cri.go:89] found id: ""
	I0914 18:10:23.618804   62996 logs.go:276] 0 containers: []
	W0914 18:10:23.618812   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:10:23.618817   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:10:23.618876   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:10:23.652197   62996 cri.go:89] found id: ""
	I0914 18:10:23.652225   62996 logs.go:276] 0 containers: []
	W0914 18:10:23.652236   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:10:23.652244   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:10:23.652304   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:10:23.687678   62996 cri.go:89] found id: ""
	I0914 18:10:23.687712   62996 logs.go:276] 0 containers: []
	W0914 18:10:23.687725   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:10:23.687733   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:10:23.687790   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:10:23.720884   62996 cri.go:89] found id: ""
	I0914 18:10:23.720918   62996 logs.go:276] 0 containers: []
	W0914 18:10:23.720929   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:10:23.720936   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:10:23.721004   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:10:23.753335   62996 cri.go:89] found id: ""
	I0914 18:10:23.753365   62996 logs.go:276] 0 containers: []
	W0914 18:10:23.753376   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:10:23.753384   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:10:23.753431   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:10:23.787177   62996 cri.go:89] found id: ""
	I0914 18:10:23.787209   62996 logs.go:276] 0 containers: []
	W0914 18:10:23.787230   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:10:23.787241   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:10:23.787254   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:10:23.864763   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:10:23.864802   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:10:23.903394   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:10:23.903424   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:10:23.952696   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:10:23.952734   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:10:23.967115   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:10:23.967142   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:10:24.035394   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
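	Every "describe nodes" attempt in this run fails the same way: nothing is answering on localhost:8443, which is consistent with the kube-apiserver container never being found by the probes above. A quick way to confirm that on the node (assumption: shell access to the node, with ss available in the node image):

	    sudo ss -tlnp | grep 8443 || echo "nothing listening on :8443"
	    sudo crictl ps -a --name=kube-apiserver                              # same probe as the log; without --quiet it also shows state
	    sudo journalctl -u kubelet -n 400 | grep -iE 'apiserver|static pod'  # hints on why the static pod is not starting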
	I0914 18:10:22.602815   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:24.603230   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:26.604416   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:26.082291   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:28.582007   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:26.501088   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:29.001400   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:26.536361   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:10:26.550666   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:10:26.550746   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:10:26.588940   62996 cri.go:89] found id: ""
	I0914 18:10:26.588974   62996 logs.go:276] 0 containers: []
	W0914 18:10:26.588988   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:10:26.588997   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:10:26.589064   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:10:26.627475   62996 cri.go:89] found id: ""
	I0914 18:10:26.627523   62996 logs.go:276] 0 containers: []
	W0914 18:10:26.627537   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:10:26.627546   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:10:26.627619   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:10:26.664995   62996 cri.go:89] found id: ""
	I0914 18:10:26.665021   62996 logs.go:276] 0 containers: []
	W0914 18:10:26.665029   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:10:26.665034   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:10:26.665087   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:10:26.699195   62996 cri.go:89] found id: ""
	I0914 18:10:26.699223   62996 logs.go:276] 0 containers: []
	W0914 18:10:26.699234   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:10:26.699241   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:10:26.699300   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:10:26.735746   62996 cri.go:89] found id: ""
	I0914 18:10:26.735781   62996 logs.go:276] 0 containers: []
	W0914 18:10:26.735790   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:10:26.735795   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:10:26.735857   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:10:26.772220   62996 cri.go:89] found id: ""
	I0914 18:10:26.772251   62996 logs.go:276] 0 containers: []
	W0914 18:10:26.772261   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:10:26.772270   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:10:26.772331   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:10:26.808301   62996 cri.go:89] found id: ""
	I0914 18:10:26.808330   62996 logs.go:276] 0 containers: []
	W0914 18:10:26.808339   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:10:26.808346   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:10:26.808412   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:10:26.844824   62996 cri.go:89] found id: ""
	I0914 18:10:26.844858   62996 logs.go:276] 0 containers: []
	W0914 18:10:26.844870   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:10:26.844880   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:10:26.844916   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:10:26.899960   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:10:26.899999   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:10:26.914413   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:10:26.914438   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:10:26.990599   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:10:26.990620   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:10:26.990632   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:10:27.067822   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:10:27.067872   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:10:29.610959   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:10:29.625456   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:10:29.625517   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:10:29.662963   62996 cri.go:89] found id: ""
	I0914 18:10:29.662990   62996 logs.go:276] 0 containers: []
	W0914 18:10:29.663002   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:10:29.663009   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:10:29.663078   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:10:29.702141   62996 cri.go:89] found id: ""
	I0914 18:10:29.702189   62996 logs.go:276] 0 containers: []
	W0914 18:10:29.702201   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:10:29.702208   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:10:29.702265   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:10:29.737559   62996 cri.go:89] found id: ""
	I0914 18:10:29.737584   62996 logs.go:276] 0 containers: []
	W0914 18:10:29.737592   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:10:29.737598   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:10:29.737644   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:10:29.773544   62996 cri.go:89] found id: ""
	I0914 18:10:29.773570   62996 logs.go:276] 0 containers: []
	W0914 18:10:29.773578   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:10:29.773586   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:10:29.773639   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:10:29.815355   62996 cri.go:89] found id: ""
	I0914 18:10:29.815401   62996 logs.go:276] 0 containers: []
	W0914 18:10:29.815414   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:10:29.815422   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:10:29.815490   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:10:29.855729   62996 cri.go:89] found id: ""
	I0914 18:10:29.855755   62996 logs.go:276] 0 containers: []
	W0914 18:10:29.855765   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:10:29.855772   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:10:29.855835   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:10:29.894023   62996 cri.go:89] found id: ""
	I0914 18:10:29.894048   62996 logs.go:276] 0 containers: []
	W0914 18:10:29.894056   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:10:29.894063   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:10:29.894120   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:10:29.928873   62996 cri.go:89] found id: ""
	I0914 18:10:29.928900   62996 logs.go:276] 0 containers: []
	W0914 18:10:29.928910   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:10:29.928921   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:10:29.928937   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:10:30.005879   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:10:30.005904   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:10:30.005917   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:10:30.087160   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:10:30.087196   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:10:30.126027   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:10:30.126058   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:10:30.178901   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:10:30.178941   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:10:28.604725   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:31.103833   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:30.582800   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:33.082884   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:31.001447   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:33.501525   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:32.692789   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:10:32.708884   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:10:32.708942   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:10:32.744684   62996 cri.go:89] found id: ""
	I0914 18:10:32.744711   62996 logs.go:276] 0 containers: []
	W0914 18:10:32.744722   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:10:32.744729   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:10:32.744789   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:10:32.778311   62996 cri.go:89] found id: ""
	I0914 18:10:32.778345   62996 logs.go:276] 0 containers: []
	W0914 18:10:32.778355   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:10:32.778362   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:10:32.778421   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:10:32.820122   62996 cri.go:89] found id: ""
	I0914 18:10:32.820150   62996 logs.go:276] 0 containers: []
	W0914 18:10:32.820158   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:10:32.820163   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:10:32.820213   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:10:32.856507   62996 cri.go:89] found id: ""
	I0914 18:10:32.856541   62996 logs.go:276] 0 containers: []
	W0914 18:10:32.856552   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:10:32.856559   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:10:32.856622   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:10:32.891891   62996 cri.go:89] found id: ""
	I0914 18:10:32.891922   62996 logs.go:276] 0 containers: []
	W0914 18:10:32.891934   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:10:32.891942   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:10:32.892001   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:10:32.936666   62996 cri.go:89] found id: ""
	I0914 18:10:32.936696   62996 logs.go:276] 0 containers: []
	W0914 18:10:32.936708   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:10:32.936715   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:10:32.936783   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:10:32.972287   62996 cri.go:89] found id: ""
	I0914 18:10:32.972321   62996 logs.go:276] 0 containers: []
	W0914 18:10:32.972333   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:10:32.972341   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:10:32.972406   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:10:33.028398   62996 cri.go:89] found id: ""
	I0914 18:10:33.028423   62996 logs.go:276] 0 containers: []
	W0914 18:10:33.028430   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:10:33.028438   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:10:33.028447   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:10:33.041604   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:10:33.041631   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:10:33.116278   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:10:33.116310   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:10:33.116325   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:10:33.194720   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:10:33.194755   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:10:33.235741   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:10:33.235778   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:10:33.603121   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:35.604573   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:35.083689   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:37.583721   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:36.000829   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:38.001022   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:40.002742   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:35.787601   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:10:35.801819   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:10:35.801895   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:10:35.837381   62996 cri.go:89] found id: ""
	I0914 18:10:35.837409   62996 logs.go:276] 0 containers: []
	W0914 18:10:35.837417   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:10:35.837423   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:10:35.837473   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:10:35.872876   62996 cri.go:89] found id: ""
	I0914 18:10:35.872907   62996 logs.go:276] 0 containers: []
	W0914 18:10:35.872915   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:10:35.872921   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:10:35.872972   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:10:35.908885   62996 cri.go:89] found id: ""
	I0914 18:10:35.908912   62996 logs.go:276] 0 containers: []
	W0914 18:10:35.908927   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:10:35.908932   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:10:35.908991   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:10:35.943358   62996 cri.go:89] found id: ""
	I0914 18:10:35.943386   62996 logs.go:276] 0 containers: []
	W0914 18:10:35.943395   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:10:35.943400   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:10:35.943450   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:10:35.978387   62996 cri.go:89] found id: ""
	I0914 18:10:35.978416   62996 logs.go:276] 0 containers: []
	W0914 18:10:35.978427   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:10:35.978434   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:10:35.978486   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:10:36.012836   62996 cri.go:89] found id: ""
	I0914 18:10:36.012863   62996 logs.go:276] 0 containers: []
	W0914 18:10:36.012874   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:10:36.012881   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:10:36.012931   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:10:36.048243   62996 cri.go:89] found id: ""
	I0914 18:10:36.048272   62996 logs.go:276] 0 containers: []
	W0914 18:10:36.048283   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:10:36.048290   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:10:36.048378   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:10:36.089415   62996 cri.go:89] found id: ""
	I0914 18:10:36.089449   62996 logs.go:276] 0 containers: []
	W0914 18:10:36.089460   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:10:36.089471   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:10:36.089484   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:10:36.141287   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:10:36.141324   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:10:36.154418   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:10:36.154444   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:10:36.228454   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:10:36.228483   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:10:36.228500   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:10:36.302020   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:10:36.302063   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:10:38.841946   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:10:38.855010   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:10:38.855072   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:10:38.890835   62996 cri.go:89] found id: ""
	I0914 18:10:38.890867   62996 logs.go:276] 0 containers: []
	W0914 18:10:38.890878   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:10:38.890886   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:10:38.890945   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:10:38.924675   62996 cri.go:89] found id: ""
	I0914 18:10:38.924700   62996 logs.go:276] 0 containers: []
	W0914 18:10:38.924708   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:10:38.924713   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:10:38.924761   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:10:38.959999   62996 cri.go:89] found id: ""
	I0914 18:10:38.960024   62996 logs.go:276] 0 containers: []
	W0914 18:10:38.960032   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:10:38.960038   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:10:38.960097   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:10:38.995718   62996 cri.go:89] found id: ""
	I0914 18:10:38.995747   62996 logs.go:276] 0 containers: []
	W0914 18:10:38.995755   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:10:38.995761   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:10:38.995810   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:10:39.031178   62996 cri.go:89] found id: ""
	I0914 18:10:39.031208   62996 logs.go:276] 0 containers: []
	W0914 18:10:39.031224   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:10:39.031232   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:10:39.031292   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:10:39.065511   62996 cri.go:89] found id: ""
	I0914 18:10:39.065540   62996 logs.go:276] 0 containers: []
	W0914 18:10:39.065560   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:10:39.065569   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:10:39.065628   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:10:39.103625   62996 cri.go:89] found id: ""
	I0914 18:10:39.103655   62996 logs.go:276] 0 containers: []
	W0914 18:10:39.103671   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:10:39.103678   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:10:39.103772   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:10:39.140140   62996 cri.go:89] found id: ""
	I0914 18:10:39.140169   62996 logs.go:276] 0 containers: []
	W0914 18:10:39.140179   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:10:39.140189   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:10:39.140205   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:10:39.154953   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:10:39.154980   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:10:39.226745   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:10:39.226778   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:10:39.226794   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:10:39.305268   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:10:39.305310   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:10:39.345363   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:10:39.345389   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:10:38.102910   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:40.103826   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:40.082907   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:42.083587   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:44.582457   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:42.500851   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:45.001069   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:41.897635   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:10:41.910895   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:10:41.910962   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:10:41.946302   62996 cri.go:89] found id: ""
	I0914 18:10:41.946327   62996 logs.go:276] 0 containers: []
	W0914 18:10:41.946338   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:10:41.946345   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:10:41.946405   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:10:41.983180   62996 cri.go:89] found id: ""
	I0914 18:10:41.983210   62996 logs.go:276] 0 containers: []
	W0914 18:10:41.983221   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:10:41.983231   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:10:41.983294   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:10:42.017923   62996 cri.go:89] found id: ""
	I0914 18:10:42.017946   62996 logs.go:276] 0 containers: []
	W0914 18:10:42.017954   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:10:42.017959   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:10:42.018006   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:10:42.052086   62996 cri.go:89] found id: ""
	I0914 18:10:42.052122   62996 logs.go:276] 0 containers: []
	W0914 18:10:42.052133   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:10:42.052140   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:10:42.052206   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:10:42.092000   62996 cri.go:89] found id: ""
	I0914 18:10:42.092029   62996 logs.go:276] 0 containers: []
	W0914 18:10:42.092040   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:10:42.092048   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:10:42.092114   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:10:42.130402   62996 cri.go:89] found id: ""
	I0914 18:10:42.130436   62996 logs.go:276] 0 containers: []
	W0914 18:10:42.130447   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:10:42.130455   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:10:42.130505   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:10:42.166614   62996 cri.go:89] found id: ""
	I0914 18:10:42.166639   62996 logs.go:276] 0 containers: []
	W0914 18:10:42.166647   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:10:42.166653   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:10:42.166704   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:10:42.199763   62996 cri.go:89] found id: ""
	I0914 18:10:42.199795   62996 logs.go:276] 0 containers: []
	W0914 18:10:42.199808   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:10:42.199820   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:10:42.199835   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:10:42.251564   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:10:42.251597   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:10:42.264771   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:10:42.264806   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:10:42.335441   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:10:42.335465   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:10:42.335489   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:10:42.417678   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:10:42.417715   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:10:44.956372   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:10:44.970643   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:10:44.970717   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:10:45.011625   62996 cri.go:89] found id: ""
	I0914 18:10:45.011659   62996 logs.go:276] 0 containers: []
	W0914 18:10:45.011671   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:10:45.011678   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:10:45.011738   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:10:45.047489   62996 cri.go:89] found id: ""
	I0914 18:10:45.047515   62996 logs.go:276] 0 containers: []
	W0914 18:10:45.047526   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:10:45.047541   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:10:45.047610   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:10:45.084909   62996 cri.go:89] found id: ""
	I0914 18:10:45.084935   62996 logs.go:276] 0 containers: []
	W0914 18:10:45.084957   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:10:45.084964   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:10:45.085035   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:10:45.120074   62996 cri.go:89] found id: ""
	I0914 18:10:45.120104   62996 logs.go:276] 0 containers: []
	W0914 18:10:45.120115   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:10:45.120123   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:10:45.120181   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:10:45.164010   62996 cri.go:89] found id: ""
	I0914 18:10:45.164039   62996 logs.go:276] 0 containers: []
	W0914 18:10:45.164050   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:10:45.164058   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:10:45.164128   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:10:45.209565   62996 cri.go:89] found id: ""
	I0914 18:10:45.209590   62996 logs.go:276] 0 containers: []
	W0914 18:10:45.209598   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:10:45.209604   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:10:45.209651   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:10:45.265484   62996 cri.go:89] found id: ""
	I0914 18:10:45.265513   62996 logs.go:276] 0 containers: []
	W0914 18:10:45.265521   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:10:45.265527   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:10:45.265593   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:10:45.300671   62996 cri.go:89] found id: ""
	I0914 18:10:45.300700   62996 logs.go:276] 0 containers: []
	W0914 18:10:45.300711   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:10:45.300722   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:10:45.300739   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:10:42.603017   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:45.104603   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:47.082010   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:49.082648   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:47.500917   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:50.001192   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:45.352657   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:10:45.352699   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:10:45.366347   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:10:45.366381   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:10:45.442993   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:10:45.443013   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:10:45.443024   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:10:45.523475   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:10:45.523522   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:10:48.062222   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:10:48.075764   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:10:48.075832   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:10:48.111836   62996 cri.go:89] found id: ""
	I0914 18:10:48.111864   62996 logs.go:276] 0 containers: []
	W0914 18:10:48.111876   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:10:48.111884   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:10:48.111942   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:10:48.144440   62996 cri.go:89] found id: ""
	I0914 18:10:48.144471   62996 logs.go:276] 0 containers: []
	W0914 18:10:48.144483   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:10:48.144490   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:10:48.144553   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:10:48.179694   62996 cri.go:89] found id: ""
	I0914 18:10:48.179724   62996 logs.go:276] 0 containers: []
	W0914 18:10:48.179732   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:10:48.179738   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:10:48.179799   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:10:48.217290   62996 cri.go:89] found id: ""
	I0914 18:10:48.217320   62996 logs.go:276] 0 containers: []
	W0914 18:10:48.217331   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:10:48.217337   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:10:48.217384   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:10:48.252071   62996 cri.go:89] found id: ""
	I0914 18:10:48.252098   62996 logs.go:276] 0 containers: []
	W0914 18:10:48.252105   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:10:48.252111   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:10:48.252172   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:10:48.285372   62996 cri.go:89] found id: ""
	I0914 18:10:48.285399   62996 logs.go:276] 0 containers: []
	W0914 18:10:48.285407   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:10:48.285414   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:10:48.285461   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:10:48.318015   62996 cri.go:89] found id: ""
	I0914 18:10:48.318040   62996 logs.go:276] 0 containers: []
	W0914 18:10:48.318048   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:10:48.318054   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:10:48.318099   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:10:48.350976   62996 cri.go:89] found id: ""
	I0914 18:10:48.351006   62996 logs.go:276] 0 containers: []
	W0914 18:10:48.351018   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:10:48.351027   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:10:48.351040   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:10:48.364707   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:10:48.364731   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:10:48.436438   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:10:48.436472   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:10:48.436488   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:10:48.517132   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:10:48.517165   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:10:48.555153   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:10:48.555182   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:10:47.603610   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:50.104612   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:51.083246   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:53.582120   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:52.001273   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:54.001308   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:51.108066   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:10:51.121176   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:10:51.121254   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:10:51.155641   62996 cri.go:89] found id: ""
	I0914 18:10:51.155675   62996 logs.go:276] 0 containers: []
	W0914 18:10:51.155687   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:10:51.155693   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:10:51.155744   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:10:51.189642   62996 cri.go:89] found id: ""
	I0914 18:10:51.189677   62996 logs.go:276] 0 containers: []
	W0914 18:10:51.189691   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:10:51.189698   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:10:51.189763   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:10:51.223337   62996 cri.go:89] found id: ""
	I0914 18:10:51.223365   62996 logs.go:276] 0 containers: []
	W0914 18:10:51.223375   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:10:51.223383   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:10:51.223446   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:10:51.259524   62996 cri.go:89] found id: ""
	I0914 18:10:51.259549   62996 logs.go:276] 0 containers: []
	W0914 18:10:51.259557   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:10:51.259568   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:10:51.259625   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:10:51.295307   62996 cri.go:89] found id: ""
	I0914 18:10:51.295336   62996 logs.go:276] 0 containers: []
	W0914 18:10:51.295347   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:10:51.295354   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:10:51.295419   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:10:51.330619   62996 cri.go:89] found id: ""
	I0914 18:10:51.330658   62996 logs.go:276] 0 containers: []
	W0914 18:10:51.330670   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:10:51.330677   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:10:51.330741   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:10:51.365146   62996 cri.go:89] found id: ""
	I0914 18:10:51.365178   62996 logs.go:276] 0 containers: []
	W0914 18:10:51.365191   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:10:51.365200   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:10:51.365263   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:10:51.403295   62996 cri.go:89] found id: ""
	I0914 18:10:51.403330   62996 logs.go:276] 0 containers: []
	W0914 18:10:51.403342   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:10:51.403353   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:10:51.403369   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:10:51.467426   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:10:51.467452   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:10:51.467471   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:10:51.552003   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:10:51.552037   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:10:51.591888   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:10:51.591921   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:10:51.645437   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:10:51.645472   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:10:54.160542   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:10:54.173965   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:10:54.174040   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:10:54.209242   62996 cri.go:89] found id: ""
	I0914 18:10:54.209270   62996 logs.go:276] 0 containers: []
	W0914 18:10:54.209281   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:10:54.209288   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:10:54.209365   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:10:54.242345   62996 cri.go:89] found id: ""
	I0914 18:10:54.242374   62996 logs.go:276] 0 containers: []
	W0914 18:10:54.242384   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:10:54.242392   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:10:54.242453   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:10:54.278677   62996 cri.go:89] found id: ""
	I0914 18:10:54.278707   62996 logs.go:276] 0 containers: []
	W0914 18:10:54.278718   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:10:54.278725   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:10:54.278793   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:10:54.314802   62996 cri.go:89] found id: ""
	I0914 18:10:54.314831   62996 logs.go:276] 0 containers: []
	W0914 18:10:54.314842   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:10:54.314849   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:10:54.314920   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:10:54.349075   62996 cri.go:89] found id: ""
	I0914 18:10:54.349100   62996 logs.go:276] 0 containers: []
	W0914 18:10:54.349120   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:10:54.349127   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:10:54.349189   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:10:54.382337   62996 cri.go:89] found id: ""
	I0914 18:10:54.382363   62996 logs.go:276] 0 containers: []
	W0914 18:10:54.382371   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:10:54.382376   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:10:54.382423   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:10:54.416613   62996 cri.go:89] found id: ""
	I0914 18:10:54.416640   62996 logs.go:276] 0 containers: []
	W0914 18:10:54.416649   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:10:54.416654   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:10:54.416701   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:10:54.449563   62996 cri.go:89] found id: ""
	I0914 18:10:54.449596   62996 logs.go:276] 0 containers: []
	W0914 18:10:54.449606   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:10:54.449617   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:10:54.449631   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:10:54.487454   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:10:54.487489   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:10:54.541679   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:10:54.541720   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:10:54.555267   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:10:54.555299   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:10:54.630280   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:10:54.630313   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:10:54.630323   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:10:52.603604   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:55.104734   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:55.582258   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:58.081905   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:56.002210   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:58.499961   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:57.215606   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:10:57.228469   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:10:57.228550   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:10:57.260643   62996 cri.go:89] found id: ""
	I0914 18:10:57.260675   62996 logs.go:276] 0 containers: []
	W0914 18:10:57.260684   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:10:57.260690   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:10:57.260750   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:10:57.294125   62996 cri.go:89] found id: ""
	I0914 18:10:57.294174   62996 logs.go:276] 0 containers: []
	W0914 18:10:57.294186   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:10:57.294196   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:10:57.294259   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:10:57.328078   62996 cri.go:89] found id: ""
	I0914 18:10:57.328101   62996 logs.go:276] 0 containers: []
	W0914 18:10:57.328108   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:10:57.328114   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:10:57.328173   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:10:57.362451   62996 cri.go:89] found id: ""
	I0914 18:10:57.362476   62996 logs.go:276] 0 containers: []
	W0914 18:10:57.362483   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:10:57.362489   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:10:57.362556   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:10:57.398273   62996 cri.go:89] found id: ""
	I0914 18:10:57.398298   62996 logs.go:276] 0 containers: []
	W0914 18:10:57.398306   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:10:57.398311   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:10:57.398374   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:10:57.431112   62996 cri.go:89] found id: ""
	I0914 18:10:57.431137   62996 logs.go:276] 0 containers: []
	W0914 18:10:57.431145   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:10:57.431151   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:10:57.431197   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:10:57.464930   62996 cri.go:89] found id: ""
	I0914 18:10:57.464956   62996 logs.go:276] 0 containers: []
	W0914 18:10:57.464966   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:10:57.464973   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:10:57.465033   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:10:57.501233   62996 cri.go:89] found id: ""
	I0914 18:10:57.501263   62996 logs.go:276] 0 containers: []
	W0914 18:10:57.501276   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:10:57.501287   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:10:57.501302   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:10:57.550798   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:10:57.550836   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:10:57.564238   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:10:57.564263   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:10:57.634387   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:10:57.634414   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:10:57.634424   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:10:57.714218   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:10:57.714253   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:11:00.251944   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:11:00.264817   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:11:00.264881   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:11:00.306613   62996 cri.go:89] found id: ""
	I0914 18:11:00.306641   62996 logs.go:276] 0 containers: []
	W0914 18:11:00.306651   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:11:00.306658   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:11:00.306717   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:11:00.340297   62996 cri.go:89] found id: ""
	I0914 18:11:00.340327   62996 logs.go:276] 0 containers: []
	W0914 18:11:00.340338   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:11:00.340346   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:11:00.340404   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:10:57.604025   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:00.104193   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:00.083208   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:02.582299   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:04.583803   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:00.500596   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:02.501405   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:04.501527   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:00.373553   62996 cri.go:89] found id: ""
	I0914 18:11:00.373594   62996 logs.go:276] 0 containers: []
	W0914 18:11:00.373603   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:11:00.373609   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:11:00.373657   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:11:00.407351   62996 cri.go:89] found id: ""
	I0914 18:11:00.407381   62996 logs.go:276] 0 containers: []
	W0914 18:11:00.407392   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:11:00.407400   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:11:00.407461   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:11:00.440976   62996 cri.go:89] found id: ""
	I0914 18:11:00.441005   62996 logs.go:276] 0 containers: []
	W0914 18:11:00.441016   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:11:00.441024   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:11:00.441085   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:11:00.478138   62996 cri.go:89] found id: ""
	I0914 18:11:00.478180   62996 logs.go:276] 0 containers: []
	W0914 18:11:00.478193   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:11:00.478201   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:11:00.478264   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:11:00.513861   62996 cri.go:89] found id: ""
	I0914 18:11:00.513885   62996 logs.go:276] 0 containers: []
	W0914 18:11:00.513897   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:11:00.513905   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:11:00.513955   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:11:00.547295   62996 cri.go:89] found id: ""
	I0914 18:11:00.547338   62996 logs.go:276] 0 containers: []
	W0914 18:11:00.547348   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:11:00.547357   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:11:00.547367   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:11:00.598108   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:11:00.598146   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:11:00.611751   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:11:00.611778   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:11:00.688767   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:11:00.688788   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:11:00.688803   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:11:00.771892   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:11:00.771929   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:11:03.310816   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:11:03.323773   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:11:03.323838   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:11:03.357873   62996 cri.go:89] found id: ""
	I0914 18:11:03.357910   62996 logs.go:276] 0 containers: []
	W0914 18:11:03.357922   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:11:03.357934   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:11:03.357995   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:11:03.394978   62996 cri.go:89] found id: ""
	I0914 18:11:03.395012   62996 logs.go:276] 0 containers: []
	W0914 18:11:03.395024   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:11:03.395032   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:11:03.395098   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:11:03.429699   62996 cri.go:89] found id: ""
	I0914 18:11:03.429725   62996 logs.go:276] 0 containers: []
	W0914 18:11:03.429734   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:11:03.429740   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:11:03.429794   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:11:03.462616   62996 cri.go:89] found id: ""
	I0914 18:11:03.462648   62996 logs.go:276] 0 containers: []
	W0914 18:11:03.462660   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:11:03.462692   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:11:03.462759   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:11:03.496464   62996 cri.go:89] found id: ""
	I0914 18:11:03.496495   62996 logs.go:276] 0 containers: []
	W0914 18:11:03.496506   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:11:03.496513   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:11:03.496573   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:11:03.529655   62996 cri.go:89] found id: ""
	I0914 18:11:03.529687   62996 logs.go:276] 0 containers: []
	W0914 18:11:03.529697   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:11:03.529704   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:11:03.529767   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:11:03.563025   62996 cri.go:89] found id: ""
	I0914 18:11:03.563055   62996 logs.go:276] 0 containers: []
	W0914 18:11:03.563064   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:11:03.563069   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:11:03.563123   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:11:03.604066   62996 cri.go:89] found id: ""
	I0914 18:11:03.604088   62996 logs.go:276] 0 containers: []
	W0914 18:11:03.604095   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:11:03.604103   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:11:03.604114   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:11:03.656607   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:11:03.656647   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:11:03.669974   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:11:03.670004   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:11:03.742295   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:11:03.742324   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:11:03.742343   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:11:03.817527   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:11:03.817566   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:11:02.602818   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:05.103061   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:07.083161   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:09.585702   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:06.999885   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:09.001611   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:06.355023   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:11:06.368376   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:11:06.368445   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:11:06.403876   62996 cri.go:89] found id: ""
	I0914 18:11:06.403904   62996 logs.go:276] 0 containers: []
	W0914 18:11:06.403916   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:11:06.403924   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:11:06.403997   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:11:06.438187   62996 cri.go:89] found id: ""
	I0914 18:11:06.438217   62996 logs.go:276] 0 containers: []
	W0914 18:11:06.438229   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:11:06.438236   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:11:06.438302   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:11:06.477599   62996 cri.go:89] found id: ""
	I0914 18:11:06.477628   62996 logs.go:276] 0 containers: []
	W0914 18:11:06.477639   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:11:06.477646   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:11:06.477718   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:11:06.514878   62996 cri.go:89] found id: ""
	I0914 18:11:06.514905   62996 logs.go:276] 0 containers: []
	W0914 18:11:06.514914   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:11:06.514920   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:11:06.514979   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:11:06.552228   62996 cri.go:89] found id: ""
	I0914 18:11:06.552260   62996 logs.go:276] 0 containers: []
	W0914 18:11:06.552272   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:11:06.552279   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:11:06.552346   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:11:06.594600   62996 cri.go:89] found id: ""
	I0914 18:11:06.594630   62996 logs.go:276] 0 containers: []
	W0914 18:11:06.594641   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:11:06.594649   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:11:06.594713   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:11:06.630977   62996 cri.go:89] found id: ""
	I0914 18:11:06.631017   62996 logs.go:276] 0 containers: []
	W0914 18:11:06.631029   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:11:06.631036   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:11:06.631095   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:11:06.666717   62996 cri.go:89] found id: ""
	I0914 18:11:06.666749   62996 logs.go:276] 0 containers: []
	W0914 18:11:06.666760   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:11:06.666771   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:11:06.666784   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:11:06.720438   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:11:06.720474   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:11:06.734264   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:11:06.734299   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:11:06.802999   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:11:06.803020   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:11:06.803039   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:11:06.881422   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:11:06.881462   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:11:09.420948   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:11:09.435498   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:11:09.435582   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:11:09.470441   62996 cri.go:89] found id: ""
	I0914 18:11:09.470473   62996 logs.go:276] 0 containers: []
	W0914 18:11:09.470485   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:11:09.470493   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:11:09.470568   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:11:09.506101   62996 cri.go:89] found id: ""
	I0914 18:11:09.506124   62996 logs.go:276] 0 containers: []
	W0914 18:11:09.506142   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:11:09.506147   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:11:09.506227   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:11:09.541518   62996 cri.go:89] found id: ""
	I0914 18:11:09.541545   62996 logs.go:276] 0 containers: []
	W0914 18:11:09.541553   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:11:09.541558   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:11:09.541618   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:11:09.582697   62996 cri.go:89] found id: ""
	I0914 18:11:09.582725   62996 logs.go:276] 0 containers: []
	W0914 18:11:09.582735   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:11:09.582743   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:11:09.582805   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:11:09.621060   62996 cri.go:89] found id: ""
	I0914 18:11:09.621088   62996 logs.go:276] 0 containers: []
	W0914 18:11:09.621097   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:11:09.621102   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:11:09.621161   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:11:09.657967   62996 cri.go:89] found id: ""
	I0914 18:11:09.657994   62996 logs.go:276] 0 containers: []
	W0914 18:11:09.658003   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:11:09.658008   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:11:09.658060   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:11:09.693397   62996 cri.go:89] found id: ""
	I0914 18:11:09.693432   62996 logs.go:276] 0 containers: []
	W0914 18:11:09.693444   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:11:09.693451   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:11:09.693505   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:11:09.730819   62996 cri.go:89] found id: ""
	I0914 18:11:09.730850   62996 logs.go:276] 0 containers: []
	W0914 18:11:09.730860   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:11:09.730871   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:11:09.730887   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:11:09.745106   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:11:09.745146   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:11:09.817032   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:11:09.817059   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:11:09.817085   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:11:09.897335   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:11:09.897383   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:11:09.939036   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:11:09.939081   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:11:07.603634   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:09.605513   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:12.082145   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:14.082616   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:11.500951   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:14.001238   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:12.493075   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:11:12.506832   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:11:12.506889   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:11:12.545417   62996 cri.go:89] found id: ""
	I0914 18:11:12.545448   62996 logs.go:276] 0 containers: []
	W0914 18:11:12.545456   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:11:12.545464   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:11:12.545516   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:11:12.580346   62996 cri.go:89] found id: ""
	I0914 18:11:12.580379   62996 logs.go:276] 0 containers: []
	W0914 18:11:12.580389   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:11:12.580397   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:11:12.580457   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:11:12.616540   62996 cri.go:89] found id: ""
	I0914 18:11:12.616570   62996 logs.go:276] 0 containers: []
	W0914 18:11:12.616577   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:11:12.616586   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:11:12.616644   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:11:12.649673   62996 cri.go:89] found id: ""
	I0914 18:11:12.649700   62996 logs.go:276] 0 containers: []
	W0914 18:11:12.649709   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:11:12.649714   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:11:12.649767   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:11:12.683840   62996 cri.go:89] found id: ""
	I0914 18:11:12.683868   62996 logs.go:276] 0 containers: []
	W0914 18:11:12.683879   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:11:12.683886   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:11:12.683946   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:11:12.716862   62996 cri.go:89] found id: ""
	I0914 18:11:12.716889   62996 logs.go:276] 0 containers: []
	W0914 18:11:12.716897   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:11:12.716903   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:11:12.716952   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:11:12.751364   62996 cri.go:89] found id: ""
	I0914 18:11:12.751395   62996 logs.go:276] 0 containers: []
	W0914 18:11:12.751406   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:11:12.751414   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:11:12.751471   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:11:12.786425   62996 cri.go:89] found id: ""
	I0914 18:11:12.786457   62996 logs.go:276] 0 containers: []
	W0914 18:11:12.786468   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:11:12.786477   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:11:12.786487   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:11:12.853890   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:11:12.853920   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:11:12.853936   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:11:12.938058   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:11:12.938107   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:11:12.985406   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:11:12.985441   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:11:13.039040   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:11:13.039077   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:11:12.103165   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:14.103338   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:16.103440   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:16.083173   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:18.582225   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:16.001344   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:18.501001   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:15.554110   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:11:15.567977   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:11:15.568051   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:11:15.604851   62996 cri.go:89] found id: ""
	I0914 18:11:15.604879   62996 logs.go:276] 0 containers: []
	W0914 18:11:15.604887   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:11:15.604892   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:11:15.604945   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:11:15.641180   62996 cri.go:89] found id: ""
	I0914 18:11:15.641209   62996 logs.go:276] 0 containers: []
	W0914 18:11:15.641221   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:11:15.641229   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:11:15.641324   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:11:15.680284   62996 cri.go:89] found id: ""
	I0914 18:11:15.680310   62996 logs.go:276] 0 containers: []
	W0914 18:11:15.680327   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:11:15.680334   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:11:15.680395   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:11:15.718118   62996 cri.go:89] found id: ""
	I0914 18:11:15.718152   62996 logs.go:276] 0 containers: []
	W0914 18:11:15.718173   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:11:15.718181   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:11:15.718237   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:11:15.753998   62996 cri.go:89] found id: ""
	I0914 18:11:15.754020   62996 logs.go:276] 0 containers: []
	W0914 18:11:15.754028   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:11:15.754033   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:11:15.754081   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:11:15.790026   62996 cri.go:89] found id: ""
	I0914 18:11:15.790066   62996 logs.go:276] 0 containers: []
	W0914 18:11:15.790084   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:11:15.790093   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:11:15.790179   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:11:15.828050   62996 cri.go:89] found id: ""
	I0914 18:11:15.828078   62996 logs.go:276] 0 containers: []
	W0914 18:11:15.828086   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:11:15.828094   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:11:15.828162   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:11:15.861289   62996 cri.go:89] found id: ""
	I0914 18:11:15.861322   62996 logs.go:276] 0 containers: []
	W0914 18:11:15.861330   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:11:15.861338   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:11:15.861348   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:11:15.875023   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:11:15.875054   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:11:15.943002   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:11:15.943025   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:11:15.943038   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:11:16.027747   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:11:16.027785   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:11:16.067097   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:11:16.067133   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:11:18.621376   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:11:18.634005   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:11:18.634093   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:11:18.667089   62996 cri.go:89] found id: ""
	I0914 18:11:18.667118   62996 logs.go:276] 0 containers: []
	W0914 18:11:18.667127   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:11:18.667132   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:11:18.667184   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:11:18.700518   62996 cri.go:89] found id: ""
	I0914 18:11:18.700547   62996 logs.go:276] 0 containers: []
	W0914 18:11:18.700563   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:11:18.700571   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:11:18.700643   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:11:18.733724   62996 cri.go:89] found id: ""
	I0914 18:11:18.733755   62996 logs.go:276] 0 containers: []
	W0914 18:11:18.733767   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:11:18.733778   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:11:18.733840   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:11:18.768696   62996 cri.go:89] found id: ""
	I0914 18:11:18.768739   62996 logs.go:276] 0 containers: []
	W0914 18:11:18.768750   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:11:18.768757   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:11:18.768816   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:11:18.803603   62996 cri.go:89] found id: ""
	I0914 18:11:18.803636   62996 logs.go:276] 0 containers: []
	W0914 18:11:18.803647   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:11:18.803653   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:11:18.803707   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:11:18.837019   62996 cri.go:89] found id: ""
	I0914 18:11:18.837044   62996 logs.go:276] 0 containers: []
	W0914 18:11:18.837052   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:11:18.837058   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:11:18.837107   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:11:18.871470   62996 cri.go:89] found id: ""
	I0914 18:11:18.871496   62996 logs.go:276] 0 containers: []
	W0914 18:11:18.871504   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:11:18.871515   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:11:18.871573   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:11:18.904439   62996 cri.go:89] found id: ""
	I0914 18:11:18.904474   62996 logs.go:276] 0 containers: []
	W0914 18:11:18.904485   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:11:18.904494   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:11:18.904504   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:11:18.978025   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:11:18.978065   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:11:19.031667   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:11:19.031709   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:11:19.083360   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:11:19.083398   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:11:19.097770   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:11:19.097796   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:11:19.167712   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:11:18.603529   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:20.607347   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:20.583176   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:23.082414   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:20.501464   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:23.000161   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:25.000597   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:21.668470   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:11:21.681917   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:11:21.681994   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:11:21.717243   62996 cri.go:89] found id: ""
	I0914 18:11:21.717272   62996 logs.go:276] 0 containers: []
	W0914 18:11:21.717281   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:11:21.717286   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:11:21.717341   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:11:21.748801   62996 cri.go:89] found id: ""
	I0914 18:11:21.748853   62996 logs.go:276] 0 containers: []
	W0914 18:11:21.748863   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:11:21.748871   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:11:21.748930   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:11:21.785146   62996 cri.go:89] found id: ""
	I0914 18:11:21.785171   62996 logs.go:276] 0 containers: []
	W0914 18:11:21.785180   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:11:21.785185   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:11:21.785242   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:11:21.819949   62996 cri.go:89] found id: ""
	I0914 18:11:21.819977   62996 logs.go:276] 0 containers: []
	W0914 18:11:21.819984   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:11:21.819990   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:11:21.820039   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:11:21.852418   62996 cri.go:89] found id: ""
	I0914 18:11:21.852451   62996 logs.go:276] 0 containers: []
	W0914 18:11:21.852461   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:11:21.852468   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:11:21.852535   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:11:21.890170   62996 cri.go:89] found id: ""
	I0914 18:11:21.890205   62996 logs.go:276] 0 containers: []
	W0914 18:11:21.890216   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:11:21.890223   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:11:21.890283   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:11:21.924386   62996 cri.go:89] found id: ""
	I0914 18:11:21.924420   62996 logs.go:276] 0 containers: []
	W0914 18:11:21.924432   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:11:21.924439   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:11:21.924505   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:11:21.960302   62996 cri.go:89] found id: ""
	I0914 18:11:21.960328   62996 logs.go:276] 0 containers: []
	W0914 18:11:21.960337   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:11:21.960346   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:11:21.960360   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:11:22.038804   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:11:22.038839   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:11:22.082411   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:11:22.082444   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:11:22.134306   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:11:22.134339   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:11:22.147891   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:11:22.147919   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:11:22.216582   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:11:24.716879   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:11:24.729436   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:11:24.729506   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:11:24.782796   62996 cri.go:89] found id: ""
	I0914 18:11:24.782822   62996 logs.go:276] 0 containers: []
	W0914 18:11:24.782833   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:11:24.782842   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:11:24.782897   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:11:24.819075   62996 cri.go:89] found id: ""
	I0914 18:11:24.819101   62996 logs.go:276] 0 containers: []
	W0914 18:11:24.819108   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:11:24.819113   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:11:24.819157   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:11:24.852976   62996 cri.go:89] found id: ""
	I0914 18:11:24.853003   62996 logs.go:276] 0 containers: []
	W0914 18:11:24.853013   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:11:24.853020   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:11:24.853083   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:11:24.888010   62996 cri.go:89] found id: ""
	I0914 18:11:24.888041   62996 logs.go:276] 0 containers: []
	W0914 18:11:24.888053   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:11:24.888061   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:11:24.888140   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:11:24.923467   62996 cri.go:89] found id: ""
	I0914 18:11:24.923500   62996 logs.go:276] 0 containers: []
	W0914 18:11:24.923514   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:11:24.923522   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:11:24.923575   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:11:24.961976   62996 cri.go:89] found id: ""
	I0914 18:11:24.962003   62996 logs.go:276] 0 containers: []
	W0914 18:11:24.962011   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:11:24.962018   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:11:24.962079   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:11:24.995831   62996 cri.go:89] found id: ""
	I0914 18:11:24.995854   62996 logs.go:276] 0 containers: []
	W0914 18:11:24.995862   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:11:24.995868   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:11:24.995929   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:11:25.034793   62996 cri.go:89] found id: ""
	I0914 18:11:25.034822   62996 logs.go:276] 0 containers: []
	W0914 18:11:25.034832   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:11:25.034840   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:11:25.034855   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:11:25.048500   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:11:25.048531   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:11:25.120313   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:11:25.120346   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:11:25.120361   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:11:25.200361   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:11:25.200395   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:11:25.238898   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:11:25.238928   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:11:23.103266   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:25.104091   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:25.082804   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:27.582345   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:29.582482   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:27.001813   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:29.500751   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:27.791098   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:11:27.803729   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:11:27.803785   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:11:27.840688   62996 cri.go:89] found id: ""
	I0914 18:11:27.840711   62996 logs.go:276] 0 containers: []
	W0914 18:11:27.840719   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:11:27.840725   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:11:27.840775   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:11:27.874108   62996 cri.go:89] found id: ""
	I0914 18:11:27.874140   62996 logs.go:276] 0 containers: []
	W0914 18:11:27.874151   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:11:27.874176   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:11:27.874241   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:11:27.909352   62996 cri.go:89] found id: ""
	I0914 18:11:27.909392   62996 logs.go:276] 0 containers: []
	W0914 18:11:27.909403   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:11:27.909410   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:11:27.909460   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:11:27.942751   62996 cri.go:89] found id: ""
	I0914 18:11:27.942777   62996 logs.go:276] 0 containers: []
	W0914 18:11:27.942786   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:11:27.942791   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:11:27.942852   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:11:27.977714   62996 cri.go:89] found id: ""
	I0914 18:11:27.977745   62996 logs.go:276] 0 containers: []
	W0914 18:11:27.977756   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:11:27.977764   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:11:27.977830   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:11:28.013681   62996 cri.go:89] found id: ""
	I0914 18:11:28.013711   62996 logs.go:276] 0 containers: []
	W0914 18:11:28.013722   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:11:28.013730   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:11:28.013791   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:11:28.047112   62996 cri.go:89] found id: ""
	I0914 18:11:28.047138   62996 logs.go:276] 0 containers: []
	W0914 18:11:28.047146   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:11:28.047152   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:11:28.047199   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:11:28.084290   62996 cri.go:89] found id: ""
	I0914 18:11:28.084317   62996 logs.go:276] 0 containers: []
	W0914 18:11:28.084331   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:11:28.084340   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:11:28.084351   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:11:28.097720   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:11:28.097756   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:11:28.172054   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:11:28.172074   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:11:28.172085   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:11:28.253611   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:11:28.253644   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:11:28.289904   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:11:28.289938   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:11:27.105655   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:29.602893   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:32.082229   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:34.082649   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:31.501202   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:34.001997   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:30.839215   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:11:30.851580   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:11:30.851654   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:11:30.891232   62996 cri.go:89] found id: ""
	I0914 18:11:30.891261   62996 logs.go:276] 0 containers: []
	W0914 18:11:30.891272   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:11:30.891279   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:11:30.891346   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:11:30.930144   62996 cri.go:89] found id: ""
	I0914 18:11:30.930187   62996 logs.go:276] 0 containers: []
	W0914 18:11:30.930197   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:11:30.930204   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:11:30.930265   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:11:30.965034   62996 cri.go:89] found id: ""
	I0914 18:11:30.965068   62996 logs.go:276] 0 containers: []
	W0914 18:11:30.965080   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:11:30.965087   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:11:30.965150   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:11:30.998927   62996 cri.go:89] found id: ""
	I0914 18:11:30.998955   62996 logs.go:276] 0 containers: []
	W0914 18:11:30.998966   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:11:30.998974   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:11:30.999039   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:11:31.033789   62996 cri.go:89] found id: ""
	I0914 18:11:31.033820   62996 logs.go:276] 0 containers: []
	W0914 18:11:31.033830   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:11:31.033838   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:11:31.033892   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:11:31.068988   62996 cri.go:89] found id: ""
	I0914 18:11:31.069020   62996 logs.go:276] 0 containers: []
	W0914 18:11:31.069029   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:11:31.069035   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:11:31.069085   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:11:31.105904   62996 cri.go:89] found id: ""
	I0914 18:11:31.105932   62996 logs.go:276] 0 containers: []
	W0914 18:11:31.105944   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:11:31.105951   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:11:31.106018   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:11:31.147560   62996 cri.go:89] found id: ""
	I0914 18:11:31.147593   62996 logs.go:276] 0 containers: []
	W0914 18:11:31.147606   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:11:31.147618   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:11:31.147633   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:11:31.237347   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:11:31.237373   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:11:31.237389   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:11:31.322978   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:11:31.323012   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:11:31.361464   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:11:31.361495   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:11:31.417255   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:11:31.417299   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:11:33.930962   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:11:33.944431   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:11:33.944514   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:11:33.979727   62996 cri.go:89] found id: ""
	I0914 18:11:33.979761   62996 logs.go:276] 0 containers: []
	W0914 18:11:33.979772   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:11:33.979779   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:11:33.979840   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:11:34.015069   62996 cri.go:89] found id: ""
	I0914 18:11:34.015100   62996 logs.go:276] 0 containers: []
	W0914 18:11:34.015111   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:11:34.015117   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:11:34.015168   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:11:34.049230   62996 cri.go:89] found id: ""
	I0914 18:11:34.049262   62996 logs.go:276] 0 containers: []
	W0914 18:11:34.049274   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:11:34.049282   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:11:34.049345   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:11:34.086175   62996 cri.go:89] found id: ""
	I0914 18:11:34.086205   62996 logs.go:276] 0 containers: []
	W0914 18:11:34.086216   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:11:34.086225   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:11:34.086286   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:11:34.123534   62996 cri.go:89] found id: ""
	I0914 18:11:34.123563   62996 logs.go:276] 0 containers: []
	W0914 18:11:34.123573   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:11:34.123581   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:11:34.123645   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:11:34.160782   62996 cri.go:89] found id: ""
	I0914 18:11:34.160812   62996 logs.go:276] 0 containers: []
	W0914 18:11:34.160822   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:11:34.160830   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:11:34.160891   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:11:34.193240   62996 cri.go:89] found id: ""
	I0914 18:11:34.193264   62996 logs.go:276] 0 containers: []
	W0914 18:11:34.193272   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:11:34.193278   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:11:34.193336   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:11:34.232788   62996 cri.go:89] found id: ""
	I0914 18:11:34.232816   62996 logs.go:276] 0 containers: []
	W0914 18:11:34.232827   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:11:34.232838   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:11:34.232851   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:11:34.284953   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:11:34.284993   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:11:34.299462   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:11:34.299491   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:11:34.370596   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:11:34.370623   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:11:34.370638   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:11:34.450082   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:11:34.450118   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:11:32.103194   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:34.103615   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:36.603139   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:36.083120   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:38.582197   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:36.500663   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:38.501005   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:36.991625   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:11:37.009170   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:11:37.009229   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:11:37.044035   62996 cri.go:89] found id: ""
	I0914 18:11:37.044058   62996 logs.go:276] 0 containers: []
	W0914 18:11:37.044066   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:11:37.044072   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:11:37.044126   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:11:37.076288   62996 cri.go:89] found id: ""
	I0914 18:11:37.076318   62996 logs.go:276] 0 containers: []
	W0914 18:11:37.076328   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:11:37.076336   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:11:37.076399   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:11:37.110509   62996 cri.go:89] found id: ""
	I0914 18:11:37.110533   62996 logs.go:276] 0 containers: []
	W0914 18:11:37.110541   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:11:37.110553   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:11:37.110603   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:11:37.143688   62996 cri.go:89] found id: ""
	I0914 18:11:37.143713   62996 logs.go:276] 0 containers: []
	W0914 18:11:37.143721   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:11:37.143726   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:11:37.143781   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:11:37.180802   62996 cri.go:89] found id: ""
	I0914 18:11:37.180828   62996 logs.go:276] 0 containers: []
	W0914 18:11:37.180839   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:11:37.180846   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:11:37.180907   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:11:37.214590   62996 cri.go:89] found id: ""
	I0914 18:11:37.214615   62996 logs.go:276] 0 containers: []
	W0914 18:11:37.214623   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:11:37.214628   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:11:37.214674   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:11:37.246039   62996 cri.go:89] found id: ""
	I0914 18:11:37.246067   62996 logs.go:276] 0 containers: []
	W0914 18:11:37.246078   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:11:37.246085   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:11:37.246152   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:11:37.278258   62996 cri.go:89] found id: ""
	I0914 18:11:37.278299   62996 logs.go:276] 0 containers: []
	W0914 18:11:37.278307   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:11:37.278315   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:11:37.278325   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:11:37.315788   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:11:37.315817   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:11:37.367286   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:11:37.367322   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:11:37.380863   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:11:37.380894   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:11:37.447925   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:11:37.447948   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:11:37.447959   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:11:40.025419   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:11:40.038279   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:11:40.038361   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:11:40.072986   62996 cri.go:89] found id: ""
	I0914 18:11:40.073021   62996 logs.go:276] 0 containers: []
	W0914 18:11:40.073033   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:11:40.073041   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:11:40.073102   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:11:40.107636   62996 cri.go:89] found id: ""
	I0914 18:11:40.107657   62996 logs.go:276] 0 containers: []
	W0914 18:11:40.107665   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:11:40.107670   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:11:40.107723   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:11:40.145308   62996 cri.go:89] found id: ""
	I0914 18:11:40.145347   62996 logs.go:276] 0 containers: []
	W0914 18:11:40.145359   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:11:40.145366   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:11:40.145412   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:11:40.182409   62996 cri.go:89] found id: ""
	I0914 18:11:40.182439   62996 logs.go:276] 0 containers: []
	W0914 18:11:40.182449   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:11:40.182457   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:11:40.182522   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:11:40.217621   62996 cri.go:89] found id: ""
	I0914 18:11:40.217655   62996 logs.go:276] 0 containers: []
	W0914 18:11:40.217667   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:11:40.217675   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:11:40.217738   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:11:40.253159   62996 cri.go:89] found id: ""
	I0914 18:11:40.253186   62996 logs.go:276] 0 containers: []
	W0914 18:11:40.253197   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:11:40.253205   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:11:40.253263   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:11:40.286808   62996 cri.go:89] found id: ""
	I0914 18:11:40.286838   62996 logs.go:276] 0 containers: []
	W0914 18:11:40.286847   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:11:40.286855   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:11:40.286910   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:11:40.324265   62996 cri.go:89] found id: ""
	I0914 18:11:40.324292   62996 logs.go:276] 0 containers: []
	W0914 18:11:40.324299   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:11:40.324307   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:11:40.324318   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:11:38.603823   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:41.102313   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:40.583132   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:43.082387   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:40.501996   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:43.000447   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:40.376962   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:11:40.376996   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:11:40.390564   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:11:40.390594   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:11:40.460934   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:11:40.460956   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:11:40.460967   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:11:40.537058   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:11:40.537099   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:11:43.075401   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:11:43.088488   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:11:43.088559   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:11:43.122777   62996 cri.go:89] found id: ""
	I0914 18:11:43.122802   62996 logs.go:276] 0 containers: []
	W0914 18:11:43.122811   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:11:43.122818   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:11:43.122878   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:11:43.155343   62996 cri.go:89] found id: ""
	I0914 18:11:43.155369   62996 logs.go:276] 0 containers: []
	W0914 18:11:43.155378   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:11:43.155383   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:11:43.155443   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:11:43.190350   62996 cri.go:89] found id: ""
	I0914 18:11:43.190379   62996 logs.go:276] 0 containers: []
	W0914 18:11:43.190390   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:11:43.190398   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:11:43.190460   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:11:43.222930   62996 cri.go:89] found id: ""
	I0914 18:11:43.222961   62996 logs.go:276] 0 containers: []
	W0914 18:11:43.222972   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:11:43.222979   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:11:43.223042   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:11:43.256931   62996 cri.go:89] found id: ""
	I0914 18:11:43.256959   62996 logs.go:276] 0 containers: []
	W0914 18:11:43.256971   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:11:43.256977   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:11:43.257044   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:11:43.287691   62996 cri.go:89] found id: ""
	I0914 18:11:43.287720   62996 logs.go:276] 0 containers: []
	W0914 18:11:43.287729   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:11:43.287734   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:11:43.287790   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:11:43.320633   62996 cri.go:89] found id: ""
	I0914 18:11:43.320658   62996 logs.go:276] 0 containers: []
	W0914 18:11:43.320666   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:11:43.320677   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:11:43.320738   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:11:43.354230   62996 cri.go:89] found id: ""
	I0914 18:11:43.354269   62996 logs.go:276] 0 containers: []
	W0914 18:11:43.354280   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:11:43.354291   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:11:43.354304   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:11:43.429256   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:11:43.429293   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:11:43.467929   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:11:43.467957   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:11:43.521266   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:11:43.521305   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:11:43.536471   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:11:43.536511   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:11:43.607588   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:11:43.103756   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:45.602971   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:45.082762   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:47.582353   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:49.584026   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:45.500451   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:47.501831   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:50.001778   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:46.108756   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:11:46.121231   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:11:46.121314   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:11:46.156499   62996 cri.go:89] found id: ""
	I0914 18:11:46.156528   62996 logs.go:276] 0 containers: []
	W0914 18:11:46.156537   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:11:46.156543   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:11:46.156591   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:11:46.192161   62996 cri.go:89] found id: ""
	I0914 18:11:46.192188   62996 logs.go:276] 0 containers: []
	W0914 18:11:46.192197   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:11:46.192203   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:11:46.192263   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:11:46.222784   62996 cri.go:89] found id: ""
	I0914 18:11:46.222816   62996 logs.go:276] 0 containers: []
	W0914 18:11:46.222826   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:11:46.222834   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:11:46.222894   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:11:46.261551   62996 cri.go:89] found id: ""
	I0914 18:11:46.261577   62996 logs.go:276] 0 containers: []
	W0914 18:11:46.261587   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:11:46.261594   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:11:46.261659   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:11:46.298263   62996 cri.go:89] found id: ""
	I0914 18:11:46.298293   62996 logs.go:276] 0 containers: []
	W0914 18:11:46.298303   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:11:46.298311   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:11:46.298387   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:11:46.333477   62996 cri.go:89] found id: ""
	I0914 18:11:46.333502   62996 logs.go:276] 0 containers: []
	W0914 18:11:46.333510   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:11:46.333516   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:11:46.333581   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:11:46.367975   62996 cri.go:89] found id: ""
	I0914 18:11:46.367998   62996 logs.go:276] 0 containers: []
	W0914 18:11:46.368005   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:11:46.368011   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:11:46.368063   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:11:46.402252   62996 cri.go:89] found id: ""
	I0914 18:11:46.402281   62996 logs.go:276] 0 containers: []
	W0914 18:11:46.402293   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:11:46.402310   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:11:46.402329   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:11:46.477212   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:11:46.477252   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:11:46.515542   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:11:46.515568   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:11:46.570108   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:11:46.570146   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:11:46.585989   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:11:46.586019   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:11:46.658769   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:11:49.159920   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:11:49.172748   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:11:49.172810   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:11:49.213555   62996 cri.go:89] found id: ""
	I0914 18:11:49.213585   62996 logs.go:276] 0 containers: []
	W0914 18:11:49.213595   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:11:49.213601   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:11:49.213660   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:11:49.246022   62996 cri.go:89] found id: ""
	I0914 18:11:49.246050   62996 logs.go:276] 0 containers: []
	W0914 18:11:49.246061   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:11:49.246068   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:11:49.246132   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:11:49.279131   62996 cri.go:89] found id: ""
	I0914 18:11:49.279157   62996 logs.go:276] 0 containers: []
	W0914 18:11:49.279167   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:11:49.279175   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:11:49.279236   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:11:49.313159   62996 cri.go:89] found id: ""
	I0914 18:11:49.313187   62996 logs.go:276] 0 containers: []
	W0914 18:11:49.313199   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:11:49.313207   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:11:49.313272   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:11:49.347837   62996 cri.go:89] found id: ""
	I0914 18:11:49.347861   62996 logs.go:276] 0 containers: []
	W0914 18:11:49.347870   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:11:49.347875   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:11:49.347932   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:11:49.381478   62996 cri.go:89] found id: ""
	I0914 18:11:49.381507   62996 logs.go:276] 0 containers: []
	W0914 18:11:49.381516   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:11:49.381522   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:11:49.381577   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:11:49.417197   62996 cri.go:89] found id: ""
	I0914 18:11:49.417224   62996 logs.go:276] 0 containers: []
	W0914 18:11:49.417238   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:11:49.417244   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:11:49.417313   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:11:49.450806   62996 cri.go:89] found id: ""
	I0914 18:11:49.450843   62996 logs.go:276] 0 containers: []
	W0914 18:11:49.450857   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:11:49.450870   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:11:49.450889   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:11:49.519573   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:11:49.519620   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:11:49.519639   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:11:49.595525   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:11:49.595565   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:11:49.633229   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:11:49.633259   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:11:49.688667   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:11:49.688710   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:11:47.605117   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:50.103023   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:52.082751   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:54.582016   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:52.501977   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:55.000564   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:52.206555   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:11:52.218920   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:11:52.218996   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:11:52.253986   62996 cri.go:89] found id: ""
	I0914 18:11:52.254010   62996 logs.go:276] 0 containers: []
	W0914 18:11:52.254018   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:11:52.254023   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:11:52.254070   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:11:52.286590   62996 cri.go:89] found id: ""
	I0914 18:11:52.286618   62996 logs.go:276] 0 containers: []
	W0914 18:11:52.286629   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:11:52.286636   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:11:52.286698   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:11:52.325419   62996 cri.go:89] found id: ""
	I0914 18:11:52.325454   62996 logs.go:276] 0 containers: []
	W0914 18:11:52.325464   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:11:52.325471   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:11:52.325533   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:11:52.363050   62996 cri.go:89] found id: ""
	I0914 18:11:52.363079   62996 logs.go:276] 0 containers: []
	W0914 18:11:52.363091   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:11:52.363098   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:11:52.363160   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:11:52.400107   62996 cri.go:89] found id: ""
	I0914 18:11:52.400142   62996 logs.go:276] 0 containers: []
	W0914 18:11:52.400153   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:11:52.400162   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:11:52.400229   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:11:52.435711   62996 cri.go:89] found id: ""
	I0914 18:11:52.435735   62996 logs.go:276] 0 containers: []
	W0914 18:11:52.435744   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:11:52.435752   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:11:52.435806   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:11:52.470761   62996 cri.go:89] found id: ""
	I0914 18:11:52.470789   62996 logs.go:276] 0 containers: []
	W0914 18:11:52.470800   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:11:52.470808   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:11:52.470875   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:11:52.505680   62996 cri.go:89] found id: ""
	I0914 18:11:52.505705   62996 logs.go:276] 0 containers: []
	W0914 18:11:52.505714   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:11:52.505725   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:11:52.505745   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:11:52.557577   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:11:52.557616   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:11:52.571785   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:11:52.571817   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:11:52.639759   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:11:52.639790   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:11:52.639805   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:11:52.727022   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:11:52.727072   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:11:55.266381   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:11:55.279300   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:11:55.279376   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:11:55.315414   62996 cri.go:89] found id: ""
	I0914 18:11:55.315455   62996 logs.go:276] 0 containers: []
	W0914 18:11:55.315463   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:11:55.315472   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:11:55.315539   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:11:52.603110   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:54.603267   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:56.582121   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:58.583277   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:57.001624   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:59.501328   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:55.350153   62996 cri.go:89] found id: ""
	I0914 18:11:55.350203   62996 logs.go:276] 0 containers: []
	W0914 18:11:55.350213   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:11:55.350218   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:11:55.350296   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:11:55.387403   62996 cri.go:89] found id: ""
	I0914 18:11:55.387437   62996 logs.go:276] 0 containers: []
	W0914 18:11:55.387459   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:11:55.387467   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:11:55.387522   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:11:55.424532   62996 cri.go:89] found id: ""
	I0914 18:11:55.424558   62996 logs.go:276] 0 containers: []
	W0914 18:11:55.424566   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:11:55.424575   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:11:55.424664   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:11:55.462423   62996 cri.go:89] found id: ""
	I0914 18:11:55.462458   62996 logs.go:276] 0 containers: []
	W0914 18:11:55.462468   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:11:55.462475   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:11:55.462536   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:11:55.496865   62996 cri.go:89] found id: ""
	I0914 18:11:55.496900   62996 logs.go:276] 0 containers: []
	W0914 18:11:55.496911   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:11:55.496921   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:11:55.496986   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:11:55.531524   62996 cri.go:89] found id: ""
	I0914 18:11:55.531566   62996 logs.go:276] 0 containers: []
	W0914 18:11:55.531577   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:11:55.531598   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:11:55.531663   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:11:55.566579   62996 cri.go:89] found id: ""
	I0914 18:11:55.566606   62996 logs.go:276] 0 containers: []
	W0914 18:11:55.566615   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:11:55.566623   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:11:55.566635   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:11:55.621074   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:11:55.621122   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:11:55.635805   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:11:55.635832   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:11:55.702346   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:11:55.702373   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:11:55.702387   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:11:55.778589   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:11:55.778639   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:11:58.317118   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:11:58.330312   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:11:58.330382   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:11:58.363550   62996 cri.go:89] found id: ""
	I0914 18:11:58.363587   62996 logs.go:276] 0 containers: []
	W0914 18:11:58.363598   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:11:58.363606   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:11:58.363669   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:11:58.397152   62996 cri.go:89] found id: ""
	I0914 18:11:58.397183   62996 logs.go:276] 0 containers: []
	W0914 18:11:58.397194   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:11:58.397201   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:11:58.397259   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:11:58.435076   62996 cri.go:89] found id: ""
	I0914 18:11:58.435102   62996 logs.go:276] 0 containers: []
	W0914 18:11:58.435111   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:11:58.435116   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:11:58.435184   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:11:58.471455   62996 cri.go:89] found id: ""
	I0914 18:11:58.471479   62996 logs.go:276] 0 containers: []
	W0914 18:11:58.471487   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:11:58.471493   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:11:58.471551   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:11:58.504545   62996 cri.go:89] found id: ""
	I0914 18:11:58.504586   62996 logs.go:276] 0 containers: []
	W0914 18:11:58.504596   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:11:58.504603   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:11:58.504662   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:11:58.539335   62996 cri.go:89] found id: ""
	I0914 18:11:58.539362   62996 logs.go:276] 0 containers: []
	W0914 18:11:58.539376   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:11:58.539383   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:11:58.539431   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:11:58.579707   62996 cri.go:89] found id: ""
	I0914 18:11:58.579737   62996 logs.go:276] 0 containers: []
	W0914 18:11:58.579747   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:11:58.579755   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:11:58.579814   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:11:58.614227   62996 cri.go:89] found id: ""
	I0914 18:11:58.614250   62996 logs.go:276] 0 containers: []
	W0914 18:11:58.614259   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:11:58.614266   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:11:58.614279   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:11:58.699846   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:11:58.699888   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:11:58.738513   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:11:58.738542   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:11:58.787858   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:11:58.787895   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:11:58.801103   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:11:58.801137   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:11:58.868291   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:11:57.102934   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:59.103345   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:01.604125   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:01.083045   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:03.582885   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:01.501890   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:04.001023   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:01.368810   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:12:01.381287   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:12:01.381359   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:12:01.414556   62996 cri.go:89] found id: ""
	I0914 18:12:01.414587   62996 logs.go:276] 0 containers: []
	W0914 18:12:01.414599   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:12:01.414611   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:12:01.414661   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:12:01.447765   62996 cri.go:89] found id: ""
	I0914 18:12:01.447795   62996 logs.go:276] 0 containers: []
	W0914 18:12:01.447806   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:12:01.447813   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:12:01.447875   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:12:01.481012   62996 cri.go:89] found id: ""
	I0914 18:12:01.481045   62996 logs.go:276] 0 containers: []
	W0914 18:12:01.481057   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:12:01.481065   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:12:01.481126   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:12:01.516999   62996 cri.go:89] found id: ""
	I0914 18:12:01.517024   62996 logs.go:276] 0 containers: []
	W0914 18:12:01.517031   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:12:01.517037   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:12:01.517088   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:12:01.555520   62996 cri.go:89] found id: ""
	I0914 18:12:01.555548   62996 logs.go:276] 0 containers: []
	W0914 18:12:01.555559   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:12:01.555566   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:12:01.555642   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:12:01.589581   62996 cri.go:89] found id: ""
	I0914 18:12:01.589606   62996 logs.go:276] 0 containers: []
	W0914 18:12:01.589616   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:12:01.589624   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:12:01.589691   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:12:01.623955   62996 cri.go:89] found id: ""
	I0914 18:12:01.623983   62996 logs.go:276] 0 containers: []
	W0914 18:12:01.623995   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:12:01.624002   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:12:01.624067   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:12:01.659136   62996 cri.go:89] found id: ""
	I0914 18:12:01.659166   62996 logs.go:276] 0 containers: []
	W0914 18:12:01.659177   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:12:01.659187   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:12:01.659206   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:12:01.711812   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:12:01.711849   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:12:01.724934   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:12:01.724968   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:12:01.793052   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:12:01.793079   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:12:01.793091   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:12:01.866761   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:12:01.866799   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:12:04.406435   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:12:04.419756   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:12:04.419818   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:12:04.456593   62996 cri.go:89] found id: ""
	I0914 18:12:04.456621   62996 logs.go:276] 0 containers: []
	W0914 18:12:04.456632   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:12:04.456639   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:12:04.456689   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:12:04.489281   62996 cri.go:89] found id: ""
	I0914 18:12:04.489314   62996 logs.go:276] 0 containers: []
	W0914 18:12:04.489326   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:12:04.489333   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:12:04.489399   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:12:04.525353   62996 cri.go:89] found id: ""
	I0914 18:12:04.525381   62996 logs.go:276] 0 containers: []
	W0914 18:12:04.525391   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:12:04.525398   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:12:04.525464   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:12:04.558495   62996 cri.go:89] found id: ""
	I0914 18:12:04.558520   62996 logs.go:276] 0 containers: []
	W0914 18:12:04.558531   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:12:04.558539   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:12:04.558598   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:12:04.594815   62996 cri.go:89] found id: ""
	I0914 18:12:04.594837   62996 logs.go:276] 0 containers: []
	W0914 18:12:04.594845   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:12:04.594851   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:12:04.594899   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:12:04.630198   62996 cri.go:89] found id: ""
	I0914 18:12:04.630224   62996 logs.go:276] 0 containers: []
	W0914 18:12:04.630232   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:12:04.630238   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:12:04.630294   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:12:04.665328   62996 cri.go:89] found id: ""
	I0914 18:12:04.665358   62996 logs.go:276] 0 containers: []
	W0914 18:12:04.665368   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:12:04.665373   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:12:04.665432   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:12:04.699778   62996 cri.go:89] found id: ""
	I0914 18:12:04.699801   62996 logs.go:276] 0 containers: []
	W0914 18:12:04.699809   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:12:04.699816   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:12:04.699877   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:12:04.750978   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:12:04.751022   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:12:04.764968   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:12:04.764998   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:12:04.839464   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:12:04.839494   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:12:04.839509   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:12:04.917939   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:12:04.917979   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:12:04.103388   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:06.103725   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:06.083003   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:08.581415   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:06.002052   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:08.500393   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:07.459389   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:12:07.472630   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:12:07.472691   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:12:07.507993   62996 cri.go:89] found id: ""
	I0914 18:12:07.508029   62996 logs.go:276] 0 containers: []
	W0914 18:12:07.508040   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:12:07.508047   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:12:07.508110   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:12:07.541083   62996 cri.go:89] found id: ""
	I0914 18:12:07.541108   62996 logs.go:276] 0 containers: []
	W0914 18:12:07.541116   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:12:07.541121   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:12:07.541184   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:12:07.574973   62996 cri.go:89] found id: ""
	I0914 18:12:07.574995   62996 logs.go:276] 0 containers: []
	W0914 18:12:07.575003   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:12:07.575008   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:12:07.575052   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:12:07.610166   62996 cri.go:89] found id: ""
	I0914 18:12:07.610189   62996 logs.go:276] 0 containers: []
	W0914 18:12:07.610196   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:12:07.610202   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:12:07.610247   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:12:07.643090   62996 cri.go:89] found id: ""
	I0914 18:12:07.643118   62996 logs.go:276] 0 containers: []
	W0914 18:12:07.643129   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:12:07.643140   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:12:07.643201   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:12:07.676788   62996 cri.go:89] found id: ""
	I0914 18:12:07.676814   62996 logs.go:276] 0 containers: []
	W0914 18:12:07.676825   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:12:07.676832   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:12:07.676895   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:12:07.714122   62996 cri.go:89] found id: ""
	I0914 18:12:07.714147   62996 logs.go:276] 0 containers: []
	W0914 18:12:07.714173   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:12:07.714179   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:12:07.714226   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:12:07.748168   62996 cri.go:89] found id: ""
	I0914 18:12:07.748193   62996 logs.go:276] 0 containers: []
	W0914 18:12:07.748204   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:12:07.748214   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:12:07.748230   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:12:07.784739   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:12:07.784766   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:12:07.833431   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:12:07.833467   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:12:07.846072   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:12:07.846100   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:12:07.912540   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:12:07.912560   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:12:07.912584   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:12:08.602880   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:10.604231   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:10.582647   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:13.082818   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:10.500953   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:13.001310   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:10.488543   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:12:10.502119   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:12:10.502203   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:12:10.535390   62996 cri.go:89] found id: ""
	I0914 18:12:10.535420   62996 logs.go:276] 0 containers: []
	W0914 18:12:10.535429   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:12:10.535435   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:12:10.535487   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:12:10.572013   62996 cri.go:89] found id: ""
	I0914 18:12:10.572044   62996 logs.go:276] 0 containers: []
	W0914 18:12:10.572052   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:12:10.572057   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:12:10.572105   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:12:10.613597   62996 cri.go:89] found id: ""
	I0914 18:12:10.613621   62996 logs.go:276] 0 containers: []
	W0914 18:12:10.613628   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:12:10.613634   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:12:10.613693   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:12:10.646086   62996 cri.go:89] found id: ""
	I0914 18:12:10.646116   62996 logs.go:276] 0 containers: []
	W0914 18:12:10.646127   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:12:10.646134   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:12:10.646219   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:12:10.679228   62996 cri.go:89] found id: ""
	I0914 18:12:10.679261   62996 logs.go:276] 0 containers: []
	W0914 18:12:10.679273   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:12:10.679281   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:12:10.679340   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:12:10.713321   62996 cri.go:89] found id: ""
	I0914 18:12:10.713350   62996 logs.go:276] 0 containers: []
	W0914 18:12:10.713359   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:12:10.713365   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:12:10.713413   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:12:10.757767   62996 cri.go:89] found id: ""
	I0914 18:12:10.757794   62996 logs.go:276] 0 containers: []
	W0914 18:12:10.757802   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:12:10.757809   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:12:10.757854   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:12:10.797709   62996 cri.go:89] found id: ""
	I0914 18:12:10.797731   62996 logs.go:276] 0 containers: []
	W0914 18:12:10.797739   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:12:10.797747   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:12:10.797757   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:12:10.848431   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:12:10.848474   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:12:10.862205   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:12:10.862239   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:12:10.935215   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:12:10.935242   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:12:10.935260   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:12:11.019021   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:12:11.019056   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:12:13.560773   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:12:13.574835   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:12:13.574899   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:12:13.613543   62996 cri.go:89] found id: ""
	I0914 18:12:13.613569   62996 logs.go:276] 0 containers: []
	W0914 18:12:13.613582   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:12:13.613587   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:12:13.613646   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:12:13.650721   62996 cri.go:89] found id: ""
	I0914 18:12:13.650755   62996 logs.go:276] 0 containers: []
	W0914 18:12:13.650767   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:12:13.650775   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:12:13.650836   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:12:13.684269   62996 cri.go:89] found id: ""
	I0914 18:12:13.684299   62996 logs.go:276] 0 containers: []
	W0914 18:12:13.684310   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:12:13.684317   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:12:13.684376   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:12:13.726440   62996 cri.go:89] found id: ""
	I0914 18:12:13.726474   62996 logs.go:276] 0 containers: []
	W0914 18:12:13.726486   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:12:13.726503   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:12:13.726567   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:12:13.760835   62996 cri.go:89] found id: ""
	I0914 18:12:13.760865   62996 logs.go:276] 0 containers: []
	W0914 18:12:13.760876   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:12:13.760884   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:12:13.760957   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:12:13.801341   62996 cri.go:89] found id: ""
	I0914 18:12:13.801375   62996 logs.go:276] 0 containers: []
	W0914 18:12:13.801386   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:12:13.801394   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:12:13.801456   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:12:13.834307   62996 cri.go:89] found id: ""
	I0914 18:12:13.834332   62996 logs.go:276] 0 containers: []
	W0914 18:12:13.834350   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:12:13.834357   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:12:13.834439   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:12:13.868838   62996 cri.go:89] found id: ""
	I0914 18:12:13.868871   62996 logs.go:276] 0 containers: []
	W0914 18:12:13.868880   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:12:13.868889   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:12:13.868900   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:12:13.919867   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:12:13.919906   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:12:13.933383   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:12:13.933423   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:12:14.010559   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:12:14.010592   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:12:14.010606   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:12:14.087876   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:12:14.087913   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:12:13.103254   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:15.103641   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:15.083238   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:17.582387   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:15.501029   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:17.505028   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:20.001929   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:16.630473   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:12:16.643114   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:12:16.643196   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:12:16.680922   62996 cri.go:89] found id: ""
	I0914 18:12:16.680954   62996 logs.go:276] 0 containers: []
	W0914 18:12:16.680962   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:12:16.680968   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:12:16.681015   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:12:16.715549   62996 cri.go:89] found id: ""
	I0914 18:12:16.715582   62996 logs.go:276] 0 containers: []
	W0914 18:12:16.715592   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:12:16.715598   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:12:16.715666   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:12:16.753928   62996 cri.go:89] found id: ""
	I0914 18:12:16.753951   62996 logs.go:276] 0 containers: []
	W0914 18:12:16.753962   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:12:16.753969   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:12:16.754033   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:12:16.787677   62996 cri.go:89] found id: ""
	I0914 18:12:16.787705   62996 logs.go:276] 0 containers: []
	W0914 18:12:16.787716   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:12:16.787723   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:12:16.787776   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:12:16.823638   62996 cri.go:89] found id: ""
	I0914 18:12:16.823667   62996 logs.go:276] 0 containers: []
	W0914 18:12:16.823678   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:12:16.823686   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:12:16.823748   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:12:16.860204   62996 cri.go:89] found id: ""
	I0914 18:12:16.860238   62996 logs.go:276] 0 containers: []
	W0914 18:12:16.860249   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:12:16.860257   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:12:16.860329   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:12:16.898802   62996 cri.go:89] found id: ""
	I0914 18:12:16.898827   62996 logs.go:276] 0 containers: []
	W0914 18:12:16.898837   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:12:16.898854   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:12:16.898941   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:12:16.932719   62996 cri.go:89] found id: ""
	I0914 18:12:16.932745   62996 logs.go:276] 0 containers: []
	W0914 18:12:16.932753   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:12:16.932762   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:12:16.932779   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:12:16.986217   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:12:16.986257   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:12:17.003243   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:12:17.003278   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:12:17.071374   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:12:17.071397   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:12:17.071409   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:12:17.152058   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:12:17.152112   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:12:19.717782   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:12:19.731122   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:12:19.731199   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:12:19.769042   62996 cri.go:89] found id: ""
	I0914 18:12:19.769070   62996 logs.go:276] 0 containers: []
	W0914 18:12:19.769079   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:12:19.769084   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:12:19.769154   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:12:19.804666   62996 cri.go:89] found id: ""
	I0914 18:12:19.804691   62996 logs.go:276] 0 containers: []
	W0914 18:12:19.804698   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:12:19.804704   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:12:19.804761   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:12:19.838705   62996 cri.go:89] found id: ""
	I0914 18:12:19.838729   62996 logs.go:276] 0 containers: []
	W0914 18:12:19.838738   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:12:19.838744   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:12:19.838790   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:12:19.873412   62996 cri.go:89] found id: ""
	I0914 18:12:19.873441   62996 logs.go:276] 0 containers: []
	W0914 18:12:19.873449   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:12:19.873455   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:12:19.873535   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:12:19.917706   62996 cri.go:89] found id: ""
	I0914 18:12:19.917734   62996 logs.go:276] 0 containers: []
	W0914 18:12:19.917746   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:12:19.917754   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:12:19.917813   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:12:19.956149   62996 cri.go:89] found id: ""
	I0914 18:12:19.956177   62996 logs.go:276] 0 containers: []
	W0914 18:12:19.956188   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:12:19.956196   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:12:19.956255   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:12:19.988903   62996 cri.go:89] found id: ""
	I0914 18:12:19.988926   62996 logs.go:276] 0 containers: []
	W0914 18:12:19.988934   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:12:19.988939   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:12:19.988988   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:12:20.023785   62996 cri.go:89] found id: ""
	I0914 18:12:20.023814   62996 logs.go:276] 0 containers: []
	W0914 18:12:20.023823   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:12:20.023833   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:12:20.023846   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:12:20.036891   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:12:20.036918   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:12:20.112397   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:12:20.112422   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:12:20.112437   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:12:20.195767   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:12:20.195801   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:12:20.235439   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:12:20.235467   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:12:17.103996   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:19.603109   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:21.603150   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:20.083547   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:22.586009   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:22.002367   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:24.500394   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:22.784765   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:12:22.799193   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:12:22.799267   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:12:22.840939   62996 cri.go:89] found id: ""
	I0914 18:12:22.840974   62996 logs.go:276] 0 containers: []
	W0914 18:12:22.840983   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:12:22.840990   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:12:22.841051   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:12:22.878920   62996 cri.go:89] found id: ""
	I0914 18:12:22.878951   62996 logs.go:276] 0 containers: []
	W0914 18:12:22.878962   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:12:22.878970   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:12:22.879021   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:12:22.926127   62996 cri.go:89] found id: ""
	I0914 18:12:22.926175   62996 logs.go:276] 0 containers: []
	W0914 18:12:22.926187   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:12:22.926195   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:12:22.926250   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:12:22.972041   62996 cri.go:89] found id: ""
	I0914 18:12:22.972068   62996 logs.go:276] 0 containers: []
	W0914 18:12:22.972076   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:12:22.972082   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:12:22.972137   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:12:23.012662   62996 cri.go:89] found id: ""
	I0914 18:12:23.012694   62996 logs.go:276] 0 containers: []
	W0914 18:12:23.012705   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:12:23.012712   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:12:23.012772   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:12:23.058923   62996 cri.go:89] found id: ""
	I0914 18:12:23.058950   62996 logs.go:276] 0 containers: []
	W0914 18:12:23.058958   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:12:23.058963   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:12:23.059011   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:12:23.098275   62996 cri.go:89] found id: ""
	I0914 18:12:23.098308   62996 logs.go:276] 0 containers: []
	W0914 18:12:23.098320   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:12:23.098327   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:12:23.098380   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:12:23.133498   62996 cri.go:89] found id: ""
	I0914 18:12:23.133525   62996 logs.go:276] 0 containers: []
	W0914 18:12:23.133534   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:12:23.133542   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:12:23.133554   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:12:23.201430   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:12:23.201456   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:12:23.201470   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:12:23.282388   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:12:23.282424   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:12:23.319896   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:12:23.319924   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:12:23.373629   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:12:23.373664   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:12:23.603351   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:26.103668   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:25.082824   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:27.582534   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:27.001617   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:29.002224   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:25.887183   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:12:25.901089   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:12:25.901168   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:12:25.934112   62996 cri.go:89] found id: ""
	I0914 18:12:25.934138   62996 logs.go:276] 0 containers: []
	W0914 18:12:25.934147   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:12:25.934153   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:12:25.934210   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:12:25.969202   62996 cri.go:89] found id: ""
	I0914 18:12:25.969228   62996 logs.go:276] 0 containers: []
	W0914 18:12:25.969236   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:12:25.969242   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:12:25.969300   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:12:26.005516   62996 cri.go:89] found id: ""
	I0914 18:12:26.005537   62996 logs.go:276] 0 containers: []
	W0914 18:12:26.005545   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:12:26.005551   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:12:26.005622   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:12:26.039162   62996 cri.go:89] found id: ""
	I0914 18:12:26.039189   62996 logs.go:276] 0 containers: []
	W0914 18:12:26.039199   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:12:26.039206   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:12:26.039266   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:12:26.073626   62996 cri.go:89] found id: ""
	I0914 18:12:26.073660   62996 logs.go:276] 0 containers: []
	W0914 18:12:26.073674   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:12:26.073682   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:12:26.073752   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:12:26.112057   62996 cri.go:89] found id: ""
	I0914 18:12:26.112086   62996 logs.go:276] 0 containers: []
	W0914 18:12:26.112097   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:12:26.112104   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:12:26.112168   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:12:26.145874   62996 cri.go:89] found id: ""
	I0914 18:12:26.145903   62996 logs.go:276] 0 containers: []
	W0914 18:12:26.145915   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:12:26.145923   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:12:26.145978   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:12:26.178959   62996 cri.go:89] found id: ""
	I0914 18:12:26.178989   62996 logs.go:276] 0 containers: []
	W0914 18:12:26.178997   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:12:26.179005   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:12:26.179018   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:12:26.251132   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:12:26.251156   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:12:26.251174   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:12:26.327488   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:12:26.327528   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:12:26.368444   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:12:26.368471   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:12:26.422676   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:12:26.422715   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:12:28.936784   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:12:28.960435   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:12:28.960515   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:12:29.012679   62996 cri.go:89] found id: ""
	I0914 18:12:29.012710   62996 logs.go:276] 0 containers: []
	W0914 18:12:29.012721   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:12:29.012729   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:12:29.012786   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:12:29.045058   62996 cri.go:89] found id: ""
	I0914 18:12:29.045091   62996 logs.go:276] 0 containers: []
	W0914 18:12:29.045102   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:12:29.045115   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:12:29.045180   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:12:29.079176   62996 cri.go:89] found id: ""
	I0914 18:12:29.079202   62996 logs.go:276] 0 containers: []
	W0914 18:12:29.079209   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:12:29.079216   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:12:29.079279   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:12:29.114288   62996 cri.go:89] found id: ""
	I0914 18:12:29.114317   62996 logs.go:276] 0 containers: []
	W0914 18:12:29.114337   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:12:29.114344   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:12:29.114404   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:12:29.147554   62996 cri.go:89] found id: ""
	I0914 18:12:29.147578   62996 logs.go:276] 0 containers: []
	W0914 18:12:29.147586   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:12:29.147592   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:12:29.147653   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:12:29.181739   62996 cri.go:89] found id: ""
	I0914 18:12:29.181767   62996 logs.go:276] 0 containers: []
	W0914 18:12:29.181775   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:12:29.181781   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:12:29.181825   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:12:29.220328   62996 cri.go:89] found id: ""
	I0914 18:12:29.220356   62996 logs.go:276] 0 containers: []
	W0914 18:12:29.220364   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:12:29.220373   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:12:29.220429   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:12:29.250900   62996 cri.go:89] found id: ""
	I0914 18:12:29.250929   62996 logs.go:276] 0 containers: []
	W0914 18:12:29.250941   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:12:29.250951   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:12:29.250966   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:12:29.287790   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:12:29.287820   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:12:29.338153   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:12:29.338194   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:12:29.351520   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:12:29.351547   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:12:29.421429   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:12:29.421457   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:12:29.421471   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:12:28.104044   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:30.602717   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:30.083027   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:32.083454   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:34.582698   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:31.002459   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:33.500924   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:31.997578   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:12:32.011256   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:12:32.011331   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:12:32.043761   62996 cri.go:89] found id: ""
	I0914 18:12:32.043793   62996 logs.go:276] 0 containers: []
	W0914 18:12:32.043801   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:12:32.043806   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:12:32.043859   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:12:32.076497   62996 cri.go:89] found id: ""
	I0914 18:12:32.076526   62996 logs.go:276] 0 containers: []
	W0914 18:12:32.076536   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:12:32.076543   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:12:32.076609   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:12:32.115059   62996 cri.go:89] found id: ""
	I0914 18:12:32.115084   62996 logs.go:276] 0 containers: []
	W0914 18:12:32.115094   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:12:32.115100   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:12:32.115159   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:12:32.153078   62996 cri.go:89] found id: ""
	I0914 18:12:32.153109   62996 logs.go:276] 0 containers: []
	W0914 18:12:32.153124   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:12:32.153130   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:12:32.153179   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:12:32.190539   62996 cri.go:89] found id: ""
	I0914 18:12:32.190621   62996 logs.go:276] 0 containers: []
	W0914 18:12:32.190638   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:12:32.190647   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:12:32.190700   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:12:32.231917   62996 cri.go:89] found id: ""
	I0914 18:12:32.231941   62996 logs.go:276] 0 containers: []
	W0914 18:12:32.231949   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:12:32.231955   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:12:32.232013   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:12:32.266197   62996 cri.go:89] found id: ""
	I0914 18:12:32.266227   62996 logs.go:276] 0 containers: []
	W0914 18:12:32.266238   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:12:32.266245   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:12:32.266312   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:12:32.299357   62996 cri.go:89] found id: ""
	I0914 18:12:32.299387   62996 logs.go:276] 0 containers: []
	W0914 18:12:32.299398   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:12:32.299409   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:12:32.299424   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:12:32.353225   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:12:32.353268   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:12:32.368228   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:12:32.368280   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:12:32.447802   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:12:32.447829   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:12:32.447847   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:12:32.523749   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:12:32.523788   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:12:35.063750   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:12:35.078487   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:12:35.078565   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:12:35.112949   62996 cri.go:89] found id: ""
	I0914 18:12:35.112994   62996 logs.go:276] 0 containers: []
	W0914 18:12:35.113008   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:12:35.113015   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:12:35.113068   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:12:35.146890   62996 cri.go:89] found id: ""
	I0914 18:12:35.146921   62996 logs.go:276] 0 containers: []
	W0914 18:12:35.146933   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:12:35.146941   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:12:35.147019   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:12:35.181077   62996 cri.go:89] found id: ""
	I0914 18:12:35.181106   62996 logs.go:276] 0 containers: []
	W0914 18:12:35.181116   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:12:35.181123   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:12:35.181194   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:12:35.214142   62996 cri.go:89] found id: ""
	I0914 18:12:35.214191   62996 logs.go:276] 0 containers: []
	W0914 18:12:35.214203   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:12:35.214215   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:12:35.214275   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:12:35.246615   62996 cri.go:89] found id: ""
	I0914 18:12:35.246644   62996 logs.go:276] 0 containers: []
	W0914 18:12:35.246655   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:12:35.246662   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:12:35.246722   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:12:35.278996   62996 cri.go:89] found id: ""
	I0914 18:12:35.279027   62996 logs.go:276] 0 containers: []
	W0914 18:12:35.279038   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:12:35.279047   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:12:35.279104   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:12:35.312612   62996 cri.go:89] found id: ""
	I0914 18:12:35.312641   62996 logs.go:276] 0 containers: []
	W0914 18:12:35.312650   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:12:35.312655   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:12:35.312711   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:12:32.603673   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:35.103528   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:37.081632   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:39.082269   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:35.501391   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:38.000592   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:40.001479   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:35.347717   62996 cri.go:89] found id: ""
	I0914 18:12:35.347741   62996 logs.go:276] 0 containers: []
	W0914 18:12:35.347749   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:12:35.347757   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:12:35.347767   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:12:35.389062   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:12:35.389090   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:12:35.437235   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:12:35.437277   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:12:35.452236   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:12:35.452275   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:12:35.523334   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:12:35.523371   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:12:35.523396   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:12:38.105613   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:12:38.119147   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:12:38.119214   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:12:38.158373   62996 cri.go:89] found id: ""
	I0914 18:12:38.158397   62996 logs.go:276] 0 containers: []
	W0914 18:12:38.158404   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:12:38.158410   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:12:38.158467   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:12:38.192376   62996 cri.go:89] found id: ""
	I0914 18:12:38.192409   62996 logs.go:276] 0 containers: []
	W0914 18:12:38.192421   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:12:38.192429   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:12:38.192490   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:12:38.230390   62996 cri.go:89] found id: ""
	I0914 18:12:38.230413   62996 logs.go:276] 0 containers: []
	W0914 18:12:38.230422   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:12:38.230427   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:12:38.230476   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:12:38.266608   62996 cri.go:89] found id: ""
	I0914 18:12:38.266634   62996 logs.go:276] 0 containers: []
	W0914 18:12:38.266642   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:12:38.266648   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:12:38.266704   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:12:38.299437   62996 cri.go:89] found id: ""
	I0914 18:12:38.299462   62996 logs.go:276] 0 containers: []
	W0914 18:12:38.299471   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:12:38.299477   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:12:38.299548   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:12:38.331092   62996 cri.go:89] found id: ""
	I0914 18:12:38.331119   62996 logs.go:276] 0 containers: []
	W0914 18:12:38.331128   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:12:38.331135   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:12:38.331194   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:12:38.364447   62996 cri.go:89] found id: ""
	I0914 18:12:38.364475   62996 logs.go:276] 0 containers: []
	W0914 18:12:38.364485   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:12:38.364491   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:12:38.364564   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:12:38.396977   62996 cri.go:89] found id: ""
	I0914 18:12:38.397001   62996 logs.go:276] 0 containers: []
	W0914 18:12:38.397011   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:12:38.397022   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:12:38.397036   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:12:38.477413   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:12:38.477449   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:12:38.515003   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:12:38.515031   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:12:38.567177   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:12:38.567222   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:12:38.580840   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:12:38.580876   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:12:38.654520   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:12:37.602537   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:39.603422   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:41.082861   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:43.583680   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:42.002259   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:44.500927   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:41.154728   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:12:41.167501   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:12:41.167578   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:12:41.200209   62996 cri.go:89] found id: ""
	I0914 18:12:41.200243   62996 logs.go:276] 0 containers: []
	W0914 18:12:41.200254   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:12:41.200260   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:12:41.200309   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:12:41.232386   62996 cri.go:89] found id: ""
	I0914 18:12:41.232415   62996 logs.go:276] 0 containers: []
	W0914 18:12:41.232425   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:12:41.232432   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:12:41.232515   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:12:41.268259   62996 cri.go:89] found id: ""
	I0914 18:12:41.268285   62996 logs.go:276] 0 containers: []
	W0914 18:12:41.268295   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:12:41.268303   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:12:41.268374   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:12:41.299952   62996 cri.go:89] found id: ""
	I0914 18:12:41.299984   62996 logs.go:276] 0 containers: []
	W0914 18:12:41.299992   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:12:41.299998   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:12:41.300055   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:12:41.331851   62996 cri.go:89] found id: ""
	I0914 18:12:41.331877   62996 logs.go:276] 0 containers: []
	W0914 18:12:41.331886   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:12:41.331892   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:12:41.331941   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:12:41.373747   62996 cri.go:89] found id: ""
	I0914 18:12:41.373778   62996 logs.go:276] 0 containers: []
	W0914 18:12:41.373789   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:12:41.373797   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:12:41.373847   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:12:41.410186   62996 cri.go:89] found id: ""
	I0914 18:12:41.410217   62996 logs.go:276] 0 containers: []
	W0914 18:12:41.410228   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:12:41.410235   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:12:41.410296   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:12:41.443926   62996 cri.go:89] found id: ""
	I0914 18:12:41.443961   62996 logs.go:276] 0 containers: []
	W0914 18:12:41.443972   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:12:41.443983   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:12:41.443998   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:12:41.457188   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:12:41.457226   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:12:41.525140   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:12:41.525165   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:12:41.525179   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:12:41.603829   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:12:41.603858   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:12:41.641462   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:12:41.641495   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:12:44.194009   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:12:44.207043   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:12:44.207112   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:12:44.240082   62996 cri.go:89] found id: ""
	I0914 18:12:44.240104   62996 logs.go:276] 0 containers: []
	W0914 18:12:44.240112   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:12:44.240117   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:12:44.240177   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:12:44.271608   62996 cri.go:89] found id: ""
	I0914 18:12:44.271642   62996 logs.go:276] 0 containers: []
	W0914 18:12:44.271653   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:12:44.271660   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:12:44.271721   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:12:44.308447   62996 cri.go:89] found id: ""
	I0914 18:12:44.308475   62996 logs.go:276] 0 containers: []
	W0914 18:12:44.308484   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:12:44.308490   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:12:44.308552   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:12:44.340399   62996 cri.go:89] found id: ""
	I0914 18:12:44.340430   62996 logs.go:276] 0 containers: []
	W0914 18:12:44.340440   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:12:44.340446   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:12:44.340502   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:12:44.374078   62996 cri.go:89] found id: ""
	I0914 18:12:44.374104   62996 logs.go:276] 0 containers: []
	W0914 18:12:44.374112   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:12:44.374118   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:12:44.374190   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:12:44.408933   62996 cri.go:89] found id: ""
	I0914 18:12:44.408963   62996 logs.go:276] 0 containers: []
	W0914 18:12:44.408974   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:12:44.408982   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:12:44.409040   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:12:44.444019   62996 cri.go:89] found id: ""
	I0914 18:12:44.444046   62996 logs.go:276] 0 containers: []
	W0914 18:12:44.444063   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:12:44.444070   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:12:44.444126   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:12:44.477033   62996 cri.go:89] found id: ""
	I0914 18:12:44.477058   62996 logs.go:276] 0 containers: []
	W0914 18:12:44.477066   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:12:44.477075   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:12:44.477086   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:12:44.530118   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:12:44.530151   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:12:44.543295   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:12:44.543327   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:12:44.614448   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:12:44.614474   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:12:44.614488   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:12:44.690708   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:12:44.690744   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:12:42.103521   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:44.603744   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:46.082955   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:48.576914   62554 pod_ready.go:82] duration metric: took 4m0.000963266s for pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace to be "Ready" ...
	E0914 18:12:48.576953   62554 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace to be "Ready" (will not retry!)
	I0914 18:12:48.576972   62554 pod_ready.go:39] duration metric: took 4m11.061091965s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 18:12:48.576996   62554 kubeadm.go:597] duration metric: took 4m18.578277603s to restartPrimaryControlPlane
	W0914 18:12:48.577052   62554 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0914 18:12:48.577082   62554 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0914 18:12:46.501278   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:49.001649   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:47.229658   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:12:47.242715   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:12:47.242785   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:12:47.278275   62996 cri.go:89] found id: ""
	I0914 18:12:47.278298   62996 logs.go:276] 0 containers: []
	W0914 18:12:47.278305   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:12:47.278311   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:12:47.278365   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:12:47.313954   62996 cri.go:89] found id: ""
	I0914 18:12:47.313977   62996 logs.go:276] 0 containers: []
	W0914 18:12:47.313985   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:12:47.313991   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:12:47.314045   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:12:47.350944   62996 cri.go:89] found id: ""
	I0914 18:12:47.350972   62996 logs.go:276] 0 containers: []
	W0914 18:12:47.350983   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:12:47.350990   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:12:47.351052   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:12:47.384810   62996 cri.go:89] found id: ""
	I0914 18:12:47.384838   62996 logs.go:276] 0 containers: []
	W0914 18:12:47.384850   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:12:47.384857   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:12:47.384918   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:12:47.420380   62996 cri.go:89] found id: ""
	I0914 18:12:47.420406   62996 logs.go:276] 0 containers: []
	W0914 18:12:47.420419   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:12:47.420425   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:12:47.420476   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:12:47.453967   62996 cri.go:89] found id: ""
	I0914 18:12:47.453995   62996 logs.go:276] 0 containers: []
	W0914 18:12:47.454003   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:12:47.454009   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:12:47.454060   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:12:47.488588   62996 cri.go:89] found id: ""
	I0914 18:12:47.488616   62996 logs.go:276] 0 containers: []
	W0914 18:12:47.488627   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:12:47.488633   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:12:47.488696   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:12:47.522970   62996 cri.go:89] found id: ""
	I0914 18:12:47.523004   62996 logs.go:276] 0 containers: []
	W0914 18:12:47.523015   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:12:47.523025   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:12:47.523039   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:12:47.575977   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:12:47.576026   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:12:47.590854   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:12:47.590884   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:12:47.662149   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:12:47.662200   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:12:47.662215   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:12:47.740447   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:12:47.740482   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
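	(The pass above — one crictl query per control-plane component, then a fallback sweep of kubelet/dmesg/CRI-O journals — repeats every few seconds until a kube-apiserver container shows up. A minimal Go sketch of the same probe, assuming only that crictl and journalctl are on PATH; this is an illustration, not minikube's cri.go/logs.go code.)

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	var components = []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
	}

	func main() {
		found := 0
		for _, name := range components {
			// `crictl ps -a --quiet --name=<name>` prints one container ID per line, or nothing.
			out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
			ids := strings.Fields(string(out))
			if err != nil || len(ids) == 0 {
				fmt.Printf("no container found matching %q\n", name)
				continue
			}
			found += len(ids)
			fmt.Printf("%s: %v\n", name, ids)
		}
		if found == 0 {
			// Nothing is running yet: fall back to the same journals gathered above.
			for _, unit := range []string{"kubelet", "crio"} {
				out, _ := exec.Command("sudo", "journalctl", "-u", unit, "-n", "400").CombinedOutput()
				fmt.Printf("--- journalctl -u %s ---\n%s\n", unit, out)
			}
		}
	}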
	I0914 18:12:50.279512   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:12:50.292294   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:12:50.292377   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:12:50.330928   62996 cri.go:89] found id: ""
	I0914 18:12:50.330960   62996 logs.go:276] 0 containers: []
	W0914 18:12:50.330972   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:12:50.330980   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:12:50.331036   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:12:47.103834   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:49.104052   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:51.603479   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:51.500469   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:53.500885   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:50.363656   62996 cri.go:89] found id: ""
	I0914 18:12:50.363687   62996 logs.go:276] 0 containers: []
	W0914 18:12:50.363696   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:12:50.363702   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:12:50.363756   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:12:50.395071   62996 cri.go:89] found id: ""
	I0914 18:12:50.395096   62996 logs.go:276] 0 containers: []
	W0914 18:12:50.395107   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:12:50.395113   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:12:50.395172   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:12:50.428461   62996 cri.go:89] found id: ""
	I0914 18:12:50.428487   62996 logs.go:276] 0 containers: []
	W0914 18:12:50.428495   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:12:50.428502   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:12:50.428549   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:12:50.461059   62996 cri.go:89] found id: ""
	I0914 18:12:50.461089   62996 logs.go:276] 0 containers: []
	W0914 18:12:50.461098   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:12:50.461105   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:12:50.461155   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:12:50.495447   62996 cri.go:89] found id: ""
	I0914 18:12:50.495481   62996 logs.go:276] 0 containers: []
	W0914 18:12:50.495492   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:12:50.495500   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:12:50.495574   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:12:50.529535   62996 cri.go:89] found id: ""
	I0914 18:12:50.529563   62996 logs.go:276] 0 containers: []
	W0914 18:12:50.529573   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:12:50.529580   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:12:50.529640   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:12:50.564648   62996 cri.go:89] found id: ""
	I0914 18:12:50.564679   62996 logs.go:276] 0 containers: []
	W0914 18:12:50.564689   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:12:50.564699   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:12:50.564710   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:12:50.639039   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:12:50.639066   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:12:50.639081   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:12:50.715636   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:12:50.715675   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:12:50.752973   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:12:50.753002   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:12:50.804654   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:12:50.804692   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:12:53.319420   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:12:53.332322   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:12:53.332414   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:12:53.370250   62996 cri.go:89] found id: ""
	I0914 18:12:53.370287   62996 logs.go:276] 0 containers: []
	W0914 18:12:53.370298   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:12:53.370306   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:12:53.370359   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:12:53.405394   62996 cri.go:89] found id: ""
	I0914 18:12:53.405422   62996 logs.go:276] 0 containers: []
	W0914 18:12:53.405434   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:12:53.405442   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:12:53.405501   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:12:53.439653   62996 cri.go:89] found id: ""
	I0914 18:12:53.439684   62996 logs.go:276] 0 containers: []
	W0914 18:12:53.439693   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:12:53.439699   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:12:53.439747   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:12:53.472491   62996 cri.go:89] found id: ""
	I0914 18:12:53.472520   62996 logs.go:276] 0 containers: []
	W0914 18:12:53.472531   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:12:53.472537   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:12:53.472598   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:12:53.506837   62996 cri.go:89] found id: ""
	I0914 18:12:53.506862   62996 logs.go:276] 0 containers: []
	W0914 18:12:53.506870   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:12:53.506877   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:12:53.506940   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:12:53.538229   62996 cri.go:89] found id: ""
	I0914 18:12:53.538256   62996 logs.go:276] 0 containers: []
	W0914 18:12:53.538267   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:12:53.538274   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:12:53.538340   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:12:53.570628   62996 cri.go:89] found id: ""
	I0914 18:12:53.570654   62996 logs.go:276] 0 containers: []
	W0914 18:12:53.570665   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:12:53.570672   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:12:53.570736   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:12:53.606147   62996 cri.go:89] found id: ""
	I0914 18:12:53.606188   62996 logs.go:276] 0 containers: []
	W0914 18:12:53.606199   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:12:53.606210   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:12:53.606236   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:12:53.675807   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:12:53.675829   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:12:53.675844   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:12:53.758491   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:12:53.758530   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:12:53.796006   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:12:53.796038   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:12:53.844935   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:12:53.844972   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:12:53.604109   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:56.104639   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:56.360696   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:12:56.374916   62996 kubeadm.go:597] duration metric: took 4m2.856242026s to restartPrimaryControlPlane
	W0914 18:12:56.374982   62996 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0914 18:12:56.375003   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0914 18:12:57.043509   62996 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 18:12:57.059022   62996 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0914 18:12:57.070295   62996 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0914 18:12:57.080854   62996 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0914 18:12:57.080875   62996 kubeadm.go:157] found existing configuration files:
	
	I0914 18:12:57.080917   62996 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0914 18:12:57.091221   62996 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0914 18:12:57.091320   62996 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0914 18:12:57.102011   62996 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0914 18:12:57.111389   62996 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0914 18:12:57.111451   62996 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0914 18:12:57.120508   62996 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0914 18:12:57.129086   62996 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0914 18:12:57.129162   62996 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0914 18:12:57.138193   62996 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0914 18:12:57.146637   62996 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0914 18:12:57.146694   62996 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
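	(The four grep/rm pairs above are a stale-config sweep: any /etc/kubernetes/*.conf that does not mention the expected control-plane endpoint is removed so the following kubeadm init can regenerate it. A rough Go equivalent of that check — an illustration, not the actual kubeadm.go helper.)

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	func main() {
		const endpoint = "https://control-plane.minikube.internal:8443"
		confs := []string{
			"/etc/kubernetes/admin.conf",
			"/etc/kubernetes/kubelet.conf",
			"/etc/kubernetes/controller-manager.conf",
			"/etc/kubernetes/scheduler.conf",
		}
		for _, conf := range confs {
			data, err := os.ReadFile(conf)
			if err != nil || !strings.Contains(string(data), endpoint) {
				// Missing file or wrong endpoint: drop it and let `kubeadm init` rewrite it.
				fmt.Printf("%q may not contain %q - removing\n", conf, endpoint)
				_ = os.Remove(conf)
			}
		}
	}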
	I0914 18:12:57.155659   62996 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0914 18:12:57.230872   62996 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0914 18:12:57.230955   62996 kubeadm.go:310] [preflight] Running pre-flight checks
	I0914 18:12:57.369118   62996 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0914 18:12:57.369267   62996 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0914 18:12:57.369422   62996 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0914 18:12:57.560020   62996 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0914 18:12:57.561972   62996 out.go:235]   - Generating certificates and keys ...
	I0914 18:12:57.562086   62996 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0914 18:12:57.562180   62996 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0914 18:12:57.562311   62996 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0914 18:12:57.562370   62996 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0914 18:12:57.562426   62996 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0914 18:12:57.562473   62996 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0914 18:12:57.562562   62996 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0914 18:12:57.562654   62996 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0914 18:12:57.563036   62996 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0914 18:12:57.563429   62996 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0914 18:12:57.563514   62996 kubeadm.go:310] [certs] Using the existing "sa" key
	I0914 18:12:57.563592   62996 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0914 18:12:57.677534   62996 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0914 18:12:57.910852   62996 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0914 18:12:58.037495   62996 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0914 18:12:58.325552   62996 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0914 18:12:58.339574   62996 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0914 18:12:58.340671   62996 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0914 18:12:58.340740   62996 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0914 18:12:58.485582   62996 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0914 18:12:55.501202   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:57.501413   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:13:00.000020   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
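	(The recurring pod_ready lines for metrics-server-6867b74b74-7v8dr and metrics-server-6867b74b74-n276z come from a poll of the pod's Ready condition. A small stand-in for that check, assuming the addon pods carry the usual k8s-app=metrics-server label; illustrative only, not minikube's pod_ready.go.)

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	func main() {
		// Assumption: the metrics-server pods use the standard k8s-app=metrics-server label.
		jsonpath := `-o=jsonpath={.items[0].status.conditions[?(@.type=="Ready")].status}`
		for i := 0; i < 10; i++ {
			out, err := exec.Command("kubectl", "-n", "kube-system", "get", "pods",
				"-l", "k8s-app=metrics-server", jsonpath).Output()
			status := strings.TrimSpace(string(out))
			if err == nil && status == "True" {
				fmt.Println("metrics-server is Ready")
				return
			}
			fmt.Printf("metrics-server Ready=%q, retrying...\n", status)
			time.Sleep(2 * time.Second)
		}
	}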
	I0914 18:12:58.488706   62996 out.go:235]   - Booting up control plane ...
	I0914 18:12:58.488863   62996 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0914 18:12:58.496924   62996 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0914 18:12:58.499125   62996 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0914 18:12:58.500762   62996 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0914 18:12:58.504049   62996 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0914 18:12:58.604461   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:13:01.102988   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:13:02.001195   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:13:04.001938   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:13:03.603700   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:13:06.103294   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:13:06.501564   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:13:09.002049   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:13:08.604408   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:13:11.103401   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:13:14.788734   62554 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.2116254s)
	I0914 18:13:14.788816   62554 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 18:13:14.810488   62554 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0914 18:13:14.827773   62554 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0914 18:13:14.846933   62554 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0914 18:13:14.846958   62554 kubeadm.go:157] found existing configuration files:
	
	I0914 18:13:14.847011   62554 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0914 18:13:14.859886   62554 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0914 18:13:14.859954   62554 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0914 18:13:14.882400   62554 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0914 18:13:14.896700   62554 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0914 18:13:14.896779   62554 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0914 18:13:14.908567   62554 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0914 18:13:14.920718   62554 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0914 18:13:14.920791   62554 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0914 18:13:14.930849   62554 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0914 18:13:14.940757   62554 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0914 18:13:14.940829   62554 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0914 18:13:14.950828   62554 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0914 18:13:15.000219   62554 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0914 18:13:15.000292   62554 kubeadm.go:310] [preflight] Running pre-flight checks
	I0914 18:13:15.116662   62554 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0914 18:13:15.116830   62554 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0914 18:13:15.116937   62554 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0914 18:13:15.128493   62554 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0914 18:13:11.002219   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:13:13.500397   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:13:15.130231   62554 out.go:235]   - Generating certificates and keys ...
	I0914 18:13:15.130322   62554 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0914 18:13:15.130412   62554 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0914 18:13:15.130513   62554 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0914 18:13:15.130642   62554 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0914 18:13:15.130762   62554 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0914 18:13:15.130842   62554 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0914 18:13:15.130927   62554 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0914 18:13:15.131020   62554 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0914 18:13:15.131131   62554 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0914 18:13:15.131235   62554 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0914 18:13:15.131325   62554 kubeadm.go:310] [certs] Using the existing "sa" key
	I0914 18:13:15.131417   62554 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0914 18:13:15.454691   62554 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0914 18:13:15.653046   62554 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0914 18:13:15.704029   62554 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0914 18:13:15.846280   62554 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0914 18:13:15.926881   62554 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0914 18:13:15.927633   62554 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0914 18:13:15.932596   62554 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0914 18:13:13.602971   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:13:15.603335   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:13:15.934499   62554 out.go:235]   - Booting up control plane ...
	I0914 18:13:15.934626   62554 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0914 18:13:15.934761   62554 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0914 18:13:15.934913   62554 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0914 18:13:15.952982   62554 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0914 18:13:15.961449   62554 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0914 18:13:15.961526   62554 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0914 18:13:16.102126   62554 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0914 18:13:16.102335   62554 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0914 18:13:16.604217   62554 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.082287ms
	I0914 18:13:16.604330   62554 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
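	(The [kubelet-check] and [api-check] lines above are plain HTTP health probes with a 4m0s budget; the kubelet one polls http://127.0.0.1:10248/healthz exactly as printed in the log. A bare-bones Go version of that probe — an illustration, not kubeadm's waiter.)

	package main

	import (
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		const url = "http://127.0.0.1:10248/healthz" // endpoint named in the kubeadm output above
		deadline := time.Now().Add(4 * time.Minute)  // kubeadm allows up to 4m0s
		for time.Now().Before(deadline) {
			resp, err := http.Get(url)
			if err == nil && resp.StatusCode == http.StatusOK {
				resp.Body.Close()
				fmt.Println("kubelet is healthy")
				return
			}
			if resp != nil {
				resp.Body.Close()
			}
			time.Sleep(500 * time.Millisecond)
		}
		fmt.Println("kubelet did not become healthy in time")
	}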
	I0914 18:13:15.501231   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:13:17.501427   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:13:19.501641   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:13:21.609408   62554 kubeadm.go:310] [api-check] The API server is healthy after 5.002255971s
	I0914 18:13:21.622798   62554 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0914 18:13:21.637103   62554 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0914 18:13:21.676498   62554 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0914 18:13:21.676739   62554 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-044534 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0914 18:13:21.697522   62554 kubeadm.go:310] [bootstrap-token] Using token: oo4rrp.xx4py1wjxiu1i6la
	I0914 18:13:17.604060   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:13:20.103115   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:13:21.699311   62554 out.go:235]   - Configuring RBAC rules ...
	I0914 18:13:21.699462   62554 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0914 18:13:21.711614   62554 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0914 18:13:21.721449   62554 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0914 18:13:21.727812   62554 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0914 18:13:21.733486   62554 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0914 18:13:21.747521   62554 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0914 18:13:22.014670   62554 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0914 18:13:22.463865   62554 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0914 18:13:23.016165   62554 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0914 18:13:23.016195   62554 kubeadm.go:310] 
	I0914 18:13:23.016257   62554 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0914 18:13:23.016265   62554 kubeadm.go:310] 
	I0914 18:13:23.016385   62554 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0914 18:13:23.016415   62554 kubeadm.go:310] 
	I0914 18:13:23.016456   62554 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0914 18:13:23.016542   62554 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0914 18:13:23.016627   62554 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0914 18:13:23.016637   62554 kubeadm.go:310] 
	I0914 18:13:23.016753   62554 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0914 18:13:23.016778   62554 kubeadm.go:310] 
	I0914 18:13:23.016850   62554 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0914 18:13:23.016860   62554 kubeadm.go:310] 
	I0914 18:13:23.016937   62554 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0914 18:13:23.017051   62554 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0914 18:13:23.017142   62554 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0914 18:13:23.017156   62554 kubeadm.go:310] 
	I0914 18:13:23.017284   62554 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0914 18:13:23.017403   62554 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0914 18:13:23.017419   62554 kubeadm.go:310] 
	I0914 18:13:23.017533   62554 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token oo4rrp.xx4py1wjxiu1i6la \
	I0914 18:13:23.017664   62554 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:30a8b6e98108730ec92a246ad3f4e746211e79f910581e46cd69c4cd78384600 \
	I0914 18:13:23.017700   62554 kubeadm.go:310] 	--control-plane 
	I0914 18:13:23.017710   62554 kubeadm.go:310] 
	I0914 18:13:23.017821   62554 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0914 18:13:23.017832   62554 kubeadm.go:310] 
	I0914 18:13:23.017944   62554 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token oo4rrp.xx4py1wjxiu1i6la \
	I0914 18:13:23.018104   62554 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:30a8b6e98108730ec92a246ad3f4e746211e79f910581e46cd69c4cd78384600 
	I0914 18:13:23.019098   62554 kubeadm.go:310] W0914 18:13:14.968906    2543 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0914 18:13:23.019512   62554 kubeadm.go:310] W0914 18:13:14.970621    2543 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0914 18:13:23.019672   62554 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0914 18:13:23.019690   62554 cni.go:84] Creating CNI manager for ""
	I0914 18:13:23.019704   62554 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0914 18:13:23.021459   62554 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0914 18:13:23.022517   62554 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0914 18:13:23.037352   62554 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
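	(The scp above writes minikube's bridge CNI config to /etc/cni/net.d/1-k8s.conflist. The 496-byte payload itself is not shown in the log; the sketch below writes an illustrative bridge+portmap conflist of the standard CNI shape to the same path — the subnet and field values are assumptions, not the file minikube actually generated.)

	package main

	import (
		"fmt"
		"os"
	)

	// Illustrative conflist; minikube's real 1-k8s.conflist may differ.
	const conflist = `{
	  "cniVersion": "0.3.1",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "isDefaultGateway": true,
	      "ipMasq": true,
	      "hairpinMode": true,
	      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	    },
	    { "type": "portmap", "capabilities": { "portMappings": true } }
	  ]
	}
	`

	func main() {
		if err := os.MkdirAll("/etc/cni/net.d", 0o755); err != nil {
			fmt.Println(err)
			return
		}
		if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
			fmt.Println(err)
			return
		}
		fmt.Println("wrote /etc/cni/net.d/1-k8s.conflist")
	}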
	I0914 18:13:23.062037   62554 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0914 18:13:23.062132   62554 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 18:13:23.062202   62554 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-044534 minikube.k8s.io/updated_at=2024_09_14T18_13_23_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=fbeeb744274463b05401c917e5ab21bbaf5ef95a minikube.k8s.io/name=embed-certs-044534 minikube.k8s.io/primary=true
	I0914 18:13:23.089789   62554 ops.go:34] apiserver oom_adj: -16
	I0914 18:13:23.246478   62554 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 18:13:23.747419   62554 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 18:13:24.247388   62554 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 18:13:24.746913   62554 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 18:13:21.502222   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:13:24.001757   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:13:25.247445   62554 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 18:13:25.747417   62554 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 18:13:26.247440   62554 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 18:13:26.747262   62554 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 18:13:26.847454   62554 kubeadm.go:1113] duration metric: took 3.78538549s to wait for elevateKubeSystemPrivileges
	I0914 18:13:26.847496   62554 kubeadm.go:394] duration metric: took 4m56.896825398s to StartCluster
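	(The burst of `kubectl get sa default` calls above is the elevate-privileges wait: after creating the minikube-rbac cluster-admin binding, the run retries until the default ServiceAccount exists; the whole step took 3.78s here. A condensed Go sketch of that sequence — illustrative, not minikube's kubeadm.go.)

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func main() {
		kubectl := "/var/lib/minikube/binaries/v1.31.1/kubectl"
		kubeconfig := "--kubeconfig=/var/lib/minikube/kubeconfig"

		// Grant cluster-admin to kube-system:default, as in the log above.
		bind := exec.Command("sudo", kubectl, "create", "clusterrolebinding", "minikube-rbac",
			"--clusterrole=cluster-admin", "--serviceaccount=kube-system:default", kubeconfig)
		if out, err := bind.CombinedOutput(); err != nil {
			fmt.Printf("clusterrolebinding: %v\n%s", err, out)
		}

		// The default ServiceAccount appears asynchronously once the controller
		// manager is up; poll for it roughly twice a second.
		deadline := time.Now().Add(2 * time.Minute)
		for time.Now().Before(deadline) {
			if exec.Command("sudo", kubectl, "get", "sa", "default", kubeconfig).Run() == nil {
				fmt.Println("default ServiceAccount exists")
				return
			}
			time.Sleep(500 * time.Millisecond)
		}
		fmt.Println("timed out waiting for default ServiceAccount")
	}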
	I0914 18:13:26.847521   62554 settings.go:142] acquiring lock: {Name:mkee31cd57c3a91eff581c48ba7961460fb3e0b1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 18:13:26.847618   62554 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19643-8806/kubeconfig
	I0914 18:13:26.850148   62554 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19643-8806/kubeconfig: {Name:mk850f3e2f3c2ef1e81c795ae72e22e459e27140 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 18:13:26.850488   62554 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.126 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0914 18:13:26.850562   62554 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0914 18:13:26.850672   62554 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-044534"
	I0914 18:13:26.850690   62554 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-044534"
	W0914 18:13:26.850703   62554 addons.go:243] addon storage-provisioner should already be in state true
	I0914 18:13:26.850715   62554 addons.go:69] Setting default-storageclass=true in profile "embed-certs-044534"
	I0914 18:13:26.850734   62554 host.go:66] Checking if "embed-certs-044534" exists ...
	I0914 18:13:26.850753   62554 config.go:182] Loaded profile config "embed-certs-044534": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0914 18:13:26.850752   62554 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-044534"
	I0914 18:13:26.850716   62554 addons.go:69] Setting metrics-server=true in profile "embed-certs-044534"
	I0914 18:13:26.850844   62554 addons.go:234] Setting addon metrics-server=true in "embed-certs-044534"
	W0914 18:13:26.850860   62554 addons.go:243] addon metrics-server should already be in state true
	I0914 18:13:26.850898   62554 host.go:66] Checking if "embed-certs-044534" exists ...
	I0914 18:13:26.851174   62554 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 18:13:26.851204   62554 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 18:13:26.851214   62554 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 18:13:26.851235   62554 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 18:13:26.851250   62554 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 18:13:26.851273   62554 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 18:13:26.852030   62554 out.go:177] * Verifying Kubernetes components...
	I0914 18:13:26.853580   62554 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 18:13:26.868084   62554 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33879
	I0914 18:13:26.868135   62554 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44049
	I0914 18:13:26.868700   62554 main.go:141] libmachine: () Calling .GetVersion
	I0914 18:13:26.868787   62554 main.go:141] libmachine: () Calling .GetVersion
	I0914 18:13:26.869251   62554 main.go:141] libmachine: Using API Version  1
	I0914 18:13:26.869282   62554 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 18:13:26.869637   62554 main.go:141] libmachine: () Calling .GetMachineName
	I0914 18:13:26.869650   62554 main.go:141] libmachine: Using API Version  1
	I0914 18:13:26.869714   62554 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 18:13:26.870039   62554 main.go:141] libmachine: () Calling .GetMachineName
	I0914 18:13:26.870232   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetState
	I0914 18:13:26.870396   62554 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 18:13:26.870454   62554 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 18:13:26.871718   62554 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33295
	I0914 18:13:26.872337   62554 main.go:141] libmachine: () Calling .GetVersion
	I0914 18:13:26.872842   62554 main.go:141] libmachine: Using API Version  1
	I0914 18:13:26.872870   62554 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 18:13:26.873227   62554 main.go:141] libmachine: () Calling .GetMachineName
	I0914 18:13:26.873942   62554 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 18:13:26.873989   62554 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 18:13:26.874235   62554 addons.go:234] Setting addon default-storageclass=true in "embed-certs-044534"
	W0914 18:13:26.874257   62554 addons.go:243] addon default-storageclass should already be in state true
	I0914 18:13:26.874287   62554 host.go:66] Checking if "embed-certs-044534" exists ...
	I0914 18:13:26.874674   62554 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 18:13:26.874721   62554 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 18:13:26.887685   62554 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38275
	I0914 18:13:26.888211   62554 main.go:141] libmachine: () Calling .GetVersion
	I0914 18:13:26.888735   62554 main.go:141] libmachine: Using API Version  1
	I0914 18:13:26.888753   62554 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 18:13:26.889060   62554 main.go:141] libmachine: () Calling .GetMachineName
	I0914 18:13:26.889233   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetState
	I0914 18:13:26.891040   62554 main.go:141] libmachine: (embed-certs-044534) Calling .DriverName
	I0914 18:13:26.892012   62554 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41983
	I0914 18:13:26.892352   62554 main.go:141] libmachine: () Calling .GetVersion
	I0914 18:13:26.892798   62554 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 18:13:26.892812   62554 main.go:141] libmachine: Using API Version  1
	I0914 18:13:26.892845   62554 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 18:13:26.893321   62554 main.go:141] libmachine: () Calling .GetMachineName
	I0914 18:13:26.893987   62554 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 18:13:26.894040   62554 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 18:13:26.894059   62554 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0914 18:13:26.894078   62554 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0914 18:13:26.894102   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetSSHHostname
	I0914 18:13:26.897218   62554 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39143
	I0914 18:13:26.897776   62554 main.go:141] libmachine: () Calling .GetVersion
	I0914 18:13:26.897932   62554 main.go:141] libmachine: (embed-certs-044534) DBG | domain embed-certs-044534 has defined MAC address 52:54:00:f7:d3:8e in network mk-embed-certs-044534
	I0914 18:13:26.898631   62554 main.go:141] libmachine: (embed-certs-044534) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:d3:8e", ip: ""} in network mk-embed-certs-044534: {Iface:virbr3 ExpiryTime:2024-09-14 19:00:16 +0000 UTC Type:0 Mac:52:54:00:f7:d3:8e Iaid: IPaddr:192.168.50.126 Prefix:24 Hostname:embed-certs-044534 Clientid:01:52:54:00:f7:d3:8e}
	I0914 18:13:26.898669   62554 main.go:141] libmachine: (embed-certs-044534) DBG | domain embed-certs-044534 has defined IP address 192.168.50.126 and MAC address 52:54:00:f7:d3:8e in network mk-embed-certs-044534
	I0914 18:13:26.899315   62554 main.go:141] libmachine: Using API Version  1
	I0914 18:13:26.899382   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetSSHPort
	I0914 18:13:26.899395   62554 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 18:13:26.899557   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetSSHKeyPath
	I0914 18:13:26.899698   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetSSHUsername
	I0914 18:13:26.899873   62554 sshutil.go:53] new ssh client: &{IP:192.168.50.126 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/embed-certs-044534/id_rsa Username:docker}
	I0914 18:13:26.900433   62554 main.go:141] libmachine: () Calling .GetMachineName
	I0914 18:13:26.900668   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetState
	I0914 18:13:26.902863   62554 main.go:141] libmachine: (embed-certs-044534) Calling .DriverName
	I0914 18:13:26.904569   62554 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0914 18:13:22.104620   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:13:24.603793   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:13:26.604247   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:13:26.905708   62554 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0914 18:13:26.905729   62554 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0914 18:13:26.905755   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetSSHHostname
	I0914 18:13:26.910848   62554 main.go:141] libmachine: (embed-certs-044534) DBG | domain embed-certs-044534 has defined MAC address 52:54:00:f7:d3:8e in network mk-embed-certs-044534
	I0914 18:13:26.911333   62554 main.go:141] libmachine: (embed-certs-044534) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:d3:8e", ip: ""} in network mk-embed-certs-044534: {Iface:virbr3 ExpiryTime:2024-09-14 19:00:16 +0000 UTC Type:0 Mac:52:54:00:f7:d3:8e Iaid: IPaddr:192.168.50.126 Prefix:24 Hostname:embed-certs-044534 Clientid:01:52:54:00:f7:d3:8e}
	I0914 18:13:26.911430   62554 main.go:141] libmachine: (embed-certs-044534) DBG | domain embed-certs-044534 has defined IP address 192.168.50.126 and MAC address 52:54:00:f7:d3:8e in network mk-embed-certs-044534
	I0914 18:13:26.911568   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetSSHPort
	I0914 18:13:26.911840   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetSSHKeyPath
	I0914 18:13:26.912025   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetSSHUsername
	I0914 18:13:26.912238   62554 sshutil.go:53] new ssh client: &{IP:192.168.50.126 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/embed-certs-044534/id_rsa Username:docker}
	I0914 18:13:26.912625   62554 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33289
	I0914 18:13:26.913014   62554 main.go:141] libmachine: () Calling .GetVersion
	I0914 18:13:26.913653   62554 main.go:141] libmachine: Using API Version  1
	I0914 18:13:26.913668   62554 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 18:13:26.914116   62554 main.go:141] libmachine: () Calling .GetMachineName
	I0914 18:13:26.914342   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetState
	I0914 18:13:26.916119   62554 main.go:141] libmachine: (embed-certs-044534) Calling .DriverName
	I0914 18:13:26.916332   62554 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0914 18:13:26.916350   62554 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0914 18:13:26.916369   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetSSHHostname
	I0914 18:13:26.920129   62554 main.go:141] libmachine: (embed-certs-044534) DBG | domain embed-certs-044534 has defined MAC address 52:54:00:f7:d3:8e in network mk-embed-certs-044534
	I0914 18:13:26.920769   62554 main.go:141] libmachine: (embed-certs-044534) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:d3:8e", ip: ""} in network mk-embed-certs-044534: {Iface:virbr3 ExpiryTime:2024-09-14 19:00:16 +0000 UTC Type:0 Mac:52:54:00:f7:d3:8e Iaid: IPaddr:192.168.50.126 Prefix:24 Hostname:embed-certs-044534 Clientid:01:52:54:00:f7:d3:8e}
	I0914 18:13:26.920791   62554 main.go:141] libmachine: (embed-certs-044534) DBG | domain embed-certs-044534 has defined IP address 192.168.50.126 and MAC address 52:54:00:f7:d3:8e in network mk-embed-certs-044534
	I0914 18:13:26.920971   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetSSHPort
	I0914 18:13:26.921170   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetSSHKeyPath
	I0914 18:13:26.921291   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetSSHUsername
	I0914 18:13:26.921413   62554 sshutil.go:53] new ssh client: &{IP:192.168.50.126 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/embed-certs-044534/id_rsa Username:docker}
	I0914 18:13:27.055184   62554 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0914 18:13:27.072683   62554 node_ready.go:35] waiting up to 6m0s for node "embed-certs-044534" to be "Ready" ...
	I0914 18:13:27.084289   62554 node_ready.go:49] node "embed-certs-044534" has status "Ready":"True"
	I0914 18:13:27.084317   62554 node_ready.go:38] duration metric: took 11.599354ms for node "embed-certs-044534" to be "Ready" ...
	I0914 18:13:27.084326   62554 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 18:13:27.090428   62554 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-044534" in "kube-system" namespace to be "Ready" ...
	I0914 18:13:27.258854   62554 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0914 18:13:27.260576   62554 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0914 18:13:27.261092   62554 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0914 18:13:27.261115   62554 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0914 18:13:27.332882   62554 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0914 18:13:27.332914   62554 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0914 18:13:27.400159   62554 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0914 18:13:27.400193   62554 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0914 18:13:27.486731   62554 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0914 18:13:28.164139   62554 main.go:141] libmachine: Making call to close driver server
	I0914 18:13:28.164171   62554 main.go:141] libmachine: (embed-certs-044534) Calling .Close
	I0914 18:13:28.164215   62554 main.go:141] libmachine: Making call to close driver server
	I0914 18:13:28.164242   62554 main.go:141] libmachine: (embed-certs-044534) Calling .Close
	I0914 18:13:28.164581   62554 main.go:141] libmachine: Successfully made call to close driver server
	I0914 18:13:28.164593   62554 main.go:141] libmachine: Successfully made call to close driver server
	I0914 18:13:28.164596   62554 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 18:13:28.164597   62554 main.go:141] libmachine: (embed-certs-044534) DBG | Closing plugin on server side
	I0914 18:13:28.164608   62554 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 18:13:28.164569   62554 main.go:141] libmachine: (embed-certs-044534) DBG | Closing plugin on server side
	I0914 18:13:28.164619   62554 main.go:141] libmachine: Making call to close driver server
	I0914 18:13:28.164621   62554 main.go:141] libmachine: Making call to close driver server
	I0914 18:13:28.164627   62554 main.go:141] libmachine: (embed-certs-044534) Calling .Close
	I0914 18:13:28.164629   62554 main.go:141] libmachine: (embed-certs-044534) Calling .Close
	I0914 18:13:28.164874   62554 main.go:141] libmachine: Successfully made call to close driver server
	I0914 18:13:28.164897   62554 main.go:141] libmachine: (embed-certs-044534) DBG | Closing plugin on server side
	I0914 18:13:28.164911   62554 main.go:141] libmachine: (embed-certs-044534) DBG | Closing plugin on server side
	I0914 18:13:28.164902   62554 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 18:13:28.164929   62554 main.go:141] libmachine: Successfully made call to close driver server
	I0914 18:13:28.164941   62554 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 18:13:28.196171   62554 main.go:141] libmachine: Making call to close driver server
	I0914 18:13:28.196197   62554 main.go:141] libmachine: (embed-certs-044534) Calling .Close
	I0914 18:13:28.196530   62554 main.go:141] libmachine: Successfully made call to close driver server
	I0914 18:13:28.196590   62554 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 18:13:28.196634   62554 main.go:141] libmachine: (embed-certs-044534) DBG | Closing plugin on server side
	I0914 18:13:28.509915   62554 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.023114908s)
	I0914 18:13:28.509973   62554 main.go:141] libmachine: Making call to close driver server
	I0914 18:13:28.509989   62554 main.go:141] libmachine: (embed-certs-044534) Calling .Close
	I0914 18:13:28.510276   62554 main.go:141] libmachine: Successfully made call to close driver server
	I0914 18:13:28.510329   62554 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 18:13:28.510348   62554 main.go:141] libmachine: (embed-certs-044534) DBG | Closing plugin on server side
	I0914 18:13:28.510365   62554 main.go:141] libmachine: Making call to close driver server
	I0914 18:13:28.510374   62554 main.go:141] libmachine: (embed-certs-044534) Calling .Close
	I0914 18:13:28.510614   62554 main.go:141] libmachine: (embed-certs-044534) DBG | Closing plugin on server side
	I0914 18:13:28.510653   62554 main.go:141] libmachine: Successfully made call to close driver server
	I0914 18:13:28.510665   62554 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 18:13:28.510678   62554 addons.go:475] Verifying addon metrics-server=true in "embed-certs-044534"
	I0914 18:13:28.512283   62554 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0914 18:13:28.513593   62554 addons.go:510] duration metric: took 1.663035459s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0914 18:13:29.103964   62554 pod_ready.go:103] pod "etcd-embed-certs-044534" in "kube-system" namespace has status "Ready":"False"
	I0914 18:13:26.501135   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:13:28.502181   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:13:28.605176   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:13:31.102817   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:13:31.596452   62554 pod_ready.go:103] pod "etcd-embed-certs-044534" in "kube-system" namespace has status "Ready":"False"
	I0914 18:13:33.596699   62554 pod_ready.go:103] pod "etcd-embed-certs-044534" in "kube-system" namespace has status "Ready":"False"
	I0914 18:13:31.001070   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:13:32.001946   63448 pod_ready.go:82] duration metric: took 4m0.00767403s for pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace to be "Ready" ...
	E0914 18:13:32.001975   63448 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0914 18:13:32.001987   63448 pod_ready.go:39] duration metric: took 4m5.051544016s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 18:13:32.002004   63448 api_server.go:52] waiting for apiserver process to appear ...
	I0914 18:13:32.002037   63448 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:13:32.002093   63448 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:13:32.053241   63448 cri.go:89] found id: "6c532e45713d01c37f899bd4c9308e6c7fceb95accae13da1549ffb195e44bc4"
	I0914 18:13:32.053276   63448 cri.go:89] found id: ""
	I0914 18:13:32.053287   63448 logs.go:276] 1 containers: [6c532e45713d01c37f899bd4c9308e6c7fceb95accae13da1549ffb195e44bc4]
	I0914 18:13:32.053349   63448 ssh_runner.go:195] Run: which crictl
	I0914 18:13:32.057854   63448 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:13:32.057921   63448 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:13:32.099294   63448 cri.go:89] found id: "7fb6567a7b9f36f21430774a159db944e9060eac3c8fdf9abc50b9c56a2b0377"
	I0914 18:13:32.099318   63448 cri.go:89] found id: ""
	I0914 18:13:32.099328   63448 logs.go:276] 1 containers: [7fb6567a7b9f36f21430774a159db944e9060eac3c8fdf9abc50b9c56a2b0377]
	I0914 18:13:32.099375   63448 ssh_runner.go:195] Run: which crictl
	I0914 18:13:32.103674   63448 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:13:32.103745   63448 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:13:32.144190   63448 cri.go:89] found id: "02a31bf75666cfcbe85dcb42259279071420c3b26de20d6d667bcd8ffef77d86"
	I0914 18:13:32.144219   63448 cri.go:89] found id: ""
	I0914 18:13:32.144228   63448 logs.go:276] 1 containers: [02a31bf75666cfcbe85dcb42259279071420c3b26de20d6d667bcd8ffef77d86]
	I0914 18:13:32.144275   63448 ssh_runner.go:195] Run: which crictl
	I0914 18:13:32.148382   63448 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:13:32.148443   63448 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:13:32.185779   63448 cri.go:89] found id: "a390e6c0153550b206976ab4e172cf3236aac7e9e5671c332d8c0f8c4e567e0b"
	I0914 18:13:32.185807   63448 cri.go:89] found id: ""
	I0914 18:13:32.185814   63448 logs.go:276] 1 containers: [a390e6c0153550b206976ab4e172cf3236aac7e9e5671c332d8c0f8c4e567e0b]
	I0914 18:13:32.185864   63448 ssh_runner.go:195] Run: which crictl
	I0914 18:13:32.189478   63448 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:13:32.189545   63448 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:13:32.224657   63448 cri.go:89] found id: "a5c3b65e96ba854d6ceb4a824ba5076d43cff0b7a1978617b69271d5b88cca4d"
	I0914 18:13:32.224681   63448 cri.go:89] found id: ""
	I0914 18:13:32.224690   63448 logs.go:276] 1 containers: [a5c3b65e96ba854d6ceb4a824ba5076d43cff0b7a1978617b69271d5b88cca4d]
	I0914 18:13:32.224745   63448 ssh_runner.go:195] Run: which crictl
	I0914 18:13:32.228421   63448 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:13:32.228494   63448 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:13:32.262491   63448 cri.go:89] found id: "09627c963da76d1a34f891fa3edf6c8c82f652f7308541d6f9083f5888f4bf94"
	I0914 18:13:32.262513   63448 cri.go:89] found id: ""
	I0914 18:13:32.262519   63448 logs.go:276] 1 containers: [09627c963da76d1a34f891fa3edf6c8c82f652f7308541d6f9083f5888f4bf94]
	I0914 18:13:32.262579   63448 ssh_runner.go:195] Run: which crictl
	I0914 18:13:32.266135   63448 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:13:32.266213   63448 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:13:32.300085   63448 cri.go:89] found id: ""
	I0914 18:13:32.300111   63448 logs.go:276] 0 containers: []
	W0914 18:13:32.300119   63448 logs.go:278] No container was found matching "kindnet"
	I0914 18:13:32.300124   63448 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0914 18:13:32.300181   63448 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0914 18:13:32.335359   63448 cri.go:89] found id: "be0aa9c1761410495159b478552996da9670b728a9efd2985ec5cad6759e8277"
	I0914 18:13:32.335379   63448 cri.go:89] found id: "b33f92ef722c8a6bde80bdbd3ff62a9b4b31f0bf548eec9aaadd4593a101017e"
	I0914 18:13:32.335387   63448 cri.go:89] found id: ""
	I0914 18:13:32.335393   63448 logs.go:276] 2 containers: [be0aa9c1761410495159b478552996da9670b728a9efd2985ec5cad6759e8277 b33f92ef722c8a6bde80bdbd3ff62a9b4b31f0bf548eec9aaadd4593a101017e]
	I0914 18:13:32.335451   63448 ssh_runner.go:195] Run: which crictl
	I0914 18:13:32.339404   63448 ssh_runner.go:195] Run: which crictl
	I0914 18:13:32.343173   63448 logs.go:123] Gathering logs for kube-proxy [a5c3b65e96ba854d6ceb4a824ba5076d43cff0b7a1978617b69271d5b88cca4d] ...
	I0914 18:13:32.343203   63448 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a5c3b65e96ba854d6ceb4a824ba5076d43cff0b7a1978617b69271d5b88cca4d"
	I0914 18:13:32.378987   63448 logs.go:123] Gathering logs for storage-provisioner [b33f92ef722c8a6bde80bdbd3ff62a9b4b31f0bf548eec9aaadd4593a101017e] ...
	I0914 18:13:32.379016   63448 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b33f92ef722c8a6bde80bdbd3ff62a9b4b31f0bf548eec9aaadd4593a101017e"
	I0914 18:13:32.418829   63448 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:13:32.418855   63448 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:13:32.941046   63448 logs.go:123] Gathering logs for kube-apiserver [6c532e45713d01c37f899bd4c9308e6c7fceb95accae13da1549ffb195e44bc4] ...
	I0914 18:13:32.941102   63448 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6c532e45713d01c37f899bd4c9308e6c7fceb95accae13da1549ffb195e44bc4"
	I0914 18:13:32.998148   63448 logs.go:123] Gathering logs for coredns [02a31bf75666cfcbe85dcb42259279071420c3b26de20d6d667bcd8ffef77d86] ...
	I0914 18:13:32.998209   63448 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 02a31bf75666cfcbe85dcb42259279071420c3b26de20d6d667bcd8ffef77d86"
	I0914 18:13:33.041208   63448 logs.go:123] Gathering logs for kube-scheduler [a390e6c0153550b206976ab4e172cf3236aac7e9e5671c332d8c0f8c4e567e0b] ...
	I0914 18:13:33.041241   63448 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a390e6c0153550b206976ab4e172cf3236aac7e9e5671c332d8c0f8c4e567e0b"
	I0914 18:13:33.080774   63448 logs.go:123] Gathering logs for etcd [7fb6567a7b9f36f21430774a159db944e9060eac3c8fdf9abc50b9c56a2b0377] ...
	I0914 18:13:33.080806   63448 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7fb6567a7b9f36f21430774a159db944e9060eac3c8fdf9abc50b9c56a2b0377"
	I0914 18:13:33.130519   63448 logs.go:123] Gathering logs for kube-controller-manager [09627c963da76d1a34f891fa3edf6c8c82f652f7308541d6f9083f5888f4bf94] ...
	I0914 18:13:33.130552   63448 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 09627c963da76d1a34f891fa3edf6c8c82f652f7308541d6f9083f5888f4bf94"
	I0914 18:13:33.182751   63448 logs.go:123] Gathering logs for storage-provisioner [be0aa9c1761410495159b478552996da9670b728a9efd2985ec5cad6759e8277] ...
	I0914 18:13:33.182788   63448 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 be0aa9c1761410495159b478552996da9670b728a9efd2985ec5cad6759e8277"
	I0914 18:13:33.222008   63448 logs.go:123] Gathering logs for container status ...
	I0914 18:13:33.222053   63448 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:13:33.263100   63448 logs.go:123] Gathering logs for kubelet ...
	I0914 18:13:33.263137   63448 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:13:33.330307   63448 logs.go:123] Gathering logs for dmesg ...
	I0914 18:13:33.330343   63448 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:13:33.344658   63448 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:13:33.344687   63448 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 18:13:35.597157   62554 pod_ready.go:93] pod "etcd-embed-certs-044534" in "kube-system" namespace has status "Ready":"True"
	I0914 18:13:35.597179   62554 pod_ready.go:82] duration metric: took 8.50672651s for pod "etcd-embed-certs-044534" in "kube-system" namespace to be "Ready" ...
	I0914 18:13:35.597189   62554 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-044534" in "kube-system" namespace to be "Ready" ...
	I0914 18:13:36.604147   62554 pod_ready.go:93] pod "kube-apiserver-embed-certs-044534" in "kube-system" namespace has status "Ready":"True"
	I0914 18:13:36.604179   62554 pod_ready.go:82] duration metric: took 1.006982094s for pod "kube-apiserver-embed-certs-044534" in "kube-system" namespace to be "Ready" ...
	I0914 18:13:36.604192   62554 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-044534" in "kube-system" namespace to be "Ready" ...
	I0914 18:13:36.610278   62554 pod_ready.go:93] pod "kube-controller-manager-embed-certs-044534" in "kube-system" namespace has status "Ready":"True"
	I0914 18:13:36.610302   62554 pod_ready.go:82] duration metric: took 6.101843ms for pod "kube-controller-manager-embed-certs-044534" in "kube-system" namespace to be "Ready" ...
	I0914 18:13:36.610315   62554 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-044534" in "kube-system" namespace to be "Ready" ...
	I0914 18:13:36.615527   62554 pod_ready.go:93] pod "kube-scheduler-embed-certs-044534" in "kube-system" namespace has status "Ready":"True"
	I0914 18:13:36.615549   62554 pod_ready.go:82] duration metric: took 5.226206ms for pod "kube-scheduler-embed-certs-044534" in "kube-system" namespace to be "Ready" ...
	I0914 18:13:36.615559   62554 pod_ready.go:39] duration metric: took 9.531222215s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 18:13:36.615587   62554 api_server.go:52] waiting for apiserver process to appear ...
	I0914 18:13:36.615642   62554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:13:36.630381   62554 api_server.go:72] duration metric: took 9.779851335s to wait for apiserver process to appear ...
	I0914 18:13:36.630414   62554 api_server.go:88] waiting for apiserver healthz status ...
	I0914 18:13:36.630438   62554 api_server.go:253] Checking apiserver healthz at https://192.168.50.126:8443/healthz ...
	I0914 18:13:36.637559   62554 api_server.go:279] https://192.168.50.126:8443/healthz returned 200:
	ok
	I0914 18:13:36.639973   62554 api_server.go:141] control plane version: v1.31.1
	I0914 18:13:36.639999   62554 api_server.go:131] duration metric: took 9.577574ms to wait for apiserver health ...
	I0914 18:13:36.640006   62554 system_pods.go:43] waiting for kube-system pods to appear ...
	I0914 18:13:36.647412   62554 system_pods.go:59] 9 kube-system pods found
	I0914 18:13:36.647443   62554 system_pods.go:61] "coredns-7c65d6cfc9-67dsl" [480fe6ea-838d-4048-9893-947d43e7b5c9] Running
	I0914 18:13:36.647448   62554 system_pods.go:61] "coredns-7c65d6cfc9-9j6sv" [87c28a4b-015e-46b8-a462-9dc6ed06d914] Running
	I0914 18:13:36.647452   62554 system_pods.go:61] "etcd-embed-certs-044534" [a9533b38-298e-4435-aaf2-262bdf629832] Running
	I0914 18:13:36.647456   62554 system_pods.go:61] "kube-apiserver-embed-certs-044534" [28ee0ec6-8ede-447a-b4eb-1db9eaeb76fb] Running
	I0914 18:13:36.647459   62554 system_pods.go:61] "kube-controller-manager-embed-certs-044534" [0fd6fe49-8994-48de-8e57-a2420f12b47d] Running
	I0914 18:13:36.647463   62554 system_pods.go:61] "kube-proxy-26fx6" [1cb48201-6caf-4787-9e27-a55885a8ae2a] Running
	I0914 18:13:36.647465   62554 system_pods.go:61] "kube-scheduler-embed-certs-044534" [6c1535f8-267b-401c-a93e-0ac057c75047] Running
	I0914 18:13:36.647471   62554 system_pods.go:61] "metrics-server-6867b74b74-rrfnt" [a1deacaa-9b90-49ac-8b8b-0bd909b5f6e0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 18:13:36.647475   62554 system_pods.go:61] "storage-provisioner" [dec7a14c-b6f7-464f-86b3-5f7d8063d8e0] Running
	I0914 18:13:36.647483   62554 system_pods.go:74] duration metric: took 7.47115ms to wait for pod list to return data ...
	I0914 18:13:36.647490   62554 default_sa.go:34] waiting for default service account to be created ...
	I0914 18:13:36.650678   62554 default_sa.go:45] found service account: "default"
	I0914 18:13:36.650722   62554 default_sa.go:55] duration metric: took 3.225438ms for default service account to be created ...
	I0914 18:13:36.650733   62554 system_pods.go:116] waiting for k8s-apps to be running ...
	I0914 18:13:36.656461   62554 system_pods.go:86] 9 kube-system pods found
	I0914 18:13:36.656489   62554 system_pods.go:89] "coredns-7c65d6cfc9-67dsl" [480fe6ea-838d-4048-9893-947d43e7b5c9] Running
	I0914 18:13:36.656495   62554 system_pods.go:89] "coredns-7c65d6cfc9-9j6sv" [87c28a4b-015e-46b8-a462-9dc6ed06d914] Running
	I0914 18:13:36.656499   62554 system_pods.go:89] "etcd-embed-certs-044534" [a9533b38-298e-4435-aaf2-262bdf629832] Running
	I0914 18:13:36.656503   62554 system_pods.go:89] "kube-apiserver-embed-certs-044534" [28ee0ec6-8ede-447a-b4eb-1db9eaeb76fb] Running
	I0914 18:13:36.656507   62554 system_pods.go:89] "kube-controller-manager-embed-certs-044534" [0fd6fe49-8994-48de-8e57-a2420f12b47d] Running
	I0914 18:13:36.656512   62554 system_pods.go:89] "kube-proxy-26fx6" [1cb48201-6caf-4787-9e27-a55885a8ae2a] Running
	I0914 18:13:36.656516   62554 system_pods.go:89] "kube-scheduler-embed-certs-044534" [6c1535f8-267b-401c-a93e-0ac057c75047] Running
	I0914 18:13:36.656522   62554 system_pods.go:89] "metrics-server-6867b74b74-rrfnt" [a1deacaa-9b90-49ac-8b8b-0bd909b5f6e0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 18:13:36.656525   62554 system_pods.go:89] "storage-provisioner" [dec7a14c-b6f7-464f-86b3-5f7d8063d8e0] Running
	I0914 18:13:36.656534   62554 system_pods.go:126] duration metric: took 5.795433ms to wait for k8s-apps to be running ...
	I0914 18:13:36.656541   62554 system_svc.go:44] waiting for kubelet service to be running ....
	I0914 18:13:36.656586   62554 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 18:13:36.673166   62554 system_svc.go:56] duration metric: took 16.609444ms WaitForService to wait for kubelet
	I0914 18:13:36.673205   62554 kubeadm.go:582] duration metric: took 9.822681909s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0914 18:13:36.673227   62554 node_conditions.go:102] verifying NodePressure condition ...
	I0914 18:13:36.794984   62554 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0914 18:13:36.795013   62554 node_conditions.go:123] node cpu capacity is 2
	I0914 18:13:36.795024   62554 node_conditions.go:105] duration metric: took 121.79122ms to run NodePressure ...
	I0914 18:13:36.795038   62554 start.go:241] waiting for startup goroutines ...
	I0914 18:13:36.795047   62554 start.go:246] waiting for cluster config update ...
	I0914 18:13:36.795060   62554 start.go:255] writing updated cluster config ...
	I0914 18:13:36.795406   62554 ssh_runner.go:195] Run: rm -f paused
	I0914 18:13:36.847454   62554 start.go:600] kubectl: 1.31.0, cluster: 1.31.1 (minor skew: 0)
	I0914 18:13:36.849605   62554 out.go:177] * Done! kubectl is now configured to use "embed-certs-044534" cluster and "default" namespace by default
	I0914 18:13:33.105197   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:13:35.604458   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:13:35.989800   63448 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:13:36.006371   63448 api_server.go:72] duration metric: took 4m14.310539233s to wait for apiserver process to appear ...
	I0914 18:13:36.006405   63448 api_server.go:88] waiting for apiserver healthz status ...
	I0914 18:13:36.006446   63448 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:13:36.006508   63448 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:13:36.044973   63448 cri.go:89] found id: "6c532e45713d01c37f899bd4c9308e6c7fceb95accae13da1549ffb195e44bc4"
	I0914 18:13:36.044992   63448 cri.go:89] found id: ""
	I0914 18:13:36.045000   63448 logs.go:276] 1 containers: [6c532e45713d01c37f899bd4c9308e6c7fceb95accae13da1549ffb195e44bc4]
	I0914 18:13:36.045055   63448 ssh_runner.go:195] Run: which crictl
	I0914 18:13:36.049371   63448 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:13:36.049449   63448 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:13:36.097114   63448 cri.go:89] found id: "7fb6567a7b9f36f21430774a159db944e9060eac3c8fdf9abc50b9c56a2b0377"
	I0914 18:13:36.097139   63448 cri.go:89] found id: ""
	I0914 18:13:36.097148   63448 logs.go:276] 1 containers: [7fb6567a7b9f36f21430774a159db944e9060eac3c8fdf9abc50b9c56a2b0377]
	I0914 18:13:36.097212   63448 ssh_runner.go:195] Run: which crictl
	I0914 18:13:36.102084   63448 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:13:36.102153   63448 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:13:36.140640   63448 cri.go:89] found id: "02a31bf75666cfcbe85dcb42259279071420c3b26de20d6d667bcd8ffef77d86"
	I0914 18:13:36.140662   63448 cri.go:89] found id: ""
	I0914 18:13:36.140671   63448 logs.go:276] 1 containers: [02a31bf75666cfcbe85dcb42259279071420c3b26de20d6d667bcd8ffef77d86]
	I0914 18:13:36.140728   63448 ssh_runner.go:195] Run: which crictl
	I0914 18:13:36.144624   63448 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:13:36.144696   63448 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:13:36.179135   63448 cri.go:89] found id: "a390e6c0153550b206976ab4e172cf3236aac7e9e5671c332d8c0f8c4e567e0b"
	I0914 18:13:36.179156   63448 cri.go:89] found id: ""
	I0914 18:13:36.179163   63448 logs.go:276] 1 containers: [a390e6c0153550b206976ab4e172cf3236aac7e9e5671c332d8c0f8c4e567e0b]
	I0914 18:13:36.179216   63448 ssh_runner.go:195] Run: which crictl
	I0914 18:13:36.183050   63448 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:13:36.183110   63448 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:13:36.222739   63448 cri.go:89] found id: "a5c3b65e96ba854d6ceb4a824ba5076d43cff0b7a1978617b69271d5b88cca4d"
	I0914 18:13:36.222758   63448 cri.go:89] found id: ""
	I0914 18:13:36.222765   63448 logs.go:276] 1 containers: [a5c3b65e96ba854d6ceb4a824ba5076d43cff0b7a1978617b69271d5b88cca4d]
	I0914 18:13:36.222812   63448 ssh_runner.go:195] Run: which crictl
	I0914 18:13:36.226715   63448 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:13:36.226782   63448 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:13:36.261587   63448 cri.go:89] found id: "09627c963da76d1a34f891fa3edf6c8c82f652f7308541d6f9083f5888f4bf94"
	I0914 18:13:36.261610   63448 cri.go:89] found id: ""
	I0914 18:13:36.261617   63448 logs.go:276] 1 containers: [09627c963da76d1a34f891fa3edf6c8c82f652f7308541d6f9083f5888f4bf94]
	I0914 18:13:36.261664   63448 ssh_runner.go:195] Run: which crictl
	I0914 18:13:36.265541   63448 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:13:36.265614   63448 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:13:36.301521   63448 cri.go:89] found id: ""
	I0914 18:13:36.301546   63448 logs.go:276] 0 containers: []
	W0914 18:13:36.301554   63448 logs.go:278] No container was found matching "kindnet"
	I0914 18:13:36.301560   63448 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0914 18:13:36.301622   63448 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0914 18:13:36.335332   63448 cri.go:89] found id: "be0aa9c1761410495159b478552996da9670b728a9efd2985ec5cad6759e8277"
	I0914 18:13:36.335355   63448 cri.go:89] found id: "b33f92ef722c8a6bde80bdbd3ff62a9b4b31f0bf548eec9aaadd4593a101017e"
	I0914 18:13:36.335358   63448 cri.go:89] found id: ""
	I0914 18:13:36.335365   63448 logs.go:276] 2 containers: [be0aa9c1761410495159b478552996da9670b728a9efd2985ec5cad6759e8277 b33f92ef722c8a6bde80bdbd3ff62a9b4b31f0bf548eec9aaadd4593a101017e]
	I0914 18:13:36.335415   63448 ssh_runner.go:195] Run: which crictl
	I0914 18:13:36.339542   63448 ssh_runner.go:195] Run: which crictl
	I0914 18:13:36.343543   63448 logs.go:123] Gathering logs for etcd [7fb6567a7b9f36f21430774a159db944e9060eac3c8fdf9abc50b9c56a2b0377] ...
	I0914 18:13:36.343570   63448 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7fb6567a7b9f36f21430774a159db944e9060eac3c8fdf9abc50b9c56a2b0377"
	I0914 18:13:36.384224   63448 logs.go:123] Gathering logs for coredns [02a31bf75666cfcbe85dcb42259279071420c3b26de20d6d667bcd8ffef77d86] ...
	I0914 18:13:36.384259   63448 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 02a31bf75666cfcbe85dcb42259279071420c3b26de20d6d667bcd8ffef77d86"
	I0914 18:13:36.428010   63448 logs.go:123] Gathering logs for kube-scheduler [a390e6c0153550b206976ab4e172cf3236aac7e9e5671c332d8c0f8c4e567e0b] ...
	I0914 18:13:36.428041   63448 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a390e6c0153550b206976ab4e172cf3236aac7e9e5671c332d8c0f8c4e567e0b"
	I0914 18:13:36.469679   63448 logs.go:123] Gathering logs for storage-provisioner [be0aa9c1761410495159b478552996da9670b728a9efd2985ec5cad6759e8277] ...
	I0914 18:13:36.469708   63448 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 be0aa9c1761410495159b478552996da9670b728a9efd2985ec5cad6759e8277"
	I0914 18:13:36.507570   63448 logs.go:123] Gathering logs for storage-provisioner [b33f92ef722c8a6bde80bdbd3ff62a9b4b31f0bf548eec9aaadd4593a101017e] ...
	I0914 18:13:36.507597   63448 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b33f92ef722c8a6bde80bdbd3ff62a9b4b31f0bf548eec9aaadd4593a101017e"
	I0914 18:13:36.543300   63448 logs.go:123] Gathering logs for kubelet ...
	I0914 18:13:36.543335   63448 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:13:36.619060   63448 logs.go:123] Gathering logs for dmesg ...
	I0914 18:13:36.619084   63448 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:13:36.633542   63448 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:13:36.633572   63448 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 18:13:36.741334   63448 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:13:36.741370   63448 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:13:37.231208   63448 logs.go:123] Gathering logs for container status ...
	I0914 18:13:37.231255   63448 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:13:37.278835   63448 logs.go:123] Gathering logs for kube-apiserver [6c532e45713d01c37f899bd4c9308e6c7fceb95accae13da1549ffb195e44bc4] ...
	I0914 18:13:37.278863   63448 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6c532e45713d01c37f899bd4c9308e6c7fceb95accae13da1549ffb195e44bc4"
	I0914 18:13:37.320359   63448 logs.go:123] Gathering logs for kube-proxy [a5c3b65e96ba854d6ceb4a824ba5076d43cff0b7a1978617b69271d5b88cca4d] ...
	I0914 18:13:37.320399   63448 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a5c3b65e96ba854d6ceb4a824ba5076d43cff0b7a1978617b69271d5b88cca4d"
	I0914 18:13:37.357940   63448 logs.go:123] Gathering logs for kube-controller-manager [09627c963da76d1a34f891fa3edf6c8c82f652f7308541d6f9083f5888f4bf94] ...
	I0914 18:13:37.357974   63448 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 09627c963da76d1a34f891fa3edf6c8c82f652f7308541d6f9083f5888f4bf94"
	I0914 18:13:39.913586   63448 api_server.go:253] Checking apiserver healthz at https://192.168.61.38:8444/healthz ...
	I0914 18:13:39.917590   63448 api_server.go:279] https://192.168.61.38:8444/healthz returned 200:
	ok
	I0914 18:13:39.918633   63448 api_server.go:141] control plane version: v1.31.1
	I0914 18:13:39.918653   63448 api_server.go:131] duration metric: took 3.912241678s to wait for apiserver health ...
	I0914 18:13:39.918660   63448 system_pods.go:43] waiting for kube-system pods to appear ...
	I0914 18:13:39.918682   63448 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:13:39.918727   63448 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:13:39.961919   63448 cri.go:89] found id: "6c532e45713d01c37f899bd4c9308e6c7fceb95accae13da1549ffb195e44bc4"
	I0914 18:13:39.961947   63448 cri.go:89] found id: ""
	I0914 18:13:39.961956   63448 logs.go:276] 1 containers: [6c532e45713d01c37f899bd4c9308e6c7fceb95accae13da1549ffb195e44bc4]
	I0914 18:13:39.962012   63448 ssh_runner.go:195] Run: which crictl
	I0914 18:13:39.965756   63448 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:13:39.965838   63448 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:13:40.008044   63448 cri.go:89] found id: "7fb6567a7b9f36f21430774a159db944e9060eac3c8fdf9abc50b9c56a2b0377"
	I0914 18:13:40.008066   63448 cri.go:89] found id: ""
	I0914 18:13:40.008074   63448 logs.go:276] 1 containers: [7fb6567a7b9f36f21430774a159db944e9060eac3c8fdf9abc50b9c56a2b0377]
	I0914 18:13:40.008117   63448 ssh_runner.go:195] Run: which crictl
	I0914 18:13:40.012505   63448 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:13:40.012569   63448 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:13:40.059166   63448 cri.go:89] found id: "02a31bf75666cfcbe85dcb42259279071420c3b26de20d6d667bcd8ffef77d86"
	I0914 18:13:40.059194   63448 cri.go:89] found id: ""
	I0914 18:13:40.059204   63448 logs.go:276] 1 containers: [02a31bf75666cfcbe85dcb42259279071420c3b26de20d6d667bcd8ffef77d86]
	I0914 18:13:40.059267   63448 ssh_runner.go:195] Run: which crictl
	I0914 18:13:40.063135   63448 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:13:40.063197   63448 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:13:40.105220   63448 cri.go:89] found id: "a390e6c0153550b206976ab4e172cf3236aac7e9e5671c332d8c0f8c4e567e0b"
	I0914 18:13:40.105245   63448 cri.go:89] found id: ""
	I0914 18:13:40.105255   63448 logs.go:276] 1 containers: [a390e6c0153550b206976ab4e172cf3236aac7e9e5671c332d8c0f8c4e567e0b]
	I0914 18:13:40.105308   63448 ssh_runner.go:195] Run: which crictl
	I0914 18:13:40.109907   63448 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:13:40.109978   63448 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:13:40.146307   63448 cri.go:89] found id: "a5c3b65e96ba854d6ceb4a824ba5076d43cff0b7a1978617b69271d5b88cca4d"
	I0914 18:13:40.146337   63448 cri.go:89] found id: ""
	I0914 18:13:40.146349   63448 logs.go:276] 1 containers: [a5c3b65e96ba854d6ceb4a824ba5076d43cff0b7a1978617b69271d5b88cca4d]
	I0914 18:13:40.146396   63448 ssh_runner.go:195] Run: which crictl
	I0914 18:13:40.150369   63448 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:13:40.150436   63448 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:13:40.185274   63448 cri.go:89] found id: "09627c963da76d1a34f891fa3edf6c8c82f652f7308541d6f9083f5888f4bf94"
	I0914 18:13:40.185301   63448 cri.go:89] found id: ""
	I0914 18:13:40.185312   63448 logs.go:276] 1 containers: [09627c963da76d1a34f891fa3edf6c8c82f652f7308541d6f9083f5888f4bf94]
	I0914 18:13:40.185374   63448 ssh_runner.go:195] Run: which crictl
	I0914 18:13:40.189425   63448 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:13:40.189499   63448 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:13:40.223289   63448 cri.go:89] found id: ""
	I0914 18:13:40.223311   63448 logs.go:276] 0 containers: []
	W0914 18:13:40.223319   63448 logs.go:278] No container was found matching "kindnet"
	I0914 18:13:40.223324   63448 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0914 18:13:40.223369   63448 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0914 18:13:40.257779   63448 cri.go:89] found id: "be0aa9c1761410495159b478552996da9670b728a9efd2985ec5cad6759e8277"
	I0914 18:13:40.257805   63448 cri.go:89] found id: "b33f92ef722c8a6bde80bdbd3ff62a9b4b31f0bf548eec9aaadd4593a101017e"
	I0914 18:13:40.257811   63448 cri.go:89] found id: ""
	I0914 18:13:40.257820   63448 logs.go:276] 2 containers: [be0aa9c1761410495159b478552996da9670b728a9efd2985ec5cad6759e8277 b33f92ef722c8a6bde80bdbd3ff62a9b4b31f0bf548eec9aaadd4593a101017e]
	I0914 18:13:40.257880   63448 ssh_runner.go:195] Run: which crictl
	I0914 18:13:40.262388   63448 ssh_runner.go:195] Run: which crictl
	I0914 18:13:40.266233   63448 logs.go:123] Gathering logs for kube-apiserver [6c532e45713d01c37f899bd4c9308e6c7fceb95accae13da1549ffb195e44bc4] ...
	I0914 18:13:40.266258   63448 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6c532e45713d01c37f899bd4c9308e6c7fceb95accae13da1549ffb195e44bc4"
	I0914 18:13:38.505090   62996 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0914 18:13:38.505605   62996 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0914 18:13:38.505837   62996 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0914 18:13:38.105234   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:13:40.604049   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:13:40.310145   63448 logs.go:123] Gathering logs for etcd [7fb6567a7b9f36f21430774a159db944e9060eac3c8fdf9abc50b9c56a2b0377] ...
	I0914 18:13:40.310188   63448 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7fb6567a7b9f36f21430774a159db944e9060eac3c8fdf9abc50b9c56a2b0377"
	I0914 18:13:40.358651   63448 logs.go:123] Gathering logs for coredns [02a31bf75666cfcbe85dcb42259279071420c3b26de20d6d667bcd8ffef77d86] ...
	I0914 18:13:40.358686   63448 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 02a31bf75666cfcbe85dcb42259279071420c3b26de20d6d667bcd8ffef77d86"
	I0914 18:13:40.398107   63448 logs.go:123] Gathering logs for kube-scheduler [a390e6c0153550b206976ab4e172cf3236aac7e9e5671c332d8c0f8c4e567e0b] ...
	I0914 18:13:40.398144   63448 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a390e6c0153550b206976ab4e172cf3236aac7e9e5671c332d8c0f8c4e567e0b"
	I0914 18:13:40.450540   63448 logs.go:123] Gathering logs for dmesg ...
	I0914 18:13:40.450573   63448 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:13:40.465987   63448 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:13:40.466013   63448 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 18:13:40.573299   63448 logs.go:123] Gathering logs for kube-proxy [a5c3b65e96ba854d6ceb4a824ba5076d43cff0b7a1978617b69271d5b88cca4d] ...
	I0914 18:13:40.573333   63448 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a5c3b65e96ba854d6ceb4a824ba5076d43cff0b7a1978617b69271d5b88cca4d"
	I0914 18:13:40.618201   63448 logs.go:123] Gathering logs for kube-controller-manager [09627c963da76d1a34f891fa3edf6c8c82f652f7308541d6f9083f5888f4bf94] ...
	I0914 18:13:40.618247   63448 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 09627c963da76d1a34f891fa3edf6c8c82f652f7308541d6f9083f5888f4bf94"
	I0914 18:13:40.671259   63448 logs.go:123] Gathering logs for storage-provisioner [be0aa9c1761410495159b478552996da9670b728a9efd2985ec5cad6759e8277] ...
	I0914 18:13:40.671304   63448 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 be0aa9c1761410495159b478552996da9670b728a9efd2985ec5cad6759e8277"
	I0914 18:13:40.708455   63448 logs.go:123] Gathering logs for storage-provisioner [b33f92ef722c8a6bde80bdbd3ff62a9b4b31f0bf548eec9aaadd4593a101017e] ...
	I0914 18:13:40.708488   63448 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b33f92ef722c8a6bde80bdbd3ff62a9b4b31f0bf548eec9aaadd4593a101017e"
	I0914 18:13:40.746662   63448 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:13:40.746696   63448 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:13:41.108968   63448 logs.go:123] Gathering logs for container status ...
	I0914 18:13:41.109017   63448 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:13:41.150925   63448 logs.go:123] Gathering logs for kubelet ...
	I0914 18:13:41.150968   63448 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:13:43.725606   63448 system_pods.go:59] 8 kube-system pods found
	I0914 18:13:43.725642   63448 system_pods.go:61] "coredns-7c65d6cfc9-8v8s7" [896b4fde-d17e-43a3-b7c8-b710e2e70e2c] Running
	I0914 18:13:43.725650   63448 system_pods.go:61] "etcd-default-k8s-diff-port-243449" [9201493d-45db-44f4-948d-34e1d1ddee8f] Running
	I0914 18:13:43.725656   63448 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-243449" [052b85bf-0f3b-4ace-9301-99e53f91cfcf] Running
	I0914 18:13:43.725661   63448 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-243449" [51214b37-f5e8-4037-83ff-1fd09b93e008] Running
	I0914 18:13:43.725665   63448 system_pods.go:61] "kube-proxy-gbkqm" [4308aacf-ea0a-4bba-8598-85ffaf959b7e] Running
	I0914 18:13:43.725670   63448 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-243449" [b6eacf30-47ec-4f8d-968c-195661a2a732] Running
	I0914 18:13:43.725680   63448 system_pods.go:61] "metrics-server-6867b74b74-7v8dr" [90be95af-c779-4b31-b261-2c4020a34280] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 18:13:43.725687   63448 system_pods.go:61] "storage-provisioner" [2e814601-a19a-4848-bed5-d9a29ffb3b5d] Running
	I0914 18:13:43.725699   63448 system_pods.go:74] duration metric: took 3.807031642s to wait for pod list to return data ...
	I0914 18:13:43.725710   63448 default_sa.go:34] waiting for default service account to be created ...
	I0914 18:13:43.728384   63448 default_sa.go:45] found service account: "default"
	I0914 18:13:43.728409   63448 default_sa.go:55] duration metric: took 2.691817ms for default service account to be created ...
	I0914 18:13:43.728417   63448 system_pods.go:116] waiting for k8s-apps to be running ...
	I0914 18:13:43.732884   63448 system_pods.go:86] 8 kube-system pods found
	I0914 18:13:43.732913   63448 system_pods.go:89] "coredns-7c65d6cfc9-8v8s7" [896b4fde-d17e-43a3-b7c8-b710e2e70e2c] Running
	I0914 18:13:43.732918   63448 system_pods.go:89] "etcd-default-k8s-diff-port-243449" [9201493d-45db-44f4-948d-34e1d1ddee8f] Running
	I0914 18:13:43.732922   63448 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-243449" [052b85bf-0f3b-4ace-9301-99e53f91cfcf] Running
	I0914 18:13:43.732926   63448 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-243449" [51214b37-f5e8-4037-83ff-1fd09b93e008] Running
	I0914 18:13:43.732931   63448 system_pods.go:89] "kube-proxy-gbkqm" [4308aacf-ea0a-4bba-8598-85ffaf959b7e] Running
	I0914 18:13:43.732935   63448 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-243449" [b6eacf30-47ec-4f8d-968c-195661a2a732] Running
	I0914 18:13:43.732942   63448 system_pods.go:89] "metrics-server-6867b74b74-7v8dr" [90be95af-c779-4b31-b261-2c4020a34280] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 18:13:43.732947   63448 system_pods.go:89] "storage-provisioner" [2e814601-a19a-4848-bed5-d9a29ffb3b5d] Running
	I0914 18:13:43.732954   63448 system_pods.go:126] duration metric: took 4.531761ms to wait for k8s-apps to be running ...
	I0914 18:13:43.732960   63448 system_svc.go:44] waiting for kubelet service to be running ....
	I0914 18:13:43.733001   63448 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 18:13:43.749535   63448 system_svc.go:56] duration metric: took 16.566498ms WaitForService to wait for kubelet
	I0914 18:13:43.749567   63448 kubeadm.go:582] duration metric: took 4m22.053742257s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0914 18:13:43.749587   63448 node_conditions.go:102] verifying NodePressure condition ...
	I0914 18:13:43.752493   63448 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0914 18:13:43.752514   63448 node_conditions.go:123] node cpu capacity is 2
	I0914 18:13:43.752523   63448 node_conditions.go:105] duration metric: took 2.931821ms to run NodePressure ...
	I0914 18:13:43.752534   63448 start.go:241] waiting for startup goroutines ...
	I0914 18:13:43.752548   63448 start.go:246] waiting for cluster config update ...
	I0914 18:13:43.752560   63448 start.go:255] writing updated cluster config ...
	I0914 18:13:43.752815   63448 ssh_runner.go:195] Run: rm -f paused
	I0914 18:13:43.803181   63448 start.go:600] kubectl: 1.31.0, cluster: 1.31.1 (minor skew: 0)
	I0914 18:13:43.805150   63448 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-243449" cluster and "default" namespace by default
	I0914 18:13:43.506241   62996 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0914 18:13:43.506502   62996 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0914 18:13:43.103780   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:13:45.603666   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:13:47.603988   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:13:50.104811   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:13:53.506772   62996 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0914 18:13:53.506959   62996 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0914 18:13:52.604411   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:13:55.103339   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:13:57.103716   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:13:59.603423   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:14:00.097180   62207 pod_ready.go:82] duration metric: took 4m0.000345486s for pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace to be "Ready" ...
	E0914 18:14:00.097209   62207 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace to be "Ready" (will not retry!)
	I0914 18:14:00.097230   62207 pod_ready.go:39] duration metric: took 4m11.039838973s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 18:14:00.097260   62207 kubeadm.go:597] duration metric: took 4m18.345876583s to restartPrimaryControlPlane
	W0914 18:14:00.097328   62207 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0914 18:14:00.097360   62207 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0914 18:14:13.507627   62996 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0914 18:14:13.507840   62996 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0914 18:14:26.392001   62207 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.294613232s)
	I0914 18:14:26.392082   62207 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 18:14:26.410558   62207 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0914 18:14:26.421178   62207 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0914 18:14:26.430786   62207 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0914 18:14:26.430808   62207 kubeadm.go:157] found existing configuration files:
	
	I0914 18:14:26.430858   62207 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0914 18:14:26.440193   62207 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0914 18:14:26.440253   62207 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0914 18:14:26.449848   62207 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0914 18:14:26.459589   62207 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0914 18:14:26.459651   62207 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0914 18:14:26.469556   62207 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0914 18:14:26.478722   62207 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0914 18:14:26.478782   62207 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0914 18:14:26.488694   62207 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0914 18:14:26.498478   62207 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0914 18:14:26.498542   62207 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0914 18:14:26.509455   62207 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0914 18:14:26.552295   62207 kubeadm.go:310] W0914 18:14:26.530603    2977 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0914 18:14:26.552908   62207 kubeadm.go:310] W0914 18:14:26.531307    2977 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0914 18:14:26.665962   62207 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0914 18:14:35.406074   62207 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0914 18:14:35.406150   62207 kubeadm.go:310] [preflight] Running pre-flight checks
	I0914 18:14:35.406251   62207 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0914 18:14:35.406372   62207 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0914 18:14:35.406503   62207 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0914 18:14:35.406611   62207 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0914 18:14:35.408167   62207 out.go:235]   - Generating certificates and keys ...
	I0914 18:14:35.408257   62207 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0914 18:14:35.408337   62207 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0914 18:14:35.408451   62207 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0914 18:14:35.408550   62207 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0914 18:14:35.408655   62207 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0914 18:14:35.408733   62207 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0914 18:14:35.408823   62207 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0914 18:14:35.408916   62207 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0914 18:14:35.409022   62207 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0914 18:14:35.409133   62207 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0914 18:14:35.409176   62207 kubeadm.go:310] [certs] Using the existing "sa" key
	I0914 18:14:35.409225   62207 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0914 18:14:35.409269   62207 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0914 18:14:35.409328   62207 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0914 18:14:35.409374   62207 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0914 18:14:35.409440   62207 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0914 18:14:35.409507   62207 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0914 18:14:35.409633   62207 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0914 18:14:35.409734   62207 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0914 18:14:35.411984   62207 out.go:235]   - Booting up control plane ...
	I0914 18:14:35.412099   62207 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0914 18:14:35.412212   62207 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0914 18:14:35.412276   62207 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0914 18:14:35.412371   62207 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0914 18:14:35.412444   62207 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0914 18:14:35.412479   62207 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0914 18:14:35.412597   62207 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0914 18:14:35.412686   62207 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0914 18:14:35.412737   62207 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.002422188s
	I0914 18:14:35.412801   62207 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0914 18:14:35.412863   62207 kubeadm.go:310] [api-check] The API server is healthy after 5.002046359s
	I0914 18:14:35.412986   62207 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0914 18:14:35.413129   62207 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0914 18:14:35.413208   62207 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0914 18:14:35.413427   62207 kubeadm.go:310] [mark-control-plane] Marking the node no-preload-168587 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0914 18:14:35.413510   62207 kubeadm.go:310] [bootstrap-token] Using token: 2jk8ol.l80z6l7tm2nt4pl7
	I0914 18:14:35.414838   62207 out.go:235]   - Configuring RBAC rules ...
	I0914 18:14:35.414968   62207 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0914 18:14:35.415069   62207 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0914 18:14:35.415291   62207 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0914 18:14:35.415482   62207 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0914 18:14:35.415615   62207 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0914 18:14:35.415725   62207 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0914 18:14:35.415867   62207 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0914 18:14:35.415930   62207 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0914 18:14:35.415990   62207 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0914 18:14:35.415999   62207 kubeadm.go:310] 
	I0914 18:14:35.416077   62207 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0914 18:14:35.416086   62207 kubeadm.go:310] 
	I0914 18:14:35.416187   62207 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0914 18:14:35.416198   62207 kubeadm.go:310] 
	I0914 18:14:35.416232   62207 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0914 18:14:35.416314   62207 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0914 18:14:35.416388   62207 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0914 18:14:35.416397   62207 kubeadm.go:310] 
	I0914 18:14:35.416474   62207 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0914 18:14:35.416484   62207 kubeadm.go:310] 
	I0914 18:14:35.416525   62207 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0914 18:14:35.416529   62207 kubeadm.go:310] 
	I0914 18:14:35.416597   62207 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0914 18:14:35.416701   62207 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0914 18:14:35.416781   62207 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0914 18:14:35.416796   62207 kubeadm.go:310] 
	I0914 18:14:35.416899   62207 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0914 18:14:35.416998   62207 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0914 18:14:35.417007   62207 kubeadm.go:310] 
	I0914 18:14:35.417125   62207 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 2jk8ol.l80z6l7tm2nt4pl7 \
	I0914 18:14:35.417247   62207 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:30a8b6e98108730ec92a246ad3f4e746211e79f910581e46cd69c4cd78384600 \
	I0914 18:14:35.417272   62207 kubeadm.go:310] 	--control-plane 
	I0914 18:14:35.417276   62207 kubeadm.go:310] 
	I0914 18:14:35.417399   62207 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0914 18:14:35.417422   62207 kubeadm.go:310] 
	I0914 18:14:35.417530   62207 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 2jk8ol.l80z6l7tm2nt4pl7 \
	I0914 18:14:35.417686   62207 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:30a8b6e98108730ec92a246ad3f4e746211e79f910581e46cd69c4cd78384600 
	I0914 18:14:35.417705   62207 cni.go:84] Creating CNI manager for ""
	I0914 18:14:35.417713   62207 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0914 18:14:35.420023   62207 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0914 18:14:35.421095   62207 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0914 18:14:35.432619   62207 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
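	For orientation, the 496-byte payload scp'd to /etc/cni/net.d/1-k8s.conflist above is the bridge CNI configuration produced by the "Configuring bridge CNI" step. The sketch below is only an approximation of a bridge conflist of that kind, written as a small Go program; the plugin fields and the 10.244.0.0/16 subnet are illustrative assumptions, not the exact contents of the file minikube wrote on this node.

	// Illustrative sketch only: approximates the kind of bridge CNI config the
	// "Configuring bridge CNI" step writes to /etc/cni/net.d/1-k8s.conflist.
	// Field values and the 10.244.0.0/16 subnet are assumptions, not a copy of
	// the 496-byte file transferred in the log above. Writing to /etc/cni
	// requires root.
	package main

	import (
		"fmt"
		"os"
	)

	const bridgeConflist = `{
	  "cniVersion": "0.3.1",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "isDefaultGateway": true,
	      "ipMasq": true,
	      "hairpinMode": true,
	      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	    },
	    { "type": "portmap", "capabilities": { "portMappings": true } }
	  ]
	}`

	func main() {
		if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(bridgeConflist), 0o644); err != nil {
			fmt.Fprintln(os.Stderr, "write conflist:", err)
			os.Exit(1)
		}
	}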
	I0914 18:14:35.451720   62207 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0914 18:14:35.451790   62207 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 18:14:35.451836   62207 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-168587 minikube.k8s.io/updated_at=2024_09_14T18_14_35_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=fbeeb744274463b05401c917e5ab21bbaf5ef95a minikube.k8s.io/name=no-preload-168587 minikube.k8s.io/primary=true
	I0914 18:14:35.654681   62207 ops.go:34] apiserver oom_adj: -16
	I0914 18:14:35.654714   62207 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 18:14:36.155376   62207 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 18:14:36.655468   62207 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 18:14:37.155741   62207 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 18:14:37.655416   62207 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 18:14:38.154935   62207 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 18:14:38.655465   62207 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 18:14:38.740860   62207 kubeadm.go:1113] duration metric: took 3.289121705s to wait for elevateKubeSystemPrivileges
	I0914 18:14:38.740912   62207 kubeadm.go:394] duration metric: took 4m57.036377829s to StartCluster
	I0914 18:14:38.740939   62207 settings.go:142] acquiring lock: {Name:mkee31cd57c3a91eff581c48ba7961460fb3e0b1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 18:14:38.741029   62207 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19643-8806/kubeconfig
	I0914 18:14:38.742754   62207 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19643-8806/kubeconfig: {Name:mk850f3e2f3c2ef1e81c795ae72e22e459e27140 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 18:14:38.742977   62207 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.38 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0914 18:14:38.743138   62207 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0914 18:14:38.743260   62207 addons.go:69] Setting storage-provisioner=true in profile "no-preload-168587"
	I0914 18:14:38.743271   62207 addons.go:69] Setting default-storageclass=true in profile "no-preload-168587"
	I0914 18:14:38.743282   62207 addons.go:234] Setting addon storage-provisioner=true in "no-preload-168587"
	I0914 18:14:38.743290   62207 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-168587"
	W0914 18:14:38.743295   62207 addons.go:243] addon storage-provisioner should already be in state true
	I0914 18:14:38.743306   62207 addons.go:69] Setting metrics-server=true in profile "no-preload-168587"
	I0914 18:14:38.743329   62207 host.go:66] Checking if "no-preload-168587" exists ...
	I0914 18:14:38.743334   62207 addons.go:234] Setting addon metrics-server=true in "no-preload-168587"
	I0914 18:14:38.743362   62207 config.go:182] Loaded profile config "no-preload-168587": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	W0914 18:14:38.743365   62207 addons.go:243] addon metrics-server should already be in state true
	I0914 18:14:38.743442   62207 host.go:66] Checking if "no-preload-168587" exists ...
	I0914 18:14:38.743775   62207 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 18:14:38.743814   62207 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 18:14:38.743843   62207 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 18:14:38.743821   62207 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 18:14:38.743775   62207 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 18:14:38.744070   62207 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 18:14:38.744427   62207 out.go:177] * Verifying Kubernetes components...
	I0914 18:14:38.745716   62207 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 18:14:38.760250   62207 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43795
	I0914 18:14:38.760329   62207 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41097
	I0914 18:14:38.760788   62207 main.go:141] libmachine: () Calling .GetVersion
	I0914 18:14:38.760810   62207 main.go:141] libmachine: () Calling .GetVersion
	I0914 18:14:38.761416   62207 main.go:141] libmachine: Using API Version  1
	I0914 18:14:38.761438   62207 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 18:14:38.761554   62207 main.go:141] libmachine: Using API Version  1
	I0914 18:14:38.761581   62207 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 18:14:38.761829   62207 main.go:141] libmachine: () Calling .GetMachineName
	I0914 18:14:38.761980   62207 main.go:141] libmachine: () Calling .GetMachineName
	I0914 18:14:38.762333   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetState
	I0914 18:14:38.762445   62207 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 18:14:38.762495   62207 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 18:14:38.763295   62207 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44059
	I0914 18:14:38.763767   62207 main.go:141] libmachine: () Calling .GetVersion
	I0914 18:14:38.764256   62207 main.go:141] libmachine: Using API Version  1
	I0914 18:14:38.764285   62207 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 18:14:38.764616   62207 main.go:141] libmachine: () Calling .GetMachineName
	I0914 18:14:38.765095   62207 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 18:14:38.765131   62207 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 18:14:38.765525   62207 addons.go:234] Setting addon default-storageclass=true in "no-preload-168587"
	W0914 18:14:38.765544   62207 addons.go:243] addon default-storageclass should already be in state true
	I0914 18:14:38.765568   62207 host.go:66] Checking if "no-preload-168587" exists ...
	I0914 18:14:38.765798   62207 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 18:14:38.765837   62207 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 18:14:38.782208   62207 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42659
	I0914 18:14:38.782527   62207 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41381
	I0914 18:14:38.782564   62207 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33835
	I0914 18:14:38.782679   62207 main.go:141] libmachine: () Calling .GetVersion
	I0914 18:14:38.782943   62207 main.go:141] libmachine: () Calling .GetVersion
	I0914 18:14:38.782973   62207 main.go:141] libmachine: () Calling .GetVersion
	I0914 18:14:38.783413   62207 main.go:141] libmachine: Using API Version  1
	I0914 18:14:38.783433   62207 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 18:14:38.783554   62207 main.go:141] libmachine: Using API Version  1
	I0914 18:14:38.783566   62207 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 18:14:38.783573   62207 main.go:141] libmachine: Using API Version  1
	I0914 18:14:38.783585   62207 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 18:14:38.783956   62207 main.go:141] libmachine: () Calling .GetMachineName
	I0914 18:14:38.783964   62207 main.go:141] libmachine: () Calling .GetMachineName
	I0914 18:14:38.784444   62207 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 18:14:38.784482   62207 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 18:14:38.784639   62207 main.go:141] libmachine: () Calling .GetMachineName
	I0914 18:14:38.784666   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetState
	I0914 18:14:38.784806   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetState
	I0914 18:14:38.786340   62207 main.go:141] libmachine: (no-preload-168587) Calling .DriverName
	I0914 18:14:38.786797   62207 main.go:141] libmachine: (no-preload-168587) Calling .DriverName
	I0914 18:14:38.788188   62207 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 18:14:38.788195   62207 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0914 18:14:38.789239   62207 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0914 18:14:38.789254   62207 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0914 18:14:38.789273   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetSSHHostname
	I0914 18:14:38.789338   62207 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0914 18:14:38.789347   62207 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0914 18:14:38.789358   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetSSHHostname
	I0914 18:14:38.792968   62207 main.go:141] libmachine: (no-preload-168587) DBG | domain no-preload-168587 has defined MAC address 52:54:00:4c:40:8a in network mk-no-preload-168587
	I0914 18:14:38.793521   62207 main.go:141] libmachine: (no-preload-168587) DBG | domain no-preload-168587 has defined MAC address 52:54:00:4c:40:8a in network mk-no-preload-168587
	I0914 18:14:38.793853   62207 main.go:141] libmachine: (no-preload-168587) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:40:8a", ip: ""} in network mk-no-preload-168587: {Iface:virbr2 ExpiryTime:2024-09-14 18:59:50 +0000 UTC Type:0 Mac:52:54:00:4c:40:8a Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:no-preload-168587 Clientid:01:52:54:00:4c:40:8a}
	I0914 18:14:38.793894   62207 main.go:141] libmachine: (no-preload-168587) DBG | domain no-preload-168587 has defined IP address 192.168.39.38 and MAC address 52:54:00:4c:40:8a in network mk-no-preload-168587
	I0914 18:14:38.794037   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetSSHPort
	I0914 18:14:38.794097   62207 main.go:141] libmachine: (no-preload-168587) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:40:8a", ip: ""} in network mk-no-preload-168587: {Iface:virbr2 ExpiryTime:2024-09-14 18:59:50 +0000 UTC Type:0 Mac:52:54:00:4c:40:8a Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:no-preload-168587 Clientid:01:52:54:00:4c:40:8a}
	I0914 18:14:38.794107   62207 main.go:141] libmachine: (no-preload-168587) DBG | domain no-preload-168587 has defined IP address 192.168.39.38 and MAC address 52:54:00:4c:40:8a in network mk-no-preload-168587
	I0914 18:14:38.794258   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetSSHKeyPath
	I0914 18:14:38.794351   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetSSHPort
	I0914 18:14:38.794499   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetSSHKeyPath
	I0914 18:14:38.794531   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetSSHUsername
	I0914 18:14:38.794635   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetSSHUsername
	I0914 18:14:38.794716   62207 sshutil.go:53] new ssh client: &{IP:192.168.39.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/no-preload-168587/id_rsa Username:docker}
	I0914 18:14:38.794777   62207 sshutil.go:53] new ssh client: &{IP:192.168.39.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/no-preload-168587/id_rsa Username:docker}
	I0914 18:14:38.827254   62207 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38643
	I0914 18:14:38.827852   62207 main.go:141] libmachine: () Calling .GetVersion
	I0914 18:14:38.828434   62207 main.go:141] libmachine: Using API Version  1
	I0914 18:14:38.828460   62207 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 18:14:38.828837   62207 main.go:141] libmachine: () Calling .GetMachineName
	I0914 18:14:38.829088   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetState
	I0914 18:14:38.830820   62207 main.go:141] libmachine: (no-preload-168587) Calling .DriverName
	I0914 18:14:38.831031   62207 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0914 18:14:38.831048   62207 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0914 18:14:38.831067   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetSSHHostname
	I0914 18:14:38.833822   62207 main.go:141] libmachine: (no-preload-168587) DBG | domain no-preload-168587 has defined MAC address 52:54:00:4c:40:8a in network mk-no-preload-168587
	I0914 18:14:38.834242   62207 main.go:141] libmachine: (no-preload-168587) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:40:8a", ip: ""} in network mk-no-preload-168587: {Iface:virbr2 ExpiryTime:2024-09-14 18:59:50 +0000 UTC Type:0 Mac:52:54:00:4c:40:8a Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:no-preload-168587 Clientid:01:52:54:00:4c:40:8a}
	I0914 18:14:38.834282   62207 main.go:141] libmachine: (no-preload-168587) DBG | domain no-preload-168587 has defined IP address 192.168.39.38 and MAC address 52:54:00:4c:40:8a in network mk-no-preload-168587
	I0914 18:14:38.834453   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetSSHPort
	I0914 18:14:38.834641   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetSSHKeyPath
	I0914 18:14:38.834794   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetSSHUsername
	I0914 18:14:38.834963   62207 sshutil.go:53] new ssh client: &{IP:192.168.39.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/no-preload-168587/id_rsa Username:docker}
	I0914 18:14:38.920627   62207 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0914 18:14:38.941951   62207 node_ready.go:35] waiting up to 6m0s for node "no-preload-168587" to be "Ready" ...
	I0914 18:14:38.973102   62207 node_ready.go:49] node "no-preload-168587" has status "Ready":"True"
	I0914 18:14:38.973124   62207 node_ready.go:38] duration metric: took 31.146661ms for node "no-preload-168587" to be "Ready" ...
	I0914 18:14:38.973132   62207 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 18:14:38.989712   62207 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-168587" in "kube-system" namespace to be "Ready" ...
	I0914 18:14:39.018196   62207 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0914 18:14:39.018223   62207 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0914 18:14:39.045691   62207 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0914 18:14:39.066249   62207 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0914 18:14:39.066277   62207 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0914 18:14:39.073017   62207 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0914 18:14:39.118360   62207 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0914 18:14:39.118385   62207 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0914 18:14:39.195268   62207 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0914 18:14:39.874924   62207 main.go:141] libmachine: Making call to close driver server
	I0914 18:14:39.874953   62207 main.go:141] libmachine: (no-preload-168587) Calling .Close
	I0914 18:14:39.874950   62207 main.go:141] libmachine: Making call to close driver server
	I0914 18:14:39.875004   62207 main.go:141] libmachine: (no-preload-168587) Calling .Close
	I0914 18:14:39.875398   62207 main.go:141] libmachine: (no-preload-168587) DBG | Closing plugin on server side
	I0914 18:14:39.875406   62207 main.go:141] libmachine: Successfully made call to close driver server
	I0914 18:14:39.875457   62207 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 18:14:39.875466   62207 main.go:141] libmachine: Making call to close driver server
	I0914 18:14:39.875476   62207 main.go:141] libmachine: (no-preload-168587) Calling .Close
	I0914 18:14:39.875406   62207 main.go:141] libmachine: (no-preload-168587) DBG | Closing plugin on server side
	I0914 18:14:39.875430   62207 main.go:141] libmachine: Successfully made call to close driver server
	I0914 18:14:39.875598   62207 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 18:14:39.875609   62207 main.go:141] libmachine: Making call to close driver server
	I0914 18:14:39.875631   62207 main.go:141] libmachine: (no-preload-168587) Calling .Close
	I0914 18:14:39.875914   62207 main.go:141] libmachine: (no-preload-168587) DBG | Closing plugin on server side
	I0914 18:14:39.875916   62207 main.go:141] libmachine: Successfully made call to close driver server
	I0914 18:14:39.875934   62207 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 18:14:39.875939   62207 main.go:141] libmachine: (no-preload-168587) DBG | Closing plugin on server side
	I0914 18:14:39.875959   62207 main.go:141] libmachine: Successfully made call to close driver server
	I0914 18:14:39.875966   62207 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 18:14:39.929860   62207 main.go:141] libmachine: Making call to close driver server
	I0914 18:14:39.929881   62207 main.go:141] libmachine: (no-preload-168587) Calling .Close
	I0914 18:14:39.930191   62207 main.go:141] libmachine: Successfully made call to close driver server
	I0914 18:14:39.930211   62207 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 18:14:40.139888   62207 main.go:141] libmachine: Making call to close driver server
	I0914 18:14:40.139918   62207 main.go:141] libmachine: (no-preload-168587) Calling .Close
	I0914 18:14:40.140256   62207 main.go:141] libmachine: Successfully made call to close driver server
	I0914 18:14:40.140273   62207 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 18:14:40.140282   62207 main.go:141] libmachine: Making call to close driver server
	I0914 18:14:40.140289   62207 main.go:141] libmachine: (no-preload-168587) Calling .Close
	I0914 18:14:40.140608   62207 main.go:141] libmachine: Successfully made call to close driver server
	I0914 18:14:40.140620   62207 main.go:141] libmachine: (no-preload-168587) DBG | Closing plugin on server side
	I0914 18:14:40.140630   62207 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 18:14:40.140646   62207 addons.go:475] Verifying addon metrics-server=true in "no-preload-168587"
	I0914 18:14:40.142461   62207 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0914 18:14:40.143818   62207 addons.go:510] duration metric: took 1.400695696s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0914 18:14:40.996599   62207 pod_ready.go:103] pod "etcd-no-preload-168587" in "kube-system" namespace has status "Ready":"False"
	I0914 18:14:43.498584   62207 pod_ready.go:103] pod "etcd-no-preload-168587" in "kube-system" namespace has status "Ready":"False"
	I0914 18:14:45.995938   62207 pod_ready.go:93] pod "etcd-no-preload-168587" in "kube-system" namespace has status "Ready":"True"
	I0914 18:14:45.995971   62207 pod_ready.go:82] duration metric: took 7.006220602s for pod "etcd-no-preload-168587" in "kube-system" namespace to be "Ready" ...
	I0914 18:14:45.995984   62207 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-168587" in "kube-system" namespace to be "Ready" ...
	I0914 18:14:46.000589   62207 pod_ready.go:93] pod "kube-apiserver-no-preload-168587" in "kube-system" namespace has status "Ready":"True"
	I0914 18:14:46.000609   62207 pod_ready.go:82] duration metric: took 4.618617ms for pod "kube-apiserver-no-preload-168587" in "kube-system" namespace to be "Ready" ...
	I0914 18:14:46.000620   62207 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-168587" in "kube-system" namespace to be "Ready" ...
	I0914 18:14:46.004865   62207 pod_ready.go:93] pod "kube-controller-manager-no-preload-168587" in "kube-system" namespace has status "Ready":"True"
	I0914 18:14:46.004886   62207 pod_ready.go:82] duration metric: took 4.259787ms for pod "kube-controller-manager-no-preload-168587" in "kube-system" namespace to be "Ready" ...
	I0914 18:14:46.004895   62207 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-xdj6b" in "kube-system" namespace to be "Ready" ...
	I0914 18:14:46.009225   62207 pod_ready.go:93] pod "kube-proxy-xdj6b" in "kube-system" namespace has status "Ready":"True"
	I0914 18:14:46.009243   62207 pod_ready.go:82] duration metric: took 4.343161ms for pod "kube-proxy-xdj6b" in "kube-system" namespace to be "Ready" ...
	I0914 18:14:46.009250   62207 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-168587" in "kube-system" namespace to be "Ready" ...
	I0914 18:14:46.013312   62207 pod_ready.go:93] pod "kube-scheduler-no-preload-168587" in "kube-system" namespace has status "Ready":"True"
	I0914 18:14:46.013330   62207 pod_ready.go:82] duration metric: took 4.073817ms for pod "kube-scheduler-no-preload-168587" in "kube-system" namespace to be "Ready" ...
	I0914 18:14:46.013337   62207 pod_ready.go:39] duration metric: took 7.040196066s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 18:14:46.013358   62207 api_server.go:52] waiting for apiserver process to appear ...
	I0914 18:14:46.013403   62207 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:14:46.029881   62207 api_server.go:72] duration metric: took 7.286871398s to wait for apiserver process to appear ...
	I0914 18:14:46.029912   62207 api_server.go:88] waiting for apiserver healthz status ...
	I0914 18:14:46.029937   62207 api_server.go:253] Checking apiserver healthz at https://192.168.39.38:8443/healthz ...
	I0914 18:14:46.034236   62207 api_server.go:279] https://192.168.39.38:8443/healthz returned 200:
	ok
	I0914 18:14:46.035287   62207 api_server.go:141] control plane version: v1.31.1
	I0914 18:14:46.035305   62207 api_server.go:131] duration metric: took 5.385499ms to wait for apiserver health ...
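	The healthz probe logged just above can be reproduced with a short poller like the sketch below. This is a stand-alone approximation of the check performed around api_server.go, not minikube's actual code; it skips TLS verification purely for brevity, whereas minikube validates against the cluster CA.

	// Minimal sketch (not minikube's implementation) of the apiserver healthz
	// poll seen in the log: GET https://<node-ip>:8443/healthz until it returns
	// 200 "ok" or the timeout expires.
	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout:   2 * time.Second,
			// InsecureSkipVerify only for this illustration.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					fmt.Printf("%s returned 200: %s\n", url, body)
					return nil
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("apiserver healthz not ready within %s", timeout)
	}

	func main() {
		if err := waitForHealthz("https://192.168.39.38:8443/healthz", time.Minute); err != nil {
			fmt.Println(err)
		}
	}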
	I0914 18:14:46.035314   62207 system_pods.go:43] waiting for kube-system pods to appear ...
	I0914 18:14:46.196765   62207 system_pods.go:59] 9 kube-system pods found
	I0914 18:14:46.196796   62207 system_pods.go:61] "coredns-7c65d6cfc9-nzpdb" [acd2d488-301e-4d00-a17a-0e06ea5d9691] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0914 18:14:46.196804   62207 system_pods.go:61] "coredns-7c65d6cfc9-qrgr9" [31b611b3-d861-451f-8c17-30bed52994a0] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0914 18:14:46.196810   62207 system_pods.go:61] "etcd-no-preload-168587" [8d3dc146-eb39-4ed3-9409-63ea08fffd39] Running
	I0914 18:14:46.196816   62207 system_pods.go:61] "kube-apiserver-no-preload-168587" [9a68e520-b40c-42d2-9052-757c7c99a958] Running
	I0914 18:14:46.196821   62207 system_pods.go:61] "kube-controller-manager-no-preload-168587" [47d23e53-0f6b-45bd-8288-3101a35b1827] Running
	I0914 18:14:46.196824   62207 system_pods.go:61] "kube-proxy-xdj6b" [d3080090-4f40-49e1-9c3e-ccceb37cc952] Running
	I0914 18:14:46.196827   62207 system_pods.go:61] "kube-scheduler-no-preload-168587" [e7fe55f8-d203-4de2-9594-15f858453434] Running
	I0914 18:14:46.196832   62207 system_pods.go:61] "metrics-server-6867b74b74-cmcz4" [24cea6b3-a107-4110-ac29-88389b55bbdc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 18:14:46.196835   62207 system_pods.go:61] "storage-provisioner" [57b6d85d-fc04-42da-9452-3f24824b8377] Running
	I0914 18:14:46.196842   62207 system_pods.go:74] duration metric: took 161.510322ms to wait for pod list to return data ...
	I0914 18:14:46.196853   62207 default_sa.go:34] waiting for default service account to be created ...
	I0914 18:14:46.394399   62207 default_sa.go:45] found service account: "default"
	I0914 18:14:46.394428   62207 default_sa.go:55] duration metric: took 197.566762ms for default service account to be created ...
	I0914 18:14:46.394443   62207 system_pods.go:116] waiting for k8s-apps to be running ...
	I0914 18:14:46.596421   62207 system_pods.go:86] 9 kube-system pods found
	I0914 18:14:46.596454   62207 system_pods.go:89] "coredns-7c65d6cfc9-nzpdb" [acd2d488-301e-4d00-a17a-0e06ea5d9691] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0914 18:14:46.596462   62207 system_pods.go:89] "coredns-7c65d6cfc9-qrgr9" [31b611b3-d861-451f-8c17-30bed52994a0] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0914 18:14:46.596468   62207 system_pods.go:89] "etcd-no-preload-168587" [8d3dc146-eb39-4ed3-9409-63ea08fffd39] Running
	I0914 18:14:46.596473   62207 system_pods.go:89] "kube-apiserver-no-preload-168587" [9a68e520-b40c-42d2-9052-757c7c99a958] Running
	I0914 18:14:46.596477   62207 system_pods.go:89] "kube-controller-manager-no-preload-168587" [47d23e53-0f6b-45bd-8288-3101a35b1827] Running
	I0914 18:14:46.596480   62207 system_pods.go:89] "kube-proxy-xdj6b" [d3080090-4f40-49e1-9c3e-ccceb37cc952] Running
	I0914 18:14:46.596483   62207 system_pods.go:89] "kube-scheduler-no-preload-168587" [e7fe55f8-d203-4de2-9594-15f858453434] Running
	I0914 18:14:46.596502   62207 system_pods.go:89] "metrics-server-6867b74b74-cmcz4" [24cea6b3-a107-4110-ac29-88389b55bbdc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 18:14:46.596509   62207 system_pods.go:89] "storage-provisioner" [57b6d85d-fc04-42da-9452-3f24824b8377] Running
	I0914 18:14:46.596517   62207 system_pods.go:126] duration metric: took 202.067078ms to wait for k8s-apps to be running ...
	I0914 18:14:46.596527   62207 system_svc.go:44] waiting for kubelet service to be running ....
	I0914 18:14:46.596571   62207 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 18:14:46.611796   62207 system_svc.go:56] duration metric: took 15.259464ms WaitForService to wait for kubelet
	I0914 18:14:46.611837   62207 kubeadm.go:582] duration metric: took 7.868833105s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0914 18:14:46.611858   62207 node_conditions.go:102] verifying NodePressure condition ...
	I0914 18:14:46.794731   62207 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0914 18:14:46.794758   62207 node_conditions.go:123] node cpu capacity is 2
	I0914 18:14:46.794767   62207 node_conditions.go:105] duration metric: took 182.903835ms to run NodePressure ...
	I0914 18:14:46.794777   62207 start.go:241] waiting for startup goroutines ...
	I0914 18:14:46.794783   62207 start.go:246] waiting for cluster config update ...
	I0914 18:14:46.794793   62207 start.go:255] writing updated cluster config ...
	I0914 18:14:46.795051   62207 ssh_runner.go:195] Run: rm -f paused
	I0914 18:14:46.845803   62207 start.go:600] kubectl: 1.31.0, cluster: 1.31.1 (minor skew: 0)
	I0914 18:14:46.847399   62207 out.go:177] * Done! kubectl is now configured to use "no-preload-168587" cluster and "default" namespace by default
	I0914 18:14:53.509475   62996 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0914 18:14:53.509669   62996 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0914 18:14:53.509699   62996 kubeadm.go:310] 
	I0914 18:14:53.509778   62996 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0914 18:14:53.509838   62996 kubeadm.go:310] 		timed out waiting for the condition
	I0914 18:14:53.509849   62996 kubeadm.go:310] 
	I0914 18:14:53.509901   62996 kubeadm.go:310] 	This error is likely caused by:
	I0914 18:14:53.509966   62996 kubeadm.go:310] 		- The kubelet is not running
	I0914 18:14:53.510115   62996 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0914 18:14:53.510126   62996 kubeadm.go:310] 
	I0914 18:14:53.510293   62996 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0914 18:14:53.510346   62996 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0914 18:14:53.510386   62996 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0914 18:14:53.510394   62996 kubeadm.go:310] 
	I0914 18:14:53.510487   62996 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0914 18:14:53.510567   62996 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0914 18:14:53.510582   62996 kubeadm.go:310] 
	I0914 18:14:53.510758   62996 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0914 18:14:53.510852   62996 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0914 18:14:53.510953   62996 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0914 18:14:53.511074   62996 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0914 18:14:53.511085   62996 kubeadm.go:310] 
	I0914 18:14:53.511727   62996 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0914 18:14:53.511824   62996 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0914 18:14:53.511904   62996 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0914 18:14:53.512051   62996 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0914 18:14:53.512098   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0914 18:14:53.965324   62996 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 18:14:53.982028   62996 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0914 18:14:53.993640   62996 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0914 18:14:53.993674   62996 kubeadm.go:157] found existing configuration files:
	
	I0914 18:14:53.993745   62996 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0914 18:14:54.004600   62996 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0914 18:14:54.004669   62996 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0914 18:14:54.015315   62996 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0914 18:14:54.025727   62996 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0914 18:14:54.025795   62996 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0914 18:14:54.035619   62996 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0914 18:14:54.044936   62996 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0914 18:14:54.045003   62996 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0914 18:14:54.055091   62996 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0914 18:14:54.064576   62996 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0914 18:14:54.064630   62996 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0914 18:14:54.074698   62996 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0914 18:14:54.143625   62996 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0914 18:14:54.143712   62996 kubeadm.go:310] [preflight] Running pre-flight checks
	I0914 18:14:54.289361   62996 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0914 18:14:54.289488   62996 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0914 18:14:54.289629   62996 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0914 18:14:54.479052   62996 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0914 18:14:54.481175   62996 out.go:235]   - Generating certificates and keys ...
	I0914 18:14:54.481284   62996 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0914 18:14:54.481391   62996 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0914 18:14:54.481469   62996 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0914 18:14:54.481522   62996 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0914 18:14:54.481585   62996 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0914 18:14:54.481631   62996 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0914 18:14:54.481685   62996 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0914 18:14:54.481737   62996 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0914 18:14:54.481829   62996 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0914 18:14:54.481926   62996 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0914 18:14:54.481977   62996 kubeadm.go:310] [certs] Using the existing "sa" key
	I0914 18:14:54.482063   62996 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0914 18:14:54.695002   62996 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0914 18:14:54.850598   62996 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0914 18:14:54.964590   62996 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0914 18:14:55.108047   62996 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0914 18:14:55.126530   62996 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0914 18:14:55.128690   62996 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0914 18:14:55.128760   62996 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0914 18:14:55.272139   62996 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0914 18:14:55.274365   62996 out.go:235]   - Booting up control plane ...
	I0914 18:14:55.274529   62996 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0914 18:14:55.279796   62996 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0914 18:14:55.281097   62996 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0914 18:14:55.281998   62996 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0914 18:14:55.285620   62996 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0914 18:15:35.288294   62996 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0914 18:15:35.288485   62996 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0914 18:15:35.288693   62996 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0914 18:15:40.289032   62996 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0914 18:15:40.289327   62996 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0914 18:15:50.289795   62996 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0914 18:15:50.290023   62996 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0914 18:16:10.291201   62996 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0914 18:16:10.291427   62996 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0914 18:16:50.292253   62996 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0914 18:16:50.292481   62996 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0914 18:16:50.292503   62996 kubeadm.go:310] 
	I0914 18:16:50.292554   62996 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0914 18:16:50.292606   62996 kubeadm.go:310] 		timed out waiting for the condition
	I0914 18:16:50.292615   62996 kubeadm.go:310] 
	I0914 18:16:50.292654   62996 kubeadm.go:310] 	This error is likely caused by:
	I0914 18:16:50.292685   62996 kubeadm.go:310] 		- The kubelet is not running
	I0914 18:16:50.292773   62996 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0914 18:16:50.292780   62996 kubeadm.go:310] 
	I0914 18:16:50.292912   62996 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0914 18:16:50.292953   62996 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0914 18:16:50.292993   62996 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0914 18:16:50.293022   62996 kubeadm.go:310] 
	I0914 18:16:50.293176   62996 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0914 18:16:50.293293   62996 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0914 18:16:50.293308   62996 kubeadm.go:310] 
	I0914 18:16:50.293470   62996 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0914 18:16:50.293602   62996 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0914 18:16:50.293709   62996 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0914 18:16:50.293810   62996 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0914 18:16:50.293830   62996 kubeadm.go:310] 
	I0914 18:16:50.294646   62996 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0914 18:16:50.294759   62996 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0914 18:16:50.294871   62996 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0914 18:16:50.294910   62996 kubeadm.go:394] duration metric: took 7m56.82551772s to StartCluster
	I0914 18:16:50.294961   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:16:50.295021   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:16:50.341859   62996 cri.go:89] found id: ""
	I0914 18:16:50.341894   62996 logs.go:276] 0 containers: []
	W0914 18:16:50.341908   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:16:50.341916   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:16:50.341983   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:16:50.380725   62996 cri.go:89] found id: ""
	I0914 18:16:50.380755   62996 logs.go:276] 0 containers: []
	W0914 18:16:50.380766   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:16:50.380773   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:16:50.380842   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:16:50.415978   62996 cri.go:89] found id: ""
	I0914 18:16:50.416003   62996 logs.go:276] 0 containers: []
	W0914 18:16:50.416012   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:16:50.416017   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:16:50.416065   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:16:50.452823   62996 cri.go:89] found id: ""
	I0914 18:16:50.452859   62996 logs.go:276] 0 containers: []
	W0914 18:16:50.452872   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:16:50.452882   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:16:50.452939   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:16:50.487240   62996 cri.go:89] found id: ""
	I0914 18:16:50.487272   62996 logs.go:276] 0 containers: []
	W0914 18:16:50.487283   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:16:50.487291   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:16:50.487353   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:16:50.520690   62996 cri.go:89] found id: ""
	I0914 18:16:50.520719   62996 logs.go:276] 0 containers: []
	W0914 18:16:50.520728   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:16:50.520735   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:16:50.520783   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:16:50.558150   62996 cri.go:89] found id: ""
	I0914 18:16:50.558191   62996 logs.go:276] 0 containers: []
	W0914 18:16:50.558200   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:16:50.558206   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:16:50.558266   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:16:50.595843   62996 cri.go:89] found id: ""
	I0914 18:16:50.595879   62996 logs.go:276] 0 containers: []
	W0914 18:16:50.595893   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:16:50.595905   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:16:50.595920   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:16:50.650623   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:16:50.650659   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:16:50.664991   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:16:50.665018   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:16:50.747876   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:16:50.747899   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:16:50.747915   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:16:50.849314   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:16:50.849354   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0914 18:16:50.889101   62996 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0914 18:16:50.889181   62996 out.go:270] * 
	W0914 18:16:50.889263   62996 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0914 18:16:50.889287   62996 out.go:270] * 
	W0914 18:16:50.890531   62996 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0914 18:16:50.893666   62996 out.go:201] 
	W0914 18:16:50.894916   62996 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0914 18:16:50.894958   62996 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0914 18:16:50.894991   62996 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0914 18:16:50.896591   62996 out.go:201] 
	
	
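	(A minimal sketch of the remediation and triage commands the output above points at; only the --extra-config value and the systemctl/journalctl/crictl invocations come from the log itself, the profile name is a placeholder and the rest of the start flags are assumptions, not taken from this run.)
	
	# Retry the start with the kubelet cgroup driver forced to systemd, as the
	# suggestion above recommends (profile name is hypothetical):
	minikube start -p <profile> --extra-config=kubelet.cgroup-driver=systemd
	
	# Triage steps quoted by kubeadm when the kubelet health check on :10248 keeps
	# refusing connections:
	systemctl status kubelet
	journalctl -xeu kubelet
	crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID
	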
	==> CRI-O <==
	Sep 14 18:23:48 no-preload-168587 crio[707]: time="2024-09-14 18:23:48.966379386Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726338228966357807,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5a0fdb13-cbd5-4c11-add6-8c388fb07749 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 14 18:23:48 no-preload-168587 crio[707]: time="2024-09-14 18:23:48.967126173Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0c6ac5b7-27db-4927-b883-f21824e73f0b name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 18:23:48 no-preload-168587 crio[707]: time="2024-09-14 18:23:48.967183169Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0c6ac5b7-27db-4927-b883-f21824e73f0b name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 18:23:48 no-preload-168587 crio[707]: time="2024-09-14 18:23:48.967394087Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:cfaed3fc943fc19b68f4391b6cea58d5b9e862d6e30de59cece475d8eadcbab5,PodSandboxId:8acc590924839c9c21b63258dc7a84ee1142419a3d2da023aea8e27f4aeb6f08,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726337680429901464,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 57b6d85d-fc04-42da-9452-3f24824b8377,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f9d600e4a1dd9cf36a64408a6c099fa4e7dd7d2ec671638fdcb81460d530efe,PodSandboxId:1d79c7b4b2a16ebfe4d4525b7b629b785c9aab7528a8e48c0027baf882dc028a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726337680409230228,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-qrgr9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 31b611b3-d861-451f-8c17-30bed52994a0,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:95a7091fc8692fc8d77f2e7a0b62c45f40efa5364223130ee17c6feb309d604d,PodSandboxId:a23867fe0fc270298a52bdd674447946ce4f111b956833f35302b7278b86c368,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726337680314085708,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-nzpdb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac
d2d488-301e-4d00-a17a-0e06ea5d9691,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:feceee2bacff40291f6daff2ccdc08e3e51bd6da7fcc93d21080c7227693e751,PodSandboxId:ef42eca30406505665370799dde6c81e80f47bab6db2e3d116cfe42f9b232b06,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:
1726337680207515694,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xdj6b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d3080090-4f40-49e1-9c3e-ccceb37cc952,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1e840de5726f1c2bfcfbd50bd5ee12dcad4eb9761ec850513c1c9642ea3842f5,PodSandboxId:340d402dc2fd092e77871f6158ea373c89210922054918530870656d3eb0a518,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726337669078810507,Labels:map[string]s
tring{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-168587,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e3eb20d1ab71f721a56ab5915a453cf1,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5ce526810d6c8eea17470493ff66c9f49a70886febdf256562b75aa84d8444b2,PodSandboxId:71c69668af05fb2326b385785b321ab030c73efe64f54ec927e6949e75a54b1d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726337669015880668,Labels:map[string]string{io.kubernetes.container.nam
e: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-168587,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2086d13597e7c6d9765a66af4193169,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0197ffbc2979d4120aae294136ad27f9c345ca48e8355273231be9ae7240f7ff,PodSandboxId:a3c494d523c1990093f3bb667782a077a957e4b61f226e04383cc07c70ae8784,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726337669014676053,Labels:map[string]string{io.kubernetes.container.name: kube-sched
uler,io.kubernetes.pod.name: kube-scheduler-no-preload-168587,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d763ec12cbc7e3071dad1cec3727a213,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f5ee6161f59dddd87a981b174e9ed7d96412afb9c9ebb2bc51c9f1cc36ee11cf,PodSandboxId:1a3dc35d32452d59ef7fa5862f1818bfa7557ce9e7b6de0a736f2aefda6c3684,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726337668970001781,Labels:map[string]string{io.kubernetes.container.name: kube-controlle
r-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-168587,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 21e50b1820f085ff0a8dfee7f5214e80,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:daf75a85550981793df6004b119a63cf610a35685a49a69dd7a91ec0c826055c,PodSandboxId:29e5c57da77a7868e0b7af65eb646d0f1b15877f520654ab8a39e9a6d1145216,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726337383835129496,Labels:map[string]string{io.kubernetes.container.name: kube-apise
rver,io.kubernetes.pod.name: kube-apiserver-no-preload-168587,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2086d13597e7c6d9765a66af4193169,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=0c6ac5b7-27db-4927-b883-f21824e73f0b name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 18:23:49 no-preload-168587 crio[707]: time="2024-09-14 18:23:49.004894568Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=2363cf31-619c-4350-bb8f-dfa2fc5e8f16 name=/runtime.v1.RuntimeService/Version
	Sep 14 18:23:49 no-preload-168587 crio[707]: time="2024-09-14 18:23:49.004982704Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=2363cf31-619c-4350-bb8f-dfa2fc5e8f16 name=/runtime.v1.RuntimeService/Version
	Sep 14 18:23:49 no-preload-168587 crio[707]: time="2024-09-14 18:23:49.006228187Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=9fe144e2-adb1-40cd-a65a-13f396da6e8f name=/runtime.v1.ImageService/ImageFsInfo
	Sep 14 18:23:49 no-preload-168587 crio[707]: time="2024-09-14 18:23:49.006575790Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726338229006552941,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9fe144e2-adb1-40cd-a65a-13f396da6e8f name=/runtime.v1.ImageService/ImageFsInfo
	Sep 14 18:23:49 no-preload-168587 crio[707]: time="2024-09-14 18:23:49.007453708Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=16cf3aa8-2fea-4e7c-947f-dcafcf216716 name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 18:23:49 no-preload-168587 crio[707]: time="2024-09-14 18:23:49.007525715Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=16cf3aa8-2fea-4e7c-947f-dcafcf216716 name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 18:23:49 no-preload-168587 crio[707]: time="2024-09-14 18:23:49.007727611Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:cfaed3fc943fc19b68f4391b6cea58d5b9e862d6e30de59cece475d8eadcbab5,PodSandboxId:8acc590924839c9c21b63258dc7a84ee1142419a3d2da023aea8e27f4aeb6f08,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726337680429901464,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 57b6d85d-fc04-42da-9452-3f24824b8377,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f9d600e4a1dd9cf36a64408a6c099fa4e7dd7d2ec671638fdcb81460d530efe,PodSandboxId:1d79c7b4b2a16ebfe4d4525b7b629b785c9aab7528a8e48c0027baf882dc028a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726337680409230228,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-qrgr9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 31b611b3-d861-451f-8c17-30bed52994a0,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:95a7091fc8692fc8d77f2e7a0b62c45f40efa5364223130ee17c6feb309d604d,PodSandboxId:a23867fe0fc270298a52bdd674447946ce4f111b956833f35302b7278b86c368,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726337680314085708,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-nzpdb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac
d2d488-301e-4d00-a17a-0e06ea5d9691,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:feceee2bacff40291f6daff2ccdc08e3e51bd6da7fcc93d21080c7227693e751,PodSandboxId:ef42eca30406505665370799dde6c81e80f47bab6db2e3d116cfe42f9b232b06,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:
1726337680207515694,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xdj6b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d3080090-4f40-49e1-9c3e-ccceb37cc952,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1e840de5726f1c2bfcfbd50bd5ee12dcad4eb9761ec850513c1c9642ea3842f5,PodSandboxId:340d402dc2fd092e77871f6158ea373c89210922054918530870656d3eb0a518,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726337669078810507,Labels:map[string]s
tring{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-168587,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e3eb20d1ab71f721a56ab5915a453cf1,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5ce526810d6c8eea17470493ff66c9f49a70886febdf256562b75aa84d8444b2,PodSandboxId:71c69668af05fb2326b385785b321ab030c73efe64f54ec927e6949e75a54b1d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726337669015880668,Labels:map[string]string{io.kubernetes.container.nam
e: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-168587,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2086d13597e7c6d9765a66af4193169,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0197ffbc2979d4120aae294136ad27f9c345ca48e8355273231be9ae7240f7ff,PodSandboxId:a3c494d523c1990093f3bb667782a077a957e4b61f226e04383cc07c70ae8784,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726337669014676053,Labels:map[string]string{io.kubernetes.container.name: kube-sched
uler,io.kubernetes.pod.name: kube-scheduler-no-preload-168587,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d763ec12cbc7e3071dad1cec3727a213,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f5ee6161f59dddd87a981b174e9ed7d96412afb9c9ebb2bc51c9f1cc36ee11cf,PodSandboxId:1a3dc35d32452d59ef7fa5862f1818bfa7557ce9e7b6de0a736f2aefda6c3684,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726337668970001781,Labels:map[string]string{io.kubernetes.container.name: kube-controlle
r-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-168587,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 21e50b1820f085ff0a8dfee7f5214e80,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:daf75a85550981793df6004b119a63cf610a35685a49a69dd7a91ec0c826055c,PodSandboxId:29e5c57da77a7868e0b7af65eb646d0f1b15877f520654ab8a39e9a6d1145216,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726337383835129496,Labels:map[string]string{io.kubernetes.container.name: kube-apise
rver,io.kubernetes.pod.name: kube-apiserver-no-preload-168587,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2086d13597e7c6d9765a66af4193169,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=16cf3aa8-2fea-4e7c-947f-dcafcf216716 name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 18:23:49 no-preload-168587 crio[707]: time="2024-09-14 18:23:49.042605868Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=1a15f67a-db14-4b0b-9a6f-0f03d0f28b0e name=/runtime.v1.RuntimeService/Version
	Sep 14 18:23:49 no-preload-168587 crio[707]: time="2024-09-14 18:23:49.042676240Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=1a15f67a-db14-4b0b-9a6f-0f03d0f28b0e name=/runtime.v1.RuntimeService/Version
	Sep 14 18:23:49 no-preload-168587 crio[707]: time="2024-09-14 18:23:49.043775627Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=2f81e697-a874-4eb9-84aa-f156f2b87d8f name=/runtime.v1.ImageService/ImageFsInfo
	Sep 14 18:23:49 no-preload-168587 crio[707]: time="2024-09-14 18:23:49.044326720Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726338229044302357,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2f81e697-a874-4eb9-84aa-f156f2b87d8f name=/runtime.v1.ImageService/ImageFsInfo
	Sep 14 18:23:49 no-preload-168587 crio[707]: time="2024-09-14 18:23:49.044759466Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ac1624d6-5b52-4d19-9622-be94c79251e5 name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 18:23:49 no-preload-168587 crio[707]: time="2024-09-14 18:23:49.044860857Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ac1624d6-5b52-4d19-9622-be94c79251e5 name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 18:23:49 no-preload-168587 crio[707]: time="2024-09-14 18:23:49.045096430Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:cfaed3fc943fc19b68f4391b6cea58d5b9e862d6e30de59cece475d8eadcbab5,PodSandboxId:8acc590924839c9c21b63258dc7a84ee1142419a3d2da023aea8e27f4aeb6f08,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726337680429901464,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 57b6d85d-fc04-42da-9452-3f24824b8377,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f9d600e4a1dd9cf36a64408a6c099fa4e7dd7d2ec671638fdcb81460d530efe,PodSandboxId:1d79c7b4b2a16ebfe4d4525b7b629b785c9aab7528a8e48c0027baf882dc028a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726337680409230228,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-qrgr9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 31b611b3-d861-451f-8c17-30bed52994a0,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:95a7091fc8692fc8d77f2e7a0b62c45f40efa5364223130ee17c6feb309d604d,PodSandboxId:a23867fe0fc270298a52bdd674447946ce4f111b956833f35302b7278b86c368,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726337680314085708,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-nzpdb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac
d2d488-301e-4d00-a17a-0e06ea5d9691,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:feceee2bacff40291f6daff2ccdc08e3e51bd6da7fcc93d21080c7227693e751,PodSandboxId:ef42eca30406505665370799dde6c81e80f47bab6db2e3d116cfe42f9b232b06,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:
1726337680207515694,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xdj6b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d3080090-4f40-49e1-9c3e-ccceb37cc952,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1e840de5726f1c2bfcfbd50bd5ee12dcad4eb9761ec850513c1c9642ea3842f5,PodSandboxId:340d402dc2fd092e77871f6158ea373c89210922054918530870656d3eb0a518,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726337669078810507,Labels:map[string]s
tring{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-168587,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e3eb20d1ab71f721a56ab5915a453cf1,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5ce526810d6c8eea17470493ff66c9f49a70886febdf256562b75aa84d8444b2,PodSandboxId:71c69668af05fb2326b385785b321ab030c73efe64f54ec927e6949e75a54b1d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726337669015880668,Labels:map[string]string{io.kubernetes.container.nam
e: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-168587,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2086d13597e7c6d9765a66af4193169,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0197ffbc2979d4120aae294136ad27f9c345ca48e8355273231be9ae7240f7ff,PodSandboxId:a3c494d523c1990093f3bb667782a077a957e4b61f226e04383cc07c70ae8784,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726337669014676053,Labels:map[string]string{io.kubernetes.container.name: kube-sched
uler,io.kubernetes.pod.name: kube-scheduler-no-preload-168587,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d763ec12cbc7e3071dad1cec3727a213,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f5ee6161f59dddd87a981b174e9ed7d96412afb9c9ebb2bc51c9f1cc36ee11cf,PodSandboxId:1a3dc35d32452d59ef7fa5862f1818bfa7557ce9e7b6de0a736f2aefda6c3684,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726337668970001781,Labels:map[string]string{io.kubernetes.container.name: kube-controlle
r-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-168587,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 21e50b1820f085ff0a8dfee7f5214e80,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:daf75a85550981793df6004b119a63cf610a35685a49a69dd7a91ec0c826055c,PodSandboxId:29e5c57da77a7868e0b7af65eb646d0f1b15877f520654ab8a39e9a6d1145216,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726337383835129496,Labels:map[string]string{io.kubernetes.container.name: kube-apise
rver,io.kubernetes.pod.name: kube-apiserver-no-preload-168587,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2086d13597e7c6d9765a66af4193169,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ac1624d6-5b52-4d19-9622-be94c79251e5 name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 18:23:49 no-preload-168587 crio[707]: time="2024-09-14 18:23:49.078317136Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f930859c-7382-4471-b76f-6ff908aed400 name=/runtime.v1.RuntimeService/Version
	Sep 14 18:23:49 no-preload-168587 crio[707]: time="2024-09-14 18:23:49.078403470Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f930859c-7382-4471-b76f-6ff908aed400 name=/runtime.v1.RuntimeService/Version
	Sep 14 18:23:49 no-preload-168587 crio[707]: time="2024-09-14 18:23:49.080014799Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=861297fc-5427-4409-85ef-d4e5d261f4b5 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 14 18:23:49 no-preload-168587 crio[707]: time="2024-09-14 18:23:49.080471779Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726338229080444081,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=861297fc-5427-4409-85ef-d4e5d261f4b5 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 14 18:23:49 no-preload-168587 crio[707]: time="2024-09-14 18:23:49.080998871Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=39b56619-b856-4ef1-b855-230d3b1c68e1 name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 18:23:49 no-preload-168587 crio[707]: time="2024-09-14 18:23:49.081095802Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=39b56619-b856-4ef1-b855-230d3b1c68e1 name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 18:23:49 no-preload-168587 crio[707]: time="2024-09-14 18:23:49.081294643Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:cfaed3fc943fc19b68f4391b6cea58d5b9e862d6e30de59cece475d8eadcbab5,PodSandboxId:8acc590924839c9c21b63258dc7a84ee1142419a3d2da023aea8e27f4aeb6f08,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726337680429901464,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 57b6d85d-fc04-42da-9452-3f24824b8377,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f9d600e4a1dd9cf36a64408a6c099fa4e7dd7d2ec671638fdcb81460d530efe,PodSandboxId:1d79c7b4b2a16ebfe4d4525b7b629b785c9aab7528a8e48c0027baf882dc028a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726337680409230228,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-qrgr9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 31b611b3-d861-451f-8c17-30bed52994a0,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:95a7091fc8692fc8d77f2e7a0b62c45f40efa5364223130ee17c6feb309d604d,PodSandboxId:a23867fe0fc270298a52bdd674447946ce4f111b956833f35302b7278b86c368,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726337680314085708,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-nzpdb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac
d2d488-301e-4d00-a17a-0e06ea5d9691,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:feceee2bacff40291f6daff2ccdc08e3e51bd6da7fcc93d21080c7227693e751,PodSandboxId:ef42eca30406505665370799dde6c81e80f47bab6db2e3d116cfe42f9b232b06,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:
1726337680207515694,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xdj6b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d3080090-4f40-49e1-9c3e-ccceb37cc952,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1e840de5726f1c2bfcfbd50bd5ee12dcad4eb9761ec850513c1c9642ea3842f5,PodSandboxId:340d402dc2fd092e77871f6158ea373c89210922054918530870656d3eb0a518,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726337669078810507,Labels:map[string]s
tring{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-168587,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e3eb20d1ab71f721a56ab5915a453cf1,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5ce526810d6c8eea17470493ff66c9f49a70886febdf256562b75aa84d8444b2,PodSandboxId:71c69668af05fb2326b385785b321ab030c73efe64f54ec927e6949e75a54b1d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726337669015880668,Labels:map[string]string{io.kubernetes.container.nam
e: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-168587,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2086d13597e7c6d9765a66af4193169,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0197ffbc2979d4120aae294136ad27f9c345ca48e8355273231be9ae7240f7ff,PodSandboxId:a3c494d523c1990093f3bb667782a077a957e4b61f226e04383cc07c70ae8784,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726337669014676053,Labels:map[string]string{io.kubernetes.container.name: kube-sched
uler,io.kubernetes.pod.name: kube-scheduler-no-preload-168587,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d763ec12cbc7e3071dad1cec3727a213,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f5ee6161f59dddd87a981b174e9ed7d96412afb9c9ebb2bc51c9f1cc36ee11cf,PodSandboxId:1a3dc35d32452d59ef7fa5862f1818bfa7557ce9e7b6de0a736f2aefda6c3684,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726337668970001781,Labels:map[string]string{io.kubernetes.container.name: kube-controlle
r-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-168587,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 21e50b1820f085ff0a8dfee7f5214e80,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:daf75a85550981793df6004b119a63cf610a35685a49a69dd7a91ec0c826055c,PodSandboxId:29e5c57da77a7868e0b7af65eb646d0f1b15877f520654ab8a39e9a6d1145216,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726337383835129496,Labels:map[string]string{io.kubernetes.container.name: kube-apise
rver,io.kubernetes.pod.name: kube-apiserver-no-preload-168587,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2086d13597e7c6d9765a66af4193169,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=39b56619-b856-4ef1-b855-230d3b1c68e1 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	cfaed3fc943fc       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   9 minutes ago       Running             storage-provisioner       0                   8acc590924839       storage-provisioner
	2f9d600e4a1dd       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   9 minutes ago       Running             coredns                   0                   1d79c7b4b2a16       coredns-7c65d6cfc9-qrgr9
	95a7091fc8692       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   9 minutes ago       Running             coredns                   0                   a23867fe0fc27       coredns-7c65d6cfc9-nzpdb
	feceee2bacff4       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561   9 minutes ago       Running             kube-proxy                0                   ef42eca304065       kube-proxy-xdj6b
	1e840de5726f1       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   9 minutes ago       Running             etcd                      2                   340d402dc2fd0       etcd-no-preload-168587
	5ce526810d6c8       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee   9 minutes ago       Running             kube-apiserver            2                   71c69668af05f       kube-apiserver-no-preload-168587
	0197ffbc2979d       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b   9 minutes ago       Running             kube-scheduler            2                   a3c494d523c19       kube-scheduler-no-preload-168587
	f5ee6161f59dd       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1   9 minutes ago       Running             kube-controller-manager   2                   1a3dc35d32452       kube-controller-manager-no-preload-168587
	daf75a8555098       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee   14 minutes ago      Exited              kube-apiserver            1                   29e5c57da77a7       kube-apiserver-no-preload-168587
	
	
	==> coredns [2f9d600e4a1dd9cf36a64408a6c099fa4e7dd7d2ec671638fdcb81460d530efe] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> coredns [95a7091fc8692fc8d77f2e7a0b62c45f40efa5364223130ee17c6feb309d604d] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> describe nodes <==
	Name:               no-preload-168587
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-168587
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fbeeb744274463b05401c917e5ab21bbaf5ef95a
	                    minikube.k8s.io/name=no-preload-168587
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_14T18_14_35_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 14 Sep 2024 18:14:31 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-168587
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 14 Sep 2024 18:23:45 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 14 Sep 2024 18:19:51 +0000   Sat, 14 Sep 2024 18:14:29 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 14 Sep 2024 18:19:51 +0000   Sat, 14 Sep 2024 18:14:29 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 14 Sep 2024 18:19:51 +0000   Sat, 14 Sep 2024 18:14:29 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 14 Sep 2024 18:19:51 +0000   Sat, 14 Sep 2024 18:14:32 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.38
	  Hostname:    no-preload-168587
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 fdba1f0c25954cbfa58478c74a6c95ca
	  System UUID:                fdba1f0c-2595-4cbf-a584-78c74a6c95ca
	  Boot ID:                    de44ce6f-ef46-437b-b02c-11b6fc1227ec
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7c65d6cfc9-nzpdb                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m10s
	  kube-system                 coredns-7c65d6cfc9-qrgr9                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m10s
	  kube-system                 etcd-no-preload-168587                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         9m15s
	  kube-system                 kube-apiserver-no-preload-168587             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m15s
	  kube-system                 kube-controller-manager-no-preload-168587    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m15s
	  kube-system                 kube-proxy-xdj6b                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m11s
	  kube-system                 kube-scheduler-no-preload-168587             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m15s
	  kube-system                 metrics-server-6867b74b74-cmcz4              100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         9m10s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m10s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 9m8s   kube-proxy       
	  Normal  Starting                 9m15s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  9m15s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  9m15s  kubelet          Node no-preload-168587 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m15s  kubelet          Node no-preload-168587 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m15s  kubelet          Node no-preload-168587 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           9m11s  node-controller  Node no-preload-168587 event: Registered Node no-preload-168587 in Controller
	
	
	==> dmesg <==
	[  +0.037593] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.958316] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.912642] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +2.462754] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000009] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.346974] systemd-fstab-generator[629]: Ignoring "noauto" option for root device
	[  +0.061166] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.064543] systemd-fstab-generator[641]: Ignoring "noauto" option for root device
	[  +0.225904] systemd-fstab-generator[655]: Ignoring "noauto" option for root device
	[  +0.135807] systemd-fstab-generator[667]: Ignoring "noauto" option for root device
	[  +0.283852] systemd-fstab-generator[697]: Ignoring "noauto" option for root device
	[ +15.300876] systemd-fstab-generator[1233]: Ignoring "noauto" option for root device
	[  +0.064189] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.675224] systemd-fstab-generator[1355]: Ignoring "noauto" option for root device
	[  +3.939778] kauditd_printk_skb: 97 callbacks suppressed
	[  +5.200936] kauditd_printk_skb: 57 callbacks suppressed
	[Sep14 18:10] kauditd_printk_skb: 28 callbacks suppressed
	[Sep14 18:14] kauditd_printk_skb: 4 callbacks suppressed
	[  +1.403616] systemd-fstab-generator[3004]: Ignoring "noauto" option for root device
	[  +4.857084] kauditd_printk_skb: 56 callbacks suppressed
	[  +2.025689] systemd-fstab-generator[3326]: Ignoring "noauto" option for root device
	[  +4.346195] systemd-fstab-generator[3428]: Ignoring "noauto" option for root device
	[  +0.094058] kauditd_printk_skb: 14 callbacks suppressed
	[  +8.743720] kauditd_printk_skb: 82 callbacks suppressed
	
	
	==> etcd [1e840de5726f1c2bfcfbd50bd5ee12dcad4eb9761ec850513c1c9642ea3842f5] <==
	{"level":"info","ts":"2024-09-14T18:14:29.408204Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-09-14T18:14:29.410194Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.39.38:2380"}
	{"level":"info","ts":"2024-09-14T18:14:29.410225Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.39.38:2380"}
	{"level":"info","ts":"2024-09-14T18:14:29.410677Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"38b26e584d45e0da","initial-advertise-peer-urls":["https://192.168.39.38:2380"],"listen-peer-urls":["https://192.168.39.38:2380"],"advertise-client-urls":["https://192.168.39.38:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.38:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-09-14T18:14:29.410781Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-09-14T18:14:30.151170Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"38b26e584d45e0da is starting a new election at term 1"}
	{"level":"info","ts":"2024-09-14T18:14:30.151231Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"38b26e584d45e0da became pre-candidate at term 1"}
	{"level":"info","ts":"2024-09-14T18:14:30.151264Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"38b26e584d45e0da received MsgPreVoteResp from 38b26e584d45e0da at term 1"}
	{"level":"info","ts":"2024-09-14T18:14:30.151284Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"38b26e584d45e0da became candidate at term 2"}
	{"level":"info","ts":"2024-09-14T18:14:30.151292Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"38b26e584d45e0da received MsgVoteResp from 38b26e584d45e0da at term 2"}
	{"level":"info","ts":"2024-09-14T18:14:30.151300Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"38b26e584d45e0da became leader at term 2"}
	{"level":"info","ts":"2024-09-14T18:14:30.151308Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 38b26e584d45e0da elected leader 38b26e584d45e0da at term 2"}
	{"level":"info","ts":"2024-09-14T18:14:30.152658Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-14T18:14:30.153802Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"38b26e584d45e0da","local-member-attributes":"{Name:no-preload-168587 ClientURLs:[https://192.168.39.38:2379]}","request-path":"/0/members/38b26e584d45e0da/attributes","cluster-id":"afb1a6a08b4dab74","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-14T18:14:30.154588Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"afb1a6a08b4dab74","local-member-id":"38b26e584d45e0da","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-14T18:14:30.154681Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-14T18:14:30.154722Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-14T18:14:30.154733Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-14T18:14:30.155244Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-14T18:14:30.156085Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-14T18:14:30.156780Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-14T18:14:30.156887Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-14T18:14:30.156913Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-14T18:14:30.157575Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-14T18:14:30.158321Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.38:2379"}
	
	
	==> kernel <==
	 18:23:49 up 14 min,  0 users,  load average: 0.26, 0.23, 0.18
	Linux no-preload-168587 5.10.207 #1 SMP Sat Sep 14 07:35:46 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [5ce526810d6c8eea17470493ff66c9f49a70886febdf256562b75aa84d8444b2] <==
	W0914 18:19:32.682677       1 handler_proxy.go:99] no RequestInfo found in the context
	E0914 18:19:32.682866       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0914 18:19:32.683816       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0914 18:19:32.683900       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0914 18:20:32.684797       1 handler_proxy.go:99] no RequestInfo found in the context
	E0914 18:20:32.685114       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0914 18:20:32.685190       1 handler_proxy.go:99] no RequestInfo found in the context
	E0914 18:20:32.685226       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0914 18:20:32.686357       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0914 18:20:32.686422       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0914 18:22:32.687612       1 handler_proxy.go:99] no RequestInfo found in the context
	E0914 18:22:32.687788       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0914 18:22:32.687883       1 handler_proxy.go:99] no RequestInfo found in the context
	E0914 18:22:32.687898       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0914 18:22:32.688932       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0914 18:22:32.688996       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-apiserver [daf75a85550981793df6004b119a63cf610a35685a49a69dd7a91ec0c826055c] <==
	W0914 18:14:23.489006       1 logging.go:55] [core] [Channel #28 SubChannel #29]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0914 18:14:23.498987       1 logging.go:55] [core] [Channel #85 SubChannel #86]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0914 18:14:23.571245       1 logging.go:55] [core] [Channel #109 SubChannel #110]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0914 18:14:23.587457       1 logging.go:55] [core] [Channel #76 SubChannel #77]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0914 18:14:23.593977       1 logging.go:55] [core] [Channel #31 SubChannel #32]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0914 18:14:23.625618       1 logging.go:55] [core] [Channel #40 SubChannel #41]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0914 18:14:23.636275       1 logging.go:55] [core] [Channel #88 SubChannel #89]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0914 18:14:23.643644       1 logging.go:55] [core] [Channel #121 SubChannel #122]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0914 18:14:23.674729       1 logging.go:55] [core] [Channel #115 SubChannel #116]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0914 18:14:23.729985       1 logging.go:55] [core] [Channel #37 SubChannel #38]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0914 18:14:23.765244       1 logging.go:55] [core] [Channel #103 SubChannel #104]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0914 18:14:23.771756       1 logging.go:55] [core] [Channel #151 SubChannel #152]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0914 18:14:23.817563       1 logging.go:55] [core] [Channel #22 SubChannel #23]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0914 18:14:23.861374       1 logging.go:55] [core] [Channel #169 SubChannel #170]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0914 18:14:23.902500       1 logging.go:55] [core] [Channel #175 SubChannel #176]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0914 18:14:23.903889       1 logging.go:55] [core] [Channel #118 SubChannel #119]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0914 18:14:23.953608       1 logging.go:55] [core] [Channel #172 SubChannel #173]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0914 18:14:23.993427       1 logging.go:55] [core] [Channel #49 SubChannel #50]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0914 18:14:24.020306       1 logging.go:55] [core] [Channel #205 SubChannel #206]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0914 18:14:24.057988       1 logging.go:55] [core] [Channel #16 SubChannel #18]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0914 18:14:24.282323       1 logging.go:55] [core] [Channel #163 SubChannel #164]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0914 18:14:24.444894       1 logging.go:55] [core] [Channel #181 SubChannel #182]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0914 18:14:24.629546       1 logging.go:55] [core] [Channel #73 SubChannel #74]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0914 18:14:24.760708       1 logging.go:55] [core] [Channel #94 SubChannel #95]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0914 18:14:25.938002       1 logging.go:55] [core] [Channel #205 SubChannel #206]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [f5ee6161f59dddd87a981b174e9ed7d96412afb9c9ebb2bc51c9f1cc36ee11cf] <==
	E0914 18:18:38.676916       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0914 18:18:39.126834       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0914 18:19:08.686220       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0914 18:19:09.140867       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0914 18:19:38.693626       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0914 18:19:39.147998       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0914 18:19:51.728983       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="no-preload-168587"
	E0914 18:20:08.700594       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0914 18:20:09.156888       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0914 18:20:37.733313       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="265.082µs"
	E0914 18:20:38.707629       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0914 18:20:39.164942       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0914 18:20:48.735140       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="504.552µs"
	E0914 18:21:08.718528       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0914 18:21:09.179970       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0914 18:21:38.725266       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0914 18:21:39.189161       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0914 18:22:08.731872       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0914 18:22:09.201110       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0914 18:22:38.742141       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0914 18:22:39.213825       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0914 18:23:08.750504       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0914 18:23:09.228921       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0914 18:23:38.757470       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0914 18:23:39.236748       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [feceee2bacff40291f6daff2ccdc08e3e51bd6da7fcc93d21080c7227693e751] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0914 18:14:40.809480       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0914 18:14:40.823253       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.38"]
	E0914 18:14:40.823466       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0914 18:14:40.937946       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0914 18:14:40.937985       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0914 18:14:40.938008       1 server_linux.go:169] "Using iptables Proxier"
	I0914 18:14:40.950712       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0914 18:14:40.954331       1 server.go:483] "Version info" version="v1.31.1"
	I0914 18:14:40.954433       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0914 18:14:40.956875       1 config.go:199] "Starting service config controller"
	I0914 18:14:40.956981       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0914 18:14:40.957087       1 config.go:105] "Starting endpoint slice config controller"
	I0914 18:14:40.957121       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0914 18:14:40.957908       1 config.go:328] "Starting node config controller"
	I0914 18:14:40.957952       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0914 18:14:41.057897       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0914 18:14:41.057957       1 shared_informer.go:320] Caches are synced for service config
	I0914 18:14:41.057982       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [0197ffbc2979d4120aae294136ad27f9c345ca48e8355273231be9ae7240f7ff] <==
	W0914 18:14:32.631580       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0914 18:14:32.632344       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0914 18:14:32.675625       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0914 18:14:32.675738       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0914 18:14:32.682387       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0914 18:14:32.683804       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0914 18:14:32.757403       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0914 18:14:32.757536       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0914 18:14:32.793006       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0914 18:14:32.793161       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0914 18:14:32.875307       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0914 18:14:32.875470       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0914 18:14:32.921891       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0914 18:14:32.922089       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0914 18:14:32.994100       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0914 18:14:32.994216       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0914 18:14:33.047411       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0914 18:14:33.047559       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0914 18:14:33.069314       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0914 18:14:33.069417       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0914 18:14:33.093371       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0914 18:14:33.093468       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0914 18:14:33.093547       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0914 18:14:33.093605       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	I0914 18:14:35.813371       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 14 18:22:36 no-preload-168587 kubelet[3332]: E0914 18:22:36.716618    3332 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-cmcz4" podUID="24cea6b3-a107-4110-ac29-88389b55bbdc"
	Sep 14 18:22:44 no-preload-168587 kubelet[3332]: E0914 18:22:44.916396    3332 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726338164915461631,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 18:22:44 no-preload-168587 kubelet[3332]: E0914 18:22:44.917952    3332 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726338164915461631,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 18:22:48 no-preload-168587 kubelet[3332]: E0914 18:22:48.715909    3332 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-cmcz4" podUID="24cea6b3-a107-4110-ac29-88389b55bbdc"
	Sep 14 18:22:54 no-preload-168587 kubelet[3332]: E0914 18:22:54.919772    3332 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726338174919022726,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 18:22:54 no-preload-168587 kubelet[3332]: E0914 18:22:54.920258    3332 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726338174919022726,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 18:23:03 no-preload-168587 kubelet[3332]: E0914 18:23:03.715151    3332 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-cmcz4" podUID="24cea6b3-a107-4110-ac29-88389b55bbdc"
	Sep 14 18:23:04 no-preload-168587 kubelet[3332]: E0914 18:23:04.922594    3332 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726338184922205967,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 18:23:04 no-preload-168587 kubelet[3332]: E0914 18:23:04.923102    3332 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726338184922205967,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 18:23:14 no-preload-168587 kubelet[3332]: E0914 18:23:14.924981    3332 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726338194924439599,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 18:23:14 no-preload-168587 kubelet[3332]: E0914 18:23:14.925024    3332 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726338194924439599,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 18:23:16 no-preload-168587 kubelet[3332]: E0914 18:23:16.717091    3332 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-cmcz4" podUID="24cea6b3-a107-4110-ac29-88389b55bbdc"
	Sep 14 18:23:24 no-preload-168587 kubelet[3332]: E0914 18:23:24.927164    3332 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726338204926530763,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 18:23:24 no-preload-168587 kubelet[3332]: E0914 18:23:24.927331    3332 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726338204926530763,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 18:23:31 no-preload-168587 kubelet[3332]: E0914 18:23:31.714939    3332 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-cmcz4" podUID="24cea6b3-a107-4110-ac29-88389b55bbdc"
	Sep 14 18:23:34 no-preload-168587 kubelet[3332]: E0914 18:23:34.743349    3332 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 14 18:23:34 no-preload-168587 kubelet[3332]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 14 18:23:34 no-preload-168587 kubelet[3332]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 14 18:23:34 no-preload-168587 kubelet[3332]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 14 18:23:34 no-preload-168587 kubelet[3332]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 14 18:23:34 no-preload-168587 kubelet[3332]: E0914 18:23:34.930612    3332 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726338214929915192,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 18:23:34 no-preload-168587 kubelet[3332]: E0914 18:23:34.930705    3332 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726338214929915192,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 18:23:44 no-preload-168587 kubelet[3332]: E0914 18:23:44.932710    3332 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726338224932288092,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 18:23:44 no-preload-168587 kubelet[3332]: E0914 18:23:44.933147    3332 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726338224932288092,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 18:23:46 no-preload-168587 kubelet[3332]: E0914 18:23:46.715686    3332 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-cmcz4" podUID="24cea6b3-a107-4110-ac29-88389b55bbdc"
	
	
	==> storage-provisioner [cfaed3fc943fc19b68f4391b6cea58d5b9e862d6e30de59cece475d8eadcbab5] <==
	I0914 18:14:40.704269       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0914 18:14:40.729132       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0914 18:14:40.729213       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0914 18:14:40.749413       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0914 18:14:40.749739       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-168587_611d2d49-08a6-4397-8515-7b32453c843a!
	I0914 18:14:40.761012       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"af481965-643e-4ba6-8fdf-07b2d1db4d95", APIVersion:"v1", ResourceVersion:"390", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-168587_611d2d49-08a6-4397-8515-7b32453c843a became leader
	I0914 18:14:40.850628       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-168587_611d2d49-08a6-4397-8515-7b32453c843a!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-168587 -n no-preload-168587
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-168587 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-6867b74b74-cmcz4
helpers_test.go:274: ======> post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context no-preload-168587 describe pod metrics-server-6867b74b74-cmcz4
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context no-preload-168587 describe pod metrics-server-6867b74b74-cmcz4: exit status 1 (64.357016ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-6867b74b74-cmcz4" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context no-preload-168587 describe pod metrics-server-6867b74b74-cmcz4: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (544.44s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (543.59s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.80:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.80:8443: connect: connection refused
E0914 18:19:04.947344   16016 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/functional-764671/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.80:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.80:8443: connect: connection refused
[previous warning repeated 159 more times while the apiserver at 192.168.83.80:8443 remained unreachable]
E0914 18:21:45.625457   16016 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/addons-996992/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.80:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.80:8443: connect: connection refused
(last warning repeated 140 times in total; the apiserver at 192.168.83.80:8443 kept refusing connections)
E0914 18:24:04.947324   16016 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/functional-764671/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.80:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.80:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.80:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.80:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.80:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.80:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.80:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.80:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.80:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.80:8443: connect: connection refused
... (the preceding "connection refused" warning repeated verbatim on every subsequent poll of the kubernetes-dashboard pod list while the apiserver at 192.168.83.80:8443 remained unreachable) ...
E0914 18:24:48.698286   16016 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/addons-996992/client.crt: no such file or directory" logger="UnhandledError"
... (further identical "connection refused" warnings followed on each poll of the kubernetes-dashboard pod list until the client rate limiter gave up below) ...
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-556121 -n old-k8s-version-556121
start_stop_delete_test.go:274: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-556121 -n old-k8s-version-556121: exit status 2 (229.822148ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:274: status error: exit status 2 (may be ok)
start_stop_delete_test.go:274: "old-k8s-version-556121" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
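For reference, the condition polled above can be re-checked by hand once the apiserver is reachable again. A minimal sketch, assuming the kubeconfig context carries the profile name (as the other profiles in this report do) and reusing the namespace and label selector from the warnings:

  kubectl --context old-k8s-version-556121 get pods -n kubernetes-dashboard -l k8s-app=kubernetes-dashboard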
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-556121 -n old-k8s-version-556121
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-556121 -n old-k8s-version-556121: exit status 2 (232.507406ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-556121 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-556121 logs -n 25: (1.711239695s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p stopped-upgrade-319416                              | stopped-upgrade-319416       | jenkins | v1.34.0 | 14 Sep 24 18:00 UTC | 14 Sep 24 18:00 UTC |
	| start   | -p embed-certs-044534                                  | embed-certs-044534           | jenkins | v1.34.0 | 14 Sep 24 18:00 UTC | 14 Sep 24 18:01 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-168587             | no-preload-168587            | jenkins | v1.34.0 | 14 Sep 24 18:00 UTC | 14 Sep 24 18:00 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-168587                                   | no-preload-168587            | jenkins | v1.34.0 | 14 Sep 24 18:00 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-044534            | embed-certs-044534           | jenkins | v1.34.0 | 14 Sep 24 18:01 UTC | 14 Sep 24 18:01 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-044534                                  | embed-certs-044534           | jenkins | v1.34.0 | 14 Sep 24 18:01 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-470019                           | kubernetes-upgrade-470019    | jenkins | v1.34.0 | 14 Sep 24 18:02 UTC | 14 Sep 24 18:02 UTC |
	| start   | -p kubernetes-upgrade-470019                           | kubernetes-upgrade-470019    | jenkins | v1.34.0 | 14 Sep 24 18:02 UTC | 14 Sep 24 18:02 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-470019                           | kubernetes-upgrade-470019    | jenkins | v1.34.0 | 14 Sep 24 18:02 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-470019                           | kubernetes-upgrade-470019    | jenkins | v1.34.0 | 14 Sep 24 18:02 UTC | 14 Sep 24 18:02 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-470019                           | kubernetes-upgrade-470019    | jenkins | v1.34.0 | 14 Sep 24 18:03 UTC | 14 Sep 24 18:03 UTC |
	| delete  | -p                                                     | disable-driver-mounts-444413 | jenkins | v1.34.0 | 14 Sep 24 18:03 UTC | 14 Sep 24 18:03 UTC |
	|         | disable-driver-mounts-444413                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-243449 | jenkins | v1.34.0 | 14 Sep 24 18:03 UTC | 14 Sep 24 18:03 UTC |
	|         | default-k8s-diff-port-243449                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-556121        | old-k8s-version-556121       | jenkins | v1.34.0 | 14 Sep 24 18:03 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-168587                  | no-preload-168587            | jenkins | v1.34.0 | 14 Sep 24 18:03 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-168587                                   | no-preload-168587            | jenkins | v1.34.0 | 14 Sep 24 18:03 UTC | 14 Sep 24 18:14 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-044534                 | embed-certs-044534           | jenkins | v1.34.0 | 14 Sep 24 18:03 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-044534                                  | embed-certs-044534           | jenkins | v1.34.0 | 14 Sep 24 18:04 UTC | 14 Sep 24 18:13 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-243449  | default-k8s-diff-port-243449 | jenkins | v1.34.0 | 14 Sep 24 18:04 UTC | 14 Sep 24 18:04 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-243449 | jenkins | v1.34.0 | 14 Sep 24 18:04 UTC |                     |
	|         | default-k8s-diff-port-243449                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-556121                              | old-k8s-version-556121       | jenkins | v1.34.0 | 14 Sep 24 18:05 UTC | 14 Sep 24 18:05 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-556121             | old-k8s-version-556121       | jenkins | v1.34.0 | 14 Sep 24 18:05 UTC | 14 Sep 24 18:05 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-556121                              | old-k8s-version-556121       | jenkins | v1.34.0 | 14 Sep 24 18:05 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-243449       | default-k8s-diff-port-243449 | jenkins | v1.34.0 | 14 Sep 24 18:06 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-243449 | jenkins | v1.34.0 | 14 Sep 24 18:06 UTC | 14 Sep 24 18:13 UTC |
	|         | default-k8s-diff-port-243449                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
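	The final start of old-k8s-version-556121 is spread across several rows of the audit table above; reassembled into a single command line (assuming the same out/minikube-linux-amd64 binary used elsewhere in this report) it would read:
	  out/minikube-linux-amd64 start -p old-k8s-version-556121 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.20.0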
	
	
	==> Last Start <==
	Log file created at: 2024/09/14 18:06:40
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0914 18:06:40.299903   63448 out.go:345] Setting OutFile to fd 1 ...
	I0914 18:06:40.300039   63448 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 18:06:40.300049   63448 out.go:358] Setting ErrFile to fd 2...
	I0914 18:06:40.300054   63448 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 18:06:40.300240   63448 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19643-8806/.minikube/bin
	I0914 18:06:40.300801   63448 out.go:352] Setting JSON to false
	I0914 18:06:40.301779   63448 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":6544,"bootTime":1726330656,"procs":200,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0914 18:06:40.301879   63448 start.go:139] virtualization: kvm guest
	I0914 18:06:40.303963   63448 out.go:177] * [default-k8s-diff-port-243449] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0914 18:06:40.305394   63448 out.go:177]   - MINIKUBE_LOCATION=19643
	I0914 18:06:40.305429   63448 notify.go:220] Checking for updates...
	I0914 18:06:40.308148   63448 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0914 18:06:40.309226   63448 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19643-8806/kubeconfig
	I0914 18:06:40.310360   63448 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19643-8806/.minikube
	I0914 18:06:40.311509   63448 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0914 18:06:40.312543   63448 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0914 18:06:40.314418   63448 config.go:182] Loaded profile config "default-k8s-diff-port-243449": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0914 18:06:40.315063   63448 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 18:06:40.315154   63448 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 18:06:40.330033   63448 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44655
	I0914 18:06:40.330502   63448 main.go:141] libmachine: () Calling .GetVersion
	I0914 18:06:40.331014   63448 main.go:141] libmachine: Using API Version  1
	I0914 18:06:40.331035   63448 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 18:06:40.331372   63448 main.go:141] libmachine: () Calling .GetMachineName
	I0914 18:06:40.331519   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .DriverName
	I0914 18:06:40.331729   63448 driver.go:394] Setting default libvirt URI to qemu:///system
	I0914 18:06:40.332043   63448 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 18:06:40.332089   63448 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 18:06:40.346598   63448 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37837
	I0914 18:06:40.347021   63448 main.go:141] libmachine: () Calling .GetVersion
	I0914 18:06:40.347501   63448 main.go:141] libmachine: Using API Version  1
	I0914 18:06:40.347536   63448 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 18:06:40.347863   63448 main.go:141] libmachine: () Calling .GetMachineName
	I0914 18:06:40.348042   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .DriverName
	I0914 18:06:40.380416   63448 out.go:177] * Using the kvm2 driver based on existing profile
	I0914 18:06:40.381578   63448 start.go:297] selected driver: kvm2
	I0914 18:06:40.381589   63448 start.go:901] validating driver "kvm2" against &{Name:default-k8s-diff-port-243449 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19643/minikube-v1.34.0-1726281733-19643-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726281268-19643@sha256:cce8be0c1ac4e3d852132008ef1cc1dcf5b79f708d025db83f146ae65db32e8e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesCo
nfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-243449 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.38 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks
:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0914 18:06:40.381693   63448 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0914 18:06:40.382390   63448 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 18:06:40.382478   63448 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19643-8806/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0914 18:06:40.397521   63448 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0914 18:06:40.397921   63448 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0914 18:06:40.397959   63448 cni.go:84] Creating CNI manager for ""
	I0914 18:06:40.398002   63448 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0914 18:06:40.398040   63448 start.go:340] cluster config:
	{Name:default-k8s-diff-port-243449 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19643/minikube-v1.34.0-1726281733-19643-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726281268-19643@sha256:cce8be0c1ac4e3d852132008ef1cc1dcf5b79f708d025db83f146ae65db32e8e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-243449 Names
pace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.38 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-h
ost Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0914 18:06:40.398145   63448 iso.go:125] acquiring lock: {Name:mk538f7a4abc7956f82511183088cbfac1e66ebb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 18:06:40.399920   63448 out.go:177] * Starting "default-k8s-diff-port-243449" primary control-plane node in "default-k8s-diff-port-243449" cluster
	I0914 18:06:39.170425   62207 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0914 18:06:40.400913   63448 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0914 18:06:40.400954   63448 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19643-8806/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0914 18:06:40.400966   63448 cache.go:56] Caching tarball of preloaded images
	I0914 18:06:40.401038   63448 preload.go:172] Found /home/jenkins/minikube-integration/19643-8806/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0914 18:06:40.401055   63448 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0914 18:06:40.401185   63448 profile.go:143] Saving config to /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/default-k8s-diff-port-243449/config.json ...
	I0914 18:06:40.401421   63448 start.go:360] acquireMachinesLock for default-k8s-diff-port-243449: {Name:mk26748ac63472c3f8b6f3848c12e76160c2970c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0914 18:06:45.250426   62207 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0914 18:06:48.322531   62207 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0914 18:06:54.402441   62207 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0914 18:06:57.474440   62207 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0914 18:07:03.554541   62207 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0914 18:07:06.626472   62207 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0914 18:07:12.706430   62207 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0914 18:07:15.778448   62207 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0914 18:07:21.858453   62207 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0914 18:07:24.930473   62207 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0914 18:07:31.010432   62207 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0914 18:07:34.082423   62207 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0914 18:07:40.162417   62207 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0914 18:07:43.234501   62207 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0914 18:07:49.314533   62207 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0914 18:07:52.386453   62207 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0914 18:07:58.466444   62207 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0914 18:08:01.538476   62207 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0914 18:08:04.546206   62554 start.go:364] duration metric: took 3m59.524513317s to acquireMachinesLock for "embed-certs-044534"
	I0914 18:08:04.546263   62554 start.go:96] Skipping create...Using existing machine configuration
	I0914 18:08:04.546275   62554 fix.go:54] fixHost starting: 
	I0914 18:08:04.546585   62554 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 18:08:04.546636   62554 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 18:08:04.562182   62554 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46087
	I0914 18:08:04.562704   62554 main.go:141] libmachine: () Calling .GetVersion
	I0914 18:08:04.563264   62554 main.go:141] libmachine: Using API Version  1
	I0914 18:08:04.563300   62554 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 18:08:04.563714   62554 main.go:141] libmachine: () Calling .GetMachineName
	I0914 18:08:04.563947   62554 main.go:141] libmachine: (embed-certs-044534) Calling .DriverName
	I0914 18:08:04.564131   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetState
	I0914 18:08:04.566043   62554 fix.go:112] recreateIfNeeded on embed-certs-044534: state=Stopped err=<nil>
	I0914 18:08:04.566073   62554 main.go:141] libmachine: (embed-certs-044534) Calling .DriverName
	W0914 18:08:04.566289   62554 fix.go:138] unexpected machine state, will restart: <nil>
	I0914 18:08:04.567993   62554 out.go:177] * Restarting existing kvm2 VM for "embed-certs-044534" ...
	I0914 18:08:04.570182   62554 main.go:141] libmachine: (embed-certs-044534) Calling .Start
	I0914 18:08:04.570431   62554 main.go:141] libmachine: (embed-certs-044534) Ensuring networks are active...
	I0914 18:08:04.571374   62554 main.go:141] libmachine: (embed-certs-044534) Ensuring network default is active
	I0914 18:08:04.571748   62554 main.go:141] libmachine: (embed-certs-044534) Ensuring network mk-embed-certs-044534 is active
	I0914 18:08:04.572124   62554 main.go:141] libmachine: (embed-certs-044534) Getting domain xml...
	I0914 18:08:04.572852   62554 main.go:141] libmachine: (embed-certs-044534) Creating domain...
	I0914 18:08:04.540924   62207 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0914 18:08:04.540957   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetMachineName
	I0914 18:08:04.541310   62207 buildroot.go:166] provisioning hostname "no-preload-168587"
	I0914 18:08:04.541335   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetMachineName
	I0914 18:08:04.541586   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetSSHHostname
	I0914 18:08:04.546055   62207 machine.go:96] duration metric: took 4m34.63489942s to provisionDockerMachine
	I0914 18:08:04.546096   62207 fix.go:56] duration metric: took 4m34.662932355s for fixHost
	I0914 18:08:04.546102   62207 start.go:83] releasing machines lock for "no-preload-168587", held for 4m34.66297244s
	W0914 18:08:04.546122   62207 start.go:714] error starting host: provision: host is not running
	W0914 18:08:04.546220   62207 out.go:270] ! StartHost failed, but will try again: provision: host is not running
	I0914 18:08:04.546231   62207 start.go:729] Will try again in 5 seconds ...
	I0914 18:08:05.812076   62554 main.go:141] libmachine: (embed-certs-044534) Waiting to get IP...
	I0914 18:08:05.812955   62554 main.go:141] libmachine: (embed-certs-044534) DBG | domain embed-certs-044534 has defined MAC address 52:54:00:f7:d3:8e in network mk-embed-certs-044534
	I0914 18:08:05.813302   62554 main.go:141] libmachine: (embed-certs-044534) DBG | unable to find current IP address of domain embed-certs-044534 in network mk-embed-certs-044534
	I0914 18:08:05.813380   62554 main.go:141] libmachine: (embed-certs-044534) DBG | I0914 18:08:05.813279   63779 retry.go:31] will retry after 298.8389ms: waiting for machine to come up
	I0914 18:08:06.114130   62554 main.go:141] libmachine: (embed-certs-044534) DBG | domain embed-certs-044534 has defined MAC address 52:54:00:f7:d3:8e in network mk-embed-certs-044534
	I0914 18:08:06.114575   62554 main.go:141] libmachine: (embed-certs-044534) DBG | unable to find current IP address of domain embed-certs-044534 in network mk-embed-certs-044534
	I0914 18:08:06.114604   62554 main.go:141] libmachine: (embed-certs-044534) DBG | I0914 18:08:06.114530   63779 retry.go:31] will retry after 359.694721ms: waiting for machine to come up
	I0914 18:08:06.476183   62554 main.go:141] libmachine: (embed-certs-044534) DBG | domain embed-certs-044534 has defined MAC address 52:54:00:f7:d3:8e in network mk-embed-certs-044534
	I0914 18:08:06.476801   62554 main.go:141] libmachine: (embed-certs-044534) DBG | unable to find current IP address of domain embed-certs-044534 in network mk-embed-certs-044534
	I0914 18:08:06.476828   62554 main.go:141] libmachine: (embed-certs-044534) DBG | I0914 18:08:06.476745   63779 retry.go:31] will retry after 425.650219ms: waiting for machine to come up
	I0914 18:08:06.904358   62554 main.go:141] libmachine: (embed-certs-044534) DBG | domain embed-certs-044534 has defined MAC address 52:54:00:f7:d3:8e in network mk-embed-certs-044534
	I0914 18:08:06.904794   62554 main.go:141] libmachine: (embed-certs-044534) DBG | unable to find current IP address of domain embed-certs-044534 in network mk-embed-certs-044534
	I0914 18:08:06.904816   62554 main.go:141] libmachine: (embed-certs-044534) DBG | I0914 18:08:06.904749   63779 retry.go:31] will retry after 433.157325ms: waiting for machine to come up
	I0914 18:08:07.339139   62554 main.go:141] libmachine: (embed-certs-044534) DBG | domain embed-certs-044534 has defined MAC address 52:54:00:f7:d3:8e in network mk-embed-certs-044534
	I0914 18:08:07.339578   62554 main.go:141] libmachine: (embed-certs-044534) DBG | unable to find current IP address of domain embed-certs-044534 in network mk-embed-certs-044534
	I0914 18:08:07.339602   62554 main.go:141] libmachine: (embed-certs-044534) DBG | I0914 18:08:07.339512   63779 retry.go:31] will retry after 547.817102ms: waiting for machine to come up
	I0914 18:08:07.889390   62554 main.go:141] libmachine: (embed-certs-044534) DBG | domain embed-certs-044534 has defined MAC address 52:54:00:f7:d3:8e in network mk-embed-certs-044534
	I0914 18:08:07.889888   62554 main.go:141] libmachine: (embed-certs-044534) DBG | unable to find current IP address of domain embed-certs-044534 in network mk-embed-certs-044534
	I0914 18:08:07.889993   62554 main.go:141] libmachine: (embed-certs-044534) DBG | I0914 18:08:07.889820   63779 retry.go:31] will retry after 603.749753ms: waiting for machine to come up
	I0914 18:08:08.495673   62554 main.go:141] libmachine: (embed-certs-044534) DBG | domain embed-certs-044534 has defined MAC address 52:54:00:f7:d3:8e in network mk-embed-certs-044534
	I0914 18:08:08.496047   62554 main.go:141] libmachine: (embed-certs-044534) DBG | unable to find current IP address of domain embed-certs-044534 in network mk-embed-certs-044534
	I0914 18:08:08.496076   62554 main.go:141] libmachine: (embed-certs-044534) DBG | I0914 18:08:08.495995   63779 retry.go:31] will retry after 831.027535ms: waiting for machine to come up
	I0914 18:08:09.329209   62554 main.go:141] libmachine: (embed-certs-044534) DBG | domain embed-certs-044534 has defined MAC address 52:54:00:f7:d3:8e in network mk-embed-certs-044534
	I0914 18:08:09.329622   62554 main.go:141] libmachine: (embed-certs-044534) DBG | unable to find current IP address of domain embed-certs-044534 in network mk-embed-certs-044534
	I0914 18:08:09.329643   62554 main.go:141] libmachine: (embed-certs-044534) DBG | I0914 18:08:09.329591   63779 retry.go:31] will retry after 1.429850518s: waiting for machine to come up
	I0914 18:08:09.548738   62207 start.go:360] acquireMachinesLock for no-preload-168587: {Name:mk26748ac63472c3f8b6f3848c12e76160c2970c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0914 18:08:10.761510   62554 main.go:141] libmachine: (embed-certs-044534) DBG | domain embed-certs-044534 has defined MAC address 52:54:00:f7:d3:8e in network mk-embed-certs-044534
	I0914 18:08:10.761884   62554 main.go:141] libmachine: (embed-certs-044534) DBG | unable to find current IP address of domain embed-certs-044534 in network mk-embed-certs-044534
	I0914 18:08:10.761915   62554 main.go:141] libmachine: (embed-certs-044534) DBG | I0914 18:08:10.761839   63779 retry.go:31] will retry after 1.146619754s: waiting for machine to come up
	I0914 18:08:11.910130   62554 main.go:141] libmachine: (embed-certs-044534) DBG | domain embed-certs-044534 has defined MAC address 52:54:00:f7:d3:8e in network mk-embed-certs-044534
	I0914 18:08:11.910542   62554 main.go:141] libmachine: (embed-certs-044534) DBG | unable to find current IP address of domain embed-certs-044534 in network mk-embed-certs-044534
	I0914 18:08:11.910568   62554 main.go:141] libmachine: (embed-certs-044534) DBG | I0914 18:08:11.910500   63779 retry.go:31] will retry after 1.582382319s: waiting for machine to come up
	I0914 18:08:13.495352   62554 main.go:141] libmachine: (embed-certs-044534) DBG | domain embed-certs-044534 has defined MAC address 52:54:00:f7:d3:8e in network mk-embed-certs-044534
	I0914 18:08:13.495852   62554 main.go:141] libmachine: (embed-certs-044534) DBG | unable to find current IP address of domain embed-certs-044534 in network mk-embed-certs-044534
	I0914 18:08:13.495872   62554 main.go:141] libmachine: (embed-certs-044534) DBG | I0914 18:08:13.495808   63779 retry.go:31] will retry after 2.117717335s: waiting for machine to come up
	I0914 18:08:15.615461   62554 main.go:141] libmachine: (embed-certs-044534) DBG | domain embed-certs-044534 has defined MAC address 52:54:00:f7:d3:8e in network mk-embed-certs-044534
	I0914 18:08:15.615896   62554 main.go:141] libmachine: (embed-certs-044534) DBG | unable to find current IP address of domain embed-certs-044534 in network mk-embed-certs-044534
	I0914 18:08:15.615918   62554 main.go:141] libmachine: (embed-certs-044534) DBG | I0914 18:08:15.615846   63779 retry.go:31] will retry after 3.071486865s: waiting for machine to come up
	I0914 18:08:18.691109   62554 main.go:141] libmachine: (embed-certs-044534) DBG | domain embed-certs-044534 has defined MAC address 52:54:00:f7:d3:8e in network mk-embed-certs-044534
	I0914 18:08:18.691572   62554 main.go:141] libmachine: (embed-certs-044534) DBG | unable to find current IP address of domain embed-certs-044534 in network mk-embed-certs-044534
	I0914 18:08:18.691605   62554 main.go:141] libmachine: (embed-certs-044534) DBG | I0914 18:08:18.691513   63779 retry.go:31] will retry after 4.250544955s: waiting for machine to come up
	I0914 18:08:24.143036   62996 start.go:364] duration metric: took 3m18.692107902s to acquireMachinesLock for "old-k8s-version-556121"
	I0914 18:08:24.143089   62996 start.go:96] Skipping create...Using existing machine configuration
	I0914 18:08:24.143094   62996 fix.go:54] fixHost starting: 
	I0914 18:08:24.143474   62996 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 18:08:24.143527   62996 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 18:08:24.160421   62996 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44345
	I0914 18:08:24.160864   62996 main.go:141] libmachine: () Calling .GetVersion
	I0914 18:08:24.161467   62996 main.go:141] libmachine: Using API Version  1
	I0914 18:08:24.161495   62996 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 18:08:24.161913   62996 main.go:141] libmachine: () Calling .GetMachineName
	I0914 18:08:24.162137   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .DriverName
	I0914 18:08:24.162322   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetState
	I0914 18:08:24.163974   62996 fix.go:112] recreateIfNeeded on old-k8s-version-556121: state=Stopped err=<nil>
	I0914 18:08:24.164020   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .DriverName
	W0914 18:08:24.164197   62996 fix.go:138] unexpected machine state, will restart: <nil>
	I0914 18:08:24.166624   62996 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-556121" ...
	I0914 18:08:22.946247   62554 main.go:141] libmachine: (embed-certs-044534) DBG | domain embed-certs-044534 has defined MAC address 52:54:00:f7:d3:8e in network mk-embed-certs-044534
	I0914 18:08:22.946662   62554 main.go:141] libmachine: (embed-certs-044534) DBG | domain embed-certs-044534 has current primary IP address 192.168.50.126 and MAC address 52:54:00:f7:d3:8e in network mk-embed-certs-044534
	I0914 18:08:22.946687   62554 main.go:141] libmachine: (embed-certs-044534) Found IP for machine: 192.168.50.126
	I0914 18:08:22.946700   62554 main.go:141] libmachine: (embed-certs-044534) Reserving static IP address...
	I0914 18:08:22.947052   62554 main.go:141] libmachine: (embed-certs-044534) DBG | found host DHCP lease matching {name: "embed-certs-044534", mac: "52:54:00:f7:d3:8e", ip: "192.168.50.126"} in network mk-embed-certs-044534: {Iface:virbr3 ExpiryTime:2024-09-14 19:00:16 +0000 UTC Type:0 Mac:52:54:00:f7:d3:8e Iaid: IPaddr:192.168.50.126 Prefix:24 Hostname:embed-certs-044534 Clientid:01:52:54:00:f7:d3:8e}
	I0914 18:08:22.947068   62554 main.go:141] libmachine: (embed-certs-044534) Reserved static IP address: 192.168.50.126
	I0914 18:08:22.947080   62554 main.go:141] libmachine: (embed-certs-044534) DBG | skip adding static IP to network mk-embed-certs-044534 - found existing host DHCP lease matching {name: "embed-certs-044534", mac: "52:54:00:f7:d3:8e", ip: "192.168.50.126"}
	I0914 18:08:22.947093   62554 main.go:141] libmachine: (embed-certs-044534) DBG | Getting to WaitForSSH function...
	I0914 18:08:22.947108   62554 main.go:141] libmachine: (embed-certs-044534) Waiting for SSH to be available...
	I0914 18:08:22.949354   62554 main.go:141] libmachine: (embed-certs-044534) DBG | domain embed-certs-044534 has defined MAC address 52:54:00:f7:d3:8e in network mk-embed-certs-044534
	I0914 18:08:22.949623   62554 main.go:141] libmachine: (embed-certs-044534) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:d3:8e", ip: ""} in network mk-embed-certs-044534: {Iface:virbr3 ExpiryTime:2024-09-14 19:00:16 +0000 UTC Type:0 Mac:52:54:00:f7:d3:8e Iaid: IPaddr:192.168.50.126 Prefix:24 Hostname:embed-certs-044534 Clientid:01:52:54:00:f7:d3:8e}
	I0914 18:08:22.949645   62554 main.go:141] libmachine: (embed-certs-044534) DBG | domain embed-certs-044534 has defined IP address 192.168.50.126 and MAC address 52:54:00:f7:d3:8e in network mk-embed-certs-044534
	I0914 18:08:22.949798   62554 main.go:141] libmachine: (embed-certs-044534) DBG | Using SSH client type: external
	I0914 18:08:22.949822   62554 main.go:141] libmachine: (embed-certs-044534) DBG | Using SSH private key: /home/jenkins/minikube-integration/19643-8806/.minikube/machines/embed-certs-044534/id_rsa (-rw-------)
	I0914 18:08:22.949886   62554 main.go:141] libmachine: (embed-certs-044534) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.126 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19643-8806/.minikube/machines/embed-certs-044534/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0914 18:08:22.949911   62554 main.go:141] libmachine: (embed-certs-044534) DBG | About to run SSH command:
	I0914 18:08:22.949926   62554 main.go:141] libmachine: (embed-certs-044534) DBG | exit 0
	I0914 18:08:23.074248   62554 main.go:141] libmachine: (embed-certs-044534) DBG | SSH cmd err, output: <nil>: 
	I0914 18:08:23.074559   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetConfigRaw
	I0914 18:08:23.075190   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetIP
	I0914 18:08:23.077682   62554 main.go:141] libmachine: (embed-certs-044534) DBG | domain embed-certs-044534 has defined MAC address 52:54:00:f7:d3:8e in network mk-embed-certs-044534
	I0914 18:08:23.078007   62554 main.go:141] libmachine: (embed-certs-044534) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:d3:8e", ip: ""} in network mk-embed-certs-044534: {Iface:virbr3 ExpiryTime:2024-09-14 19:00:16 +0000 UTC Type:0 Mac:52:54:00:f7:d3:8e Iaid: IPaddr:192.168.50.126 Prefix:24 Hostname:embed-certs-044534 Clientid:01:52:54:00:f7:d3:8e}
	I0914 18:08:23.078040   62554 main.go:141] libmachine: (embed-certs-044534) DBG | domain embed-certs-044534 has defined IP address 192.168.50.126 and MAC address 52:54:00:f7:d3:8e in network mk-embed-certs-044534
	I0914 18:08:23.078309   62554 profile.go:143] Saving config to /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/embed-certs-044534/config.json ...
	I0914 18:08:23.078494   62554 machine.go:93] provisionDockerMachine start ...
	I0914 18:08:23.078510   62554 main.go:141] libmachine: (embed-certs-044534) Calling .DriverName
	I0914 18:08:23.078723   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetSSHHostname
	I0914 18:08:23.081444   62554 main.go:141] libmachine: (embed-certs-044534) DBG | domain embed-certs-044534 has defined MAC address 52:54:00:f7:d3:8e in network mk-embed-certs-044534
	I0914 18:08:23.081846   62554 main.go:141] libmachine: (embed-certs-044534) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:d3:8e", ip: ""} in network mk-embed-certs-044534: {Iface:virbr3 ExpiryTime:2024-09-14 19:00:16 +0000 UTC Type:0 Mac:52:54:00:f7:d3:8e Iaid: IPaddr:192.168.50.126 Prefix:24 Hostname:embed-certs-044534 Clientid:01:52:54:00:f7:d3:8e}
	I0914 18:08:23.081891   62554 main.go:141] libmachine: (embed-certs-044534) DBG | domain embed-certs-044534 has defined IP address 192.168.50.126 and MAC address 52:54:00:f7:d3:8e in network mk-embed-certs-044534
	I0914 18:08:23.082026   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetSSHPort
	I0914 18:08:23.082209   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetSSHKeyPath
	I0914 18:08:23.082398   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetSSHKeyPath
	I0914 18:08:23.082573   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetSSHUsername
	I0914 18:08:23.082739   62554 main.go:141] libmachine: Using SSH client type: native
	I0914 18:08:23.082961   62554 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.50.126 22 <nil> <nil>}
	I0914 18:08:23.082984   62554 main.go:141] libmachine: About to run SSH command:
	hostname
	I0914 18:08:23.186143   62554 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0914 18:08:23.186193   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetMachineName
	I0914 18:08:23.186424   62554 buildroot.go:166] provisioning hostname "embed-certs-044534"
	I0914 18:08:23.186447   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetMachineName
	I0914 18:08:23.186622   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetSSHHostname
	I0914 18:08:23.189085   62554 main.go:141] libmachine: (embed-certs-044534) DBG | domain embed-certs-044534 has defined MAC address 52:54:00:f7:d3:8e in network mk-embed-certs-044534
	I0914 18:08:23.189453   62554 main.go:141] libmachine: (embed-certs-044534) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:d3:8e", ip: ""} in network mk-embed-certs-044534: {Iface:virbr3 ExpiryTime:2024-09-14 19:00:16 +0000 UTC Type:0 Mac:52:54:00:f7:d3:8e Iaid: IPaddr:192.168.50.126 Prefix:24 Hostname:embed-certs-044534 Clientid:01:52:54:00:f7:d3:8e}
	I0914 18:08:23.189482   62554 main.go:141] libmachine: (embed-certs-044534) DBG | domain embed-certs-044534 has defined IP address 192.168.50.126 and MAC address 52:54:00:f7:d3:8e in network mk-embed-certs-044534
	I0914 18:08:23.189615   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetSSHPort
	I0914 18:08:23.189802   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetSSHKeyPath
	I0914 18:08:23.190032   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetSSHKeyPath
	I0914 18:08:23.190168   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetSSHUsername
	I0914 18:08:23.190422   62554 main.go:141] libmachine: Using SSH client type: native
	I0914 18:08:23.190587   62554 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.50.126 22 <nil> <nil>}
	I0914 18:08:23.190601   62554 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-044534 && echo "embed-certs-044534" | sudo tee /etc/hostname
	I0914 18:08:23.307484   62554 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-044534
	
	I0914 18:08:23.307512   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetSSHHostname
	I0914 18:08:23.310220   62554 main.go:141] libmachine: (embed-certs-044534) DBG | domain embed-certs-044534 has defined MAC address 52:54:00:f7:d3:8e in network mk-embed-certs-044534
	I0914 18:08:23.310634   62554 main.go:141] libmachine: (embed-certs-044534) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:d3:8e", ip: ""} in network mk-embed-certs-044534: {Iface:virbr3 ExpiryTime:2024-09-14 19:00:16 +0000 UTC Type:0 Mac:52:54:00:f7:d3:8e Iaid: IPaddr:192.168.50.126 Prefix:24 Hostname:embed-certs-044534 Clientid:01:52:54:00:f7:d3:8e}
	I0914 18:08:23.310664   62554 main.go:141] libmachine: (embed-certs-044534) DBG | domain embed-certs-044534 has defined IP address 192.168.50.126 and MAC address 52:54:00:f7:d3:8e in network mk-embed-certs-044534
	I0914 18:08:23.310764   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetSSHPort
	I0914 18:08:23.310969   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetSSHKeyPath
	I0914 18:08:23.311206   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetSSHKeyPath
	I0914 18:08:23.311438   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetSSHUsername
	I0914 18:08:23.311594   62554 main.go:141] libmachine: Using SSH client type: native
	I0914 18:08:23.311802   62554 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.50.126 22 <nil> <nil>}
	I0914 18:08:23.311820   62554 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-044534' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-044534/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-044534' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0914 18:08:23.422574   62554 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0914 18:08:23.422603   62554 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19643-8806/.minikube CaCertPath:/home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19643-8806/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19643-8806/.minikube}
	I0914 18:08:23.422623   62554 buildroot.go:174] setting up certificates
	I0914 18:08:23.422634   62554 provision.go:84] configureAuth start
	I0914 18:08:23.422643   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetMachineName
	I0914 18:08:23.422905   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetIP
	I0914 18:08:23.426201   62554 main.go:141] libmachine: (embed-certs-044534) DBG | domain embed-certs-044534 has defined MAC address 52:54:00:f7:d3:8e in network mk-embed-certs-044534
	I0914 18:08:23.426557   62554 main.go:141] libmachine: (embed-certs-044534) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:d3:8e", ip: ""} in network mk-embed-certs-044534: {Iface:virbr3 ExpiryTime:2024-09-14 19:00:16 +0000 UTC Type:0 Mac:52:54:00:f7:d3:8e Iaid: IPaddr:192.168.50.126 Prefix:24 Hostname:embed-certs-044534 Clientid:01:52:54:00:f7:d3:8e}
	I0914 18:08:23.426584   62554 main.go:141] libmachine: (embed-certs-044534) DBG | domain embed-certs-044534 has defined IP address 192.168.50.126 and MAC address 52:54:00:f7:d3:8e in network mk-embed-certs-044534
	I0914 18:08:23.426745   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetSSHHostname
	I0914 18:08:23.428607   62554 main.go:141] libmachine: (embed-certs-044534) DBG | domain embed-certs-044534 has defined MAC address 52:54:00:f7:d3:8e in network mk-embed-certs-044534
	I0914 18:08:23.428985   62554 main.go:141] libmachine: (embed-certs-044534) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:d3:8e", ip: ""} in network mk-embed-certs-044534: {Iface:virbr3 ExpiryTime:2024-09-14 19:00:16 +0000 UTC Type:0 Mac:52:54:00:f7:d3:8e Iaid: IPaddr:192.168.50.126 Prefix:24 Hostname:embed-certs-044534 Clientid:01:52:54:00:f7:d3:8e}
	I0914 18:08:23.429016   62554 main.go:141] libmachine: (embed-certs-044534) DBG | domain embed-certs-044534 has defined IP address 192.168.50.126 and MAC address 52:54:00:f7:d3:8e in network mk-embed-certs-044534
	I0914 18:08:23.429138   62554 provision.go:143] copyHostCerts
	I0914 18:08:23.429198   62554 exec_runner.go:144] found /home/jenkins/minikube-integration/19643-8806/.minikube/ca.pem, removing ...
	I0914 18:08:23.429211   62554 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19643-8806/.minikube/ca.pem
	I0914 18:08:23.429295   62554 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19643-8806/.minikube/ca.pem (1082 bytes)
	I0914 18:08:23.429437   62554 exec_runner.go:144] found /home/jenkins/minikube-integration/19643-8806/.minikube/cert.pem, removing ...
	I0914 18:08:23.429452   62554 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19643-8806/.minikube/cert.pem
	I0914 18:08:23.429498   62554 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19643-8806/.minikube/cert.pem (1123 bytes)
	I0914 18:08:23.429592   62554 exec_runner.go:144] found /home/jenkins/minikube-integration/19643-8806/.minikube/key.pem, removing ...
	I0914 18:08:23.429600   62554 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19643-8806/.minikube/key.pem
	I0914 18:08:23.429626   62554 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19643-8806/.minikube/key.pem (1675 bytes)
	I0914 18:08:23.429680   62554 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19643-8806/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca-key.pem org=jenkins.embed-certs-044534 san=[127.0.0.1 192.168.50.126 embed-certs-044534 localhost minikube]
	I0914 18:08:23.538590   62554 provision.go:177] copyRemoteCerts
	I0914 18:08:23.538662   62554 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0914 18:08:23.538689   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetSSHHostname
	I0914 18:08:23.541366   62554 main.go:141] libmachine: (embed-certs-044534) DBG | domain embed-certs-044534 has defined MAC address 52:54:00:f7:d3:8e in network mk-embed-certs-044534
	I0914 18:08:23.541723   62554 main.go:141] libmachine: (embed-certs-044534) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:d3:8e", ip: ""} in network mk-embed-certs-044534: {Iface:virbr3 ExpiryTime:2024-09-14 19:00:16 +0000 UTC Type:0 Mac:52:54:00:f7:d3:8e Iaid: IPaddr:192.168.50.126 Prefix:24 Hostname:embed-certs-044534 Clientid:01:52:54:00:f7:d3:8e}
	I0914 18:08:23.541746   62554 main.go:141] libmachine: (embed-certs-044534) DBG | domain embed-certs-044534 has defined IP address 192.168.50.126 and MAC address 52:54:00:f7:d3:8e in network mk-embed-certs-044534
	I0914 18:08:23.541938   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetSSHPort
	I0914 18:08:23.542120   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetSSHKeyPath
	I0914 18:08:23.542303   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetSSHUsername
	I0914 18:08:23.542413   62554 sshutil.go:53] new ssh client: &{IP:192.168.50.126 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/embed-certs-044534/id_rsa Username:docker}
	I0914 18:08:23.623698   62554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0914 18:08:23.647378   62554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0914 18:08:23.671327   62554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0914 18:08:23.694570   62554 provision.go:87] duration metric: took 271.923979ms to configureAuth
	I0914 18:08:23.694598   62554 buildroot.go:189] setting minikube options for container-runtime
	I0914 18:08:23.694779   62554 config.go:182] Loaded profile config "embed-certs-044534": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0914 18:08:23.694868   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetSSHHostname
	I0914 18:08:23.697467   62554 main.go:141] libmachine: (embed-certs-044534) DBG | domain embed-certs-044534 has defined MAC address 52:54:00:f7:d3:8e in network mk-embed-certs-044534
	I0914 18:08:23.697828   62554 main.go:141] libmachine: (embed-certs-044534) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:d3:8e", ip: ""} in network mk-embed-certs-044534: {Iface:virbr3 ExpiryTime:2024-09-14 19:00:16 +0000 UTC Type:0 Mac:52:54:00:f7:d3:8e Iaid: IPaddr:192.168.50.126 Prefix:24 Hostname:embed-certs-044534 Clientid:01:52:54:00:f7:d3:8e}
	I0914 18:08:23.697862   62554 main.go:141] libmachine: (embed-certs-044534) DBG | domain embed-certs-044534 has defined IP address 192.168.50.126 and MAC address 52:54:00:f7:d3:8e in network mk-embed-certs-044534
	I0914 18:08:23.698042   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetSSHPort
	I0914 18:08:23.698249   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetSSHKeyPath
	I0914 18:08:23.698421   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetSSHKeyPath
	I0914 18:08:23.698571   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetSSHUsername
	I0914 18:08:23.698692   62554 main.go:141] libmachine: Using SSH client type: native
	I0914 18:08:23.698945   62554 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.50.126 22 <nil> <nil>}
	I0914 18:08:23.698963   62554 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0914 18:08:23.911661   62554 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0914 18:08:23.911697   62554 machine.go:96] duration metric: took 833.189197ms to provisionDockerMachine
	I0914 18:08:23.911712   62554 start.go:293] postStartSetup for "embed-certs-044534" (driver="kvm2")
	I0914 18:08:23.911726   62554 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0914 18:08:23.911751   62554 main.go:141] libmachine: (embed-certs-044534) Calling .DriverName
	I0914 18:08:23.912134   62554 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0914 18:08:23.912169   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetSSHHostname
	I0914 18:08:23.914579   62554 main.go:141] libmachine: (embed-certs-044534) DBG | domain embed-certs-044534 has defined MAC address 52:54:00:f7:d3:8e in network mk-embed-certs-044534
	I0914 18:08:23.914974   62554 main.go:141] libmachine: (embed-certs-044534) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:d3:8e", ip: ""} in network mk-embed-certs-044534: {Iface:virbr3 ExpiryTime:2024-09-14 19:00:16 +0000 UTC Type:0 Mac:52:54:00:f7:d3:8e Iaid: IPaddr:192.168.50.126 Prefix:24 Hostname:embed-certs-044534 Clientid:01:52:54:00:f7:d3:8e}
	I0914 18:08:23.915011   62554 main.go:141] libmachine: (embed-certs-044534) DBG | domain embed-certs-044534 has defined IP address 192.168.50.126 and MAC address 52:54:00:f7:d3:8e in network mk-embed-certs-044534
	I0914 18:08:23.915121   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetSSHPort
	I0914 18:08:23.915322   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetSSHKeyPath
	I0914 18:08:23.915582   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetSSHUsername
	I0914 18:08:23.915710   62554 sshutil.go:53] new ssh client: &{IP:192.168.50.126 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/embed-certs-044534/id_rsa Username:docker}
	I0914 18:08:23.996910   62554 ssh_runner.go:195] Run: cat /etc/os-release
	I0914 18:08:24.000900   62554 info.go:137] Remote host: Buildroot 2023.02.9
	I0914 18:08:24.000926   62554 filesync.go:126] Scanning /home/jenkins/minikube-integration/19643-8806/.minikube/addons for local assets ...
	I0914 18:08:24.000998   62554 filesync.go:126] Scanning /home/jenkins/minikube-integration/19643-8806/.minikube/files for local assets ...
	I0914 18:08:24.001099   62554 filesync.go:149] local asset: /home/jenkins/minikube-integration/19643-8806/.minikube/files/etc/ssl/certs/160162.pem -> 160162.pem in /etc/ssl/certs
	I0914 18:08:24.001222   62554 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0914 18:08:24.010496   62554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/files/etc/ssl/certs/160162.pem --> /etc/ssl/certs/160162.pem (1708 bytes)
	I0914 18:08:24.033377   62554 start.go:296] duration metric: took 121.65145ms for postStartSetup
	I0914 18:08:24.033414   62554 fix.go:56] duration metric: took 19.487140172s for fixHost
	I0914 18:08:24.033434   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetSSHHostname
	I0914 18:08:24.036188   62554 main.go:141] libmachine: (embed-certs-044534) DBG | domain embed-certs-044534 has defined MAC address 52:54:00:f7:d3:8e in network mk-embed-certs-044534
	I0914 18:08:24.036494   62554 main.go:141] libmachine: (embed-certs-044534) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:d3:8e", ip: ""} in network mk-embed-certs-044534: {Iface:virbr3 ExpiryTime:2024-09-14 19:00:16 +0000 UTC Type:0 Mac:52:54:00:f7:d3:8e Iaid: IPaddr:192.168.50.126 Prefix:24 Hostname:embed-certs-044534 Clientid:01:52:54:00:f7:d3:8e}
	I0914 18:08:24.036524   62554 main.go:141] libmachine: (embed-certs-044534) DBG | domain embed-certs-044534 has defined IP address 192.168.50.126 and MAC address 52:54:00:f7:d3:8e in network mk-embed-certs-044534
	I0914 18:08:24.036672   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetSSHPort
	I0914 18:08:24.036886   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetSSHKeyPath
	I0914 18:08:24.037082   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetSSHKeyPath
	I0914 18:08:24.037216   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetSSHUsername
	I0914 18:08:24.037375   62554 main.go:141] libmachine: Using SSH client type: native
	I0914 18:08:24.037542   62554 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.50.126 22 <nil> <nil>}
	I0914 18:08:24.037554   62554 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0914 18:08:24.142822   62554 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726337304.118879777
	
	I0914 18:08:24.142851   62554 fix.go:216] guest clock: 1726337304.118879777
	I0914 18:08:24.142862   62554 fix.go:229] Guest: 2024-09-14 18:08:24.118879777 +0000 UTC Remote: 2024-09-14 18:08:24.03341777 +0000 UTC m=+259.160200473 (delta=85.462007ms)
	I0914 18:08:24.142936   62554 fix.go:200] guest clock delta is within tolerance: 85.462007ms
	I0914 18:08:24.142960   62554 start.go:83] releasing machines lock for "embed-certs-044534", held for 19.596720856s
	I0914 18:08:24.142992   62554 main.go:141] libmachine: (embed-certs-044534) Calling .DriverName
	I0914 18:08:24.143262   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetIP
	I0914 18:08:24.146122   62554 main.go:141] libmachine: (embed-certs-044534) DBG | domain embed-certs-044534 has defined MAC address 52:54:00:f7:d3:8e in network mk-embed-certs-044534
	I0914 18:08:24.146501   62554 main.go:141] libmachine: (embed-certs-044534) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:d3:8e", ip: ""} in network mk-embed-certs-044534: {Iface:virbr3 ExpiryTime:2024-09-14 19:00:16 +0000 UTC Type:0 Mac:52:54:00:f7:d3:8e Iaid: IPaddr:192.168.50.126 Prefix:24 Hostname:embed-certs-044534 Clientid:01:52:54:00:f7:d3:8e}
	I0914 18:08:24.146537   62554 main.go:141] libmachine: (embed-certs-044534) DBG | domain embed-certs-044534 has defined IP address 192.168.50.126 and MAC address 52:54:00:f7:d3:8e in network mk-embed-certs-044534
	I0914 18:08:24.146711   62554 main.go:141] libmachine: (embed-certs-044534) Calling .DriverName
	I0914 18:08:24.147204   62554 main.go:141] libmachine: (embed-certs-044534) Calling .DriverName
	I0914 18:08:24.147430   62554 main.go:141] libmachine: (embed-certs-044534) Calling .DriverName
	I0914 18:08:24.147532   62554 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0914 18:08:24.147589   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetSSHHostname
	I0914 18:08:24.147813   62554 ssh_runner.go:195] Run: cat /version.json
	I0914 18:08:24.147839   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetSSHHostname
	I0914 18:08:24.150691   62554 main.go:141] libmachine: (embed-certs-044534) DBG | domain embed-certs-044534 has defined MAC address 52:54:00:f7:d3:8e in network mk-embed-certs-044534
	I0914 18:08:24.150736   62554 main.go:141] libmachine: (embed-certs-044534) DBG | domain embed-certs-044534 has defined MAC address 52:54:00:f7:d3:8e in network mk-embed-certs-044534
	I0914 18:08:24.151012   62554 main.go:141] libmachine: (embed-certs-044534) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:d3:8e", ip: ""} in network mk-embed-certs-044534: {Iface:virbr3 ExpiryTime:2024-09-14 19:00:16 +0000 UTC Type:0 Mac:52:54:00:f7:d3:8e Iaid: IPaddr:192.168.50.126 Prefix:24 Hostname:embed-certs-044534 Clientid:01:52:54:00:f7:d3:8e}
	I0914 18:08:24.151056   62554 main.go:141] libmachine: (embed-certs-044534) DBG | domain embed-certs-044534 has defined IP address 192.168.50.126 and MAC address 52:54:00:f7:d3:8e in network mk-embed-certs-044534
	I0914 18:08:24.151149   62554 main.go:141] libmachine: (embed-certs-044534) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:d3:8e", ip: ""} in network mk-embed-certs-044534: {Iface:virbr3 ExpiryTime:2024-09-14 19:00:16 +0000 UTC Type:0 Mac:52:54:00:f7:d3:8e Iaid: IPaddr:192.168.50.126 Prefix:24 Hostname:embed-certs-044534 Clientid:01:52:54:00:f7:d3:8e}
	I0914 18:08:24.151179   62554 main.go:141] libmachine: (embed-certs-044534) DBG | domain embed-certs-044534 has defined IP address 192.168.50.126 and MAC address 52:54:00:f7:d3:8e in network mk-embed-certs-044534
	I0914 18:08:24.151431   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetSSHPort
	I0914 18:08:24.151468   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetSSHPort
	I0914 18:08:24.151586   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetSSHKeyPath
	I0914 18:08:24.151751   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetSSHKeyPath
	I0914 18:08:24.151772   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetSSHUsername
	I0914 18:08:24.151938   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetSSHUsername
	I0914 18:08:24.151944   62554 sshutil.go:53] new ssh client: &{IP:192.168.50.126 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/embed-certs-044534/id_rsa Username:docker}
	I0914 18:08:24.152034   62554 sshutil.go:53] new ssh client: &{IP:192.168.50.126 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/embed-certs-044534/id_rsa Username:docker}
	I0914 18:08:24.256821   62554 ssh_runner.go:195] Run: systemctl --version
	I0914 18:08:24.263249   62554 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0914 18:08:24.411996   62554 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0914 18:08:24.418685   62554 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0914 18:08:24.418759   62554 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0914 18:08:24.434541   62554 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0914 18:08:24.434569   62554 start.go:495] detecting cgroup driver to use...
	I0914 18:08:24.434655   62554 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0914 18:08:24.452550   62554 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0914 18:08:24.467548   62554 docker.go:217] disabling cri-docker service (if available) ...
	I0914 18:08:24.467602   62554 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0914 18:08:24.482556   62554 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0914 18:08:24.497198   62554 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0914 18:08:24.625300   62554 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0914 18:08:24.805163   62554 docker.go:233] disabling docker service ...
	I0914 18:08:24.805248   62554 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0914 18:08:24.821164   62554 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0914 18:08:24.834886   62554 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0914 18:08:24.167885   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .Start
	I0914 18:08:24.168096   62996 main.go:141] libmachine: (old-k8s-version-556121) Ensuring networks are active...
	I0914 18:08:24.169086   62996 main.go:141] libmachine: (old-k8s-version-556121) Ensuring network default is active
	I0914 18:08:24.169493   62996 main.go:141] libmachine: (old-k8s-version-556121) Ensuring network mk-old-k8s-version-556121 is active
	I0914 18:08:24.170025   62996 main.go:141] libmachine: (old-k8s-version-556121) Getting domain xml...
	I0914 18:08:24.170619   62996 main.go:141] libmachine: (old-k8s-version-556121) Creating domain...
	I0914 18:08:24.963694   62554 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0914 18:08:25.081720   62554 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0914 18:08:25.097176   62554 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0914 18:08:25.116611   62554 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0914 18:08:25.116677   62554 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 18:08:25.129500   62554 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0914 18:08:25.129586   62554 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 18:08:25.140281   62554 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 18:08:25.150925   62554 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 18:08:25.166139   62554 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0914 18:08:25.177340   62554 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 18:08:25.187662   62554 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 18:08:25.207019   62554 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 18:08:25.217207   62554 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0914 18:08:25.226988   62554 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0914 18:08:25.227065   62554 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0914 18:08:25.248357   62554 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0914 18:08:25.258467   62554 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 18:08:25.375359   62554 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0914 18:08:25.470389   62554 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0914 18:08:25.470470   62554 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0914 18:08:25.475526   62554 start.go:563] Will wait 60s for crictl version
	I0914 18:08:25.475589   62554 ssh_runner.go:195] Run: which crictl
	I0914 18:08:25.479131   62554 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0914 18:08:25.530371   62554 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0914 18:08:25.530461   62554 ssh_runner.go:195] Run: crio --version
	I0914 18:08:25.557035   62554 ssh_runner.go:195] Run: crio --version
	I0914 18:08:25.586883   62554 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0914 18:08:25.588117   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetIP
	I0914 18:08:25.591212   62554 main.go:141] libmachine: (embed-certs-044534) DBG | domain embed-certs-044534 has defined MAC address 52:54:00:f7:d3:8e in network mk-embed-certs-044534
	I0914 18:08:25.591600   62554 main.go:141] libmachine: (embed-certs-044534) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:d3:8e", ip: ""} in network mk-embed-certs-044534: {Iface:virbr3 ExpiryTime:2024-09-14 19:00:16 +0000 UTC Type:0 Mac:52:54:00:f7:d3:8e Iaid: IPaddr:192.168.50.126 Prefix:24 Hostname:embed-certs-044534 Clientid:01:52:54:00:f7:d3:8e}
	I0914 18:08:25.591628   62554 main.go:141] libmachine: (embed-certs-044534) DBG | domain embed-certs-044534 has defined IP address 192.168.50.126 and MAC address 52:54:00:f7:d3:8e in network mk-embed-certs-044534
	I0914 18:08:25.591816   62554 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0914 18:08:25.595706   62554 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0914 18:08:25.608009   62554 kubeadm.go:883] updating cluster {Name:embed-certs-044534 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19643/minikube-v1.34.0-1726281733-19643-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726281268-19643@sha256:cce8be0c1ac4e3d852132008ef1cc1dcf5b79f708d025db83f146ae65db32e8e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-044534 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.126 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0914 18:08:25.608141   62554 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0914 18:08:25.608194   62554 ssh_runner.go:195] Run: sudo crictl images --output json
	I0914 18:08:25.643422   62554 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0914 18:08:25.643515   62554 ssh_runner.go:195] Run: which lz4
	I0914 18:08:25.647471   62554 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0914 18:08:25.651573   62554 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0914 18:08:25.651607   62554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I0914 18:08:26.985357   62554 crio.go:462] duration metric: took 1.337911722s to copy over tarball
	I0914 18:08:26.985437   62554 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0914 18:08:29.111492   62554 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.126022567s)
	I0914 18:08:29.111524   62554 crio.go:469] duration metric: took 2.12613646s to extract the tarball
	I0914 18:08:29.111533   62554 ssh_runner.go:146] rm: /preloaded.tar.lz4
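
The lines above are the preload restore: /preloaded.tar.lz4 is missing on the node, so the cached tarball is scp'd over, unpacked into /var with xattrs preserved, and then removed. A minimal local Go sketch of that flow, assuming the cached tarball path from the log exists on the same machine (the real run copies it over SSH); the helper name is illustrative.

    package main

    import (
        "fmt"
        "os/exec"
    )

    const (
        cached = "/home/jenkins/minikube-integration/19643-8806/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4"
        target = "/preloaded.tar.lz4"
    )

    func restorePreload() error {
        if exec.Command("stat", target).Run() != nil {
            // Tarball not on the node yet: copy the cached preload into place.
            if err := exec.Command("sudo", "cp", cached, target).Run(); err != nil {
                return fmt.Errorf("copy preload: %w", err)
            }
        }
        // Same extraction command as in the log: lz4-decompress into /var,
        // keeping security.capability xattrs so binaries keep their caps.
        if err := exec.Command("sudo", "tar", "--xattrs", "--xattrs-include", "security.capability",
            "-I", "lz4", "-C", "/var", "-xf", target).Run(); err != nil {
            return fmt.Errorf("extract preload: %w", err)
        }
        return exec.Command("sudo", "rm", "-f", target).Run()
    }

    func main() {
        if err := restorePreload(); err != nil {
            fmt.Println(err)
        }
    }
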
	I0914 18:08:29.148426   62554 ssh_runner.go:195] Run: sudo crictl images --output json
	I0914 18:08:29.190595   62554 crio.go:514] all images are preloaded for cri-o runtime.
	I0914 18:08:29.190620   62554 cache_images.go:84] Images are preloaded, skipping loading
	I0914 18:08:29.190628   62554 kubeadm.go:934] updating node { 192.168.50.126 8443 v1.31.1 crio true true} ...
	I0914 18:08:29.190751   62554 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-044534 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.126
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:embed-certs-044534 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0914 18:08:29.190823   62554 ssh_runner.go:195] Run: crio config
	I0914 18:08:29.234785   62554 cni.go:84] Creating CNI manager for ""
	I0914 18:08:29.234808   62554 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0914 18:08:29.234818   62554 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0914 18:08:29.234871   62554 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.126 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-044534 NodeName:embed-certs-044534 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.126"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.126 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0914 18:08:29.234996   62554 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.126
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-044534"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.126
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.126"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0914 18:08:29.235054   62554 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0914 18:08:29.244554   62554 binaries.go:44] Found k8s binaries, skipping transfer
	I0914 18:08:29.244631   62554 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0914 18:08:29.253622   62554 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0914 18:08:29.270046   62554 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0914 18:08:29.285751   62554 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
	I0914 18:08:29.303567   62554 ssh_runner.go:195] Run: grep 192.168.50.126	control-plane.minikube.internal$ /etc/hosts
	I0914 18:08:29.307335   62554 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.126	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0914 18:08:29.319510   62554 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 18:08:29.442649   62554 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0914 18:08:29.459657   62554 certs.go:68] Setting up /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/embed-certs-044534 for IP: 192.168.50.126
	I0914 18:08:29.459687   62554 certs.go:194] generating shared ca certs ...
	I0914 18:08:29.459709   62554 certs.go:226] acquiring lock for ca certs: {Name:mkb663a3180967f5f94f0c355b2cd55067394331 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 18:08:29.459908   62554 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19643-8806/.minikube/ca.key
	I0914 18:08:29.459976   62554 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19643-8806/.minikube/proxy-client-ca.key
	I0914 18:08:29.459995   62554 certs.go:256] generating profile certs ...
	I0914 18:08:29.460166   62554 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/embed-certs-044534/client.key
	I0914 18:08:29.460247   62554 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/embed-certs-044534/apiserver.key.15c978c5
	I0914 18:08:29.460301   62554 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/embed-certs-044534/proxy-client.key
	I0914 18:08:29.460447   62554 certs.go:484] found cert: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/16016.pem (1338 bytes)
	W0914 18:08:29.460491   62554 certs.go:480] ignoring /home/jenkins/minikube-integration/19643-8806/.minikube/certs/16016_empty.pem, impossibly tiny 0 bytes
	I0914 18:08:29.460505   62554 certs.go:484] found cert: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca-key.pem (1679 bytes)
	I0914 18:08:29.460537   62554 certs.go:484] found cert: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca.pem (1082 bytes)
	I0914 18:08:29.460581   62554 certs.go:484] found cert: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/cert.pem (1123 bytes)
	I0914 18:08:29.460605   62554 certs.go:484] found cert: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/key.pem (1675 bytes)
	I0914 18:08:29.460649   62554 certs.go:484] found cert: /home/jenkins/minikube-integration/19643-8806/.minikube/files/etc/ssl/certs/160162.pem (1708 bytes)
	I0914 18:08:29.461415   62554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0914 18:08:29.501260   62554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0914 18:08:29.531940   62554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0914 18:08:29.577959   62554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0914 18:08:29.604067   62554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/embed-certs-044534/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0914 18:08:29.635335   62554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/embed-certs-044534/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0914 18:08:29.658841   62554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/embed-certs-044534/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0914 18:08:29.684149   62554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/embed-certs-044534/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0914 18:08:29.709354   62554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0914 18:08:29.733812   62554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/certs/16016.pem --> /usr/share/ca-certificates/16016.pem (1338 bytes)
	I0914 18:08:29.758427   62554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/files/etc/ssl/certs/160162.pem --> /usr/share/ca-certificates/160162.pem (1708 bytes)
	I0914 18:08:29.783599   62554 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0914 18:08:29.802188   62554 ssh_runner.go:195] Run: openssl version
	I0914 18:08:29.808277   62554 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0914 18:08:29.821167   62554 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0914 18:08:29.825911   62554 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 14 16:44 /usr/share/ca-certificates/minikubeCA.pem
	I0914 18:08:29.825978   62554 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0914 18:08:29.832160   62554 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0914 18:08:29.844395   62554 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16016.pem && ln -fs /usr/share/ca-certificates/16016.pem /etc/ssl/certs/16016.pem"
	I0914 18:08:29.856943   62554 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16016.pem
	I0914 18:08:29.861671   62554 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 14 17:01 /usr/share/ca-certificates/16016.pem
	I0914 18:08:29.861730   62554 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16016.pem
	I0914 18:08:29.867506   62554 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16016.pem /etc/ssl/certs/51391683.0"
	I0914 18:08:29.878004   62554 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/160162.pem && ln -fs /usr/share/ca-certificates/160162.pem /etc/ssl/certs/160162.pem"
	I0914 18:08:29.890322   62554 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/160162.pem
	I0914 18:08:29.894985   62554 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 14 17:01 /usr/share/ca-certificates/160162.pem
	I0914 18:08:29.895053   62554 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/160162.pem
	I0914 18:08:29.900837   62554 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/160162.pem /etc/ssl/certs/3ec20f2e.0"
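
The certificate installation above ends with OpenSSL subject-hash symlinks (e.g. /etc/ssl/certs/b5213941.0 for minikubeCA.pem), which is how the system trust store finds a CA by hash. A minimal Go sketch of that convention, assuming openssl is on PATH; the helper name and paths are illustrative, not minikube's code.

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // hashAndLink computes the OpenSSL subject hash of a CA certificate and
    // symlinks the cert into /etc/ssl/certs under "<hash>.0" so TLS clients
    // that scan the hashed directory will trust it.
    func hashAndLink(certPath string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
        if err != nil {
            return err
        }
        hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
        link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
        return exec.Command("sudo", "ln", "-fs", certPath, link).Run()
    }

    func main() {
        if err := hashAndLink("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
            fmt.Println(err)
        }
    }
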
	I0914 18:08:25.409780   62996 main.go:141] libmachine: (old-k8s-version-556121) Waiting to get IP...
	I0914 18:08:25.410880   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 18:08:25.411287   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | unable to find current IP address of domain old-k8s-version-556121 in network mk-old-k8s-version-556121
	I0914 18:08:25.411359   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | I0914 18:08:25.411268   63916 retry.go:31] will retry after 190.165859ms: waiting for machine to come up
	I0914 18:08:25.602661   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 18:08:25.603210   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | unable to find current IP address of domain old-k8s-version-556121 in network mk-old-k8s-version-556121
	I0914 18:08:25.603235   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | I0914 18:08:25.603161   63916 retry.go:31] will retry after 274.368109ms: waiting for machine to come up
	I0914 18:08:25.879976   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 18:08:25.880476   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | unable to find current IP address of domain old-k8s-version-556121 in network mk-old-k8s-version-556121
	I0914 18:08:25.880509   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | I0914 18:08:25.880412   63916 retry.go:31] will retry after 476.865698ms: waiting for machine to come up
	I0914 18:08:26.359279   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 18:08:26.359815   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | unable to find current IP address of domain old-k8s-version-556121 in network mk-old-k8s-version-556121
	I0914 18:08:26.359845   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | I0914 18:08:26.359775   63916 retry.go:31] will retry after 474.163339ms: waiting for machine to come up
	I0914 18:08:26.835268   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 18:08:26.835953   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | unable to find current IP address of domain old-k8s-version-556121 in network mk-old-k8s-version-556121
	I0914 18:08:26.835983   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | I0914 18:08:26.835914   63916 retry.go:31] will retry after 567.661702ms: waiting for machine to come up
	I0914 18:08:27.404884   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 18:08:27.405341   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | unable to find current IP address of domain old-k8s-version-556121 in network mk-old-k8s-version-556121
	I0914 18:08:27.405370   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | I0914 18:08:27.405297   63916 retry.go:31] will retry after 852.429203ms: waiting for machine to come up
	I0914 18:08:28.259542   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 18:08:28.260217   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | unable to find current IP address of domain old-k8s-version-556121 in network mk-old-k8s-version-556121
	I0914 18:08:28.260243   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | I0914 18:08:28.260154   63916 retry.go:31] will retry after 1.085703288s: waiting for machine to come up
	I0914 18:08:29.347849   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 18:08:29.348268   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | unable to find current IP address of domain old-k8s-version-556121 in network mk-old-k8s-version-556121
	I0914 18:08:29.348289   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | I0914 18:08:29.348235   63916 retry.go:31] will retry after 1.387665735s: waiting for machine to come up
	I0914 18:08:29.911102   62554 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0914 18:08:29.915546   62554 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0914 18:08:29.921470   62554 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0914 18:08:29.927238   62554 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0914 18:08:29.933122   62554 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0914 18:08:29.938829   62554 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0914 18:08:29.944811   62554 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
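
The series of "-checkend 86400" probes above verifies each control-plane certificate is still valid for at least one more day: openssl exits non-zero if the certificate expires within the next 86400 seconds. A minimal sketch of that check, assuming local access to the cert files; the helper name is illustrative.

    package main

    import (
        "fmt"
        "os/exec"
    )

    // validForOneDay returns true when the certificate will not expire
    // within the next 86400 seconds (openssl exits 0 in that case).
    func validForOneDay(certPath string) bool {
        return exec.Command("openssl", "x509", "-noout",
            "-in", certPath, "-checkend", "86400").Run() == nil
    }

    func main() {
        fmt.Println(validForOneDay("/var/lib/minikube/certs/apiserver-kubelet-client.crt"))
    }
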
	I0914 18:08:29.950679   62554 kubeadm.go:392] StartCluster: {Name:embed-certs-044534 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19643/minikube-v1.34.0-1726281733-19643-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726281268-19643@sha256:cce8be0c1ac4e3d852132008ef1cc1dcf5b79f708d025db83f146ae65db32e8e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-044534 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.126 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0914 18:08:29.950762   62554 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0914 18:08:29.950866   62554 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0914 18:08:29.987553   62554 cri.go:89] found id: ""
	I0914 18:08:29.987626   62554 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0914 18:08:29.998690   62554 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0914 18:08:29.998713   62554 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0914 18:08:29.998765   62554 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0914 18:08:30.009411   62554 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0914 18:08:30.010804   62554 kubeconfig.go:125] found "embed-certs-044534" server: "https://192.168.50.126:8443"
	I0914 18:08:30.013635   62554 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0914 18:08:30.023903   62554 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.126
	I0914 18:08:30.023937   62554 kubeadm.go:1160] stopping kube-system containers ...
	I0914 18:08:30.023951   62554 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0914 18:08:30.024017   62554 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0914 18:08:30.067767   62554 cri.go:89] found id: ""
	I0914 18:08:30.067842   62554 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0914 18:08:30.087326   62554 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0914 18:08:30.098162   62554 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0914 18:08:30.098180   62554 kubeadm.go:157] found existing configuration files:
	
	I0914 18:08:30.098218   62554 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0914 18:08:30.108239   62554 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0914 18:08:30.108296   62554 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0914 18:08:30.118913   62554 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0914 18:08:30.129091   62554 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0914 18:08:30.129172   62554 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0914 18:08:30.139658   62554 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0914 18:08:30.148838   62554 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0914 18:08:30.148923   62554 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0914 18:08:30.158386   62554 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0914 18:08:30.167282   62554 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0914 18:08:30.167354   62554 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0914 18:08:30.176443   62554 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0914 18:08:30.185476   62554 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 18:08:30.310603   62554 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 18:08:31.243123   62554 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0914 18:08:31.457657   62554 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 18:08:31.531992   62554 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
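
Because existing configuration files were found, the restart path above replays individual "kubeadm init phase" steps (certs, kubeconfig, kubelet-start, control-plane, etcd) against the generated /var/tmp/minikube/kubeadm.yaml rather than running a full init. A minimal Go sketch of that replay, assuming kubeadm and the config are present locally (minikube drives the same commands over SSH); the helper name and loop structure are illustrative.

    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    func runInitPhases() error {
        phases := [][]string{
            {"certs", "all"},
            {"kubeconfig", "all"},
            {"kubelet-start"},
            {"control-plane", "all"},
            {"etcd", "local"},
        }
        // Prepend the pinned v1.31.1 binaries so the matching kubeadm is used.
        path := "PATH=/var/lib/minikube/binaries/v1.31.1:" + os.Getenv("PATH")
        for _, phase := range phases {
            args := append([]string{"env", path, "kubeadm", "init", "phase"}, phase...)
            args = append(args, "--config", "/var/tmp/minikube/kubeadm.yaml")
            if out, err := exec.Command("sudo", args...).CombinedOutput(); err != nil {
                return fmt.Errorf("kubeadm init phase %v failed: %v\n%s", phase, err, out)
            }
        }
        return nil
    }

    func main() {
        if err := runInitPhases(); err != nil {
            fmt.Println(err)
        }
    }
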
	I0914 18:08:31.625580   62554 api_server.go:52] waiting for apiserver process to appear ...
	I0914 18:08:31.625683   62554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:08:32.125744   62554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:08:32.626056   62554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:08:33.126817   62554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:08:33.146478   62554 api_server.go:72] duration metric: took 1.520896575s to wait for apiserver process to appear ...
	I0914 18:08:33.146517   62554 api_server.go:88] waiting for apiserver healthz status ...
	I0914 18:08:33.146543   62554 api_server.go:253] Checking apiserver healthz at https://192.168.50.126:8443/healthz ...
	I0914 18:08:33.147106   62554 api_server.go:269] stopped: https://192.168.50.126:8443/healthz: Get "https://192.168.50.126:8443/healthz": dial tcp 192.168.50.126:8443: connect: connection refused
	I0914 18:08:33.646672   62554 api_server.go:253] Checking apiserver healthz at https://192.168.50.126:8443/healthz ...
	I0914 18:08:30.737338   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 18:08:30.737792   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | unable to find current IP address of domain old-k8s-version-556121 in network mk-old-k8s-version-556121
	I0914 18:08:30.737844   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | I0914 18:08:30.737738   63916 retry.go:31] will retry after 1.803773185s: waiting for machine to come up
	I0914 18:08:32.543684   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 18:08:32.544156   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | unable to find current IP address of domain old-k8s-version-556121 in network mk-old-k8s-version-556121
	I0914 18:08:32.544182   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | I0914 18:08:32.544107   63916 retry.go:31] will retry after 1.828120666s: waiting for machine to come up
	I0914 18:08:34.373701   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 18:08:34.374182   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | unable to find current IP address of domain old-k8s-version-556121 in network mk-old-k8s-version-556121
	I0914 18:08:34.374211   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | I0914 18:08:34.374120   63916 retry.go:31] will retry after 2.720782735s: waiting for machine to come up
	I0914 18:08:35.687169   62554 api_server.go:279] https://192.168.50.126:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0914 18:08:35.687200   62554 api_server.go:103] status: https://192.168.50.126:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0914 18:08:35.687221   62554 api_server.go:253] Checking apiserver healthz at https://192.168.50.126:8443/healthz ...
	I0914 18:08:35.737352   62554 api_server.go:279] https://192.168.50.126:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0914 18:08:35.737410   62554 api_server.go:103] status: https://192.168.50.126:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0914 18:08:36.146777   62554 api_server.go:253] Checking apiserver healthz at https://192.168.50.126:8443/healthz ...
	I0914 18:08:36.151156   62554 api_server.go:279] https://192.168.50.126:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0914 18:08:36.151185   62554 api_server.go:103] status: https://192.168.50.126:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0914 18:08:36.647380   62554 api_server.go:253] Checking apiserver healthz at https://192.168.50.126:8443/healthz ...
	I0914 18:08:36.655444   62554 api_server.go:279] https://192.168.50.126:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0914 18:08:36.655477   62554 api_server.go:103] status: https://192.168.50.126:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0914 18:08:37.146971   62554 api_server.go:253] Checking apiserver healthz at https://192.168.50.126:8443/healthz ...
	I0914 18:08:37.151233   62554 api_server.go:279] https://192.168.50.126:8443/healthz returned 200:
	ok
	I0914 18:08:37.160642   62554 api_server.go:141] control plane version: v1.31.1
	I0914 18:08:37.160671   62554 api_server.go:131] duration metric: took 4.014146932s to wait for apiserver health ...
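
The wait above polls https://192.168.50.126:8443/healthz, tolerating the connection-refused, 403 (anonymous user), and 500 (rbac/bootstrap-roles still settling) responses until the endpoint finally returns 200 "ok". A minimal Go sketch of that probe loop, assuming nothing beyond the standard library; TLS verification is skipped because the probe runs before client credentials are configured, and the function name is illustrative.

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    func waitForHealthz(url string, timeout time.Duration) error {
        client := &http.Client{
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
            Timeout:   2 * time.Second,
        }
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                code := resp.StatusCode
                resp.Body.Close()
                if code == http.StatusOK {
                    return nil // body is literally "ok" once the control plane is serving
                }
                // 403/500 while bootstrap roles and priority classes settle, as seen above.
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("apiserver at %s not healthy after %s", url, timeout)
    }

    func main() {
        fmt.Println(waitForHealthz("https://192.168.50.126:8443/healthz", 60*time.Second))
    }
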
	I0914 18:08:37.160679   62554 cni.go:84] Creating CNI manager for ""
	I0914 18:08:37.160686   62554 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0914 18:08:37.162836   62554 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0914 18:08:37.164378   62554 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0914 18:08:37.183377   62554 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0914 18:08:37.210701   62554 system_pods.go:43] waiting for kube-system pods to appear ...
	I0914 18:08:37.222258   62554 system_pods.go:59] 8 kube-system pods found
	I0914 18:08:37.222304   62554 system_pods.go:61] "coredns-7c65d6cfc9-59dm5" [55e67ff8-cf54-41fc-af46-160085787f69] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0914 18:08:37.222316   62554 system_pods.go:61] "etcd-embed-certs-044534" [932ca8e3-a777-4bb3-bdc2-6c1f1d293d4a] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0914 18:08:37.222331   62554 system_pods.go:61] "kube-apiserver-embed-certs-044534" [f71e6720-c32c-426f-8620-b56eadf5e33b] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0914 18:08:37.222351   62554 system_pods.go:61] "kube-controller-manager-embed-certs-044534" [b93c261f-303f-43bb-8b33-4f97dc287809] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0914 18:08:37.222359   62554 system_pods.go:61] "kube-proxy-nkdth" [3762b613-c50f-4ba9-af52-371b139f9b6e] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0914 18:08:37.222368   62554 system_pods.go:61] "kube-scheduler-embed-certs-044534" [65da2ca2-0405-4726-a2dc-dd13519c336a] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0914 18:08:37.222377   62554 system_pods.go:61] "metrics-server-6867b74b74-stwfz" [ccc73057-4710-4e41-b643-d793d9b01175] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 18:08:37.222393   62554 system_pods.go:61] "storage-provisioner" [660fd3e3-ce57-4275-9fe1-bcceba75d8a0] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0914 18:08:37.222405   62554 system_pods.go:74] duration metric: took 11.676128ms to wait for pod list to return data ...
	I0914 18:08:37.222420   62554 node_conditions.go:102] verifying NodePressure condition ...
	I0914 18:08:37.227047   62554 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0914 18:08:37.227087   62554 node_conditions.go:123] node cpu capacity is 2
	I0914 18:08:37.227104   62554 node_conditions.go:105] duration metric: took 4.678826ms to run NodePressure ...
	I0914 18:08:37.227124   62554 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 18:08:37.510868   62554 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0914 18:08:37.515839   62554 kubeadm.go:739] kubelet initialised
	I0914 18:08:37.515863   62554 kubeadm.go:740] duration metric: took 4.967389ms waiting for restarted kubelet to initialise ...
	I0914 18:08:37.515871   62554 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 18:08:37.520412   62554 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-59dm5" in "kube-system" namespace to be "Ready" ...
	I0914 18:08:39.528469   62554 pod_ready.go:103] pod "coredns-7c65d6cfc9-59dm5" in "kube-system" namespace has status "Ready":"False"
	I0914 18:08:37.097976   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 18:08:37.098462   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | unable to find current IP address of domain old-k8s-version-556121 in network mk-old-k8s-version-556121
	I0914 18:08:37.098499   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | I0914 18:08:37.098402   63916 retry.go:31] will retry after 2.748765758s: waiting for machine to come up
	I0914 18:08:39.849058   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 18:08:39.849634   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | unable to find current IP address of domain old-k8s-version-556121 in network mk-old-k8s-version-556121
	I0914 18:08:39.849665   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | I0914 18:08:39.849559   63916 retry.go:31] will retry after 3.687679512s: waiting for machine to come up
	I0914 18:08:42.028017   62554 pod_ready.go:103] pod "coredns-7c65d6cfc9-59dm5" in "kube-system" namespace has status "Ready":"False"
	I0914 18:08:44.526502   62554 pod_ready.go:103] pod "coredns-7c65d6cfc9-59dm5" in "kube-system" namespace has status "Ready":"False"
	I0914 18:08:45.103061   63448 start.go:364] duration metric: took 2m4.701591278s to acquireMachinesLock for "default-k8s-diff-port-243449"
	I0914 18:08:45.103116   63448 start.go:96] Skipping create...Using existing machine configuration
	I0914 18:08:45.103124   63448 fix.go:54] fixHost starting: 
	I0914 18:08:45.103555   63448 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 18:08:45.103626   63448 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 18:08:45.120496   63448 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34337
	I0914 18:08:45.121098   63448 main.go:141] libmachine: () Calling .GetVersion
	I0914 18:08:45.122023   63448 main.go:141] libmachine: Using API Version  1
	I0914 18:08:45.122050   63448 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 18:08:45.122440   63448 main.go:141] libmachine: () Calling .GetMachineName
	I0914 18:08:45.122631   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .DriverName
	I0914 18:08:45.122792   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetState
	I0914 18:08:45.124473   63448 fix.go:112] recreateIfNeeded on default-k8s-diff-port-243449: state=Stopped err=<nil>
	I0914 18:08:45.124500   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .DriverName
	W0914 18:08:45.124633   63448 fix.go:138] unexpected machine state, will restart: <nil>
	I0914 18:08:45.126255   63448 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-243449" ...
	I0914 18:08:45.127296   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .Start
	I0914 18:08:45.127469   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Ensuring networks are active...
	I0914 18:08:45.128415   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Ensuring network default is active
	I0914 18:08:45.128823   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Ensuring network mk-default-k8s-diff-port-243449 is active
	I0914 18:08:45.129257   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Getting domain xml...
	I0914 18:08:45.130055   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Creating domain...
	I0914 18:08:43.541607   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 18:08:43.542188   62996 main.go:141] libmachine: (old-k8s-version-556121) Found IP for machine: 192.168.83.80
	I0914 18:08:43.542220   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has current primary IP address 192.168.83.80 and MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 18:08:43.542230   62996 main.go:141] libmachine: (old-k8s-version-556121) Reserving static IP address...
	I0914 18:08:43.542686   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | found host DHCP lease matching {name: "old-k8s-version-556121", mac: "52:54:00:76:25:ab", ip: "192.168.83.80"} in network mk-old-k8s-version-556121: {Iface:virbr1 ExpiryTime:2024-09-14 19:08:35 +0000 UTC Type:0 Mac:52:54:00:76:25:ab Iaid: IPaddr:192.168.83.80 Prefix:24 Hostname:old-k8s-version-556121 Clientid:01:52:54:00:76:25:ab}
	I0914 18:08:43.542711   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | skip adding static IP to network mk-old-k8s-version-556121 - found existing host DHCP lease matching {name: "old-k8s-version-556121", mac: "52:54:00:76:25:ab", ip: "192.168.83.80"}
	I0914 18:08:43.542728   62996 main.go:141] libmachine: (old-k8s-version-556121) Reserved static IP address: 192.168.83.80
	I0914 18:08:43.542748   62996 main.go:141] libmachine: (old-k8s-version-556121) Waiting for SSH to be available...
	I0914 18:08:43.542770   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | Getting to WaitForSSH function...
	I0914 18:08:43.545361   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 18:08:43.545798   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:25:ab", ip: ""} in network mk-old-k8s-version-556121: {Iface:virbr1 ExpiryTime:2024-09-14 19:08:35 +0000 UTC Type:0 Mac:52:54:00:76:25:ab Iaid: IPaddr:192.168.83.80 Prefix:24 Hostname:old-k8s-version-556121 Clientid:01:52:54:00:76:25:ab}
	I0914 18:08:43.545828   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined IP address 192.168.83.80 and MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 18:08:43.545984   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | Using SSH client type: external
	I0914 18:08:43.546021   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | Using SSH private key: /home/jenkins/minikube-integration/19643-8806/.minikube/machines/old-k8s-version-556121/id_rsa (-rw-------)
	I0914 18:08:43.546067   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.83.80 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19643-8806/.minikube/machines/old-k8s-version-556121/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0914 18:08:43.546091   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | About to run SSH command:
	I0914 18:08:43.546109   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | exit 0
	I0914 18:08:43.686605   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | SSH cmd err, output: <nil>: 
	I0914 18:08:43.687033   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetConfigRaw
	I0914 18:08:43.750102   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetIP
	I0914 18:08:43.753303   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 18:08:43.753653   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:25:ab", ip: ""} in network mk-old-k8s-version-556121: {Iface:virbr1 ExpiryTime:2024-09-14 19:08:35 +0000 UTC Type:0 Mac:52:54:00:76:25:ab Iaid: IPaddr:192.168.83.80 Prefix:24 Hostname:old-k8s-version-556121 Clientid:01:52:54:00:76:25:ab}
	I0914 18:08:43.753696   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined IP address 192.168.83.80 and MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 18:08:43.754107   62996 profile.go:143] Saving config to /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/old-k8s-version-556121/config.json ...
	I0914 18:08:43.802426   62996 machine.go:93] provisionDockerMachine start ...
	I0914 18:08:43.802497   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .DriverName
	I0914 18:08:43.802858   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHHostname
	I0914 18:08:43.805944   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 18:08:43.806303   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:25:ab", ip: ""} in network mk-old-k8s-version-556121: {Iface:virbr1 ExpiryTime:2024-09-14 19:08:35 +0000 UTC Type:0 Mac:52:54:00:76:25:ab Iaid: IPaddr:192.168.83.80 Prefix:24 Hostname:old-k8s-version-556121 Clientid:01:52:54:00:76:25:ab}
	I0914 18:08:43.806346   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined IP address 192.168.83.80 and MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 18:08:43.806722   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHPort
	I0914 18:08:43.806951   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHKeyPath
	I0914 18:08:43.807130   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHKeyPath
	I0914 18:08:43.807298   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHUsername
	I0914 18:08:43.807469   62996 main.go:141] libmachine: Using SSH client type: native
	I0914 18:08:43.807687   62996 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.83.80 22 <nil> <nil>}
	I0914 18:08:43.807700   62996 main.go:141] libmachine: About to run SSH command:
	hostname
	I0914 18:08:43.906427   62996 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0914 18:08:43.906467   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetMachineName
	I0914 18:08:43.906725   62996 buildroot.go:166] provisioning hostname "old-k8s-version-556121"
	I0914 18:08:43.906787   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetMachineName
	I0914 18:08:43.906978   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHHostname
	I0914 18:08:43.909891   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 18:08:43.910262   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:25:ab", ip: ""} in network mk-old-k8s-version-556121: {Iface:virbr1 ExpiryTime:2024-09-14 19:08:35 +0000 UTC Type:0 Mac:52:54:00:76:25:ab Iaid: IPaddr:192.168.83.80 Prefix:24 Hostname:old-k8s-version-556121 Clientid:01:52:54:00:76:25:ab}
	I0914 18:08:43.910295   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined IP address 192.168.83.80 and MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 18:08:43.910545   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHPort
	I0914 18:08:43.910771   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHKeyPath
	I0914 18:08:43.910908   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHKeyPath
	I0914 18:08:43.911062   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHUsername
	I0914 18:08:43.911221   62996 main.go:141] libmachine: Using SSH client type: native
	I0914 18:08:43.911418   62996 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.83.80 22 <nil> <nil>}
	I0914 18:08:43.911430   62996 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-556121 && echo "old-k8s-version-556121" | sudo tee /etc/hostname
	I0914 18:08:44.028748   62996 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-556121
	
	I0914 18:08:44.028774   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHHostname
	I0914 18:08:44.031512   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 18:08:44.031824   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:25:ab", ip: ""} in network mk-old-k8s-version-556121: {Iface:virbr1 ExpiryTime:2024-09-14 19:08:35 +0000 UTC Type:0 Mac:52:54:00:76:25:ab Iaid: IPaddr:192.168.83.80 Prefix:24 Hostname:old-k8s-version-556121 Clientid:01:52:54:00:76:25:ab}
	I0914 18:08:44.031848   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined IP address 192.168.83.80 and MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 18:08:44.032009   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHPort
	I0914 18:08:44.032145   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHKeyPath
	I0914 18:08:44.032311   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHKeyPath
	I0914 18:08:44.032445   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHUsername
	I0914 18:08:44.032583   62996 main.go:141] libmachine: Using SSH client type: native
	I0914 18:08:44.032792   62996 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.83.80 22 <nil> <nil>}
	I0914 18:08:44.032809   62996 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-556121' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-556121/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-556121' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0914 18:08:44.140041   62996 main.go:141] libmachine: SSH cmd err, output: <nil>: 
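	The hosts-file snippet above is idempotent: the outer grep -x only fires when no whole line in /etc/hosts already ends in the node name. As a rough illustration (assuming the guest image ships a 127.0.1.1 entry, which this log does not show), the sed branch rewrites it in place to
		127.0.1.1 old-k8s-version-556121
	while on an image without such an entry the tee -a branch appends that same line instead.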
	I0914 18:08:44.140068   62996 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19643-8806/.minikube CaCertPath:/home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19643-8806/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19643-8806/.minikube}
	I0914 18:08:44.140094   62996 buildroot.go:174] setting up certificates
	I0914 18:08:44.140103   62996 provision.go:84] configureAuth start
	I0914 18:08:44.140111   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetMachineName
	I0914 18:08:44.140439   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetIP
	I0914 18:08:44.143050   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 18:08:44.143454   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:25:ab", ip: ""} in network mk-old-k8s-version-556121: {Iface:virbr1 ExpiryTime:2024-09-14 19:08:35 +0000 UTC Type:0 Mac:52:54:00:76:25:ab Iaid: IPaddr:192.168.83.80 Prefix:24 Hostname:old-k8s-version-556121 Clientid:01:52:54:00:76:25:ab}
	I0914 18:08:44.143492   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined IP address 192.168.83.80 and MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 18:08:44.143678   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHHostname
	I0914 18:08:44.146487   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 18:08:44.146947   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:25:ab", ip: ""} in network mk-old-k8s-version-556121: {Iface:virbr1 ExpiryTime:2024-09-14 19:08:35 +0000 UTC Type:0 Mac:52:54:00:76:25:ab Iaid: IPaddr:192.168.83.80 Prefix:24 Hostname:old-k8s-version-556121 Clientid:01:52:54:00:76:25:ab}
	I0914 18:08:44.146971   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined IP address 192.168.83.80 and MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 18:08:44.147147   62996 provision.go:143] copyHostCerts
	I0914 18:08:44.147213   62996 exec_runner.go:144] found /home/jenkins/minikube-integration/19643-8806/.minikube/ca.pem, removing ...
	I0914 18:08:44.147224   62996 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19643-8806/.minikube/ca.pem
	I0914 18:08:44.147287   62996 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19643-8806/.minikube/ca.pem (1082 bytes)
	I0914 18:08:44.147440   62996 exec_runner.go:144] found /home/jenkins/minikube-integration/19643-8806/.minikube/cert.pem, removing ...
	I0914 18:08:44.147450   62996 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19643-8806/.minikube/cert.pem
	I0914 18:08:44.147475   62996 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19643-8806/.minikube/cert.pem (1123 bytes)
	I0914 18:08:44.147530   62996 exec_runner.go:144] found /home/jenkins/minikube-integration/19643-8806/.minikube/key.pem, removing ...
	I0914 18:08:44.147538   62996 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19643-8806/.minikube/key.pem
	I0914 18:08:44.147558   62996 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19643-8806/.minikube/key.pem (1675 bytes)
	I0914 18:08:44.147613   62996 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19643-8806/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-556121 san=[127.0.0.1 192.168.83.80 localhost minikube old-k8s-version-556121]
	I0914 18:08:44.500305   62996 provision.go:177] copyRemoteCerts
	I0914 18:08:44.500395   62996 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0914 18:08:44.500430   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHHostname
	I0914 18:08:44.503376   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 18:08:44.503790   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:25:ab", ip: ""} in network mk-old-k8s-version-556121: {Iface:virbr1 ExpiryTime:2024-09-14 19:08:35 +0000 UTC Type:0 Mac:52:54:00:76:25:ab Iaid: IPaddr:192.168.83.80 Prefix:24 Hostname:old-k8s-version-556121 Clientid:01:52:54:00:76:25:ab}
	I0914 18:08:44.503828   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined IP address 192.168.83.80 and MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 18:08:44.503972   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHPort
	I0914 18:08:44.504194   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHKeyPath
	I0914 18:08:44.504352   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHUsername
	I0914 18:08:44.504531   62996 sshutil.go:53] new ssh client: &{IP:192.168.83.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/old-k8s-version-556121/id_rsa Username:docker}
	I0914 18:08:44.584362   62996 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0914 18:08:44.607734   62996 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0914 18:08:44.630267   62996 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0914 18:08:44.653997   62996 provision.go:87] duration metric: took 513.857804ms to configureAuth
	I0914 18:08:44.654029   62996 buildroot.go:189] setting minikube options for container-runtime
	I0914 18:08:44.654259   62996 config.go:182] Loaded profile config "old-k8s-version-556121": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0914 18:08:44.654338   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHHostname
	I0914 18:08:44.657020   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 18:08:44.657416   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:25:ab", ip: ""} in network mk-old-k8s-version-556121: {Iface:virbr1 ExpiryTime:2024-09-14 19:08:35 +0000 UTC Type:0 Mac:52:54:00:76:25:ab Iaid: IPaddr:192.168.83.80 Prefix:24 Hostname:old-k8s-version-556121 Clientid:01:52:54:00:76:25:ab}
	I0914 18:08:44.657442   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined IP address 192.168.83.80 and MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 18:08:44.657676   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHPort
	I0914 18:08:44.657884   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHKeyPath
	I0914 18:08:44.658047   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHKeyPath
	I0914 18:08:44.658228   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHUsername
	I0914 18:08:44.658382   62996 main.go:141] libmachine: Using SSH client type: native
	I0914 18:08:44.658584   62996 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.83.80 22 <nil> <nil>}
	I0914 18:08:44.658602   62996 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0914 18:08:44.877074   62996 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0914 18:08:44.877103   62996 machine.go:96] duration metric: took 1.074648772s to provisionDockerMachine
	I0914 18:08:44.877117   62996 start.go:293] postStartSetup for "old-k8s-version-556121" (driver="kvm2")
	I0914 18:08:44.877128   62996 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0914 18:08:44.877155   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .DriverName
	I0914 18:08:44.877491   62996 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0914 18:08:44.877522   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHHostname
	I0914 18:08:44.880792   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 18:08:44.881167   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:25:ab", ip: ""} in network mk-old-k8s-version-556121: {Iface:virbr1 ExpiryTime:2024-09-14 19:08:35 +0000 UTC Type:0 Mac:52:54:00:76:25:ab Iaid: IPaddr:192.168.83.80 Prefix:24 Hostname:old-k8s-version-556121 Clientid:01:52:54:00:76:25:ab}
	I0914 18:08:44.881197   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined IP address 192.168.83.80 and MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 18:08:44.881472   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHPort
	I0914 18:08:44.881693   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHKeyPath
	I0914 18:08:44.881853   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHUsername
	I0914 18:08:44.881984   62996 sshutil.go:53] new ssh client: &{IP:192.168.83.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/old-k8s-version-556121/id_rsa Username:docker}
	I0914 18:08:44.961211   62996 ssh_runner.go:195] Run: cat /etc/os-release
	I0914 18:08:44.965472   62996 info.go:137] Remote host: Buildroot 2023.02.9
	I0914 18:08:44.965507   62996 filesync.go:126] Scanning /home/jenkins/minikube-integration/19643-8806/.minikube/addons for local assets ...
	I0914 18:08:44.965583   62996 filesync.go:126] Scanning /home/jenkins/minikube-integration/19643-8806/.minikube/files for local assets ...
	I0914 18:08:44.965671   62996 filesync.go:149] local asset: /home/jenkins/minikube-integration/19643-8806/.minikube/files/etc/ssl/certs/160162.pem -> 160162.pem in /etc/ssl/certs
	I0914 18:08:44.965765   62996 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0914 18:08:44.975476   62996 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/files/etc/ssl/certs/160162.pem --> /etc/ssl/certs/160162.pem (1708 bytes)
	I0914 18:08:45.000248   62996 start.go:296] duration metric: took 123.115178ms for postStartSetup
	I0914 18:08:45.000299   62996 fix.go:56] duration metric: took 20.85719914s for fixHost
	I0914 18:08:45.000326   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHHostname
	I0914 18:08:45.002894   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 18:08:45.003216   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:25:ab", ip: ""} in network mk-old-k8s-version-556121: {Iface:virbr1 ExpiryTime:2024-09-14 19:08:35 +0000 UTC Type:0 Mac:52:54:00:76:25:ab Iaid: IPaddr:192.168.83.80 Prefix:24 Hostname:old-k8s-version-556121 Clientid:01:52:54:00:76:25:ab}
	I0914 18:08:45.003247   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined IP address 192.168.83.80 and MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 18:08:45.003407   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHPort
	I0914 18:08:45.003585   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHKeyPath
	I0914 18:08:45.003749   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHKeyPath
	I0914 18:08:45.003880   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHUsername
	I0914 18:08:45.004041   62996 main.go:141] libmachine: Using SSH client type: native
	I0914 18:08:45.004211   62996 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.83.80 22 <nil> <nil>}
	I0914 18:08:45.004221   62996 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0914 18:08:45.102905   62996 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726337325.064071007
	
	I0914 18:08:45.102933   62996 fix.go:216] guest clock: 1726337325.064071007
	I0914 18:08:45.102944   62996 fix.go:229] Guest: 2024-09-14 18:08:45.064071007 +0000 UTC Remote: 2024-09-14 18:08:45.000305051 +0000 UTC m=+219.697616364 (delta=63.765956ms)
	I0914 18:08:45.102967   62996 fix.go:200] guest clock delta is within tolerance: 63.765956ms
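	The delta reported by fix.go above is simply the guest clock minus the local clock, both sampled around the same instant:
		1726337325.064071007 - 1726337325.000305051 = 0.063765956 s ≈ 63.77 ms
	which is the 63.765956ms figure that fix.go:200 accepts as within tolerance, so provisioning simply continues.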
	I0914 18:08:45.102973   62996 start.go:83] releasing machines lock for "old-k8s-version-556121", held for 20.959903428s
	I0914 18:08:45.102999   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .DriverName
	I0914 18:08:45.103277   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetIP
	I0914 18:08:45.105995   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 18:08:45.106435   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:25:ab", ip: ""} in network mk-old-k8s-version-556121: {Iface:virbr1 ExpiryTime:2024-09-14 19:08:35 +0000 UTC Type:0 Mac:52:54:00:76:25:ab Iaid: IPaddr:192.168.83.80 Prefix:24 Hostname:old-k8s-version-556121 Clientid:01:52:54:00:76:25:ab}
	I0914 18:08:45.106463   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined IP address 192.168.83.80 and MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 18:08:45.106684   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .DriverName
	I0914 18:08:45.107224   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .DriverName
	I0914 18:08:45.107415   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .DriverName
	I0914 18:08:45.107506   62996 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0914 18:08:45.107556   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHHostname
	I0914 18:08:45.107675   62996 ssh_runner.go:195] Run: cat /version.json
	I0914 18:08:45.107699   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHHostname
	I0914 18:08:45.110528   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 18:08:45.110558   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 18:08:45.110917   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:25:ab", ip: ""} in network mk-old-k8s-version-556121: {Iface:virbr1 ExpiryTime:2024-09-14 19:08:35 +0000 UTC Type:0 Mac:52:54:00:76:25:ab Iaid: IPaddr:192.168.83.80 Prefix:24 Hostname:old-k8s-version-556121 Clientid:01:52:54:00:76:25:ab}
	I0914 18:08:45.110947   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:25:ab", ip: ""} in network mk-old-k8s-version-556121: {Iface:virbr1 ExpiryTime:2024-09-14 19:08:35 +0000 UTC Type:0 Mac:52:54:00:76:25:ab Iaid: IPaddr:192.168.83.80 Prefix:24 Hostname:old-k8s-version-556121 Clientid:01:52:54:00:76:25:ab}
	I0914 18:08:45.110969   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined IP address 192.168.83.80 and MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 18:08:45.111062   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined IP address 192.168.83.80 and MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 18:08:45.111157   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHPort
	I0914 18:08:45.111388   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHKeyPath
	I0914 18:08:45.111407   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHPort
	I0914 18:08:45.111564   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHUsername
	I0914 18:08:45.111582   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHKeyPath
	I0914 18:08:45.111716   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHUsername
	I0914 18:08:45.111758   62996 sshutil.go:53] new ssh client: &{IP:192.168.83.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/old-k8s-version-556121/id_rsa Username:docker}
	I0914 18:08:45.111829   62996 sshutil.go:53] new ssh client: &{IP:192.168.83.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/old-k8s-version-556121/id_rsa Username:docker}
	I0914 18:08:45.187315   62996 ssh_runner.go:195] Run: systemctl --version
	I0914 18:08:45.222737   62996 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0914 18:08:45.372449   62996 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0914 18:08:45.378337   62996 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0914 18:08:45.378395   62996 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0914 18:08:45.396041   62996 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0914 18:08:45.396072   62996 start.go:495] detecting cgroup driver to use...
	I0914 18:08:45.396148   62996 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0914 18:08:45.413530   62996 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0914 18:08:45.428876   62996 docker.go:217] disabling cri-docker service (if available) ...
	I0914 18:08:45.428950   62996 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0914 18:08:45.444066   62996 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0914 18:08:45.458976   62996 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0914 18:08:45.591808   62996 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0914 18:08:45.737299   62996 docker.go:233] disabling docker service ...
	I0914 18:08:45.737382   62996 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0914 18:08:45.752471   62996 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0914 18:08:45.770192   62996 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0914 18:08:45.923691   62996 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0914 18:08:46.054919   62996 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0914 18:08:46.068923   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0914 18:08:46.089366   62996 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0914 18:08:46.089441   62996 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 18:08:46.100025   62996 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0914 18:08:46.100100   62996 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 18:08:46.111015   62996 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 18:08:46.123133   62996 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
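	Taken together, the sed edits above should leave /etc/crio/crio.conf.d/02-crio.conf with roughly the following keys (a sketch of the expected end state, assuming the stock drop-in already carried pause_image and cgroup_manager lines for the seds to rewrite):
		pause_image = "registry.k8s.io/pause:3.2"
		cgroup_manager = "cgroupfs"
		conmon_cgroup = "pod"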
	I0914 18:08:46.135582   62996 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0914 18:08:46.146937   62996 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0914 18:08:46.158542   62996 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0914 18:08:46.158618   62996 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0914 18:08:46.178181   62996 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0914 18:08:46.188291   62996 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 18:08:46.316875   62996 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0914 18:08:46.407391   62996 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0914 18:08:46.407470   62996 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0914 18:08:46.412103   62996 start.go:563] Will wait 60s for crictl version
	I0914 18:08:46.412164   62996 ssh_runner.go:195] Run: which crictl
	I0914 18:08:46.415903   62996 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0914 18:08:46.457124   62996 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0914 18:08:46.457224   62996 ssh_runner.go:195] Run: crio --version
	I0914 18:08:46.485380   62996 ssh_runner.go:195] Run: crio --version
	I0914 18:08:46.513525   62996 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0914 18:08:46.027201   62554 pod_ready.go:93] pod "coredns-7c65d6cfc9-59dm5" in "kube-system" namespace has status "Ready":"True"
	I0914 18:08:46.027223   62554 pod_ready.go:82] duration metric: took 8.506784658s for pod "coredns-7c65d6cfc9-59dm5" in "kube-system" namespace to be "Ready" ...
	I0914 18:08:46.027232   62554 pod_ready.go:79] waiting up to 4m0s for pod "etcd-embed-certs-044534" in "kube-system" namespace to be "Ready" ...
	I0914 18:08:47.043468   62554 pod_ready.go:93] pod "etcd-embed-certs-044534" in "kube-system" namespace has status "Ready":"True"
	I0914 18:08:47.043499   62554 pod_ready.go:82] duration metric: took 1.016259668s for pod "etcd-embed-certs-044534" in "kube-system" namespace to be "Ready" ...
	I0914 18:08:47.043513   62554 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-embed-certs-044534" in "kube-system" namespace to be "Ready" ...
	I0914 18:08:47.050825   62554 pod_ready.go:93] pod "kube-apiserver-embed-certs-044534" in "kube-system" namespace has status "Ready":"True"
	I0914 18:08:47.050853   62554 pod_ready.go:82] duration metric: took 7.332421ms for pod "kube-apiserver-embed-certs-044534" in "kube-system" namespace to be "Ready" ...
	I0914 18:08:47.050869   62554 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-044534" in "kube-system" namespace to be "Ready" ...
	I0914 18:08:47.561389   62554 pod_ready.go:93] pod "kube-controller-manager-embed-certs-044534" in "kube-system" namespace has status "Ready":"True"
	I0914 18:08:47.561419   62554 pod_ready.go:82] duration metric: took 510.541663ms for pod "kube-controller-manager-embed-certs-044534" in "kube-system" namespace to be "Ready" ...
	I0914 18:08:47.561434   62554 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-nkdth" in "kube-system" namespace to be "Ready" ...
	I0914 18:08:47.568265   62554 pod_ready.go:93] pod "kube-proxy-nkdth" in "kube-system" namespace has status "Ready":"True"
	I0914 18:08:47.568298   62554 pod_ready.go:82] duration metric: took 6.854878ms for pod "kube-proxy-nkdth" in "kube-system" namespace to be "Ready" ...
	I0914 18:08:47.568312   62554 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-embed-certs-044534" in "kube-system" namespace to be "Ready" ...
	I0914 18:08:48.575898   62554 pod_ready.go:93] pod "kube-scheduler-embed-certs-044534" in "kube-system" namespace has status "Ready":"True"
	I0914 18:08:48.575924   62554 pod_ready.go:82] duration metric: took 1.00760412s for pod "kube-scheduler-embed-certs-044534" in "kube-system" namespace to be "Ready" ...
	I0914 18:08:48.575934   62554 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace to be "Ready" ...
	I0914 18:08:46.464001   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Waiting to get IP...
	I0914 18:08:46.465004   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | domain default-k8s-diff-port-243449 has defined MAC address 52:54:00:6e:0b:a7 in network mk-default-k8s-diff-port-243449
	I0914 18:08:46.465408   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | unable to find current IP address of domain default-k8s-diff-port-243449 in network mk-default-k8s-diff-port-243449
	I0914 18:08:46.465512   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | I0914 18:08:46.465391   64066 retry.go:31] will retry after 283.185405ms: waiting for machine to come up
	I0914 18:08:46.751155   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | domain default-k8s-diff-port-243449 has defined MAC address 52:54:00:6e:0b:a7 in network mk-default-k8s-diff-port-243449
	I0914 18:08:46.751669   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | unable to find current IP address of domain default-k8s-diff-port-243449 in network mk-default-k8s-diff-port-243449
	I0914 18:08:46.751697   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | I0914 18:08:46.751622   64066 retry.go:31] will retry after 307.273139ms: waiting for machine to come up
	I0914 18:08:47.060812   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | domain default-k8s-diff-port-243449 has defined MAC address 52:54:00:6e:0b:a7 in network mk-default-k8s-diff-port-243449
	I0914 18:08:47.061855   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | unable to find current IP address of domain default-k8s-diff-port-243449 in network mk-default-k8s-diff-port-243449
	I0914 18:08:47.061889   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | I0914 18:08:47.061749   64066 retry.go:31] will retry after 420.077307ms: waiting for machine to come up
	I0914 18:08:47.483188   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | domain default-k8s-diff-port-243449 has defined MAC address 52:54:00:6e:0b:a7 in network mk-default-k8s-diff-port-243449
	I0914 18:08:47.483611   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | unable to find current IP address of domain default-k8s-diff-port-243449 in network mk-default-k8s-diff-port-243449
	I0914 18:08:47.483656   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | I0914 18:08:47.483567   64066 retry.go:31] will retry after 562.15435ms: waiting for machine to come up
	I0914 18:08:48.047428   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | domain default-k8s-diff-port-243449 has defined MAC address 52:54:00:6e:0b:a7 in network mk-default-k8s-diff-port-243449
	I0914 18:08:48.047944   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | unable to find current IP address of domain default-k8s-diff-port-243449 in network mk-default-k8s-diff-port-243449
	I0914 18:08:48.047971   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | I0914 18:08:48.047867   64066 retry.go:31] will retry after 744.523152ms: waiting for machine to come up
	I0914 18:08:48.793959   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | domain default-k8s-diff-port-243449 has defined MAC address 52:54:00:6e:0b:a7 in network mk-default-k8s-diff-port-243449
	I0914 18:08:48.794449   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | unable to find current IP address of domain default-k8s-diff-port-243449 in network mk-default-k8s-diff-port-243449
	I0914 18:08:48.794492   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | I0914 18:08:48.794393   64066 retry.go:31] will retry after 813.631617ms: waiting for machine to come up
	I0914 18:08:49.609483   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | domain default-k8s-diff-port-243449 has defined MAC address 52:54:00:6e:0b:a7 in network mk-default-k8s-diff-port-243449
	I0914 18:08:49.609946   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | unable to find current IP address of domain default-k8s-diff-port-243449 in network mk-default-k8s-diff-port-243449
	I0914 18:08:49.609969   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | I0914 18:08:49.609904   64066 retry.go:31] will retry after 941.244861ms: waiting for machine to come up
	I0914 18:08:46.515031   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetIP
	I0914 18:08:46.517851   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 18:08:46.518301   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:25:ab", ip: ""} in network mk-old-k8s-version-556121: {Iface:virbr1 ExpiryTime:2024-09-14 19:08:35 +0000 UTC Type:0 Mac:52:54:00:76:25:ab Iaid: IPaddr:192.168.83.80 Prefix:24 Hostname:old-k8s-version-556121 Clientid:01:52:54:00:76:25:ab}
	I0914 18:08:46.518329   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined IP address 192.168.83.80 and MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 18:08:46.518560   62996 ssh_runner.go:195] Run: grep 192.168.83.1	host.minikube.internal$ /etc/hosts
	I0914 18:08:46.522559   62996 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.83.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0914 18:08:46.536122   62996 kubeadm.go:883] updating cluster {Name:old-k8s-version-556121 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19643/minikube-v1.34.0-1726281733-19643-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726281268-19643@sha256:cce8be0c1ac4e3d852132008ef1cc1dcf5b79f708d025db83f146ae65db32e8e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-556121 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.83.80 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0914 18:08:46.536233   62996 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0914 18:08:46.536272   62996 ssh_runner.go:195] Run: sudo crictl images --output json
	I0914 18:08:46.582326   62996 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0914 18:08:46.582385   62996 ssh_runner.go:195] Run: which lz4
	I0914 18:08:46.586381   62996 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0914 18:08:46.590252   62996 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0914 18:08:46.590302   62996 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0914 18:08:48.262036   62996 crio.go:462] duration metric: took 1.6757003s to copy over tarball
	I0914 18:08:48.262113   62996 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0914 18:08:50.583860   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:08:52.826559   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:08:50.553210   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | domain default-k8s-diff-port-243449 has defined MAC address 52:54:00:6e:0b:a7 in network mk-default-k8s-diff-port-243449
	I0914 18:08:50.553735   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | unable to find current IP address of domain default-k8s-diff-port-243449 in network mk-default-k8s-diff-port-243449
	I0914 18:08:50.553764   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | I0914 18:08:50.553671   64066 retry.go:31] will retry after 1.107692241s: waiting for machine to come up
	I0914 18:08:51.663218   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | domain default-k8s-diff-port-243449 has defined MAC address 52:54:00:6e:0b:a7 in network mk-default-k8s-diff-port-243449
	I0914 18:08:51.663723   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | unable to find current IP address of domain default-k8s-diff-port-243449 in network mk-default-k8s-diff-port-243449
	I0914 18:08:51.663753   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | I0914 18:08:51.663681   64066 retry.go:31] will retry after 1.357435642s: waiting for machine to come up
	I0914 18:08:53.022246   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | domain default-k8s-diff-port-243449 has defined MAC address 52:54:00:6e:0b:a7 in network mk-default-k8s-diff-port-243449
	I0914 18:08:53.022695   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | unable to find current IP address of domain default-k8s-diff-port-243449 in network mk-default-k8s-diff-port-243449
	I0914 18:08:53.022726   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | I0914 18:08:53.022628   64066 retry.go:31] will retry after 2.045434586s: waiting for machine to come up
	I0914 18:08:55.070946   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | domain default-k8s-diff-port-243449 has defined MAC address 52:54:00:6e:0b:a7 in network mk-default-k8s-diff-port-243449
	I0914 18:08:55.071420   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | unable to find current IP address of domain default-k8s-diff-port-243449 in network mk-default-k8s-diff-port-243449
	I0914 18:08:55.071450   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | I0914 18:08:55.071362   64066 retry.go:31] will retry after 2.084823885s: waiting for machine to come up
	I0914 18:08:51.259991   62996 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.997823346s)
	I0914 18:08:51.260027   62996 crio.go:469] duration metric: took 2.997963105s to extract the tarball
	I0914 18:08:51.260037   62996 ssh_runner.go:146] rm: /preloaded.tar.lz4
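	As a rough sanity check on the two duration metrics above, the 473,237,281-byte preload tarball moved at approximately
		473237281 B / 1.6757 s ≈ 282 MB/s   (scp over the host-to-guest libvirt network)
		473237281 B / 2.9978 s ≈ 158 MB/s   (lz4 decompress + tar extract into /var)
	so the on-disk extraction, not the network copy, dominates the roughly 4.7 s preload step.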
	I0914 18:08:51.303210   62996 ssh_runner.go:195] Run: sudo crictl images --output json
	I0914 18:08:51.337655   62996 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0914 18:08:51.337685   62996 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0914 18:08:51.337793   62996 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0914 18:08:51.337910   62996 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0914 18:08:51.337941   62996 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0914 18:08:51.337950   62996 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0914 18:08:51.337800   62996 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0914 18:08:51.337803   62996 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0914 18:08:51.337791   62996 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 18:08:51.337823   62996 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0914 18:08:51.339846   62996 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0914 18:08:51.339855   62996 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0914 18:08:51.339875   62996 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0914 18:08:51.339865   62996 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 18:08:51.339901   62996 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0914 18:08:51.339935   62996 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0914 18:08:51.339958   62996 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0914 18:08:51.339949   62996 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0914 18:08:51.528665   62996 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0914 18:08:51.570817   62996 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0914 18:08:51.575861   62996 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0914 18:08:51.575917   62996 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0914 18:08:51.575968   62996 ssh_runner.go:195] Run: which crictl
	I0914 18:08:51.576612   62996 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0914 18:08:51.577804   62996 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0914 18:08:51.578496   62996 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0914 18:08:51.581833   62996 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0914 18:08:51.613046   62996 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0914 18:08:51.724554   62996 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0914 18:08:51.724608   62996 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0914 18:08:51.724611   62996 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0914 18:08:51.724713   62996 ssh_runner.go:195] Run: which crictl
	I0914 18:08:51.757578   62996 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0914 18:08:51.757628   62996 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0914 18:08:51.757677   62996 ssh_runner.go:195] Run: which crictl
	I0914 18:08:51.772578   62996 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0914 18:08:51.772597   62996 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0914 18:08:51.772629   62996 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0914 18:08:51.772634   62996 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0914 18:08:51.772659   62996 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0914 18:08:51.772690   62996 ssh_runner.go:195] Run: which crictl
	I0914 18:08:51.772704   62996 ssh_runner.go:195] Run: which crictl
	I0914 18:08:51.772633   62996 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0914 18:08:51.772748   62996 ssh_runner.go:195] Run: which crictl
	I0914 18:08:51.790305   62996 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0914 18:08:51.790442   62996 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0914 18:08:51.790492   62996 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0914 18:08:51.790534   62996 ssh_runner.go:195] Run: which crictl
	I0914 18:08:51.799286   62996 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0914 18:08:51.799338   62996 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0914 18:08:51.799395   62996 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0914 18:08:51.799446   62996 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0914 18:08:51.799486   62996 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0914 18:08:51.937830   62996 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0914 18:08:51.937839   62996 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0914 18:08:51.937918   62996 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0914 18:08:51.940605   62996 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0914 18:08:51.940670   62996 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0914 18:08:51.940723   62996 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0914 18:08:51.962218   62996 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0914 18:08:52.063106   62996 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0914 18:08:52.112424   62996 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0914 18:08:52.112498   62996 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0914 18:08:52.112521   62996 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0914 18:08:52.112602   62996 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19643-8806/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0914 18:08:52.112608   62996 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0914 18:08:52.112737   62996 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0914 18:08:52.149523   62996 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19643-8806/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0914 18:08:52.230998   62996 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0914 18:08:52.231015   62996 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19643-8806/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0914 18:08:52.234715   62996 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19643-8806/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0914 18:08:52.234737   62996 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19643-8806/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0914 18:08:52.234813   62996 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19643-8806/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0914 18:08:52.268145   62996 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19643-8806/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0914 18:08:52.500688   62996 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 18:08:52.641559   62996 cache_images.go:92] duration metric: took 1.303851383s to LoadCachedImages
	W0914 18:08:52.641671   62996 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19643-8806/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0: no such file or directory
	I0914 18:08:52.641690   62996 kubeadm.go:934] updating node { 192.168.83.80 8443 v1.20.0 crio true true} ...
	I0914 18:08:52.641822   62996 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-556121 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.83.80
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-556121 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0914 18:08:52.641918   62996 ssh_runner.go:195] Run: crio config
	I0914 18:08:52.691852   62996 cni.go:84] Creating CNI manager for ""
	I0914 18:08:52.691878   62996 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0914 18:08:52.691888   62996 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0914 18:08:52.691906   62996 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.83.80 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-556121 NodeName:old-k8s-version-556121 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.83.80"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.83.80 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0914 18:08:52.692037   62996 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.83.80
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-556121"
	  kubeletExtraArgs:
	    node-ip: 192.168.83.80
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.83.80"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
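For reference, a minimal Go sketch of how a kubeadm ClusterConfiguration fragment like the one logged above can be rendered from a handful of profile parameters with text/template. This is illustrative only, not minikube's actual generator; the struct and field names here are hypothetical, and only the values are taken from this run.

// Illustrative sketch: render a kubeadm ClusterConfiguration fragment from a
// few parameters. Not minikube's own code; field names are hypothetical.
package main

import (
	"os"
	"text/template"
)

// clusterParams holds the handful of values that vary between profiles.
type clusterParams struct {
	AdvertiseAddress  string
	BindPort          int
	KubernetesVersion string
	PodSubnet         string
	ServiceSubnet     string
	ClusterName       string
}

const clusterConfigTmpl = `apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
apiServer:
  certSANs: ["127.0.0.1", "localhost", "{{.AdvertiseAddress}}"]
controlPlaneEndpoint: control-plane.minikube.internal:{{.BindPort}}
clusterName: {{.ClusterName}}
kubernetesVersion: {{.KubernetesVersion}}
networking:
  dnsDomain: cluster.local
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceSubnet}}
`

func main() {
	p := clusterParams{
		AdvertiseAddress:  "192.168.83.80",
		BindPort:          8443,
		KubernetesVersion: "v1.20.0",
		PodSubnet:         "10.244.0.0/16",
		ServiceSubnet:     "10.96.0.0/12",
		ClusterName:       "mk",
	}
	// Render the fragment to stdout; the log above shows minikube scp-ing the
	// real result to /var/tmp/minikube/kubeadm.yaml.new on the guest.
	tmpl := template.Must(template.New("kubeadm").Parse(clusterConfigTmpl))
	if err := tmpl.Execute(os.Stdout, p); err != nil {
		panic(err)
	}
}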
	I0914 18:08:52.692122   62996 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0914 18:08:52.701735   62996 binaries.go:44] Found k8s binaries, skipping transfer
	I0914 18:08:52.701810   62996 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0914 18:08:52.711224   62996 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0914 18:08:52.728991   62996 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0914 18:08:52.746689   62996 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
	I0914 18:08:52.765724   62996 ssh_runner.go:195] Run: grep 192.168.83.80	control-plane.minikube.internal$ /etc/hosts
	I0914 18:08:52.769968   62996 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.83.80	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0914 18:08:52.782728   62996 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 18:08:52.910650   62996 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0914 18:08:52.927202   62996 certs.go:68] Setting up /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/old-k8s-version-556121 for IP: 192.168.83.80
	I0914 18:08:52.927226   62996 certs.go:194] generating shared ca certs ...
	I0914 18:08:52.927247   62996 certs.go:226] acquiring lock for ca certs: {Name:mkb663a3180967f5f94f0c355b2cd55067394331 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 18:08:52.927426   62996 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19643-8806/.minikube/ca.key
	I0914 18:08:52.927478   62996 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19643-8806/.minikube/proxy-client-ca.key
	I0914 18:08:52.927488   62996 certs.go:256] generating profile certs ...
	I0914 18:08:52.927584   62996 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/old-k8s-version-556121/client.key
	I0914 18:08:52.927642   62996 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/old-k8s-version-556121/apiserver.key.faf839ab
	I0914 18:08:52.927706   62996 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/old-k8s-version-556121/proxy-client.key
	I0914 18:08:52.927873   62996 certs.go:484] found cert: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/16016.pem (1338 bytes)
	W0914 18:08:52.927906   62996 certs.go:480] ignoring /home/jenkins/minikube-integration/19643-8806/.minikube/certs/16016_empty.pem, impossibly tiny 0 bytes
	I0914 18:08:52.927916   62996 certs.go:484] found cert: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca-key.pem (1679 bytes)
	I0914 18:08:52.927938   62996 certs.go:484] found cert: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca.pem (1082 bytes)
	I0914 18:08:52.927960   62996 certs.go:484] found cert: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/cert.pem (1123 bytes)
	I0914 18:08:52.927982   62996 certs.go:484] found cert: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/key.pem (1675 bytes)
	I0914 18:08:52.928018   62996 certs.go:484] found cert: /home/jenkins/minikube-integration/19643-8806/.minikube/files/etc/ssl/certs/160162.pem (1708 bytes)
	I0914 18:08:52.928623   62996 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0914 18:08:52.991610   62996 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0914 18:08:53.017660   62996 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0914 18:08:53.044552   62996 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0914 18:08:53.073612   62996 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/old-k8s-version-556121/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0914 18:08:53.125813   62996 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/old-k8s-version-556121/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0914 18:08:53.157202   62996 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/old-k8s-version-556121/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0914 18:08:53.201480   62996 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/old-k8s-version-556121/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0914 18:08:53.226725   62996 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/files/etc/ssl/certs/160162.pem --> /usr/share/ca-certificates/160162.pem (1708 bytes)
	I0914 18:08:53.250793   62996 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0914 18:08:53.275519   62996 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/certs/16016.pem --> /usr/share/ca-certificates/16016.pem (1338 bytes)
	I0914 18:08:53.300545   62996 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0914 18:08:53.317709   62996 ssh_runner.go:195] Run: openssl version
	I0914 18:08:53.323602   62996 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/160162.pem && ln -fs /usr/share/ca-certificates/160162.pem /etc/ssl/certs/160162.pem"
	I0914 18:08:53.335011   62996 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/160162.pem
	I0914 18:08:53.339838   62996 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 14 17:01 /usr/share/ca-certificates/160162.pem
	I0914 18:08:53.339909   62996 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/160162.pem
	I0914 18:08:53.346100   62996 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/160162.pem /etc/ssl/certs/3ec20f2e.0"
	I0914 18:08:53.359186   62996 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0914 18:08:53.370507   62996 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0914 18:08:53.375153   62996 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 14 16:44 /usr/share/ca-certificates/minikubeCA.pem
	I0914 18:08:53.375223   62996 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0914 18:08:53.380939   62996 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0914 18:08:53.392163   62996 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16016.pem && ln -fs /usr/share/ca-certificates/16016.pem /etc/ssl/certs/16016.pem"
	I0914 18:08:53.404356   62996 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16016.pem
	I0914 18:08:53.409052   62996 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 14 17:01 /usr/share/ca-certificates/16016.pem
	I0914 18:08:53.409134   62996 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16016.pem
	I0914 18:08:53.415280   62996 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16016.pem /etc/ssl/certs/51391683.0"
	I0914 18:08:53.426864   62996 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0914 18:08:53.431690   62996 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0914 18:08:53.437920   62996 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0914 18:08:53.444244   62996 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0914 18:08:53.450762   62996 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0914 18:08:53.457107   62996 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0914 18:08:53.463041   62996 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
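The `openssl x509 -checkend 86400` runs above test whether each cluster certificate expires within the next 24 hours. A minimal Go sketch of the same check with crypto/x509 is shown below; the certificate path is supplied as a command-line argument and is not taken from this run.

// Sketch: report whether a PEM-encoded certificate expires within the next 24h,
// equivalent in spirit to `openssl x509 -checkend 86400`.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func expiresWithin(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block found in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	// Expiring "within the window" means now+window falls past NotAfter.
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	if len(os.Args) < 2 {
		fmt.Fprintln(os.Stderr, "usage: checkend <cert.pem>")
		os.Exit(2)
	}
	soon, err := expiresWithin(os.Args[1], 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	if soon {
		fmt.Println("certificate will expire within 24h")
		os.Exit(1)
	}
	fmt.Println("certificate is valid for at least another 24h")
}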
	I0914 18:08:53.469401   62996 kubeadm.go:392] StartCluster: {Name:old-k8s-version-556121 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19643/minikube-v1.34.0-1726281733-19643-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726281268-19643@sha256:cce8be0c1ac4e3d852132008ef1cc1dcf5b79f708d025db83f146ae65db32e8e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-556121 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.83.80 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0914 18:08:53.469509   62996 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0914 18:08:53.469568   62996 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0914 18:08:53.508602   62996 cri.go:89] found id: ""
	I0914 18:08:53.508668   62996 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0914 18:08:53.518645   62996 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0914 18:08:53.518666   62996 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0914 18:08:53.518719   62996 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0914 18:08:53.530459   62996 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0914 18:08:53.531439   62996 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-556121" does not appear in /home/jenkins/minikube-integration/19643-8806/kubeconfig
	I0914 18:08:53.532109   62996 kubeconfig.go:62] /home/jenkins/minikube-integration/19643-8806/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-556121" cluster setting kubeconfig missing "old-k8s-version-556121" context setting]
	I0914 18:08:53.532952   62996 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19643-8806/kubeconfig: {Name:mk850f3e2f3c2ef1e81c795ae72e22e459e27140 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 18:08:53.611765   62996 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0914 18:08:53.622817   62996 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.83.80
	I0914 18:08:53.622854   62996 kubeadm.go:1160] stopping kube-system containers ...
	I0914 18:08:53.622866   62996 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0914 18:08:53.622919   62996 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0914 18:08:53.659041   62996 cri.go:89] found id: ""
	I0914 18:08:53.659191   62996 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0914 18:08:53.680543   62996 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0914 18:08:53.693835   62996 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0914 18:08:53.693854   62996 kubeadm.go:157] found existing configuration files:
	
	I0914 18:08:53.693907   62996 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0914 18:08:53.704221   62996 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0914 18:08:53.704300   62996 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0914 18:08:53.713947   62996 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0914 18:08:53.722981   62996 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0914 18:08:53.723056   62996 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0914 18:08:53.733059   62996 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0914 18:08:53.742233   62996 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0914 18:08:53.742305   62996 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0914 18:08:53.752182   62996 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0914 18:08:53.761890   62996 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0914 18:08:53.761965   62996 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0914 18:08:53.771448   62996 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0914 18:08:53.781385   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 18:08:53.911483   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 18:08:55.084673   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:08:57.582709   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:08:59.583340   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:08:57.158301   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | domain default-k8s-diff-port-243449 has defined MAC address 52:54:00:6e:0b:a7 in network mk-default-k8s-diff-port-243449
	I0914 18:08:57.158679   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | unable to find current IP address of domain default-k8s-diff-port-243449 in network mk-default-k8s-diff-port-243449
	I0914 18:08:57.158705   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | I0914 18:08:57.158640   64066 retry.go:31] will retry after 2.492994369s: waiting for machine to come up
	I0914 18:08:59.654137   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | domain default-k8s-diff-port-243449 has defined MAC address 52:54:00:6e:0b:a7 in network mk-default-k8s-diff-port-243449
	I0914 18:08:59.654550   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | unable to find current IP address of domain default-k8s-diff-port-243449 in network mk-default-k8s-diff-port-243449
	I0914 18:08:59.654585   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | I0914 18:08:59.654496   64066 retry.go:31] will retry after 3.393327124s: waiting for machine to come up
	I0914 18:08:55.409007   62996 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.497486764s)
	I0914 18:08:55.409041   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0914 18:08:55.640260   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 18:08:55.761785   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0914 18:08:55.873260   62996 api_server.go:52] waiting for apiserver process to appear ...
	I0914 18:08:55.873350   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:08:56.373512   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:08:56.874440   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:08:57.374464   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:08:57.874099   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:08:58.374014   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:08:58.873763   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:08:59.373845   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:08:59.873929   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
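The repeated `sudo pgrep -xnf kube-apiserver.*minikube.*` runs above poll roughly every 500ms for the apiserver process to appear after `kubeadm init phase etcd local`. A minimal Go sketch of that wait loop follows; the pattern and 500ms interval come from the log, while the 2-minute timeout is an assumption for illustration, and this is not minikube's own code.

// Sketch: poll for a process (via pgrep) at a fixed interval until it appears
// or a deadline passes.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func waitForProcess(pattern string, interval, timeout time.Duration) bool {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		// pgrep exits 0 when at least one process matches the pattern.
		if err := exec.Command("pgrep", "-xnf", pattern).Run(); err == nil {
			return true
		}
		time.Sleep(interval)
	}
	return false
}

func main() {
	if waitForProcess("kube-apiserver.*minikube.*", 500*time.Millisecond, 2*time.Minute) {
		fmt.Println("apiserver process appeared")
	} else {
		fmt.Println("timed out waiting for apiserver process")
	}
}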
	I0914 18:09:04.466791   62207 start.go:364] duration metric: took 54.917996405s to acquireMachinesLock for "no-preload-168587"
	I0914 18:09:04.466845   62207 start.go:96] Skipping create...Using existing machine configuration
	I0914 18:09:04.466863   62207 fix.go:54] fixHost starting: 
	I0914 18:09:04.467265   62207 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 18:09:04.467303   62207 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 18:09:04.485295   62207 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41403
	I0914 18:09:04.485680   62207 main.go:141] libmachine: () Calling .GetVersion
	I0914 18:09:04.486195   62207 main.go:141] libmachine: Using API Version  1
	I0914 18:09:04.486221   62207 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 18:09:04.486625   62207 main.go:141] libmachine: () Calling .GetMachineName
	I0914 18:09:04.486825   62207 main.go:141] libmachine: (no-preload-168587) Calling .DriverName
	I0914 18:09:04.486985   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetState
	I0914 18:09:04.488546   62207 fix.go:112] recreateIfNeeded on no-preload-168587: state=Stopped err=<nil>
	I0914 18:09:04.488584   62207 main.go:141] libmachine: (no-preload-168587) Calling .DriverName
	W0914 18:09:04.488749   62207 fix.go:138] unexpected machine state, will restart: <nil>
	I0914 18:09:04.491638   62207 out.go:177] * Restarting existing kvm2 VM for "no-preload-168587" ...
	I0914 18:09:02.082684   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:09:04.582135   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:09:03.051442   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | domain default-k8s-diff-port-243449 has defined MAC address 52:54:00:6e:0b:a7 in network mk-default-k8s-diff-port-243449
	I0914 18:09:03.051882   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | domain default-k8s-diff-port-243449 has current primary IP address 192.168.61.38 and MAC address 52:54:00:6e:0b:a7 in network mk-default-k8s-diff-port-243449
	I0914 18:09:03.051904   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Found IP for machine: 192.168.61.38
	I0914 18:09:03.051946   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Reserving static IP address...
	I0914 18:09:03.052245   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-243449", mac: "52:54:00:6e:0b:a7", ip: "192.168.61.38"} in network mk-default-k8s-diff-port-243449: {Iface:virbr4 ExpiryTime:2024-09-14 19:08:56 +0000 UTC Type:0 Mac:52:54:00:6e:0b:a7 Iaid: IPaddr:192.168.61.38 Prefix:24 Hostname:default-k8s-diff-port-243449 Clientid:01:52:54:00:6e:0b:a7}
	I0914 18:09:03.052269   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | skip adding static IP to network mk-default-k8s-diff-port-243449 - found existing host DHCP lease matching {name: "default-k8s-diff-port-243449", mac: "52:54:00:6e:0b:a7", ip: "192.168.61.38"}
	I0914 18:09:03.052280   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Reserved static IP address: 192.168.61.38
	I0914 18:09:03.052289   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Waiting for SSH to be available...
	I0914 18:09:03.052306   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | Getting to WaitForSSH function...
	I0914 18:09:03.054154   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | domain default-k8s-diff-port-243449 has defined MAC address 52:54:00:6e:0b:a7 in network mk-default-k8s-diff-port-243449
	I0914 18:09:03.054555   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:0b:a7", ip: ""} in network mk-default-k8s-diff-port-243449: {Iface:virbr4 ExpiryTime:2024-09-14 19:08:56 +0000 UTC Type:0 Mac:52:54:00:6e:0b:a7 Iaid: IPaddr:192.168.61.38 Prefix:24 Hostname:default-k8s-diff-port-243449 Clientid:01:52:54:00:6e:0b:a7}
	I0914 18:09:03.054596   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | domain default-k8s-diff-port-243449 has defined IP address 192.168.61.38 and MAC address 52:54:00:6e:0b:a7 in network mk-default-k8s-diff-port-243449
	I0914 18:09:03.054745   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | Using SSH client type: external
	I0914 18:09:03.054777   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | Using SSH private key: /home/jenkins/minikube-integration/19643-8806/.minikube/machines/default-k8s-diff-port-243449/id_rsa (-rw-------)
	I0914 18:09:03.054813   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.38 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19643-8806/.minikube/machines/default-k8s-diff-port-243449/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0914 18:09:03.054828   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | About to run SSH command:
	I0914 18:09:03.054841   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | exit 0
	I0914 18:09:03.178065   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | SSH cmd err, output: <nil>: 
	I0914 18:09:03.178576   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetConfigRaw
	I0914 18:09:03.179198   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetIP
	I0914 18:09:03.181829   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | domain default-k8s-diff-port-243449 has defined MAC address 52:54:00:6e:0b:a7 in network mk-default-k8s-diff-port-243449
	I0914 18:09:03.182220   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:0b:a7", ip: ""} in network mk-default-k8s-diff-port-243449: {Iface:virbr4 ExpiryTime:2024-09-14 19:08:56 +0000 UTC Type:0 Mac:52:54:00:6e:0b:a7 Iaid: IPaddr:192.168.61.38 Prefix:24 Hostname:default-k8s-diff-port-243449 Clientid:01:52:54:00:6e:0b:a7}
	I0914 18:09:03.182242   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | domain default-k8s-diff-port-243449 has defined IP address 192.168.61.38 and MAC address 52:54:00:6e:0b:a7 in network mk-default-k8s-diff-port-243449
	I0914 18:09:03.182541   63448 profile.go:143] Saving config to /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/default-k8s-diff-port-243449/config.json ...
	I0914 18:09:03.182773   63448 machine.go:93] provisionDockerMachine start ...
	I0914 18:09:03.182796   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .DriverName
	I0914 18:09:03.182992   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetSSHHostname
	I0914 18:09:03.185635   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | domain default-k8s-diff-port-243449 has defined MAC address 52:54:00:6e:0b:a7 in network mk-default-k8s-diff-port-243449
	I0914 18:09:03.186027   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:0b:a7", ip: ""} in network mk-default-k8s-diff-port-243449: {Iface:virbr4 ExpiryTime:2024-09-14 19:08:56 +0000 UTC Type:0 Mac:52:54:00:6e:0b:a7 Iaid: IPaddr:192.168.61.38 Prefix:24 Hostname:default-k8s-diff-port-243449 Clientid:01:52:54:00:6e:0b:a7}
	I0914 18:09:03.186056   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | domain default-k8s-diff-port-243449 has defined IP address 192.168.61.38 and MAC address 52:54:00:6e:0b:a7 in network mk-default-k8s-diff-port-243449
	I0914 18:09:03.186213   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetSSHPort
	I0914 18:09:03.186416   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetSSHKeyPath
	I0914 18:09:03.186602   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetSSHKeyPath
	I0914 18:09:03.186756   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetSSHUsername
	I0914 18:09:03.186882   63448 main.go:141] libmachine: Using SSH client type: native
	I0914 18:09:03.187123   63448 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.61.38 22 <nil> <nil>}
	I0914 18:09:03.187137   63448 main.go:141] libmachine: About to run SSH command:
	hostname
	I0914 18:09:03.290288   63448 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0914 18:09:03.290332   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetMachineName
	I0914 18:09:03.290592   63448 buildroot.go:166] provisioning hostname "default-k8s-diff-port-243449"
	I0914 18:09:03.290620   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetMachineName
	I0914 18:09:03.290779   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetSSHHostname
	I0914 18:09:03.293587   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | domain default-k8s-diff-port-243449 has defined MAC address 52:54:00:6e:0b:a7 in network mk-default-k8s-diff-port-243449
	I0914 18:09:03.293981   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:0b:a7", ip: ""} in network mk-default-k8s-diff-port-243449: {Iface:virbr4 ExpiryTime:2024-09-14 19:08:56 +0000 UTC Type:0 Mac:52:54:00:6e:0b:a7 Iaid: IPaddr:192.168.61.38 Prefix:24 Hostname:default-k8s-diff-port-243449 Clientid:01:52:54:00:6e:0b:a7}
	I0914 18:09:03.294012   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | domain default-k8s-diff-port-243449 has defined IP address 192.168.61.38 and MAC address 52:54:00:6e:0b:a7 in network mk-default-k8s-diff-port-243449
	I0914 18:09:03.294120   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetSSHPort
	I0914 18:09:03.294307   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetSSHKeyPath
	I0914 18:09:03.294450   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetSSHKeyPath
	I0914 18:09:03.294576   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetSSHUsername
	I0914 18:09:03.294708   63448 main.go:141] libmachine: Using SSH client type: native
	I0914 18:09:03.294926   63448 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.61.38 22 <nil> <nil>}
	I0914 18:09:03.294944   63448 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-243449 && echo "default-k8s-diff-port-243449" | sudo tee /etc/hostname
	I0914 18:09:03.418148   63448 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-243449
	
	I0914 18:09:03.418198   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetSSHHostname
	I0914 18:09:03.421059   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | domain default-k8s-diff-port-243449 has defined MAC address 52:54:00:6e:0b:a7 in network mk-default-k8s-diff-port-243449
	I0914 18:09:03.421501   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:0b:a7", ip: ""} in network mk-default-k8s-diff-port-243449: {Iface:virbr4 ExpiryTime:2024-09-14 19:08:56 +0000 UTC Type:0 Mac:52:54:00:6e:0b:a7 Iaid: IPaddr:192.168.61.38 Prefix:24 Hostname:default-k8s-diff-port-243449 Clientid:01:52:54:00:6e:0b:a7}
	I0914 18:09:03.421536   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | domain default-k8s-diff-port-243449 has defined IP address 192.168.61.38 and MAC address 52:54:00:6e:0b:a7 in network mk-default-k8s-diff-port-243449
	I0914 18:09:03.421733   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetSSHPort
	I0914 18:09:03.421925   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetSSHKeyPath
	I0914 18:09:03.422075   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetSSHKeyPath
	I0914 18:09:03.422243   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetSSHUsername
	I0914 18:09:03.422394   63448 main.go:141] libmachine: Using SSH client type: native
	I0914 18:09:03.422581   63448 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.61.38 22 <nil> <nil>}
	I0914 18:09:03.422609   63448 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-243449' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-243449/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-243449' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0914 18:09:03.538785   63448 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0914 18:09:03.538812   63448 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19643-8806/.minikube CaCertPath:/home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19643-8806/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19643-8806/.minikube}
	I0914 18:09:03.538851   63448 buildroot.go:174] setting up certificates
	I0914 18:09:03.538866   63448 provision.go:84] configureAuth start
	I0914 18:09:03.538875   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetMachineName
	I0914 18:09:03.539230   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetIP
	I0914 18:09:03.541811   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | domain default-k8s-diff-port-243449 has defined MAC address 52:54:00:6e:0b:a7 in network mk-default-k8s-diff-port-243449
	I0914 18:09:03.542129   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:0b:a7", ip: ""} in network mk-default-k8s-diff-port-243449: {Iface:virbr4 ExpiryTime:2024-09-14 19:08:56 +0000 UTC Type:0 Mac:52:54:00:6e:0b:a7 Iaid: IPaddr:192.168.61.38 Prefix:24 Hostname:default-k8s-diff-port-243449 Clientid:01:52:54:00:6e:0b:a7}
	I0914 18:09:03.542183   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | domain default-k8s-diff-port-243449 has defined IP address 192.168.61.38 and MAC address 52:54:00:6e:0b:a7 in network mk-default-k8s-diff-port-243449
	I0914 18:09:03.542393   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetSSHHostname
	I0914 18:09:03.544635   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | domain default-k8s-diff-port-243449 has defined MAC address 52:54:00:6e:0b:a7 in network mk-default-k8s-diff-port-243449
	I0914 18:09:03.544933   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:0b:a7", ip: ""} in network mk-default-k8s-diff-port-243449: {Iface:virbr4 ExpiryTime:2024-09-14 19:08:56 +0000 UTC Type:0 Mac:52:54:00:6e:0b:a7 Iaid: IPaddr:192.168.61.38 Prefix:24 Hostname:default-k8s-diff-port-243449 Clientid:01:52:54:00:6e:0b:a7}
	I0914 18:09:03.544969   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | domain default-k8s-diff-port-243449 has defined IP address 192.168.61.38 and MAC address 52:54:00:6e:0b:a7 in network mk-default-k8s-diff-port-243449
	I0914 18:09:03.545099   63448 provision.go:143] copyHostCerts
	I0914 18:09:03.545156   63448 exec_runner.go:144] found /home/jenkins/minikube-integration/19643-8806/.minikube/ca.pem, removing ...
	I0914 18:09:03.545167   63448 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19643-8806/.minikube/ca.pem
	I0914 18:09:03.545239   63448 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19643-8806/.minikube/ca.pem (1082 bytes)
	I0914 18:09:03.545362   63448 exec_runner.go:144] found /home/jenkins/minikube-integration/19643-8806/.minikube/cert.pem, removing ...
	I0914 18:09:03.545374   63448 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19643-8806/.minikube/cert.pem
	I0914 18:09:03.545410   63448 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19643-8806/.minikube/cert.pem (1123 bytes)
	I0914 18:09:03.545489   63448 exec_runner.go:144] found /home/jenkins/minikube-integration/19643-8806/.minikube/key.pem, removing ...
	I0914 18:09:03.545498   63448 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19643-8806/.minikube/key.pem
	I0914 18:09:03.545533   63448 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19643-8806/.minikube/key.pem (1675 bytes)
	I0914 18:09:03.545619   63448 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19643-8806/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-243449 san=[127.0.0.1 192.168.61.38 default-k8s-diff-port-243449 localhost minikube]
	I0914 18:09:03.858341   63448 provision.go:177] copyRemoteCerts
	I0914 18:09:03.858415   63448 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0914 18:09:03.858453   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetSSHHostname
	I0914 18:09:03.861376   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | domain default-k8s-diff-port-243449 has defined MAC address 52:54:00:6e:0b:a7 in network mk-default-k8s-diff-port-243449
	I0914 18:09:03.861659   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:0b:a7", ip: ""} in network mk-default-k8s-diff-port-243449: {Iface:virbr4 ExpiryTime:2024-09-14 19:08:56 +0000 UTC Type:0 Mac:52:54:00:6e:0b:a7 Iaid: IPaddr:192.168.61.38 Prefix:24 Hostname:default-k8s-diff-port-243449 Clientid:01:52:54:00:6e:0b:a7}
	I0914 18:09:03.861687   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | domain default-k8s-diff-port-243449 has defined IP address 192.168.61.38 and MAC address 52:54:00:6e:0b:a7 in network mk-default-k8s-diff-port-243449
	I0914 18:09:03.861863   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetSSHPort
	I0914 18:09:03.862062   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetSSHKeyPath
	I0914 18:09:03.862231   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetSSHUsername
	I0914 18:09:03.862359   63448 sshutil.go:53] new ssh client: &{IP:192.168.61.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/default-k8s-diff-port-243449/id_rsa Username:docker}
	I0914 18:09:03.944043   63448 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0914 18:09:03.968175   63448 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0914 18:09:03.990621   63448 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0914 18:09:04.012163   63448 provision.go:87] duration metric: took 473.28607ms to configureAuth
	I0914 18:09:04.012190   63448 buildroot.go:189] setting minikube options for container-runtime
	I0914 18:09:04.012364   63448 config.go:182] Loaded profile config "default-k8s-diff-port-243449": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0914 18:09:04.012431   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetSSHHostname
	I0914 18:09:04.015021   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | domain default-k8s-diff-port-243449 has defined MAC address 52:54:00:6e:0b:a7 in network mk-default-k8s-diff-port-243449
	I0914 18:09:04.015505   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:0b:a7", ip: ""} in network mk-default-k8s-diff-port-243449: {Iface:virbr4 ExpiryTime:2024-09-14 19:08:56 +0000 UTC Type:0 Mac:52:54:00:6e:0b:a7 Iaid: IPaddr:192.168.61.38 Prefix:24 Hostname:default-k8s-diff-port-243449 Clientid:01:52:54:00:6e:0b:a7}
	I0914 18:09:04.015553   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | domain default-k8s-diff-port-243449 has defined IP address 192.168.61.38 and MAC address 52:54:00:6e:0b:a7 in network mk-default-k8s-diff-port-243449
	I0914 18:09:04.015693   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetSSHPort
	I0914 18:09:04.015866   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetSSHKeyPath
	I0914 18:09:04.016035   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetSSHKeyPath
	I0914 18:09:04.016157   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetSSHUsername
	I0914 18:09:04.016277   63448 main.go:141] libmachine: Using SSH client type: native
	I0914 18:09:04.016479   63448 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.61.38 22 <nil> <nil>}
	I0914 18:09:04.016511   63448 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0914 18:09:04.234672   63448 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0914 18:09:04.234697   63448 machine.go:96] duration metric: took 1.051909541s to provisionDockerMachine
	I0914 18:09:04.234710   63448 start.go:293] postStartSetup for "default-k8s-diff-port-243449" (driver="kvm2")
	I0914 18:09:04.234721   63448 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0914 18:09:04.234766   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .DriverName
	I0914 18:09:04.235108   63448 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0914 18:09:04.235139   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetSSHHostname
	I0914 18:09:04.237583   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | domain default-k8s-diff-port-243449 has defined MAC address 52:54:00:6e:0b:a7 in network mk-default-k8s-diff-port-243449
	I0914 18:09:04.237964   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:0b:a7", ip: ""} in network mk-default-k8s-diff-port-243449: {Iface:virbr4 ExpiryTime:2024-09-14 19:08:56 +0000 UTC Type:0 Mac:52:54:00:6e:0b:a7 Iaid: IPaddr:192.168.61.38 Prefix:24 Hostname:default-k8s-diff-port-243449 Clientid:01:52:54:00:6e:0b:a7}
	I0914 18:09:04.237997   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | domain default-k8s-diff-port-243449 has defined IP address 192.168.61.38 and MAC address 52:54:00:6e:0b:a7 in network mk-default-k8s-diff-port-243449
	I0914 18:09:04.238237   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetSSHPort
	I0914 18:09:04.238491   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetSSHKeyPath
	I0914 18:09:04.238667   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetSSHUsername
	I0914 18:09:04.238798   63448 sshutil.go:53] new ssh client: &{IP:192.168.61.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/default-k8s-diff-port-243449/id_rsa Username:docker}
	I0914 18:09:04.320785   63448 ssh_runner.go:195] Run: cat /etc/os-release
	I0914 18:09:04.324837   63448 info.go:137] Remote host: Buildroot 2023.02.9
	I0914 18:09:04.324863   63448 filesync.go:126] Scanning /home/jenkins/minikube-integration/19643-8806/.minikube/addons for local assets ...
	I0914 18:09:04.324920   63448 filesync.go:126] Scanning /home/jenkins/minikube-integration/19643-8806/.minikube/files for local assets ...
	I0914 18:09:04.325001   63448 filesync.go:149] local asset: /home/jenkins/minikube-integration/19643-8806/.minikube/files/etc/ssl/certs/160162.pem -> 160162.pem in /etc/ssl/certs
	I0914 18:09:04.325091   63448 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0914 18:09:04.334235   63448 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/files/etc/ssl/certs/160162.pem --> /etc/ssl/certs/160162.pem (1708 bytes)
	I0914 18:09:04.357310   63448 start.go:296] duration metric: took 122.582935ms for postStartSetup
	I0914 18:09:04.357352   63448 fix.go:56] duration metric: took 19.25422843s for fixHost
	I0914 18:09:04.357373   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetSSHHostname
	I0914 18:09:04.360190   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | domain default-k8s-diff-port-243449 has defined MAC address 52:54:00:6e:0b:a7 in network mk-default-k8s-diff-port-243449
	I0914 18:09:04.360574   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:0b:a7", ip: ""} in network mk-default-k8s-diff-port-243449: {Iface:virbr4 ExpiryTime:2024-09-14 19:08:56 +0000 UTC Type:0 Mac:52:54:00:6e:0b:a7 Iaid: IPaddr:192.168.61.38 Prefix:24 Hostname:default-k8s-diff-port-243449 Clientid:01:52:54:00:6e:0b:a7}
	I0914 18:09:04.360601   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | domain default-k8s-diff-port-243449 has defined IP address 192.168.61.38 and MAC address 52:54:00:6e:0b:a7 in network mk-default-k8s-diff-port-243449
	I0914 18:09:04.360774   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetSSHPort
	I0914 18:09:04.360973   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetSSHKeyPath
	I0914 18:09:04.361163   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetSSHKeyPath
	I0914 18:09:04.361291   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetSSHUsername
	I0914 18:09:04.361479   63448 main.go:141] libmachine: Using SSH client type: native
	I0914 18:09:04.361658   63448 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.61.38 22 <nil> <nil>}
	I0914 18:09:04.361667   63448 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0914 18:09:04.466610   63448 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726337344.436836920
	
	I0914 18:09:04.466654   63448 fix.go:216] guest clock: 1726337344.436836920
	I0914 18:09:04.466665   63448 fix.go:229] Guest: 2024-09-14 18:09:04.43683692 +0000 UTC Remote: 2024-09-14 18:09:04.357356624 +0000 UTC m=+144.091633354 (delta=79.480296ms)
	I0914 18:09:04.466691   63448 fix.go:200] guest clock delta is within tolerance: 79.480296ms
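The guest-clock check above runs `date +%s.%N` on the VM and compares the result with the host-side timestamp, yielding the logged delta of 79.480296ms. A small Go sketch of that comparison is shown below; the two timestamps are taken from this log, the one-second tolerance is an assumption for illustration, and the 9-digit fractional part is assumed to be nanoseconds as `date +%s.%N` produces.

// Sketch: parse a `date +%s.%N` reading from the guest and compare it with the
// host clock reading recorded for the same moment.
package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// parseGuestClock turns "1726337344.436836920" into a time.Time.
// Assumes a 9-digit fractional part (nanoseconds), as `date +%s.%N` emits.
func parseGuestClock(out string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, err := parseGuestClock("1726337344.436836920\n")
	if err != nil {
		panic(err)
	}
	// Host-side reading logged for the same instant.
	host := time.Date(2024, 9, 14, 18, 9, 4, 357356624, time.UTC)
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	fmt.Printf("guest clock delta: %v (within 1s tolerance: %v)\n", delta, delta < time.Second)
}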
	I0914 18:09:04.466702   63448 start.go:83] releasing machines lock for "default-k8s-diff-port-243449", held for 19.363604776s
	I0914 18:09:04.466737   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .DriverName
	I0914 18:09:04.466992   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetIP
	I0914 18:09:04.469873   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | domain default-k8s-diff-port-243449 has defined MAC address 52:54:00:6e:0b:a7 in network mk-default-k8s-diff-port-243449
	I0914 18:09:04.470148   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:0b:a7", ip: ""} in network mk-default-k8s-diff-port-243449: {Iface:virbr4 ExpiryTime:2024-09-14 19:08:56 +0000 UTC Type:0 Mac:52:54:00:6e:0b:a7 Iaid: IPaddr:192.168.61.38 Prefix:24 Hostname:default-k8s-diff-port-243449 Clientid:01:52:54:00:6e:0b:a7}
	I0914 18:09:04.470198   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | domain default-k8s-diff-port-243449 has defined IP address 192.168.61.38 and MAC address 52:54:00:6e:0b:a7 in network mk-default-k8s-diff-port-243449
	I0914 18:09:04.470364   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .DriverName
	I0914 18:09:04.470877   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .DriverName
	I0914 18:09:04.471098   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .DriverName
	I0914 18:09:04.471215   63448 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0914 18:09:04.471270   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetSSHHostname
	I0914 18:09:04.471322   63448 ssh_runner.go:195] Run: cat /version.json
	I0914 18:09:04.471346   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetSSHHostname
	I0914 18:09:04.474023   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | domain default-k8s-diff-port-243449 has defined MAC address 52:54:00:6e:0b:a7 in network mk-default-k8s-diff-port-243449
	I0914 18:09:04.474144   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | domain default-k8s-diff-port-243449 has defined MAC address 52:54:00:6e:0b:a7 in network mk-default-k8s-diff-port-243449
	I0914 18:09:04.474345   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:0b:a7", ip: ""} in network mk-default-k8s-diff-port-243449: {Iface:virbr4 ExpiryTime:2024-09-14 19:08:56 +0000 UTC Type:0 Mac:52:54:00:6e:0b:a7 Iaid: IPaddr:192.168.61.38 Prefix:24 Hostname:default-k8s-diff-port-243449 Clientid:01:52:54:00:6e:0b:a7}
	I0914 18:09:04.474374   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | domain default-k8s-diff-port-243449 has defined IP address 192.168.61.38 and MAC address 52:54:00:6e:0b:a7 in network mk-default-k8s-diff-port-243449
	I0914 18:09:04.474471   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetSSHPort
	I0914 18:09:04.474616   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:0b:a7", ip: ""} in network mk-default-k8s-diff-port-243449: {Iface:virbr4 ExpiryTime:2024-09-14 19:08:56 +0000 UTC Type:0 Mac:52:54:00:6e:0b:a7 Iaid: IPaddr:192.168.61.38 Prefix:24 Hostname:default-k8s-diff-port-243449 Clientid:01:52:54:00:6e:0b:a7}
	I0914 18:09:04.474631   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetSSHKeyPath
	I0914 18:09:04.474637   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | domain default-k8s-diff-port-243449 has defined IP address 192.168.61.38 and MAC address 52:54:00:6e:0b:a7 in network mk-default-k8s-diff-port-243449
	I0914 18:09:04.474812   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetSSHUsername
	I0914 18:09:04.474816   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetSSHPort
	I0914 18:09:04.474996   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetSSHKeyPath
	I0914 18:09:04.474987   63448 sshutil.go:53] new ssh client: &{IP:192.168.61.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/default-k8s-diff-port-243449/id_rsa Username:docker}
	I0914 18:09:04.475136   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetSSHUsername
	I0914 18:09:04.475274   63448 sshutil.go:53] new ssh client: &{IP:192.168.61.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/default-k8s-diff-port-243449/id_rsa Username:docker}
	I0914 18:09:04.587233   63448 ssh_runner.go:195] Run: systemctl --version
	I0914 18:09:04.593065   63448 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0914 18:09:04.738721   63448 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0914 18:09:04.745472   63448 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0914 18:09:04.745539   63448 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0914 18:09:04.765742   63448 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0914 18:09:04.765806   63448 start.go:495] detecting cgroup driver to use...
	I0914 18:09:04.765909   63448 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0914 18:09:04.782234   63448 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0914 18:09:04.797259   63448 docker.go:217] disabling cri-docker service (if available) ...
	I0914 18:09:04.797322   63448 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0914 18:09:04.811794   63448 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0914 18:09:04.826487   63448 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0914 18:09:04.953417   63448 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0914 18:09:05.102410   63448 docker.go:233] disabling docker service ...
	I0914 18:09:05.102491   63448 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0914 18:09:05.117443   63448 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0914 18:09:05.131147   63448 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0914 18:09:05.278483   63448 ssh_runner.go:195] Run: sudo systemctl mask docker.service
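The docker.go lines above show the runtime hand-off: cri-dockerd and docker are stopped, disabled, and masked so that CRI-O alone owns the CRI socket. A minimal local sketch of that same systemctl sequence follows; it uses os/exec directly, whereas the test drives the commands through minikube's ssh_runner, and it treats failures as best-effort warnings the way the log does.

package main

import (
	"fmt"
	"os/exec"
)

// run executes a sudo command and returns a combined error/output on failure.
func run(args ...string) error {
	out, err := exec.Command("sudo", args...).CombinedOutput()
	if err != nil {
		return fmt.Errorf("%v: %v: %s", args, err, out)
	}
	return nil
}

func main() {
	// Stop and mask cri-dockerd and docker so CRI-O owns the CRI socket.
	steps := [][]string{
		{"systemctl", "stop", "-f", "cri-docker.socket"},
		{"systemctl", "stop", "-f", "cri-docker.service"},
		{"systemctl", "disable", "cri-docker.socket"},
		{"systemctl", "mask", "cri-docker.service"},
		{"systemctl", "stop", "-f", "docker.socket"},
		{"systemctl", "stop", "-f", "docker.service"},
		{"systemctl", "disable", "docker.socket"},
		{"systemctl", "mask", "docker.service"},
	}
	for _, s := range steps {
		if err := run(s...); err != nil {
			fmt.Println("warning:", err) // best-effort, matching the log's behavior
		}
	}
}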
	I0914 18:09:00.373968   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:00.874316   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:01.373792   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:01.873684   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:02.373524   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:02.874399   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:03.373728   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:03.874267   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:04.374018   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:04.873685   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
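The repeated pgrep lines from process 62996 above are a simple readiness loop: roughly every 500ms the test asks whether a kube-apiserver process matching the minikube command line exists yet. A sketch of that loop, run locally rather than over ssh_runner, looks like this (the 2-minute deadline is an assumption for illustration):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		// -x: match the whole command line exactly, -n: newest match, -f: full command line.
		if err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err == nil {
			fmt.Println("kube-apiserver process is up")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for kube-apiserver process")
}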
	I0914 18:09:05.401195   63448 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0914 18:09:05.415794   63448 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0914 18:09:05.434594   63448 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0914 18:09:05.434660   63448 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 18:09:05.445566   63448 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0914 18:09:05.445643   63448 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 18:09:05.456690   63448 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 18:09:05.468044   63448 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 18:09:05.479719   63448 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0914 18:09:05.491019   63448 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 18:09:05.501739   63448 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 18:09:05.520582   63448 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 18:09:05.531469   63448 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0914 18:09:05.541741   63448 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0914 18:09:05.541809   63448 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0914 18:09:05.561648   63448 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0914 18:09:05.571882   63448 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 18:09:05.706592   63448 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0914 18:09:05.811522   63448 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0914 18:09:05.811599   63448 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0914 18:09:05.816676   63448 start.go:563] Will wait 60s for crictl version
	I0914 18:09:05.816745   63448 ssh_runner.go:195] Run: which crictl
	I0914 18:09:05.820367   63448 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0914 18:09:05.862564   63448 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0914 18:09:05.862637   63448 ssh_runner.go:195] Run: crio --version
	I0914 18:09:05.893106   63448 ssh_runner.go:195] Run: crio --version
	I0914 18:09:05.927136   63448 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
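After "systemctl restart crio" the log waits up to 60s for /var/run/crio/crio.sock and then for a working crictl. A minimal sketch of that socket wait, assuming a plain local stat loop instead of minikube's ssh_runner:

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForSocket polls for the socket path until it exists or the timeout expires.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out waiting for %s", path)
}

func main() {
	if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
		fmt.Println(err)
		os.Exit(1)
	}
	fmt.Println("crio socket is ready")
}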
	I0914 18:09:04.492847   62207 main.go:141] libmachine: (no-preload-168587) Calling .Start
	I0914 18:09:04.493070   62207 main.go:141] libmachine: (no-preload-168587) Ensuring networks are active...
	I0914 18:09:04.493844   62207 main.go:141] libmachine: (no-preload-168587) Ensuring network default is active
	I0914 18:09:04.494193   62207 main.go:141] libmachine: (no-preload-168587) Ensuring network mk-no-preload-168587 is active
	I0914 18:09:04.494614   62207 main.go:141] libmachine: (no-preload-168587) Getting domain xml...
	I0914 18:09:04.495434   62207 main.go:141] libmachine: (no-preload-168587) Creating domain...
	I0914 18:09:05.801470   62207 main.go:141] libmachine: (no-preload-168587) Waiting to get IP...
	I0914 18:09:05.802621   62207 main.go:141] libmachine: (no-preload-168587) DBG | domain no-preload-168587 has defined MAC address 52:54:00:4c:40:8a in network mk-no-preload-168587
	I0914 18:09:05.803215   62207 main.go:141] libmachine: (no-preload-168587) DBG | unable to find current IP address of domain no-preload-168587 in network mk-no-preload-168587
	I0914 18:09:05.803351   62207 main.go:141] libmachine: (no-preload-168587) DBG | I0914 18:09:05.803229   64231 retry.go:31] will retry after 206.528002ms: waiting for machine to come up
	I0914 18:09:06.011556   62207 main.go:141] libmachine: (no-preload-168587) DBG | domain no-preload-168587 has defined MAC address 52:54:00:4c:40:8a in network mk-no-preload-168587
	I0914 18:09:06.012027   62207 main.go:141] libmachine: (no-preload-168587) DBG | unable to find current IP address of domain no-preload-168587 in network mk-no-preload-168587
	I0914 18:09:06.012063   62207 main.go:141] libmachine: (no-preload-168587) DBG | I0914 18:09:06.011977   64231 retry.go:31] will retry after 252.283679ms: waiting for machine to come up
	I0914 18:09:06.266621   62207 main.go:141] libmachine: (no-preload-168587) DBG | domain no-preload-168587 has defined MAC address 52:54:00:4c:40:8a in network mk-no-preload-168587
	I0914 18:09:06.267145   62207 main.go:141] libmachine: (no-preload-168587) DBG | unable to find current IP address of domain no-preload-168587 in network mk-no-preload-168587
	I0914 18:09:06.267178   62207 main.go:141] libmachine: (no-preload-168587) DBG | I0914 18:09:06.267093   64231 retry.go:31] will retry after 376.426781ms: waiting for machine to come up
	I0914 18:09:06.644639   62207 main.go:141] libmachine: (no-preload-168587) DBG | domain no-preload-168587 has defined MAC address 52:54:00:4c:40:8a in network mk-no-preload-168587
	I0914 18:09:06.645212   62207 main.go:141] libmachine: (no-preload-168587) DBG | unable to find current IP address of domain no-preload-168587 in network mk-no-preload-168587
	I0914 18:09:06.645245   62207 main.go:141] libmachine: (no-preload-168587) DBG | I0914 18:09:06.645161   64231 retry.go:31] will retry after 518.904946ms: waiting for machine to come up
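The "will retry after ..." lines above show the growing-interval retry used while the no-preload VM waits for a DHCP lease. A sketch of that pattern follows; lookupIP is a hypothetical stand-in for the libmachine lease query, and the growth factor is illustrative rather than the exact jitter retry.go uses.

package main

import (
	"errors"
	"fmt"
	"time"
)

var errNoLease = errors.New("no DHCP lease yet")

// lookupIP is a placeholder for querying the domain's DHCP lease.
func lookupIP() (string, error) { return "", errNoLease }

func waitForIP(timeout time.Duration) (string, error) {
	backoff := 200 * time.Millisecond
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if ip, err := lookupIP(); err == nil {
			return ip, nil
		}
		fmt.Printf("will retry after %v: waiting for machine to come up\n", backoff)
		time.Sleep(backoff)
		backoff = backoff * 3 / 2 // grow the interval, roughly as the log shows
	}
	return "", fmt.Errorf("machine did not get an IP within %v", timeout)
}

func main() {
	if _, err := waitForIP(2 * time.Minute); err != nil {
		fmt.Println(err)
	}
}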
	I0914 18:09:06.584604   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:09:09.085179   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:09:05.928171   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetIP
	I0914 18:09:05.931131   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | domain default-k8s-diff-port-243449 has defined MAC address 52:54:00:6e:0b:a7 in network mk-default-k8s-diff-port-243449
	I0914 18:09:05.931584   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:0b:a7", ip: ""} in network mk-default-k8s-diff-port-243449: {Iface:virbr4 ExpiryTime:2024-09-14 19:08:56 +0000 UTC Type:0 Mac:52:54:00:6e:0b:a7 Iaid: IPaddr:192.168.61.38 Prefix:24 Hostname:default-k8s-diff-port-243449 Clientid:01:52:54:00:6e:0b:a7}
	I0914 18:09:05.931662   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | domain default-k8s-diff-port-243449 has defined IP address 192.168.61.38 and MAC address 52:54:00:6e:0b:a7 in network mk-default-k8s-diff-port-243449
	I0914 18:09:05.931826   63448 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0914 18:09:05.935729   63448 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0914 18:09:05.947741   63448 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-243449 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19643/minikube-v1.34.0-1726281733-19643-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726281268-19643@sha256:cce8be0c1ac4e3d852132008ef1cc1dcf5b79f708d025db83f146ae65db32e8e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
sVersion:v1.31.1 ClusterName:default-k8s-diff-port-243449 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.38 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpirati
on:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0914 18:09:05.947872   63448 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0914 18:09:05.947935   63448 ssh_runner.go:195] Run: sudo crictl images --output json
	I0914 18:09:05.984371   63448 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0914 18:09:05.984473   63448 ssh_runner.go:195] Run: which lz4
	I0914 18:09:05.988311   63448 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0914 18:09:05.992088   63448 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0914 18:09:05.992123   63448 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I0914 18:09:07.311157   63448 crio.go:462] duration metric: took 1.322885925s to copy over tarball
	I0914 18:09:07.311297   63448 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0914 18:09:09.472639   63448 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.161311106s)
	I0914 18:09:09.472663   63448 crio.go:469] duration metric: took 2.161473132s to extract the tarball
	I0914 18:09:09.472670   63448 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0914 18:09:09.508740   63448 ssh_runner.go:195] Run: sudo crictl images --output json
	I0914 18:09:09.554508   63448 crio.go:514] all images are preloaded for cri-o runtime.
	I0914 18:09:09.554533   63448 cache_images.go:84] Images are preloaded, skipping loading
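The preload check above ("couldn't find preloaded image ... assuming images are not preloaded", then "all images are preloaded" after the tarball is extracted) boils down to parsing `crictl images --output json` and looking for the expected tags. A sketch of that check, run locally and with the JSON field names assumed to follow the CRI ListImages response as crictl emits it:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

type crictlImages struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

func main() {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		fmt.Println("crictl failed:", err)
		return
	}
	var list crictlImages
	if err := json.Unmarshal(out, &list); err != nil {
		fmt.Println("bad JSON:", err)
		return
	}
	want := "registry.k8s.io/kube-apiserver:v1.31.1"
	for _, img := range list.Images {
		for _, tag := range img.RepoTags {
			if tag == want {
				fmt.Println("preloaded image found:", want)
				return
			}
		}
	}
	fmt.Println("assuming images are not preloaded:", want, "missing")
}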
	I0914 18:09:09.554548   63448 kubeadm.go:934] updating node { 192.168.61.38 8444 v1.31.1 crio true true} ...
	I0914 18:09:09.554657   63448 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-243449 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.38
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-243449 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0914 18:09:09.554722   63448 ssh_runner.go:195] Run: crio config
	I0914 18:09:09.603693   63448 cni.go:84] Creating CNI manager for ""
	I0914 18:09:09.603715   63448 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0914 18:09:09.603727   63448 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0914 18:09:09.603745   63448 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.38 APIServerPort:8444 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-243449 NodeName:default-k8s-diff-port-243449 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.38"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.38 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/
ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0914 18:09:09.603879   63448 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.38
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-243449"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.38
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.38"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0914 18:09:09.603935   63448 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0914 18:09:09.613786   63448 binaries.go:44] Found k8s binaries, skipping transfer
	I0914 18:09:09.613863   63448 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0914 18:09:09.623172   63448 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (327 bytes)
	I0914 18:09:09.641437   63448 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0914 18:09:09.657677   63448 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2169 bytes)
	I0914 18:09:09.675042   63448 ssh_runner.go:195] Run: grep 192.168.61.38	control-plane.minikube.internal$ /etc/hosts
	I0914 18:09:09.678885   63448 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.38	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
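The one-liner above rewrites /etc/hosts by dropping any stale control-plane.minikube.internal entry, appending the fresh one, and copying a temp file back into place. A small Go sketch of the same rewrite, with the paths and write permissions assumed (the real run finishes with `sudo cp`, as the log shows):

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	const entry = "192.168.61.38\tcontrol-plane.minikube.internal"
	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		fmt.Println(err)
		return
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		// Drop any existing line for this hostname before re-adding it.
		if !strings.HasSuffix(line, "\tcontrol-plane.minikube.internal") {
			kept = append(kept, line)
		}
	}
	kept = append(kept, entry)
	tmp := "/tmp/hosts.new"
	if err := os.WriteFile(tmp, []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
		fmt.Println(err)
		return
	}
	// A real run would now `sudo cp` the temp file over /etc/hosts, as in the log.
	fmt.Println("wrote", tmp, "with", len(kept), "lines")
}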
	I0914 18:09:09.694466   63448 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 18:09:09.823504   63448 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0914 18:09:09.840638   63448 certs.go:68] Setting up /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/default-k8s-diff-port-243449 for IP: 192.168.61.38
	I0914 18:09:09.840658   63448 certs.go:194] generating shared ca certs ...
	I0914 18:09:09.840677   63448 certs.go:226] acquiring lock for ca certs: {Name:mkb663a3180967f5f94f0c355b2cd55067394331 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 18:09:09.840827   63448 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19643-8806/.minikube/ca.key
	I0914 18:09:09.840869   63448 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19643-8806/.minikube/proxy-client-ca.key
	I0914 18:09:09.840879   63448 certs.go:256] generating profile certs ...
	I0914 18:09:09.841046   63448 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/default-k8s-diff-port-243449/client.key
	I0914 18:09:09.841147   63448 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/default-k8s-diff-port-243449/apiserver.key.68770133
	I0914 18:09:09.841231   63448 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/default-k8s-diff-port-243449/proxy-client.key
	I0914 18:09:09.841342   63448 certs.go:484] found cert: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/16016.pem (1338 bytes)
	W0914 18:09:09.841370   63448 certs.go:480] ignoring /home/jenkins/minikube-integration/19643-8806/.minikube/certs/16016_empty.pem, impossibly tiny 0 bytes
	I0914 18:09:09.841377   63448 certs.go:484] found cert: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca-key.pem (1679 bytes)
	I0914 18:09:09.841398   63448 certs.go:484] found cert: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca.pem (1082 bytes)
	I0914 18:09:09.841425   63448 certs.go:484] found cert: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/cert.pem (1123 bytes)
	I0914 18:09:09.841447   63448 certs.go:484] found cert: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/key.pem (1675 bytes)
	I0914 18:09:09.841499   63448 certs.go:484] found cert: /home/jenkins/minikube-integration/19643-8806/.minikube/files/etc/ssl/certs/160162.pem (1708 bytes)
	I0914 18:09:09.842211   63448 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0914 18:09:09.883406   63448 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0914 18:09:09.914134   63448 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0914 18:09:09.941343   63448 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0914 18:09:09.990870   63448 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/default-k8s-diff-port-243449/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0914 18:09:10.040821   63448 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/default-k8s-diff-port-243449/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0914 18:09:10.065238   63448 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/default-k8s-diff-port-243449/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0914 18:09:10.089901   63448 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/default-k8s-diff-port-243449/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0914 18:09:10.114440   63448 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/files/etc/ssl/certs/160162.pem --> /usr/share/ca-certificates/160162.pem (1708 bytes)
	I0914 18:09:10.138963   63448 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0914 18:09:10.162828   63448 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/certs/16016.pem --> /usr/share/ca-certificates/16016.pem (1338 bytes)
	I0914 18:09:10.185702   63448 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0914 18:09:10.201251   63448 ssh_runner.go:195] Run: openssl version
	I0914 18:09:10.206904   63448 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/160162.pem && ln -fs /usr/share/ca-certificates/160162.pem /etc/ssl/certs/160162.pem"
	I0914 18:09:10.216966   63448 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/160162.pem
	I0914 18:09:10.221437   63448 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 14 17:01 /usr/share/ca-certificates/160162.pem
	I0914 18:09:10.221506   63448 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/160162.pem
	I0914 18:09:10.227033   63448 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/160162.pem /etc/ssl/certs/3ec20f2e.0"
	I0914 18:09:10.237039   63448 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0914 18:09:10.247244   63448 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0914 18:09:10.251434   63448 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 14 16:44 /usr/share/ca-certificates/minikubeCA.pem
	I0914 18:09:10.251494   63448 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0914 18:09:10.257187   63448 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0914 18:09:10.267490   63448 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16016.pem && ln -fs /usr/share/ca-certificates/16016.pem /etc/ssl/certs/16016.pem"
	I0914 18:09:10.277622   63448 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16016.pem
	I0914 18:09:10.281705   63448 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 14 17:01 /usr/share/ca-certificates/16016.pem
	I0914 18:09:10.281789   63448 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16016.pem
	I0914 18:09:10.287013   63448 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16016.pem /etc/ssl/certs/51391683.0"
	I0914 18:09:10.296942   63448 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
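The openssl/ln steps above install each CA certificate under /etc/ssl/certs by its subject hash (e.g. b5213941.0 for minikubeCA.pem). A sketch of that hash-and-symlink step, assuming openssl on PATH and permission to write the certs directory (the test does it over SSH with sudo):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkCert computes the OpenSSL subject hash of a PEM cert and creates the
// /etc/ssl/certs/<hash>.0 symlink, mirroring `openssl x509 -hash` + `ln -fs`.
func linkCert(pem string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	_ = os.Remove(link) // equivalent of ln -f: drop any stale link first
	return os.Symlink(pem, link)
}

func main() {
	if err := linkCert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Println(err)
	}
}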
	I0914 18:09:05.374034   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:05.873992   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:06.374407   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:06.873737   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:07.373665   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:07.874486   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:08.374017   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:08.874365   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:09.374221   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:09.874108   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:07.165576   62207 main.go:141] libmachine: (no-preload-168587) DBG | domain no-preload-168587 has defined MAC address 52:54:00:4c:40:8a in network mk-no-preload-168587
	I0914 18:09:07.166187   62207 main.go:141] libmachine: (no-preload-168587) DBG | unable to find current IP address of domain no-preload-168587 in network mk-no-preload-168587
	I0914 18:09:07.166219   62207 main.go:141] libmachine: (no-preload-168587) DBG | I0914 18:09:07.166125   64231 retry.go:31] will retry after 631.376012ms: waiting for machine to come up
	I0914 18:09:07.798978   62207 main.go:141] libmachine: (no-preload-168587) DBG | domain no-preload-168587 has defined MAC address 52:54:00:4c:40:8a in network mk-no-preload-168587
	I0914 18:09:07.799450   62207 main.go:141] libmachine: (no-preload-168587) DBG | unable to find current IP address of domain no-preload-168587 in network mk-no-preload-168587
	I0914 18:09:07.799478   62207 main.go:141] libmachine: (no-preload-168587) DBG | I0914 18:09:07.799404   64231 retry.go:31] will retry after 668.764795ms: waiting for machine to come up
	I0914 18:09:08.470207   62207 main.go:141] libmachine: (no-preload-168587) DBG | domain no-preload-168587 has defined MAC address 52:54:00:4c:40:8a in network mk-no-preload-168587
	I0914 18:09:08.470613   62207 main.go:141] libmachine: (no-preload-168587) DBG | unable to find current IP address of domain no-preload-168587 in network mk-no-preload-168587
	I0914 18:09:08.470640   62207 main.go:141] libmachine: (no-preload-168587) DBG | I0914 18:09:08.470559   64231 retry.go:31] will retry after 943.595216ms: waiting for machine to come up
	I0914 18:09:09.415274   62207 main.go:141] libmachine: (no-preload-168587) DBG | domain no-preload-168587 has defined MAC address 52:54:00:4c:40:8a in network mk-no-preload-168587
	I0914 18:09:09.415721   62207 main.go:141] libmachine: (no-preload-168587) DBG | unable to find current IP address of domain no-preload-168587 in network mk-no-preload-168587
	I0914 18:09:09.415751   62207 main.go:141] libmachine: (no-preload-168587) DBG | I0914 18:09:09.415675   64231 retry.go:31] will retry after 956.638818ms: waiting for machine to come up
	I0914 18:09:10.374297   62207 main.go:141] libmachine: (no-preload-168587) DBG | domain no-preload-168587 has defined MAC address 52:54:00:4c:40:8a in network mk-no-preload-168587
	I0914 18:09:10.374875   62207 main.go:141] libmachine: (no-preload-168587) DBG | unable to find current IP address of domain no-preload-168587 in network mk-no-preload-168587
	I0914 18:09:10.374902   62207 main.go:141] libmachine: (no-preload-168587) DBG | I0914 18:09:10.374822   64231 retry.go:31] will retry after 1.703915418s: waiting for machine to come up
	I0914 18:09:11.583370   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:09:14.082919   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:09:10.301352   63448 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0914 18:09:10.307276   63448 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0914 18:09:10.313391   63448 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0914 18:09:10.319883   63448 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0914 18:09:10.325671   63448 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0914 18:09:10.331445   63448 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0914 18:09:10.336855   63448 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-243449 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19643/minikube-v1.34.0-1726281733-19643-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726281268-19643@sha256:cce8be0c1ac4e3d852132008ef1cc1dcf5b79f708d025db83f146ae65db32e8e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVe
rsion:v1.31.1 ClusterName:default-k8s-diff-port-243449 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.38 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:
26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0914 18:09:10.336953   63448 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0914 18:09:10.337019   63448 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0914 18:09:10.372899   63448 cri.go:89] found id: ""
	I0914 18:09:10.372988   63448 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0914 18:09:10.386897   63448 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0914 18:09:10.386920   63448 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0914 18:09:10.386978   63448 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0914 18:09:10.399165   63448 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0914 18:09:10.400212   63448 kubeconfig.go:125] found "default-k8s-diff-port-243449" server: "https://192.168.61.38:8444"
	I0914 18:09:10.402449   63448 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0914 18:09:10.414129   63448 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.38
	I0914 18:09:10.414192   63448 kubeadm.go:1160] stopping kube-system containers ...
	I0914 18:09:10.414207   63448 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0914 18:09:10.414276   63448 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0914 18:09:10.454549   63448 cri.go:89] found id: ""
	I0914 18:09:10.454627   63448 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0914 18:09:10.472261   63448 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0914 18:09:10.481693   63448 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0914 18:09:10.481724   63448 kubeadm.go:157] found existing configuration files:
	
	I0914 18:09:10.481772   63448 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0914 18:09:10.492205   63448 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0914 18:09:10.492283   63448 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0914 18:09:10.502923   63448 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0914 18:09:10.511620   63448 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0914 18:09:10.511688   63448 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0914 18:09:10.520978   63448 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0914 18:09:10.529590   63448 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0914 18:09:10.529652   63448 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0914 18:09:10.538602   63448 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0914 18:09:10.546968   63448 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0914 18:09:10.547037   63448 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0914 18:09:10.556280   63448 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0914 18:09:10.565471   63448 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 18:09:10.670297   63448 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 18:09:11.611646   63448 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0914 18:09:11.858308   63448 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 18:09:11.942761   63448 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
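The restart path above does not run a full `kubeadm init`; it replays the individual init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the staged kubeadm.yaml. A local sketch of that phase sequence, assuming kubeadm is on PATH rather than invoked via ssh_runner with the versioned binary directory:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	phases := [][]string{
		{"init", "phase", "certs", "all"},
		{"init", "phase", "kubeconfig", "all"},
		{"init", "phase", "kubelet-start"},
		{"init", "phase", "control-plane", "all"},
		{"init", "phase", "etcd", "local"},
	}
	for _, p := range phases {
		args := append(p, "--config", "/var/tmp/minikube/kubeadm.yaml")
		cmd := exec.Command("sudo", append([]string{"kubeadm"}, args...)...)
		if out, err := cmd.CombinedOutput(); err != nil {
			fmt.Printf("kubeadm %v failed: %v\n%s\n", p, err, out)
			return
		}
	}
	fmt.Println("control-plane phases completed")
}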
	I0914 18:09:12.018144   63448 api_server.go:52] waiting for apiserver process to appear ...
	I0914 18:09:12.018251   63448 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:12.518933   63448 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:13.019098   63448 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:13.518297   63448 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:14.018327   63448 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:14.033874   63448 api_server.go:72] duration metric: took 2.015718891s to wait for apiserver process to appear ...
	I0914 18:09:14.033902   63448 api_server.go:88] waiting for apiserver healthz status ...
	I0914 18:09:14.033926   63448 api_server.go:253] Checking apiserver healthz at https://192.168.61.38:8444/healthz ...
	I0914 18:09:14.034534   63448 api_server.go:269] stopped: https://192.168.61.38:8444/healthz: Get "https://192.168.61.38:8444/healthz": dial tcp 192.168.61.38:8444: connect: connection refused
	I0914 18:09:14.534065   63448 api_server.go:253] Checking apiserver healthz at https://192.168.61.38:8444/healthz ...
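Once the apiserver process exists, the log switches to polling https://192.168.61.38:8444/healthz, treating connection-refused, 403 (anonymous user) and 500 (post-start hooks still failing) responses as "not ready yet". A sketch of that health loop; TLS verification is skipped here because this bare client does not trust the cluster CA, and the 4-minute deadline is an assumption:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	url := "https://192.168.61.38:8444/healthz"
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("apiserver healthy:", string(body))
				return
			}
			// 403/500 while bootstrap hooks finish: keep retrying, like the log does.
			fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("apiserver never became healthy")
}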
	I0914 18:09:10.373394   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:10.873498   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:11.373841   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:11.873492   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:12.374179   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:12.873586   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:13.374405   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:13.873518   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:14.374018   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:14.873905   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:12.080547   62207 main.go:141] libmachine: (no-preload-168587) DBG | domain no-preload-168587 has defined MAC address 52:54:00:4c:40:8a in network mk-no-preload-168587
	I0914 18:09:12.081149   62207 main.go:141] libmachine: (no-preload-168587) DBG | unable to find current IP address of domain no-preload-168587 in network mk-no-preload-168587
	I0914 18:09:12.081174   62207 main.go:141] libmachine: (no-preload-168587) DBG | I0914 18:09:12.081095   64231 retry.go:31] will retry after 1.634645735s: waiting for machine to come up
	I0914 18:09:13.717239   62207 main.go:141] libmachine: (no-preload-168587) DBG | domain no-preload-168587 has defined MAC address 52:54:00:4c:40:8a in network mk-no-preload-168587
	I0914 18:09:13.717787   62207 main.go:141] libmachine: (no-preload-168587) DBG | unable to find current IP address of domain no-preload-168587 in network mk-no-preload-168587
	I0914 18:09:13.717821   62207 main.go:141] libmachine: (no-preload-168587) DBG | I0914 18:09:13.717731   64231 retry.go:31] will retry after 2.524549426s: waiting for machine to come up
	I0914 18:09:16.244729   62207 main.go:141] libmachine: (no-preload-168587) DBG | domain no-preload-168587 has defined MAC address 52:54:00:4c:40:8a in network mk-no-preload-168587
	I0914 18:09:16.245132   62207 main.go:141] libmachine: (no-preload-168587) DBG | unable to find current IP address of domain no-preload-168587 in network mk-no-preload-168587
	I0914 18:09:16.245162   62207 main.go:141] libmachine: (no-preload-168587) DBG | I0914 18:09:16.245072   64231 retry.go:31] will retry after 2.539965892s: waiting for machine to come up
	I0914 18:09:16.083603   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:09:18.581965   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
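The pod_ready.go lines above keep reporting the metrics-server pod's "Ready" status as False; that status comes from the pod's PodReady condition. A client-go sketch of the same check (the kubeconfig path and pod name are taken from this run and are assumptions for illustration, not the test's own code):

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's PodReady condition is True.
func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(),
		"metrics-server-6867b74b74-stwfz", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("pod %q Ready=%v\n", pod.Name, podReady(pod))
}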
	I0914 18:09:16.427071   63448 api_server.go:279] https://192.168.61.38:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0914 18:09:16.427109   63448 api_server.go:103] status: https://192.168.61.38:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0914 18:09:16.427156   63448 api_server.go:253] Checking apiserver healthz at https://192.168.61.38:8444/healthz ...
	I0914 18:09:16.440812   63448 api_server.go:279] https://192.168.61.38:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0914 18:09:16.440848   63448 api_server.go:103] status: https://192.168.61.38:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0914 18:09:16.534060   63448 api_server.go:253] Checking apiserver healthz at https://192.168.61.38:8444/healthz ...
	I0914 18:09:16.593356   63448 api_server.go:279] https://192.168.61.38:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0914 18:09:16.593412   63448 api_server.go:103] status: https://192.168.61.38:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0914 18:09:17.034545   63448 api_server.go:253] Checking apiserver healthz at https://192.168.61.38:8444/healthz ...
	I0914 18:09:17.039094   63448 api_server.go:279] https://192.168.61.38:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0914 18:09:17.039131   63448 api_server.go:103] status: https://192.168.61.38:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0914 18:09:17.534668   63448 api_server.go:253] Checking apiserver healthz at https://192.168.61.38:8444/healthz ...
	I0914 18:09:17.543018   63448 api_server.go:279] https://192.168.61.38:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0914 18:09:17.543053   63448 api_server.go:103] status: https://192.168.61.38:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0914 18:09:18.034612   63448 api_server.go:253] Checking apiserver healthz at https://192.168.61.38:8444/healthz ...
	I0914 18:09:18.039042   63448 api_server.go:279] https://192.168.61.38:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0914 18:09:18.039071   63448 api_server.go:103] status: https://192.168.61.38:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0914 18:09:18.534675   63448 api_server.go:253] Checking apiserver healthz at https://192.168.61.38:8444/healthz ...
	I0914 18:09:18.540612   63448 api_server.go:279] https://192.168.61.38:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0914 18:09:18.540637   63448 api_server.go:103] status: https://192.168.61.38:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0914 18:09:19.034196   63448 api_server.go:253] Checking apiserver healthz at https://192.168.61.38:8444/healthz ...
	I0914 18:09:19.040397   63448 api_server.go:279] https://192.168.61.38:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0914 18:09:19.040429   63448 api_server.go:103] status: https://192.168.61.38:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0914 18:09:19.535035   63448 api_server.go:253] Checking apiserver healthz at https://192.168.61.38:8444/healthz ...
	I0914 18:09:19.540910   63448 api_server.go:279] https://192.168.61.38:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0914 18:09:19.540940   63448 api_server.go:103] status: https://192.168.61.38:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0914 18:09:20.034275   63448 api_server.go:253] Checking apiserver healthz at https://192.168.61.38:8444/healthz ...
	I0914 18:09:20.038541   63448 api_server.go:279] https://192.168.61.38:8444/healthz returned 200:
	ok
	I0914 18:09:20.044704   63448 api_server.go:141] control plane version: v1.31.1
	I0914 18:09:20.044734   63448 api_server.go:131] duration metric: took 6.010822563s to wait for apiserver health ...
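(For reference, the polling loop recorded above repeatedly GETs https://192.168.61.38:8444/healthz roughly every 500ms until it returns 200. The following is a minimal, illustrative Go sketch of that kind of wait, not minikube's actual api_server.go; the InsecureSkipVerify transport and the function name waitForHealthz are assumptions made only for this example.)

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls url until it returns HTTP 200 or timeout expires,
// printing the body of any non-200 response (the "[+]/[-] poststarthook" list).
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}}, // assumption: skip cert checks for a local test cluster
		Timeout:   5 * time.Second,
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // apiserver reports "ok"
			}
			fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond) // matches the ~500ms cadence seen in the log
	}
	return fmt.Errorf("apiserver %s not healthy within %s", url, timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.61.38:8444/healthz", 6*time.Minute); err != nil {
		fmt.Println(err)
	}
}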
	I0914 18:09:20.044744   63448 cni.go:84] Creating CNI manager for ""
	I0914 18:09:20.044752   63448 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0914 18:09:20.046616   63448 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0914 18:09:20.047724   63448 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0914 18:09:20.058152   63448 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
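(The log only records that a 496-byte /etc/cni/net.d/1-k8s.conflist was copied to the node; its exact contents are not shown. The sketch below writes a generic bridge CNI conflist of the standard format as an illustration of what such a file looks like; the subnet and plugin options are assumptions, not minikube's file.)

package main

import "os"

// bridgeConflist is a generic example of a bridge CNI configuration list.
const bridgeConflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}
`

func main() {
	// Create the CNI config directory and drop the conflist, as the
	// ssh_runner scp step above does on the node.
	if err := os.MkdirAll("/etc/cni/net.d", 0o755); err != nil {
		panic(err)
	}
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(bridgeConflist), 0o644); err != nil {
		panic(err)
	}
}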
	I0914 18:09:20.077880   63448 system_pods.go:43] waiting for kube-system pods to appear ...
	I0914 18:09:20.090089   63448 system_pods.go:59] 8 kube-system pods found
	I0914 18:09:20.090135   63448 system_pods.go:61] "coredns-7c65d6cfc9-8v8s7" [896b4fde-d17e-43a3-b7c8-b710e2e70e2c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0914 18:09:20.090148   63448 system_pods.go:61] "etcd-default-k8s-diff-port-243449" [9201493d-45db-44f4-948d-34e1d1ddee8f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0914 18:09:20.090178   63448 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-243449" [052b85bf-0f3b-4ace-9301-99e53f91cfcf] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0914 18:09:20.090192   63448 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-243449" [51214b37-f5e8-4037-83ff-1fd09b93e008] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0914 18:09:20.090199   63448 system_pods.go:61] "kube-proxy-gbkqm" [4308aacf-ea0a-4bba-8598-85ffaf959b7e] Running
	I0914 18:09:20.090210   63448 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-243449" [b6eacf30-47ec-4f8d-968c-195661a2a732] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0914 18:09:20.090219   63448 system_pods.go:61] "metrics-server-6867b74b74-7v8dr" [90be95af-c779-4b31-b261-2c4020a34280] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 18:09:20.090226   63448 system_pods.go:61] "storage-provisioner" [2e814601-a19a-4848-bed5-d9a29ffb3b5d] Running
	I0914 18:09:20.090236   63448 system_pods.go:74] duration metric: took 12.327834ms to wait for pod list to return data ...
	I0914 18:09:20.090248   63448 node_conditions.go:102] verifying NodePressure condition ...
	I0914 18:09:20.094429   63448 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0914 18:09:20.094455   63448 node_conditions.go:123] node cpu capacity is 2
	I0914 18:09:20.094468   63448 node_conditions.go:105] duration metric: took 4.21448ms to run NodePressure ...
	I0914 18:09:20.094486   63448 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 18:09:15.374447   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:15.873830   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:16.373497   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:16.874326   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:17.373994   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:17.873394   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:18.373596   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:18.874350   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:19.374434   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:19.873774   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:20.357111   63448 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0914 18:09:20.361447   63448 kubeadm.go:739] kubelet initialised
	I0914 18:09:20.361469   63448 kubeadm.go:740] duration metric: took 4.331134ms waiting for restarted kubelet to initialise ...
	I0914 18:09:20.361479   63448 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 18:09:20.367027   63448 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-8v8s7" in "kube-system" namespace to be "Ready" ...
	I0914 18:09:20.371669   63448 pod_ready.go:98] node "default-k8s-diff-port-243449" hosting pod "coredns-7c65d6cfc9-8v8s7" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-243449" has status "Ready":"False"
	I0914 18:09:20.371697   63448 pod_ready.go:82] duration metric: took 4.644689ms for pod "coredns-7c65d6cfc9-8v8s7" in "kube-system" namespace to be "Ready" ...
	E0914 18:09:20.371706   63448 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-243449" hosting pod "coredns-7c65d6cfc9-8v8s7" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-243449" has status "Ready":"False"
	I0914 18:09:20.371714   63448 pod_ready.go:79] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-243449" in "kube-system" namespace to be "Ready" ...
	I0914 18:09:20.376461   63448 pod_ready.go:98] node "default-k8s-diff-port-243449" hosting pod "etcd-default-k8s-diff-port-243449" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-243449" has status "Ready":"False"
	I0914 18:09:20.376486   63448 pod_ready.go:82] duration metric: took 4.764316ms for pod "etcd-default-k8s-diff-port-243449" in "kube-system" namespace to be "Ready" ...
	E0914 18:09:20.376497   63448 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-243449" hosting pod "etcd-default-k8s-diff-port-243449" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-243449" has status "Ready":"False"
	I0914 18:09:20.376506   63448 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-243449" in "kube-system" namespace to be "Ready" ...
	I0914 18:09:20.380607   63448 pod_ready.go:98] node "default-k8s-diff-port-243449" hosting pod "kube-apiserver-default-k8s-diff-port-243449" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-243449" has status "Ready":"False"
	I0914 18:09:20.380632   63448 pod_ready.go:82] duration metric: took 4.117696ms for pod "kube-apiserver-default-k8s-diff-port-243449" in "kube-system" namespace to be "Ready" ...
	E0914 18:09:20.380642   63448 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-243449" hosting pod "kube-apiserver-default-k8s-diff-port-243449" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-243449" has status "Ready":"False"
	I0914 18:09:20.380649   63448 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-243449" in "kube-system" namespace to be "Ready" ...
	I0914 18:09:20.481883   63448 pod_ready.go:98] node "default-k8s-diff-port-243449" hosting pod "kube-controller-manager-default-k8s-diff-port-243449" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-243449" has status "Ready":"False"
	I0914 18:09:20.481920   63448 pod_ready.go:82] duration metric: took 101.262101ms for pod "kube-controller-manager-default-k8s-diff-port-243449" in "kube-system" namespace to be "Ready" ...
	E0914 18:09:20.481935   63448 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-243449" hosting pod "kube-controller-manager-default-k8s-diff-port-243449" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-243449" has status "Ready":"False"
	I0914 18:09:20.481965   63448 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-gbkqm" in "kube-system" namespace to be "Ready" ...
	I0914 18:09:20.881501   63448 pod_ready.go:98] node "default-k8s-diff-port-243449" hosting pod "kube-proxy-gbkqm" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-243449" has status "Ready":"False"
	I0914 18:09:20.881541   63448 pod_ready.go:82] duration metric: took 399.559576ms for pod "kube-proxy-gbkqm" in "kube-system" namespace to be "Ready" ...
	E0914 18:09:20.881556   63448 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-243449" hosting pod "kube-proxy-gbkqm" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-243449" has status "Ready":"False"
	I0914 18:09:20.881566   63448 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-243449" in "kube-system" namespace to be "Ready" ...
	I0914 18:09:21.282414   63448 pod_ready.go:98] node "default-k8s-diff-port-243449" hosting pod "kube-scheduler-default-k8s-diff-port-243449" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-243449" has status "Ready":"False"
	I0914 18:09:21.282446   63448 pod_ready.go:82] duration metric: took 400.860884ms for pod "kube-scheduler-default-k8s-diff-port-243449" in "kube-system" namespace to be "Ready" ...
	E0914 18:09:21.282463   63448 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-243449" hosting pod "kube-scheduler-default-k8s-diff-port-243449" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-243449" has status "Ready":"False"
	I0914 18:09:21.282472   63448 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace to be "Ready" ...
	I0914 18:09:21.681717   63448 pod_ready.go:98] node "default-k8s-diff-port-243449" hosting pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-243449" has status "Ready":"False"
	I0914 18:09:21.681757   63448 pod_ready.go:82] duration metric: took 399.273892ms for pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace to be "Ready" ...
	E0914 18:09:21.681773   63448 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-243449" hosting pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-243449" has status "Ready":"False"
	I0914 18:09:21.681783   63448 pod_ready.go:39] duration metric: took 1.320292845s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
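(The pod_ready.go waits above repeatedly check whether each system-critical pod has the PodReady condition set to True while the node itself is still NotReady. A minimal client-go sketch of that style of check is shown below; it is not minikube's implementation, and the 4-minute deadline and 2-second poll interval are assumptions, though the kubeconfig path and the coredns pod name are taken from this log.)

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's PodReady condition is True.
func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19643-8806/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "coredns-7c65d6cfc9-8v8s7", metav1.GetOptions{})
		if err == nil && podReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for pod to be Ready")
}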
	I0914 18:09:21.681825   63448 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0914 18:09:21.693644   63448 ops.go:34] apiserver oom_adj: -16
	I0914 18:09:21.693682   63448 kubeadm.go:597] duration metric: took 11.306754096s to restartPrimaryControlPlane
	I0914 18:09:21.693696   63448 kubeadm.go:394] duration metric: took 11.356851178s to StartCluster
	I0914 18:09:21.693719   63448 settings.go:142] acquiring lock: {Name:mkee31cd57c3a91eff581c48ba7961460fb3e0b1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 18:09:21.693820   63448 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19643-8806/kubeconfig
	I0914 18:09:21.695521   63448 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19643-8806/kubeconfig: {Name:mk850f3e2f3c2ef1e81c795ae72e22e459e27140 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 18:09:21.695793   63448 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.38 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0914 18:09:21.695903   63448 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0914 18:09:21.695982   63448 config.go:182] Loaded profile config "default-k8s-diff-port-243449": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0914 18:09:21.696003   63448 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-243449"
	I0914 18:09:21.696021   63448 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-243449"
	I0914 18:09:21.696029   63448 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-243449"
	W0914 18:09:21.696041   63448 addons.go:243] addon storage-provisioner should already be in state true
	I0914 18:09:21.696044   63448 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-243449"
	I0914 18:09:21.696063   63448 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-243449"
	I0914 18:09:21.696094   63448 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-243449"
	W0914 18:09:21.696108   63448 addons.go:243] addon metrics-server should already be in state true
	I0914 18:09:21.696149   63448 host.go:66] Checking if "default-k8s-diff-port-243449" exists ...
	I0914 18:09:21.696074   63448 host.go:66] Checking if "default-k8s-diff-port-243449" exists ...
	I0914 18:09:21.696411   63448 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 18:09:21.696455   63448 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 18:09:21.696543   63448 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 18:09:21.696562   63448 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 18:09:21.696693   63448 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 18:09:21.696735   63448 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 18:09:21.697719   63448 out.go:177] * Verifying Kubernetes components...
	I0914 18:09:21.699171   63448 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 18:09:21.712479   63448 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36733
	I0914 18:09:21.712563   63448 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40265
	I0914 18:09:21.713050   63448 main.go:141] libmachine: () Calling .GetVersion
	I0914 18:09:21.713065   63448 main.go:141] libmachine: () Calling .GetVersion
	I0914 18:09:21.713585   63448 main.go:141] libmachine: Using API Version  1
	I0914 18:09:21.713601   63448 main.go:141] libmachine: Using API Version  1
	I0914 18:09:21.713613   63448 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 18:09:21.713633   63448 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 18:09:21.713940   63448 main.go:141] libmachine: () Calling .GetMachineName
	I0914 18:09:21.714122   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetState
	I0914 18:09:21.714135   63448 main.go:141] libmachine: () Calling .GetMachineName
	I0914 18:09:21.714737   63448 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 18:09:21.714789   63448 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 18:09:21.716503   63448 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33627
	I0914 18:09:21.716977   63448 main.go:141] libmachine: () Calling .GetVersion
	I0914 18:09:21.717490   63448 main.go:141] libmachine: Using API Version  1
	I0914 18:09:21.717514   63448 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 18:09:21.717872   63448 main.go:141] libmachine: () Calling .GetMachineName
	I0914 18:09:21.718055   63448 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-243449"
	W0914 18:09:21.718075   63448 addons.go:243] addon default-storageclass should already be in state true
	I0914 18:09:21.718105   63448 host.go:66] Checking if "default-k8s-diff-port-243449" exists ...
	I0914 18:09:21.718432   63448 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 18:09:21.718484   63448 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 18:09:21.718491   63448 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 18:09:21.718527   63448 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 18:09:21.737248   63448 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45909
	I0914 18:09:21.738874   63448 main.go:141] libmachine: () Calling .GetVersion
	I0914 18:09:21.739437   63448 main.go:141] libmachine: Using API Version  1
	I0914 18:09:21.739460   63448 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 18:09:21.739865   63448 main.go:141] libmachine: () Calling .GetMachineName
	I0914 18:09:21.740121   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetState
	I0914 18:09:21.742251   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .DriverName
	I0914 18:09:21.744281   63448 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 18:09:21.745631   63448 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0914 18:09:21.745656   63448 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0914 18:09:21.745682   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetSSHHostname
	I0914 18:09:21.749856   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | domain default-k8s-diff-port-243449 has defined MAC address 52:54:00:6e:0b:a7 in network mk-default-k8s-diff-port-243449
	I0914 18:09:21.750398   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:0b:a7", ip: ""} in network mk-default-k8s-diff-port-243449: {Iface:virbr4 ExpiryTime:2024-09-14 19:08:56 +0000 UTC Type:0 Mac:52:54:00:6e:0b:a7 Iaid: IPaddr:192.168.61.38 Prefix:24 Hostname:default-k8s-diff-port-243449 Clientid:01:52:54:00:6e:0b:a7}
	I0914 18:09:21.750424   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | domain default-k8s-diff-port-243449 has defined IP address 192.168.61.38 and MAC address 52:54:00:6e:0b:a7 in network mk-default-k8s-diff-port-243449
	I0914 18:09:21.750659   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetSSHPort
	I0914 18:09:21.750886   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetSSHKeyPath
	I0914 18:09:21.751029   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetSSHUsername
	I0914 18:09:21.751187   63448 sshutil.go:53] new ssh client: &{IP:192.168.61.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/default-k8s-diff-port-243449/id_rsa Username:docker}
	I0914 18:09:21.756605   63448 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33055
	I0914 18:09:21.756825   63448 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39257
	I0914 18:09:21.757040   63448 main.go:141] libmachine: () Calling .GetVersion
	I0914 18:09:21.757293   63448 main.go:141] libmachine: () Calling .GetVersion
	I0914 18:09:21.757562   63448 main.go:141] libmachine: Using API Version  1
	I0914 18:09:21.757588   63448 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 18:09:21.758058   63448 main.go:141] libmachine: () Calling .GetMachineName
	I0914 18:09:21.758301   63448 main.go:141] libmachine: Using API Version  1
	I0914 18:09:21.758322   63448 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 18:09:21.758325   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetState
	I0914 18:09:21.758717   63448 main.go:141] libmachine: () Calling .GetMachineName
	I0914 18:09:21.759300   63448 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 18:09:21.759342   63448 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 18:09:21.760557   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .DriverName
	I0914 18:09:21.762845   63448 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0914 18:09:18.787883   62207 main.go:141] libmachine: (no-preload-168587) DBG | domain no-preload-168587 has defined MAC address 52:54:00:4c:40:8a in network mk-no-preload-168587
	I0914 18:09:18.788270   62207 main.go:141] libmachine: (no-preload-168587) DBG | unable to find current IP address of domain no-preload-168587 in network mk-no-preload-168587
	I0914 18:09:18.788298   62207 main.go:141] libmachine: (no-preload-168587) DBG | I0914 18:09:18.788225   64231 retry.go:31] will retry after 4.53698887s: waiting for machine to come up
	I0914 18:09:21.764071   63448 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0914 18:09:21.764092   63448 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0914 18:09:21.764116   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetSSHHostname
	I0914 18:09:21.767725   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | domain default-k8s-diff-port-243449 has defined MAC address 52:54:00:6e:0b:a7 in network mk-default-k8s-diff-port-243449
	I0914 18:09:21.768255   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:0b:a7", ip: ""} in network mk-default-k8s-diff-port-243449: {Iface:virbr4 ExpiryTime:2024-09-14 19:08:56 +0000 UTC Type:0 Mac:52:54:00:6e:0b:a7 Iaid: IPaddr:192.168.61.38 Prefix:24 Hostname:default-k8s-diff-port-243449 Clientid:01:52:54:00:6e:0b:a7}
	I0914 18:09:21.768367   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | domain default-k8s-diff-port-243449 has defined IP address 192.168.61.38 and MAC address 52:54:00:6e:0b:a7 in network mk-default-k8s-diff-port-243449
	I0914 18:09:21.768503   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetSSHPort
	I0914 18:09:21.768681   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetSSHKeyPath
	I0914 18:09:21.768856   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetSSHUsername
	I0914 18:09:21.769030   63448 sshutil.go:53] new ssh client: &{IP:192.168.61.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/default-k8s-diff-port-243449/id_rsa Username:docker}
	I0914 18:09:21.776783   63448 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38967
	I0914 18:09:21.777226   63448 main.go:141] libmachine: () Calling .GetVersion
	I0914 18:09:21.777736   63448 main.go:141] libmachine: Using API Version  1
	I0914 18:09:21.777754   63448 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 18:09:21.778113   63448 main.go:141] libmachine: () Calling .GetMachineName
	I0914 18:09:21.778345   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetState
	I0914 18:09:21.780215   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .DriverName
	I0914 18:09:21.780421   63448 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0914 18:09:21.780436   63448 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0914 18:09:21.780458   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetSSHHostname
	I0914 18:09:21.783243   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | domain default-k8s-diff-port-243449 has defined MAC address 52:54:00:6e:0b:a7 in network mk-default-k8s-diff-port-243449
	I0914 18:09:21.783671   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:0b:a7", ip: ""} in network mk-default-k8s-diff-port-243449: {Iface:virbr4 ExpiryTime:2024-09-14 19:08:56 +0000 UTC Type:0 Mac:52:54:00:6e:0b:a7 Iaid: IPaddr:192.168.61.38 Prefix:24 Hostname:default-k8s-diff-port-243449 Clientid:01:52:54:00:6e:0b:a7}
	I0914 18:09:21.783698   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | domain default-k8s-diff-port-243449 has defined IP address 192.168.61.38 and MAC address 52:54:00:6e:0b:a7 in network mk-default-k8s-diff-port-243449
	I0914 18:09:21.783857   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetSSHPort
	I0914 18:09:21.784023   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetSSHKeyPath
	I0914 18:09:21.784138   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetSSHUsername
	I0914 18:09:21.784256   63448 sshutil.go:53] new ssh client: &{IP:192.168.61.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/default-k8s-diff-port-243449/id_rsa Username:docker}
	I0914 18:09:21.919649   63448 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0914 18:09:21.945515   63448 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-243449" to be "Ready" ...
	I0914 18:09:22.020487   63448 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0914 18:09:22.020509   63448 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0914 18:09:22.041265   63448 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0914 18:09:22.072169   63448 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0914 18:09:22.072199   63448 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0914 18:09:22.112117   63448 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0914 18:09:22.112148   63448 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0914 18:09:22.146636   63448 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0914 18:09:22.162248   63448 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0914 18:09:22.520416   63448 main.go:141] libmachine: Making call to close driver server
	I0914 18:09:22.520448   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .Close
	I0914 18:09:22.520793   63448 main.go:141] libmachine: Successfully made call to close driver server
	I0914 18:09:22.520815   63448 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 18:09:22.520831   63448 main.go:141] libmachine: Making call to close driver server
	I0914 18:09:22.520833   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | Closing plugin on server side
	I0914 18:09:22.520840   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .Close
	I0914 18:09:22.521074   63448 main.go:141] libmachine: Successfully made call to close driver server
	I0914 18:09:22.521119   63448 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 18:09:22.527992   63448 main.go:141] libmachine: Making call to close driver server
	I0914 18:09:22.528030   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .Close
	I0914 18:09:22.528578   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | Closing plugin on server side
	I0914 18:09:22.528581   63448 main.go:141] libmachine: Successfully made call to close driver server
	I0914 18:09:22.528605   63448 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 18:09:23.246463   63448 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.084175525s)
	I0914 18:09:23.246520   63448 main.go:141] libmachine: Making call to close driver server
	I0914 18:09:23.246535   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .Close
	I0914 18:09:23.246564   63448 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.099889297s)
	I0914 18:09:23.246609   63448 main.go:141] libmachine: Making call to close driver server
	I0914 18:09:23.246621   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .Close
	I0914 18:09:23.246835   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | Closing plugin on server side
	I0914 18:09:23.246876   63448 main.go:141] libmachine: Successfully made call to close driver server
	I0914 18:09:23.246888   63448 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 18:09:23.246897   63448 main.go:141] libmachine: Making call to close driver server
	I0914 18:09:23.246910   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .Close
	I0914 18:09:23.246944   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | Closing plugin on server side
	I0914 18:09:23.246958   63448 main.go:141] libmachine: Successfully made call to close driver server
	I0914 18:09:23.247002   63448 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 18:09:23.247021   63448 main.go:141] libmachine: Making call to close driver server
	I0914 18:09:23.247046   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .Close
	I0914 18:09:23.247156   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | Closing plugin on server side
	I0914 18:09:23.247192   63448 main.go:141] libmachine: Successfully made call to close driver server
	I0914 18:09:23.247227   63448 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 18:09:23.247241   63448 main.go:141] libmachine: Successfully made call to close driver server
	I0914 18:09:23.247260   63448 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 18:09:23.247241   63448 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-243449"
	I0914 18:09:23.250385   63448 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0914 18:09:20.583600   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:09:23.083187   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:09:23.251609   63448 addons.go:510] duration metric: took 1.555716144s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
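(The addon enablement above copies the metrics-server manifests to /etc/kubernetes/addons/ and applies them on the node through ssh_runner. The sketch below runs the same kubectl apply command locally via /bin/bash, mirroring the command string in the log; executing it outside the VM is purely illustrative.)

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Same command the log shows ssh_runner executing on the node.
	cmd := "sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply" +
		" -f /etc/kubernetes/addons/metrics-apiservice.yaml" +
		" -f /etc/kubernetes/addons/metrics-server-deployment.yaml" +
		" -f /etc/kubernetes/addons/metrics-server-rbac.yaml" +
		" -f /etc/kubernetes/addons/metrics-server-service.yaml"
	out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
	fmt.Printf("%s", out)
	if err != nil {
		fmt.Println("apply failed:", err)
	}
}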
	I0914 18:09:23.949715   63448 node_ready.go:53] node "default-k8s-diff-port-243449" has status "Ready":"False"
	I0914 18:09:20.374108   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:20.874167   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:21.374108   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:21.873539   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:22.374451   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:22.874481   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:23.374533   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:23.873433   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:24.374284   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:24.873466   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:23.327287   62207 main.go:141] libmachine: (no-preload-168587) DBG | domain no-preload-168587 has defined MAC address 52:54:00:4c:40:8a in network mk-no-preload-168587
	I0914 18:09:23.327775   62207 main.go:141] libmachine: (no-preload-168587) DBG | domain no-preload-168587 has current primary IP address 192.168.39.38 and MAC address 52:54:00:4c:40:8a in network mk-no-preload-168587
	I0914 18:09:23.327803   62207 main.go:141] libmachine: (no-preload-168587) Found IP for machine: 192.168.39.38
	I0914 18:09:23.327822   62207 main.go:141] libmachine: (no-preload-168587) Reserving static IP address...
	I0914 18:09:23.328197   62207 main.go:141] libmachine: (no-preload-168587) DBG | found host DHCP lease matching {name: "no-preload-168587", mac: "52:54:00:4c:40:8a", ip: "192.168.39.38"} in network mk-no-preload-168587: {Iface:virbr2 ExpiryTime:2024-09-14 18:59:50 +0000 UTC Type:0 Mac:52:54:00:4c:40:8a Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:no-preload-168587 Clientid:01:52:54:00:4c:40:8a}
	I0914 18:09:23.328221   62207 main.go:141] libmachine: (no-preload-168587) Reserved static IP address: 192.168.39.38
	I0914 18:09:23.328264   62207 main.go:141] libmachine: (no-preload-168587) DBG | skip adding static IP to network mk-no-preload-168587 - found existing host DHCP lease matching {name: "no-preload-168587", mac: "52:54:00:4c:40:8a", ip: "192.168.39.38"}
	I0914 18:09:23.328283   62207 main.go:141] libmachine: (no-preload-168587) DBG | Getting to WaitForSSH function...
	I0914 18:09:23.328295   62207 main.go:141] libmachine: (no-preload-168587) Waiting for SSH to be available...
	I0914 18:09:23.330598   62207 main.go:141] libmachine: (no-preload-168587) DBG | domain no-preload-168587 has defined MAC address 52:54:00:4c:40:8a in network mk-no-preload-168587
	I0914 18:09:23.330954   62207 main.go:141] libmachine: (no-preload-168587) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:40:8a", ip: ""} in network mk-no-preload-168587: {Iface:virbr2 ExpiryTime:2024-09-14 18:59:50 +0000 UTC Type:0 Mac:52:54:00:4c:40:8a Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:no-preload-168587 Clientid:01:52:54:00:4c:40:8a}
	I0914 18:09:23.330985   62207 main.go:141] libmachine: (no-preload-168587) DBG | domain no-preload-168587 has defined IP address 192.168.39.38 and MAC address 52:54:00:4c:40:8a in network mk-no-preload-168587
	I0914 18:09:23.331105   62207 main.go:141] libmachine: (no-preload-168587) DBG | Using SSH client type: external
	I0914 18:09:23.331132   62207 main.go:141] libmachine: (no-preload-168587) DBG | Using SSH private key: /home/jenkins/minikube-integration/19643-8806/.minikube/machines/no-preload-168587/id_rsa (-rw-------)
	I0914 18:09:23.331184   62207 main.go:141] libmachine: (no-preload-168587) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.38 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19643-8806/.minikube/machines/no-preload-168587/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0914 18:09:23.331196   62207 main.go:141] libmachine: (no-preload-168587) DBG | About to run SSH command:
	I0914 18:09:23.331208   62207 main.go:141] libmachine: (no-preload-168587) DBG | exit 0
	I0914 18:09:23.454525   62207 main.go:141] libmachine: (no-preload-168587) DBG | SSH cmd err, output: <nil>: 
	I0914 18:09:23.454883   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetConfigRaw
	I0914 18:09:23.455505   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetIP
	I0914 18:09:23.457696   62207 main.go:141] libmachine: (no-preload-168587) DBG | domain no-preload-168587 has defined MAC address 52:54:00:4c:40:8a in network mk-no-preload-168587
	I0914 18:09:23.458030   62207 main.go:141] libmachine: (no-preload-168587) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:40:8a", ip: ""} in network mk-no-preload-168587: {Iface:virbr2 ExpiryTime:2024-09-14 18:59:50 +0000 UTC Type:0 Mac:52:54:00:4c:40:8a Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:no-preload-168587 Clientid:01:52:54:00:4c:40:8a}
	I0914 18:09:23.458069   62207 main.go:141] libmachine: (no-preload-168587) DBG | domain no-preload-168587 has defined IP address 192.168.39.38 and MAC address 52:54:00:4c:40:8a in network mk-no-preload-168587
	I0914 18:09:23.458372   62207 profile.go:143] Saving config to /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/no-preload-168587/config.json ...
	I0914 18:09:23.458611   62207 machine.go:93] provisionDockerMachine start ...
	I0914 18:09:23.458633   62207 main.go:141] libmachine: (no-preload-168587) Calling .DriverName
	I0914 18:09:23.458828   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetSSHHostname
	I0914 18:09:23.461199   62207 main.go:141] libmachine: (no-preload-168587) DBG | domain no-preload-168587 has defined MAC address 52:54:00:4c:40:8a in network mk-no-preload-168587
	I0914 18:09:23.461540   62207 main.go:141] libmachine: (no-preload-168587) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:40:8a", ip: ""} in network mk-no-preload-168587: {Iface:virbr2 ExpiryTime:2024-09-14 18:59:50 +0000 UTC Type:0 Mac:52:54:00:4c:40:8a Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:no-preload-168587 Clientid:01:52:54:00:4c:40:8a}
	I0914 18:09:23.461576   62207 main.go:141] libmachine: (no-preload-168587) DBG | domain no-preload-168587 has defined IP address 192.168.39.38 and MAC address 52:54:00:4c:40:8a in network mk-no-preload-168587
	I0914 18:09:23.461705   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetSSHPort
	I0914 18:09:23.461895   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetSSHKeyPath
	I0914 18:09:23.462006   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetSSHKeyPath
	I0914 18:09:23.462153   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetSSHUsername
	I0914 18:09:23.462314   62207 main.go:141] libmachine: Using SSH client type: native
	I0914 18:09:23.462477   62207 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.38 22 <nil> <nil>}
	I0914 18:09:23.462488   62207 main.go:141] libmachine: About to run SSH command:
	hostname
	I0914 18:09:23.566278   62207 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0914 18:09:23.566310   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetMachineName
	I0914 18:09:23.566559   62207 buildroot.go:166] provisioning hostname "no-preload-168587"
	I0914 18:09:23.566581   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetMachineName
	I0914 18:09:23.566742   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetSSHHostname
	I0914 18:09:23.569254   62207 main.go:141] libmachine: (no-preload-168587) DBG | domain no-preload-168587 has defined MAC address 52:54:00:4c:40:8a in network mk-no-preload-168587
	I0914 18:09:23.569590   62207 main.go:141] libmachine: (no-preload-168587) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:40:8a", ip: ""} in network mk-no-preload-168587: {Iface:virbr2 ExpiryTime:2024-09-14 18:59:50 +0000 UTC Type:0 Mac:52:54:00:4c:40:8a Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:no-preload-168587 Clientid:01:52:54:00:4c:40:8a}
	I0914 18:09:23.569617   62207 main.go:141] libmachine: (no-preload-168587) DBG | domain no-preload-168587 has defined IP address 192.168.39.38 and MAC address 52:54:00:4c:40:8a in network mk-no-preload-168587
	I0914 18:09:23.569713   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetSSHPort
	I0914 18:09:23.569888   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetSSHKeyPath
	I0914 18:09:23.570009   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetSSHKeyPath
	I0914 18:09:23.570174   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetSSHUsername
	I0914 18:09:23.570344   62207 main.go:141] libmachine: Using SSH client type: native
	I0914 18:09:23.570556   62207 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.38 22 <nil> <nil>}
	I0914 18:09:23.570575   62207 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-168587 && echo "no-preload-168587" | sudo tee /etc/hostname
	I0914 18:09:23.687805   62207 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-168587
	
	I0914 18:09:23.687848   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetSSHHostname
	I0914 18:09:23.690447   62207 main.go:141] libmachine: (no-preload-168587) DBG | domain no-preload-168587 has defined MAC address 52:54:00:4c:40:8a in network mk-no-preload-168587
	I0914 18:09:23.690787   62207 main.go:141] libmachine: (no-preload-168587) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:40:8a", ip: ""} in network mk-no-preload-168587: {Iface:virbr2 ExpiryTime:2024-09-14 18:59:50 +0000 UTC Type:0 Mac:52:54:00:4c:40:8a Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:no-preload-168587 Clientid:01:52:54:00:4c:40:8a}
	I0914 18:09:23.690824   62207 main.go:141] libmachine: (no-preload-168587) DBG | domain no-preload-168587 has defined IP address 192.168.39.38 and MAC address 52:54:00:4c:40:8a in network mk-no-preload-168587
	I0914 18:09:23.690955   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetSSHPort
	I0914 18:09:23.691135   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetSSHKeyPath
	I0914 18:09:23.691279   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetSSHKeyPath
	I0914 18:09:23.691427   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetSSHUsername
	I0914 18:09:23.691590   62207 main.go:141] libmachine: Using SSH client type: native
	I0914 18:09:23.691768   62207 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.38 22 <nil> <nil>}
	I0914 18:09:23.691790   62207 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-168587' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-168587/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-168587' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0914 18:09:23.805502   62207 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0914 18:09:23.805527   62207 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19643-8806/.minikube CaCertPath:/home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19643-8806/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19643-8806/.minikube}
	I0914 18:09:23.805545   62207 buildroot.go:174] setting up certificates
	I0914 18:09:23.805553   62207 provision.go:84] configureAuth start
	I0914 18:09:23.805561   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetMachineName
	I0914 18:09:23.805798   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetIP
	I0914 18:09:23.808306   62207 main.go:141] libmachine: (no-preload-168587) DBG | domain no-preload-168587 has defined MAC address 52:54:00:4c:40:8a in network mk-no-preload-168587
	I0914 18:09:23.808643   62207 main.go:141] libmachine: (no-preload-168587) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:40:8a", ip: ""} in network mk-no-preload-168587: {Iface:virbr2 ExpiryTime:2024-09-14 18:59:50 +0000 UTC Type:0 Mac:52:54:00:4c:40:8a Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:no-preload-168587 Clientid:01:52:54:00:4c:40:8a}
	I0914 18:09:23.808668   62207 main.go:141] libmachine: (no-preload-168587) DBG | domain no-preload-168587 has defined IP address 192.168.39.38 and MAC address 52:54:00:4c:40:8a in network mk-no-preload-168587
	I0914 18:09:23.808819   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetSSHHostname
	I0914 18:09:23.811055   62207 main.go:141] libmachine: (no-preload-168587) DBG | domain no-preload-168587 has defined MAC address 52:54:00:4c:40:8a in network mk-no-preload-168587
	I0914 18:09:23.811374   62207 main.go:141] libmachine: (no-preload-168587) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:40:8a", ip: ""} in network mk-no-preload-168587: {Iface:virbr2 ExpiryTime:2024-09-14 18:59:50 +0000 UTC Type:0 Mac:52:54:00:4c:40:8a Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:no-preload-168587 Clientid:01:52:54:00:4c:40:8a}
	I0914 18:09:23.811401   62207 main.go:141] libmachine: (no-preload-168587) DBG | domain no-preload-168587 has defined IP address 192.168.39.38 and MAC address 52:54:00:4c:40:8a in network mk-no-preload-168587
	I0914 18:09:23.811586   62207 provision.go:143] copyHostCerts
	I0914 18:09:23.811647   62207 exec_runner.go:144] found /home/jenkins/minikube-integration/19643-8806/.minikube/ca.pem, removing ...
	I0914 18:09:23.811657   62207 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19643-8806/.minikube/ca.pem
	I0914 18:09:23.811712   62207 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19643-8806/.minikube/ca.pem (1082 bytes)
	I0914 18:09:23.811800   62207 exec_runner.go:144] found /home/jenkins/minikube-integration/19643-8806/.minikube/cert.pem, removing ...
	I0914 18:09:23.811808   62207 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19643-8806/.minikube/cert.pem
	I0914 18:09:23.811829   62207 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19643-8806/.minikube/cert.pem (1123 bytes)
	I0914 18:09:23.811880   62207 exec_runner.go:144] found /home/jenkins/minikube-integration/19643-8806/.minikube/key.pem, removing ...
	I0914 18:09:23.811887   62207 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19643-8806/.minikube/key.pem
	I0914 18:09:23.811908   62207 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19643-8806/.minikube/key.pem (1675 bytes)
	I0914 18:09:23.811956   62207 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19643-8806/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca-key.pem org=jenkins.no-preload-168587 san=[127.0.0.1 192.168.39.38 localhost minikube no-preload-168587]
	I0914 18:09:24.051868   62207 provision.go:177] copyRemoteCerts
	I0914 18:09:24.051936   62207 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0914 18:09:24.051958   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetSSHHostname
	I0914 18:09:24.054842   62207 main.go:141] libmachine: (no-preload-168587) DBG | domain no-preload-168587 has defined MAC address 52:54:00:4c:40:8a in network mk-no-preload-168587
	I0914 18:09:24.055107   62207 main.go:141] libmachine: (no-preload-168587) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:40:8a", ip: ""} in network mk-no-preload-168587: {Iface:virbr2 ExpiryTime:2024-09-14 18:59:50 +0000 UTC Type:0 Mac:52:54:00:4c:40:8a Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:no-preload-168587 Clientid:01:52:54:00:4c:40:8a}
	I0914 18:09:24.055138   62207 main.go:141] libmachine: (no-preload-168587) DBG | domain no-preload-168587 has defined IP address 192.168.39.38 and MAC address 52:54:00:4c:40:8a in network mk-no-preload-168587
	I0914 18:09:24.055321   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetSSHPort
	I0914 18:09:24.055514   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetSSHKeyPath
	I0914 18:09:24.055664   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetSSHUsername
	I0914 18:09:24.055804   62207 sshutil.go:53] new ssh client: &{IP:192.168.39.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/no-preload-168587/id_rsa Username:docker}
	I0914 18:09:24.140378   62207 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0914 18:09:24.168422   62207 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0914 18:09:24.194540   62207 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0914 18:09:24.217910   62207 provision.go:87] duration metric: took 412.343545ms to configureAuth
	I0914 18:09:24.217942   62207 buildroot.go:189] setting minikube options for container-runtime
	I0914 18:09:24.218180   62207 config.go:182] Loaded profile config "no-preload-168587": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0914 18:09:24.218255   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetSSHHostname
	I0914 18:09:24.220788   62207 main.go:141] libmachine: (no-preload-168587) DBG | domain no-preload-168587 has defined MAC address 52:54:00:4c:40:8a in network mk-no-preload-168587
	I0914 18:09:24.221216   62207 main.go:141] libmachine: (no-preload-168587) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:40:8a", ip: ""} in network mk-no-preload-168587: {Iface:virbr2 ExpiryTime:2024-09-14 18:59:50 +0000 UTC Type:0 Mac:52:54:00:4c:40:8a Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:no-preload-168587 Clientid:01:52:54:00:4c:40:8a}
	I0914 18:09:24.221259   62207 main.go:141] libmachine: (no-preload-168587) DBG | domain no-preload-168587 has defined IP address 192.168.39.38 and MAC address 52:54:00:4c:40:8a in network mk-no-preload-168587
	I0914 18:09:24.221408   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetSSHPort
	I0914 18:09:24.221678   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetSSHKeyPath
	I0914 18:09:24.221842   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetSSHKeyPath
	I0914 18:09:24.222033   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetSSHUsername
	I0914 18:09:24.222218   62207 main.go:141] libmachine: Using SSH client type: native
	I0914 18:09:24.222399   62207 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.38 22 <nil> <nil>}
	I0914 18:09:24.222417   62207 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0914 18:09:24.433203   62207 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0914 18:09:24.433230   62207 machine.go:96] duration metric: took 974.605605ms to provisionDockerMachine
	I0914 18:09:24.433241   62207 start.go:293] postStartSetup for "no-preload-168587" (driver="kvm2")
	I0914 18:09:24.433253   62207 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0914 18:09:24.433282   62207 main.go:141] libmachine: (no-preload-168587) Calling .DriverName
	I0914 18:09:24.433595   62207 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0914 18:09:24.433625   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetSSHHostname
	I0914 18:09:24.436247   62207 main.go:141] libmachine: (no-preload-168587) DBG | domain no-preload-168587 has defined MAC address 52:54:00:4c:40:8a in network mk-no-preload-168587
	I0914 18:09:24.436710   62207 main.go:141] libmachine: (no-preload-168587) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:40:8a", ip: ""} in network mk-no-preload-168587: {Iface:virbr2 ExpiryTime:2024-09-14 18:59:50 +0000 UTC Type:0 Mac:52:54:00:4c:40:8a Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:no-preload-168587 Clientid:01:52:54:00:4c:40:8a}
	I0914 18:09:24.436746   62207 main.go:141] libmachine: (no-preload-168587) DBG | domain no-preload-168587 has defined IP address 192.168.39.38 and MAC address 52:54:00:4c:40:8a in network mk-no-preload-168587
	I0914 18:09:24.436855   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetSSHPort
	I0914 18:09:24.437015   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetSSHKeyPath
	I0914 18:09:24.437189   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetSSHUsername
	I0914 18:09:24.437305   62207 sshutil.go:53] new ssh client: &{IP:192.168.39.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/no-preload-168587/id_rsa Username:docker}
	I0914 18:09:24.516493   62207 ssh_runner.go:195] Run: cat /etc/os-release
	I0914 18:09:24.520486   62207 info.go:137] Remote host: Buildroot 2023.02.9
	I0914 18:09:24.520518   62207 filesync.go:126] Scanning /home/jenkins/minikube-integration/19643-8806/.minikube/addons for local assets ...
	I0914 18:09:24.520612   62207 filesync.go:126] Scanning /home/jenkins/minikube-integration/19643-8806/.minikube/files for local assets ...
	I0914 18:09:24.520687   62207 filesync.go:149] local asset: /home/jenkins/minikube-integration/19643-8806/.minikube/files/etc/ssl/certs/160162.pem -> 160162.pem in /etc/ssl/certs
	I0914 18:09:24.520775   62207 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0914 18:09:24.530274   62207 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/files/etc/ssl/certs/160162.pem --> /etc/ssl/certs/160162.pem (1708 bytes)
	I0914 18:09:24.553381   62207 start.go:296] duration metric: took 120.123302ms for postStartSetup
	I0914 18:09:24.553422   62207 fix.go:56] duration metric: took 20.086564499s for fixHost
	I0914 18:09:24.553445   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetSSHHostname
	I0914 18:09:24.555832   62207 main.go:141] libmachine: (no-preload-168587) DBG | domain no-preload-168587 has defined MAC address 52:54:00:4c:40:8a in network mk-no-preload-168587
	I0914 18:09:24.556100   62207 main.go:141] libmachine: (no-preload-168587) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:40:8a", ip: ""} in network mk-no-preload-168587: {Iface:virbr2 ExpiryTime:2024-09-14 18:59:50 +0000 UTC Type:0 Mac:52:54:00:4c:40:8a Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:no-preload-168587 Clientid:01:52:54:00:4c:40:8a}
	I0914 18:09:24.556133   62207 main.go:141] libmachine: (no-preload-168587) DBG | domain no-preload-168587 has defined IP address 192.168.39.38 and MAC address 52:54:00:4c:40:8a in network mk-no-preload-168587
	I0914 18:09:24.556376   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetSSHPort
	I0914 18:09:24.556605   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetSSHKeyPath
	I0914 18:09:24.556772   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetSSHKeyPath
	I0914 18:09:24.556922   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetSSHUsername
	I0914 18:09:24.557062   62207 main.go:141] libmachine: Using SSH client type: native
	I0914 18:09:24.557275   62207 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.38 22 <nil> <nil>}
	I0914 18:09:24.557285   62207 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0914 18:09:24.659101   62207 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726337364.632455119
	
	I0914 18:09:24.659128   62207 fix.go:216] guest clock: 1726337364.632455119
	I0914 18:09:24.659139   62207 fix.go:229] Guest: 2024-09-14 18:09:24.632455119 +0000 UTC Remote: 2024-09-14 18:09:24.553426386 +0000 UTC m=+357.567907862 (delta=79.028733ms)
	I0914 18:09:24.659165   62207 fix.go:200] guest clock delta is within tolerance: 79.028733ms
	I0914 18:09:24.659171   62207 start.go:83] releasing machines lock for "no-preload-168587", held for 20.192350446s
	I0914 18:09:24.659209   62207 main.go:141] libmachine: (no-preload-168587) Calling .DriverName
	I0914 18:09:24.659445   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetIP
	I0914 18:09:24.662626   62207 main.go:141] libmachine: (no-preload-168587) DBG | domain no-preload-168587 has defined MAC address 52:54:00:4c:40:8a in network mk-no-preload-168587
	I0914 18:09:24.663051   62207 main.go:141] libmachine: (no-preload-168587) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:40:8a", ip: ""} in network mk-no-preload-168587: {Iface:virbr2 ExpiryTime:2024-09-14 18:59:50 +0000 UTC Type:0 Mac:52:54:00:4c:40:8a Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:no-preload-168587 Clientid:01:52:54:00:4c:40:8a}
	I0914 18:09:24.663082   62207 main.go:141] libmachine: (no-preload-168587) DBG | domain no-preload-168587 has defined IP address 192.168.39.38 and MAC address 52:54:00:4c:40:8a in network mk-no-preload-168587
	I0914 18:09:24.663225   62207 main.go:141] libmachine: (no-preload-168587) Calling .DriverName
	I0914 18:09:24.663802   62207 main.go:141] libmachine: (no-preload-168587) Calling .DriverName
	I0914 18:09:24.663972   62207 main.go:141] libmachine: (no-preload-168587) Calling .DriverName
	I0914 18:09:24.664063   62207 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0914 18:09:24.664114   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetSSHHostname
	I0914 18:09:24.664195   62207 ssh_runner.go:195] Run: cat /version.json
	I0914 18:09:24.664221   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetSSHHostname
	I0914 18:09:24.666971   62207 main.go:141] libmachine: (no-preload-168587) DBG | domain no-preload-168587 has defined MAC address 52:54:00:4c:40:8a in network mk-no-preload-168587
	I0914 18:09:24.667255   62207 main.go:141] libmachine: (no-preload-168587) DBG | domain no-preload-168587 has defined MAC address 52:54:00:4c:40:8a in network mk-no-preload-168587
	I0914 18:09:24.667398   62207 main.go:141] libmachine: (no-preload-168587) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:40:8a", ip: ""} in network mk-no-preload-168587: {Iface:virbr2 ExpiryTime:2024-09-14 18:59:50 +0000 UTC Type:0 Mac:52:54:00:4c:40:8a Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:no-preload-168587 Clientid:01:52:54:00:4c:40:8a}
	I0914 18:09:24.667433   62207 main.go:141] libmachine: (no-preload-168587) DBG | domain no-preload-168587 has defined IP address 192.168.39.38 and MAC address 52:54:00:4c:40:8a in network mk-no-preload-168587
	I0914 18:09:24.667555   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetSSHPort
	I0914 18:09:24.667753   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetSSHKeyPath
	I0914 18:09:24.667787   62207 main.go:141] libmachine: (no-preload-168587) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:40:8a", ip: ""} in network mk-no-preload-168587: {Iface:virbr2 ExpiryTime:2024-09-14 18:59:50 +0000 UTC Type:0 Mac:52:54:00:4c:40:8a Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:no-preload-168587 Clientid:01:52:54:00:4c:40:8a}
	I0914 18:09:24.667816   62207 main.go:141] libmachine: (no-preload-168587) DBG | domain no-preload-168587 has defined IP address 192.168.39.38 and MAC address 52:54:00:4c:40:8a in network mk-no-preload-168587
	I0914 18:09:24.667913   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetSSHUsername
	I0914 18:09:24.667988   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetSSHPort
	I0914 18:09:24.668058   62207 sshutil.go:53] new ssh client: &{IP:192.168.39.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/no-preload-168587/id_rsa Username:docker}
	I0914 18:09:24.668109   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetSSHKeyPath
	I0914 18:09:24.668236   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetSSHUsername
	I0914 18:09:24.668356   62207 sshutil.go:53] new ssh client: &{IP:192.168.39.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/no-preload-168587/id_rsa Username:docker}
	I0914 18:09:24.743805   62207 ssh_runner.go:195] Run: systemctl --version
	I0914 18:09:24.776583   62207 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0914 18:09:24.924635   62207 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0914 18:09:24.930891   62207 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0914 18:09:24.930979   62207 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0914 18:09:24.952228   62207 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0914 18:09:24.952258   62207 start.go:495] detecting cgroup driver to use...
	I0914 18:09:24.952344   62207 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0914 18:09:24.967770   62207 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0914 18:09:24.983218   62207 docker.go:217] disabling cri-docker service (if available) ...
	I0914 18:09:24.983280   62207 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0914 18:09:24.997311   62207 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0914 18:09:25.011736   62207 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0914 18:09:25.135920   62207 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0914 18:09:25.323727   62207 docker.go:233] disabling docker service ...
	I0914 18:09:25.323793   62207 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0914 18:09:25.341243   62207 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0914 18:09:25.358703   62207 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0914 18:09:25.495826   62207 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0914 18:09:25.621684   62207 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0914 18:09:25.637386   62207 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0914 18:09:25.655826   62207 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0914 18:09:25.655947   62207 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 18:09:25.669204   62207 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0914 18:09:25.669266   62207 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 18:09:25.680265   62207 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 18:09:25.690860   62207 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 18:09:25.702002   62207 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0914 18:09:25.713256   62207 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 18:09:25.724125   62207 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 18:09:25.742195   62207 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 18:09:25.752680   62207 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0914 18:09:25.762842   62207 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0914 18:09:25.762920   62207 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0914 18:09:25.775680   62207 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0914 18:09:25.785190   62207 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 18:09:25.907175   62207 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0914 18:09:25.995654   62207 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0914 18:09:25.995731   62207 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0914 18:09:26.000829   62207 start.go:563] Will wait 60s for crictl version
	I0914 18:09:26.000896   62207 ssh_runner.go:195] Run: which crictl
	I0914 18:09:26.004522   62207 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0914 18:09:26.041674   62207 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0914 18:09:26.041745   62207 ssh_runner.go:195] Run: crio --version
	I0914 18:09:26.069091   62207 ssh_runner.go:195] Run: crio --version
	I0914 18:09:26.107475   62207 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0914 18:09:26.108650   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetIP
	I0914 18:09:26.111782   62207 main.go:141] libmachine: (no-preload-168587) DBG | domain no-preload-168587 has defined MAC address 52:54:00:4c:40:8a in network mk-no-preload-168587
	I0914 18:09:26.112110   62207 main.go:141] libmachine: (no-preload-168587) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:40:8a", ip: ""} in network mk-no-preload-168587: {Iface:virbr2 ExpiryTime:2024-09-14 18:59:50 +0000 UTC Type:0 Mac:52:54:00:4c:40:8a Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:no-preload-168587 Clientid:01:52:54:00:4c:40:8a}
	I0914 18:09:26.112139   62207 main.go:141] libmachine: (no-preload-168587) DBG | domain no-preload-168587 has defined IP address 192.168.39.38 and MAC address 52:54:00:4c:40:8a in network mk-no-preload-168587
	I0914 18:09:26.112279   62207 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0914 18:09:26.116339   62207 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0914 18:09:26.128616   62207 kubeadm.go:883] updating cluster {Name:no-preload-168587 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19643/minikube-v1.34.0-1726281733-19643-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726281268-19643@sha256:cce8be0c1ac4e3d852132008ef1cc1dcf5b79f708d025db83f146ae65db32e8e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-168587 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.38 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0914 18:09:26.128755   62207 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0914 18:09:26.128796   62207 ssh_runner.go:195] Run: sudo crictl images --output json
	I0914 18:09:26.165175   62207 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0914 18:09:26.165197   62207 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.1 registry.k8s.io/kube-controller-manager:v1.31.1 registry.k8s.io/kube-scheduler:v1.31.1 registry.k8s.io/kube-proxy:v1.31.1 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.15-0 registry.k8s.io/coredns/coredns:v1.11.3 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0914 18:09:26.165282   62207 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.1
	I0914 18:09:26.165301   62207 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I0914 18:09:26.165302   62207 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I0914 18:09:26.165276   62207 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 18:09:26.165346   62207 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.1
	I0914 18:09:26.165309   62207 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.3
	I0914 18:09:26.165443   62207 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.1
	I0914 18:09:26.165451   62207 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.1
	I0914 18:09:26.166853   62207 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0914 18:09:26.166858   62207 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.1
	I0914 18:09:26.166864   62207 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.1
	I0914 18:09:26.166873   62207 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I0914 18:09:26.166852   62207 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 18:09:26.166852   62207 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.3: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.3
	I0914 18:09:26.166911   62207 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.1
	I0914 18:09:26.166928   62207 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.1
	I0914 18:09:26.366393   62207 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.1
	I0914 18:09:26.398127   62207 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I0914 18:09:26.401173   62207 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.1
	I0914 18:09:26.405861   62207 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.1" does not exist at hash "175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1" in container runtime
	I0914 18:09:26.405910   62207 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.1
	I0914 18:09:26.405982   62207 ssh_runner.go:195] Run: which crictl
	I0914 18:09:26.410513   62207 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.15-0
	I0914 18:09:26.411414   62207 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.3
	I0914 18:09:26.416692   62207 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.1
	I0914 18:09:26.417710   62207 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.1
	I0914 18:09:26.643066   62207 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.1" does not exist at hash "9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b" in container runtime
	I0914 18:09:26.643114   62207 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.1
	I0914 18:09:26.643177   62207 ssh_runner.go:195] Run: which crictl
	I0914 18:09:26.643195   62207 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I0914 18:09:26.643242   62207 cache_images.go:116] "registry.k8s.io/etcd:3.5.15-0" needs transfer: "registry.k8s.io/etcd:3.5.15-0" does not exist at hash "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" in container runtime
	I0914 18:09:26.643278   62207 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.3" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.3" does not exist at hash "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6" in container runtime
	I0914 18:09:26.643293   62207 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.1" does not exist at hash "6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee" in container runtime
	I0914 18:09:26.643282   62207 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.15-0
	I0914 18:09:26.643307   62207 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.3
	I0914 18:09:26.643323   62207 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.1
	I0914 18:09:26.643328   62207 ssh_runner.go:195] Run: which crictl
	I0914 18:09:26.643351   62207 ssh_runner.go:195] Run: which crictl
	I0914 18:09:26.643366   62207 ssh_runner.go:195] Run: which crictl
	I0914 18:09:26.643386   62207 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.1" needs transfer: "registry.k8s.io/kube-proxy:v1.31.1" does not exist at hash "60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561" in container runtime
	I0914 18:09:26.643412   62207 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.1
	I0914 18:09:26.643436   62207 ssh_runner.go:195] Run: which crictl
	I0914 18:09:26.654984   62207 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I0914 18:09:26.655016   62207 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I0914 18:09:26.655035   62207 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0914 18:09:26.655016   62207 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I0914 18:09:26.733881   62207 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I0914 18:09:26.733967   62207 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I0914 18:09:26.769624   62207 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I0914 18:09:26.778708   62207 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0914 18:09:26.778836   62207 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I0914 18:09:26.778855   62207 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I0914 18:09:26.821344   62207 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I0914 18:09:26.821358   62207 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I0914 18:09:26.899012   62207 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I0914 18:09:26.906693   62207 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0914 18:09:26.909875   62207 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I0914 18:09:26.916458   62207 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I0914 18:09:26.944355   62207 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I0914 18:09:26.949250   62207 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19643-8806/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.1
	I0914 18:09:26.949400   62207 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I0914 18:09:25.582447   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:09:28.084142   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:09:25.949851   63448 node_ready.go:53] node "default-k8s-diff-port-243449" has status "Ready":"False"
	I0914 18:09:26.950390   63448 node_ready.go:49] node "default-k8s-diff-port-243449" has status "Ready":"True"
	I0914 18:09:26.950418   63448 node_ready.go:38] duration metric: took 5.004868966s for node "default-k8s-diff-port-243449" to be "Ready" ...
	I0914 18:09:26.950430   63448 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 18:09:26.956875   63448 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-8v8s7" in "kube-system" namespace to be "Ready" ...
	I0914 18:09:26.963909   63448 pod_ready.go:93] pod "coredns-7c65d6cfc9-8v8s7" in "kube-system" namespace has status "Ready":"True"
	I0914 18:09:26.963935   63448 pod_ready.go:82] duration metric: took 7.027533ms for pod "coredns-7c65d6cfc9-8v8s7" in "kube-system" namespace to be "Ready" ...
	I0914 18:09:26.963945   63448 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-243449" in "kube-system" namespace to be "Ready" ...
	I0914 18:09:28.971297   63448 pod_ready.go:93] pod "etcd-default-k8s-diff-port-243449" in "kube-system" namespace has status "Ready":"True"
	I0914 18:09:28.971327   63448 pod_ready.go:82] duration metric: took 2.007374825s for pod "etcd-default-k8s-diff-port-243449" in "kube-system" namespace to be "Ready" ...
	I0914 18:09:28.971340   63448 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-243449" in "kube-system" namespace to be "Ready" ...
	I0914 18:09:28.977510   63448 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-243449" in "kube-system" namespace has status "Ready":"True"
	I0914 18:09:28.977535   63448 pod_ready.go:82] duration metric: took 6.18573ms for pod "kube-apiserver-default-k8s-diff-port-243449" in "kube-system" namespace to be "Ready" ...
	I0914 18:09:28.977557   63448 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-243449" in "kube-system" namespace to be "Ready" ...
	I0914 18:09:25.374144   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:25.874109   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:26.374422   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:26.873444   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:27.373615   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:27.873395   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:28.373886   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:28.873510   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:29.374027   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:29.873502   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:27.035840   62207 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19643-8806/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.1
	I0914 18:09:27.035956   62207 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.1
	I0914 18:09:27.040828   62207 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19643-8806/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3
	I0914 18:09:27.040939   62207 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19643-8806/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I0914 18:09:27.040941   62207 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.3
	I0914 18:09:27.041026   62207 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0
	I0914 18:09:27.048278   62207 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19643-8806/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.1
	I0914 18:09:27.048345   62207 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19643-8806/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.1
	I0914 18:09:27.048388   62207 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.1
	I0914 18:09:27.048390   62207 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.1 (exists)
	I0914 18:09:27.048446   62207 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I0914 18:09:27.048423   62207 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.1 (exists)
	I0914 18:09:27.048482   62207 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I0914 18:09:27.048431   62207 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.1
	I0914 18:09:27.052221   62207 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.15-0 (exists)
	I0914 18:09:27.052401   62207 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.1 (exists)
	I0914 18:09:27.052585   62207 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.3 (exists)
	I0914 18:09:27.330779   62207 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 18:09:29.721998   62207 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.1: (2.673483443s)
	I0914 18:09:29.722035   62207 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19643-8806/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.1 from cache
	I0914 18:09:29.722064   62207 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.1
	I0914 18:09:29.722076   62207 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.1: (2.673496811s)
	I0914 18:09:29.722112   62207 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.1 (exists)
	I0914 18:09:29.722112   62207 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.1
	I0914 18:09:29.722194   62207 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (2.391387893s)
	I0914 18:09:29.722236   62207 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0914 18:09:29.722257   62207 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 18:09:29.722297   62207 ssh_runner.go:195] Run: which crictl
	I0914 18:09:31.485714   62207 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.1: (1.76356866s)
	I0914 18:09:31.485744   62207 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19643-8806/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.1 from cache
	I0914 18:09:31.485764   62207 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.15-0
	I0914 18:09:31.485817   62207 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0
	I0914 18:09:31.485820   62207 ssh_runner.go:235] Completed: which crictl: (1.763506603s)
	I0914 18:09:31.485862   62207 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 18:09:30.583013   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:09:33.083597   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:09:30.985230   63448 pod_ready.go:103] pod "kube-controller-manager-default-k8s-diff-port-243449" in "kube-system" namespace has status "Ready":"False"
	I0914 18:09:31.984182   63448 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-243449" in "kube-system" namespace has status "Ready":"True"
	I0914 18:09:31.984203   63448 pod_ready.go:82] duration metric: took 3.006637599s for pod "kube-controller-manager-default-k8s-diff-port-243449" in "kube-system" namespace to be "Ready" ...
	I0914 18:09:31.984212   63448 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-gbkqm" in "kube-system" namespace to be "Ready" ...
	I0914 18:09:31.989786   63448 pod_ready.go:93] pod "kube-proxy-gbkqm" in "kube-system" namespace has status "Ready":"True"
	I0914 18:09:31.989812   63448 pod_ready.go:82] duration metric: took 5.592466ms for pod "kube-proxy-gbkqm" in "kube-system" namespace to be "Ready" ...
	I0914 18:09:31.989823   63448 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-243449" in "kube-system" namespace to be "Ready" ...
	I0914 18:09:31.994224   63448 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-243449" in "kube-system" namespace has status "Ready":"True"
	I0914 18:09:31.994246   63448 pod_ready.go:82] duration metric: took 4.414059ms for pod "kube-scheduler-default-k8s-diff-port-243449" in "kube-system" namespace to be "Ready" ...
	I0914 18:09:31.994258   63448 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace to be "Ready" ...
	I0914 18:09:34.001035   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:09:30.373878   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:30.874351   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:31.373651   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:31.873914   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:32.373522   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:32.874439   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:33.373991   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:33.874056   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:34.373566   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:34.874140   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:34.781678   62207 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (3.295763296s)
	I0914 18:09:34.781783   62207 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 18:09:34.781814   62207 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0: (3.295968995s)
	I0914 18:09:34.781840   62207 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19643-8806/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 from cache
	I0914 18:09:34.781868   62207 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.1
	I0914 18:09:34.781900   62207 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.1
	I0914 18:09:36.744459   62207 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.962646981s)
	I0914 18:09:36.744514   62207 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.1: (1.962587733s)
	I0914 18:09:36.744551   62207 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19643-8806/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.1 from cache
	I0914 18:09:36.744576   62207 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 18:09:36.744590   62207 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.3
	I0914 18:09:36.744658   62207 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3
	I0914 18:09:35.582596   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:09:38.083260   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:09:36.002284   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:09:38.002962   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:09:35.374151   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:35.873725   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:36.373500   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:36.873617   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:37.373826   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:37.874068   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:38.373459   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:38.873666   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:39.373936   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:39.873551   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:38.848091   62207 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3: (2.103407014s)
	I0914 18:09:38.848126   62207 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19643-8806/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 from cache
	I0914 18:09:38.848152   62207 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.1
	I0914 18:09:38.848217   62207 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.1
	I0914 18:09:38.848153   62207 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.103554199s)
	I0914 18:09:38.848283   62207 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19643-8806/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0914 18:09:38.848368   62207 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0914 18:09:40.307247   62207 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.1: (1.459002378s)
	I0914 18:09:40.307287   62207 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19643-8806/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.1 from cache
	I0914 18:09:40.307269   62207 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.458886581s)
	I0914 18:09:40.307327   62207 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0914 18:09:40.307334   62207 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0914 18:09:40.307382   62207 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0914 18:09:40.958177   62207 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19643-8806/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0914 18:09:40.958222   62207 cache_images.go:123] Successfully loaded all cached images
	I0914 18:09:40.958228   62207 cache_images.go:92] duration metric: took 14.793018447s to LoadCachedImages
	I0914 18:09:40.958241   62207 kubeadm.go:934] updating node { 192.168.39.38 8443 v1.31.1 crio true true} ...
	I0914 18:09:40.958347   62207 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-168587 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.38
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:no-preload-168587 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
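	The kubelet drop-in shown above uses the standard systemd override idiom: the empty ExecStart= first clears the ExecStart inherited from the packaged unit, and the second ExecStart= then becomes the only start command (systemd rejects multiple ExecStart lines for ordinary services). As a minimal sketch, assuming systemd tooling is present in the guest, the merged unit can be inspected with:
	    # Show the base unit plus every drop-in, in the order systemd merges them.
	    systemctl cat kubelet
	    # List units whose definitions are extended or overridden by drop-ins.
	    systemd-delta --type=extended,overridden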
	I0914 18:09:40.958415   62207 ssh_runner.go:195] Run: crio config
	I0914 18:09:41.003620   62207 cni.go:84] Creating CNI manager for ""
	I0914 18:09:41.003643   62207 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0914 18:09:41.003653   62207 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0914 18:09:41.003674   62207 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.38 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-168587 NodeName:no-preload-168587 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.38"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.38 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0914 18:09:41.003850   62207 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.38
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-168587"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.38
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.38"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0914 18:09:41.003920   62207 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0914 18:09:41.014462   62207 binaries.go:44] Found k8s binaries, skipping transfer
	I0914 18:09:41.014541   62207 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0914 18:09:41.023964   62207 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0914 18:09:41.040206   62207 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0914 18:09:41.055630   62207 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2158 bytes)
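	The 2158-byte file just copied to /var/tmp/minikube/kubeadm.yaml.new is the multi-document kubeadm config dumped above (InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration). As a minimal sketch, and assuming the bundled kubeadm v1.31.1 binary (the validate subcommand exists since v1.26), the file can be checked for schema errors without touching the cluster:
	    # Validate the generated config in place; exits non-zero on unknown fields or bad values.
	    sudo /var/lib/minikube/binaries/v1.31.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new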
	I0914 18:09:41.072881   62207 ssh_runner.go:195] Run: grep 192.168.39.38	control-plane.minikube.internal$ /etc/hosts
	I0914 18:09:41.076449   62207 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.38	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
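	The one-liner above is a safe in-place edit of /etc/hosts: any existing control-plane.minikube.internal entry is filtered out, the fresh mapping is appended, the result goes to a temp file named after the shell PID, and only then is it copied over the original, so /etc/hosts is never truncated while it is being read. Spelled out with the same values (illustrative only):
	    # Drop any stale entry, append the fresh one, then copy over the original.
	    { grep -v $'\tcontrol-plane.minikube.internal$' /etc/hosts
	      printf '192.168.39.38\tcontrol-plane.minikube.internal\n'
	    } > /tmp/h.$$                  # $$ is the shell PID, so the temp path is unique per run
	    sudo cp /tmp/h.$$ /etc/hosts   # cp keeps the destination file's inode, unlike mv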
	I0914 18:09:41.090075   62207 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 18:09:41.210405   62207 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0914 18:09:41.228173   62207 certs.go:68] Setting up /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/no-preload-168587 for IP: 192.168.39.38
	I0914 18:09:41.228197   62207 certs.go:194] generating shared ca certs ...
	I0914 18:09:41.228213   62207 certs.go:226] acquiring lock for ca certs: {Name:mkb663a3180967f5f94f0c355b2cd55067394331 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 18:09:41.228383   62207 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19643-8806/.minikube/ca.key
	I0914 18:09:41.228443   62207 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19643-8806/.minikube/proxy-client-ca.key
	I0914 18:09:41.228457   62207 certs.go:256] generating profile certs ...
	I0914 18:09:41.228586   62207 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/no-preload-168587/client.key
	I0914 18:09:41.228667   62207 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/no-preload-168587/apiserver.key.d11ec263
	I0914 18:09:41.228731   62207 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/no-preload-168587/proxy-client.key
	I0914 18:09:41.228889   62207 certs.go:484] found cert: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/16016.pem (1338 bytes)
	W0914 18:09:41.228932   62207 certs.go:480] ignoring /home/jenkins/minikube-integration/19643-8806/.minikube/certs/16016_empty.pem, impossibly tiny 0 bytes
	I0914 18:09:41.228944   62207 certs.go:484] found cert: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca-key.pem (1679 bytes)
	I0914 18:09:41.228976   62207 certs.go:484] found cert: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca.pem (1082 bytes)
	I0914 18:09:41.229008   62207 certs.go:484] found cert: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/cert.pem (1123 bytes)
	I0914 18:09:41.229045   62207 certs.go:484] found cert: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/key.pem (1675 bytes)
	I0914 18:09:41.229102   62207 certs.go:484] found cert: /home/jenkins/minikube-integration/19643-8806/.minikube/files/etc/ssl/certs/160162.pem (1708 bytes)
	I0914 18:09:41.229913   62207 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0914 18:09:41.259871   62207 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0914 18:09:41.286359   62207 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0914 18:09:41.315410   62207 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0914 18:09:41.345541   62207 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/no-preload-168587/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0914 18:09:41.380128   62207 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/no-preload-168587/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0914 18:09:41.411130   62207 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/no-preload-168587/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0914 18:09:41.442136   62207 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/no-preload-168587/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0914 18:09:41.464823   62207 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0914 18:09:41.488153   62207 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/certs/16016.pem --> /usr/share/ca-certificates/16016.pem (1338 bytes)
	I0914 18:09:41.513788   62207 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/files/etc/ssl/certs/160162.pem --> /usr/share/ca-certificates/160162.pem (1708 bytes)
	I0914 18:09:41.537256   62207 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0914 18:09:41.553550   62207 ssh_runner.go:195] Run: openssl version
	I0914 18:09:41.559366   62207 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/160162.pem && ln -fs /usr/share/ca-certificates/160162.pem /etc/ssl/certs/160162.pem"
	I0914 18:09:41.571498   62207 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/160162.pem
	I0914 18:09:41.576889   62207 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 14 17:01 /usr/share/ca-certificates/160162.pem
	I0914 18:09:41.576947   62207 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/160162.pem
	I0914 18:09:41.583651   62207 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/160162.pem /etc/ssl/certs/3ec20f2e.0"
	I0914 18:09:41.594743   62207 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0914 18:09:41.605811   62207 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0914 18:09:41.610034   62207 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 14 16:44 /usr/share/ca-certificates/minikubeCA.pem
	I0914 18:09:41.610103   62207 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0914 18:09:41.615810   62207 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0914 18:09:41.627145   62207 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16016.pem && ln -fs /usr/share/ca-certificates/16016.pem /etc/ssl/certs/16016.pem"
	I0914 18:09:41.639956   62207 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16016.pem
	I0914 18:09:41.644647   62207 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 14 17:01 /usr/share/ca-certificates/16016.pem
	I0914 18:09:41.644705   62207 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16016.pem
	I0914 18:09:41.650281   62207 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16016.pem /etc/ssl/certs/51391683.0"
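	The hex symlink names created above (3ec20f2e.0, b5213941.0, 51391683.0) are OpenSSL subject-name hashes with a .0 suffix, which is the layout OpenSSL expects under /etc/ssl/certs; that is why each ln -fs is paired with an openssl x509 -hash call. The mapping can be reproduced for any of these CA files:
	    # Prints the subject hash used as the /etc/ssl/certs/<hash>.0 symlink name.
	    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # expected: b5213941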
	I0914 18:09:41.662354   62207 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0914 18:09:41.667150   62207 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0914 18:09:41.673263   62207 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0914 18:09:41.680660   62207 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0914 18:09:41.687283   62207 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0914 18:09:41.693256   62207 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0914 18:09:41.698969   62207 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
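	The batch of openssl checks just above appears to be how the restart path decides whether the existing control-plane certificates can be reused: -checkend 86400 makes openssl exit non-zero if the certificate expires within 86400 seconds (24 hours). Run by hand against one of the same files, it reads as:
	    # Exit status 0: still valid for at least 24h; non-zero: expiring or expired.
	    openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400 \
	      && echo "valid for >= 24h" || echo "expires within 24h"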
	I0914 18:09:41.704543   62207 kubeadm.go:392] StartCluster: {Name:no-preload-168587 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19643/minikube-v1.34.0-1726281733-19643-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726281268-19643@sha256:cce8be0c1ac4e3d852132008ef1cc1dcf5b79f708d025db83f146ae65db32e8e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-168587 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.38 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0914 18:09:41.704671   62207 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0914 18:09:41.704750   62207 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0914 18:09:41.741255   62207 cri.go:89] found id: ""
	I0914 18:09:41.741354   62207 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0914 18:09:41.751360   62207 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0914 18:09:41.751377   62207 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0914 18:09:41.751417   62207 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0914 18:09:41.761492   62207 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0914 18:09:41.762591   62207 kubeconfig.go:125] found "no-preload-168587" server: "https://192.168.39.38:8443"
	I0914 18:09:41.764876   62207 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0914 18:09:41.774868   62207 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.38
	I0914 18:09:41.774901   62207 kubeadm.go:1160] stopping kube-system containers ...
	I0914 18:09:41.774913   62207 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0914 18:09:41.774969   62207 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0914 18:09:41.810189   62207 cri.go:89] found id: ""
	I0914 18:09:41.810248   62207 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0914 18:09:41.827903   62207 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0914 18:09:41.837504   62207 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0914 18:09:41.837532   62207 kubeadm.go:157] found existing configuration files:
	
	I0914 18:09:41.837585   62207 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0914 18:09:41.846260   62207 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0914 18:09:41.846322   62207 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0914 18:09:41.855350   62207 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0914 18:09:41.864096   62207 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0914 18:09:41.864153   62207 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0914 18:09:41.874772   62207 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0914 18:09:41.885427   62207 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0914 18:09:41.885502   62207 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0914 18:09:41.897121   62207 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0914 18:09:41.906955   62207 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0914 18:09:41.907020   62207 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0914 18:09:41.918253   62207 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0914 18:09:41.930134   62207 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 18:09:40.084800   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:09:42.581757   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:09:44.583611   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:09:40.502272   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:09:43.001471   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:09:40.374231   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:40.873955   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:41.374306   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:41.873511   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:42.373419   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:42.874077   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:43.374329   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:43.873782   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:44.373478   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:44.874120   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:42.054830   62207 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 18:09:42.754174   62207 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0914 18:09:42.973037   62207 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 18:09:43.043041   62207 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0914 18:09:43.119704   62207 api_server.go:52] waiting for apiserver process to appear ...
	I0914 18:09:43.119805   62207 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:43.620541   62207 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:44.120849   62207 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:44.139382   62207 api_server.go:72] duration metric: took 1.019679094s to wait for apiserver process to appear ...
	I0914 18:09:44.139406   62207 api_server.go:88] waiting for apiserver healthz status ...
	I0914 18:09:44.139424   62207 api_server.go:253] Checking apiserver healthz at https://192.168.39.38:8443/healthz ...
	I0914 18:09:44.139876   62207 api_server.go:269] stopped: https://192.168.39.38:8443/healthz: Get "https://192.168.39.38:8443/healthz": dial tcp 192.168.39.38:8443: connect: connection refused
	I0914 18:09:44.639981   62207 api_server.go:253] Checking apiserver healthz at https://192.168.39.38:8443/healthz ...
	I0914 18:09:47.262096   62207 api_server.go:279] https://192.168.39.38:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0914 18:09:47.262132   62207 api_server.go:103] status: https://192.168.39.38:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0914 18:09:47.262151   62207 api_server.go:253] Checking apiserver healthz at https://192.168.39.38:8443/healthz ...
	I0914 18:09:47.280626   62207 api_server.go:279] https://192.168.39.38:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0914 18:09:47.280652   62207 api_server.go:103] status: https://192.168.39.38:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0914 18:09:47.640152   62207 api_server.go:253] Checking apiserver healthz at https://192.168.39.38:8443/healthz ...
	I0914 18:09:47.646640   62207 api_server.go:279] https://192.168.39.38:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0914 18:09:47.646676   62207 api_server.go:103] status: https://192.168.39.38:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0914 18:09:48.140256   62207 api_server.go:253] Checking apiserver healthz at https://192.168.39.38:8443/healthz ...
	I0914 18:09:48.145520   62207 api_server.go:279] https://192.168.39.38:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0914 18:09:48.145557   62207 api_server.go:103] status: https://192.168.39.38:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0914 18:09:48.640147   62207 api_server.go:253] Checking apiserver healthz at https://192.168.39.38:8443/healthz ...
	I0914 18:09:48.645032   62207 api_server.go:279] https://192.168.39.38:8443/healthz returned 200:
	ok
	I0914 18:09:48.654567   62207 api_server.go:141] control plane version: v1.31.1
	I0914 18:09:48.654600   62207 api_server.go:131] duration metric: took 4.515188826s to wait for apiserver health ...
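	The verbose [+]/[-] listings above are what the apiserver's health endpoints return when individual checks are requested, and the initial 403 responses are expected: the anonymous probe only gains access to /healthz once the rbac/bootstrap-roles post-start hook (still marked [-] in those dumps) has created the default system:public-info-viewer binding. Assuming the bootstrap has finished, the same output can be reproduced from the node:
	    # Per-check health output, matching the [+]/[-] lines logged above.
	    curl -ks https://192.168.39.38:8443/healthz?verbose
	    # Finer-grained liveness/readiness groupings are exposed the same way.
	    curl -ks https://192.168.39.38:8443/livez?verbose
	    curl -ks https://192.168.39.38:8443/readyz?verbose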
	I0914 18:09:48.654609   62207 cni.go:84] Creating CNI manager for ""
	I0914 18:09:48.654615   62207 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0914 18:09:48.656828   62207 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0914 18:09:47.082431   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:09:49.582001   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:09:45.500938   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:09:48.002332   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:09:45.374173   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:45.873537   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:46.373462   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:46.874196   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:47.374297   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:47.874112   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:48.373627   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:48.873473   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:49.374289   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:49.873411   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:48.658151   62207 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0914 18:09:48.692232   62207 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
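	The 496-byte payload just written to /etc/cni/net.d/1-k8s.conflist is the bridge CNI configuration announced a few lines earlier; its exact contents are not shown in the log. A representative conflist for the same setup (the standard bridge and portmap plugins, pod subnet 10.244.0.0/16 as configured above) would look roughly like the following, printed here rather than installed:
	    cat <<'EOF'
	    {
	      "cniVersion": "0.3.1",
	      "name": "bridge",
	      "plugins": [
	        { "type": "bridge", "bridge": "bridge", "isGateway": true, "ipMasq": true,
	          "hairpinMode": true,
	          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" } },
	        { "type": "portmap", "capabilities": { "portMappings": true } }
	      ]
	    }
	    EOF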
	I0914 18:09:48.734461   62207 system_pods.go:43] waiting for kube-system pods to appear ...
	I0914 18:09:48.746689   62207 system_pods.go:59] 8 kube-system pods found
	I0914 18:09:48.746723   62207 system_pods.go:61] "coredns-7c65d6cfc9-mwhvh" [38800077-a7ff-4c8c-8375-4efac2ae40b8] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0914 18:09:48.746733   62207 system_pods.go:61] "etcd-no-preload-168587" [bdb166bb-8c07-448c-a97c-2146e84f139b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0914 18:09:48.746744   62207 system_pods.go:61] "kube-apiserver-no-preload-168587" [8ad59d56-cb86-4028-bf16-3733eb32ad8d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0914 18:09:48.746752   62207 system_pods.go:61] "kube-controller-manager-no-preload-168587" [fd66d0aa-cc35-4330-aa6b-571dbeaa6490] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0914 18:09:48.746761   62207 system_pods.go:61] "kube-proxy-lvp9h" [75c154d8-c76d-49eb-9497-dd17199e9d20] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0914 18:09:48.746771   62207 system_pods.go:61] "kube-scheduler-no-preload-168587" [858c948b-9025-48ab-907a-5b69aefbb24c] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0914 18:09:48.746782   62207 system_pods.go:61] "metrics-server-6867b74b74-n276z" [69e25ed4-dc8e-4c68-955e-e7226d066ac4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 18:09:48.746790   62207 system_pods.go:61] "storage-provisioner" [41c92694-2d3a-4025-8e28-ddea7b9b9c5b] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0914 18:09:48.746801   62207 system_pods.go:74] duration metric: took 12.315296ms to wait for pod list to return data ...
	I0914 18:09:48.746811   62207 node_conditions.go:102] verifying NodePressure condition ...
	I0914 18:09:48.751399   62207 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0914 18:09:48.751428   62207 node_conditions.go:123] node cpu capacity is 2
	I0914 18:09:48.751440   62207 node_conditions.go:105] duration metric: took 4.625335ms to run NodePressure ...
	I0914 18:09:48.751461   62207 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 18:09:49.051211   62207 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0914 18:09:49.057333   62207 kubeadm.go:739] kubelet initialised
	I0914 18:09:49.057366   62207 kubeadm.go:740] duration metric: took 6.124032ms waiting for restarted kubelet to initialise ...
	I0914 18:09:49.057379   62207 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 18:09:49.062570   62207 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-mwhvh" in "kube-system" namespace to be "Ready" ...
	I0914 18:09:51.069219   62207 pod_ready.go:103] pod "coredns-7c65d6cfc9-mwhvh" in "kube-system" namespace has status "Ready":"False"
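	The pod_ready polling visible here (and in the interleaved runs from the other profiles) is the test waiting for each system-critical pod to report the Ready condition. Assuming the kubeconfig context carries the profile name, an equivalent manual check against this cluster would be:
	    # Block until the coredns pods (label k8s-app=kube-dns) are Ready, within the same 4m budget.
	    kubectl --context no-preload-168587 -n kube-system wait pod -l k8s-app=kube-dns \
	      --for=condition=Ready --timeout=4m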
	I0914 18:09:51.588043   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:09:54.082931   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:09:50.499759   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:09:52.502450   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:09:55.000767   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:09:50.374229   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:50.873429   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:51.373547   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:51.874090   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:52.373513   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:52.874222   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:53.374123   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:53.873893   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:54.373451   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:54.873583   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:53.069338   62207 pod_ready.go:103] pod "coredns-7c65d6cfc9-mwhvh" in "kube-system" namespace has status "Ready":"False"
	I0914 18:09:53.570290   62207 pod_ready.go:93] pod "coredns-7c65d6cfc9-mwhvh" in "kube-system" namespace has status "Ready":"True"
	I0914 18:09:53.570323   62207 pod_ready.go:82] duration metric: took 4.507716999s for pod "coredns-7c65d6cfc9-mwhvh" in "kube-system" namespace to be "Ready" ...
	I0914 18:09:53.570333   62207 pod_ready.go:79] waiting up to 4m0s for pod "etcd-no-preload-168587" in "kube-system" namespace to be "Ready" ...
	I0914 18:09:55.577317   62207 pod_ready.go:103] pod "etcd-no-preload-168587" in "kube-system" namespace has status "Ready":"False"
	I0914 18:09:56.581937   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:09:58.583632   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:09:57.000913   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:09:59.001429   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:09:55.374078   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:55.873810   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:09:55.873965   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:09:55.913981   62996 cri.go:89] found id: ""
	I0914 18:09:55.914011   62996 logs.go:276] 0 containers: []
	W0914 18:09:55.914023   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:09:55.914030   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:09:55.914091   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:09:55.948423   62996 cri.go:89] found id: ""
	I0914 18:09:55.948459   62996 logs.go:276] 0 containers: []
	W0914 18:09:55.948467   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:09:55.948472   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:09:55.948530   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:09:55.986470   62996 cri.go:89] found id: ""
	I0914 18:09:55.986507   62996 logs.go:276] 0 containers: []
	W0914 18:09:55.986520   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:09:55.986530   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:09:55.986598   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:09:56.022172   62996 cri.go:89] found id: ""
	I0914 18:09:56.022200   62996 logs.go:276] 0 containers: []
	W0914 18:09:56.022214   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:09:56.022220   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:09:56.022267   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:09:56.065503   62996 cri.go:89] found id: ""
	I0914 18:09:56.065552   62996 logs.go:276] 0 containers: []
	W0914 18:09:56.065564   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:09:56.065572   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:09:56.065632   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:09:56.101043   62996 cri.go:89] found id: ""
	I0914 18:09:56.101072   62996 logs.go:276] 0 containers: []
	W0914 18:09:56.101082   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:09:56.101089   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:09:56.101156   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:09:56.133820   62996 cri.go:89] found id: ""
	I0914 18:09:56.133852   62996 logs.go:276] 0 containers: []
	W0914 18:09:56.133864   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:09:56.133872   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:09:56.133925   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:09:56.172334   62996 cri.go:89] found id: ""
	I0914 18:09:56.172358   62996 logs.go:276] 0 containers: []
	W0914 18:09:56.172369   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:09:56.172380   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:09:56.172398   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:09:56.186476   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:09:56.186513   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:09:56.308336   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:09:56.308366   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:09:56.308388   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:09:56.386374   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:09:56.386410   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:09:56.426333   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:09:56.426360   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:09:58.978306   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:58.991093   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:09:58.991175   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:09:59.029861   62996 cri.go:89] found id: ""
	I0914 18:09:59.029890   62996 logs.go:276] 0 containers: []
	W0914 18:09:59.029899   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:09:59.029905   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:09:59.029962   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:09:59.067744   62996 cri.go:89] found id: ""
	I0914 18:09:59.067772   62996 logs.go:276] 0 containers: []
	W0914 18:09:59.067783   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:09:59.067791   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:09:59.067973   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:09:59.105666   62996 cri.go:89] found id: ""
	I0914 18:09:59.105695   62996 logs.go:276] 0 containers: []
	W0914 18:09:59.105707   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:09:59.105714   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:09:59.105796   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:09:59.153884   62996 cri.go:89] found id: ""
	I0914 18:09:59.153916   62996 logs.go:276] 0 containers: []
	W0914 18:09:59.153929   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:09:59.153937   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:09:59.154007   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:09:59.191462   62996 cri.go:89] found id: ""
	I0914 18:09:59.191492   62996 logs.go:276] 0 containers: []
	W0914 18:09:59.191503   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:09:59.191509   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:09:59.191574   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:09:59.246299   62996 cri.go:89] found id: ""
	I0914 18:09:59.246326   62996 logs.go:276] 0 containers: []
	W0914 18:09:59.246336   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:09:59.246357   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:09:59.246413   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:09:59.292821   62996 cri.go:89] found id: ""
	I0914 18:09:59.292847   62996 logs.go:276] 0 containers: []
	W0914 18:09:59.292856   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:09:59.292862   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:09:59.292918   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:09:59.334130   62996 cri.go:89] found id: ""
	I0914 18:09:59.334176   62996 logs.go:276] 0 containers: []
	W0914 18:09:59.334187   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:09:59.334198   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:09:59.334211   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:09:59.386847   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:09:59.386884   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:09:59.400163   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:09:59.400193   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:09:59.476375   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:09:59.476400   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:09:59.476416   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:09:59.554564   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:09:59.554599   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:09:57.578803   62207 pod_ready.go:103] pod "etcd-no-preload-168587" in "kube-system" namespace has status "Ready":"False"
	I0914 18:09:59.576525   62207 pod_ready.go:93] pod "etcd-no-preload-168587" in "kube-system" namespace has status "Ready":"True"
	I0914 18:09:59.576547   62207 pod_ready.go:82] duration metric: took 6.006207927s for pod "etcd-no-preload-168587" in "kube-system" namespace to be "Ready" ...
	I0914 18:09:59.576556   62207 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-no-preload-168587" in "kube-system" namespace to be "Ready" ...
	I0914 18:10:00.084027   62207 pod_ready.go:93] pod "kube-apiserver-no-preload-168587" in "kube-system" namespace has status "Ready":"True"
	I0914 18:10:00.084054   62207 pod_ready.go:82] duration metric: took 507.490867ms for pod "kube-apiserver-no-preload-168587" in "kube-system" namespace to be "Ready" ...
	I0914 18:10:00.084067   62207 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-no-preload-168587" in "kube-system" namespace to be "Ready" ...
	I0914 18:10:00.089044   62207 pod_ready.go:93] pod "kube-controller-manager-no-preload-168587" in "kube-system" namespace has status "Ready":"True"
	I0914 18:10:00.089068   62207 pod_ready.go:82] duration metric: took 4.991847ms for pod "kube-controller-manager-no-preload-168587" in "kube-system" namespace to be "Ready" ...
	I0914 18:10:00.089079   62207 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-lvp9h" in "kube-system" namespace to be "Ready" ...
	I0914 18:10:00.093160   62207 pod_ready.go:93] pod "kube-proxy-lvp9h" in "kube-system" namespace has status "Ready":"True"
	I0914 18:10:00.093179   62207 pod_ready.go:82] duration metric: took 4.093257ms for pod "kube-proxy-lvp9h" in "kube-system" namespace to be "Ready" ...
	I0914 18:10:00.093198   62207 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-no-preload-168587" in "kube-system" namespace to be "Ready" ...
	I0914 18:10:00.096786   62207 pod_ready.go:93] pod "kube-scheduler-no-preload-168587" in "kube-system" namespace has status "Ready":"True"
	I0914 18:10:00.096800   62207 pod_ready.go:82] duration metric: took 3.594996ms for pod "kube-scheduler-no-preload-168587" in "kube-system" namespace to be "Ready" ...
	I0914 18:10:00.096807   62207 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace to be "Ready" ...
	I0914 18:10:01.082601   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:03.581290   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:01.502864   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:04.001645   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:02.095079   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:10:02.108933   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:10:02.109003   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:10:02.141838   62996 cri.go:89] found id: ""
	I0914 18:10:02.141861   62996 logs.go:276] 0 containers: []
	W0914 18:10:02.141869   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:10:02.141875   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:10:02.141934   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:10:02.176437   62996 cri.go:89] found id: ""
	I0914 18:10:02.176460   62996 logs.go:276] 0 containers: []
	W0914 18:10:02.176467   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:10:02.176472   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:10:02.176516   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:10:02.210341   62996 cri.go:89] found id: ""
	I0914 18:10:02.210369   62996 logs.go:276] 0 containers: []
	W0914 18:10:02.210381   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:10:02.210388   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:10:02.210434   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:10:02.243343   62996 cri.go:89] found id: ""
	I0914 18:10:02.243373   62996 logs.go:276] 0 containers: []
	W0914 18:10:02.243384   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:10:02.243391   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:10:02.243461   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:10:02.276630   62996 cri.go:89] found id: ""
	I0914 18:10:02.276657   62996 logs.go:276] 0 containers: []
	W0914 18:10:02.276668   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:10:02.276675   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:10:02.276736   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:10:02.311626   62996 cri.go:89] found id: ""
	I0914 18:10:02.311659   62996 logs.go:276] 0 containers: []
	W0914 18:10:02.311674   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:10:02.311682   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:10:02.311748   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:10:02.345868   62996 cri.go:89] found id: ""
	I0914 18:10:02.345892   62996 logs.go:276] 0 containers: []
	W0914 18:10:02.345901   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:10:02.345908   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:10:02.345966   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:10:02.380111   62996 cri.go:89] found id: ""
	I0914 18:10:02.380139   62996 logs.go:276] 0 containers: []
	W0914 18:10:02.380147   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:10:02.380156   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:10:02.380167   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:10:02.421061   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:10:02.421094   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:10:02.474596   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:10:02.474633   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:10:02.487460   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:10:02.487491   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:10:02.554178   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:10:02.554206   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:10:02.554218   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:10:05.138863   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:10:05.152233   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:10:05.152299   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:10:05.187891   62996 cri.go:89] found id: ""
	I0914 18:10:05.187918   62996 logs.go:276] 0 containers: []
	W0914 18:10:05.187929   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:10:05.187936   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:10:05.188000   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:10:05.231634   62996 cri.go:89] found id: ""
	I0914 18:10:05.231667   62996 logs.go:276] 0 containers: []
	W0914 18:10:05.231679   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:10:05.231686   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:10:05.231748   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:10:05.273445   62996 cri.go:89] found id: ""
	I0914 18:10:05.273469   62996 logs.go:276] 0 containers: []
	W0914 18:10:05.273478   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:10:05.273492   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:10:05.273551   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:10:05.308168   62996 cri.go:89] found id: ""
	I0914 18:10:05.308205   62996 logs.go:276] 0 containers: []
	W0914 18:10:05.308216   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:10:05.308224   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:10:05.308285   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:10:02.103118   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:04.103451   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:06.603049   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:05.582900   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:08.083020   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:06.500670   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:08.500752   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:05.343292   62996 cri.go:89] found id: ""
	I0914 18:10:05.343325   62996 logs.go:276] 0 containers: []
	W0914 18:10:05.343336   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:10:05.343343   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:10:05.343404   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:10:05.380420   62996 cri.go:89] found id: ""
	I0914 18:10:05.380445   62996 logs.go:276] 0 containers: []
	W0914 18:10:05.380452   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:10:05.380458   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:10:05.380503   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:10:05.415585   62996 cri.go:89] found id: ""
	I0914 18:10:05.415609   62996 logs.go:276] 0 containers: []
	W0914 18:10:05.415617   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:10:05.415623   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:10:05.415687   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:10:05.457170   62996 cri.go:89] found id: ""
	I0914 18:10:05.457198   62996 logs.go:276] 0 containers: []
	W0914 18:10:05.457208   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:10:05.457219   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:10:05.457234   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:10:05.495647   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:10:05.495681   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:10:05.543775   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:10:05.543813   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:10:05.556717   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:10:05.556750   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:10:05.624690   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:10:05.624713   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:10:05.624728   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:10:08.205292   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:10:08.217720   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:10:08.217786   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:10:08.250560   62996 cri.go:89] found id: ""
	I0914 18:10:08.250590   62996 logs.go:276] 0 containers: []
	W0914 18:10:08.250598   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:10:08.250604   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:10:08.250669   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:10:08.282085   62996 cri.go:89] found id: ""
	I0914 18:10:08.282115   62996 logs.go:276] 0 containers: []
	W0914 18:10:08.282123   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:10:08.282129   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:10:08.282202   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:10:08.314350   62996 cri.go:89] found id: ""
	I0914 18:10:08.314379   62996 logs.go:276] 0 containers: []
	W0914 18:10:08.314391   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:10:08.314398   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:10:08.314461   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:10:08.347672   62996 cri.go:89] found id: ""
	I0914 18:10:08.347703   62996 logs.go:276] 0 containers: []
	W0914 18:10:08.347714   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:10:08.347721   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:10:08.347780   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:10:08.385583   62996 cri.go:89] found id: ""
	I0914 18:10:08.385616   62996 logs.go:276] 0 containers: []
	W0914 18:10:08.385628   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:10:08.385636   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:10:08.385717   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:10:08.421135   62996 cri.go:89] found id: ""
	I0914 18:10:08.421166   62996 logs.go:276] 0 containers: []
	W0914 18:10:08.421176   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:10:08.421184   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:10:08.421242   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:10:08.456784   62996 cri.go:89] found id: ""
	I0914 18:10:08.456811   62996 logs.go:276] 0 containers: []
	W0914 18:10:08.456821   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:10:08.456828   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:10:08.456890   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:10:08.491658   62996 cri.go:89] found id: ""
	I0914 18:10:08.491690   62996 logs.go:276] 0 containers: []
	W0914 18:10:08.491698   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:10:08.491707   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:10:08.491718   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:10:08.544008   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:10:08.544045   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:10:08.557780   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:10:08.557813   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:10:08.631319   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:10:08.631354   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:10:08.631371   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:10:08.709845   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:10:08.709882   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:10:08.604603   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:11.103035   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:10.581739   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:12.582523   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:14.582676   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:11.000857   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:13.000915   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:15.001474   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:11.248034   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:10:11.261403   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:10:11.261471   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:10:11.294260   62996 cri.go:89] found id: ""
	I0914 18:10:11.294287   62996 logs.go:276] 0 containers: []
	W0914 18:10:11.294298   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:10:11.294305   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:10:11.294376   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:10:11.326784   62996 cri.go:89] found id: ""
	I0914 18:10:11.326811   62996 logs.go:276] 0 containers: []
	W0914 18:10:11.326822   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:10:11.326829   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:10:11.326878   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:10:11.359209   62996 cri.go:89] found id: ""
	I0914 18:10:11.359234   62996 logs.go:276] 0 containers: []
	W0914 18:10:11.359242   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:10:11.359247   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:10:11.359316   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:10:11.393800   62996 cri.go:89] found id: ""
	I0914 18:10:11.393828   62996 logs.go:276] 0 containers: []
	W0914 18:10:11.393836   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:10:11.393842   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:10:11.393889   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:10:11.425772   62996 cri.go:89] found id: ""
	I0914 18:10:11.425798   62996 logs.go:276] 0 containers: []
	W0914 18:10:11.425808   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:10:11.425815   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:10:11.425877   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:10:11.464139   62996 cri.go:89] found id: ""
	I0914 18:10:11.464165   62996 logs.go:276] 0 containers: []
	W0914 18:10:11.464174   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:10:11.464180   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:10:11.464230   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:10:11.498822   62996 cri.go:89] found id: ""
	I0914 18:10:11.498848   62996 logs.go:276] 0 containers: []
	W0914 18:10:11.498859   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:10:11.498869   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:10:11.498925   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:10:11.532591   62996 cri.go:89] found id: ""
	I0914 18:10:11.532623   62996 logs.go:276] 0 containers: []
	W0914 18:10:11.532634   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:10:11.532646   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:10:11.532660   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:10:11.608873   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:10:11.608892   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:10:11.608903   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:10:11.684622   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:10:11.684663   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:10:11.726639   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:10:11.726667   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:10:11.780380   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:10:11.780415   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:10:14.294514   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:10:14.308716   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:10:14.308779   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:10:14.348399   62996 cri.go:89] found id: ""
	I0914 18:10:14.348423   62996 logs.go:276] 0 containers: []
	W0914 18:10:14.348431   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:10:14.348437   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:10:14.348485   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:10:14.387040   62996 cri.go:89] found id: ""
	I0914 18:10:14.387071   62996 logs.go:276] 0 containers: []
	W0914 18:10:14.387082   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:10:14.387088   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:10:14.387144   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:10:14.424704   62996 cri.go:89] found id: ""
	I0914 18:10:14.424733   62996 logs.go:276] 0 containers: []
	W0914 18:10:14.424741   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:10:14.424746   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:10:14.424793   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:10:14.464395   62996 cri.go:89] found id: ""
	I0914 18:10:14.464431   62996 logs.go:276] 0 containers: []
	W0914 18:10:14.464442   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:10:14.464450   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:10:14.464511   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:10:14.495895   62996 cri.go:89] found id: ""
	I0914 18:10:14.495921   62996 logs.go:276] 0 containers: []
	W0914 18:10:14.495931   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:10:14.495938   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:10:14.496001   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:10:14.532877   62996 cri.go:89] found id: ""
	I0914 18:10:14.532904   62996 logs.go:276] 0 containers: []
	W0914 18:10:14.532914   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:10:14.532921   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:10:14.532987   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:10:14.568381   62996 cri.go:89] found id: ""
	I0914 18:10:14.568408   62996 logs.go:276] 0 containers: []
	W0914 18:10:14.568423   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:10:14.568430   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:10:14.568491   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:10:14.603867   62996 cri.go:89] found id: ""
	I0914 18:10:14.603897   62996 logs.go:276] 0 containers: []
	W0914 18:10:14.603908   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:10:14.603917   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:10:14.603933   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:10:14.616681   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:10:14.616705   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:10:14.687817   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:10:14.687852   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:10:14.687866   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:10:14.761672   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:10:14.761714   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:10:14.802676   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:10:14.802705   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:10:13.103818   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:15.602921   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:17.082737   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:19.082771   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:17.501947   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:20.000464   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:17.353218   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:10:17.366139   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:10:17.366224   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:10:17.404478   62996 cri.go:89] found id: ""
	I0914 18:10:17.404511   62996 logs.go:276] 0 containers: []
	W0914 18:10:17.404522   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:10:17.404530   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:10:17.404608   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:10:17.437553   62996 cri.go:89] found id: ""
	I0914 18:10:17.437579   62996 logs.go:276] 0 containers: []
	W0914 18:10:17.437588   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:10:17.437593   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:10:17.437648   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:10:17.473815   62996 cri.go:89] found id: ""
	I0914 18:10:17.473842   62996 logs.go:276] 0 containers: []
	W0914 18:10:17.473850   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:10:17.473855   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:10:17.473919   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:10:17.518593   62996 cri.go:89] found id: ""
	I0914 18:10:17.518617   62996 logs.go:276] 0 containers: []
	W0914 18:10:17.518625   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:10:17.518631   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:10:17.518679   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:10:17.554631   62996 cri.go:89] found id: ""
	I0914 18:10:17.554663   62996 logs.go:276] 0 containers: []
	W0914 18:10:17.554675   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:10:17.554682   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:10:17.554742   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:10:17.591485   62996 cri.go:89] found id: ""
	I0914 18:10:17.591512   62996 logs.go:276] 0 containers: []
	W0914 18:10:17.591520   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:10:17.591525   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:10:17.591582   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:10:17.629883   62996 cri.go:89] found id: ""
	I0914 18:10:17.629910   62996 logs.go:276] 0 containers: []
	W0914 18:10:17.629918   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:10:17.629925   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:10:17.629973   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:10:17.670639   62996 cri.go:89] found id: ""
	I0914 18:10:17.670666   62996 logs.go:276] 0 containers: []
	W0914 18:10:17.670677   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:10:17.670688   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:10:17.670700   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:10:17.725056   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:10:17.725095   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:10:17.738236   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:10:17.738267   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:10:17.812931   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:10:17.812963   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:10:17.812978   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:10:17.896394   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:10:17.896426   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:10:18.102598   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:20.104053   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:21.085272   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:23.583185   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:22.001396   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:24.500424   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:20.434465   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:10:20.448801   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:10:20.448878   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:10:20.482909   62996 cri.go:89] found id: ""
	I0914 18:10:20.482937   62996 logs.go:276] 0 containers: []
	W0914 18:10:20.482949   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:10:20.482956   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:10:20.483017   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:10:20.516865   62996 cri.go:89] found id: ""
	I0914 18:10:20.516888   62996 logs.go:276] 0 containers: []
	W0914 18:10:20.516896   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:10:20.516902   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:10:20.516961   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:10:20.556131   62996 cri.go:89] found id: ""
	I0914 18:10:20.556164   62996 logs.go:276] 0 containers: []
	W0914 18:10:20.556174   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:10:20.556182   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:10:20.556246   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:10:20.594755   62996 cri.go:89] found id: ""
	I0914 18:10:20.594779   62996 logs.go:276] 0 containers: []
	W0914 18:10:20.594787   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:10:20.594795   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:10:20.594841   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:10:20.630259   62996 cri.go:89] found id: ""
	I0914 18:10:20.630290   62996 logs.go:276] 0 containers: []
	W0914 18:10:20.630300   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:10:20.630307   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:10:20.630379   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:10:20.667721   62996 cri.go:89] found id: ""
	I0914 18:10:20.667754   62996 logs.go:276] 0 containers: []
	W0914 18:10:20.667763   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:10:20.667769   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:10:20.667826   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:10:20.706358   62996 cri.go:89] found id: ""
	I0914 18:10:20.706387   62996 logs.go:276] 0 containers: []
	W0914 18:10:20.706396   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:10:20.706401   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:10:20.706462   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:10:20.738514   62996 cri.go:89] found id: ""
	I0914 18:10:20.738541   62996 logs.go:276] 0 containers: []
	W0914 18:10:20.738549   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:10:20.738557   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:10:20.738576   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:10:20.775075   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:10:20.775105   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:10:20.825988   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:10:20.826026   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:10:20.839157   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:10:20.839194   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:10:20.915730   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:10:20.915750   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:10:20.915762   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:10:23.497427   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:10:23.511559   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:10:23.511633   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:10:23.546913   62996 cri.go:89] found id: ""
	I0914 18:10:23.546945   62996 logs.go:276] 0 containers: []
	W0914 18:10:23.546958   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:10:23.546969   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:10:23.547034   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:10:23.584438   62996 cri.go:89] found id: ""
	I0914 18:10:23.584457   62996 logs.go:276] 0 containers: []
	W0914 18:10:23.584463   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:10:23.584469   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:10:23.584517   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:10:23.618777   62996 cri.go:89] found id: ""
	I0914 18:10:23.618804   62996 logs.go:276] 0 containers: []
	W0914 18:10:23.618812   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:10:23.618817   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:10:23.618876   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:10:23.652197   62996 cri.go:89] found id: ""
	I0914 18:10:23.652225   62996 logs.go:276] 0 containers: []
	W0914 18:10:23.652236   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:10:23.652244   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:10:23.652304   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:10:23.687678   62996 cri.go:89] found id: ""
	I0914 18:10:23.687712   62996 logs.go:276] 0 containers: []
	W0914 18:10:23.687725   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:10:23.687733   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:10:23.687790   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:10:23.720884   62996 cri.go:89] found id: ""
	I0914 18:10:23.720918   62996 logs.go:276] 0 containers: []
	W0914 18:10:23.720929   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:10:23.720936   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:10:23.721004   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:10:23.753335   62996 cri.go:89] found id: ""
	I0914 18:10:23.753365   62996 logs.go:276] 0 containers: []
	W0914 18:10:23.753376   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:10:23.753384   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:10:23.753431   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:10:23.787177   62996 cri.go:89] found id: ""
	I0914 18:10:23.787209   62996 logs.go:276] 0 containers: []
	W0914 18:10:23.787230   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:10:23.787241   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:10:23.787254   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:10:23.864763   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:10:23.864802   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:10:23.903394   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:10:23.903424   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:10:23.952696   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:10:23.952734   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:10:23.967115   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:10:23.967142   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:10:24.035394   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:10:22.602815   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:24.603230   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:26.604416   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:26.082291   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:28.582007   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:26.501088   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:29.001400   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:26.536361   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:10:26.550666   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:10:26.550746   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:10:26.588940   62996 cri.go:89] found id: ""
	I0914 18:10:26.588974   62996 logs.go:276] 0 containers: []
	W0914 18:10:26.588988   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:10:26.588997   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:10:26.589064   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:10:26.627475   62996 cri.go:89] found id: ""
	I0914 18:10:26.627523   62996 logs.go:276] 0 containers: []
	W0914 18:10:26.627537   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:10:26.627546   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:10:26.627619   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:10:26.664995   62996 cri.go:89] found id: ""
	I0914 18:10:26.665021   62996 logs.go:276] 0 containers: []
	W0914 18:10:26.665029   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:10:26.665034   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:10:26.665087   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:10:26.699195   62996 cri.go:89] found id: ""
	I0914 18:10:26.699223   62996 logs.go:276] 0 containers: []
	W0914 18:10:26.699234   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:10:26.699241   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:10:26.699300   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:10:26.735746   62996 cri.go:89] found id: ""
	I0914 18:10:26.735781   62996 logs.go:276] 0 containers: []
	W0914 18:10:26.735790   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:10:26.735795   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:10:26.735857   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:10:26.772220   62996 cri.go:89] found id: ""
	I0914 18:10:26.772251   62996 logs.go:276] 0 containers: []
	W0914 18:10:26.772261   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:10:26.772270   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:10:26.772331   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:10:26.808301   62996 cri.go:89] found id: ""
	I0914 18:10:26.808330   62996 logs.go:276] 0 containers: []
	W0914 18:10:26.808339   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:10:26.808346   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:10:26.808412   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:10:26.844824   62996 cri.go:89] found id: ""
	I0914 18:10:26.844858   62996 logs.go:276] 0 containers: []
	W0914 18:10:26.844870   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:10:26.844880   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:10:26.844916   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:10:26.899960   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:10:26.899999   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:10:26.914413   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:10:26.914438   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:10:26.990599   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:10:26.990620   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:10:26.990632   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:10:27.067822   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:10:27.067872   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:10:29.610959   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:10:29.625456   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:10:29.625517   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:10:29.662963   62996 cri.go:89] found id: ""
	I0914 18:10:29.662990   62996 logs.go:276] 0 containers: []
	W0914 18:10:29.663002   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:10:29.663009   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:10:29.663078   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:10:29.702141   62996 cri.go:89] found id: ""
	I0914 18:10:29.702189   62996 logs.go:276] 0 containers: []
	W0914 18:10:29.702201   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:10:29.702208   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:10:29.702265   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:10:29.737559   62996 cri.go:89] found id: ""
	I0914 18:10:29.737584   62996 logs.go:276] 0 containers: []
	W0914 18:10:29.737592   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:10:29.737598   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:10:29.737644   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:10:29.773544   62996 cri.go:89] found id: ""
	I0914 18:10:29.773570   62996 logs.go:276] 0 containers: []
	W0914 18:10:29.773578   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:10:29.773586   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:10:29.773639   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:10:29.815355   62996 cri.go:89] found id: ""
	I0914 18:10:29.815401   62996 logs.go:276] 0 containers: []
	W0914 18:10:29.815414   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:10:29.815422   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:10:29.815490   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:10:29.855729   62996 cri.go:89] found id: ""
	I0914 18:10:29.855755   62996 logs.go:276] 0 containers: []
	W0914 18:10:29.855765   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:10:29.855772   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:10:29.855835   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:10:29.894023   62996 cri.go:89] found id: ""
	I0914 18:10:29.894048   62996 logs.go:276] 0 containers: []
	W0914 18:10:29.894056   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:10:29.894063   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:10:29.894120   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:10:29.928873   62996 cri.go:89] found id: ""
	I0914 18:10:29.928900   62996 logs.go:276] 0 containers: []
	W0914 18:10:29.928910   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:10:29.928921   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:10:29.928937   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:10:30.005879   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:10:30.005904   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:10:30.005917   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:10:30.087160   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:10:30.087196   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:10:30.126027   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:10:30.126058   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:10:30.178901   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:10:30.178941   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:10:28.604725   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:31.103833   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:30.582800   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:33.082884   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:31.001447   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:33.501525   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
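The interleaved pod_ready lines come from three other test processes (62207, 62554, 63448), each polling a metrics-server pod that never reports Ready. A hedged one-liner for reading the same condition directly; the pod name is copied from the log above and the jsonpath filter is standard kubectl syntax:

	kubectl -n kube-system get pod metrics-server-6867b74b74-7v8dr \
	  -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}{"\n"}'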
	I0914 18:10:32.692789   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:10:32.708884   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:10:32.708942   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:10:32.744684   62996 cri.go:89] found id: ""
	I0914 18:10:32.744711   62996 logs.go:276] 0 containers: []
	W0914 18:10:32.744722   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:10:32.744729   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:10:32.744789   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:10:32.778311   62996 cri.go:89] found id: ""
	I0914 18:10:32.778345   62996 logs.go:276] 0 containers: []
	W0914 18:10:32.778355   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:10:32.778362   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:10:32.778421   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:10:32.820122   62996 cri.go:89] found id: ""
	I0914 18:10:32.820150   62996 logs.go:276] 0 containers: []
	W0914 18:10:32.820158   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:10:32.820163   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:10:32.820213   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:10:32.856507   62996 cri.go:89] found id: ""
	I0914 18:10:32.856541   62996 logs.go:276] 0 containers: []
	W0914 18:10:32.856552   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:10:32.856559   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:10:32.856622   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:10:32.891891   62996 cri.go:89] found id: ""
	I0914 18:10:32.891922   62996 logs.go:276] 0 containers: []
	W0914 18:10:32.891934   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:10:32.891942   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:10:32.892001   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:10:32.936666   62996 cri.go:89] found id: ""
	I0914 18:10:32.936696   62996 logs.go:276] 0 containers: []
	W0914 18:10:32.936708   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:10:32.936715   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:10:32.936783   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:10:32.972287   62996 cri.go:89] found id: ""
	I0914 18:10:32.972321   62996 logs.go:276] 0 containers: []
	W0914 18:10:32.972333   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:10:32.972341   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:10:32.972406   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:10:33.028398   62996 cri.go:89] found id: ""
	I0914 18:10:33.028423   62996 logs.go:276] 0 containers: []
	W0914 18:10:33.028430   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:10:33.028438   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:10:33.028447   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:10:33.041604   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:10:33.041631   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:10:33.116278   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:10:33.116310   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:10:33.116325   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:10:33.194720   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:10:33.194755   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:10:33.235741   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:10:33.235778   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:10:33.603121   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:35.604573   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:35.083689   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:37.583721   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:36.000829   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:38.001022   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:40.002742   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:35.787601   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:10:35.801819   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:10:35.801895   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:10:35.837381   62996 cri.go:89] found id: ""
	I0914 18:10:35.837409   62996 logs.go:276] 0 containers: []
	W0914 18:10:35.837417   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:10:35.837423   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:10:35.837473   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:10:35.872876   62996 cri.go:89] found id: ""
	I0914 18:10:35.872907   62996 logs.go:276] 0 containers: []
	W0914 18:10:35.872915   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:10:35.872921   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:10:35.872972   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:10:35.908885   62996 cri.go:89] found id: ""
	I0914 18:10:35.908912   62996 logs.go:276] 0 containers: []
	W0914 18:10:35.908927   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:10:35.908932   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:10:35.908991   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:10:35.943358   62996 cri.go:89] found id: ""
	I0914 18:10:35.943386   62996 logs.go:276] 0 containers: []
	W0914 18:10:35.943395   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:10:35.943400   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:10:35.943450   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:10:35.978387   62996 cri.go:89] found id: ""
	I0914 18:10:35.978416   62996 logs.go:276] 0 containers: []
	W0914 18:10:35.978427   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:10:35.978434   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:10:35.978486   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:10:36.012836   62996 cri.go:89] found id: ""
	I0914 18:10:36.012863   62996 logs.go:276] 0 containers: []
	W0914 18:10:36.012874   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:10:36.012881   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:10:36.012931   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:10:36.048243   62996 cri.go:89] found id: ""
	I0914 18:10:36.048272   62996 logs.go:276] 0 containers: []
	W0914 18:10:36.048283   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:10:36.048290   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:10:36.048378   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:10:36.089415   62996 cri.go:89] found id: ""
	I0914 18:10:36.089449   62996 logs.go:276] 0 containers: []
	W0914 18:10:36.089460   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:10:36.089471   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:10:36.089484   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:10:36.141287   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:10:36.141324   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:10:36.154418   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:10:36.154444   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:10:36.228454   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:10:36.228483   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:10:36.228500   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:10:36.302020   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:10:36.302063   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:10:38.841946   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:10:38.855010   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:10:38.855072   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:10:38.890835   62996 cri.go:89] found id: ""
	I0914 18:10:38.890867   62996 logs.go:276] 0 containers: []
	W0914 18:10:38.890878   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:10:38.890886   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:10:38.890945   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:10:38.924675   62996 cri.go:89] found id: ""
	I0914 18:10:38.924700   62996 logs.go:276] 0 containers: []
	W0914 18:10:38.924708   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:10:38.924713   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:10:38.924761   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:10:38.959999   62996 cri.go:89] found id: ""
	I0914 18:10:38.960024   62996 logs.go:276] 0 containers: []
	W0914 18:10:38.960032   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:10:38.960038   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:10:38.960097   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:10:38.995718   62996 cri.go:89] found id: ""
	I0914 18:10:38.995747   62996 logs.go:276] 0 containers: []
	W0914 18:10:38.995755   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:10:38.995761   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:10:38.995810   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:10:39.031178   62996 cri.go:89] found id: ""
	I0914 18:10:39.031208   62996 logs.go:276] 0 containers: []
	W0914 18:10:39.031224   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:10:39.031232   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:10:39.031292   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:10:39.065511   62996 cri.go:89] found id: ""
	I0914 18:10:39.065540   62996 logs.go:276] 0 containers: []
	W0914 18:10:39.065560   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:10:39.065569   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:10:39.065628   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:10:39.103625   62996 cri.go:89] found id: ""
	I0914 18:10:39.103655   62996 logs.go:276] 0 containers: []
	W0914 18:10:39.103671   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:10:39.103678   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:10:39.103772   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:10:39.140140   62996 cri.go:89] found id: ""
	I0914 18:10:39.140169   62996 logs.go:276] 0 containers: []
	W0914 18:10:39.140179   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:10:39.140189   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:10:39.140205   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:10:39.154953   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:10:39.154980   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:10:39.226745   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:10:39.226778   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:10:39.226794   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:10:39.305268   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:10:39.305310   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:10:39.345363   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:10:39.345389   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
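Every "describe nodes" attempt in this loop fails the same way: the connection to localhost:8443 is refused, meaning no API server is listening on the node. A small sketch, assuming ss and curl are available in the guest image, of how one might confirm that directly:

	# check whether anything is bound to the apiserver port
	sudo ss -tlnp | grep ':8443'

	# probe the apiserver health endpoint (a refused connection matches the log)
	curl -sk https://localhost:8443/healthz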
	I0914 18:10:38.102910   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:40.103826   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:40.082907   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:42.083587   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:44.582457   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:42.500851   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:45.001069   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:41.897635   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:10:41.910895   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:10:41.910962   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:10:41.946302   62996 cri.go:89] found id: ""
	I0914 18:10:41.946327   62996 logs.go:276] 0 containers: []
	W0914 18:10:41.946338   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:10:41.946345   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:10:41.946405   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:10:41.983180   62996 cri.go:89] found id: ""
	I0914 18:10:41.983210   62996 logs.go:276] 0 containers: []
	W0914 18:10:41.983221   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:10:41.983231   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:10:41.983294   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:10:42.017923   62996 cri.go:89] found id: ""
	I0914 18:10:42.017946   62996 logs.go:276] 0 containers: []
	W0914 18:10:42.017954   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:10:42.017959   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:10:42.018006   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:10:42.052086   62996 cri.go:89] found id: ""
	I0914 18:10:42.052122   62996 logs.go:276] 0 containers: []
	W0914 18:10:42.052133   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:10:42.052140   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:10:42.052206   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:10:42.092000   62996 cri.go:89] found id: ""
	I0914 18:10:42.092029   62996 logs.go:276] 0 containers: []
	W0914 18:10:42.092040   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:10:42.092048   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:10:42.092114   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:10:42.130402   62996 cri.go:89] found id: ""
	I0914 18:10:42.130436   62996 logs.go:276] 0 containers: []
	W0914 18:10:42.130447   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:10:42.130455   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:10:42.130505   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:10:42.166614   62996 cri.go:89] found id: ""
	I0914 18:10:42.166639   62996 logs.go:276] 0 containers: []
	W0914 18:10:42.166647   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:10:42.166653   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:10:42.166704   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:10:42.199763   62996 cri.go:89] found id: ""
	I0914 18:10:42.199795   62996 logs.go:276] 0 containers: []
	W0914 18:10:42.199808   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:10:42.199820   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:10:42.199835   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:10:42.251564   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:10:42.251597   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:10:42.264771   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:10:42.264806   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:10:42.335441   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:10:42.335465   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:10:42.335489   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:10:42.417678   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:10:42.417715   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:10:44.956372   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:10:44.970643   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:10:44.970717   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:10:45.011625   62996 cri.go:89] found id: ""
	I0914 18:10:45.011659   62996 logs.go:276] 0 containers: []
	W0914 18:10:45.011671   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:10:45.011678   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:10:45.011738   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:10:45.047489   62996 cri.go:89] found id: ""
	I0914 18:10:45.047515   62996 logs.go:276] 0 containers: []
	W0914 18:10:45.047526   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:10:45.047541   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:10:45.047610   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:10:45.084909   62996 cri.go:89] found id: ""
	I0914 18:10:45.084935   62996 logs.go:276] 0 containers: []
	W0914 18:10:45.084957   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:10:45.084964   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:10:45.085035   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:10:45.120074   62996 cri.go:89] found id: ""
	I0914 18:10:45.120104   62996 logs.go:276] 0 containers: []
	W0914 18:10:45.120115   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:10:45.120123   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:10:45.120181   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:10:45.164010   62996 cri.go:89] found id: ""
	I0914 18:10:45.164039   62996 logs.go:276] 0 containers: []
	W0914 18:10:45.164050   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:10:45.164058   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:10:45.164128   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:10:45.209565   62996 cri.go:89] found id: ""
	I0914 18:10:45.209590   62996 logs.go:276] 0 containers: []
	W0914 18:10:45.209598   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:10:45.209604   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:10:45.209651   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:10:45.265484   62996 cri.go:89] found id: ""
	I0914 18:10:45.265513   62996 logs.go:276] 0 containers: []
	W0914 18:10:45.265521   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:10:45.265527   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:10:45.265593   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:10:45.300671   62996 cri.go:89] found id: ""
	I0914 18:10:45.300700   62996 logs.go:276] 0 containers: []
	W0914 18:10:45.300711   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:10:45.300722   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:10:45.300739   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:10:42.603017   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:45.104603   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:47.082010   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:49.082648   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:47.500917   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:50.001192   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:45.352657   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:10:45.352699   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:10:45.366347   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:10:45.366381   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:10:45.442993   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:10:45.443013   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:10:45.443024   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:10:45.523475   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:10:45.523522   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:10:48.062222   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:10:48.075764   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:10:48.075832   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:10:48.111836   62996 cri.go:89] found id: ""
	I0914 18:10:48.111864   62996 logs.go:276] 0 containers: []
	W0914 18:10:48.111876   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:10:48.111884   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:10:48.111942   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:10:48.144440   62996 cri.go:89] found id: ""
	I0914 18:10:48.144471   62996 logs.go:276] 0 containers: []
	W0914 18:10:48.144483   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:10:48.144490   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:10:48.144553   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:10:48.179694   62996 cri.go:89] found id: ""
	I0914 18:10:48.179724   62996 logs.go:276] 0 containers: []
	W0914 18:10:48.179732   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:10:48.179738   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:10:48.179799   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:10:48.217290   62996 cri.go:89] found id: ""
	I0914 18:10:48.217320   62996 logs.go:276] 0 containers: []
	W0914 18:10:48.217331   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:10:48.217337   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:10:48.217384   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:10:48.252071   62996 cri.go:89] found id: ""
	I0914 18:10:48.252098   62996 logs.go:276] 0 containers: []
	W0914 18:10:48.252105   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:10:48.252111   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:10:48.252172   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:10:48.285372   62996 cri.go:89] found id: ""
	I0914 18:10:48.285399   62996 logs.go:276] 0 containers: []
	W0914 18:10:48.285407   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:10:48.285414   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:10:48.285461   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:10:48.318015   62996 cri.go:89] found id: ""
	I0914 18:10:48.318040   62996 logs.go:276] 0 containers: []
	W0914 18:10:48.318048   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:10:48.318054   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:10:48.318099   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:10:48.350976   62996 cri.go:89] found id: ""
	I0914 18:10:48.351006   62996 logs.go:276] 0 containers: []
	W0914 18:10:48.351018   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:10:48.351027   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:10:48.351040   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:10:48.364707   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:10:48.364731   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:10:48.436438   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:10:48.436472   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:10:48.436488   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:10:48.517132   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:10:48.517165   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:10:48.555153   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:10:48.555182   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:10:47.603610   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:50.104612   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:51.083246   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:53.582120   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:52.001273   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:54.001308   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:51.108066   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:10:51.121176   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:10:51.121254   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:10:51.155641   62996 cri.go:89] found id: ""
	I0914 18:10:51.155675   62996 logs.go:276] 0 containers: []
	W0914 18:10:51.155687   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:10:51.155693   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:10:51.155744   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:10:51.189642   62996 cri.go:89] found id: ""
	I0914 18:10:51.189677   62996 logs.go:276] 0 containers: []
	W0914 18:10:51.189691   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:10:51.189698   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:10:51.189763   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:10:51.223337   62996 cri.go:89] found id: ""
	I0914 18:10:51.223365   62996 logs.go:276] 0 containers: []
	W0914 18:10:51.223375   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:10:51.223383   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:10:51.223446   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:10:51.259524   62996 cri.go:89] found id: ""
	I0914 18:10:51.259549   62996 logs.go:276] 0 containers: []
	W0914 18:10:51.259557   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:10:51.259568   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:10:51.259625   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:10:51.295307   62996 cri.go:89] found id: ""
	I0914 18:10:51.295336   62996 logs.go:276] 0 containers: []
	W0914 18:10:51.295347   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:10:51.295354   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:10:51.295419   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:10:51.330619   62996 cri.go:89] found id: ""
	I0914 18:10:51.330658   62996 logs.go:276] 0 containers: []
	W0914 18:10:51.330670   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:10:51.330677   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:10:51.330741   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:10:51.365146   62996 cri.go:89] found id: ""
	I0914 18:10:51.365178   62996 logs.go:276] 0 containers: []
	W0914 18:10:51.365191   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:10:51.365200   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:10:51.365263   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:10:51.403295   62996 cri.go:89] found id: ""
	I0914 18:10:51.403330   62996 logs.go:276] 0 containers: []
	W0914 18:10:51.403342   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:10:51.403353   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:10:51.403369   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:10:51.467426   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:10:51.467452   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:10:51.467471   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:10:51.552003   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:10:51.552037   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:10:51.591888   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:10:51.591921   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:10:51.645437   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:10:51.645472   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:10:54.160542   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:10:54.173965   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:10:54.174040   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:10:54.209242   62996 cri.go:89] found id: ""
	I0914 18:10:54.209270   62996 logs.go:276] 0 containers: []
	W0914 18:10:54.209281   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:10:54.209288   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:10:54.209365   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:10:54.242345   62996 cri.go:89] found id: ""
	I0914 18:10:54.242374   62996 logs.go:276] 0 containers: []
	W0914 18:10:54.242384   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:10:54.242392   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:10:54.242453   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:10:54.278677   62996 cri.go:89] found id: ""
	I0914 18:10:54.278707   62996 logs.go:276] 0 containers: []
	W0914 18:10:54.278718   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:10:54.278725   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:10:54.278793   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:10:54.314802   62996 cri.go:89] found id: ""
	I0914 18:10:54.314831   62996 logs.go:276] 0 containers: []
	W0914 18:10:54.314842   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:10:54.314849   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:10:54.314920   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:10:54.349075   62996 cri.go:89] found id: ""
	I0914 18:10:54.349100   62996 logs.go:276] 0 containers: []
	W0914 18:10:54.349120   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:10:54.349127   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:10:54.349189   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:10:54.382337   62996 cri.go:89] found id: ""
	I0914 18:10:54.382363   62996 logs.go:276] 0 containers: []
	W0914 18:10:54.382371   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:10:54.382376   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:10:54.382423   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:10:54.416613   62996 cri.go:89] found id: ""
	I0914 18:10:54.416640   62996 logs.go:276] 0 containers: []
	W0914 18:10:54.416649   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:10:54.416654   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:10:54.416701   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:10:54.449563   62996 cri.go:89] found id: ""
	I0914 18:10:54.449596   62996 logs.go:276] 0 containers: []
	W0914 18:10:54.449606   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:10:54.449617   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:10:54.449631   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:10:54.487454   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:10:54.487489   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:10:54.541679   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:10:54.541720   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:10:54.555267   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:10:54.555299   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:10:54.630280   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:10:54.630313   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:10:54.630323   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:10:52.603604   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:55.104734   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:55.582258   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:58.081905   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:56.002210   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:58.499961   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:57.215606   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:10:57.228469   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:10:57.228550   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:10:57.260643   62996 cri.go:89] found id: ""
	I0914 18:10:57.260675   62996 logs.go:276] 0 containers: []
	W0914 18:10:57.260684   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:10:57.260690   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:10:57.260750   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:10:57.294125   62996 cri.go:89] found id: ""
	I0914 18:10:57.294174   62996 logs.go:276] 0 containers: []
	W0914 18:10:57.294186   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:10:57.294196   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:10:57.294259   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:10:57.328078   62996 cri.go:89] found id: ""
	I0914 18:10:57.328101   62996 logs.go:276] 0 containers: []
	W0914 18:10:57.328108   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:10:57.328114   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:10:57.328173   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:10:57.362451   62996 cri.go:89] found id: ""
	I0914 18:10:57.362476   62996 logs.go:276] 0 containers: []
	W0914 18:10:57.362483   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:10:57.362489   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:10:57.362556   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:10:57.398273   62996 cri.go:89] found id: ""
	I0914 18:10:57.398298   62996 logs.go:276] 0 containers: []
	W0914 18:10:57.398306   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:10:57.398311   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:10:57.398374   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:10:57.431112   62996 cri.go:89] found id: ""
	I0914 18:10:57.431137   62996 logs.go:276] 0 containers: []
	W0914 18:10:57.431145   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:10:57.431151   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:10:57.431197   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:10:57.464930   62996 cri.go:89] found id: ""
	I0914 18:10:57.464956   62996 logs.go:276] 0 containers: []
	W0914 18:10:57.464966   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:10:57.464973   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:10:57.465033   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:10:57.501233   62996 cri.go:89] found id: ""
	I0914 18:10:57.501263   62996 logs.go:276] 0 containers: []
	W0914 18:10:57.501276   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:10:57.501287   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:10:57.501302   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:10:57.550798   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:10:57.550836   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:10:57.564238   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:10:57.564263   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:10:57.634387   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:10:57.634414   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:10:57.634424   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:10:57.714218   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:10:57.714253   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:11:00.251944   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:11:00.264817   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:11:00.264881   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:11:00.306613   62996 cri.go:89] found id: ""
	I0914 18:11:00.306641   62996 logs.go:276] 0 containers: []
	W0914 18:11:00.306651   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:11:00.306658   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:11:00.306717   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:11:00.340297   62996 cri.go:89] found id: ""
	I0914 18:11:00.340327   62996 logs.go:276] 0 containers: []
	W0914 18:11:00.340338   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:11:00.340346   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:11:00.340404   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:10:57.604025   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:00.104193   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:00.083208   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:02.582299   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:04.583803   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:00.500596   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:02.501405   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:04.501527   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:00.373553   62996 cri.go:89] found id: ""
	I0914 18:11:00.373594   62996 logs.go:276] 0 containers: []
	W0914 18:11:00.373603   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:11:00.373609   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:11:00.373657   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:11:00.407351   62996 cri.go:89] found id: ""
	I0914 18:11:00.407381   62996 logs.go:276] 0 containers: []
	W0914 18:11:00.407392   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:11:00.407400   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:11:00.407461   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:11:00.440976   62996 cri.go:89] found id: ""
	I0914 18:11:00.441005   62996 logs.go:276] 0 containers: []
	W0914 18:11:00.441016   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:11:00.441024   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:11:00.441085   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:11:00.478138   62996 cri.go:89] found id: ""
	I0914 18:11:00.478180   62996 logs.go:276] 0 containers: []
	W0914 18:11:00.478193   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:11:00.478201   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:11:00.478264   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:11:00.513861   62996 cri.go:89] found id: ""
	I0914 18:11:00.513885   62996 logs.go:276] 0 containers: []
	W0914 18:11:00.513897   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:11:00.513905   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:11:00.513955   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:11:00.547295   62996 cri.go:89] found id: ""
	I0914 18:11:00.547338   62996 logs.go:276] 0 containers: []
	W0914 18:11:00.547348   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:11:00.547357   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:11:00.547367   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:11:00.598108   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:11:00.598146   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:11:00.611751   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:11:00.611778   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:11:00.688767   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:11:00.688788   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:11:00.688803   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:11:00.771892   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:11:00.771929   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:11:03.310816   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:11:03.323773   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:11:03.323838   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:11:03.357873   62996 cri.go:89] found id: ""
	I0914 18:11:03.357910   62996 logs.go:276] 0 containers: []
	W0914 18:11:03.357922   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:11:03.357934   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:11:03.357995   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:11:03.394978   62996 cri.go:89] found id: ""
	I0914 18:11:03.395012   62996 logs.go:276] 0 containers: []
	W0914 18:11:03.395024   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:11:03.395032   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:11:03.395098   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:11:03.429699   62996 cri.go:89] found id: ""
	I0914 18:11:03.429725   62996 logs.go:276] 0 containers: []
	W0914 18:11:03.429734   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:11:03.429740   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:11:03.429794   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:11:03.462616   62996 cri.go:89] found id: ""
	I0914 18:11:03.462648   62996 logs.go:276] 0 containers: []
	W0914 18:11:03.462660   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:11:03.462692   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:11:03.462759   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:11:03.496464   62996 cri.go:89] found id: ""
	I0914 18:11:03.496495   62996 logs.go:276] 0 containers: []
	W0914 18:11:03.496506   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:11:03.496513   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:11:03.496573   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:11:03.529655   62996 cri.go:89] found id: ""
	I0914 18:11:03.529687   62996 logs.go:276] 0 containers: []
	W0914 18:11:03.529697   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:11:03.529704   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:11:03.529767   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:11:03.563025   62996 cri.go:89] found id: ""
	I0914 18:11:03.563055   62996 logs.go:276] 0 containers: []
	W0914 18:11:03.563064   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:11:03.563069   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:11:03.563123   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:11:03.604066   62996 cri.go:89] found id: ""
	I0914 18:11:03.604088   62996 logs.go:276] 0 containers: []
	W0914 18:11:03.604095   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:11:03.604103   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:11:03.604114   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:11:03.656607   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:11:03.656647   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:11:03.669974   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:11:03.670004   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:11:03.742295   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:11:03.742324   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:11:03.742343   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:11:03.817527   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:11:03.817566   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:11:02.602818   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:05.103061   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:07.083161   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:09.585702   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:06.999885   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:09.001611   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:06.355023   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:11:06.368376   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:11:06.368445   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:11:06.403876   62996 cri.go:89] found id: ""
	I0914 18:11:06.403904   62996 logs.go:276] 0 containers: []
	W0914 18:11:06.403916   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:11:06.403924   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:11:06.403997   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:11:06.438187   62996 cri.go:89] found id: ""
	I0914 18:11:06.438217   62996 logs.go:276] 0 containers: []
	W0914 18:11:06.438229   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:11:06.438236   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:11:06.438302   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:11:06.477599   62996 cri.go:89] found id: ""
	I0914 18:11:06.477628   62996 logs.go:276] 0 containers: []
	W0914 18:11:06.477639   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:11:06.477646   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:11:06.477718   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:11:06.514878   62996 cri.go:89] found id: ""
	I0914 18:11:06.514905   62996 logs.go:276] 0 containers: []
	W0914 18:11:06.514914   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:11:06.514920   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:11:06.514979   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:11:06.552228   62996 cri.go:89] found id: ""
	I0914 18:11:06.552260   62996 logs.go:276] 0 containers: []
	W0914 18:11:06.552272   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:11:06.552279   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:11:06.552346   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:11:06.594600   62996 cri.go:89] found id: ""
	I0914 18:11:06.594630   62996 logs.go:276] 0 containers: []
	W0914 18:11:06.594641   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:11:06.594649   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:11:06.594713   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:11:06.630977   62996 cri.go:89] found id: ""
	I0914 18:11:06.631017   62996 logs.go:276] 0 containers: []
	W0914 18:11:06.631029   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:11:06.631036   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:11:06.631095   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:11:06.666717   62996 cri.go:89] found id: ""
	I0914 18:11:06.666749   62996 logs.go:276] 0 containers: []
	W0914 18:11:06.666760   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:11:06.666771   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:11:06.666784   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:11:06.720438   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:11:06.720474   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:11:06.734264   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:11:06.734299   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:11:06.802999   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:11:06.803020   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:11:06.803039   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:11:06.881422   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:11:06.881462   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:11:09.420948   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:11:09.435498   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:11:09.435582   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:11:09.470441   62996 cri.go:89] found id: ""
	I0914 18:11:09.470473   62996 logs.go:276] 0 containers: []
	W0914 18:11:09.470485   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:11:09.470493   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:11:09.470568   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:11:09.506101   62996 cri.go:89] found id: ""
	I0914 18:11:09.506124   62996 logs.go:276] 0 containers: []
	W0914 18:11:09.506142   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:11:09.506147   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:11:09.506227   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:11:09.541518   62996 cri.go:89] found id: ""
	I0914 18:11:09.541545   62996 logs.go:276] 0 containers: []
	W0914 18:11:09.541553   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:11:09.541558   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:11:09.541618   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:11:09.582697   62996 cri.go:89] found id: ""
	I0914 18:11:09.582725   62996 logs.go:276] 0 containers: []
	W0914 18:11:09.582735   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:11:09.582743   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:11:09.582805   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:11:09.621060   62996 cri.go:89] found id: ""
	I0914 18:11:09.621088   62996 logs.go:276] 0 containers: []
	W0914 18:11:09.621097   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:11:09.621102   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:11:09.621161   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:11:09.657967   62996 cri.go:89] found id: ""
	I0914 18:11:09.657994   62996 logs.go:276] 0 containers: []
	W0914 18:11:09.658003   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:11:09.658008   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:11:09.658060   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:11:09.693397   62996 cri.go:89] found id: ""
	I0914 18:11:09.693432   62996 logs.go:276] 0 containers: []
	W0914 18:11:09.693444   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:11:09.693451   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:11:09.693505   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:11:09.730819   62996 cri.go:89] found id: ""
	I0914 18:11:09.730850   62996 logs.go:276] 0 containers: []
	W0914 18:11:09.730860   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:11:09.730871   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:11:09.730887   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:11:09.745106   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:11:09.745146   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:11:09.817032   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:11:09.817059   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:11:09.817085   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:11:09.897335   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:11:09.897383   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:11:09.939036   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:11:09.939081   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:11:07.603634   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:09.605513   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:12.082145   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:14.082616   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:11.500951   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:14.001238   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:12.493075   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:11:12.506832   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:11:12.506889   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:11:12.545417   62996 cri.go:89] found id: ""
	I0914 18:11:12.545448   62996 logs.go:276] 0 containers: []
	W0914 18:11:12.545456   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:11:12.545464   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:11:12.545516   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:11:12.580346   62996 cri.go:89] found id: ""
	I0914 18:11:12.580379   62996 logs.go:276] 0 containers: []
	W0914 18:11:12.580389   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:11:12.580397   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:11:12.580457   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:11:12.616540   62996 cri.go:89] found id: ""
	I0914 18:11:12.616570   62996 logs.go:276] 0 containers: []
	W0914 18:11:12.616577   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:11:12.616586   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:11:12.616644   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:11:12.649673   62996 cri.go:89] found id: ""
	I0914 18:11:12.649700   62996 logs.go:276] 0 containers: []
	W0914 18:11:12.649709   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:11:12.649714   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:11:12.649767   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:11:12.683840   62996 cri.go:89] found id: ""
	I0914 18:11:12.683868   62996 logs.go:276] 0 containers: []
	W0914 18:11:12.683879   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:11:12.683886   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:11:12.683946   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:11:12.716862   62996 cri.go:89] found id: ""
	I0914 18:11:12.716889   62996 logs.go:276] 0 containers: []
	W0914 18:11:12.716897   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:11:12.716903   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:11:12.716952   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:11:12.751364   62996 cri.go:89] found id: ""
	I0914 18:11:12.751395   62996 logs.go:276] 0 containers: []
	W0914 18:11:12.751406   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:11:12.751414   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:11:12.751471   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:11:12.786425   62996 cri.go:89] found id: ""
	I0914 18:11:12.786457   62996 logs.go:276] 0 containers: []
	W0914 18:11:12.786468   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:11:12.786477   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:11:12.786487   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:11:12.853890   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:11:12.853920   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:11:12.853936   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:11:12.938058   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:11:12.938107   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:11:12.985406   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:11:12.985441   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:11:13.039040   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:11:13.039077   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:11:12.103165   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:14.103338   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:16.103440   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:16.083173   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:18.582225   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:16.001344   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:18.501001   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:15.554110   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:11:15.567977   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:11:15.568051   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:11:15.604851   62996 cri.go:89] found id: ""
	I0914 18:11:15.604879   62996 logs.go:276] 0 containers: []
	W0914 18:11:15.604887   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:11:15.604892   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:11:15.604945   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:11:15.641180   62996 cri.go:89] found id: ""
	I0914 18:11:15.641209   62996 logs.go:276] 0 containers: []
	W0914 18:11:15.641221   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:11:15.641229   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:11:15.641324   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:11:15.680284   62996 cri.go:89] found id: ""
	I0914 18:11:15.680310   62996 logs.go:276] 0 containers: []
	W0914 18:11:15.680327   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:11:15.680334   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:11:15.680395   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:11:15.718118   62996 cri.go:89] found id: ""
	I0914 18:11:15.718152   62996 logs.go:276] 0 containers: []
	W0914 18:11:15.718173   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:11:15.718181   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:11:15.718237   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:11:15.753998   62996 cri.go:89] found id: ""
	I0914 18:11:15.754020   62996 logs.go:276] 0 containers: []
	W0914 18:11:15.754028   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:11:15.754033   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:11:15.754081   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:11:15.790026   62996 cri.go:89] found id: ""
	I0914 18:11:15.790066   62996 logs.go:276] 0 containers: []
	W0914 18:11:15.790084   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:11:15.790093   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:11:15.790179   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:11:15.828050   62996 cri.go:89] found id: ""
	I0914 18:11:15.828078   62996 logs.go:276] 0 containers: []
	W0914 18:11:15.828086   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:11:15.828094   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:11:15.828162   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:11:15.861289   62996 cri.go:89] found id: ""
	I0914 18:11:15.861322   62996 logs.go:276] 0 containers: []
	W0914 18:11:15.861330   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:11:15.861338   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:11:15.861348   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:11:15.875023   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:11:15.875054   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:11:15.943002   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:11:15.943025   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:11:15.943038   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:11:16.027747   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:11:16.027785   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:11:16.067097   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:11:16.067133   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:11:18.621376   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:11:18.634005   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:11:18.634093   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:11:18.667089   62996 cri.go:89] found id: ""
	I0914 18:11:18.667118   62996 logs.go:276] 0 containers: []
	W0914 18:11:18.667127   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:11:18.667132   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:11:18.667184   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:11:18.700518   62996 cri.go:89] found id: ""
	I0914 18:11:18.700547   62996 logs.go:276] 0 containers: []
	W0914 18:11:18.700563   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:11:18.700571   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:11:18.700643   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:11:18.733724   62996 cri.go:89] found id: ""
	I0914 18:11:18.733755   62996 logs.go:276] 0 containers: []
	W0914 18:11:18.733767   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:11:18.733778   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:11:18.733840   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:11:18.768696   62996 cri.go:89] found id: ""
	I0914 18:11:18.768739   62996 logs.go:276] 0 containers: []
	W0914 18:11:18.768750   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:11:18.768757   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:11:18.768816   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:11:18.803603   62996 cri.go:89] found id: ""
	I0914 18:11:18.803636   62996 logs.go:276] 0 containers: []
	W0914 18:11:18.803647   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:11:18.803653   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:11:18.803707   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:11:18.837019   62996 cri.go:89] found id: ""
	I0914 18:11:18.837044   62996 logs.go:276] 0 containers: []
	W0914 18:11:18.837052   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:11:18.837058   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:11:18.837107   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:11:18.871470   62996 cri.go:89] found id: ""
	I0914 18:11:18.871496   62996 logs.go:276] 0 containers: []
	W0914 18:11:18.871504   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:11:18.871515   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:11:18.871573   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:11:18.904439   62996 cri.go:89] found id: ""
	I0914 18:11:18.904474   62996 logs.go:276] 0 containers: []
	W0914 18:11:18.904485   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:11:18.904494   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:11:18.904504   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:11:18.978025   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:11:18.978065   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:11:19.031667   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:11:19.031709   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:11:19.083360   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:11:19.083398   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:11:19.097770   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:11:19.097796   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:11:19.167712   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:11:18.603529   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:20.607347   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:20.583176   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:23.082414   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:20.501464   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:23.000161   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:25.000597   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:21.668470   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:11:21.681917   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:11:21.681994   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:11:21.717243   62996 cri.go:89] found id: ""
	I0914 18:11:21.717272   62996 logs.go:276] 0 containers: []
	W0914 18:11:21.717281   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:11:21.717286   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:11:21.717341   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:11:21.748801   62996 cri.go:89] found id: ""
	I0914 18:11:21.748853   62996 logs.go:276] 0 containers: []
	W0914 18:11:21.748863   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:11:21.748871   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:11:21.748930   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:11:21.785146   62996 cri.go:89] found id: ""
	I0914 18:11:21.785171   62996 logs.go:276] 0 containers: []
	W0914 18:11:21.785180   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:11:21.785185   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:11:21.785242   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:11:21.819949   62996 cri.go:89] found id: ""
	I0914 18:11:21.819977   62996 logs.go:276] 0 containers: []
	W0914 18:11:21.819984   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:11:21.819990   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:11:21.820039   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:11:21.852418   62996 cri.go:89] found id: ""
	I0914 18:11:21.852451   62996 logs.go:276] 0 containers: []
	W0914 18:11:21.852461   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:11:21.852468   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:11:21.852535   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:11:21.890170   62996 cri.go:89] found id: ""
	I0914 18:11:21.890205   62996 logs.go:276] 0 containers: []
	W0914 18:11:21.890216   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:11:21.890223   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:11:21.890283   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:11:21.924386   62996 cri.go:89] found id: ""
	I0914 18:11:21.924420   62996 logs.go:276] 0 containers: []
	W0914 18:11:21.924432   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:11:21.924439   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:11:21.924505   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:11:21.960302   62996 cri.go:89] found id: ""
	I0914 18:11:21.960328   62996 logs.go:276] 0 containers: []
	W0914 18:11:21.960337   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:11:21.960346   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:11:21.960360   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:11:22.038804   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:11:22.038839   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:11:22.082411   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:11:22.082444   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:11:22.134306   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:11:22.134339   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:11:22.147891   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:11:22.147919   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:11:22.216582   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:11:24.716879   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:11:24.729436   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:11:24.729506   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:11:24.782796   62996 cri.go:89] found id: ""
	I0914 18:11:24.782822   62996 logs.go:276] 0 containers: []
	W0914 18:11:24.782833   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:11:24.782842   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:11:24.782897   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:11:24.819075   62996 cri.go:89] found id: ""
	I0914 18:11:24.819101   62996 logs.go:276] 0 containers: []
	W0914 18:11:24.819108   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:11:24.819113   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:11:24.819157   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:11:24.852976   62996 cri.go:89] found id: ""
	I0914 18:11:24.853003   62996 logs.go:276] 0 containers: []
	W0914 18:11:24.853013   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:11:24.853020   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:11:24.853083   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:11:24.888010   62996 cri.go:89] found id: ""
	I0914 18:11:24.888041   62996 logs.go:276] 0 containers: []
	W0914 18:11:24.888053   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:11:24.888061   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:11:24.888140   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:11:24.923467   62996 cri.go:89] found id: ""
	I0914 18:11:24.923500   62996 logs.go:276] 0 containers: []
	W0914 18:11:24.923514   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:11:24.923522   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:11:24.923575   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:11:24.961976   62996 cri.go:89] found id: ""
	I0914 18:11:24.962003   62996 logs.go:276] 0 containers: []
	W0914 18:11:24.962011   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:11:24.962018   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:11:24.962079   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:11:24.995831   62996 cri.go:89] found id: ""
	I0914 18:11:24.995854   62996 logs.go:276] 0 containers: []
	W0914 18:11:24.995862   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:11:24.995868   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:11:24.995929   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:11:25.034793   62996 cri.go:89] found id: ""
	I0914 18:11:25.034822   62996 logs.go:276] 0 containers: []
	W0914 18:11:25.034832   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:11:25.034840   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:11:25.034855   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:11:25.048500   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:11:25.048531   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:11:25.120313   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:11:25.120346   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:11:25.120361   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:11:25.200361   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:11:25.200395   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:11:25.238898   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:11:25.238928   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:11:23.103266   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:25.104091   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:25.082804   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:27.582345   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:29.582482   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:27.001813   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:29.500751   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:27.791098   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:11:27.803729   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:11:27.803785   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:11:27.840688   62996 cri.go:89] found id: ""
	I0914 18:11:27.840711   62996 logs.go:276] 0 containers: []
	W0914 18:11:27.840719   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:11:27.840725   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:11:27.840775   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:11:27.874108   62996 cri.go:89] found id: ""
	I0914 18:11:27.874140   62996 logs.go:276] 0 containers: []
	W0914 18:11:27.874151   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:11:27.874176   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:11:27.874241   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:11:27.909352   62996 cri.go:89] found id: ""
	I0914 18:11:27.909392   62996 logs.go:276] 0 containers: []
	W0914 18:11:27.909403   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:11:27.909410   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:11:27.909460   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:11:27.942751   62996 cri.go:89] found id: ""
	I0914 18:11:27.942777   62996 logs.go:276] 0 containers: []
	W0914 18:11:27.942786   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:11:27.942791   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:11:27.942852   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:11:27.977714   62996 cri.go:89] found id: ""
	I0914 18:11:27.977745   62996 logs.go:276] 0 containers: []
	W0914 18:11:27.977756   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:11:27.977764   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:11:27.977830   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:11:28.013681   62996 cri.go:89] found id: ""
	I0914 18:11:28.013711   62996 logs.go:276] 0 containers: []
	W0914 18:11:28.013722   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:11:28.013730   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:11:28.013791   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:11:28.047112   62996 cri.go:89] found id: ""
	I0914 18:11:28.047138   62996 logs.go:276] 0 containers: []
	W0914 18:11:28.047146   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:11:28.047152   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:11:28.047199   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:11:28.084290   62996 cri.go:89] found id: ""
	I0914 18:11:28.084317   62996 logs.go:276] 0 containers: []
	W0914 18:11:28.084331   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:11:28.084340   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:11:28.084351   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:11:28.097720   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:11:28.097756   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:11:28.172054   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:11:28.172074   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:11:28.172085   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:11:28.253611   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:11:28.253644   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:11:28.289904   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:11:28.289938   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:11:27.105655   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:29.602893   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:32.082229   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:34.082649   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:31.501202   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:34.001997   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:30.839215   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:11:30.851580   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:11:30.851654   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:11:30.891232   62996 cri.go:89] found id: ""
	I0914 18:11:30.891261   62996 logs.go:276] 0 containers: []
	W0914 18:11:30.891272   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:11:30.891279   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:11:30.891346   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:11:30.930144   62996 cri.go:89] found id: ""
	I0914 18:11:30.930187   62996 logs.go:276] 0 containers: []
	W0914 18:11:30.930197   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:11:30.930204   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:11:30.930265   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:11:30.965034   62996 cri.go:89] found id: ""
	I0914 18:11:30.965068   62996 logs.go:276] 0 containers: []
	W0914 18:11:30.965080   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:11:30.965087   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:11:30.965150   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:11:30.998927   62996 cri.go:89] found id: ""
	I0914 18:11:30.998955   62996 logs.go:276] 0 containers: []
	W0914 18:11:30.998966   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:11:30.998974   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:11:30.999039   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:11:31.033789   62996 cri.go:89] found id: ""
	I0914 18:11:31.033820   62996 logs.go:276] 0 containers: []
	W0914 18:11:31.033830   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:11:31.033838   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:11:31.033892   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:11:31.068988   62996 cri.go:89] found id: ""
	I0914 18:11:31.069020   62996 logs.go:276] 0 containers: []
	W0914 18:11:31.069029   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:11:31.069035   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:11:31.069085   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:11:31.105904   62996 cri.go:89] found id: ""
	I0914 18:11:31.105932   62996 logs.go:276] 0 containers: []
	W0914 18:11:31.105944   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:11:31.105951   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:11:31.106018   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:11:31.147560   62996 cri.go:89] found id: ""
	I0914 18:11:31.147593   62996 logs.go:276] 0 containers: []
	W0914 18:11:31.147606   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:11:31.147618   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:11:31.147633   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:11:31.237347   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:11:31.237373   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:11:31.237389   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:11:31.322978   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:11:31.323012   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:11:31.361464   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:11:31.361495   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:11:31.417255   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:11:31.417299   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:11:33.930962   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:11:33.944431   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:11:33.944514   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:11:33.979727   62996 cri.go:89] found id: ""
	I0914 18:11:33.979761   62996 logs.go:276] 0 containers: []
	W0914 18:11:33.979772   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:11:33.979779   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:11:33.979840   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:11:34.015069   62996 cri.go:89] found id: ""
	I0914 18:11:34.015100   62996 logs.go:276] 0 containers: []
	W0914 18:11:34.015111   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:11:34.015117   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:11:34.015168   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:11:34.049230   62996 cri.go:89] found id: ""
	I0914 18:11:34.049262   62996 logs.go:276] 0 containers: []
	W0914 18:11:34.049274   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:11:34.049282   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:11:34.049345   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:11:34.086175   62996 cri.go:89] found id: ""
	I0914 18:11:34.086205   62996 logs.go:276] 0 containers: []
	W0914 18:11:34.086216   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:11:34.086225   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:11:34.086286   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:11:34.123534   62996 cri.go:89] found id: ""
	I0914 18:11:34.123563   62996 logs.go:276] 0 containers: []
	W0914 18:11:34.123573   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:11:34.123581   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:11:34.123645   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:11:34.160782   62996 cri.go:89] found id: ""
	I0914 18:11:34.160812   62996 logs.go:276] 0 containers: []
	W0914 18:11:34.160822   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:11:34.160830   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:11:34.160891   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:11:34.193240   62996 cri.go:89] found id: ""
	I0914 18:11:34.193264   62996 logs.go:276] 0 containers: []
	W0914 18:11:34.193272   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:11:34.193278   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:11:34.193336   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:11:34.232788   62996 cri.go:89] found id: ""
	I0914 18:11:34.232816   62996 logs.go:276] 0 containers: []
	W0914 18:11:34.232827   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:11:34.232838   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:11:34.232851   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:11:34.284953   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:11:34.284993   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:11:34.299462   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:11:34.299491   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:11:34.370596   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:11:34.370623   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:11:34.370638   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:11:34.450082   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:11:34.450118   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:11:32.103194   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:34.103615   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:36.603139   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:36.083120   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:38.582197   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:36.500663   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:38.501005   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:36.991625   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:11:37.009170   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:11:37.009229   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:11:37.044035   62996 cri.go:89] found id: ""
	I0914 18:11:37.044058   62996 logs.go:276] 0 containers: []
	W0914 18:11:37.044066   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:11:37.044072   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:11:37.044126   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:11:37.076288   62996 cri.go:89] found id: ""
	I0914 18:11:37.076318   62996 logs.go:276] 0 containers: []
	W0914 18:11:37.076328   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:11:37.076336   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:11:37.076399   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:11:37.110509   62996 cri.go:89] found id: ""
	I0914 18:11:37.110533   62996 logs.go:276] 0 containers: []
	W0914 18:11:37.110541   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:11:37.110553   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:11:37.110603   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:11:37.143688   62996 cri.go:89] found id: ""
	I0914 18:11:37.143713   62996 logs.go:276] 0 containers: []
	W0914 18:11:37.143721   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:11:37.143726   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:11:37.143781   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:11:37.180802   62996 cri.go:89] found id: ""
	I0914 18:11:37.180828   62996 logs.go:276] 0 containers: []
	W0914 18:11:37.180839   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:11:37.180846   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:11:37.180907   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:11:37.214590   62996 cri.go:89] found id: ""
	I0914 18:11:37.214615   62996 logs.go:276] 0 containers: []
	W0914 18:11:37.214623   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:11:37.214628   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:11:37.214674   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:11:37.246039   62996 cri.go:89] found id: ""
	I0914 18:11:37.246067   62996 logs.go:276] 0 containers: []
	W0914 18:11:37.246078   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:11:37.246085   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:11:37.246152   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:11:37.278258   62996 cri.go:89] found id: ""
	I0914 18:11:37.278299   62996 logs.go:276] 0 containers: []
	W0914 18:11:37.278307   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:11:37.278315   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:11:37.278325   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:11:37.315788   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:11:37.315817   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:11:37.367286   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:11:37.367322   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:11:37.380863   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:11:37.380894   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:11:37.447925   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:11:37.447948   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:11:37.447959   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:11:40.025419   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:11:40.038279   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:11:40.038361   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:11:40.072986   62996 cri.go:89] found id: ""
	I0914 18:11:40.073021   62996 logs.go:276] 0 containers: []
	W0914 18:11:40.073033   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:11:40.073041   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:11:40.073102   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:11:40.107636   62996 cri.go:89] found id: ""
	I0914 18:11:40.107657   62996 logs.go:276] 0 containers: []
	W0914 18:11:40.107665   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:11:40.107670   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:11:40.107723   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:11:40.145308   62996 cri.go:89] found id: ""
	I0914 18:11:40.145347   62996 logs.go:276] 0 containers: []
	W0914 18:11:40.145359   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:11:40.145366   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:11:40.145412   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:11:40.182409   62996 cri.go:89] found id: ""
	I0914 18:11:40.182439   62996 logs.go:276] 0 containers: []
	W0914 18:11:40.182449   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:11:40.182457   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:11:40.182522   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:11:40.217621   62996 cri.go:89] found id: ""
	I0914 18:11:40.217655   62996 logs.go:276] 0 containers: []
	W0914 18:11:40.217667   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:11:40.217675   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:11:40.217738   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:11:40.253159   62996 cri.go:89] found id: ""
	I0914 18:11:40.253186   62996 logs.go:276] 0 containers: []
	W0914 18:11:40.253197   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:11:40.253205   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:11:40.253263   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:11:40.286808   62996 cri.go:89] found id: ""
	I0914 18:11:40.286838   62996 logs.go:276] 0 containers: []
	W0914 18:11:40.286847   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:11:40.286855   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:11:40.286910   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:11:40.324265   62996 cri.go:89] found id: ""
	I0914 18:11:40.324292   62996 logs.go:276] 0 containers: []
	W0914 18:11:40.324299   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:11:40.324307   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:11:40.324318   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:11:38.603823   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:41.102313   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:40.583132   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:43.082387   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:40.501996   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:43.000447   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:40.376962   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:11:40.376996   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:11:40.390564   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:11:40.390594   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:11:40.460934   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:11:40.460956   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:11:40.460967   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:11:40.537058   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:11:40.537099   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:11:43.075401   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:11:43.088488   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:11:43.088559   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:11:43.122777   62996 cri.go:89] found id: ""
	I0914 18:11:43.122802   62996 logs.go:276] 0 containers: []
	W0914 18:11:43.122811   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:11:43.122818   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:11:43.122878   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:11:43.155343   62996 cri.go:89] found id: ""
	I0914 18:11:43.155369   62996 logs.go:276] 0 containers: []
	W0914 18:11:43.155378   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:11:43.155383   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:11:43.155443   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:11:43.190350   62996 cri.go:89] found id: ""
	I0914 18:11:43.190379   62996 logs.go:276] 0 containers: []
	W0914 18:11:43.190390   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:11:43.190398   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:11:43.190460   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:11:43.222930   62996 cri.go:89] found id: ""
	I0914 18:11:43.222961   62996 logs.go:276] 0 containers: []
	W0914 18:11:43.222972   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:11:43.222979   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:11:43.223042   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:11:43.256931   62996 cri.go:89] found id: ""
	I0914 18:11:43.256959   62996 logs.go:276] 0 containers: []
	W0914 18:11:43.256971   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:11:43.256977   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:11:43.257044   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:11:43.287691   62996 cri.go:89] found id: ""
	I0914 18:11:43.287720   62996 logs.go:276] 0 containers: []
	W0914 18:11:43.287729   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:11:43.287734   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:11:43.287790   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:11:43.320633   62996 cri.go:89] found id: ""
	I0914 18:11:43.320658   62996 logs.go:276] 0 containers: []
	W0914 18:11:43.320666   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:11:43.320677   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:11:43.320738   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:11:43.354230   62996 cri.go:89] found id: ""
	I0914 18:11:43.354269   62996 logs.go:276] 0 containers: []
	W0914 18:11:43.354280   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:11:43.354291   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:11:43.354304   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:11:43.429256   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:11:43.429293   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:11:43.467929   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:11:43.467957   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:11:43.521266   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:11:43.521305   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:11:43.536471   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:11:43.536511   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:11:43.607588   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:11:43.103756   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:45.602971   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:45.082762   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:47.582353   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:49.584026   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:45.500451   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:47.501831   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:50.001778   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:46.108756   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:11:46.121231   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:11:46.121314   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:11:46.156499   62996 cri.go:89] found id: ""
	I0914 18:11:46.156528   62996 logs.go:276] 0 containers: []
	W0914 18:11:46.156537   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:11:46.156543   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:11:46.156591   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:11:46.192161   62996 cri.go:89] found id: ""
	I0914 18:11:46.192188   62996 logs.go:276] 0 containers: []
	W0914 18:11:46.192197   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:11:46.192203   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:11:46.192263   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:11:46.222784   62996 cri.go:89] found id: ""
	I0914 18:11:46.222816   62996 logs.go:276] 0 containers: []
	W0914 18:11:46.222826   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:11:46.222834   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:11:46.222894   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:11:46.261551   62996 cri.go:89] found id: ""
	I0914 18:11:46.261577   62996 logs.go:276] 0 containers: []
	W0914 18:11:46.261587   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:11:46.261594   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:11:46.261659   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:11:46.298263   62996 cri.go:89] found id: ""
	I0914 18:11:46.298293   62996 logs.go:276] 0 containers: []
	W0914 18:11:46.298303   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:11:46.298311   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:11:46.298387   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:11:46.333477   62996 cri.go:89] found id: ""
	I0914 18:11:46.333502   62996 logs.go:276] 0 containers: []
	W0914 18:11:46.333510   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:11:46.333516   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:11:46.333581   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:11:46.367975   62996 cri.go:89] found id: ""
	I0914 18:11:46.367998   62996 logs.go:276] 0 containers: []
	W0914 18:11:46.368005   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:11:46.368011   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:11:46.368063   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:11:46.402252   62996 cri.go:89] found id: ""
	I0914 18:11:46.402281   62996 logs.go:276] 0 containers: []
	W0914 18:11:46.402293   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:11:46.402310   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:11:46.402329   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:11:46.477212   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:11:46.477252   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:11:46.515542   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:11:46.515568   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:11:46.570108   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:11:46.570146   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:11:46.585989   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:11:46.586019   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:11:46.658769   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:11:49.159920   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:11:49.172748   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:11:49.172810   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:11:49.213555   62996 cri.go:89] found id: ""
	I0914 18:11:49.213585   62996 logs.go:276] 0 containers: []
	W0914 18:11:49.213595   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:11:49.213601   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:11:49.213660   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:11:49.246022   62996 cri.go:89] found id: ""
	I0914 18:11:49.246050   62996 logs.go:276] 0 containers: []
	W0914 18:11:49.246061   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:11:49.246068   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:11:49.246132   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:11:49.279131   62996 cri.go:89] found id: ""
	I0914 18:11:49.279157   62996 logs.go:276] 0 containers: []
	W0914 18:11:49.279167   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:11:49.279175   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:11:49.279236   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:11:49.313159   62996 cri.go:89] found id: ""
	I0914 18:11:49.313187   62996 logs.go:276] 0 containers: []
	W0914 18:11:49.313199   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:11:49.313207   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:11:49.313272   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:11:49.347837   62996 cri.go:89] found id: ""
	I0914 18:11:49.347861   62996 logs.go:276] 0 containers: []
	W0914 18:11:49.347870   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:11:49.347875   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:11:49.347932   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:11:49.381478   62996 cri.go:89] found id: ""
	I0914 18:11:49.381507   62996 logs.go:276] 0 containers: []
	W0914 18:11:49.381516   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:11:49.381522   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:11:49.381577   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:11:49.417197   62996 cri.go:89] found id: ""
	I0914 18:11:49.417224   62996 logs.go:276] 0 containers: []
	W0914 18:11:49.417238   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:11:49.417244   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:11:49.417313   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:11:49.450806   62996 cri.go:89] found id: ""
	I0914 18:11:49.450843   62996 logs.go:276] 0 containers: []
	W0914 18:11:49.450857   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:11:49.450870   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:11:49.450889   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:11:49.519573   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:11:49.519620   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:11:49.519639   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:11:49.595525   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:11:49.595565   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:11:49.633229   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:11:49.633259   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:11:49.688667   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:11:49.688710   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:11:47.605117   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:50.103023   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:52.082751   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:54.582016   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:52.501977   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:55.000564   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:52.206555   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:11:52.218920   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:11:52.218996   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:11:52.253986   62996 cri.go:89] found id: ""
	I0914 18:11:52.254010   62996 logs.go:276] 0 containers: []
	W0914 18:11:52.254018   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:11:52.254023   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:11:52.254070   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:11:52.286590   62996 cri.go:89] found id: ""
	I0914 18:11:52.286618   62996 logs.go:276] 0 containers: []
	W0914 18:11:52.286629   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:11:52.286636   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:11:52.286698   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:11:52.325419   62996 cri.go:89] found id: ""
	I0914 18:11:52.325454   62996 logs.go:276] 0 containers: []
	W0914 18:11:52.325464   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:11:52.325471   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:11:52.325533   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:11:52.363050   62996 cri.go:89] found id: ""
	I0914 18:11:52.363079   62996 logs.go:276] 0 containers: []
	W0914 18:11:52.363091   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:11:52.363098   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:11:52.363160   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:11:52.400107   62996 cri.go:89] found id: ""
	I0914 18:11:52.400142   62996 logs.go:276] 0 containers: []
	W0914 18:11:52.400153   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:11:52.400162   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:11:52.400229   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:11:52.435711   62996 cri.go:89] found id: ""
	I0914 18:11:52.435735   62996 logs.go:276] 0 containers: []
	W0914 18:11:52.435744   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:11:52.435752   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:11:52.435806   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:11:52.470761   62996 cri.go:89] found id: ""
	I0914 18:11:52.470789   62996 logs.go:276] 0 containers: []
	W0914 18:11:52.470800   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:11:52.470808   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:11:52.470875   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:11:52.505680   62996 cri.go:89] found id: ""
	I0914 18:11:52.505705   62996 logs.go:276] 0 containers: []
	W0914 18:11:52.505714   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:11:52.505725   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:11:52.505745   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:11:52.557577   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:11:52.557616   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:11:52.571785   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:11:52.571817   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:11:52.639759   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:11:52.639790   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:11:52.639805   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:11:52.727022   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:11:52.727072   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:11:55.266381   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:11:55.279300   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:11:55.279376   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:11:55.315414   62996 cri.go:89] found id: ""
	I0914 18:11:55.315455   62996 logs.go:276] 0 containers: []
	W0914 18:11:55.315463   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:11:55.315472   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:11:55.315539   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:11:52.603110   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:54.603267   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:56.582121   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:58.583277   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:57.001624   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:59.501328   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:55.350153   62996 cri.go:89] found id: ""
	I0914 18:11:55.350203   62996 logs.go:276] 0 containers: []
	W0914 18:11:55.350213   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:11:55.350218   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:11:55.350296   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:11:55.387403   62996 cri.go:89] found id: ""
	I0914 18:11:55.387437   62996 logs.go:276] 0 containers: []
	W0914 18:11:55.387459   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:11:55.387467   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:11:55.387522   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:11:55.424532   62996 cri.go:89] found id: ""
	I0914 18:11:55.424558   62996 logs.go:276] 0 containers: []
	W0914 18:11:55.424566   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:11:55.424575   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:11:55.424664   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:11:55.462423   62996 cri.go:89] found id: ""
	I0914 18:11:55.462458   62996 logs.go:276] 0 containers: []
	W0914 18:11:55.462468   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:11:55.462475   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:11:55.462536   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:11:55.496865   62996 cri.go:89] found id: ""
	I0914 18:11:55.496900   62996 logs.go:276] 0 containers: []
	W0914 18:11:55.496911   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:11:55.496921   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:11:55.496986   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:11:55.531524   62996 cri.go:89] found id: ""
	I0914 18:11:55.531566   62996 logs.go:276] 0 containers: []
	W0914 18:11:55.531577   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:11:55.531598   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:11:55.531663   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:11:55.566579   62996 cri.go:89] found id: ""
	I0914 18:11:55.566606   62996 logs.go:276] 0 containers: []
	W0914 18:11:55.566615   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:11:55.566623   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:11:55.566635   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:11:55.621074   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:11:55.621122   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:11:55.635805   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:11:55.635832   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:11:55.702346   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:11:55.702373   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:11:55.702387   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:11:55.778589   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:11:55.778639   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:11:58.317118   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:11:58.330312   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:11:58.330382   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:11:58.363550   62996 cri.go:89] found id: ""
	I0914 18:11:58.363587   62996 logs.go:276] 0 containers: []
	W0914 18:11:58.363598   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:11:58.363606   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:11:58.363669   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:11:58.397152   62996 cri.go:89] found id: ""
	I0914 18:11:58.397183   62996 logs.go:276] 0 containers: []
	W0914 18:11:58.397194   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:11:58.397201   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:11:58.397259   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:11:58.435076   62996 cri.go:89] found id: ""
	I0914 18:11:58.435102   62996 logs.go:276] 0 containers: []
	W0914 18:11:58.435111   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:11:58.435116   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:11:58.435184   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:11:58.471455   62996 cri.go:89] found id: ""
	I0914 18:11:58.471479   62996 logs.go:276] 0 containers: []
	W0914 18:11:58.471487   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:11:58.471493   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:11:58.471551   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:11:58.504545   62996 cri.go:89] found id: ""
	I0914 18:11:58.504586   62996 logs.go:276] 0 containers: []
	W0914 18:11:58.504596   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:11:58.504603   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:11:58.504662   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:11:58.539335   62996 cri.go:89] found id: ""
	I0914 18:11:58.539362   62996 logs.go:276] 0 containers: []
	W0914 18:11:58.539376   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:11:58.539383   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:11:58.539431   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:11:58.579707   62996 cri.go:89] found id: ""
	I0914 18:11:58.579737   62996 logs.go:276] 0 containers: []
	W0914 18:11:58.579747   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:11:58.579755   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:11:58.579814   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:11:58.614227   62996 cri.go:89] found id: ""
	I0914 18:11:58.614250   62996 logs.go:276] 0 containers: []
	W0914 18:11:58.614259   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:11:58.614266   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:11:58.614279   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:11:58.699846   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:11:58.699888   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:11:58.738513   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:11:58.738542   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:11:58.787858   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:11:58.787895   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:11:58.801103   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:11:58.801137   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:11:58.868291   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:11:57.102934   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:59.103345   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:01.604125   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:01.083045   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:03.582885   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:01.501890   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:04.001023   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:01.368810   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:12:01.381287   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:12:01.381359   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:12:01.414556   62996 cri.go:89] found id: ""
	I0914 18:12:01.414587   62996 logs.go:276] 0 containers: []
	W0914 18:12:01.414599   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:12:01.414611   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:12:01.414661   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:12:01.447765   62996 cri.go:89] found id: ""
	I0914 18:12:01.447795   62996 logs.go:276] 0 containers: []
	W0914 18:12:01.447806   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:12:01.447813   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:12:01.447875   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:12:01.481012   62996 cri.go:89] found id: ""
	I0914 18:12:01.481045   62996 logs.go:276] 0 containers: []
	W0914 18:12:01.481057   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:12:01.481065   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:12:01.481126   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:12:01.516999   62996 cri.go:89] found id: ""
	I0914 18:12:01.517024   62996 logs.go:276] 0 containers: []
	W0914 18:12:01.517031   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:12:01.517037   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:12:01.517088   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:12:01.555520   62996 cri.go:89] found id: ""
	I0914 18:12:01.555548   62996 logs.go:276] 0 containers: []
	W0914 18:12:01.555559   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:12:01.555566   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:12:01.555642   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:12:01.589581   62996 cri.go:89] found id: ""
	I0914 18:12:01.589606   62996 logs.go:276] 0 containers: []
	W0914 18:12:01.589616   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:12:01.589624   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:12:01.589691   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:12:01.623955   62996 cri.go:89] found id: ""
	I0914 18:12:01.623983   62996 logs.go:276] 0 containers: []
	W0914 18:12:01.623995   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:12:01.624002   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:12:01.624067   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:12:01.659136   62996 cri.go:89] found id: ""
	I0914 18:12:01.659166   62996 logs.go:276] 0 containers: []
	W0914 18:12:01.659177   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:12:01.659187   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:12:01.659206   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:12:01.711812   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:12:01.711849   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:12:01.724934   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:12:01.724968   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:12:01.793052   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:12:01.793079   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:12:01.793091   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:12:01.866761   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:12:01.866799   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:12:04.406435   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:12:04.419756   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:12:04.419818   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:12:04.456593   62996 cri.go:89] found id: ""
	I0914 18:12:04.456621   62996 logs.go:276] 0 containers: []
	W0914 18:12:04.456632   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:12:04.456639   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:12:04.456689   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:12:04.489281   62996 cri.go:89] found id: ""
	I0914 18:12:04.489314   62996 logs.go:276] 0 containers: []
	W0914 18:12:04.489326   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:12:04.489333   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:12:04.489399   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:12:04.525353   62996 cri.go:89] found id: ""
	I0914 18:12:04.525381   62996 logs.go:276] 0 containers: []
	W0914 18:12:04.525391   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:12:04.525398   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:12:04.525464   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:12:04.558495   62996 cri.go:89] found id: ""
	I0914 18:12:04.558520   62996 logs.go:276] 0 containers: []
	W0914 18:12:04.558531   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:12:04.558539   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:12:04.558598   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:12:04.594815   62996 cri.go:89] found id: ""
	I0914 18:12:04.594837   62996 logs.go:276] 0 containers: []
	W0914 18:12:04.594845   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:12:04.594851   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:12:04.594899   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:12:04.630198   62996 cri.go:89] found id: ""
	I0914 18:12:04.630224   62996 logs.go:276] 0 containers: []
	W0914 18:12:04.630232   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:12:04.630238   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:12:04.630294   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:12:04.665328   62996 cri.go:89] found id: ""
	I0914 18:12:04.665358   62996 logs.go:276] 0 containers: []
	W0914 18:12:04.665368   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:12:04.665373   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:12:04.665432   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:12:04.699778   62996 cri.go:89] found id: ""
	I0914 18:12:04.699801   62996 logs.go:276] 0 containers: []
	W0914 18:12:04.699809   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:12:04.699816   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:12:04.699877   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:12:04.750978   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:12:04.751022   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:12:04.764968   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:12:04.764998   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:12:04.839464   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:12:04.839494   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:12:04.839509   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:12:04.917939   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:12:04.917979   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:12:04.103388   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:06.103725   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:06.083003   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:08.581415   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:06.002052   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:08.500393   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:07.459389   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:12:07.472630   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:12:07.472691   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:12:07.507993   62996 cri.go:89] found id: ""
	I0914 18:12:07.508029   62996 logs.go:276] 0 containers: []
	W0914 18:12:07.508040   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:12:07.508047   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:12:07.508110   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:12:07.541083   62996 cri.go:89] found id: ""
	I0914 18:12:07.541108   62996 logs.go:276] 0 containers: []
	W0914 18:12:07.541116   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:12:07.541121   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:12:07.541184   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:12:07.574973   62996 cri.go:89] found id: ""
	I0914 18:12:07.574995   62996 logs.go:276] 0 containers: []
	W0914 18:12:07.575003   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:12:07.575008   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:12:07.575052   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:12:07.610166   62996 cri.go:89] found id: ""
	I0914 18:12:07.610189   62996 logs.go:276] 0 containers: []
	W0914 18:12:07.610196   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:12:07.610202   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:12:07.610247   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:12:07.643090   62996 cri.go:89] found id: ""
	I0914 18:12:07.643118   62996 logs.go:276] 0 containers: []
	W0914 18:12:07.643129   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:12:07.643140   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:12:07.643201   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:12:07.676788   62996 cri.go:89] found id: ""
	I0914 18:12:07.676814   62996 logs.go:276] 0 containers: []
	W0914 18:12:07.676825   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:12:07.676832   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:12:07.676895   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:12:07.714122   62996 cri.go:89] found id: ""
	I0914 18:12:07.714147   62996 logs.go:276] 0 containers: []
	W0914 18:12:07.714173   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:12:07.714179   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:12:07.714226   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:12:07.748168   62996 cri.go:89] found id: ""
	I0914 18:12:07.748193   62996 logs.go:276] 0 containers: []
	W0914 18:12:07.748204   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:12:07.748214   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:12:07.748230   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:12:07.784739   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:12:07.784766   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:12:07.833431   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:12:07.833467   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:12:07.846072   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:12:07.846100   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:12:07.912540   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:12:07.912560   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:12:07.912584   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:12:08.602880   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:10.604231   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:10.582647   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:13.082818   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:10.500953   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:13.001310   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:10.488543   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:12:10.502119   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:12:10.502203   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:12:10.535390   62996 cri.go:89] found id: ""
	I0914 18:12:10.535420   62996 logs.go:276] 0 containers: []
	W0914 18:12:10.535429   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:12:10.535435   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:12:10.535487   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:12:10.572013   62996 cri.go:89] found id: ""
	I0914 18:12:10.572044   62996 logs.go:276] 0 containers: []
	W0914 18:12:10.572052   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:12:10.572057   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:12:10.572105   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:12:10.613597   62996 cri.go:89] found id: ""
	I0914 18:12:10.613621   62996 logs.go:276] 0 containers: []
	W0914 18:12:10.613628   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:12:10.613634   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:12:10.613693   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:12:10.646086   62996 cri.go:89] found id: ""
	I0914 18:12:10.646116   62996 logs.go:276] 0 containers: []
	W0914 18:12:10.646127   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:12:10.646134   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:12:10.646219   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:12:10.679228   62996 cri.go:89] found id: ""
	I0914 18:12:10.679261   62996 logs.go:276] 0 containers: []
	W0914 18:12:10.679273   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:12:10.679281   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:12:10.679340   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:12:10.713321   62996 cri.go:89] found id: ""
	I0914 18:12:10.713350   62996 logs.go:276] 0 containers: []
	W0914 18:12:10.713359   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:12:10.713365   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:12:10.713413   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:12:10.757767   62996 cri.go:89] found id: ""
	I0914 18:12:10.757794   62996 logs.go:276] 0 containers: []
	W0914 18:12:10.757802   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:12:10.757809   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:12:10.757854   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:12:10.797709   62996 cri.go:89] found id: ""
	I0914 18:12:10.797731   62996 logs.go:276] 0 containers: []
	W0914 18:12:10.797739   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:12:10.797747   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:12:10.797757   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:12:10.848431   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:12:10.848474   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:12:10.862205   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:12:10.862239   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:12:10.935215   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:12:10.935242   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:12:10.935260   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:12:11.019021   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:12:11.019056   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:12:13.560773   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:12:13.574835   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:12:13.574899   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:12:13.613543   62996 cri.go:89] found id: ""
	I0914 18:12:13.613569   62996 logs.go:276] 0 containers: []
	W0914 18:12:13.613582   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:12:13.613587   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:12:13.613646   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:12:13.650721   62996 cri.go:89] found id: ""
	I0914 18:12:13.650755   62996 logs.go:276] 0 containers: []
	W0914 18:12:13.650767   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:12:13.650775   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:12:13.650836   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:12:13.684269   62996 cri.go:89] found id: ""
	I0914 18:12:13.684299   62996 logs.go:276] 0 containers: []
	W0914 18:12:13.684310   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:12:13.684317   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:12:13.684376   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:12:13.726440   62996 cri.go:89] found id: ""
	I0914 18:12:13.726474   62996 logs.go:276] 0 containers: []
	W0914 18:12:13.726486   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:12:13.726503   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:12:13.726567   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:12:13.760835   62996 cri.go:89] found id: ""
	I0914 18:12:13.760865   62996 logs.go:276] 0 containers: []
	W0914 18:12:13.760876   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:12:13.760884   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:12:13.760957   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:12:13.801341   62996 cri.go:89] found id: ""
	I0914 18:12:13.801375   62996 logs.go:276] 0 containers: []
	W0914 18:12:13.801386   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:12:13.801394   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:12:13.801456   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:12:13.834307   62996 cri.go:89] found id: ""
	I0914 18:12:13.834332   62996 logs.go:276] 0 containers: []
	W0914 18:12:13.834350   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:12:13.834357   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:12:13.834439   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:12:13.868838   62996 cri.go:89] found id: ""
	I0914 18:12:13.868871   62996 logs.go:276] 0 containers: []
	W0914 18:12:13.868880   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:12:13.868889   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:12:13.868900   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:12:13.919867   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:12:13.919906   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:12:13.933383   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:12:13.933423   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:12:14.010559   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:12:14.010592   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:12:14.010606   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:12:14.087876   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:12:14.087913   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:12:13.103254   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:15.103641   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:15.083238   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:17.582387   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:15.501029   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:17.505028   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:20.001929   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:16.630473   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:12:16.643114   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:12:16.643196   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:12:16.680922   62996 cri.go:89] found id: ""
	I0914 18:12:16.680954   62996 logs.go:276] 0 containers: []
	W0914 18:12:16.680962   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:12:16.680968   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:12:16.681015   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:12:16.715549   62996 cri.go:89] found id: ""
	I0914 18:12:16.715582   62996 logs.go:276] 0 containers: []
	W0914 18:12:16.715592   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:12:16.715598   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:12:16.715666   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:12:16.753928   62996 cri.go:89] found id: ""
	I0914 18:12:16.753951   62996 logs.go:276] 0 containers: []
	W0914 18:12:16.753962   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:12:16.753969   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:12:16.754033   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:12:16.787677   62996 cri.go:89] found id: ""
	I0914 18:12:16.787705   62996 logs.go:276] 0 containers: []
	W0914 18:12:16.787716   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:12:16.787723   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:12:16.787776   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:12:16.823638   62996 cri.go:89] found id: ""
	I0914 18:12:16.823667   62996 logs.go:276] 0 containers: []
	W0914 18:12:16.823678   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:12:16.823686   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:12:16.823748   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:12:16.860204   62996 cri.go:89] found id: ""
	I0914 18:12:16.860238   62996 logs.go:276] 0 containers: []
	W0914 18:12:16.860249   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:12:16.860257   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:12:16.860329   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:12:16.898802   62996 cri.go:89] found id: ""
	I0914 18:12:16.898827   62996 logs.go:276] 0 containers: []
	W0914 18:12:16.898837   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:12:16.898854   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:12:16.898941   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:12:16.932719   62996 cri.go:89] found id: ""
	I0914 18:12:16.932745   62996 logs.go:276] 0 containers: []
	W0914 18:12:16.932753   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:12:16.932762   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:12:16.932779   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:12:16.986217   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:12:16.986257   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:12:17.003243   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:12:17.003278   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:12:17.071374   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:12:17.071397   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:12:17.071409   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:12:17.152058   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:12:17.152112   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:12:19.717782   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:12:19.731122   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:12:19.731199   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:12:19.769042   62996 cri.go:89] found id: ""
	I0914 18:12:19.769070   62996 logs.go:276] 0 containers: []
	W0914 18:12:19.769079   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:12:19.769084   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:12:19.769154   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:12:19.804666   62996 cri.go:89] found id: ""
	I0914 18:12:19.804691   62996 logs.go:276] 0 containers: []
	W0914 18:12:19.804698   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:12:19.804704   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:12:19.804761   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:12:19.838705   62996 cri.go:89] found id: ""
	I0914 18:12:19.838729   62996 logs.go:276] 0 containers: []
	W0914 18:12:19.838738   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:12:19.838744   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:12:19.838790   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:12:19.873412   62996 cri.go:89] found id: ""
	I0914 18:12:19.873441   62996 logs.go:276] 0 containers: []
	W0914 18:12:19.873449   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:12:19.873455   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:12:19.873535   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:12:19.917706   62996 cri.go:89] found id: ""
	I0914 18:12:19.917734   62996 logs.go:276] 0 containers: []
	W0914 18:12:19.917746   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:12:19.917754   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:12:19.917813   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:12:19.956149   62996 cri.go:89] found id: ""
	I0914 18:12:19.956177   62996 logs.go:276] 0 containers: []
	W0914 18:12:19.956188   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:12:19.956196   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:12:19.956255   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:12:19.988903   62996 cri.go:89] found id: ""
	I0914 18:12:19.988926   62996 logs.go:276] 0 containers: []
	W0914 18:12:19.988934   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:12:19.988939   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:12:19.988988   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:12:20.023785   62996 cri.go:89] found id: ""
	I0914 18:12:20.023814   62996 logs.go:276] 0 containers: []
	W0914 18:12:20.023823   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:12:20.023833   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:12:20.023846   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:12:20.036891   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:12:20.036918   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:12:20.112397   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:12:20.112422   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:12:20.112437   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:12:20.195767   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:12:20.195801   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:12:20.235439   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:12:20.235467   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:12:17.103996   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:19.603109   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:21.603150   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:20.083547   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:22.586009   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:22.002367   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:24.500394   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:22.784765   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:12:22.799193   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:12:22.799267   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:12:22.840939   62996 cri.go:89] found id: ""
	I0914 18:12:22.840974   62996 logs.go:276] 0 containers: []
	W0914 18:12:22.840983   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:12:22.840990   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:12:22.841051   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:12:22.878920   62996 cri.go:89] found id: ""
	I0914 18:12:22.878951   62996 logs.go:276] 0 containers: []
	W0914 18:12:22.878962   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:12:22.878970   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:12:22.879021   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:12:22.926127   62996 cri.go:89] found id: ""
	I0914 18:12:22.926175   62996 logs.go:276] 0 containers: []
	W0914 18:12:22.926187   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:12:22.926195   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:12:22.926250   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:12:22.972041   62996 cri.go:89] found id: ""
	I0914 18:12:22.972068   62996 logs.go:276] 0 containers: []
	W0914 18:12:22.972076   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:12:22.972082   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:12:22.972137   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:12:23.012662   62996 cri.go:89] found id: ""
	I0914 18:12:23.012694   62996 logs.go:276] 0 containers: []
	W0914 18:12:23.012705   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:12:23.012712   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:12:23.012772   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:12:23.058923   62996 cri.go:89] found id: ""
	I0914 18:12:23.058950   62996 logs.go:276] 0 containers: []
	W0914 18:12:23.058958   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:12:23.058963   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:12:23.059011   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:12:23.098275   62996 cri.go:89] found id: ""
	I0914 18:12:23.098308   62996 logs.go:276] 0 containers: []
	W0914 18:12:23.098320   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:12:23.098327   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:12:23.098380   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:12:23.133498   62996 cri.go:89] found id: ""
	I0914 18:12:23.133525   62996 logs.go:276] 0 containers: []
	W0914 18:12:23.133534   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:12:23.133542   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:12:23.133554   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:12:23.201430   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:12:23.201456   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:12:23.201470   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:12:23.282388   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:12:23.282424   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:12:23.319896   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:12:23.319924   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:12:23.373629   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:12:23.373664   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:12:23.603351   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:26.103668   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:25.082824   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:27.582534   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:27.001617   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:29.002224   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:25.887183   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:12:25.901089   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:12:25.901168   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:12:25.934112   62996 cri.go:89] found id: ""
	I0914 18:12:25.934138   62996 logs.go:276] 0 containers: []
	W0914 18:12:25.934147   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:12:25.934153   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:12:25.934210   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:12:25.969202   62996 cri.go:89] found id: ""
	I0914 18:12:25.969228   62996 logs.go:276] 0 containers: []
	W0914 18:12:25.969236   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:12:25.969242   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:12:25.969300   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:12:26.005516   62996 cri.go:89] found id: ""
	I0914 18:12:26.005537   62996 logs.go:276] 0 containers: []
	W0914 18:12:26.005545   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:12:26.005551   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:12:26.005622   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:12:26.039162   62996 cri.go:89] found id: ""
	I0914 18:12:26.039189   62996 logs.go:276] 0 containers: []
	W0914 18:12:26.039199   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:12:26.039206   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:12:26.039266   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:12:26.073626   62996 cri.go:89] found id: ""
	I0914 18:12:26.073660   62996 logs.go:276] 0 containers: []
	W0914 18:12:26.073674   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:12:26.073682   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:12:26.073752   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:12:26.112057   62996 cri.go:89] found id: ""
	I0914 18:12:26.112086   62996 logs.go:276] 0 containers: []
	W0914 18:12:26.112097   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:12:26.112104   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:12:26.112168   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:12:26.145874   62996 cri.go:89] found id: ""
	I0914 18:12:26.145903   62996 logs.go:276] 0 containers: []
	W0914 18:12:26.145915   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:12:26.145923   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:12:26.145978   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:12:26.178959   62996 cri.go:89] found id: ""
	I0914 18:12:26.178989   62996 logs.go:276] 0 containers: []
	W0914 18:12:26.178997   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:12:26.179005   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:12:26.179018   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:12:26.251132   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:12:26.251156   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:12:26.251174   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:12:26.327488   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:12:26.327528   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:12:26.368444   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:12:26.368471   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:12:26.422676   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:12:26.422715   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:12:28.936784   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:12:28.960435   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:12:28.960515   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:12:29.012679   62996 cri.go:89] found id: ""
	I0914 18:12:29.012710   62996 logs.go:276] 0 containers: []
	W0914 18:12:29.012721   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:12:29.012729   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:12:29.012786   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:12:29.045058   62996 cri.go:89] found id: ""
	I0914 18:12:29.045091   62996 logs.go:276] 0 containers: []
	W0914 18:12:29.045102   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:12:29.045115   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:12:29.045180   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:12:29.079176   62996 cri.go:89] found id: ""
	I0914 18:12:29.079202   62996 logs.go:276] 0 containers: []
	W0914 18:12:29.079209   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:12:29.079216   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:12:29.079279   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:12:29.114288   62996 cri.go:89] found id: ""
	I0914 18:12:29.114317   62996 logs.go:276] 0 containers: []
	W0914 18:12:29.114337   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:12:29.114344   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:12:29.114404   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:12:29.147554   62996 cri.go:89] found id: ""
	I0914 18:12:29.147578   62996 logs.go:276] 0 containers: []
	W0914 18:12:29.147586   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:12:29.147592   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:12:29.147653   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:12:29.181739   62996 cri.go:89] found id: ""
	I0914 18:12:29.181767   62996 logs.go:276] 0 containers: []
	W0914 18:12:29.181775   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:12:29.181781   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:12:29.181825   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:12:29.220328   62996 cri.go:89] found id: ""
	I0914 18:12:29.220356   62996 logs.go:276] 0 containers: []
	W0914 18:12:29.220364   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:12:29.220373   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:12:29.220429   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:12:29.250900   62996 cri.go:89] found id: ""
	I0914 18:12:29.250929   62996 logs.go:276] 0 containers: []
	W0914 18:12:29.250941   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:12:29.250951   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:12:29.250966   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:12:29.287790   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:12:29.287820   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:12:29.338153   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:12:29.338194   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:12:29.351520   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:12:29.351547   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:12:29.421429   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:12:29.421457   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:12:29.421471   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:12:28.104044   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:30.602717   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:30.083027   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:32.083454   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:34.582698   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:31.002459   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:33.500924   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:31.997578   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:12:32.011256   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:12:32.011331   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:12:32.043761   62996 cri.go:89] found id: ""
	I0914 18:12:32.043793   62996 logs.go:276] 0 containers: []
	W0914 18:12:32.043801   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:12:32.043806   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:12:32.043859   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:12:32.076497   62996 cri.go:89] found id: ""
	I0914 18:12:32.076526   62996 logs.go:276] 0 containers: []
	W0914 18:12:32.076536   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:12:32.076543   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:12:32.076609   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:12:32.115059   62996 cri.go:89] found id: ""
	I0914 18:12:32.115084   62996 logs.go:276] 0 containers: []
	W0914 18:12:32.115094   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:12:32.115100   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:12:32.115159   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:12:32.153078   62996 cri.go:89] found id: ""
	I0914 18:12:32.153109   62996 logs.go:276] 0 containers: []
	W0914 18:12:32.153124   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:12:32.153130   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:12:32.153179   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:12:32.190539   62996 cri.go:89] found id: ""
	I0914 18:12:32.190621   62996 logs.go:276] 0 containers: []
	W0914 18:12:32.190638   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:12:32.190647   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:12:32.190700   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:12:32.231917   62996 cri.go:89] found id: ""
	I0914 18:12:32.231941   62996 logs.go:276] 0 containers: []
	W0914 18:12:32.231949   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:12:32.231955   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:12:32.232013   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:12:32.266197   62996 cri.go:89] found id: ""
	I0914 18:12:32.266227   62996 logs.go:276] 0 containers: []
	W0914 18:12:32.266238   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:12:32.266245   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:12:32.266312   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:12:32.299357   62996 cri.go:89] found id: ""
	I0914 18:12:32.299387   62996 logs.go:276] 0 containers: []
	W0914 18:12:32.299398   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:12:32.299409   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:12:32.299424   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:12:32.353225   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:12:32.353268   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:12:32.368228   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:12:32.368280   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:12:32.447802   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:12:32.447829   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:12:32.447847   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:12:32.523749   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:12:32.523788   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:12:35.063750   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:12:35.078487   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:12:35.078565   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:12:35.112949   62996 cri.go:89] found id: ""
	I0914 18:12:35.112994   62996 logs.go:276] 0 containers: []
	W0914 18:12:35.113008   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:12:35.113015   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:12:35.113068   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:12:35.146890   62996 cri.go:89] found id: ""
	I0914 18:12:35.146921   62996 logs.go:276] 0 containers: []
	W0914 18:12:35.146933   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:12:35.146941   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:12:35.147019   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:12:35.181077   62996 cri.go:89] found id: ""
	I0914 18:12:35.181106   62996 logs.go:276] 0 containers: []
	W0914 18:12:35.181116   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:12:35.181123   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:12:35.181194   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:12:35.214142   62996 cri.go:89] found id: ""
	I0914 18:12:35.214191   62996 logs.go:276] 0 containers: []
	W0914 18:12:35.214203   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:12:35.214215   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:12:35.214275   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:12:35.246615   62996 cri.go:89] found id: ""
	I0914 18:12:35.246644   62996 logs.go:276] 0 containers: []
	W0914 18:12:35.246655   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:12:35.246662   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:12:35.246722   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:12:35.278996   62996 cri.go:89] found id: ""
	I0914 18:12:35.279027   62996 logs.go:276] 0 containers: []
	W0914 18:12:35.279038   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:12:35.279047   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:12:35.279104   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:12:35.312612   62996 cri.go:89] found id: ""
	I0914 18:12:35.312641   62996 logs.go:276] 0 containers: []
	W0914 18:12:35.312650   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:12:35.312655   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:12:35.312711   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:12:32.603673   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:35.103528   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:37.081632   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:39.082269   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:35.501391   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:38.000592   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:40.001479   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:35.347717   62996 cri.go:89] found id: ""
	I0914 18:12:35.347741   62996 logs.go:276] 0 containers: []
	W0914 18:12:35.347749   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:12:35.347757   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:12:35.347767   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:12:35.389062   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:12:35.389090   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:12:35.437235   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:12:35.437277   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:12:35.452236   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:12:35.452275   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:12:35.523334   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:12:35.523371   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:12:35.523396   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:12:38.105613   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:12:38.119147   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:12:38.119214   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:12:38.158373   62996 cri.go:89] found id: ""
	I0914 18:12:38.158397   62996 logs.go:276] 0 containers: []
	W0914 18:12:38.158404   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:12:38.158410   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:12:38.158467   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:12:38.192376   62996 cri.go:89] found id: ""
	I0914 18:12:38.192409   62996 logs.go:276] 0 containers: []
	W0914 18:12:38.192421   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:12:38.192429   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:12:38.192490   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:12:38.230390   62996 cri.go:89] found id: ""
	I0914 18:12:38.230413   62996 logs.go:276] 0 containers: []
	W0914 18:12:38.230422   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:12:38.230427   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:12:38.230476   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:12:38.266608   62996 cri.go:89] found id: ""
	I0914 18:12:38.266634   62996 logs.go:276] 0 containers: []
	W0914 18:12:38.266642   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:12:38.266648   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:12:38.266704   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:12:38.299437   62996 cri.go:89] found id: ""
	I0914 18:12:38.299462   62996 logs.go:276] 0 containers: []
	W0914 18:12:38.299471   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:12:38.299477   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:12:38.299548   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:12:38.331092   62996 cri.go:89] found id: ""
	I0914 18:12:38.331119   62996 logs.go:276] 0 containers: []
	W0914 18:12:38.331128   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:12:38.331135   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:12:38.331194   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:12:38.364447   62996 cri.go:89] found id: ""
	I0914 18:12:38.364475   62996 logs.go:276] 0 containers: []
	W0914 18:12:38.364485   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:12:38.364491   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:12:38.364564   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:12:38.396977   62996 cri.go:89] found id: ""
	I0914 18:12:38.397001   62996 logs.go:276] 0 containers: []
	W0914 18:12:38.397011   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:12:38.397022   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:12:38.397036   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:12:38.477413   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:12:38.477449   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:12:38.515003   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:12:38.515031   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:12:38.567177   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:12:38.567222   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:12:38.580840   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:12:38.580876   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:12:38.654520   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:12:37.602537   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:39.603422   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:41.082861   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:43.583680   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:42.002259   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:44.500927   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:41.154728   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:12:41.167501   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:12:41.167578   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:12:41.200209   62996 cri.go:89] found id: ""
	I0914 18:12:41.200243   62996 logs.go:276] 0 containers: []
	W0914 18:12:41.200254   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:12:41.200260   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:12:41.200309   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:12:41.232386   62996 cri.go:89] found id: ""
	I0914 18:12:41.232415   62996 logs.go:276] 0 containers: []
	W0914 18:12:41.232425   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:12:41.232432   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:12:41.232515   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:12:41.268259   62996 cri.go:89] found id: ""
	I0914 18:12:41.268285   62996 logs.go:276] 0 containers: []
	W0914 18:12:41.268295   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:12:41.268303   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:12:41.268374   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:12:41.299952   62996 cri.go:89] found id: ""
	I0914 18:12:41.299984   62996 logs.go:276] 0 containers: []
	W0914 18:12:41.299992   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:12:41.299998   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:12:41.300055   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:12:41.331851   62996 cri.go:89] found id: ""
	I0914 18:12:41.331877   62996 logs.go:276] 0 containers: []
	W0914 18:12:41.331886   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:12:41.331892   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:12:41.331941   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:12:41.373747   62996 cri.go:89] found id: ""
	I0914 18:12:41.373778   62996 logs.go:276] 0 containers: []
	W0914 18:12:41.373789   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:12:41.373797   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:12:41.373847   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:12:41.410186   62996 cri.go:89] found id: ""
	I0914 18:12:41.410217   62996 logs.go:276] 0 containers: []
	W0914 18:12:41.410228   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:12:41.410235   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:12:41.410296   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:12:41.443926   62996 cri.go:89] found id: ""
	I0914 18:12:41.443961   62996 logs.go:276] 0 containers: []
	W0914 18:12:41.443972   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:12:41.443983   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:12:41.443998   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:12:41.457188   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:12:41.457226   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:12:41.525140   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:12:41.525165   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:12:41.525179   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:12:41.603829   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:12:41.603858   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:12:41.641462   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:12:41.641495   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:12:44.194009   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:12:44.207043   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:12:44.207112   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:12:44.240082   62996 cri.go:89] found id: ""
	I0914 18:12:44.240104   62996 logs.go:276] 0 containers: []
	W0914 18:12:44.240112   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:12:44.240117   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:12:44.240177   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:12:44.271608   62996 cri.go:89] found id: ""
	I0914 18:12:44.271642   62996 logs.go:276] 0 containers: []
	W0914 18:12:44.271653   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:12:44.271660   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:12:44.271721   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:12:44.308447   62996 cri.go:89] found id: ""
	I0914 18:12:44.308475   62996 logs.go:276] 0 containers: []
	W0914 18:12:44.308484   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:12:44.308490   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:12:44.308552   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:12:44.340399   62996 cri.go:89] found id: ""
	I0914 18:12:44.340430   62996 logs.go:276] 0 containers: []
	W0914 18:12:44.340440   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:12:44.340446   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:12:44.340502   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:12:44.374078   62996 cri.go:89] found id: ""
	I0914 18:12:44.374104   62996 logs.go:276] 0 containers: []
	W0914 18:12:44.374112   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:12:44.374118   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:12:44.374190   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:12:44.408933   62996 cri.go:89] found id: ""
	I0914 18:12:44.408963   62996 logs.go:276] 0 containers: []
	W0914 18:12:44.408974   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:12:44.408982   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:12:44.409040   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:12:44.444019   62996 cri.go:89] found id: ""
	I0914 18:12:44.444046   62996 logs.go:276] 0 containers: []
	W0914 18:12:44.444063   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:12:44.444070   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:12:44.444126   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:12:44.477033   62996 cri.go:89] found id: ""
	I0914 18:12:44.477058   62996 logs.go:276] 0 containers: []
	W0914 18:12:44.477066   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:12:44.477075   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:12:44.477086   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:12:44.530118   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:12:44.530151   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:12:44.543295   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:12:44.543327   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:12:44.614448   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:12:44.614474   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:12:44.614488   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:12:44.690708   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:12:44.690744   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:12:42.103521   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:44.603744   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:46.082955   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:48.576914   62554 pod_ready.go:82] duration metric: took 4m0.000963266s for pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace to be "Ready" ...
	E0914 18:12:48.576953   62554 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace to be "Ready" (will not retry!)
	I0914 18:12:48.576972   62554 pod_ready.go:39] duration metric: took 4m11.061091965s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 18:12:48.576996   62554 kubeadm.go:597] duration metric: took 4m18.578277603s to restartPrimaryControlPlane
	W0914 18:12:48.577052   62554 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0914 18:12:48.577082   62554 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0914 18:12:46.501278   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:49.001649   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:47.229658   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:12:47.242715   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:12:47.242785   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:12:47.278275   62996 cri.go:89] found id: ""
	I0914 18:12:47.278298   62996 logs.go:276] 0 containers: []
	W0914 18:12:47.278305   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:12:47.278311   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:12:47.278365   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:12:47.313954   62996 cri.go:89] found id: ""
	I0914 18:12:47.313977   62996 logs.go:276] 0 containers: []
	W0914 18:12:47.313985   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:12:47.313991   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:12:47.314045   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:12:47.350944   62996 cri.go:89] found id: ""
	I0914 18:12:47.350972   62996 logs.go:276] 0 containers: []
	W0914 18:12:47.350983   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:12:47.350990   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:12:47.351052   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:12:47.384810   62996 cri.go:89] found id: ""
	I0914 18:12:47.384838   62996 logs.go:276] 0 containers: []
	W0914 18:12:47.384850   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:12:47.384857   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:12:47.384918   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:12:47.420380   62996 cri.go:89] found id: ""
	I0914 18:12:47.420406   62996 logs.go:276] 0 containers: []
	W0914 18:12:47.420419   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:12:47.420425   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:12:47.420476   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:12:47.453967   62996 cri.go:89] found id: ""
	I0914 18:12:47.453995   62996 logs.go:276] 0 containers: []
	W0914 18:12:47.454003   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:12:47.454009   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:12:47.454060   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:12:47.488588   62996 cri.go:89] found id: ""
	I0914 18:12:47.488616   62996 logs.go:276] 0 containers: []
	W0914 18:12:47.488627   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:12:47.488633   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:12:47.488696   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:12:47.522970   62996 cri.go:89] found id: ""
	I0914 18:12:47.523004   62996 logs.go:276] 0 containers: []
	W0914 18:12:47.523015   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:12:47.523025   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:12:47.523039   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:12:47.575977   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:12:47.576026   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:12:47.590854   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:12:47.590884   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:12:47.662149   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:12:47.662200   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:12:47.662215   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:12:47.740447   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:12:47.740482   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:12:50.279512   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:12:50.292294   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:12:50.292377   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:12:50.330928   62996 cri.go:89] found id: ""
	I0914 18:12:50.330960   62996 logs.go:276] 0 containers: []
	W0914 18:12:50.330972   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:12:50.330980   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:12:50.331036   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:12:47.103834   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:49.104052   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:51.603479   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:51.500469   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:53.500885   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:50.363656   62996 cri.go:89] found id: ""
	I0914 18:12:50.363687   62996 logs.go:276] 0 containers: []
	W0914 18:12:50.363696   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:12:50.363702   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:12:50.363756   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:12:50.395071   62996 cri.go:89] found id: ""
	I0914 18:12:50.395096   62996 logs.go:276] 0 containers: []
	W0914 18:12:50.395107   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:12:50.395113   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:12:50.395172   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:12:50.428461   62996 cri.go:89] found id: ""
	I0914 18:12:50.428487   62996 logs.go:276] 0 containers: []
	W0914 18:12:50.428495   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:12:50.428502   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:12:50.428549   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:12:50.461059   62996 cri.go:89] found id: ""
	I0914 18:12:50.461089   62996 logs.go:276] 0 containers: []
	W0914 18:12:50.461098   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:12:50.461105   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:12:50.461155   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:12:50.495447   62996 cri.go:89] found id: ""
	I0914 18:12:50.495481   62996 logs.go:276] 0 containers: []
	W0914 18:12:50.495492   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:12:50.495500   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:12:50.495574   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:12:50.529535   62996 cri.go:89] found id: ""
	I0914 18:12:50.529563   62996 logs.go:276] 0 containers: []
	W0914 18:12:50.529573   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:12:50.529580   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:12:50.529640   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:12:50.564648   62996 cri.go:89] found id: ""
	I0914 18:12:50.564679   62996 logs.go:276] 0 containers: []
	W0914 18:12:50.564689   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:12:50.564699   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:12:50.564710   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:12:50.639039   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:12:50.639066   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:12:50.639081   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:12:50.715636   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:12:50.715675   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:12:50.752973   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:12:50.753002   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:12:50.804654   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:12:50.804692   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:12:53.319420   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:12:53.332322   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:12:53.332414   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:12:53.370250   62996 cri.go:89] found id: ""
	I0914 18:12:53.370287   62996 logs.go:276] 0 containers: []
	W0914 18:12:53.370298   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:12:53.370306   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:12:53.370359   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:12:53.405394   62996 cri.go:89] found id: ""
	I0914 18:12:53.405422   62996 logs.go:276] 0 containers: []
	W0914 18:12:53.405434   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:12:53.405442   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:12:53.405501   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:12:53.439653   62996 cri.go:89] found id: ""
	I0914 18:12:53.439684   62996 logs.go:276] 0 containers: []
	W0914 18:12:53.439693   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:12:53.439699   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:12:53.439747   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:12:53.472491   62996 cri.go:89] found id: ""
	I0914 18:12:53.472520   62996 logs.go:276] 0 containers: []
	W0914 18:12:53.472531   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:12:53.472537   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:12:53.472598   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:12:53.506837   62996 cri.go:89] found id: ""
	I0914 18:12:53.506862   62996 logs.go:276] 0 containers: []
	W0914 18:12:53.506870   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:12:53.506877   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:12:53.506940   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:12:53.538229   62996 cri.go:89] found id: ""
	I0914 18:12:53.538256   62996 logs.go:276] 0 containers: []
	W0914 18:12:53.538267   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:12:53.538274   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:12:53.538340   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:12:53.570628   62996 cri.go:89] found id: ""
	I0914 18:12:53.570654   62996 logs.go:276] 0 containers: []
	W0914 18:12:53.570665   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:12:53.570672   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:12:53.570736   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:12:53.606147   62996 cri.go:89] found id: ""
	I0914 18:12:53.606188   62996 logs.go:276] 0 containers: []
	W0914 18:12:53.606199   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:12:53.606210   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:12:53.606236   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:12:53.675807   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:12:53.675829   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:12:53.675844   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:12:53.758491   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:12:53.758530   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:12:53.796006   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:12:53.796038   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:12:53.844935   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:12:53.844972   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:12:53.604109   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:56.104639   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:56.360696   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:12:56.374916   62996 kubeadm.go:597] duration metric: took 4m2.856242026s to restartPrimaryControlPlane
	W0914 18:12:56.374982   62996 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0914 18:12:56.375003   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0914 18:12:57.043509   62996 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 18:12:57.059022   62996 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0914 18:12:57.070295   62996 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0914 18:12:57.080854   62996 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0914 18:12:57.080875   62996 kubeadm.go:157] found existing configuration files:
	
	I0914 18:12:57.080917   62996 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0914 18:12:57.091221   62996 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0914 18:12:57.091320   62996 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0914 18:12:57.102011   62996 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0914 18:12:57.111389   62996 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0914 18:12:57.111451   62996 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0914 18:12:57.120508   62996 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0914 18:12:57.129086   62996 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0914 18:12:57.129162   62996 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0914 18:12:57.138193   62996 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0914 18:12:57.146637   62996 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0914 18:12:57.146694   62996 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0914 18:12:57.155659   62996 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0914 18:12:57.230872   62996 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0914 18:12:57.230955   62996 kubeadm.go:310] [preflight] Running pre-flight checks
	I0914 18:12:57.369118   62996 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0914 18:12:57.369267   62996 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0914 18:12:57.369422   62996 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0914 18:12:57.560020   62996 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0914 18:12:57.561972   62996 out.go:235]   - Generating certificates and keys ...
	I0914 18:12:57.562086   62996 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0914 18:12:57.562180   62996 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0914 18:12:57.562311   62996 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0914 18:12:57.562370   62996 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0914 18:12:57.562426   62996 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0914 18:12:57.562473   62996 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0914 18:12:57.562562   62996 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0914 18:12:57.562654   62996 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0914 18:12:57.563036   62996 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0914 18:12:57.563429   62996 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0914 18:12:57.563514   62996 kubeadm.go:310] [certs] Using the existing "sa" key
	I0914 18:12:57.563592   62996 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0914 18:12:57.677534   62996 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0914 18:12:57.910852   62996 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0914 18:12:58.037495   62996 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0914 18:12:58.325552   62996 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0914 18:12:58.339574   62996 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0914 18:12:58.340671   62996 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0914 18:12:58.340740   62996 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0914 18:12:58.485582   62996 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0914 18:12:55.501202   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:57.501413   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:13:00.000020   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:58.488706   62996 out.go:235]   - Booting up control plane ...
	I0914 18:12:58.488863   62996 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0914 18:12:58.496924   62996 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0914 18:12:58.499125   62996 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0914 18:12:58.500762   62996 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0914 18:12:58.504049   62996 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0914 18:12:58.604461   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:13:01.102988   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:13:02.001195   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:13:04.001938   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:13:03.603700   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:13:06.103294   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:13:06.501564   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:13:09.002049   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:13:08.604408   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:13:11.103401   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:13:14.788734   62554 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.2116254s)
	I0914 18:13:14.788816   62554 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 18:13:14.810488   62554 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0914 18:13:14.827773   62554 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0914 18:13:14.846933   62554 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0914 18:13:14.846958   62554 kubeadm.go:157] found existing configuration files:
	
	I0914 18:13:14.847011   62554 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0914 18:13:14.859886   62554 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0914 18:13:14.859954   62554 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0914 18:13:14.882400   62554 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0914 18:13:14.896700   62554 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0914 18:13:14.896779   62554 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0914 18:13:14.908567   62554 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0914 18:13:14.920718   62554 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0914 18:13:14.920791   62554 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0914 18:13:14.930849   62554 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0914 18:13:14.940757   62554 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0914 18:13:14.940829   62554 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0914 18:13:14.950828   62554 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0914 18:13:15.000219   62554 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0914 18:13:15.000292   62554 kubeadm.go:310] [preflight] Running pre-flight checks
	I0914 18:13:15.116662   62554 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0914 18:13:15.116830   62554 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0914 18:13:15.116937   62554 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0914 18:13:15.128493   62554 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0914 18:13:11.002219   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:13:13.500397   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:13:15.130231   62554 out.go:235]   - Generating certificates and keys ...
	I0914 18:13:15.130322   62554 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0914 18:13:15.130412   62554 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0914 18:13:15.130513   62554 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0914 18:13:15.130642   62554 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0914 18:13:15.130762   62554 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0914 18:13:15.130842   62554 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0914 18:13:15.130927   62554 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0914 18:13:15.131020   62554 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0914 18:13:15.131131   62554 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0914 18:13:15.131235   62554 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0914 18:13:15.131325   62554 kubeadm.go:310] [certs] Using the existing "sa" key
	I0914 18:13:15.131417   62554 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0914 18:13:15.454691   62554 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0914 18:13:15.653046   62554 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0914 18:13:15.704029   62554 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0914 18:13:15.846280   62554 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0914 18:13:15.926881   62554 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0914 18:13:15.927633   62554 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0914 18:13:15.932596   62554 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0914 18:13:13.602971   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:13:15.603335   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:13:15.934499   62554 out.go:235]   - Booting up control plane ...
	I0914 18:13:15.934626   62554 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0914 18:13:15.934761   62554 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0914 18:13:15.934913   62554 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0914 18:13:15.952982   62554 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0914 18:13:15.961449   62554 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0914 18:13:15.961526   62554 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0914 18:13:16.102126   62554 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0914 18:13:16.102335   62554 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0914 18:13:16.604217   62554 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.082287ms
	I0914 18:13:16.604330   62554 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0914 18:13:15.501231   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:13:17.501427   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:13:19.501641   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:13:21.609408   62554 kubeadm.go:310] [api-check] The API server is healthy after 5.002255971s
	I0914 18:13:21.622798   62554 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0914 18:13:21.637103   62554 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0914 18:13:21.676498   62554 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0914 18:13:21.676739   62554 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-044534 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0914 18:13:21.697522   62554 kubeadm.go:310] [bootstrap-token] Using token: oo4rrp.xx4py1wjxiu1i6la
	I0914 18:13:17.604060   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:13:20.103115   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:13:21.699311   62554 out.go:235]   - Configuring RBAC rules ...
	I0914 18:13:21.699462   62554 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0914 18:13:21.711614   62554 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0914 18:13:21.721449   62554 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0914 18:13:21.727812   62554 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0914 18:13:21.733486   62554 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0914 18:13:21.747521   62554 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0914 18:13:22.014670   62554 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0914 18:13:22.463865   62554 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0914 18:13:23.016165   62554 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0914 18:13:23.016195   62554 kubeadm.go:310] 
	I0914 18:13:23.016257   62554 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0914 18:13:23.016265   62554 kubeadm.go:310] 
	I0914 18:13:23.016385   62554 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0914 18:13:23.016415   62554 kubeadm.go:310] 
	I0914 18:13:23.016456   62554 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0914 18:13:23.016542   62554 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0914 18:13:23.016627   62554 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0914 18:13:23.016637   62554 kubeadm.go:310] 
	I0914 18:13:23.016753   62554 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0914 18:13:23.016778   62554 kubeadm.go:310] 
	I0914 18:13:23.016850   62554 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0914 18:13:23.016860   62554 kubeadm.go:310] 
	I0914 18:13:23.016937   62554 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0914 18:13:23.017051   62554 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0914 18:13:23.017142   62554 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0914 18:13:23.017156   62554 kubeadm.go:310] 
	I0914 18:13:23.017284   62554 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0914 18:13:23.017403   62554 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0914 18:13:23.017419   62554 kubeadm.go:310] 
	I0914 18:13:23.017533   62554 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token oo4rrp.xx4py1wjxiu1i6la \
	I0914 18:13:23.017664   62554 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:30a8b6e98108730ec92a246ad3f4e746211e79f910581e46cd69c4cd78384600 \
	I0914 18:13:23.017700   62554 kubeadm.go:310] 	--control-plane 
	I0914 18:13:23.017710   62554 kubeadm.go:310] 
	I0914 18:13:23.017821   62554 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0914 18:13:23.017832   62554 kubeadm.go:310] 
	I0914 18:13:23.017944   62554 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token oo4rrp.xx4py1wjxiu1i6la \
	I0914 18:13:23.018104   62554 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:30a8b6e98108730ec92a246ad3f4e746211e79f910581e46cd69c4cd78384600 
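The join commands printed above carry a --discovery-token-ca-cert-hash value; for kubeadm that value is "sha256:" followed by the hex SHA-256 of the cluster CA certificate's Subject Public Key Info. A short Go sketch that recomputes it on the node; the ca.crt path under the certificateDir reported earlier in this log is an assumption:

package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/hex"
	"encoding/pem"
	"fmt"
	"os"
)

// Prints the value kubeadm expects for --discovery-token-ca-cert-hash.
func main() {
	pemBytes, err := os.ReadFile("/var/lib/minikube/certs/ca.crt") // assumed path under the certificateDir from the log
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		panic("no PEM block found in ca.crt")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
	fmt.Println("sha256:" + hex.EncodeToString(sum[:]))
}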
	I0914 18:13:23.019098   62554 kubeadm.go:310] W0914 18:13:14.968906    2543 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0914 18:13:23.019512   62554 kubeadm.go:310] W0914 18:13:14.970621    2543 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0914 18:13:23.019672   62554 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0914 18:13:23.019690   62554 cni.go:84] Creating CNI manager for ""
	I0914 18:13:23.019704   62554 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0914 18:13:23.021459   62554 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0914 18:13:23.022517   62554 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0914 18:13:23.037352   62554 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
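Here minikube picks the bridge CNI for the kvm2 driver with the crio runtime and copies a 496-byte conflist into /etc/cni/net.d. The exact file content is not shown in the log, so the sketch below only illustrates the general shape of a bridge plus host-local IPAM conflist; the subnet and plugin options are assumptions, not minikube's shipped configuration:

package main

import (
	"fmt"
	"os"
)

// Illustrative bridge CNI config in the spirit of /etc/cni/net.d/1-k8s.conflist.
const conflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}
`

func main() {
	if err := os.MkdirAll("/etc/cni/net.d", 0o755); err != nil {
		panic(err)
	}
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
		panic(err)
	}
	fmt.Println("wrote bridge CNI config")
}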
	I0914 18:13:23.062037   62554 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0914 18:13:23.062132   62554 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 18:13:23.062202   62554 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-044534 minikube.k8s.io/updated_at=2024_09_14T18_13_23_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=fbeeb744274463b05401c917e5ab21bbaf5ef95a minikube.k8s.io/name=embed-certs-044534 minikube.k8s.io/primary=true
	I0914 18:13:23.089789   62554 ops.go:34] apiserver oom_adj: -16
	I0914 18:13:23.246478   62554 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 18:13:23.747419   62554 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 18:13:24.247388   62554 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 18:13:24.746913   62554 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 18:13:21.502222   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:13:24.001757   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:13:25.247445   62554 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 18:13:25.747417   62554 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 18:13:26.247440   62554 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 18:13:26.747262   62554 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 18:13:26.847454   62554 kubeadm.go:1113] duration metric: took 3.78538549s to wait for elevateKubeSystemPrivileges
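The repeated `kubectl get sa default` runs above are a readiness poll: the elevateKubeSystemPrivileges step retries roughly every half second until the default service account exists, then proceeds. A minimal Go sketch of that loop, using the binary and kubeconfig paths shown in the log (the 500ms interval and 2-minute timeout are assumptions):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// Poll until the "default" service account can be fetched.
func main() {
	kubectl := "/var/lib/minikube/binaries/v1.31.1/kubectl"
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		cmd := exec.Command("sudo", kubectl, "get", "sa", "default",
			"--kubeconfig=/var/lib/minikube/kubeconfig")
		if err := cmd.Run(); err == nil {
			fmt.Println("default service account is present")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for the default service account")
}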
	I0914 18:13:26.847496   62554 kubeadm.go:394] duration metric: took 4m56.896825398s to StartCluster
	I0914 18:13:26.847521   62554 settings.go:142] acquiring lock: {Name:mkee31cd57c3a91eff581c48ba7961460fb3e0b1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 18:13:26.847618   62554 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19643-8806/kubeconfig
	I0914 18:13:26.850148   62554 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19643-8806/kubeconfig: {Name:mk850f3e2f3c2ef1e81c795ae72e22e459e27140 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 18:13:26.850488   62554 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.126 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0914 18:13:26.850562   62554 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0914 18:13:26.850672   62554 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-044534"
	I0914 18:13:26.850690   62554 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-044534"
	W0914 18:13:26.850703   62554 addons.go:243] addon storage-provisioner should already be in state true
	I0914 18:13:26.850715   62554 addons.go:69] Setting default-storageclass=true in profile "embed-certs-044534"
	I0914 18:13:26.850734   62554 host.go:66] Checking if "embed-certs-044534" exists ...
	I0914 18:13:26.850753   62554 config.go:182] Loaded profile config "embed-certs-044534": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0914 18:13:26.850752   62554 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-044534"
	I0914 18:13:26.850716   62554 addons.go:69] Setting metrics-server=true in profile "embed-certs-044534"
	I0914 18:13:26.850844   62554 addons.go:234] Setting addon metrics-server=true in "embed-certs-044534"
	W0914 18:13:26.850860   62554 addons.go:243] addon metrics-server should already be in state true
	I0914 18:13:26.850898   62554 host.go:66] Checking if "embed-certs-044534" exists ...
	I0914 18:13:26.851174   62554 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 18:13:26.851204   62554 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 18:13:26.851214   62554 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 18:13:26.851235   62554 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 18:13:26.851250   62554 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 18:13:26.851273   62554 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 18:13:26.852030   62554 out.go:177] * Verifying Kubernetes components...
	I0914 18:13:26.853580   62554 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 18:13:26.868084   62554 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33879
	I0914 18:13:26.868135   62554 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44049
	I0914 18:13:26.868700   62554 main.go:141] libmachine: () Calling .GetVersion
	I0914 18:13:26.868787   62554 main.go:141] libmachine: () Calling .GetVersion
	I0914 18:13:26.869251   62554 main.go:141] libmachine: Using API Version  1
	I0914 18:13:26.869282   62554 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 18:13:26.869637   62554 main.go:141] libmachine: () Calling .GetMachineName
	I0914 18:13:26.869650   62554 main.go:141] libmachine: Using API Version  1
	I0914 18:13:26.869714   62554 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 18:13:26.870039   62554 main.go:141] libmachine: () Calling .GetMachineName
	I0914 18:13:26.870232   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetState
	I0914 18:13:26.870396   62554 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 18:13:26.870454   62554 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 18:13:26.871718   62554 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33295
	I0914 18:13:26.872337   62554 main.go:141] libmachine: () Calling .GetVersion
	I0914 18:13:26.872842   62554 main.go:141] libmachine: Using API Version  1
	I0914 18:13:26.872870   62554 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 18:13:26.873227   62554 main.go:141] libmachine: () Calling .GetMachineName
	I0914 18:13:26.873942   62554 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 18:13:26.873989   62554 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 18:13:26.874235   62554 addons.go:234] Setting addon default-storageclass=true in "embed-certs-044534"
	W0914 18:13:26.874257   62554 addons.go:243] addon default-storageclass should already be in state true
	I0914 18:13:26.874287   62554 host.go:66] Checking if "embed-certs-044534" exists ...
	I0914 18:13:26.874674   62554 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 18:13:26.874721   62554 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 18:13:26.887685   62554 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38275
	I0914 18:13:26.888211   62554 main.go:141] libmachine: () Calling .GetVersion
	I0914 18:13:26.888735   62554 main.go:141] libmachine: Using API Version  1
	I0914 18:13:26.888753   62554 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 18:13:26.889060   62554 main.go:141] libmachine: () Calling .GetMachineName
	I0914 18:13:26.889233   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetState
	I0914 18:13:26.891040   62554 main.go:141] libmachine: (embed-certs-044534) Calling .DriverName
	I0914 18:13:26.892012   62554 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41983
	I0914 18:13:26.892352   62554 main.go:141] libmachine: () Calling .GetVersion
	I0914 18:13:26.892798   62554 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 18:13:26.892812   62554 main.go:141] libmachine: Using API Version  1
	I0914 18:13:26.892845   62554 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 18:13:26.893321   62554 main.go:141] libmachine: () Calling .GetMachineName
	I0914 18:13:26.893987   62554 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 18:13:26.894040   62554 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 18:13:26.894059   62554 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0914 18:13:26.894078   62554 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0914 18:13:26.894102   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetSSHHostname
	I0914 18:13:26.897218   62554 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39143
	I0914 18:13:26.897776   62554 main.go:141] libmachine: () Calling .GetVersion
	I0914 18:13:26.897932   62554 main.go:141] libmachine: (embed-certs-044534) DBG | domain embed-certs-044534 has defined MAC address 52:54:00:f7:d3:8e in network mk-embed-certs-044534
	I0914 18:13:26.898631   62554 main.go:141] libmachine: (embed-certs-044534) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:d3:8e", ip: ""} in network mk-embed-certs-044534: {Iface:virbr3 ExpiryTime:2024-09-14 19:00:16 +0000 UTC Type:0 Mac:52:54:00:f7:d3:8e Iaid: IPaddr:192.168.50.126 Prefix:24 Hostname:embed-certs-044534 Clientid:01:52:54:00:f7:d3:8e}
	I0914 18:13:26.898669   62554 main.go:141] libmachine: (embed-certs-044534) DBG | domain embed-certs-044534 has defined IP address 192.168.50.126 and MAC address 52:54:00:f7:d3:8e in network mk-embed-certs-044534
	I0914 18:13:26.899315   62554 main.go:141] libmachine: Using API Version  1
	I0914 18:13:26.899382   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetSSHPort
	I0914 18:13:26.899395   62554 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 18:13:26.899557   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetSSHKeyPath
	I0914 18:13:26.899698   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetSSHUsername
	I0914 18:13:26.899873   62554 sshutil.go:53] new ssh client: &{IP:192.168.50.126 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/embed-certs-044534/id_rsa Username:docker}
	I0914 18:13:26.900433   62554 main.go:141] libmachine: () Calling .GetMachineName
	I0914 18:13:26.900668   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetState
	I0914 18:13:26.902863   62554 main.go:141] libmachine: (embed-certs-044534) Calling .DriverName
	I0914 18:13:26.904569   62554 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0914 18:13:22.104620   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:13:24.603793   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:13:26.604247   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:13:26.905708   62554 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0914 18:13:26.905729   62554 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0914 18:13:26.905755   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetSSHHostname
	I0914 18:13:26.910848   62554 main.go:141] libmachine: (embed-certs-044534) DBG | domain embed-certs-044534 has defined MAC address 52:54:00:f7:d3:8e in network mk-embed-certs-044534
	I0914 18:13:26.911333   62554 main.go:141] libmachine: (embed-certs-044534) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:d3:8e", ip: ""} in network mk-embed-certs-044534: {Iface:virbr3 ExpiryTime:2024-09-14 19:00:16 +0000 UTC Type:0 Mac:52:54:00:f7:d3:8e Iaid: IPaddr:192.168.50.126 Prefix:24 Hostname:embed-certs-044534 Clientid:01:52:54:00:f7:d3:8e}
	I0914 18:13:26.911430   62554 main.go:141] libmachine: (embed-certs-044534) DBG | domain embed-certs-044534 has defined IP address 192.168.50.126 and MAC address 52:54:00:f7:d3:8e in network mk-embed-certs-044534
	I0914 18:13:26.911568   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetSSHPort
	I0914 18:13:26.911840   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetSSHKeyPath
	I0914 18:13:26.912025   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetSSHUsername
	I0914 18:13:26.912238   62554 sshutil.go:53] new ssh client: &{IP:192.168.50.126 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/embed-certs-044534/id_rsa Username:docker}
	I0914 18:13:26.912625   62554 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33289
	I0914 18:13:26.913014   62554 main.go:141] libmachine: () Calling .GetVersion
	I0914 18:13:26.913653   62554 main.go:141] libmachine: Using API Version  1
	I0914 18:13:26.913668   62554 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 18:13:26.914116   62554 main.go:141] libmachine: () Calling .GetMachineName
	I0914 18:13:26.914342   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetState
	I0914 18:13:26.916119   62554 main.go:141] libmachine: (embed-certs-044534) Calling .DriverName
	I0914 18:13:26.916332   62554 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0914 18:13:26.916350   62554 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0914 18:13:26.916369   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetSSHHostname
	I0914 18:13:26.920129   62554 main.go:141] libmachine: (embed-certs-044534) DBG | domain embed-certs-044534 has defined MAC address 52:54:00:f7:d3:8e in network mk-embed-certs-044534
	I0914 18:13:26.920769   62554 main.go:141] libmachine: (embed-certs-044534) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:d3:8e", ip: ""} in network mk-embed-certs-044534: {Iface:virbr3 ExpiryTime:2024-09-14 19:00:16 +0000 UTC Type:0 Mac:52:54:00:f7:d3:8e Iaid: IPaddr:192.168.50.126 Prefix:24 Hostname:embed-certs-044534 Clientid:01:52:54:00:f7:d3:8e}
	I0914 18:13:26.920791   62554 main.go:141] libmachine: (embed-certs-044534) DBG | domain embed-certs-044534 has defined IP address 192.168.50.126 and MAC address 52:54:00:f7:d3:8e in network mk-embed-certs-044534
	I0914 18:13:26.920971   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetSSHPort
	I0914 18:13:26.921170   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetSSHKeyPath
	I0914 18:13:26.921291   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetSSHUsername
	I0914 18:13:26.921413   62554 sshutil.go:53] new ssh client: &{IP:192.168.50.126 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/embed-certs-044534/id_rsa Username:docker}
	I0914 18:13:27.055184   62554 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0914 18:13:27.072683   62554 node_ready.go:35] waiting up to 6m0s for node "embed-certs-044534" to be "Ready" ...
	I0914 18:13:27.084289   62554 node_ready.go:49] node "embed-certs-044534" has status "Ready":"True"
	I0914 18:13:27.084317   62554 node_ready.go:38] duration metric: took 11.599354ms for node "embed-certs-044534" to be "Ready" ...
	I0914 18:13:27.084326   62554 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 18:13:27.090428   62554 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-044534" in "kube-system" namespace to be "Ready" ...
	I0914 18:13:27.258854   62554 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0914 18:13:27.260576   62554 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0914 18:13:27.261092   62554 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0914 18:13:27.261115   62554 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0914 18:13:27.332882   62554 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0914 18:13:27.332914   62554 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0914 18:13:27.400159   62554 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0914 18:13:27.400193   62554 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0914 18:13:27.486731   62554 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
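The addon manifests are first copied into /etc/kubernetes/addons and then applied in a single kubectl invocation with KUBECONFIG pointing at the in-VM kubeconfig, as the command above shows. A small Go sketch of that apply step, using the same paths; it is only meaningful when run inside the minikube guest:

package main

import (
	"fmt"
	"os/exec"
)

// Apply the metrics-server manifests in one kubectl call, as in the log.
func main() {
	manifests := []string{
		"/etc/kubernetes/addons/metrics-apiservice.yaml",
		"/etc/kubernetes/addons/metrics-server-deployment.yaml",
		"/etc/kubernetes/addons/metrics-server-rbac.yaml",
		"/etc/kubernetes/addons/metrics-server-service.yaml",
	}
	args := []string{"KUBECONFIG=/var/lib/minikube/kubeconfig",
		"/var/lib/minikube/binaries/v1.31.1/kubectl", "apply"}
	for _, m := range manifests {
		args = append(args, "-f", m)
	}
	out, err := exec.Command("sudo", args...).CombinedOutput()
	fmt.Print(string(out))
	if err != nil {
		panic(err)
	}
}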
	I0914 18:13:28.164139   62554 main.go:141] libmachine: Making call to close driver server
	I0914 18:13:28.164171   62554 main.go:141] libmachine: (embed-certs-044534) Calling .Close
	I0914 18:13:28.164215   62554 main.go:141] libmachine: Making call to close driver server
	I0914 18:13:28.164242   62554 main.go:141] libmachine: (embed-certs-044534) Calling .Close
	I0914 18:13:28.164581   62554 main.go:141] libmachine: Successfully made call to close driver server
	I0914 18:13:28.164593   62554 main.go:141] libmachine: Successfully made call to close driver server
	I0914 18:13:28.164596   62554 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 18:13:28.164597   62554 main.go:141] libmachine: (embed-certs-044534) DBG | Closing plugin on server side
	I0914 18:13:28.164608   62554 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 18:13:28.164569   62554 main.go:141] libmachine: (embed-certs-044534) DBG | Closing plugin on server side
	I0914 18:13:28.164619   62554 main.go:141] libmachine: Making call to close driver server
	I0914 18:13:28.164621   62554 main.go:141] libmachine: Making call to close driver server
	I0914 18:13:28.164627   62554 main.go:141] libmachine: (embed-certs-044534) Calling .Close
	I0914 18:13:28.164629   62554 main.go:141] libmachine: (embed-certs-044534) Calling .Close
	I0914 18:13:28.164874   62554 main.go:141] libmachine: Successfully made call to close driver server
	I0914 18:13:28.164897   62554 main.go:141] libmachine: (embed-certs-044534) DBG | Closing plugin on server side
	I0914 18:13:28.164911   62554 main.go:141] libmachine: (embed-certs-044534) DBG | Closing plugin on server side
	I0914 18:13:28.164902   62554 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 18:13:28.164929   62554 main.go:141] libmachine: Successfully made call to close driver server
	I0914 18:13:28.164941   62554 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 18:13:28.196171   62554 main.go:141] libmachine: Making call to close driver server
	I0914 18:13:28.196197   62554 main.go:141] libmachine: (embed-certs-044534) Calling .Close
	I0914 18:13:28.196530   62554 main.go:141] libmachine: Successfully made call to close driver server
	I0914 18:13:28.196590   62554 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 18:13:28.196634   62554 main.go:141] libmachine: (embed-certs-044534) DBG | Closing plugin on server side
	I0914 18:13:28.509915   62554 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.023114908s)
	I0914 18:13:28.509973   62554 main.go:141] libmachine: Making call to close driver server
	I0914 18:13:28.509989   62554 main.go:141] libmachine: (embed-certs-044534) Calling .Close
	I0914 18:13:28.510276   62554 main.go:141] libmachine: Successfully made call to close driver server
	I0914 18:13:28.510329   62554 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 18:13:28.510348   62554 main.go:141] libmachine: (embed-certs-044534) DBG | Closing plugin on server side
	I0914 18:13:28.510365   62554 main.go:141] libmachine: Making call to close driver server
	I0914 18:13:28.510374   62554 main.go:141] libmachine: (embed-certs-044534) Calling .Close
	I0914 18:13:28.510614   62554 main.go:141] libmachine: (embed-certs-044534) DBG | Closing plugin on server side
	I0914 18:13:28.510653   62554 main.go:141] libmachine: Successfully made call to close driver server
	I0914 18:13:28.510665   62554 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 18:13:28.510678   62554 addons.go:475] Verifying addon metrics-server=true in "embed-certs-044534"
	I0914 18:13:28.512283   62554 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0914 18:13:28.513593   62554 addons.go:510] duration metric: took 1.663035459s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0914 18:13:29.103964   62554 pod_ready.go:103] pod "etcd-embed-certs-044534" in "kube-system" namespace has status "Ready":"False"
	I0914 18:13:26.501135   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:13:28.502181   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:13:28.605176   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:13:31.102817   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:13:31.596452   62554 pod_ready.go:103] pod "etcd-embed-certs-044534" in "kube-system" namespace has status "Ready":"False"
	I0914 18:13:33.596699   62554 pod_ready.go:103] pod "etcd-embed-certs-044534" in "kube-system" namespace has status "Ready":"False"
	I0914 18:13:31.001070   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:13:32.001946   63448 pod_ready.go:82] duration metric: took 4m0.00767403s for pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace to be "Ready" ...
	E0914 18:13:32.001975   63448 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0914 18:13:32.001987   63448 pod_ready.go:39] duration metric: took 4m5.051544016s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 18:13:32.002004   63448 api_server.go:52] waiting for apiserver process to appear ...
	I0914 18:13:32.002037   63448 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:13:32.002093   63448 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:13:32.053241   63448 cri.go:89] found id: "6c532e45713d01c37f899bd4c9308e6c7fceb95accae13da1549ffb195e44bc4"
	I0914 18:13:32.053276   63448 cri.go:89] found id: ""
	I0914 18:13:32.053287   63448 logs.go:276] 1 containers: [6c532e45713d01c37f899bd4c9308e6c7fceb95accae13da1549ffb195e44bc4]
	I0914 18:13:32.053349   63448 ssh_runner.go:195] Run: which crictl
	I0914 18:13:32.057854   63448 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:13:32.057921   63448 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:13:32.099294   63448 cri.go:89] found id: "7fb6567a7b9f36f21430774a159db944e9060eac3c8fdf9abc50b9c56a2b0377"
	I0914 18:13:32.099318   63448 cri.go:89] found id: ""
	I0914 18:13:32.099328   63448 logs.go:276] 1 containers: [7fb6567a7b9f36f21430774a159db944e9060eac3c8fdf9abc50b9c56a2b0377]
	I0914 18:13:32.099375   63448 ssh_runner.go:195] Run: which crictl
	I0914 18:13:32.103674   63448 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:13:32.103745   63448 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:13:32.144190   63448 cri.go:89] found id: "02a31bf75666cfcbe85dcb42259279071420c3b26de20d6d667bcd8ffef77d86"
	I0914 18:13:32.144219   63448 cri.go:89] found id: ""
	I0914 18:13:32.144228   63448 logs.go:276] 1 containers: [02a31bf75666cfcbe85dcb42259279071420c3b26de20d6d667bcd8ffef77d86]
	I0914 18:13:32.144275   63448 ssh_runner.go:195] Run: which crictl
	I0914 18:13:32.148382   63448 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:13:32.148443   63448 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:13:32.185779   63448 cri.go:89] found id: "a390e6c0153550b206976ab4e172cf3236aac7e9e5671c332d8c0f8c4e567e0b"
	I0914 18:13:32.185807   63448 cri.go:89] found id: ""
	I0914 18:13:32.185814   63448 logs.go:276] 1 containers: [a390e6c0153550b206976ab4e172cf3236aac7e9e5671c332d8c0f8c4e567e0b]
	I0914 18:13:32.185864   63448 ssh_runner.go:195] Run: which crictl
	I0914 18:13:32.189478   63448 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:13:32.189545   63448 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:13:32.224657   63448 cri.go:89] found id: "a5c3b65e96ba854d6ceb4a824ba5076d43cff0b7a1978617b69271d5b88cca4d"
	I0914 18:13:32.224681   63448 cri.go:89] found id: ""
	I0914 18:13:32.224690   63448 logs.go:276] 1 containers: [a5c3b65e96ba854d6ceb4a824ba5076d43cff0b7a1978617b69271d5b88cca4d]
	I0914 18:13:32.224745   63448 ssh_runner.go:195] Run: which crictl
	I0914 18:13:32.228421   63448 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:13:32.228494   63448 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:13:32.262491   63448 cri.go:89] found id: "09627c963da76d1a34f891fa3edf6c8c82f652f7308541d6f9083f5888f4bf94"
	I0914 18:13:32.262513   63448 cri.go:89] found id: ""
	I0914 18:13:32.262519   63448 logs.go:276] 1 containers: [09627c963da76d1a34f891fa3edf6c8c82f652f7308541d6f9083f5888f4bf94]
	I0914 18:13:32.262579   63448 ssh_runner.go:195] Run: which crictl
	I0914 18:13:32.266135   63448 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:13:32.266213   63448 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:13:32.300085   63448 cri.go:89] found id: ""
	I0914 18:13:32.300111   63448 logs.go:276] 0 containers: []
	W0914 18:13:32.300119   63448 logs.go:278] No container was found matching "kindnet"
	I0914 18:13:32.300124   63448 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0914 18:13:32.300181   63448 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0914 18:13:32.335359   63448 cri.go:89] found id: "be0aa9c1761410495159b478552996da9670b728a9efd2985ec5cad6759e8277"
	I0914 18:13:32.335379   63448 cri.go:89] found id: "b33f92ef722c8a6bde80bdbd3ff62a9b4b31f0bf548eec9aaadd4593a101017e"
	I0914 18:13:32.335387   63448 cri.go:89] found id: ""
	I0914 18:13:32.335393   63448 logs.go:276] 2 containers: [be0aa9c1761410495159b478552996da9670b728a9efd2985ec5cad6759e8277 b33f92ef722c8a6bde80bdbd3ff62a9b4b31f0bf548eec9aaadd4593a101017e]
	I0914 18:13:32.335451   63448 ssh_runner.go:195] Run: which crictl
	I0914 18:13:32.339404   63448 ssh_runner.go:195] Run: which crictl
	I0914 18:13:32.343173   63448 logs.go:123] Gathering logs for kube-proxy [a5c3b65e96ba854d6ceb4a824ba5076d43cff0b7a1978617b69271d5b88cca4d] ...
	I0914 18:13:32.343203   63448 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a5c3b65e96ba854d6ceb4a824ba5076d43cff0b7a1978617b69271d5b88cca4d"
	I0914 18:13:32.378987   63448 logs.go:123] Gathering logs for storage-provisioner [b33f92ef722c8a6bde80bdbd3ff62a9b4b31f0bf548eec9aaadd4593a101017e] ...
	I0914 18:13:32.379016   63448 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b33f92ef722c8a6bde80bdbd3ff62a9b4b31f0bf548eec9aaadd4593a101017e"
	I0914 18:13:32.418829   63448 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:13:32.418855   63448 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:13:32.941046   63448 logs.go:123] Gathering logs for kube-apiserver [6c532e45713d01c37f899bd4c9308e6c7fceb95accae13da1549ffb195e44bc4] ...
	I0914 18:13:32.941102   63448 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6c532e45713d01c37f899bd4c9308e6c7fceb95accae13da1549ffb195e44bc4"
	I0914 18:13:32.998148   63448 logs.go:123] Gathering logs for coredns [02a31bf75666cfcbe85dcb42259279071420c3b26de20d6d667bcd8ffef77d86] ...
	I0914 18:13:32.998209   63448 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 02a31bf75666cfcbe85dcb42259279071420c3b26de20d6d667bcd8ffef77d86"
	I0914 18:13:33.041208   63448 logs.go:123] Gathering logs for kube-scheduler [a390e6c0153550b206976ab4e172cf3236aac7e9e5671c332d8c0f8c4e567e0b] ...
	I0914 18:13:33.041241   63448 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a390e6c0153550b206976ab4e172cf3236aac7e9e5671c332d8c0f8c4e567e0b"
	I0914 18:13:33.080774   63448 logs.go:123] Gathering logs for etcd [7fb6567a7b9f36f21430774a159db944e9060eac3c8fdf9abc50b9c56a2b0377] ...
	I0914 18:13:33.080806   63448 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7fb6567a7b9f36f21430774a159db944e9060eac3c8fdf9abc50b9c56a2b0377"
	I0914 18:13:33.130519   63448 logs.go:123] Gathering logs for kube-controller-manager [09627c963da76d1a34f891fa3edf6c8c82f652f7308541d6f9083f5888f4bf94] ...
	I0914 18:13:33.130552   63448 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 09627c963da76d1a34f891fa3edf6c8c82f652f7308541d6f9083f5888f4bf94"
	I0914 18:13:33.182751   63448 logs.go:123] Gathering logs for storage-provisioner [be0aa9c1761410495159b478552996da9670b728a9efd2985ec5cad6759e8277] ...
	I0914 18:13:33.182788   63448 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 be0aa9c1761410495159b478552996da9670b728a9efd2985ec5cad6759e8277"
	I0914 18:13:33.222008   63448 logs.go:123] Gathering logs for container status ...
	I0914 18:13:33.222053   63448 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:13:33.263100   63448 logs.go:123] Gathering logs for kubelet ...
	I0914 18:13:33.263137   63448 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:13:33.330307   63448 logs.go:123] Gathering logs for dmesg ...
	I0914 18:13:33.330343   63448 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:13:33.344658   63448 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:13:33.344687   63448 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
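The log-gathering pass above follows one pattern per component: `crictl ps -a --quiet --name=<component>` to find container IDs, then `crictl logs --tail 400 <id>` for each ID, plus journalctl for kubelet/CRI-O and a `kubectl describe nodes`. A compact Go sketch of the per-component part, run with the same sudo/crictl commands the log uses:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// Tail recent logs for each control-plane component found by crictl.
func main() {
	components := []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "storage-provisioner"}
	for _, name := range components {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		if err != nil {
			fmt.Printf("listing %s containers: %v\n", name, err)
			continue
		}
		for _, id := range strings.Fields(string(out)) {
			fmt.Printf("=== %s [%s] ===\n", name, id)
			logs, _ := exec.Command("sudo", "crictl", "logs", "--tail", "400", id).CombinedOutput()
			fmt.Print(string(logs))
		}
	}
}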
	I0914 18:13:35.597157   62554 pod_ready.go:93] pod "etcd-embed-certs-044534" in "kube-system" namespace has status "Ready":"True"
	I0914 18:13:35.597179   62554 pod_ready.go:82] duration metric: took 8.50672651s for pod "etcd-embed-certs-044534" in "kube-system" namespace to be "Ready" ...
	I0914 18:13:35.597189   62554 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-044534" in "kube-system" namespace to be "Ready" ...
	I0914 18:13:36.604147   62554 pod_ready.go:93] pod "kube-apiserver-embed-certs-044534" in "kube-system" namespace has status "Ready":"True"
	I0914 18:13:36.604179   62554 pod_ready.go:82] duration metric: took 1.006982094s for pod "kube-apiserver-embed-certs-044534" in "kube-system" namespace to be "Ready" ...
	I0914 18:13:36.604192   62554 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-044534" in "kube-system" namespace to be "Ready" ...
	I0914 18:13:36.610278   62554 pod_ready.go:93] pod "kube-controller-manager-embed-certs-044534" in "kube-system" namespace has status "Ready":"True"
	I0914 18:13:36.610302   62554 pod_ready.go:82] duration metric: took 6.101843ms for pod "kube-controller-manager-embed-certs-044534" in "kube-system" namespace to be "Ready" ...
	I0914 18:13:36.610315   62554 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-044534" in "kube-system" namespace to be "Ready" ...
	I0914 18:13:36.615527   62554 pod_ready.go:93] pod "kube-scheduler-embed-certs-044534" in "kube-system" namespace has status "Ready":"True"
	I0914 18:13:36.615549   62554 pod_ready.go:82] duration metric: took 5.226206ms for pod "kube-scheduler-embed-certs-044534" in "kube-system" namespace to be "Ready" ...
	I0914 18:13:36.615559   62554 pod_ready.go:39] duration metric: took 9.531222215s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 18:13:36.615587   62554 api_server.go:52] waiting for apiserver process to appear ...
	I0914 18:13:36.615642   62554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:13:36.630381   62554 api_server.go:72] duration metric: took 9.779851335s to wait for apiserver process to appear ...
	I0914 18:13:36.630414   62554 api_server.go:88] waiting for apiserver healthz status ...
	I0914 18:13:36.630438   62554 api_server.go:253] Checking apiserver healthz at https://192.168.50.126:8443/healthz ...
	I0914 18:13:36.637559   62554 api_server.go:279] https://192.168.50.126:8443/healthz returned 200:
	ok
	I0914 18:13:36.639973   62554 api_server.go:141] control plane version: v1.31.1
	I0914 18:13:36.639999   62554 api_server.go:131] duration metric: took 9.577574ms to wait for apiserver health ...
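The healthz wait above simply polls https://<node-ip>:8443/healthz until it returns 200 "ok" before checking the control-plane version. A minimal Go sketch of that poll; skipping TLS verification is a shortcut for the sketch only, minikube itself trusts the cluster CA from its kubeconfig:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// Poll the apiserver /healthz endpoint until it reports healthy.
func main() {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	url := "https://192.168.50.126:8443/healthz"
	for i := 0; i < 60; i++ {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			fmt.Printf("%s returned %d: %s\n", url, resp.StatusCode, body)
			if resp.StatusCode == http.StatusOK {
				return
			}
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("apiserver did not become healthy in time")
}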
	I0914 18:13:36.640006   62554 system_pods.go:43] waiting for kube-system pods to appear ...
	I0914 18:13:36.647412   62554 system_pods.go:59] 9 kube-system pods found
	I0914 18:13:36.647443   62554 system_pods.go:61] "coredns-7c65d6cfc9-67dsl" [480fe6ea-838d-4048-9893-947d43e7b5c9] Running
	I0914 18:13:36.647448   62554 system_pods.go:61] "coredns-7c65d6cfc9-9j6sv" [87c28a4b-015e-46b8-a462-9dc6ed06d914] Running
	I0914 18:13:36.647452   62554 system_pods.go:61] "etcd-embed-certs-044534" [a9533b38-298e-4435-aaf2-262bdf629832] Running
	I0914 18:13:36.647456   62554 system_pods.go:61] "kube-apiserver-embed-certs-044534" [28ee0ec6-8ede-447a-b4eb-1db9eaeb76fb] Running
	I0914 18:13:36.647459   62554 system_pods.go:61] "kube-controller-manager-embed-certs-044534" [0fd6fe49-8994-48de-8e57-a2420f12b47d] Running
	I0914 18:13:36.647463   62554 system_pods.go:61] "kube-proxy-26fx6" [1cb48201-6caf-4787-9e27-a55885a8ae2a] Running
	I0914 18:13:36.647465   62554 system_pods.go:61] "kube-scheduler-embed-certs-044534" [6c1535f8-267b-401c-a93e-0ac057c75047] Running
	I0914 18:13:36.647471   62554 system_pods.go:61] "metrics-server-6867b74b74-rrfnt" [a1deacaa-9b90-49ac-8b8b-0bd909b5f6e0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 18:13:36.647475   62554 system_pods.go:61] "storage-provisioner" [dec7a14c-b6f7-464f-86b3-5f7d8063d8e0] Running
	I0914 18:13:36.647483   62554 system_pods.go:74] duration metric: took 7.47115ms to wait for pod list to return data ...
	I0914 18:13:36.647490   62554 default_sa.go:34] waiting for default service account to be created ...
	I0914 18:13:36.650678   62554 default_sa.go:45] found service account: "default"
	I0914 18:13:36.650722   62554 default_sa.go:55] duration metric: took 3.225438ms for default service account to be created ...
	I0914 18:13:36.650733   62554 system_pods.go:116] waiting for k8s-apps to be running ...
	I0914 18:13:36.656461   62554 system_pods.go:86] 9 kube-system pods found
	I0914 18:13:36.656489   62554 system_pods.go:89] "coredns-7c65d6cfc9-67dsl" [480fe6ea-838d-4048-9893-947d43e7b5c9] Running
	I0914 18:13:36.656495   62554 system_pods.go:89] "coredns-7c65d6cfc9-9j6sv" [87c28a4b-015e-46b8-a462-9dc6ed06d914] Running
	I0914 18:13:36.656499   62554 system_pods.go:89] "etcd-embed-certs-044534" [a9533b38-298e-4435-aaf2-262bdf629832] Running
	I0914 18:13:36.656503   62554 system_pods.go:89] "kube-apiserver-embed-certs-044534" [28ee0ec6-8ede-447a-b4eb-1db9eaeb76fb] Running
	I0914 18:13:36.656507   62554 system_pods.go:89] "kube-controller-manager-embed-certs-044534" [0fd6fe49-8994-48de-8e57-a2420f12b47d] Running
	I0914 18:13:36.656512   62554 system_pods.go:89] "kube-proxy-26fx6" [1cb48201-6caf-4787-9e27-a55885a8ae2a] Running
	I0914 18:13:36.656516   62554 system_pods.go:89] "kube-scheduler-embed-certs-044534" [6c1535f8-267b-401c-a93e-0ac057c75047] Running
	I0914 18:13:36.656522   62554 system_pods.go:89] "metrics-server-6867b74b74-rrfnt" [a1deacaa-9b90-49ac-8b8b-0bd909b5f6e0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 18:13:36.656525   62554 system_pods.go:89] "storage-provisioner" [dec7a14c-b6f7-464f-86b3-5f7d8063d8e0] Running
	I0914 18:13:36.656534   62554 system_pods.go:126] duration metric: took 5.795433ms to wait for k8s-apps to be running ...
	I0914 18:13:36.656541   62554 system_svc.go:44] waiting for kubelet service to be running ....
	I0914 18:13:36.656586   62554 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 18:13:36.673166   62554 system_svc.go:56] duration metric: took 16.609444ms WaitForService to wait for kubelet
	I0914 18:13:36.673205   62554 kubeadm.go:582] duration metric: took 9.822681909s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0914 18:13:36.673227   62554 node_conditions.go:102] verifying NodePressure condition ...
	I0914 18:13:36.794984   62554 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0914 18:13:36.795013   62554 node_conditions.go:123] node cpu capacity is 2
	I0914 18:13:36.795024   62554 node_conditions.go:105] duration metric: took 121.79122ms to run NodePressure ...
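The NodePressure check reads back per-node capacity (ephemeral storage and CPU here) and the pressure conditions. The sketch below reports the same information with client-go against the host kubeconfig from the log; using client-go is an assumption for illustration, not how minikube implements the check, and it requires the k8s.io/client-go module:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// List node capacity and memory/disk/PID pressure conditions.
func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19643-8806/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name,
			n.Status.Capacity.Cpu(), n.Status.Capacity.StorageEphemeral())
		for _, c := range n.Status.Conditions {
			switch c.Type {
			case corev1.NodeMemoryPressure, corev1.NodeDiskPressure, corev1.NodePIDPressure:
				fmt.Printf("  %s=%s\n", c.Type, c.Status)
			}
		}
	}
}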
	I0914 18:13:36.795038   62554 start.go:241] waiting for startup goroutines ...
	I0914 18:13:36.795047   62554 start.go:246] waiting for cluster config update ...
	I0914 18:13:36.795060   62554 start.go:255] writing updated cluster config ...
	I0914 18:13:36.795406   62554 ssh_runner.go:195] Run: rm -f paused
	I0914 18:13:36.847454   62554 start.go:600] kubectl: 1.31.0, cluster: 1.31.1 (minor skew: 0)
	I0914 18:13:36.849605   62554 out.go:177] * Done! kubectl is now configured to use "embed-certs-044534" cluster and "default" namespace by default
	I0914 18:13:33.105197   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:13:35.604458   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:13:35.989800   63448 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:13:36.006371   63448 api_server.go:72] duration metric: took 4m14.310539233s to wait for apiserver process to appear ...
	I0914 18:13:36.006405   63448 api_server.go:88] waiting for apiserver healthz status ...
	I0914 18:13:36.006446   63448 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:13:36.006508   63448 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:13:36.044973   63448 cri.go:89] found id: "6c532e45713d01c37f899bd4c9308e6c7fceb95accae13da1549ffb195e44bc4"
	I0914 18:13:36.044992   63448 cri.go:89] found id: ""
	I0914 18:13:36.045000   63448 logs.go:276] 1 containers: [6c532e45713d01c37f899bd4c9308e6c7fceb95accae13da1549ffb195e44bc4]
	I0914 18:13:36.045055   63448 ssh_runner.go:195] Run: which crictl
	I0914 18:13:36.049371   63448 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:13:36.049449   63448 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:13:36.097114   63448 cri.go:89] found id: "7fb6567a7b9f36f21430774a159db944e9060eac3c8fdf9abc50b9c56a2b0377"
	I0914 18:13:36.097139   63448 cri.go:89] found id: ""
	I0914 18:13:36.097148   63448 logs.go:276] 1 containers: [7fb6567a7b9f36f21430774a159db944e9060eac3c8fdf9abc50b9c56a2b0377]
	I0914 18:13:36.097212   63448 ssh_runner.go:195] Run: which crictl
	I0914 18:13:36.102084   63448 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:13:36.102153   63448 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:13:36.140640   63448 cri.go:89] found id: "02a31bf75666cfcbe85dcb42259279071420c3b26de20d6d667bcd8ffef77d86"
	I0914 18:13:36.140662   63448 cri.go:89] found id: ""
	I0914 18:13:36.140671   63448 logs.go:276] 1 containers: [02a31bf75666cfcbe85dcb42259279071420c3b26de20d6d667bcd8ffef77d86]
	I0914 18:13:36.140728   63448 ssh_runner.go:195] Run: which crictl
	I0914 18:13:36.144624   63448 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:13:36.144696   63448 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:13:36.179135   63448 cri.go:89] found id: "a390e6c0153550b206976ab4e172cf3236aac7e9e5671c332d8c0f8c4e567e0b"
	I0914 18:13:36.179156   63448 cri.go:89] found id: ""
	I0914 18:13:36.179163   63448 logs.go:276] 1 containers: [a390e6c0153550b206976ab4e172cf3236aac7e9e5671c332d8c0f8c4e567e0b]
	I0914 18:13:36.179216   63448 ssh_runner.go:195] Run: which crictl
	I0914 18:13:36.183050   63448 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:13:36.183110   63448 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:13:36.222739   63448 cri.go:89] found id: "a5c3b65e96ba854d6ceb4a824ba5076d43cff0b7a1978617b69271d5b88cca4d"
	I0914 18:13:36.222758   63448 cri.go:89] found id: ""
	I0914 18:13:36.222765   63448 logs.go:276] 1 containers: [a5c3b65e96ba854d6ceb4a824ba5076d43cff0b7a1978617b69271d5b88cca4d]
	I0914 18:13:36.222812   63448 ssh_runner.go:195] Run: which crictl
	I0914 18:13:36.226715   63448 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:13:36.226782   63448 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:13:36.261587   63448 cri.go:89] found id: "09627c963da76d1a34f891fa3edf6c8c82f652f7308541d6f9083f5888f4bf94"
	I0914 18:13:36.261610   63448 cri.go:89] found id: ""
	I0914 18:13:36.261617   63448 logs.go:276] 1 containers: [09627c963da76d1a34f891fa3edf6c8c82f652f7308541d6f9083f5888f4bf94]
	I0914 18:13:36.261664   63448 ssh_runner.go:195] Run: which crictl
	I0914 18:13:36.265541   63448 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:13:36.265614   63448 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:13:36.301521   63448 cri.go:89] found id: ""
	I0914 18:13:36.301546   63448 logs.go:276] 0 containers: []
	W0914 18:13:36.301554   63448 logs.go:278] No container was found matching "kindnet"
	I0914 18:13:36.301560   63448 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0914 18:13:36.301622   63448 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0914 18:13:36.335332   63448 cri.go:89] found id: "be0aa9c1761410495159b478552996da9670b728a9efd2985ec5cad6759e8277"
	I0914 18:13:36.335355   63448 cri.go:89] found id: "b33f92ef722c8a6bde80bdbd3ff62a9b4b31f0bf548eec9aaadd4593a101017e"
	I0914 18:13:36.335358   63448 cri.go:89] found id: ""
	I0914 18:13:36.335365   63448 logs.go:276] 2 containers: [be0aa9c1761410495159b478552996da9670b728a9efd2985ec5cad6759e8277 b33f92ef722c8a6bde80bdbd3ff62a9b4b31f0bf548eec9aaadd4593a101017e]
	I0914 18:13:36.335415   63448 ssh_runner.go:195] Run: which crictl
	I0914 18:13:36.339542   63448 ssh_runner.go:195] Run: which crictl
	I0914 18:13:36.343543   63448 logs.go:123] Gathering logs for etcd [7fb6567a7b9f36f21430774a159db944e9060eac3c8fdf9abc50b9c56a2b0377] ...
	I0914 18:13:36.343570   63448 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7fb6567a7b9f36f21430774a159db944e9060eac3c8fdf9abc50b9c56a2b0377"
	I0914 18:13:36.384224   63448 logs.go:123] Gathering logs for coredns [02a31bf75666cfcbe85dcb42259279071420c3b26de20d6d667bcd8ffef77d86] ...
	I0914 18:13:36.384259   63448 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 02a31bf75666cfcbe85dcb42259279071420c3b26de20d6d667bcd8ffef77d86"
	I0914 18:13:36.428010   63448 logs.go:123] Gathering logs for kube-scheduler [a390e6c0153550b206976ab4e172cf3236aac7e9e5671c332d8c0f8c4e567e0b] ...
	I0914 18:13:36.428041   63448 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a390e6c0153550b206976ab4e172cf3236aac7e9e5671c332d8c0f8c4e567e0b"
	I0914 18:13:36.469679   63448 logs.go:123] Gathering logs for storage-provisioner [be0aa9c1761410495159b478552996da9670b728a9efd2985ec5cad6759e8277] ...
	I0914 18:13:36.469708   63448 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 be0aa9c1761410495159b478552996da9670b728a9efd2985ec5cad6759e8277"
	I0914 18:13:36.507570   63448 logs.go:123] Gathering logs for storage-provisioner [b33f92ef722c8a6bde80bdbd3ff62a9b4b31f0bf548eec9aaadd4593a101017e] ...
	I0914 18:13:36.507597   63448 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b33f92ef722c8a6bde80bdbd3ff62a9b4b31f0bf548eec9aaadd4593a101017e"
	I0914 18:13:36.543300   63448 logs.go:123] Gathering logs for kubelet ...
	I0914 18:13:36.543335   63448 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:13:36.619060   63448 logs.go:123] Gathering logs for dmesg ...
	I0914 18:13:36.619084   63448 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:13:36.633542   63448 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:13:36.633572   63448 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 18:13:36.741334   63448 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:13:36.741370   63448 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:13:37.231208   63448 logs.go:123] Gathering logs for container status ...
	I0914 18:13:37.231255   63448 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:13:37.278835   63448 logs.go:123] Gathering logs for kube-apiserver [6c532e45713d01c37f899bd4c9308e6c7fceb95accae13da1549ffb195e44bc4] ...
	I0914 18:13:37.278863   63448 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6c532e45713d01c37f899bd4c9308e6c7fceb95accae13da1549ffb195e44bc4"
	I0914 18:13:37.320359   63448 logs.go:123] Gathering logs for kube-proxy [a5c3b65e96ba854d6ceb4a824ba5076d43cff0b7a1978617b69271d5b88cca4d] ...
	I0914 18:13:37.320399   63448 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a5c3b65e96ba854d6ceb4a824ba5076d43cff0b7a1978617b69271d5b88cca4d"
	I0914 18:13:37.357940   63448 logs.go:123] Gathering logs for kube-controller-manager [09627c963da76d1a34f891fa3edf6c8c82f652f7308541d6f9083f5888f4bf94] ...
	I0914 18:13:37.357974   63448 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 09627c963da76d1a34f891fa3edf6c8c82f652f7308541d6f9083f5888f4bf94"
	I0914 18:13:39.913586   63448 api_server.go:253] Checking apiserver healthz at https://192.168.61.38:8444/healthz ...
	I0914 18:13:39.917590   63448 api_server.go:279] https://192.168.61.38:8444/healthz returned 200:
	ok
	I0914 18:13:39.918633   63448 api_server.go:141] control plane version: v1.31.1
	I0914 18:13:39.918653   63448 api_server.go:131] duration metric: took 3.912241678s to wait for apiserver health ...
	I0914 18:13:39.918660   63448 system_pods.go:43] waiting for kube-system pods to appear ...
	I0914 18:13:39.918682   63448 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:13:39.918727   63448 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:13:39.961919   63448 cri.go:89] found id: "6c532e45713d01c37f899bd4c9308e6c7fceb95accae13da1549ffb195e44bc4"
	I0914 18:13:39.961947   63448 cri.go:89] found id: ""
	I0914 18:13:39.961956   63448 logs.go:276] 1 containers: [6c532e45713d01c37f899bd4c9308e6c7fceb95accae13da1549ffb195e44bc4]
	I0914 18:13:39.962012   63448 ssh_runner.go:195] Run: which crictl
	I0914 18:13:39.965756   63448 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:13:39.965838   63448 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:13:40.008044   63448 cri.go:89] found id: "7fb6567a7b9f36f21430774a159db944e9060eac3c8fdf9abc50b9c56a2b0377"
	I0914 18:13:40.008066   63448 cri.go:89] found id: ""
	I0914 18:13:40.008074   63448 logs.go:276] 1 containers: [7fb6567a7b9f36f21430774a159db944e9060eac3c8fdf9abc50b9c56a2b0377]
	I0914 18:13:40.008117   63448 ssh_runner.go:195] Run: which crictl
	I0914 18:13:40.012505   63448 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:13:40.012569   63448 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:13:40.059166   63448 cri.go:89] found id: "02a31bf75666cfcbe85dcb42259279071420c3b26de20d6d667bcd8ffef77d86"
	I0914 18:13:40.059194   63448 cri.go:89] found id: ""
	I0914 18:13:40.059204   63448 logs.go:276] 1 containers: [02a31bf75666cfcbe85dcb42259279071420c3b26de20d6d667bcd8ffef77d86]
	I0914 18:13:40.059267   63448 ssh_runner.go:195] Run: which crictl
	I0914 18:13:40.063135   63448 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:13:40.063197   63448 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:13:40.105220   63448 cri.go:89] found id: "a390e6c0153550b206976ab4e172cf3236aac7e9e5671c332d8c0f8c4e567e0b"
	I0914 18:13:40.105245   63448 cri.go:89] found id: ""
	I0914 18:13:40.105255   63448 logs.go:276] 1 containers: [a390e6c0153550b206976ab4e172cf3236aac7e9e5671c332d8c0f8c4e567e0b]
	I0914 18:13:40.105308   63448 ssh_runner.go:195] Run: which crictl
	I0914 18:13:40.109907   63448 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:13:40.109978   63448 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:13:40.146307   63448 cri.go:89] found id: "a5c3b65e96ba854d6ceb4a824ba5076d43cff0b7a1978617b69271d5b88cca4d"
	I0914 18:13:40.146337   63448 cri.go:89] found id: ""
	I0914 18:13:40.146349   63448 logs.go:276] 1 containers: [a5c3b65e96ba854d6ceb4a824ba5076d43cff0b7a1978617b69271d5b88cca4d]
	I0914 18:13:40.146396   63448 ssh_runner.go:195] Run: which crictl
	I0914 18:13:40.150369   63448 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:13:40.150436   63448 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:13:40.185274   63448 cri.go:89] found id: "09627c963da76d1a34f891fa3edf6c8c82f652f7308541d6f9083f5888f4bf94"
	I0914 18:13:40.185301   63448 cri.go:89] found id: ""
	I0914 18:13:40.185312   63448 logs.go:276] 1 containers: [09627c963da76d1a34f891fa3edf6c8c82f652f7308541d6f9083f5888f4bf94]
	I0914 18:13:40.185374   63448 ssh_runner.go:195] Run: which crictl
	I0914 18:13:40.189425   63448 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:13:40.189499   63448 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:13:40.223289   63448 cri.go:89] found id: ""
	I0914 18:13:40.223311   63448 logs.go:276] 0 containers: []
	W0914 18:13:40.223319   63448 logs.go:278] No container was found matching "kindnet"
	I0914 18:13:40.223324   63448 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0914 18:13:40.223369   63448 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0914 18:13:40.257779   63448 cri.go:89] found id: "be0aa9c1761410495159b478552996da9670b728a9efd2985ec5cad6759e8277"
	I0914 18:13:40.257805   63448 cri.go:89] found id: "b33f92ef722c8a6bde80bdbd3ff62a9b4b31f0bf548eec9aaadd4593a101017e"
	I0914 18:13:40.257811   63448 cri.go:89] found id: ""
	I0914 18:13:40.257820   63448 logs.go:276] 2 containers: [be0aa9c1761410495159b478552996da9670b728a9efd2985ec5cad6759e8277 b33f92ef722c8a6bde80bdbd3ff62a9b4b31f0bf548eec9aaadd4593a101017e]
	I0914 18:13:40.257880   63448 ssh_runner.go:195] Run: which crictl
	I0914 18:13:40.262388   63448 ssh_runner.go:195] Run: which crictl
	I0914 18:13:40.266233   63448 logs.go:123] Gathering logs for kube-apiserver [6c532e45713d01c37f899bd4c9308e6c7fceb95accae13da1549ffb195e44bc4] ...
	I0914 18:13:40.266258   63448 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6c532e45713d01c37f899bd4c9308e6c7fceb95accae13da1549ffb195e44bc4"
	I0914 18:13:38.505090   62996 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0914 18:13:38.505605   62996 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0914 18:13:38.505837   62996 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0914 18:13:38.105234   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:13:40.604049   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:13:40.310145   63448 logs.go:123] Gathering logs for etcd [7fb6567a7b9f36f21430774a159db944e9060eac3c8fdf9abc50b9c56a2b0377] ...
	I0914 18:13:40.310188   63448 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7fb6567a7b9f36f21430774a159db944e9060eac3c8fdf9abc50b9c56a2b0377"
	I0914 18:13:40.358651   63448 logs.go:123] Gathering logs for coredns [02a31bf75666cfcbe85dcb42259279071420c3b26de20d6d667bcd8ffef77d86] ...
	I0914 18:13:40.358686   63448 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 02a31bf75666cfcbe85dcb42259279071420c3b26de20d6d667bcd8ffef77d86"
	I0914 18:13:40.398107   63448 logs.go:123] Gathering logs for kube-scheduler [a390e6c0153550b206976ab4e172cf3236aac7e9e5671c332d8c0f8c4e567e0b] ...
	I0914 18:13:40.398144   63448 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a390e6c0153550b206976ab4e172cf3236aac7e9e5671c332d8c0f8c4e567e0b"
	I0914 18:13:40.450540   63448 logs.go:123] Gathering logs for dmesg ...
	I0914 18:13:40.450573   63448 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:13:40.465987   63448 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:13:40.466013   63448 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 18:13:40.573299   63448 logs.go:123] Gathering logs for kube-proxy [a5c3b65e96ba854d6ceb4a824ba5076d43cff0b7a1978617b69271d5b88cca4d] ...
	I0914 18:13:40.573333   63448 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a5c3b65e96ba854d6ceb4a824ba5076d43cff0b7a1978617b69271d5b88cca4d"
	I0914 18:13:40.618201   63448 logs.go:123] Gathering logs for kube-controller-manager [09627c963da76d1a34f891fa3edf6c8c82f652f7308541d6f9083f5888f4bf94] ...
	I0914 18:13:40.618247   63448 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 09627c963da76d1a34f891fa3edf6c8c82f652f7308541d6f9083f5888f4bf94"
	I0914 18:13:40.671259   63448 logs.go:123] Gathering logs for storage-provisioner [be0aa9c1761410495159b478552996da9670b728a9efd2985ec5cad6759e8277] ...
	I0914 18:13:40.671304   63448 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 be0aa9c1761410495159b478552996da9670b728a9efd2985ec5cad6759e8277"
	I0914 18:13:40.708455   63448 logs.go:123] Gathering logs for storage-provisioner [b33f92ef722c8a6bde80bdbd3ff62a9b4b31f0bf548eec9aaadd4593a101017e] ...
	I0914 18:13:40.708488   63448 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b33f92ef722c8a6bde80bdbd3ff62a9b4b31f0bf548eec9aaadd4593a101017e"
	I0914 18:13:40.746662   63448 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:13:40.746696   63448 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:13:41.108968   63448 logs.go:123] Gathering logs for container status ...
	I0914 18:13:41.109017   63448 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:13:41.150925   63448 logs.go:123] Gathering logs for kubelet ...
	I0914 18:13:41.150968   63448 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:13:43.725606   63448 system_pods.go:59] 8 kube-system pods found
	I0914 18:13:43.725642   63448 system_pods.go:61] "coredns-7c65d6cfc9-8v8s7" [896b4fde-d17e-43a3-b7c8-b710e2e70e2c] Running
	I0914 18:13:43.725650   63448 system_pods.go:61] "etcd-default-k8s-diff-port-243449" [9201493d-45db-44f4-948d-34e1d1ddee8f] Running
	I0914 18:13:43.725656   63448 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-243449" [052b85bf-0f3b-4ace-9301-99e53f91cfcf] Running
	I0914 18:13:43.725661   63448 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-243449" [51214b37-f5e8-4037-83ff-1fd09b93e008] Running
	I0914 18:13:43.725665   63448 system_pods.go:61] "kube-proxy-gbkqm" [4308aacf-ea0a-4bba-8598-85ffaf959b7e] Running
	I0914 18:13:43.725670   63448 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-243449" [b6eacf30-47ec-4f8d-968c-195661a2a732] Running
	I0914 18:13:43.725680   63448 system_pods.go:61] "metrics-server-6867b74b74-7v8dr" [90be95af-c779-4b31-b261-2c4020a34280] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 18:13:43.725687   63448 system_pods.go:61] "storage-provisioner" [2e814601-a19a-4848-bed5-d9a29ffb3b5d] Running
	I0914 18:13:43.725699   63448 system_pods.go:74] duration metric: took 3.807031642s to wait for pod list to return data ...
	I0914 18:13:43.725710   63448 default_sa.go:34] waiting for default service account to be created ...
	I0914 18:13:43.728384   63448 default_sa.go:45] found service account: "default"
	I0914 18:13:43.728409   63448 default_sa.go:55] duration metric: took 2.691817ms for default service account to be created ...
	I0914 18:13:43.728417   63448 system_pods.go:116] waiting for k8s-apps to be running ...
	I0914 18:13:43.732884   63448 system_pods.go:86] 8 kube-system pods found
	I0914 18:13:43.732913   63448 system_pods.go:89] "coredns-7c65d6cfc9-8v8s7" [896b4fde-d17e-43a3-b7c8-b710e2e70e2c] Running
	I0914 18:13:43.732918   63448 system_pods.go:89] "etcd-default-k8s-diff-port-243449" [9201493d-45db-44f4-948d-34e1d1ddee8f] Running
	I0914 18:13:43.732922   63448 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-243449" [052b85bf-0f3b-4ace-9301-99e53f91cfcf] Running
	I0914 18:13:43.732926   63448 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-243449" [51214b37-f5e8-4037-83ff-1fd09b93e008] Running
	I0914 18:13:43.732931   63448 system_pods.go:89] "kube-proxy-gbkqm" [4308aacf-ea0a-4bba-8598-85ffaf959b7e] Running
	I0914 18:13:43.732935   63448 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-243449" [b6eacf30-47ec-4f8d-968c-195661a2a732] Running
	I0914 18:13:43.732942   63448 system_pods.go:89] "metrics-server-6867b74b74-7v8dr" [90be95af-c779-4b31-b261-2c4020a34280] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 18:13:43.732947   63448 system_pods.go:89] "storage-provisioner" [2e814601-a19a-4848-bed5-d9a29ffb3b5d] Running
	I0914 18:13:43.732954   63448 system_pods.go:126] duration metric: took 4.531761ms to wait for k8s-apps to be running ...
	I0914 18:13:43.732960   63448 system_svc.go:44] waiting for kubelet service to be running ....
	I0914 18:13:43.733001   63448 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 18:13:43.749535   63448 system_svc.go:56] duration metric: took 16.566498ms WaitForService to wait for kubelet
	I0914 18:13:43.749567   63448 kubeadm.go:582] duration metric: took 4m22.053742257s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0914 18:13:43.749587   63448 node_conditions.go:102] verifying NodePressure condition ...
	I0914 18:13:43.752493   63448 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0914 18:13:43.752514   63448 node_conditions.go:123] node cpu capacity is 2
	I0914 18:13:43.752523   63448 node_conditions.go:105] duration metric: took 2.931821ms to run NodePressure ...
	I0914 18:13:43.752534   63448 start.go:241] waiting for startup goroutines ...
	I0914 18:13:43.752548   63448 start.go:246] waiting for cluster config update ...
	I0914 18:13:43.752560   63448 start.go:255] writing updated cluster config ...
	I0914 18:13:43.752815   63448 ssh_runner.go:195] Run: rm -f paused
	I0914 18:13:43.803181   63448 start.go:600] kubectl: 1.31.0, cluster: 1.31.1 (minor skew: 0)
	I0914 18:13:43.805150   63448 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-243449" cluster and "default" namespace by default
	I0914 18:13:43.506241   62996 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0914 18:13:43.506502   62996 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0914 18:13:43.103780   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:13:45.603666   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:13:47.603988   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:13:50.104811   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:13:53.506772   62996 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0914 18:13:53.506959   62996 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0914 18:13:52.604411   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:13:55.103339   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:13:57.103716   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:13:59.603423   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:14:00.097180   62207 pod_ready.go:82] duration metric: took 4m0.000345486s for pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace to be "Ready" ...
	E0914 18:14:00.097209   62207 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace to be "Ready" (will not retry!)
	I0914 18:14:00.097230   62207 pod_ready.go:39] duration metric: took 4m11.039838973s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 18:14:00.097260   62207 kubeadm.go:597] duration metric: took 4m18.345876583s to restartPrimaryControlPlane
	W0914 18:14:00.097328   62207 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0914 18:14:00.097360   62207 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0914 18:14:13.507627   62996 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0914 18:14:13.507840   62996 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0914 18:14:26.392001   62207 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.294613232s)
	I0914 18:14:26.392082   62207 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 18:14:26.410558   62207 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0914 18:14:26.421178   62207 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0914 18:14:26.430786   62207 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0914 18:14:26.430808   62207 kubeadm.go:157] found existing configuration files:
	
	I0914 18:14:26.430858   62207 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0914 18:14:26.440193   62207 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0914 18:14:26.440253   62207 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0914 18:14:26.449848   62207 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0914 18:14:26.459589   62207 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0914 18:14:26.459651   62207 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0914 18:14:26.469556   62207 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0914 18:14:26.478722   62207 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0914 18:14:26.478782   62207 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0914 18:14:26.488694   62207 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0914 18:14:26.498478   62207 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0914 18:14:26.498542   62207 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0914 18:14:26.509455   62207 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0914 18:14:26.552295   62207 kubeadm.go:310] W0914 18:14:26.530603    2977 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0914 18:14:26.552908   62207 kubeadm.go:310] W0914 18:14:26.531307    2977 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0914 18:14:26.665962   62207 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0914 18:14:35.406074   62207 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0914 18:14:35.406150   62207 kubeadm.go:310] [preflight] Running pre-flight checks
	I0914 18:14:35.406251   62207 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0914 18:14:35.406372   62207 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0914 18:14:35.406503   62207 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0914 18:14:35.406611   62207 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0914 18:14:35.408167   62207 out.go:235]   - Generating certificates and keys ...
	I0914 18:14:35.408257   62207 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0914 18:14:35.408337   62207 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0914 18:14:35.408451   62207 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0914 18:14:35.408550   62207 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0914 18:14:35.408655   62207 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0914 18:14:35.408733   62207 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0914 18:14:35.408823   62207 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0914 18:14:35.408916   62207 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0914 18:14:35.409022   62207 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0914 18:14:35.409133   62207 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0914 18:14:35.409176   62207 kubeadm.go:310] [certs] Using the existing "sa" key
	I0914 18:14:35.409225   62207 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0914 18:14:35.409269   62207 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0914 18:14:35.409328   62207 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0914 18:14:35.409374   62207 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0914 18:14:35.409440   62207 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0914 18:14:35.409507   62207 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0914 18:14:35.409633   62207 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0914 18:14:35.409734   62207 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0914 18:14:35.411984   62207 out.go:235]   - Booting up control plane ...
	I0914 18:14:35.412099   62207 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0914 18:14:35.412212   62207 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0914 18:14:35.412276   62207 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0914 18:14:35.412371   62207 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0914 18:14:35.412444   62207 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0914 18:14:35.412479   62207 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0914 18:14:35.412597   62207 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0914 18:14:35.412686   62207 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0914 18:14:35.412737   62207 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.002422188s
	I0914 18:14:35.412801   62207 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0914 18:14:35.412863   62207 kubeadm.go:310] [api-check] The API server is healthy after 5.002046359s
	I0914 18:14:35.412986   62207 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0914 18:14:35.413129   62207 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0914 18:14:35.413208   62207 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0914 18:14:35.413427   62207 kubeadm.go:310] [mark-control-plane] Marking the node no-preload-168587 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0914 18:14:35.413510   62207 kubeadm.go:310] [bootstrap-token] Using token: 2jk8ol.l80z6l7tm2nt4pl7
	I0914 18:14:35.414838   62207 out.go:235]   - Configuring RBAC rules ...
	I0914 18:14:35.414968   62207 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0914 18:14:35.415069   62207 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0914 18:14:35.415291   62207 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0914 18:14:35.415482   62207 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0914 18:14:35.415615   62207 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0914 18:14:35.415725   62207 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0914 18:14:35.415867   62207 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0914 18:14:35.415930   62207 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0914 18:14:35.415990   62207 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0914 18:14:35.415999   62207 kubeadm.go:310] 
	I0914 18:14:35.416077   62207 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0914 18:14:35.416086   62207 kubeadm.go:310] 
	I0914 18:14:35.416187   62207 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0914 18:14:35.416198   62207 kubeadm.go:310] 
	I0914 18:14:35.416232   62207 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0914 18:14:35.416314   62207 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0914 18:14:35.416388   62207 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0914 18:14:35.416397   62207 kubeadm.go:310] 
	I0914 18:14:35.416474   62207 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0914 18:14:35.416484   62207 kubeadm.go:310] 
	I0914 18:14:35.416525   62207 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0914 18:14:35.416529   62207 kubeadm.go:310] 
	I0914 18:14:35.416597   62207 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0914 18:14:35.416701   62207 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0914 18:14:35.416781   62207 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0914 18:14:35.416796   62207 kubeadm.go:310] 
	I0914 18:14:35.416899   62207 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0914 18:14:35.416998   62207 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0914 18:14:35.417007   62207 kubeadm.go:310] 
	I0914 18:14:35.417125   62207 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 2jk8ol.l80z6l7tm2nt4pl7 \
	I0914 18:14:35.417247   62207 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:30a8b6e98108730ec92a246ad3f4e746211e79f910581e46cd69c4cd78384600 \
	I0914 18:14:35.417272   62207 kubeadm.go:310] 	--control-plane 
	I0914 18:14:35.417276   62207 kubeadm.go:310] 
	I0914 18:14:35.417399   62207 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0914 18:14:35.417422   62207 kubeadm.go:310] 
	I0914 18:14:35.417530   62207 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 2jk8ol.l80z6l7tm2nt4pl7 \
	I0914 18:14:35.417686   62207 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:30a8b6e98108730ec92a246ad3f4e746211e79f910581e46cd69c4cd78384600 
	I0914 18:14:35.417705   62207 cni.go:84] Creating CNI manager for ""
	I0914 18:14:35.417713   62207 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0914 18:14:35.420023   62207 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0914 18:14:35.421095   62207 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0914 18:14:35.432619   62207 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0914 18:14:35.451720   62207 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0914 18:14:35.451790   62207 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 18:14:35.451836   62207 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-168587 minikube.k8s.io/updated_at=2024_09_14T18_14_35_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=fbeeb744274463b05401c917e5ab21bbaf5ef95a minikube.k8s.io/name=no-preload-168587 minikube.k8s.io/primary=true
	I0914 18:14:35.654681   62207 ops.go:34] apiserver oom_adj: -16
	I0914 18:14:35.654714   62207 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 18:14:36.155376   62207 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 18:14:36.655468   62207 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 18:14:37.155741   62207 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 18:14:37.655416   62207 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 18:14:38.154935   62207 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 18:14:38.655465   62207 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 18:14:38.740860   62207 kubeadm.go:1113] duration metric: took 3.289121705s to wait for elevateKubeSystemPrivileges
	I0914 18:14:38.740912   62207 kubeadm.go:394] duration metric: took 4m57.036377829s to StartCluster
	I0914 18:14:38.740939   62207 settings.go:142] acquiring lock: {Name:mkee31cd57c3a91eff581c48ba7961460fb3e0b1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 18:14:38.741029   62207 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19643-8806/kubeconfig
	I0914 18:14:38.742754   62207 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19643-8806/kubeconfig: {Name:mk850f3e2f3c2ef1e81c795ae72e22e459e27140 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 18:14:38.742977   62207 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.38 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0914 18:14:38.743138   62207 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0914 18:14:38.743260   62207 addons.go:69] Setting storage-provisioner=true in profile "no-preload-168587"
	I0914 18:14:38.743271   62207 addons.go:69] Setting default-storageclass=true in profile "no-preload-168587"
	I0914 18:14:38.743282   62207 addons.go:234] Setting addon storage-provisioner=true in "no-preload-168587"
	I0914 18:14:38.743290   62207 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-168587"
	W0914 18:14:38.743295   62207 addons.go:243] addon storage-provisioner should already be in state true
	I0914 18:14:38.743306   62207 addons.go:69] Setting metrics-server=true in profile "no-preload-168587"
	I0914 18:14:38.743329   62207 host.go:66] Checking if "no-preload-168587" exists ...
	I0914 18:14:38.743334   62207 addons.go:234] Setting addon metrics-server=true in "no-preload-168587"
	I0914 18:14:38.743362   62207 config.go:182] Loaded profile config "no-preload-168587": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	W0914 18:14:38.743365   62207 addons.go:243] addon metrics-server should already be in state true
	I0914 18:14:38.743442   62207 host.go:66] Checking if "no-preload-168587" exists ...
	I0914 18:14:38.743775   62207 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 18:14:38.743814   62207 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 18:14:38.743843   62207 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 18:14:38.743821   62207 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 18:14:38.743775   62207 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 18:14:38.744070   62207 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 18:14:38.744427   62207 out.go:177] * Verifying Kubernetes components...
	I0914 18:14:38.745716   62207 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 18:14:38.760250   62207 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43795
	I0914 18:14:38.760329   62207 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41097
	I0914 18:14:38.760788   62207 main.go:141] libmachine: () Calling .GetVersion
	I0914 18:14:38.760810   62207 main.go:141] libmachine: () Calling .GetVersion
	I0914 18:14:38.761416   62207 main.go:141] libmachine: Using API Version  1
	I0914 18:14:38.761438   62207 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 18:14:38.761554   62207 main.go:141] libmachine: Using API Version  1
	I0914 18:14:38.761581   62207 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 18:14:38.761829   62207 main.go:141] libmachine: () Calling .GetMachineName
	I0914 18:14:38.761980   62207 main.go:141] libmachine: () Calling .GetMachineName
	I0914 18:14:38.762333   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetState
	I0914 18:14:38.762445   62207 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 18:14:38.762495   62207 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 18:14:38.763295   62207 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44059
	I0914 18:14:38.763767   62207 main.go:141] libmachine: () Calling .GetVersion
	I0914 18:14:38.764256   62207 main.go:141] libmachine: Using API Version  1
	I0914 18:14:38.764285   62207 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 18:14:38.764616   62207 main.go:141] libmachine: () Calling .GetMachineName
	I0914 18:14:38.765095   62207 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 18:14:38.765131   62207 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 18:14:38.765525   62207 addons.go:234] Setting addon default-storageclass=true in "no-preload-168587"
	W0914 18:14:38.765544   62207 addons.go:243] addon default-storageclass should already be in state true
	I0914 18:14:38.765568   62207 host.go:66] Checking if "no-preload-168587" exists ...
	I0914 18:14:38.765798   62207 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 18:14:38.765837   62207 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 18:14:38.782208   62207 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42659
	I0914 18:14:38.782527   62207 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41381
	I0914 18:14:38.782564   62207 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33835
	I0914 18:14:38.782679   62207 main.go:141] libmachine: () Calling .GetVersion
	I0914 18:14:38.782943   62207 main.go:141] libmachine: () Calling .GetVersion
	I0914 18:14:38.782973   62207 main.go:141] libmachine: () Calling .GetVersion
	I0914 18:14:38.783413   62207 main.go:141] libmachine: Using API Version  1
	I0914 18:14:38.783433   62207 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 18:14:38.783554   62207 main.go:141] libmachine: Using API Version  1
	I0914 18:14:38.783566   62207 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 18:14:38.783573   62207 main.go:141] libmachine: Using API Version  1
	I0914 18:14:38.783585   62207 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 18:14:38.783956   62207 main.go:141] libmachine: () Calling .GetMachineName
	I0914 18:14:38.783964   62207 main.go:141] libmachine: () Calling .GetMachineName
	I0914 18:14:38.784444   62207 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 18:14:38.784482   62207 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 18:14:38.784639   62207 main.go:141] libmachine: () Calling .GetMachineName
	I0914 18:14:38.784666   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetState
	I0914 18:14:38.784806   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetState
	I0914 18:14:38.786340   62207 main.go:141] libmachine: (no-preload-168587) Calling .DriverName
	I0914 18:14:38.786797   62207 main.go:141] libmachine: (no-preload-168587) Calling .DriverName
	I0914 18:14:38.788188   62207 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 18:14:38.788195   62207 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0914 18:14:38.789239   62207 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0914 18:14:38.789254   62207 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0914 18:14:38.789273   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetSSHHostname
	I0914 18:14:38.789338   62207 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0914 18:14:38.789347   62207 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0914 18:14:38.789358   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetSSHHostname
	I0914 18:14:38.792968   62207 main.go:141] libmachine: (no-preload-168587) DBG | domain no-preload-168587 has defined MAC address 52:54:00:4c:40:8a in network mk-no-preload-168587
	I0914 18:14:38.793521   62207 main.go:141] libmachine: (no-preload-168587) DBG | domain no-preload-168587 has defined MAC address 52:54:00:4c:40:8a in network mk-no-preload-168587
	I0914 18:14:38.793853   62207 main.go:141] libmachine: (no-preload-168587) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:40:8a", ip: ""} in network mk-no-preload-168587: {Iface:virbr2 ExpiryTime:2024-09-14 18:59:50 +0000 UTC Type:0 Mac:52:54:00:4c:40:8a Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:no-preload-168587 Clientid:01:52:54:00:4c:40:8a}
	I0914 18:14:38.793894   62207 main.go:141] libmachine: (no-preload-168587) DBG | domain no-preload-168587 has defined IP address 192.168.39.38 and MAC address 52:54:00:4c:40:8a in network mk-no-preload-168587
	I0914 18:14:38.794037   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetSSHPort
	I0914 18:14:38.794097   62207 main.go:141] libmachine: (no-preload-168587) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:40:8a", ip: ""} in network mk-no-preload-168587: {Iface:virbr2 ExpiryTime:2024-09-14 18:59:50 +0000 UTC Type:0 Mac:52:54:00:4c:40:8a Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:no-preload-168587 Clientid:01:52:54:00:4c:40:8a}
	I0914 18:14:38.794107   62207 main.go:141] libmachine: (no-preload-168587) DBG | domain no-preload-168587 has defined IP address 192.168.39.38 and MAC address 52:54:00:4c:40:8a in network mk-no-preload-168587
	I0914 18:14:38.794258   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetSSHKeyPath
	I0914 18:14:38.794351   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetSSHPort
	I0914 18:14:38.794499   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetSSHKeyPath
	I0914 18:14:38.794531   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetSSHUsername
	I0914 18:14:38.794635   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetSSHUsername
	I0914 18:14:38.794716   62207 sshutil.go:53] new ssh client: &{IP:192.168.39.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/no-preload-168587/id_rsa Username:docker}
	I0914 18:14:38.794777   62207 sshutil.go:53] new ssh client: &{IP:192.168.39.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/no-preload-168587/id_rsa Username:docker}
	I0914 18:14:38.827254   62207 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38643
	I0914 18:14:38.827852   62207 main.go:141] libmachine: () Calling .GetVersion
	I0914 18:14:38.828434   62207 main.go:141] libmachine: Using API Version  1
	I0914 18:14:38.828460   62207 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 18:14:38.828837   62207 main.go:141] libmachine: () Calling .GetMachineName
	I0914 18:14:38.829088   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetState
	I0914 18:14:38.830820   62207 main.go:141] libmachine: (no-preload-168587) Calling .DriverName
	I0914 18:14:38.831031   62207 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0914 18:14:38.831048   62207 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0914 18:14:38.831067   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetSSHHostname
	I0914 18:14:38.833822   62207 main.go:141] libmachine: (no-preload-168587) DBG | domain no-preload-168587 has defined MAC address 52:54:00:4c:40:8a in network mk-no-preload-168587
	I0914 18:14:38.834242   62207 main.go:141] libmachine: (no-preload-168587) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:40:8a", ip: ""} in network mk-no-preload-168587: {Iface:virbr2 ExpiryTime:2024-09-14 18:59:50 +0000 UTC Type:0 Mac:52:54:00:4c:40:8a Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:no-preload-168587 Clientid:01:52:54:00:4c:40:8a}
	I0914 18:14:38.834282   62207 main.go:141] libmachine: (no-preload-168587) DBG | domain no-preload-168587 has defined IP address 192.168.39.38 and MAC address 52:54:00:4c:40:8a in network mk-no-preload-168587
	I0914 18:14:38.834453   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetSSHPort
	I0914 18:14:38.834641   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetSSHKeyPath
	I0914 18:14:38.834794   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetSSHUsername
	I0914 18:14:38.834963   62207 sshutil.go:53] new ssh client: &{IP:192.168.39.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/no-preload-168587/id_rsa Username:docker}
	I0914 18:14:38.920627   62207 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0914 18:14:38.941951   62207 node_ready.go:35] waiting up to 6m0s for node "no-preload-168587" to be "Ready" ...
	I0914 18:14:38.973102   62207 node_ready.go:49] node "no-preload-168587" has status "Ready":"True"
	I0914 18:14:38.973124   62207 node_ready.go:38] duration metric: took 31.146661ms for node "no-preload-168587" to be "Ready" ...
	I0914 18:14:38.973132   62207 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 18:14:38.989712   62207 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-168587" in "kube-system" namespace to be "Ready" ...
	I0914 18:14:39.018196   62207 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0914 18:14:39.018223   62207 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0914 18:14:39.045691   62207 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0914 18:14:39.066249   62207 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0914 18:14:39.066277   62207 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0914 18:14:39.073017   62207 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0914 18:14:39.118360   62207 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0914 18:14:39.118385   62207 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0914 18:14:39.195268   62207 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0914 18:14:39.874924   62207 main.go:141] libmachine: Making call to close driver server
	I0914 18:14:39.874953   62207 main.go:141] libmachine: (no-preload-168587) Calling .Close
	I0914 18:14:39.874950   62207 main.go:141] libmachine: Making call to close driver server
	I0914 18:14:39.875004   62207 main.go:141] libmachine: (no-preload-168587) Calling .Close
	I0914 18:14:39.875398   62207 main.go:141] libmachine: (no-preload-168587) DBG | Closing plugin on server side
	I0914 18:14:39.875406   62207 main.go:141] libmachine: Successfully made call to close driver server
	I0914 18:14:39.875457   62207 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 18:14:39.875466   62207 main.go:141] libmachine: Making call to close driver server
	I0914 18:14:39.875476   62207 main.go:141] libmachine: (no-preload-168587) Calling .Close
	I0914 18:14:39.875406   62207 main.go:141] libmachine: (no-preload-168587) DBG | Closing plugin on server side
	I0914 18:14:39.875430   62207 main.go:141] libmachine: Successfully made call to close driver server
	I0914 18:14:39.875598   62207 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 18:14:39.875609   62207 main.go:141] libmachine: Making call to close driver server
	I0914 18:14:39.875631   62207 main.go:141] libmachine: (no-preload-168587) Calling .Close
	I0914 18:14:39.875914   62207 main.go:141] libmachine: (no-preload-168587) DBG | Closing plugin on server side
	I0914 18:14:39.875916   62207 main.go:141] libmachine: Successfully made call to close driver server
	I0914 18:14:39.875934   62207 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 18:14:39.875939   62207 main.go:141] libmachine: (no-preload-168587) DBG | Closing plugin on server side
	I0914 18:14:39.875959   62207 main.go:141] libmachine: Successfully made call to close driver server
	I0914 18:14:39.875966   62207 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 18:14:39.929860   62207 main.go:141] libmachine: Making call to close driver server
	I0914 18:14:39.929881   62207 main.go:141] libmachine: (no-preload-168587) Calling .Close
	I0914 18:14:39.930191   62207 main.go:141] libmachine: Successfully made call to close driver server
	I0914 18:14:39.930211   62207 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 18:14:40.139888   62207 main.go:141] libmachine: Making call to close driver server
	I0914 18:14:40.139918   62207 main.go:141] libmachine: (no-preload-168587) Calling .Close
	I0914 18:14:40.140256   62207 main.go:141] libmachine: Successfully made call to close driver server
	I0914 18:14:40.140273   62207 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 18:14:40.140282   62207 main.go:141] libmachine: Making call to close driver server
	I0914 18:14:40.140289   62207 main.go:141] libmachine: (no-preload-168587) Calling .Close
	I0914 18:14:40.140608   62207 main.go:141] libmachine: Successfully made call to close driver server
	I0914 18:14:40.140620   62207 main.go:141] libmachine: (no-preload-168587) DBG | Closing plugin on server side
	I0914 18:14:40.140630   62207 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 18:14:40.140646   62207 addons.go:475] Verifying addon metrics-server=true in "no-preload-168587"
	I0914 18:14:40.142461   62207 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0914 18:14:40.143818   62207 addons.go:510] duration metric: took 1.400695696s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0914 18:14:40.996599   62207 pod_ready.go:103] pod "etcd-no-preload-168587" in "kube-system" namespace has status "Ready":"False"
	I0914 18:14:43.498584   62207 pod_ready.go:103] pod "etcd-no-preload-168587" in "kube-system" namespace has status "Ready":"False"
	I0914 18:14:45.995938   62207 pod_ready.go:93] pod "etcd-no-preload-168587" in "kube-system" namespace has status "Ready":"True"
	I0914 18:14:45.995971   62207 pod_ready.go:82] duration metric: took 7.006220602s for pod "etcd-no-preload-168587" in "kube-system" namespace to be "Ready" ...
	I0914 18:14:45.995984   62207 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-168587" in "kube-system" namespace to be "Ready" ...
	I0914 18:14:46.000589   62207 pod_ready.go:93] pod "kube-apiserver-no-preload-168587" in "kube-system" namespace has status "Ready":"True"
	I0914 18:14:46.000609   62207 pod_ready.go:82] duration metric: took 4.618617ms for pod "kube-apiserver-no-preload-168587" in "kube-system" namespace to be "Ready" ...
	I0914 18:14:46.000620   62207 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-168587" in "kube-system" namespace to be "Ready" ...
	I0914 18:14:46.004865   62207 pod_ready.go:93] pod "kube-controller-manager-no-preload-168587" in "kube-system" namespace has status "Ready":"True"
	I0914 18:14:46.004886   62207 pod_ready.go:82] duration metric: took 4.259787ms for pod "kube-controller-manager-no-preload-168587" in "kube-system" namespace to be "Ready" ...
	I0914 18:14:46.004895   62207 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-xdj6b" in "kube-system" namespace to be "Ready" ...
	I0914 18:14:46.009225   62207 pod_ready.go:93] pod "kube-proxy-xdj6b" in "kube-system" namespace has status "Ready":"True"
	I0914 18:14:46.009243   62207 pod_ready.go:82] duration metric: took 4.343161ms for pod "kube-proxy-xdj6b" in "kube-system" namespace to be "Ready" ...
	I0914 18:14:46.009250   62207 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-168587" in "kube-system" namespace to be "Ready" ...
	I0914 18:14:46.013312   62207 pod_ready.go:93] pod "kube-scheduler-no-preload-168587" in "kube-system" namespace has status "Ready":"True"
	I0914 18:14:46.013330   62207 pod_ready.go:82] duration metric: took 4.073817ms for pod "kube-scheduler-no-preload-168587" in "kube-system" namespace to be "Ready" ...
	I0914 18:14:46.013337   62207 pod_ready.go:39] duration metric: took 7.040196066s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 18:14:46.013358   62207 api_server.go:52] waiting for apiserver process to appear ...
	I0914 18:14:46.013403   62207 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:14:46.029881   62207 api_server.go:72] duration metric: took 7.286871398s to wait for apiserver process to appear ...
	I0914 18:14:46.029912   62207 api_server.go:88] waiting for apiserver healthz status ...
	I0914 18:14:46.029937   62207 api_server.go:253] Checking apiserver healthz at https://192.168.39.38:8443/healthz ...
	I0914 18:14:46.034236   62207 api_server.go:279] https://192.168.39.38:8443/healthz returned 200:
	ok
	I0914 18:14:46.035287   62207 api_server.go:141] control plane version: v1.31.1
	I0914 18:14:46.035305   62207 api_server.go:131] duration metric: took 5.385499ms to wait for apiserver health ...
	I0914 18:14:46.035314   62207 system_pods.go:43] waiting for kube-system pods to appear ...
	I0914 18:14:46.196765   62207 system_pods.go:59] 9 kube-system pods found
	I0914 18:14:46.196796   62207 system_pods.go:61] "coredns-7c65d6cfc9-nzpdb" [acd2d488-301e-4d00-a17a-0e06ea5d9691] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0914 18:14:46.196804   62207 system_pods.go:61] "coredns-7c65d6cfc9-qrgr9" [31b611b3-d861-451f-8c17-30bed52994a0] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0914 18:14:46.196810   62207 system_pods.go:61] "etcd-no-preload-168587" [8d3dc146-eb39-4ed3-9409-63ea08fffd39] Running
	I0914 18:14:46.196816   62207 system_pods.go:61] "kube-apiserver-no-preload-168587" [9a68e520-b40c-42d2-9052-757c7c99a958] Running
	I0914 18:14:46.196821   62207 system_pods.go:61] "kube-controller-manager-no-preload-168587" [47d23e53-0f6b-45bd-8288-3101a35b1827] Running
	I0914 18:14:46.196824   62207 system_pods.go:61] "kube-proxy-xdj6b" [d3080090-4f40-49e1-9c3e-ccceb37cc952] Running
	I0914 18:14:46.196827   62207 system_pods.go:61] "kube-scheduler-no-preload-168587" [e7fe55f8-d203-4de2-9594-15f858453434] Running
	I0914 18:14:46.196832   62207 system_pods.go:61] "metrics-server-6867b74b74-cmcz4" [24cea6b3-a107-4110-ac29-88389b55bbdc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 18:14:46.196835   62207 system_pods.go:61] "storage-provisioner" [57b6d85d-fc04-42da-9452-3f24824b8377] Running
	I0914 18:14:46.196842   62207 system_pods.go:74] duration metric: took 161.510322ms to wait for pod list to return data ...
	I0914 18:14:46.196853   62207 default_sa.go:34] waiting for default service account to be created ...
	I0914 18:14:46.394399   62207 default_sa.go:45] found service account: "default"
	I0914 18:14:46.394428   62207 default_sa.go:55] duration metric: took 197.566762ms for default service account to be created ...
	I0914 18:14:46.394443   62207 system_pods.go:116] waiting for k8s-apps to be running ...
	I0914 18:14:46.596421   62207 system_pods.go:86] 9 kube-system pods found
	I0914 18:14:46.596454   62207 system_pods.go:89] "coredns-7c65d6cfc9-nzpdb" [acd2d488-301e-4d00-a17a-0e06ea5d9691] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0914 18:14:46.596462   62207 system_pods.go:89] "coredns-7c65d6cfc9-qrgr9" [31b611b3-d861-451f-8c17-30bed52994a0] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0914 18:14:46.596468   62207 system_pods.go:89] "etcd-no-preload-168587" [8d3dc146-eb39-4ed3-9409-63ea08fffd39] Running
	I0914 18:14:46.596473   62207 system_pods.go:89] "kube-apiserver-no-preload-168587" [9a68e520-b40c-42d2-9052-757c7c99a958] Running
	I0914 18:14:46.596477   62207 system_pods.go:89] "kube-controller-manager-no-preload-168587" [47d23e53-0f6b-45bd-8288-3101a35b1827] Running
	I0914 18:14:46.596480   62207 system_pods.go:89] "kube-proxy-xdj6b" [d3080090-4f40-49e1-9c3e-ccceb37cc952] Running
	I0914 18:14:46.596483   62207 system_pods.go:89] "kube-scheduler-no-preload-168587" [e7fe55f8-d203-4de2-9594-15f858453434] Running
	I0914 18:14:46.596502   62207 system_pods.go:89] "metrics-server-6867b74b74-cmcz4" [24cea6b3-a107-4110-ac29-88389b55bbdc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 18:14:46.596509   62207 system_pods.go:89] "storage-provisioner" [57b6d85d-fc04-42da-9452-3f24824b8377] Running
	I0914 18:14:46.596517   62207 system_pods.go:126] duration metric: took 202.067078ms to wait for k8s-apps to be running ...
	I0914 18:14:46.596527   62207 system_svc.go:44] waiting for kubelet service to be running ....
	I0914 18:14:46.596571   62207 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 18:14:46.611796   62207 system_svc.go:56] duration metric: took 15.259464ms WaitForService to wait for kubelet
	I0914 18:14:46.611837   62207 kubeadm.go:582] duration metric: took 7.868833105s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0914 18:14:46.611858   62207 node_conditions.go:102] verifying NodePressure condition ...
	I0914 18:14:46.794731   62207 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0914 18:14:46.794758   62207 node_conditions.go:123] node cpu capacity is 2
	I0914 18:14:46.794767   62207 node_conditions.go:105] duration metric: took 182.903835ms to run NodePressure ...
	I0914 18:14:46.794777   62207 start.go:241] waiting for startup goroutines ...
	I0914 18:14:46.794783   62207 start.go:246] waiting for cluster config update ...
	I0914 18:14:46.794793   62207 start.go:255] writing updated cluster config ...
	I0914 18:14:46.795051   62207 ssh_runner.go:195] Run: rm -f paused
	I0914 18:14:46.845803   62207 start.go:600] kubectl: 1.31.0, cluster: 1.31.1 (minor skew: 0)
	I0914 18:14:46.847399   62207 out.go:177] * Done! kubectl is now configured to use "no-preload-168587" cluster and "default" namespace by default
	I0914 18:14:53.509475   62996 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0914 18:14:53.509669   62996 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0914 18:14:53.509699   62996 kubeadm.go:310] 
	I0914 18:14:53.509778   62996 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0914 18:14:53.509838   62996 kubeadm.go:310] 		timed out waiting for the condition
	I0914 18:14:53.509849   62996 kubeadm.go:310] 
	I0914 18:14:53.509901   62996 kubeadm.go:310] 	This error is likely caused by:
	I0914 18:14:53.509966   62996 kubeadm.go:310] 		- The kubelet is not running
	I0914 18:14:53.510115   62996 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0914 18:14:53.510126   62996 kubeadm.go:310] 
	I0914 18:14:53.510293   62996 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0914 18:14:53.510346   62996 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0914 18:14:53.510386   62996 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0914 18:14:53.510394   62996 kubeadm.go:310] 
	I0914 18:14:53.510487   62996 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0914 18:14:53.510567   62996 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0914 18:14:53.510582   62996 kubeadm.go:310] 
	I0914 18:14:53.510758   62996 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0914 18:14:53.510852   62996 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0914 18:14:53.510953   62996 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0914 18:14:53.511074   62996 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0914 18:14:53.511085   62996 kubeadm.go:310] 
	I0914 18:14:53.511727   62996 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0914 18:14:53.511824   62996 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0914 18:14:53.511904   62996 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0914 18:14:53.512051   62996 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0914 18:14:53.512098   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0914 18:14:53.965324   62996 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 18:14:53.982028   62996 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0914 18:14:53.993640   62996 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0914 18:14:53.993674   62996 kubeadm.go:157] found existing configuration files:
	
	I0914 18:14:53.993745   62996 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0914 18:14:54.004600   62996 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0914 18:14:54.004669   62996 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0914 18:14:54.015315   62996 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0914 18:14:54.025727   62996 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0914 18:14:54.025795   62996 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0914 18:14:54.035619   62996 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0914 18:14:54.044936   62996 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0914 18:14:54.045003   62996 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0914 18:14:54.055091   62996 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0914 18:14:54.064576   62996 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0914 18:14:54.064630   62996 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0914 18:14:54.074698   62996 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0914 18:14:54.143625   62996 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0914 18:14:54.143712   62996 kubeadm.go:310] [preflight] Running pre-flight checks
	I0914 18:14:54.289361   62996 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0914 18:14:54.289488   62996 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0914 18:14:54.289629   62996 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0914 18:14:54.479052   62996 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0914 18:14:54.481175   62996 out.go:235]   - Generating certificates and keys ...
	I0914 18:14:54.481284   62996 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0914 18:14:54.481391   62996 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0914 18:14:54.481469   62996 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0914 18:14:54.481522   62996 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0914 18:14:54.481585   62996 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0914 18:14:54.481631   62996 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0914 18:14:54.481685   62996 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0914 18:14:54.481737   62996 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0914 18:14:54.481829   62996 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0914 18:14:54.481926   62996 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0914 18:14:54.481977   62996 kubeadm.go:310] [certs] Using the existing "sa" key
	I0914 18:14:54.482063   62996 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0914 18:14:54.695002   62996 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0914 18:14:54.850598   62996 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0914 18:14:54.964590   62996 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0914 18:14:55.108047   62996 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0914 18:14:55.126530   62996 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0914 18:14:55.128690   62996 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0914 18:14:55.128760   62996 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0914 18:14:55.272139   62996 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0914 18:14:55.274365   62996 out.go:235]   - Booting up control plane ...
	I0914 18:14:55.274529   62996 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0914 18:14:55.279796   62996 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0914 18:14:55.281097   62996 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0914 18:14:55.281998   62996 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0914 18:14:55.285620   62996 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0914 18:15:35.288294   62996 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0914 18:15:35.288485   62996 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0914 18:15:35.288693   62996 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0914 18:15:40.289032   62996 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0914 18:15:40.289327   62996 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0914 18:15:50.289795   62996 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0914 18:15:50.290023   62996 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0914 18:16:10.291201   62996 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0914 18:16:10.291427   62996 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0914 18:16:50.292253   62996 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0914 18:16:50.292481   62996 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0914 18:16:50.292503   62996 kubeadm.go:310] 
	I0914 18:16:50.292554   62996 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0914 18:16:50.292606   62996 kubeadm.go:310] 		timed out waiting for the condition
	I0914 18:16:50.292615   62996 kubeadm.go:310] 
	I0914 18:16:50.292654   62996 kubeadm.go:310] 	This error is likely caused by:
	I0914 18:16:50.292685   62996 kubeadm.go:310] 		- The kubelet is not running
	I0914 18:16:50.292773   62996 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0914 18:16:50.292780   62996 kubeadm.go:310] 
	I0914 18:16:50.292912   62996 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0914 18:16:50.292953   62996 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0914 18:16:50.292993   62996 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0914 18:16:50.293022   62996 kubeadm.go:310] 
	I0914 18:16:50.293176   62996 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0914 18:16:50.293293   62996 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0914 18:16:50.293308   62996 kubeadm.go:310] 
	I0914 18:16:50.293470   62996 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0914 18:16:50.293602   62996 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0914 18:16:50.293709   62996 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0914 18:16:50.293810   62996 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0914 18:16:50.293830   62996 kubeadm.go:310] 
	I0914 18:16:50.294646   62996 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0914 18:16:50.294759   62996 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0914 18:16:50.294871   62996 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0914 18:16:50.294910   62996 kubeadm.go:394] duration metric: took 7m56.82551772s to StartCluster
	I0914 18:16:50.294961   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:16:50.295021   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:16:50.341859   62996 cri.go:89] found id: ""
	I0914 18:16:50.341894   62996 logs.go:276] 0 containers: []
	W0914 18:16:50.341908   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:16:50.341916   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:16:50.341983   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:16:50.380725   62996 cri.go:89] found id: ""
	I0914 18:16:50.380755   62996 logs.go:276] 0 containers: []
	W0914 18:16:50.380766   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:16:50.380773   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:16:50.380842   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:16:50.415978   62996 cri.go:89] found id: ""
	I0914 18:16:50.416003   62996 logs.go:276] 0 containers: []
	W0914 18:16:50.416012   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:16:50.416017   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:16:50.416065   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:16:50.452823   62996 cri.go:89] found id: ""
	I0914 18:16:50.452859   62996 logs.go:276] 0 containers: []
	W0914 18:16:50.452872   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:16:50.452882   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:16:50.452939   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:16:50.487240   62996 cri.go:89] found id: ""
	I0914 18:16:50.487272   62996 logs.go:276] 0 containers: []
	W0914 18:16:50.487283   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:16:50.487291   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:16:50.487353   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:16:50.520690   62996 cri.go:89] found id: ""
	I0914 18:16:50.520719   62996 logs.go:276] 0 containers: []
	W0914 18:16:50.520728   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:16:50.520735   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:16:50.520783   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:16:50.558150   62996 cri.go:89] found id: ""
	I0914 18:16:50.558191   62996 logs.go:276] 0 containers: []
	W0914 18:16:50.558200   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:16:50.558206   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:16:50.558266   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:16:50.595843   62996 cri.go:89] found id: ""
	I0914 18:16:50.595879   62996 logs.go:276] 0 containers: []
	W0914 18:16:50.595893   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:16:50.595905   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:16:50.595920   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:16:50.650623   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:16:50.650659   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:16:50.664991   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:16:50.665018   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:16:50.747876   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:16:50.747899   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:16:50.747915   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:16:50.849314   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:16:50.849354   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0914 18:16:50.889101   62996 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0914 18:16:50.889181   62996 out.go:270] * 
	W0914 18:16:50.889263   62996 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0914 18:16:50.889287   62996 out.go:270] * 
	W0914 18:16:50.890531   62996 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0914 18:16:50.893666   62996 out.go:201] 
	W0914 18:16:50.894916   62996 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0914 18:16:50.894958   62996 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0914 18:16:50.894991   62996 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0914 18:16:50.896591   62996 out.go:201] 
	
	
	==> CRI-O <==
	Sep 14 18:25:56 old-k8s-version-556121 crio[630]: time="2024-09-14 18:25:56.487699767Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726338356487660836,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=38c6c89b-9c3f-4d0f-8ec3-f713ddda0293 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 14 18:25:56 old-k8s-version-556121 crio[630]: time="2024-09-14 18:25:56.488474653Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ed99a146-372b-4af0-86bf-8098450aade8 name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 18:25:56 old-k8s-version-556121 crio[630]: time="2024-09-14 18:25:56.488578172Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ed99a146-372b-4af0-86bf-8098450aade8 name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 18:25:56 old-k8s-version-556121 crio[630]: time="2024-09-14 18:25:56.488615637Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=ed99a146-372b-4af0-86bf-8098450aade8 name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 18:25:56 old-k8s-version-556121 crio[630]: time="2024-09-14 18:25:56.520426431Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ae8b4697-0636-4141-94b2-0558e2a05374 name=/runtime.v1.RuntimeService/Version
	Sep 14 18:25:56 old-k8s-version-556121 crio[630]: time="2024-09-14 18:25:56.520527652Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ae8b4697-0636-4141-94b2-0558e2a05374 name=/runtime.v1.RuntimeService/Version
	Sep 14 18:25:56 old-k8s-version-556121 crio[630]: time="2024-09-14 18:25:56.522074102Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f9de9331-2db0-4907-84de-f14482b3dfed name=/runtime.v1.ImageService/ImageFsInfo
	Sep 14 18:25:56 old-k8s-version-556121 crio[630]: time="2024-09-14 18:25:56.522518699Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726338356522487414,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f9de9331-2db0-4907-84de-f14482b3dfed name=/runtime.v1.ImageService/ImageFsInfo
	Sep 14 18:25:56 old-k8s-version-556121 crio[630]: time="2024-09-14 18:25:56.523110309Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a72e748b-1401-4bc8-9f27-e48e80198677 name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 18:25:56 old-k8s-version-556121 crio[630]: time="2024-09-14 18:25:56.523183446Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a72e748b-1401-4bc8-9f27-e48e80198677 name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 18:25:56 old-k8s-version-556121 crio[630]: time="2024-09-14 18:25:56.523227914Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=a72e748b-1401-4bc8-9f27-e48e80198677 name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 18:25:56 old-k8s-version-556121 crio[630]: time="2024-09-14 18:25:56.555206730Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=393a1041-f65c-4925-87da-04fa53f5dbb1 name=/runtime.v1.RuntimeService/Version
	Sep 14 18:25:56 old-k8s-version-556121 crio[630]: time="2024-09-14 18:25:56.555299518Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=393a1041-f65c-4925-87da-04fa53f5dbb1 name=/runtime.v1.RuntimeService/Version
	Sep 14 18:25:56 old-k8s-version-556121 crio[630]: time="2024-09-14 18:25:56.556769006Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=cc77a467-9595-4ca2-919b-9c9e75a7bcde name=/runtime.v1.ImageService/ImageFsInfo
	Sep 14 18:25:56 old-k8s-version-556121 crio[630]: time="2024-09-14 18:25:56.557239627Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726338356557208320,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=cc77a467-9595-4ca2-919b-9c9e75a7bcde name=/runtime.v1.ImageService/ImageFsInfo
	Sep 14 18:25:56 old-k8s-version-556121 crio[630]: time="2024-09-14 18:25:56.557808096Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5caaa688-1721-41e1-a521-ba621ddf65fb name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 18:25:56 old-k8s-version-556121 crio[630]: time="2024-09-14 18:25:56.557868509Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5caaa688-1721-41e1-a521-ba621ddf65fb name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 18:25:56 old-k8s-version-556121 crio[630]: time="2024-09-14 18:25:56.557904907Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=5caaa688-1721-41e1-a521-ba621ddf65fb name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 18:25:56 old-k8s-version-556121 crio[630]: time="2024-09-14 18:25:56.591341843Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=4dd5a29b-13c7-486e-a9dd-85e63a6f88e6 name=/runtime.v1.RuntimeService/Version
	Sep 14 18:25:56 old-k8s-version-556121 crio[630]: time="2024-09-14 18:25:56.591436937Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=4dd5a29b-13c7-486e-a9dd-85e63a6f88e6 name=/runtime.v1.RuntimeService/Version
	Sep 14 18:25:56 old-k8s-version-556121 crio[630]: time="2024-09-14 18:25:56.592916565Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d116464c-4f2c-4dc5-90b4-3fd26ddaaab3 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 14 18:25:56 old-k8s-version-556121 crio[630]: time="2024-09-14 18:25:56.593381632Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726338356593358874,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d116464c-4f2c-4dc5-90b4-3fd26ddaaab3 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 14 18:25:56 old-k8s-version-556121 crio[630]: time="2024-09-14 18:25:56.593916563Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=eb32015d-8032-4666-a926-557dd373e97e name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 18:25:56 old-k8s-version-556121 crio[630]: time="2024-09-14 18:25:56.594047087Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=eb32015d-8032-4666-a926-557dd373e97e name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 18:25:56 old-k8s-version-556121 crio[630]: time="2024-09-14 18:25:56.594085912Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=eb32015d-8032-4666-a926-557dd373e97e name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Sep14 18:08] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.051703] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.041033] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.818277] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.926515] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.580247] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000014] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +9.280362] systemd-fstab-generator[556]: Ignoring "noauto" option for root device
	[  +0.069665] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.058885] systemd-fstab-generator[569]: Ignoring "noauto" option for root device
	[  +0.193036] systemd-fstab-generator[583]: Ignoring "noauto" option for root device
	[  +0.156845] systemd-fstab-generator[595]: Ignoring "noauto" option for root device
	[  +0.249799] systemd-fstab-generator[620]: Ignoring "noauto" option for root device
	[  +6.598174] systemd-fstab-generator[874]: Ignoring "noauto" option for root device
	[  +0.066263] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.657757] systemd-fstab-generator[999]: Ignoring "noauto" option for root device
	[Sep14 18:09] kauditd_printk_skb: 46 callbacks suppressed
	[Sep14 18:12] systemd-fstab-generator[5028]: Ignoring "noauto" option for root device
	[Sep14 18:14] systemd-fstab-generator[5317]: Ignoring "noauto" option for root device
	[  +0.068697] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 18:25:56 up 17 min,  0 users,  load average: 0.03, 0.08, 0.05
	Linux old-k8s-version-556121 5.10.207 #1 SMP Sat Sep 14 07:35:46 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Sep 14 18:25:51 old-k8s-version-556121 kubelet[6487]: net.cgoLookupIPCNAME(0x48ab5d6, 0x3, 0xc0004b3800, 0x1f, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
	Sep 14 18:25:51 old-k8s-version-556121 kubelet[6487]:         /usr/local/go/src/net/cgo_unix.go:161 +0x16b
	Sep 14 18:25:51 old-k8s-version-556121 kubelet[6487]: net.cgoIPLookup(0xc0001b6e40, 0x48ab5d6, 0x3, 0xc0004b3800, 0x1f)
	Sep 14 18:25:51 old-k8s-version-556121 kubelet[6487]:         /usr/local/go/src/net/cgo_unix.go:218 +0x67
	Sep 14 18:25:51 old-k8s-version-556121 kubelet[6487]: created by net.cgoLookupIP
	Sep 14 18:25:51 old-k8s-version-556121 kubelet[6487]:         /usr/local/go/src/net/cgo_unix.go:228 +0xc7
	Sep 14 18:25:51 old-k8s-version-556121 kubelet[6487]: goroutine 110 [runnable]:
	Sep 14 18:25:51 old-k8s-version-556121 kubelet[6487]: k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.(*controlBuffer).get(0xc000cd5090, 0x1, 0x0, 0x0, 0x0, 0x0)
	Sep 14 18:25:51 old-k8s-version-556121 kubelet[6487]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/controlbuf.go:395 +0x125
	Sep 14 18:25:51 old-k8s-version-556121 kubelet[6487]: k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.(*loopyWriter).run(0xc000188180, 0x0, 0x0)
	Sep 14 18:25:51 old-k8s-version-556121 kubelet[6487]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/controlbuf.go:513 +0x1d3
	Sep 14 18:25:51 old-k8s-version-556121 kubelet[6487]: k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.newHTTP2Client.func3(0xc0008b2e00)
	Sep 14 18:25:51 old-k8s-version-556121 kubelet[6487]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/http2_client.go:346 +0x7b
	Sep 14 18:25:51 old-k8s-version-556121 kubelet[6487]: created by k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.newHTTP2Client
	Sep 14 18:25:51 old-k8s-version-556121 kubelet[6487]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/http2_client.go:344 +0xefc
	Sep 14 18:25:51 old-k8s-version-556121 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Sep 14 18:25:51 old-k8s-version-556121 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Sep 14 18:25:51 old-k8s-version-556121 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 114.
	Sep 14 18:25:51 old-k8s-version-556121 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Sep 14 18:25:51 old-k8s-version-556121 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Sep 14 18:25:51 old-k8s-version-556121 kubelet[6496]: I0914 18:25:51.985719    6496 server.go:416] Version: v1.20.0
	Sep 14 18:25:51 old-k8s-version-556121 kubelet[6496]: I0914 18:25:51.986081    6496 server.go:837] Client rotation is on, will bootstrap in background
	Sep 14 18:25:51 old-k8s-version-556121 kubelet[6496]: I0914 18:25:51.988132    6496 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Sep 14 18:25:51 old-k8s-version-556121 kubelet[6496]: I0914 18:25:51.989162    6496 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	Sep 14 18:25:51 old-k8s-version-556121 kubelet[6496]: W0914 18:25:51.989311    6496 manager.go:159] Cannot detect current cgroup on cgroup v2
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-556121 -n old-k8s-version-556121
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-556121 -n old-k8s-version-556121: exit status 2 (221.634323ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-556121" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (543.59s)
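The kubeadm wait-control-plane failure in the log above comes with minikube's own hint: check 'journalctl -xeu kubelet' and try passing a kubelet cgroup-driver override. A minimal sketch of retrying the same profile with that flag, assuming a cgroup-driver mismatch really is the cause (this report does not confirm it), would be:

  out/minikube-linux-amd64 start -p old-k8s-version-556121 --driver=kvm2 --container-runtime=crio \
    --kubernetes-version=v1.20.0 --extra-config=kubelet.cgroup-driver=systemd

Profile name, driver, runtime and Kubernetes version are taken from the audit log above; only --extra-config=kubelet.cgroup-driver=systemd is added, and it is the exact flag quoted in minikube's suggestion.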

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (442.31s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-044534 -n embed-certs-044534
start_stop_delete_test.go:287: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-09-14 18:30:01.741582356 +0000 UTC m=+6376.263316323
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-044534 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context embed-certs-044534 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.75µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context embed-certs-044534 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
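To reproduce this check by hand against the same cluster, a hedged sketch (the kubectl context, namespace, label selector and deployment name are taken from the test output above; the jsonpath expression is illustrative only):

  kubectl --context embed-certs-044534 get pods -n kubernetes-dashboard -l k8s-app=kubernetes-dashboard
  kubectl --context embed-certs-044534 get deploy dashboard-metrics-scraper -n kubernetes-dashboard \
    -o jsonpath='{.spec.template.spec.containers[*].image}'

The second command prints the container images of the dashboard-metrics-scraper deployment, which the test expects to contain registry.k8s.io/echoserver:1.4.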
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-044534 -n embed-certs-044534
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-044534 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-044534 logs -n 25: (1.372253135s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| start   | -p kubernetes-upgrade-470019                           | kubernetes-upgrade-470019    | jenkins | v1.34.0 | 14 Sep 24 18:02 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-470019                           | kubernetes-upgrade-470019    | jenkins | v1.34.0 | 14 Sep 24 18:02 UTC | 14 Sep 24 18:02 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-470019                           | kubernetes-upgrade-470019    | jenkins | v1.34.0 | 14 Sep 24 18:03 UTC | 14 Sep 24 18:03 UTC |
	| delete  | -p                                                     | disable-driver-mounts-444413 | jenkins | v1.34.0 | 14 Sep 24 18:03 UTC | 14 Sep 24 18:03 UTC |
	|         | disable-driver-mounts-444413                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-243449 | jenkins | v1.34.0 | 14 Sep 24 18:03 UTC | 14 Sep 24 18:03 UTC |
	|         | default-k8s-diff-port-243449                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-556121        | old-k8s-version-556121       | jenkins | v1.34.0 | 14 Sep 24 18:03 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-168587                  | no-preload-168587            | jenkins | v1.34.0 | 14 Sep 24 18:03 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-168587                                   | no-preload-168587            | jenkins | v1.34.0 | 14 Sep 24 18:03 UTC | 14 Sep 24 18:14 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-044534                 | embed-certs-044534           | jenkins | v1.34.0 | 14 Sep 24 18:03 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-044534                                  | embed-certs-044534           | jenkins | v1.34.0 | 14 Sep 24 18:04 UTC | 14 Sep 24 18:13 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-243449  | default-k8s-diff-port-243449 | jenkins | v1.34.0 | 14 Sep 24 18:04 UTC | 14 Sep 24 18:04 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-243449 | jenkins | v1.34.0 | 14 Sep 24 18:04 UTC |                     |
	|         | default-k8s-diff-port-243449                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-556121                              | old-k8s-version-556121       | jenkins | v1.34.0 | 14 Sep 24 18:05 UTC | 14 Sep 24 18:05 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-556121             | old-k8s-version-556121       | jenkins | v1.34.0 | 14 Sep 24 18:05 UTC | 14 Sep 24 18:05 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-556121                              | old-k8s-version-556121       | jenkins | v1.34.0 | 14 Sep 24 18:05 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-243449       | default-k8s-diff-port-243449 | jenkins | v1.34.0 | 14 Sep 24 18:06 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-243449 | jenkins | v1.34.0 | 14 Sep 24 18:06 UTC | 14 Sep 24 18:13 UTC |
	|         | default-k8s-diff-port-243449                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-556121                              | old-k8s-version-556121       | jenkins | v1.34.0 | 14 Sep 24 18:28 UTC | 14 Sep 24 18:28 UTC |
	| start   | -p newest-cni-019918 --memory=2200 --alsologtostderr   | newest-cni-019918            | jenkins | v1.34.0 | 14 Sep 24 18:28 UTC | 14 Sep 24 18:29 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| delete  | -p no-preload-168587                                   | no-preload-168587            | jenkins | v1.34.0 | 14 Sep 24 18:29 UTC | 14 Sep 24 18:29 UTC |
	| start   | -p auto-691590 --memory=3072                           | auto-691590                  | jenkins | v1.34.0 | 14 Sep 24 18:29 UTC |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --wait-timeout=15m                                     |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p newest-cni-019918             | newest-cni-019918            | jenkins | v1.34.0 | 14 Sep 24 18:29 UTC | 14 Sep 24 18:29 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-019918                                   | newest-cni-019918            | jenkins | v1.34.0 | 14 Sep 24 18:29 UTC | 14 Sep 24 18:29 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-019918                  | newest-cni-019918            | jenkins | v1.34.0 | 14 Sep 24 18:29 UTC | 14 Sep 24 18:29 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-019918 --memory=2200 --alsologtostderr   | newest-cni-019918            | jenkins | v1.34.0 | 14 Sep 24 18:29 UTC |                     |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/14 18:29:50
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0914 18:29:50.078982   70640 out.go:345] Setting OutFile to fd 1 ...
	I0914 18:29:50.079097   70640 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 18:29:50.079107   70640 out.go:358] Setting ErrFile to fd 2...
	I0914 18:29:50.079112   70640 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 18:29:50.079297   70640 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19643-8806/.minikube/bin
	I0914 18:29:50.079868   70640 out.go:352] Setting JSON to false
	I0914 18:29:50.080733   70640 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":7934,"bootTime":1726330656,"procs":206,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0914 18:29:50.080821   70640 start.go:139] virtualization: kvm guest
	I0914 18:29:50.083070   70640 out.go:177] * [newest-cni-019918] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0914 18:29:50.084497   70640 out.go:177]   - MINIKUBE_LOCATION=19643
	I0914 18:29:50.084496   70640 notify.go:220] Checking for updates...
	I0914 18:29:50.085847   70640 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0914 18:29:50.087219   70640 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19643-8806/kubeconfig
	I0914 18:29:50.088481   70640 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19643-8806/.minikube
	I0914 18:29:50.090014   70640 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0914 18:29:50.091409   70640 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0914 18:29:50.093115   70640 config.go:182] Loaded profile config "newest-cni-019918": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0914 18:29:50.093529   70640 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 18:29:50.093583   70640 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 18:29:50.109012   70640 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35367
	I0914 18:29:50.109516   70640 main.go:141] libmachine: () Calling .GetVersion
	I0914 18:29:50.110094   70640 main.go:141] libmachine: Using API Version  1
	I0914 18:29:50.110116   70640 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 18:29:50.110456   70640 main.go:141] libmachine: () Calling .GetMachineName
	I0914 18:29:50.110660   70640 main.go:141] libmachine: (newest-cni-019918) Calling .DriverName
	I0914 18:29:50.110928   70640 driver.go:394] Setting default libvirt URI to qemu:///system
	I0914 18:29:50.111235   70640 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 18:29:50.111271   70640 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 18:29:50.126946   70640 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41969
	I0914 18:29:50.127378   70640 main.go:141] libmachine: () Calling .GetVersion
	I0914 18:29:50.127819   70640 main.go:141] libmachine: Using API Version  1
	I0914 18:29:50.127843   70640 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 18:29:50.128143   70640 main.go:141] libmachine: () Calling .GetMachineName
	I0914 18:29:50.128308   70640 main.go:141] libmachine: (newest-cni-019918) Calling .DriverName
	I0914 18:29:50.166281   70640 out.go:177] * Using the kvm2 driver based on existing profile
	I0914 18:29:50.167856   70640 start.go:297] selected driver: kvm2
	I0914 18:29:50.167872   70640 start.go:901] validating driver "kvm2" against &{Name:newest-cni-019918 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19643/minikube-v1.34.0-1726281733-19643-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726281268-19643@sha256:cce8be0c1ac4e3d852132008ef1cc1dcf5b79f708d025db83f146ae65db32e8e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.31.1 ClusterName:newest-cni-019918 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.152 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] St
artHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0914 18:29:50.168001   70640 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0914 18:29:50.168840   70640 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 18:29:50.168926   70640 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19643-8806/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0914 18:29:50.184471   70640 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0914 18:29:50.184898   70640 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0914 18:29:50.184931   70640 cni.go:84] Creating CNI manager for ""
	I0914 18:29:50.184969   70640 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0914 18:29:50.185007   70640 start.go:340] cluster config:
	{Name:newest-cni-019918 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19643/minikube-v1.34.0-1726281733-19643-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726281268-19643@sha256:cce8be0c1ac4e3d852132008ef1cc1dcf5b79f708d025db83f146ae65db32e8e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:newest-cni-019918 Namespace:default APIServer
HAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.152 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network
: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0914 18:29:50.185106   70640 iso.go:125] acquiring lock: {Name:mk538f7a4abc7956f82511183088cbfac1e66ebb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 18:29:50.188090   70640 out.go:177] * Starting "newest-cni-019918" primary control-plane node in "newest-cni-019918" cluster
	I0914 18:29:45.782983   70315 main.go:141] libmachine: (auto-691590) DBG | domain auto-691590 has defined MAC address 52:54:00:0d:6c:b4 in network mk-auto-691590
	I0914 18:29:45.783465   70315 main.go:141] libmachine: (auto-691590) DBG | unable to find current IP address of domain auto-691590 in network mk-auto-691590
	I0914 18:29:45.783491   70315 main.go:141] libmachine: (auto-691590) DBG | I0914 18:29:45.783420   70338 retry.go:31] will retry after 2.6561084s: waiting for machine to come up
	I0914 18:29:48.440905   70315 main.go:141] libmachine: (auto-691590) DBG | domain auto-691590 has defined MAC address 52:54:00:0d:6c:b4 in network mk-auto-691590
	I0914 18:29:48.441414   70315 main.go:141] libmachine: (auto-691590) DBG | unable to find current IP address of domain auto-691590 in network mk-auto-691590
	I0914 18:29:48.441435   70315 main.go:141] libmachine: (auto-691590) DBG | I0914 18:29:48.441375   70338 retry.go:31] will retry after 3.42445286s: waiting for machine to come up
	I0914 18:29:50.189728   70640 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0914 18:29:50.189786   70640 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19643-8806/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0914 18:29:50.189796   70640 cache.go:56] Caching tarball of preloaded images
	I0914 18:29:50.189878   70640 preload.go:172] Found /home/jenkins/minikube-integration/19643-8806/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0914 18:29:50.189890   70640 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0914 18:29:50.189998   70640 profile.go:143] Saving config to /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/newest-cni-019918/config.json ...
	I0914 18:29:50.190210   70640 start.go:360] acquireMachinesLock for newest-cni-019918: {Name:mk26748ac63472c3f8b6f3848c12e76160c2970c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0914 18:29:51.867241   70315 main.go:141] libmachine: (auto-691590) DBG | domain auto-691590 has defined MAC address 52:54:00:0d:6c:b4 in network mk-auto-691590
	I0914 18:29:51.867670   70315 main.go:141] libmachine: (auto-691590) DBG | unable to find current IP address of domain auto-691590 in network mk-auto-691590
	I0914 18:29:51.867696   70315 main.go:141] libmachine: (auto-691590) DBG | I0914 18:29:51.867623   70338 retry.go:31] will retry after 4.054499416s: waiting for machine to come up
	I0914 18:29:57.418968   70640 start.go:364] duration metric: took 7.228676819s to acquireMachinesLock for "newest-cni-019918"
	I0914 18:29:57.419012   70640 start.go:96] Skipping create...Using existing machine configuration
	I0914 18:29:57.419024   70640 fix.go:54] fixHost starting: 
	I0914 18:29:57.419395   70640 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 18:29:57.419438   70640 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 18:29:57.439279   70640 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34071
	I0914 18:29:57.439806   70640 main.go:141] libmachine: () Calling .GetVersion
	I0914 18:29:57.440350   70640 main.go:141] libmachine: Using API Version  1
	I0914 18:29:57.440371   70640 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 18:29:57.440755   70640 main.go:141] libmachine: () Calling .GetMachineName
	I0914 18:29:57.440973   70640 main.go:141] libmachine: (newest-cni-019918) Calling .DriverName
	I0914 18:29:57.441146   70640 main.go:141] libmachine: (newest-cni-019918) Calling .GetState
	I0914 18:29:57.442988   70640 fix.go:112] recreateIfNeeded on newest-cni-019918: state=Stopped err=<nil>
	I0914 18:29:57.443021   70640 main.go:141] libmachine: (newest-cni-019918) Calling .DriverName
	W0914 18:29:57.443162   70640 fix.go:138] unexpected machine state, will restart: <nil>
	I0914 18:29:57.445425   70640 out.go:177] * Restarting existing kvm2 VM for "newest-cni-019918" ...
	I0914 18:29:55.923911   70315 main.go:141] libmachine: (auto-691590) DBG | domain auto-691590 has defined MAC address 52:54:00:0d:6c:b4 in network mk-auto-691590
	I0914 18:29:55.924421   70315 main.go:141] libmachine: (auto-691590) DBG | domain auto-691590 has current primary IP address 192.168.39.217 and MAC address 52:54:00:0d:6c:b4 in network mk-auto-691590
	I0914 18:29:55.924438   70315 main.go:141] libmachine: (auto-691590) Found IP for machine: 192.168.39.217
	I0914 18:29:55.924449   70315 main.go:141] libmachine: (auto-691590) Reserving static IP address...
	I0914 18:29:55.924910   70315 main.go:141] libmachine: (auto-691590) DBG | unable to find host DHCP lease matching {name: "auto-691590", mac: "52:54:00:0d:6c:b4", ip: "192.168.39.217"} in network mk-auto-691590
	I0914 18:29:56.016395   70315 main.go:141] libmachine: (auto-691590) DBG | Getting to WaitForSSH function...
	I0914 18:29:56.016449   70315 main.go:141] libmachine: (auto-691590) Reserved static IP address: 192.168.39.217
	I0914 18:29:56.016466   70315 main.go:141] libmachine: (auto-691590) Waiting for SSH to be available...
	I0914 18:29:56.019410   70315 main.go:141] libmachine: (auto-691590) DBG | domain auto-691590 has defined MAC address 52:54:00:0d:6c:b4 in network mk-auto-691590
	I0914 18:29:56.019819   70315 main.go:141] libmachine: (auto-691590) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:6c:b4", ip: ""} in network mk-auto-691590: {Iface:virbr2 ExpiryTime:2024-09-14 19:29:49 +0000 UTC Type:0 Mac:52:54:00:0d:6c:b4 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:minikube Clientid:01:52:54:00:0d:6c:b4}
	I0914 18:29:56.019846   70315 main.go:141] libmachine: (auto-691590) DBG | domain auto-691590 has defined IP address 192.168.39.217 and MAC address 52:54:00:0d:6c:b4 in network mk-auto-691590
	I0914 18:29:56.020012   70315 main.go:141] libmachine: (auto-691590) DBG | Using SSH client type: external
	I0914 18:29:56.020033   70315 main.go:141] libmachine: (auto-691590) DBG | Using SSH private key: /home/jenkins/minikube-integration/19643-8806/.minikube/machines/auto-691590/id_rsa (-rw-------)
	I0914 18:29:56.020067   70315 main.go:141] libmachine: (auto-691590) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.217 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19643-8806/.minikube/machines/auto-691590/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0914 18:29:56.020078   70315 main.go:141] libmachine: (auto-691590) DBG | About to run SSH command:
	I0914 18:29:56.020094   70315 main.go:141] libmachine: (auto-691590) DBG | exit 0
	I0914 18:29:56.150308   70315 main.go:141] libmachine: (auto-691590) DBG | SSH cmd err, output: <nil>: 
	I0914 18:29:56.150596   70315 main.go:141] libmachine: (auto-691590) KVM machine creation complete!
	I0914 18:29:56.150894   70315 main.go:141] libmachine: (auto-691590) Calling .GetConfigRaw
	I0914 18:29:56.151492   70315 main.go:141] libmachine: (auto-691590) Calling .DriverName
	I0914 18:29:56.151727   70315 main.go:141] libmachine: (auto-691590) Calling .DriverName
	I0914 18:29:56.151973   70315 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0914 18:29:56.151989   70315 main.go:141] libmachine: (auto-691590) Calling .GetState
	I0914 18:29:56.153241   70315 main.go:141] libmachine: Detecting operating system of created instance...
	I0914 18:29:56.153254   70315 main.go:141] libmachine: Waiting for SSH to be available...
	I0914 18:29:56.153270   70315 main.go:141] libmachine: Getting to WaitForSSH function...
	I0914 18:29:56.153275   70315 main.go:141] libmachine: (auto-691590) Calling .GetSSHHostname
	I0914 18:29:56.156209   70315 main.go:141] libmachine: (auto-691590) DBG | domain auto-691590 has defined MAC address 52:54:00:0d:6c:b4 in network mk-auto-691590
	I0914 18:29:56.156590   70315 main.go:141] libmachine: (auto-691590) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:6c:b4", ip: ""} in network mk-auto-691590: {Iface:virbr2 ExpiryTime:2024-09-14 19:29:49 +0000 UTC Type:0 Mac:52:54:00:0d:6c:b4 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:auto-691590 Clientid:01:52:54:00:0d:6c:b4}
	I0914 18:29:56.156627   70315 main.go:141] libmachine: (auto-691590) DBG | domain auto-691590 has defined IP address 192.168.39.217 and MAC address 52:54:00:0d:6c:b4 in network mk-auto-691590
	I0914 18:29:56.156798   70315 main.go:141] libmachine: (auto-691590) Calling .GetSSHPort
	I0914 18:29:56.156995   70315 main.go:141] libmachine: (auto-691590) Calling .GetSSHKeyPath
	I0914 18:29:56.157135   70315 main.go:141] libmachine: (auto-691590) Calling .GetSSHKeyPath
	I0914 18:29:56.157266   70315 main.go:141] libmachine: (auto-691590) Calling .GetSSHUsername
	I0914 18:29:56.157403   70315 main.go:141] libmachine: Using SSH client type: native
	I0914 18:29:56.157587   70315 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.217 22 <nil> <nil>}
	I0914 18:29:56.157597   70315 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0914 18:29:56.265674   70315 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0914 18:29:56.265700   70315 main.go:141] libmachine: Detecting the provisioner...
	I0914 18:29:56.265711   70315 main.go:141] libmachine: (auto-691590) Calling .GetSSHHostname
	I0914 18:29:56.268927   70315 main.go:141] libmachine: (auto-691590) DBG | domain auto-691590 has defined MAC address 52:54:00:0d:6c:b4 in network mk-auto-691590
	I0914 18:29:56.269362   70315 main.go:141] libmachine: (auto-691590) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:6c:b4", ip: ""} in network mk-auto-691590: {Iface:virbr2 ExpiryTime:2024-09-14 19:29:49 +0000 UTC Type:0 Mac:52:54:00:0d:6c:b4 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:auto-691590 Clientid:01:52:54:00:0d:6c:b4}
	I0914 18:29:56.269392   70315 main.go:141] libmachine: (auto-691590) DBG | domain auto-691590 has defined IP address 192.168.39.217 and MAC address 52:54:00:0d:6c:b4 in network mk-auto-691590
	I0914 18:29:56.269568   70315 main.go:141] libmachine: (auto-691590) Calling .GetSSHPort
	I0914 18:29:56.269780   70315 main.go:141] libmachine: (auto-691590) Calling .GetSSHKeyPath
	I0914 18:29:56.269921   70315 main.go:141] libmachine: (auto-691590) Calling .GetSSHKeyPath
	I0914 18:29:56.270069   70315 main.go:141] libmachine: (auto-691590) Calling .GetSSHUsername
	I0914 18:29:56.270252   70315 main.go:141] libmachine: Using SSH client type: native
	I0914 18:29:56.270455   70315 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.217 22 <nil> <nil>}
	I0914 18:29:56.270470   70315 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0914 18:29:56.382864   70315 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0914 18:29:56.382971   70315 main.go:141] libmachine: found compatible host: buildroot
	I0914 18:29:56.382982   70315 main.go:141] libmachine: Provisioning with buildroot...
	I0914 18:29:56.382989   70315 main.go:141] libmachine: (auto-691590) Calling .GetMachineName
	I0914 18:29:56.383223   70315 buildroot.go:166] provisioning hostname "auto-691590"
	I0914 18:29:56.383246   70315 main.go:141] libmachine: (auto-691590) Calling .GetMachineName
	I0914 18:29:56.383470   70315 main.go:141] libmachine: (auto-691590) Calling .GetSSHHostname
	I0914 18:29:56.386026   70315 main.go:141] libmachine: (auto-691590) DBG | domain auto-691590 has defined MAC address 52:54:00:0d:6c:b4 in network mk-auto-691590
	I0914 18:29:56.386420   70315 main.go:141] libmachine: (auto-691590) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:6c:b4", ip: ""} in network mk-auto-691590: {Iface:virbr2 ExpiryTime:2024-09-14 19:29:49 +0000 UTC Type:0 Mac:52:54:00:0d:6c:b4 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:auto-691590 Clientid:01:52:54:00:0d:6c:b4}
	I0914 18:29:56.386449   70315 main.go:141] libmachine: (auto-691590) DBG | domain auto-691590 has defined IP address 192.168.39.217 and MAC address 52:54:00:0d:6c:b4 in network mk-auto-691590
	I0914 18:29:56.386584   70315 main.go:141] libmachine: (auto-691590) Calling .GetSSHPort
	I0914 18:29:56.386755   70315 main.go:141] libmachine: (auto-691590) Calling .GetSSHKeyPath
	I0914 18:29:56.386885   70315 main.go:141] libmachine: (auto-691590) Calling .GetSSHKeyPath
	I0914 18:29:56.386985   70315 main.go:141] libmachine: (auto-691590) Calling .GetSSHUsername
	I0914 18:29:56.387160   70315 main.go:141] libmachine: Using SSH client type: native
	I0914 18:29:56.387375   70315 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.217 22 <nil> <nil>}
	I0914 18:29:56.387388   70315 main.go:141] libmachine: About to run SSH command:
	sudo hostname auto-691590 && echo "auto-691590" | sudo tee /etc/hostname
	I0914 18:29:56.517174   70315 main.go:141] libmachine: SSH cmd err, output: <nil>: auto-691590
	
	I0914 18:29:56.517211   70315 main.go:141] libmachine: (auto-691590) Calling .GetSSHHostname
	I0914 18:29:56.520351   70315 main.go:141] libmachine: (auto-691590) DBG | domain auto-691590 has defined MAC address 52:54:00:0d:6c:b4 in network mk-auto-691590
	I0914 18:29:56.520839   70315 main.go:141] libmachine: (auto-691590) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:6c:b4", ip: ""} in network mk-auto-691590: {Iface:virbr2 ExpiryTime:2024-09-14 19:29:49 +0000 UTC Type:0 Mac:52:54:00:0d:6c:b4 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:auto-691590 Clientid:01:52:54:00:0d:6c:b4}
	I0914 18:29:56.520867   70315 main.go:141] libmachine: (auto-691590) DBG | domain auto-691590 has defined IP address 192.168.39.217 and MAC address 52:54:00:0d:6c:b4 in network mk-auto-691590
	I0914 18:29:56.521083   70315 main.go:141] libmachine: (auto-691590) Calling .GetSSHPort
	I0914 18:29:56.521309   70315 main.go:141] libmachine: (auto-691590) Calling .GetSSHKeyPath
	I0914 18:29:56.521498   70315 main.go:141] libmachine: (auto-691590) Calling .GetSSHKeyPath
	I0914 18:29:56.521656   70315 main.go:141] libmachine: (auto-691590) Calling .GetSSHUsername
	I0914 18:29:56.521849   70315 main.go:141] libmachine: Using SSH client type: native
	I0914 18:29:56.522049   70315 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.217 22 <nil> <nil>}
	I0914 18:29:56.522066   70315 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sauto-691590' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 auto-691590/g' /etc/hosts;
				else 
					echo '127.0.1.1 auto-691590' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0914 18:29:56.639766   70315 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0914 18:29:56.639795   70315 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19643-8806/.minikube CaCertPath:/home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19643-8806/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19643-8806/.minikube}
	I0914 18:29:56.639852   70315 buildroot.go:174] setting up certificates
	I0914 18:29:56.639866   70315 provision.go:84] configureAuth start
	I0914 18:29:56.639876   70315 main.go:141] libmachine: (auto-691590) Calling .GetMachineName
	I0914 18:29:56.640258   70315 main.go:141] libmachine: (auto-691590) Calling .GetIP
	I0914 18:29:56.642902   70315 main.go:141] libmachine: (auto-691590) DBG | domain auto-691590 has defined MAC address 52:54:00:0d:6c:b4 in network mk-auto-691590
	I0914 18:29:56.643260   70315 main.go:141] libmachine: (auto-691590) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:6c:b4", ip: ""} in network mk-auto-691590: {Iface:virbr2 ExpiryTime:2024-09-14 19:29:49 +0000 UTC Type:0 Mac:52:54:00:0d:6c:b4 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:auto-691590 Clientid:01:52:54:00:0d:6c:b4}
	I0914 18:29:56.643288   70315 main.go:141] libmachine: (auto-691590) DBG | domain auto-691590 has defined IP address 192.168.39.217 and MAC address 52:54:00:0d:6c:b4 in network mk-auto-691590
	I0914 18:29:56.643479   70315 main.go:141] libmachine: (auto-691590) Calling .GetSSHHostname
	I0914 18:29:56.646510   70315 main.go:141] libmachine: (auto-691590) DBG | domain auto-691590 has defined MAC address 52:54:00:0d:6c:b4 in network mk-auto-691590
	I0914 18:29:56.647002   70315 main.go:141] libmachine: (auto-691590) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:6c:b4", ip: ""} in network mk-auto-691590: {Iface:virbr2 ExpiryTime:2024-09-14 19:29:49 +0000 UTC Type:0 Mac:52:54:00:0d:6c:b4 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:auto-691590 Clientid:01:52:54:00:0d:6c:b4}
	I0914 18:29:56.647031   70315 main.go:141] libmachine: (auto-691590) DBG | domain auto-691590 has defined IP address 192.168.39.217 and MAC address 52:54:00:0d:6c:b4 in network mk-auto-691590
	I0914 18:29:56.647176   70315 provision.go:143] copyHostCerts
	I0914 18:29:56.647241   70315 exec_runner.go:144] found /home/jenkins/minikube-integration/19643-8806/.minikube/ca.pem, removing ...
	I0914 18:29:56.647251   70315 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19643-8806/.minikube/ca.pem
	I0914 18:29:56.647337   70315 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19643-8806/.minikube/ca.pem (1082 bytes)
	I0914 18:29:56.647466   70315 exec_runner.go:144] found /home/jenkins/minikube-integration/19643-8806/.minikube/cert.pem, removing ...
	I0914 18:29:56.647479   70315 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19643-8806/.minikube/cert.pem
	I0914 18:29:56.647514   70315 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19643-8806/.minikube/cert.pem (1123 bytes)
	I0914 18:29:56.647581   70315 exec_runner.go:144] found /home/jenkins/minikube-integration/19643-8806/.minikube/key.pem, removing ...
	I0914 18:29:56.647589   70315 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19643-8806/.minikube/key.pem
	I0914 18:29:56.647612   70315 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19643-8806/.minikube/key.pem (1675 bytes)
	I0914 18:29:56.647672   70315 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19643-8806/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca-key.pem org=jenkins.auto-691590 san=[127.0.0.1 192.168.39.217 auto-691590 localhost minikube]
	I0914 18:29:56.778472   70315 provision.go:177] copyRemoteCerts
	I0914 18:29:56.778536   70315 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0914 18:29:56.778560   70315 main.go:141] libmachine: (auto-691590) Calling .GetSSHHostname
	I0914 18:29:56.781488   70315 main.go:141] libmachine: (auto-691590) DBG | domain auto-691590 has defined MAC address 52:54:00:0d:6c:b4 in network mk-auto-691590
	I0914 18:29:56.781827   70315 main.go:141] libmachine: (auto-691590) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:6c:b4", ip: ""} in network mk-auto-691590: {Iface:virbr2 ExpiryTime:2024-09-14 19:29:49 +0000 UTC Type:0 Mac:52:54:00:0d:6c:b4 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:auto-691590 Clientid:01:52:54:00:0d:6c:b4}
	I0914 18:29:56.781851   70315 main.go:141] libmachine: (auto-691590) DBG | domain auto-691590 has defined IP address 192.168.39.217 and MAC address 52:54:00:0d:6c:b4 in network mk-auto-691590
	I0914 18:29:56.782074   70315 main.go:141] libmachine: (auto-691590) Calling .GetSSHPort
	I0914 18:29:56.782280   70315 main.go:141] libmachine: (auto-691590) Calling .GetSSHKeyPath
	I0914 18:29:56.782426   70315 main.go:141] libmachine: (auto-691590) Calling .GetSSHUsername
	I0914 18:29:56.782550   70315 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/auto-691590/id_rsa Username:docker}
	I0914 18:29:56.867880   70315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0914 18:29:56.891830   70315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I0914 18:29:56.915986   70315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0914 18:29:56.939045   70315 provision.go:87] duration metric: took 299.166994ms to configureAuth
	I0914 18:29:56.939071   70315 buildroot.go:189] setting minikube options for container-runtime
	I0914 18:29:56.939235   70315 config.go:182] Loaded profile config "auto-691590": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0914 18:29:56.939329   70315 main.go:141] libmachine: (auto-691590) Calling .GetSSHHostname
	I0914 18:29:56.942115   70315 main.go:141] libmachine: (auto-691590) DBG | domain auto-691590 has defined MAC address 52:54:00:0d:6c:b4 in network mk-auto-691590
	I0914 18:29:56.942485   70315 main.go:141] libmachine: (auto-691590) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:6c:b4", ip: ""} in network mk-auto-691590: {Iface:virbr2 ExpiryTime:2024-09-14 19:29:49 +0000 UTC Type:0 Mac:52:54:00:0d:6c:b4 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:auto-691590 Clientid:01:52:54:00:0d:6c:b4}
	I0914 18:29:56.942513   70315 main.go:141] libmachine: (auto-691590) DBG | domain auto-691590 has defined IP address 192.168.39.217 and MAC address 52:54:00:0d:6c:b4 in network mk-auto-691590
	I0914 18:29:56.942731   70315 main.go:141] libmachine: (auto-691590) Calling .GetSSHPort
	I0914 18:29:56.942922   70315 main.go:141] libmachine: (auto-691590) Calling .GetSSHKeyPath
	I0914 18:29:56.943122   70315 main.go:141] libmachine: (auto-691590) Calling .GetSSHKeyPath
	I0914 18:29:56.943263   70315 main.go:141] libmachine: (auto-691590) Calling .GetSSHUsername
	I0914 18:29:56.943443   70315 main.go:141] libmachine: Using SSH client type: native
	I0914 18:29:56.943620   70315 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.217 22 <nil> <nil>}
	I0914 18:29:56.943641   70315 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0914 18:29:57.174035   70315 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0914 18:29:57.174066   70315 main.go:141] libmachine: Checking connection to Docker...
	I0914 18:29:57.174075   70315 main.go:141] libmachine: (auto-691590) Calling .GetURL
	I0914 18:29:57.175223   70315 main.go:141] libmachine: (auto-691590) DBG | Using libvirt version 6000000
	I0914 18:29:57.177607   70315 main.go:141] libmachine: (auto-691590) DBG | domain auto-691590 has defined MAC address 52:54:00:0d:6c:b4 in network mk-auto-691590
	I0914 18:29:57.177923   70315 main.go:141] libmachine: (auto-691590) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:6c:b4", ip: ""} in network mk-auto-691590: {Iface:virbr2 ExpiryTime:2024-09-14 19:29:49 +0000 UTC Type:0 Mac:52:54:00:0d:6c:b4 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:auto-691590 Clientid:01:52:54:00:0d:6c:b4}
	I0914 18:29:57.177944   70315 main.go:141] libmachine: (auto-691590) DBG | domain auto-691590 has defined IP address 192.168.39.217 and MAC address 52:54:00:0d:6c:b4 in network mk-auto-691590
	I0914 18:29:57.178127   70315 main.go:141] libmachine: Docker is up and running!
	I0914 18:29:57.178142   70315 main.go:141] libmachine: Reticulating splines...
	I0914 18:29:57.178149   70315 client.go:171] duration metric: took 21.793372022s to LocalClient.Create
	I0914 18:29:57.178188   70315 start.go:167] duration metric: took 21.79345207s to libmachine.API.Create "auto-691590"
	I0914 18:29:57.178200   70315 start.go:293] postStartSetup for "auto-691590" (driver="kvm2")
	I0914 18:29:57.178212   70315 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0914 18:29:57.178234   70315 main.go:141] libmachine: (auto-691590) Calling .DriverName
	I0914 18:29:57.178460   70315 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0914 18:29:57.178485   70315 main.go:141] libmachine: (auto-691590) Calling .GetSSHHostname
	I0914 18:29:57.180827   70315 main.go:141] libmachine: (auto-691590) DBG | domain auto-691590 has defined MAC address 52:54:00:0d:6c:b4 in network mk-auto-691590
	I0914 18:29:57.181131   70315 main.go:141] libmachine: (auto-691590) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:6c:b4", ip: ""} in network mk-auto-691590: {Iface:virbr2 ExpiryTime:2024-09-14 19:29:49 +0000 UTC Type:0 Mac:52:54:00:0d:6c:b4 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:auto-691590 Clientid:01:52:54:00:0d:6c:b4}
	I0914 18:29:57.181156   70315 main.go:141] libmachine: (auto-691590) DBG | domain auto-691590 has defined IP address 192.168.39.217 and MAC address 52:54:00:0d:6c:b4 in network mk-auto-691590
	I0914 18:29:57.181306   70315 main.go:141] libmachine: (auto-691590) Calling .GetSSHPort
	I0914 18:29:57.181467   70315 main.go:141] libmachine: (auto-691590) Calling .GetSSHKeyPath
	I0914 18:29:57.181573   70315 main.go:141] libmachine: (auto-691590) Calling .GetSSHUsername
	I0914 18:29:57.181672   70315 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/auto-691590/id_rsa Username:docker}
	I0914 18:29:57.264298   70315 ssh_runner.go:195] Run: cat /etc/os-release
	I0914 18:29:57.268169   70315 info.go:137] Remote host: Buildroot 2023.02.9
	I0914 18:29:57.268193   70315 filesync.go:126] Scanning /home/jenkins/minikube-integration/19643-8806/.minikube/addons for local assets ...
	I0914 18:29:57.268261   70315 filesync.go:126] Scanning /home/jenkins/minikube-integration/19643-8806/.minikube/files for local assets ...
	I0914 18:29:57.268344   70315 filesync.go:149] local asset: /home/jenkins/minikube-integration/19643-8806/.minikube/files/etc/ssl/certs/160162.pem -> 160162.pem in /etc/ssl/certs
	I0914 18:29:57.268428   70315 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0914 18:29:57.277619   70315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/files/etc/ssl/certs/160162.pem --> /etc/ssl/certs/160162.pem (1708 bytes)
	I0914 18:29:57.303027   70315 start.go:296] duration metric: took 124.812763ms for postStartSetup
	I0914 18:29:57.303081   70315 main.go:141] libmachine: (auto-691590) Calling .GetConfigRaw
	I0914 18:29:57.303756   70315 main.go:141] libmachine: (auto-691590) Calling .GetIP
	I0914 18:29:57.306624   70315 main.go:141] libmachine: (auto-691590) DBG | domain auto-691590 has defined MAC address 52:54:00:0d:6c:b4 in network mk-auto-691590
	I0914 18:29:57.307027   70315 main.go:141] libmachine: (auto-691590) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:6c:b4", ip: ""} in network mk-auto-691590: {Iface:virbr2 ExpiryTime:2024-09-14 19:29:49 +0000 UTC Type:0 Mac:52:54:00:0d:6c:b4 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:auto-691590 Clientid:01:52:54:00:0d:6c:b4}
	I0914 18:29:57.307064   70315 main.go:141] libmachine: (auto-691590) DBG | domain auto-691590 has defined IP address 192.168.39.217 and MAC address 52:54:00:0d:6c:b4 in network mk-auto-691590
	I0914 18:29:57.307374   70315 profile.go:143] Saving config to /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/auto-691590/config.json ...
	I0914 18:29:57.307570   70315 start.go:128] duration metric: took 21.942927161s to createHost
	I0914 18:29:57.307592   70315 main.go:141] libmachine: (auto-691590) Calling .GetSSHHostname
	I0914 18:29:57.309854   70315 main.go:141] libmachine: (auto-691590) DBG | domain auto-691590 has defined MAC address 52:54:00:0d:6c:b4 in network mk-auto-691590
	I0914 18:29:57.310151   70315 main.go:141] libmachine: (auto-691590) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:6c:b4", ip: ""} in network mk-auto-691590: {Iface:virbr2 ExpiryTime:2024-09-14 19:29:49 +0000 UTC Type:0 Mac:52:54:00:0d:6c:b4 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:auto-691590 Clientid:01:52:54:00:0d:6c:b4}
	I0914 18:29:57.310193   70315 main.go:141] libmachine: (auto-691590) DBG | domain auto-691590 has defined IP address 192.168.39.217 and MAC address 52:54:00:0d:6c:b4 in network mk-auto-691590
	I0914 18:29:57.310367   70315 main.go:141] libmachine: (auto-691590) Calling .GetSSHPort
	I0914 18:29:57.310529   70315 main.go:141] libmachine: (auto-691590) Calling .GetSSHKeyPath
	I0914 18:29:57.310711   70315 main.go:141] libmachine: (auto-691590) Calling .GetSSHKeyPath
	I0914 18:29:57.310862   70315 main.go:141] libmachine: (auto-691590) Calling .GetSSHUsername
	I0914 18:29:57.311017   70315 main.go:141] libmachine: Using SSH client type: native
	I0914 18:29:57.311228   70315 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.217 22 <nil> <nil>}
	I0914 18:29:57.311242   70315 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0914 18:29:57.418768   70315 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726338597.390732031
	
	I0914 18:29:57.418798   70315 fix.go:216] guest clock: 1726338597.390732031
	I0914 18:29:57.418809   70315 fix.go:229] Guest: 2024-09-14 18:29:57.390732031 +0000 UTC Remote: 2024-09-14 18:29:57.307581728 +0000 UTC m=+22.057238335 (delta=83.150303ms)
	I0914 18:29:57.418839   70315 fix.go:200] guest clock delta is within tolerance: 83.150303ms
	I0914 18:29:57.418850   70315 start.go:83] releasing machines lock for "auto-691590", held for 22.054285714s
	I0914 18:29:57.418884   70315 main.go:141] libmachine: (auto-691590) Calling .DriverName
	I0914 18:29:57.419162   70315 main.go:141] libmachine: (auto-691590) Calling .GetIP
	I0914 18:29:57.422030   70315 main.go:141] libmachine: (auto-691590) DBG | domain auto-691590 has defined MAC address 52:54:00:0d:6c:b4 in network mk-auto-691590
	I0914 18:29:57.422566   70315 main.go:141] libmachine: (auto-691590) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:6c:b4", ip: ""} in network mk-auto-691590: {Iface:virbr2 ExpiryTime:2024-09-14 19:29:49 +0000 UTC Type:0 Mac:52:54:00:0d:6c:b4 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:auto-691590 Clientid:01:52:54:00:0d:6c:b4}
	I0914 18:29:57.422594   70315 main.go:141] libmachine: (auto-691590) DBG | domain auto-691590 has defined IP address 192.168.39.217 and MAC address 52:54:00:0d:6c:b4 in network mk-auto-691590
	I0914 18:29:57.422780   70315 main.go:141] libmachine: (auto-691590) Calling .DriverName
	I0914 18:29:57.423248   70315 main.go:141] libmachine: (auto-691590) Calling .DriverName
	I0914 18:29:57.423422   70315 main.go:141] libmachine: (auto-691590) Calling .DriverName
	I0914 18:29:57.423528   70315 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0914 18:29:57.423586   70315 main.go:141] libmachine: (auto-691590) Calling .GetSSHHostname
	I0914 18:29:57.423649   70315 ssh_runner.go:195] Run: cat /version.json
	I0914 18:29:57.423671   70315 main.go:141] libmachine: (auto-691590) Calling .GetSSHHostname
	I0914 18:29:57.426175   70315 main.go:141] libmachine: (auto-691590) DBG | domain auto-691590 has defined MAC address 52:54:00:0d:6c:b4 in network mk-auto-691590
	I0914 18:29:57.426521   70315 main.go:141] libmachine: (auto-691590) DBG | domain auto-691590 has defined MAC address 52:54:00:0d:6c:b4 in network mk-auto-691590
	I0914 18:29:57.426677   70315 main.go:141] libmachine: (auto-691590) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:6c:b4", ip: ""} in network mk-auto-691590: {Iface:virbr2 ExpiryTime:2024-09-14 19:29:49 +0000 UTC Type:0 Mac:52:54:00:0d:6c:b4 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:auto-691590 Clientid:01:52:54:00:0d:6c:b4}
	I0914 18:29:57.426706   70315 main.go:141] libmachine: (auto-691590) DBG | domain auto-691590 has defined IP address 192.168.39.217 and MAC address 52:54:00:0d:6c:b4 in network mk-auto-691590
	I0914 18:29:57.426944   70315 main.go:141] libmachine: (auto-691590) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:6c:b4", ip: ""} in network mk-auto-691590: {Iface:virbr2 ExpiryTime:2024-09-14 19:29:49 +0000 UTC Type:0 Mac:52:54:00:0d:6c:b4 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:auto-691590 Clientid:01:52:54:00:0d:6c:b4}
	I0914 18:29:57.426967   70315 main.go:141] libmachine: (auto-691590) Calling .GetSSHPort
	I0914 18:29:57.427016   70315 main.go:141] libmachine: (auto-691590) DBG | domain auto-691590 has defined IP address 192.168.39.217 and MAC address 52:54:00:0d:6c:b4 in network mk-auto-691590
	I0914 18:29:57.427137   70315 main.go:141] libmachine: (auto-691590) Calling .GetSSHKeyPath
	I0914 18:29:57.427255   70315 main.go:141] libmachine: (auto-691590) Calling .GetSSHPort
	I0914 18:29:57.427335   70315 main.go:141] libmachine: (auto-691590) Calling .GetSSHUsername
	I0914 18:29:57.427403   70315 main.go:141] libmachine: (auto-691590) Calling .GetSSHKeyPath
	I0914 18:29:57.427458   70315 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/auto-691590/id_rsa Username:docker}
	I0914 18:29:57.427509   70315 main.go:141] libmachine: (auto-691590) Calling .GetSSHUsername
	I0914 18:29:57.427631   70315 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/auto-691590/id_rsa Username:docker}
	I0914 18:29:57.549859   70315 ssh_runner.go:195] Run: systemctl --version
	I0914 18:29:57.555892   70315 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0914 18:29:57.716192   70315 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0914 18:29:57.721783   70315 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0914 18:29:57.721850   70315 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0914 18:29:57.737669   70315 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0914 18:29:57.737697   70315 start.go:495] detecting cgroup driver to use...
	I0914 18:29:57.737764   70315 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0914 18:29:57.753913   70315 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0914 18:29:57.767704   70315 docker.go:217] disabling cri-docker service (if available) ...
	I0914 18:29:57.767784   70315 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0914 18:29:57.782537   70315 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0914 18:29:57.796787   70315 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0914 18:29:57.915520   70315 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0914 18:29:58.078051   70315 docker.go:233] disabling docker service ...
	I0914 18:29:58.078129   70315 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0914 18:29:58.093949   70315 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0914 18:29:58.108744   70315 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0914 18:29:58.249835   70315 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0914 18:29:58.390539   70315 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0914 18:29:58.404297   70315 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0914 18:29:58.423859   70315 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0914 18:29:58.423923   70315 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 18:29:58.435309   70315 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0914 18:29:58.435375   70315 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 18:29:58.447051   70315 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 18:29:58.457830   70315 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 18:29:58.473682   70315 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0914 18:29:58.484900   70315 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 18:29:58.495829   70315 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 18:29:58.516010   70315 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 18:29:58.526757   70315 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0914 18:29:58.536757   70315 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0914 18:29:58.536832   70315 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0914 18:29:58.550833   70315 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0914 18:29:58.560774   70315 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 18:29:58.700398   70315 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0914 18:29:58.808675   70315 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0914 18:29:58.808758   70315 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0914 18:29:58.813553   70315 start.go:563] Will wait 60s for crictl version
	I0914 18:29:58.813609   70315 ssh_runner.go:195] Run: which crictl
	I0914 18:29:58.817637   70315 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0914 18:29:58.862062   70315 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0914 18:29:58.862150   70315 ssh_runner.go:195] Run: crio --version
	I0914 18:29:58.893348   70315 ssh_runner.go:195] Run: crio --version
	I0914 18:29:58.926707   70315 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0914 18:29:57.446713   70640 main.go:141] libmachine: (newest-cni-019918) Calling .Start
	I0914 18:29:57.446972   70640 main.go:141] libmachine: (newest-cni-019918) Ensuring networks are active...
	I0914 18:29:57.447973   70640 main.go:141] libmachine: (newest-cni-019918) Ensuring network default is active
	I0914 18:29:57.448464   70640 main.go:141] libmachine: (newest-cni-019918) Ensuring network mk-newest-cni-019918 is active
	I0914 18:29:57.449090   70640 main.go:141] libmachine: (newest-cni-019918) Getting domain xml...
	I0914 18:29:57.449922   70640 main.go:141] libmachine: (newest-cni-019918) Creating domain...
	I0914 18:29:58.775860   70640 main.go:141] libmachine: (newest-cni-019918) Waiting to get IP...
	I0914 18:29:58.776717   70640 main.go:141] libmachine: (newest-cni-019918) DBG | domain newest-cni-019918 has defined MAC address 52:54:00:f8:a8:64 in network mk-newest-cni-019918
	I0914 18:29:58.777190   70640 main.go:141] libmachine: (newest-cni-019918) DBG | unable to find current IP address of domain newest-cni-019918 in network mk-newest-cni-019918
	I0914 18:29:58.777269   70640 main.go:141] libmachine: (newest-cni-019918) DBG | I0914 18:29:58.777182   70724 retry.go:31] will retry after 298.251328ms: waiting for machine to come up
	I0914 18:29:59.077505   70640 main.go:141] libmachine: (newest-cni-019918) DBG | domain newest-cni-019918 has defined MAC address 52:54:00:f8:a8:64 in network mk-newest-cni-019918
	I0914 18:29:59.078021   70640 main.go:141] libmachine: (newest-cni-019918) DBG | unable to find current IP address of domain newest-cni-019918 in network mk-newest-cni-019918
	I0914 18:29:59.078052   70640 main.go:141] libmachine: (newest-cni-019918) DBG | I0914 18:29:59.077946   70724 retry.go:31] will retry after 379.68915ms: waiting for machine to come up
	I0914 18:29:59.459337   70640 main.go:141] libmachine: (newest-cni-019918) DBG | domain newest-cni-019918 has defined MAC address 52:54:00:f8:a8:64 in network mk-newest-cni-019918
	I0914 18:29:59.459908   70640 main.go:141] libmachine: (newest-cni-019918) DBG | unable to find current IP address of domain newest-cni-019918 in network mk-newest-cni-019918
	I0914 18:29:59.459936   70640 main.go:141] libmachine: (newest-cni-019918) DBG | I0914 18:29:59.459857   70724 retry.go:31] will retry after 317.775198ms: waiting for machine to come up
	I0914 18:29:59.779431   70640 main.go:141] libmachine: (newest-cni-019918) DBG | domain newest-cni-019918 has defined MAC address 52:54:00:f8:a8:64 in network mk-newest-cni-019918
	I0914 18:29:59.779946   70640 main.go:141] libmachine: (newest-cni-019918) DBG | unable to find current IP address of domain newest-cni-019918 in network mk-newest-cni-019918
	I0914 18:29:59.779979   70640 main.go:141] libmachine: (newest-cni-019918) DBG | I0914 18:29:59.779884   70724 retry.go:31] will retry after 415.750651ms: waiting for machine to come up
	I0914 18:29:58.928034   70315 main.go:141] libmachine: (auto-691590) Calling .GetIP
	I0914 18:29:58.931372   70315 main.go:141] libmachine: (auto-691590) DBG | domain auto-691590 has defined MAC address 52:54:00:0d:6c:b4 in network mk-auto-691590
	I0914 18:29:58.931799   70315 main.go:141] libmachine: (auto-691590) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:6c:b4", ip: ""} in network mk-auto-691590: {Iface:virbr2 ExpiryTime:2024-09-14 19:29:49 +0000 UTC Type:0 Mac:52:54:00:0d:6c:b4 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:auto-691590 Clientid:01:52:54:00:0d:6c:b4}
	I0914 18:29:58.931828   70315 main.go:141] libmachine: (auto-691590) DBG | domain auto-691590 has defined IP address 192.168.39.217 and MAC address 52:54:00:0d:6c:b4 in network mk-auto-691590
	I0914 18:29:58.932057   70315 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0914 18:29:58.936469   70315 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0914 18:29:58.949480   70315 kubeadm.go:883] updating cluster {Name:auto-691590 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19643/minikube-v1.34.0-1726281733-19643-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726281268-19643@sha256:cce8be0c1ac4e3d852132008ef1cc1dcf5b79f708d025db83f146ae65db32e8e Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1
ClusterName:auto-691590 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.217 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType
:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0914 18:29:58.949645   70315 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0914 18:29:58.949705   70315 ssh_runner.go:195] Run: sudo crictl images --output json
	I0914 18:29:58.984691   70315 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0914 18:29:58.984756   70315 ssh_runner.go:195] Run: which lz4
	I0914 18:29:58.988683   70315 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0914 18:29:58.992979   70315 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0914 18:29:58.993021   70315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	
	
	==> CRI-O <==
	Sep 14 18:30:02 embed-certs-044534 crio[708]: time="2024-09-14 18:30:02.433332048Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726338602433288397,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=eeb77992-d64c-4235-879c-65fca050f56d name=/runtime.v1.ImageService/ImageFsInfo
	Sep 14 18:30:02 embed-certs-044534 crio[708]: time="2024-09-14 18:30:02.434225569Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=33f5f637-55c9-49da-ad2b-7245174e5941 name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 18:30:02 embed-certs-044534 crio[708]: time="2024-09-14 18:30:02.434320197Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=33f5f637-55c9-49da-ad2b-7245174e5941 name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 18:30:02 embed-certs-044534 crio[708]: time="2024-09-14 18:30:02.434659686Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:3b14b9a711037df8e42120c5beb191e62b824ee3f02aed0ec4de6d1a920a4ee7,PodSandboxId:66f03c0f1657cf4d703f0f5390d3c9c4eafc7439c6913eb6d8fc6a05c90b5593,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726337608938220205,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dec7a14c-b6f7-464f-86b3-5f7d8063d8e0,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b95b9d14a386180ebaaf2e7d55a6720a2e06f5e3f48326dbbdc20cca60094616,PodSandboxId:016fce22989ba2ae3d83017b4759e956f77b59d76ff0b39fcf327d2d55b27c2c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726337608666749667,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-9j6sv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87c28a4b-015e-46b8-a462-9dc6ed06d914,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de161a601677d26aab41de3b70f5c946ce5ad13539a9cc8c4e9f7fc6c7010819,PodSandboxId:2d6b3768f9a201ea441754de60fdcab1ca0a67fec5228ddce39f64fd822f9e30,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726337608635114862,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-67dsl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4
80fe6ea-838d-4048-9893-947d43e7b5c9,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:40119c8929f7cbf9b816df426f17ac2164e896a437e012d83edc7580e923953a,PodSandboxId:3d14f1eab7b0abca7284d913d74042ac558e42f5dbcff4ce6d97397c44316979,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt
:1726337607996316063,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-26fx6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1cb48201-6caf-4787-9e27-a55885a8ae2a,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ccdf8eadda64b6b6babfa42af58c1d94c37998555c90afafb4e1e937fc7c731,PodSandboxId:e9867cb9d893b8f2dc536328cea646f82046121c7c6744c6781bed6b3a474169,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726337597029475608,Labels:map[string]
string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-044534,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d2b550cd716e95b332bdb65907bdbd9,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:981d6d37d393f75c48cd019ff9f6aaee770530225cb6d7e9a9024ea0b992119a,PodSandboxId:0cfe17366efed733539406b2369fa606059fda4d5a13646c03bb5dddc874943a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726337597014443666,Labels:map[string]string{io.kubernetes.container.n
ame: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-044534,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf5a2fe77fb890757a786aa9dbe2a0c9,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5752f872d26f493621e8280eb584628fc158fc7d4ee8a9bb67089f1dceda4fb9,PodSandboxId:a1ddfd902a1565bdee0e0bd82703b80409e823af9c1cc583800b57735628d8ad,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726337596969697094,Labels:map[string]string{io.kubernetes.container.name: kube-ap
iserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-044534,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c5ac7aebee65c9fd4b6b78f515e5f745,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d03e829cc4e307f424a17ddbab91a71a7239034ddbd590147c65613b14c9843d,PodSandboxId:046b0a9a021643e1b6277c8704a9c9c5bd1617bb243b63187506f291730f193e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726337596935665198,Labels:map[string]string{io.kubernetes.container.name: kube-contr
oller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-044534,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3130078de51e006954a7a6d2abf41ca0,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bdca84f70f07469a54bb47f68b5986eebf504d6277d68e4f03900b0a5335e0d9,PodSandboxId:06882a4abff9d7941f8588834b6d83bf2e7880ac74e12f4566ce3687118c4687,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726337312905613730,Labels:map[string]string{io.kubernetes.container.name: kube-
apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-044534,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c5ac7aebee65c9fd4b6b78f515e5f745,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=33f5f637-55c9-49da-ad2b-7245174e5941 name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 18:30:02 embed-certs-044534 crio[708]: time="2024-09-14 18:30:02.488342649Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=420cdee9-25f1-4352-90ab-cf910e12c5d1 name=/runtime.v1.RuntimeService/Version
	Sep 14 18:30:02 embed-certs-044534 crio[708]: time="2024-09-14 18:30:02.488477978Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=420cdee9-25f1-4352-90ab-cf910e12c5d1 name=/runtime.v1.RuntimeService/Version
	Sep 14 18:30:02 embed-certs-044534 crio[708]: time="2024-09-14 18:30:02.489707524Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=3f7fba1e-06ca-482e-ac90-b304e5c92804 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 14 18:30:02 embed-certs-044534 crio[708]: time="2024-09-14 18:30:02.490348886Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726338602490318867,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3f7fba1e-06ca-482e-ac90-b304e5c92804 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 14 18:30:02 embed-certs-044534 crio[708]: time="2024-09-14 18:30:02.491343607Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=27323b91-a646-4657-be1b-86695fa566f0 name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 18:30:02 embed-certs-044534 crio[708]: time="2024-09-14 18:30:02.491457676Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=27323b91-a646-4657-be1b-86695fa566f0 name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 18:30:02 embed-certs-044534 crio[708]: time="2024-09-14 18:30:02.491736898Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:3b14b9a711037df8e42120c5beb191e62b824ee3f02aed0ec4de6d1a920a4ee7,PodSandboxId:66f03c0f1657cf4d703f0f5390d3c9c4eafc7439c6913eb6d8fc6a05c90b5593,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726337608938220205,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dec7a14c-b6f7-464f-86b3-5f7d8063d8e0,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b95b9d14a386180ebaaf2e7d55a6720a2e06f5e3f48326dbbdc20cca60094616,PodSandboxId:016fce22989ba2ae3d83017b4759e956f77b59d76ff0b39fcf327d2d55b27c2c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726337608666749667,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-9j6sv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87c28a4b-015e-46b8-a462-9dc6ed06d914,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de161a601677d26aab41de3b70f5c946ce5ad13539a9cc8c4e9f7fc6c7010819,PodSandboxId:2d6b3768f9a201ea441754de60fdcab1ca0a67fec5228ddce39f64fd822f9e30,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726337608635114862,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-67dsl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4
80fe6ea-838d-4048-9893-947d43e7b5c9,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:40119c8929f7cbf9b816df426f17ac2164e896a437e012d83edc7580e923953a,PodSandboxId:3d14f1eab7b0abca7284d913d74042ac558e42f5dbcff4ce6d97397c44316979,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt
:1726337607996316063,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-26fx6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1cb48201-6caf-4787-9e27-a55885a8ae2a,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ccdf8eadda64b6b6babfa42af58c1d94c37998555c90afafb4e1e937fc7c731,PodSandboxId:e9867cb9d893b8f2dc536328cea646f82046121c7c6744c6781bed6b3a474169,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726337597029475608,Labels:map[string]
string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-044534,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d2b550cd716e95b332bdb65907bdbd9,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:981d6d37d393f75c48cd019ff9f6aaee770530225cb6d7e9a9024ea0b992119a,PodSandboxId:0cfe17366efed733539406b2369fa606059fda4d5a13646c03bb5dddc874943a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726337597014443666,Labels:map[string]string{io.kubernetes.container.n
ame: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-044534,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf5a2fe77fb890757a786aa9dbe2a0c9,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5752f872d26f493621e8280eb584628fc158fc7d4ee8a9bb67089f1dceda4fb9,PodSandboxId:a1ddfd902a1565bdee0e0bd82703b80409e823af9c1cc583800b57735628d8ad,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726337596969697094,Labels:map[string]string{io.kubernetes.container.name: kube-ap
iserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-044534,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c5ac7aebee65c9fd4b6b78f515e5f745,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d03e829cc4e307f424a17ddbab91a71a7239034ddbd590147c65613b14c9843d,PodSandboxId:046b0a9a021643e1b6277c8704a9c9c5bd1617bb243b63187506f291730f193e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726337596935665198,Labels:map[string]string{io.kubernetes.container.name: kube-contr
oller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-044534,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3130078de51e006954a7a6d2abf41ca0,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bdca84f70f07469a54bb47f68b5986eebf504d6277d68e4f03900b0a5335e0d9,PodSandboxId:06882a4abff9d7941f8588834b6d83bf2e7880ac74e12f4566ce3687118c4687,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726337312905613730,Labels:map[string]string{io.kubernetes.container.name: kube-
apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-044534,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c5ac7aebee65c9fd4b6b78f515e5f745,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=27323b91-a646-4657-be1b-86695fa566f0 name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 18:30:02 embed-certs-044534 crio[708]: time="2024-09-14 18:30:02.537863983Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=20273c03-704e-422e-80d5-9cbdd9a56530 name=/runtime.v1.RuntimeService/Version
	Sep 14 18:30:02 embed-certs-044534 crio[708]: time="2024-09-14 18:30:02.538045585Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=20273c03-704e-422e-80d5-9cbdd9a56530 name=/runtime.v1.RuntimeService/Version
	Sep 14 18:30:02 embed-certs-044534 crio[708]: time="2024-09-14 18:30:02.539500635Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=903a5834-aee4-4b5f-8e6f-3bfaddfc2216 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 14 18:30:02 embed-certs-044534 crio[708]: time="2024-09-14 18:30:02.539933015Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726338602539908869,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=903a5834-aee4-4b5f-8e6f-3bfaddfc2216 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 14 18:30:02 embed-certs-044534 crio[708]: time="2024-09-14 18:30:02.540681262Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ded9d7de-f5d4-4caa-a7a1-37d18f02b85d name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 18:30:02 embed-certs-044534 crio[708]: time="2024-09-14 18:30:02.540755914Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ded9d7de-f5d4-4caa-a7a1-37d18f02b85d name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 18:30:02 embed-certs-044534 crio[708]: time="2024-09-14 18:30:02.540962162Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:3b14b9a711037df8e42120c5beb191e62b824ee3f02aed0ec4de6d1a920a4ee7,PodSandboxId:66f03c0f1657cf4d703f0f5390d3c9c4eafc7439c6913eb6d8fc6a05c90b5593,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726337608938220205,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dec7a14c-b6f7-464f-86b3-5f7d8063d8e0,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b95b9d14a386180ebaaf2e7d55a6720a2e06f5e3f48326dbbdc20cca60094616,PodSandboxId:016fce22989ba2ae3d83017b4759e956f77b59d76ff0b39fcf327d2d55b27c2c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726337608666749667,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-9j6sv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87c28a4b-015e-46b8-a462-9dc6ed06d914,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de161a601677d26aab41de3b70f5c946ce5ad13539a9cc8c4e9f7fc6c7010819,PodSandboxId:2d6b3768f9a201ea441754de60fdcab1ca0a67fec5228ddce39f64fd822f9e30,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726337608635114862,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-67dsl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4
80fe6ea-838d-4048-9893-947d43e7b5c9,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:40119c8929f7cbf9b816df426f17ac2164e896a437e012d83edc7580e923953a,PodSandboxId:3d14f1eab7b0abca7284d913d74042ac558e42f5dbcff4ce6d97397c44316979,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt
:1726337607996316063,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-26fx6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1cb48201-6caf-4787-9e27-a55885a8ae2a,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ccdf8eadda64b6b6babfa42af58c1d94c37998555c90afafb4e1e937fc7c731,PodSandboxId:e9867cb9d893b8f2dc536328cea646f82046121c7c6744c6781bed6b3a474169,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726337597029475608,Labels:map[string]
string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-044534,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d2b550cd716e95b332bdb65907bdbd9,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:981d6d37d393f75c48cd019ff9f6aaee770530225cb6d7e9a9024ea0b992119a,PodSandboxId:0cfe17366efed733539406b2369fa606059fda4d5a13646c03bb5dddc874943a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726337597014443666,Labels:map[string]string{io.kubernetes.container.n
ame: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-044534,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf5a2fe77fb890757a786aa9dbe2a0c9,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5752f872d26f493621e8280eb584628fc158fc7d4ee8a9bb67089f1dceda4fb9,PodSandboxId:a1ddfd902a1565bdee0e0bd82703b80409e823af9c1cc583800b57735628d8ad,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726337596969697094,Labels:map[string]string{io.kubernetes.container.name: kube-ap
iserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-044534,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c5ac7aebee65c9fd4b6b78f515e5f745,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d03e829cc4e307f424a17ddbab91a71a7239034ddbd590147c65613b14c9843d,PodSandboxId:046b0a9a021643e1b6277c8704a9c9c5bd1617bb243b63187506f291730f193e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726337596935665198,Labels:map[string]string{io.kubernetes.container.name: kube-contr
oller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-044534,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3130078de51e006954a7a6d2abf41ca0,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bdca84f70f07469a54bb47f68b5986eebf504d6277d68e4f03900b0a5335e0d9,PodSandboxId:06882a4abff9d7941f8588834b6d83bf2e7880ac74e12f4566ce3687118c4687,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726337312905613730,Labels:map[string]string{io.kubernetes.container.name: kube-
apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-044534,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c5ac7aebee65c9fd4b6b78f515e5f745,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ded9d7de-f5d4-4caa-a7a1-37d18f02b85d name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 18:30:02 embed-certs-044534 crio[708]: time="2024-09-14 18:30:02.585725297Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=29eff9cd-d7a2-4d29-9d49-a839efe26f86 name=/runtime.v1.RuntimeService/Version
	Sep 14 18:30:02 embed-certs-044534 crio[708]: time="2024-09-14 18:30:02.585797785Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=29eff9cd-d7a2-4d29-9d49-a839efe26f86 name=/runtime.v1.RuntimeService/Version
	Sep 14 18:30:02 embed-certs-044534 crio[708]: time="2024-09-14 18:30:02.586960306Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=27d4c690-d726-49cf-8fa7-54af592135ab name=/runtime.v1.ImageService/ImageFsInfo
	Sep 14 18:30:02 embed-certs-044534 crio[708]: time="2024-09-14 18:30:02.587472524Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726338602587446469,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=27d4c690-d726-49cf-8fa7-54af592135ab name=/runtime.v1.ImageService/ImageFsInfo
	Sep 14 18:30:02 embed-certs-044534 crio[708]: time="2024-09-14 18:30:02.588197903Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=fe7d7d1d-0718-4f30-8ad9-d07ae3414a19 name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 18:30:02 embed-certs-044534 crio[708]: time="2024-09-14 18:30:02.588262355Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=fe7d7d1d-0718-4f30-8ad9-d07ae3414a19 name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 18:30:02 embed-certs-044534 crio[708]: time="2024-09-14 18:30:02.588457466Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:3b14b9a711037df8e42120c5beb191e62b824ee3f02aed0ec4de6d1a920a4ee7,PodSandboxId:66f03c0f1657cf4d703f0f5390d3c9c4eafc7439c6913eb6d8fc6a05c90b5593,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726337608938220205,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dec7a14c-b6f7-464f-86b3-5f7d8063d8e0,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b95b9d14a386180ebaaf2e7d55a6720a2e06f5e3f48326dbbdc20cca60094616,PodSandboxId:016fce22989ba2ae3d83017b4759e956f77b59d76ff0b39fcf327d2d55b27c2c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726337608666749667,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-9j6sv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87c28a4b-015e-46b8-a462-9dc6ed06d914,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de161a601677d26aab41de3b70f5c946ce5ad13539a9cc8c4e9f7fc6c7010819,PodSandboxId:2d6b3768f9a201ea441754de60fdcab1ca0a67fec5228ddce39f64fd822f9e30,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726337608635114862,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-67dsl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4
80fe6ea-838d-4048-9893-947d43e7b5c9,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:40119c8929f7cbf9b816df426f17ac2164e896a437e012d83edc7580e923953a,PodSandboxId:3d14f1eab7b0abca7284d913d74042ac558e42f5dbcff4ce6d97397c44316979,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt
:1726337607996316063,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-26fx6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1cb48201-6caf-4787-9e27-a55885a8ae2a,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ccdf8eadda64b6b6babfa42af58c1d94c37998555c90afafb4e1e937fc7c731,PodSandboxId:e9867cb9d893b8f2dc536328cea646f82046121c7c6744c6781bed6b3a474169,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726337597029475608,Labels:map[string]
string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-044534,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d2b550cd716e95b332bdb65907bdbd9,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:981d6d37d393f75c48cd019ff9f6aaee770530225cb6d7e9a9024ea0b992119a,PodSandboxId:0cfe17366efed733539406b2369fa606059fda4d5a13646c03bb5dddc874943a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726337597014443666,Labels:map[string]string{io.kubernetes.container.n
ame: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-044534,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf5a2fe77fb890757a786aa9dbe2a0c9,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5752f872d26f493621e8280eb584628fc158fc7d4ee8a9bb67089f1dceda4fb9,PodSandboxId:a1ddfd902a1565bdee0e0bd82703b80409e823af9c1cc583800b57735628d8ad,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726337596969697094,Labels:map[string]string{io.kubernetes.container.name: kube-ap
iserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-044534,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c5ac7aebee65c9fd4b6b78f515e5f745,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d03e829cc4e307f424a17ddbab91a71a7239034ddbd590147c65613b14c9843d,PodSandboxId:046b0a9a021643e1b6277c8704a9c9c5bd1617bb243b63187506f291730f193e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726337596935665198,Labels:map[string]string{io.kubernetes.container.name: kube-contr
oller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-044534,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3130078de51e006954a7a6d2abf41ca0,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bdca84f70f07469a54bb47f68b5986eebf504d6277d68e4f03900b0a5335e0d9,PodSandboxId:06882a4abff9d7941f8588834b6d83bf2e7880ac74e12f4566ce3687118c4687,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726337312905613730,Labels:map[string]string{io.kubernetes.container.name: kube-
apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-044534,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c5ac7aebee65c9fd4b6b78f515e5f745,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=fe7d7d1d-0718-4f30-8ad9-d07ae3414a19 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	3b14b9a711037       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   16 minutes ago      Running             storage-provisioner       0                   66f03c0f1657c       storage-provisioner
	b95b9d14a3861       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   16 minutes ago      Running             coredns                   0                   016fce22989ba       coredns-7c65d6cfc9-9j6sv
	de161a601677d       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   16 minutes ago      Running             coredns                   0                   2d6b3768f9a20       coredns-7c65d6cfc9-67dsl
	40119c8929f7c       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561   16 minutes ago      Running             kube-proxy                0                   3d14f1eab7b0a       kube-proxy-26fx6
	0ccdf8eadda64       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   16 minutes ago      Running             etcd                      2                   e9867cb9d893b       etcd-embed-certs-044534
	981d6d37d393f       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b   16 minutes ago      Running             kube-scheduler            2                   0cfe17366efed       kube-scheduler-embed-certs-044534
	5752f872d26f4       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee   16 minutes ago      Running             kube-apiserver            2                   a1ddfd902a156       kube-apiserver-embed-certs-044534
	d03e829cc4e30       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1   16 minutes ago      Running             kube-controller-manager   2                   046b0a9a02164       kube-controller-manager-embed-certs-044534
	bdca84f70f074       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee   21 minutes ago      Exited              kube-apiserver            1                   06882a4abff9d       kube-apiserver-embed-certs-044534
	
	
	==> coredns [b95b9d14a386180ebaaf2e7d55a6720a2e06f5e3f48326dbbdc20cca60094616] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> coredns [de161a601677d26aab41de3b70f5c946ce5ad13539a9cc8c4e9f7fc6c7010819] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> describe nodes <==
	Name:               embed-certs-044534
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-044534
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fbeeb744274463b05401c917e5ab21bbaf5ef95a
	                    minikube.k8s.io/name=embed-certs-044534
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_14T18_13_23_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 14 Sep 2024 18:13:19 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-044534
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 14 Sep 2024 18:30:02 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 14 Sep 2024 18:28:51 +0000   Sat, 14 Sep 2024 18:13:17 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 14 Sep 2024 18:28:51 +0000   Sat, 14 Sep 2024 18:13:17 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 14 Sep 2024 18:28:51 +0000   Sat, 14 Sep 2024 18:13:17 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 14 Sep 2024 18:28:51 +0000   Sat, 14 Sep 2024 18:13:20 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.126
	  Hostname:    embed-certs-044534
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 fa46e8db94cc40c4b0205a2a3853f385
	  System UUID:                fa46e8db-94cc-40c4-b020-5a2a3853f385
	  Boot ID:                    f5ab6040-5102-4ce0-acbf-20cfd0e231bd
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7c65d6cfc9-67dsl                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     16m
	  kube-system                 coredns-7c65d6cfc9-9j6sv                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     16m
	  kube-system                 etcd-embed-certs-044534                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         16m
	  kube-system                 kube-apiserver-embed-certs-044534             250m (12%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-controller-manager-embed-certs-044534    200m (10%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-proxy-26fx6                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-scheduler-embed-certs-044534             100m (5%)     0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 metrics-server-6867b74b74-rrfnt               100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         16m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 16m   kube-proxy       
	  Normal  Starting                 16m   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  16m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  16m   kubelet          Node embed-certs-044534 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    16m   kubelet          Node embed-certs-044534 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     16m   kubelet          Node embed-certs-044534 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           16m   node-controller  Node embed-certs-044534 event: Registered Node embed-certs-044534 in Controller
	
	
	==> dmesg <==
	[  +0.051128] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.036985] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.772461] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.959939] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.579698] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000004] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.245212] systemd-fstab-generator[629]: Ignoring "noauto" option for root device
	[  +0.062058] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.078736] systemd-fstab-generator[641]: Ignoring "noauto" option for root device
	[  +0.195418] systemd-fstab-generator[655]: Ignoring "noauto" option for root device
	[  +0.127527] systemd-fstab-generator[667]: Ignoring "noauto" option for root device
	[  +0.293220] systemd-fstab-generator[697]: Ignoring "noauto" option for root device
	[  +4.057528] systemd-fstab-generator[791]: Ignoring "noauto" option for root device
	[  +2.006304] systemd-fstab-generator[912]: Ignoring "noauto" option for root device
	[  +0.064752] kauditd_printk_skb: 158 callbacks suppressed
	[  +5.545228] kauditd_printk_skb: 69 callbacks suppressed
	[  +6.949076] kauditd_printk_skb: 85 callbacks suppressed
	[Sep14 18:13] systemd-fstab-generator[2570]: Ignoring "noauto" option for root device
	[  +0.065962] kauditd_printk_skb: 8 callbacks suppressed
	[  +5.985638] systemd-fstab-generator[2893]: Ignoring "noauto" option for root device
	[  +0.085570] kauditd_printk_skb: 54 callbacks suppressed
	[  +4.783605] systemd-fstab-generator[3019]: Ignoring "noauto" option for root device
	[  +0.785968] kauditd_printk_skb: 34 callbacks suppressed
	[Sep14 18:14] kauditd_printk_skb: 64 callbacks suppressed
	
	
	==> etcd [0ccdf8eadda64b6b6babfa42af58c1d94c37998555c90afafb4e1e937fc7c731] <==
	{"level":"info","ts":"2024-09-14T18:13:18.270062Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1031fe77cc914812 received MsgVoteResp from 1031fe77cc914812 at term 2"}
	{"level":"info","ts":"2024-09-14T18:13:18.270107Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1031fe77cc914812 became leader at term 2"}
	{"level":"info","ts":"2024-09-14T18:13:18.270140Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 1031fe77cc914812 elected leader 1031fe77cc914812 at term 2"}
	{"level":"info","ts":"2024-09-14T18:13:18.271572Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"1031fe77cc914812","local-member-attributes":"{Name:embed-certs-044534 ClientURLs:[https://192.168.50.126:2379]}","request-path":"/0/members/1031fe77cc914812/attributes","cluster-id":"4ddc981c9374e971","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-14T18:13:18.271788Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-14T18:13:18.271921Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-14T18:13:18.272586Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-14T18:13:18.272656Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-14T18:13:18.272719Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"4ddc981c9374e971","local-member-id":"1031fe77cc914812","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-14T18:13:18.272834Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-14T18:13:18.272879Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-14T18:13:18.272916Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-14T18:13:18.273724Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-14T18:13:18.274558Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.126:2379"}
	{"level":"info","ts":"2024-09-14T18:13:18.280661Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-14T18:13:18.281459Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-14T18:23:18.318673Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":681}
	{"level":"info","ts":"2024-09-14T18:23:18.328157Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":681,"took":"8.787723ms","hash":3996644172,"current-db-size-bytes":2306048,"current-db-size":"2.3 MB","current-db-size-in-use-bytes":2306048,"current-db-size-in-use":"2.3 MB"}
	{"level":"info","ts":"2024-09-14T18:23:18.328261Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3996644172,"revision":681,"compact-revision":-1}
	{"level":"info","ts":"2024-09-14T18:28:18.324736Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":925}
	{"level":"info","ts":"2024-09-14T18:28:18.328870Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":925,"took":"3.731464ms","hash":1777990685,"current-db-size-bytes":2306048,"current-db-size":"2.3 MB","current-db-size-in-use-bytes":1609728,"current-db-size-in-use":"1.6 MB"}
	{"level":"info","ts":"2024-09-14T18:28:18.328921Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1777990685,"revision":925,"compact-revision":681}
	{"level":"info","ts":"2024-09-14T18:29:24.345380Z","caller":"traceutil/trace.go:171","msg":"trace[1371644983] transaction","detail":"{read_only:false; response_revision:1224; number_of_response:1; }","duration":"262.533428ms","start":"2024-09-14T18:29:24.082799Z","end":"2024-09-14T18:29:24.345333Z","steps":["trace[1371644983] 'process raft request'  (duration: 262.350916ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-14T18:29:24.595845Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"142.137759ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-14T18:29:24.596535Z","caller":"traceutil/trace.go:171","msg":"trace[683834255] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1224; }","duration":"142.912156ms","start":"2024-09-14T18:29:24.453599Z","end":"2024-09-14T18:29:24.596511Z","steps":["trace[683834255] 'range keys from in-memory index tree'  (duration: 142.064094ms)"],"step_count":1}
	
	
	==> kernel <==
	 18:30:02 up 21 min,  0 users,  load average: 0.08, 0.14, 0.17
	Linux embed-certs-044534 5.10.207 #1 SMP Sat Sep 14 07:35:46 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [5752f872d26f493621e8280eb584628fc158fc7d4ee8a9bb67089f1dceda4fb9] <==
	I0914 18:26:20.663327       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0914 18:26:20.663359       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0914 18:28:19.661835       1 handler_proxy.go:99] no RequestInfo found in the context
	E0914 18:28:19.662204       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0914 18:28:20.664792       1 handler_proxy.go:99] no RequestInfo found in the context
	E0914 18:28:20.664893       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0914 18:28:20.664961       1 handler_proxy.go:99] no RequestInfo found in the context
	E0914 18:28:20.665060       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0914 18:28:20.666011       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0914 18:28:20.667168       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0914 18:29:20.666651       1 handler_proxy.go:99] no RequestInfo found in the context
	E0914 18:29:20.666890       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0914 18:29:20.668194       1 handler_proxy.go:99] no RequestInfo found in the context
	E0914 18:29:20.668249       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0914 18:29:20.668292       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0914 18:29:20.669456       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-apiserver [bdca84f70f07469a54bb47f68b5986eebf504d6277d68e4f03900b0a5335e0d9] <==
	W0914 18:13:12.564091       1 logging.go:55] [core] [Channel #37 SubChannel #38]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0914 18:13:12.664764       1 logging.go:55] [core] [Channel #172 SubChannel #173]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0914 18:13:12.685765       1 logging.go:55] [core] [Channel #28 SubChannel #29]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0914 18:13:12.785670       1 logging.go:55] [core] [Channel #121 SubChannel #122]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0914 18:13:12.826266       1 logging.go:55] [core] [Channel #46 SubChannel #47]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0914 18:13:12.871831       1 logging.go:55] [core] [Channel #142 SubChannel #143]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0914 18:13:12.891300       1 logging.go:55] [core] [Channel #85 SubChannel #86]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0914 18:13:12.982121       1 logging.go:55] [core] [Channel #97 SubChannel #98]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0914 18:13:13.009797       1 logging.go:55] [core] [Channel #40 SubChannel #41]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0914 18:13:13.055644       1 logging.go:55] [core] [Channel #25 SubChannel #26]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0914 18:13:13.073390       1 logging.go:55] [core] [Channel #169 SubChannel #170]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0914 18:13:13.076776       1 logging.go:55] [core] [Channel #55 SubChannel #56]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0914 18:13:13.097716       1 logging.go:55] [core] [Channel #133 SubChannel #134]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0914 18:13:13.130255       1 logging.go:55] [core] [Channel #67 SubChannel #68]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0914 18:13:13.154393       1 logging.go:55] [core] [Channel #157 SubChannel #158]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0914 18:13:13.218489       1 logging.go:55] [core] [Channel #76 SubChannel #77]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0914 18:13:13.285422       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0914 18:13:13.355212       1 logging.go:55] [core] [Channel #136 SubChannel #137]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0914 18:13:13.444413       1 logging.go:55] [core] [Channel #124 SubChannel #125]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0914 18:13:13.544881       1 logging.go:55] [core] [Channel #163 SubChannel #164]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0914 18:13:13.555482       1 logging.go:55] [core] [Channel #100 SubChannel #101]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0914 18:13:13.566884       1 logging.go:55] [core] [Channel #49 SubChannel #50]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0914 18:13:13.685456       1 logging.go:55] [core] [Channel #118 SubChannel #119]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0914 18:13:13.992142       1 logging.go:55] [core] [Channel #58 SubChannel #59]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0914 18:13:14.676904       1 logging.go:55] [core] [Channel #208 SubChannel #209]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [d03e829cc4e307f424a17ddbab91a71a7239034ddbd590147c65613b14c9843d] <==
	E0914 18:24:56.730886       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0914 18:24:57.263443       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0914 18:25:26.737835       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0914 18:25:27.272637       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0914 18:25:56.744311       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0914 18:25:57.289791       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0914 18:26:26.752642       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0914 18:26:27.298560       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0914 18:26:56.759431       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0914 18:26:57.308324       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0914 18:27:26.766619       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0914 18:27:27.317499       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0914 18:27:56.773791       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0914 18:27:57.335187       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0914 18:28:26.780641       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0914 18:28:27.342853       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0914 18:28:51.958836       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="embed-certs-044534"
	E0914 18:28:56.788191       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0914 18:28:57.351572       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0914 18:29:26.803887       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0914 18:29:27.361123       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0914 18:29:48.399704       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="250.326µs"
	E0914 18:29:56.811020       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0914 18:29:57.381710       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0914 18:30:00.405172       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="94.048µs"
	
	
	==> kube-proxy [40119c8929f7cbf9b816df426f17ac2164e896a437e012d83edc7580e923953a] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0914 18:13:28.911596       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0914 18:13:28.998254       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.50.126"]
	E0914 18:13:28.998344       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0914 18:13:29.128867       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0914 18:13:29.129043       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0914 18:13:29.129070       1 server_linux.go:169] "Using iptables Proxier"
	I0914 18:13:29.133612       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0914 18:13:29.133941       1 server.go:483] "Version info" version="v1.31.1"
	I0914 18:13:29.133969       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0914 18:13:29.135773       1 config.go:199] "Starting service config controller"
	I0914 18:13:29.135834       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0914 18:13:29.135883       1 config.go:105] "Starting endpoint slice config controller"
	I0914 18:13:29.135889       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0914 18:13:29.137505       1 config.go:328] "Starting node config controller"
	I0914 18:13:29.137581       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0914 18:13:29.236342       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0914 18:13:29.236405       1 shared_informer.go:320] Caches are synced for service config
	I0914 18:13:29.237797       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [981d6d37d393f75c48cd019ff9f6aaee770530225cb6d7e9a9024ea0b992119a] <==
	W0914 18:13:19.718559       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0914 18:13:19.722146       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0914 18:13:19.718590       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0914 18:13:19.722229       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0914 18:13:19.718646       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0914 18:13:19.722290       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0914 18:13:20.533080       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0914 18:13:20.533119       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0914 18:13:20.568599       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0914 18:13:20.568656       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0914 18:13:20.589321       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0914 18:13:20.589524       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0914 18:13:20.744197       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0914 18:13:20.744374       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0914 18:13:20.844248       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0914 18:13:20.845514       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0914 18:13:20.956587       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0914 18:13:20.956747       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0914 18:13:20.959907       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0914 18:13:20.960083       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0914 18:13:20.981948       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0914 18:13:20.982197       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0914 18:13:21.014514       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0914 18:13:21.014728       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0914 18:13:23.108805       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 14 18:29:07 embed-certs-044534 kubelet[2900]: E0914 18:29:07.381692    2900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-rrfnt" podUID="a1deacaa-9b90-49ac-8b8b-0bd909b5f6e0"
	Sep 14 18:29:12 embed-certs-044534 kubelet[2900]: E0914 18:29:12.661493    2900 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726338552661072501,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 18:29:12 embed-certs-044534 kubelet[2900]: E0914 18:29:12.661892    2900 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726338552661072501,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 18:29:21 embed-certs-044534 kubelet[2900]: E0914 18:29:21.382671    2900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-rrfnt" podUID="a1deacaa-9b90-49ac-8b8b-0bd909b5f6e0"
	Sep 14 18:29:22 embed-certs-044534 kubelet[2900]: E0914 18:29:22.403175    2900 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 14 18:29:22 embed-certs-044534 kubelet[2900]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 14 18:29:22 embed-certs-044534 kubelet[2900]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 14 18:29:22 embed-certs-044534 kubelet[2900]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 14 18:29:22 embed-certs-044534 kubelet[2900]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 14 18:29:22 embed-certs-044534 kubelet[2900]: E0914 18:29:22.663864    2900 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726338562663089992,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 18:29:22 embed-certs-044534 kubelet[2900]: E0914 18:29:22.664050    2900 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726338562663089992,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 18:29:32 embed-certs-044534 kubelet[2900]: E0914 18:29:32.666695    2900 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726338572666194992,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 18:29:32 embed-certs-044534 kubelet[2900]: E0914 18:29:32.666745    2900 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726338572666194992,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 18:29:35 embed-certs-044534 kubelet[2900]: E0914 18:29:35.398971    2900 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Sep 14 18:29:35 embed-certs-044534 kubelet[2900]: E0914 18:29:35.399350    2900 kuberuntime_image.go:55] "Failed to pull image" err="pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Sep 14 18:29:35 embed-certs-044534 kubelet[2900]: E0914 18:29:35.399609    2900 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-76lkt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation
:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Std
in:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod metrics-server-6867b74b74-rrfnt_kube-system(a1deacaa-9b90-49ac-8b8b-0bd909b5f6e0): ErrImagePull: pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" logger="UnhandledError"
	Sep 14 18:29:35 embed-certs-044534 kubelet[2900]: E0914 18:29:35.400909    2900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-6867b74b74-rrfnt" podUID="a1deacaa-9b90-49ac-8b8b-0bd909b5f6e0"
	Sep 14 18:29:42 embed-certs-044534 kubelet[2900]: E0914 18:29:42.669918    2900 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726338582669218690,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 18:29:42 embed-certs-044534 kubelet[2900]: E0914 18:29:42.670250    2900 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726338582669218690,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 18:29:48 embed-certs-044534 kubelet[2900]: E0914 18:29:48.381230    2900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-rrfnt" podUID="a1deacaa-9b90-49ac-8b8b-0bd909b5f6e0"
	Sep 14 18:29:52 embed-certs-044534 kubelet[2900]: E0914 18:29:52.672299    2900 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726338592671816010,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 18:29:52 embed-certs-044534 kubelet[2900]: E0914 18:29:52.672662    2900 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726338592671816010,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 18:30:00 embed-certs-044534 kubelet[2900]: E0914 18:30:00.381887    2900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-rrfnt" podUID="a1deacaa-9b90-49ac-8b8b-0bd909b5f6e0"
	Sep 14 18:30:02 embed-certs-044534 kubelet[2900]: E0914 18:30:02.675142    2900 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726338602674689811,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 18:30:02 embed-certs-044534 kubelet[2900]: E0914 18:30:02.675168    2900 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726338602674689811,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [3b14b9a711037df8e42120c5beb191e62b824ee3f02aed0ec4de6d1a920a4ee7] <==
	I0914 18:13:29.146841       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0914 18:13:29.157570       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0914 18:13:29.157723       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0914 18:13:29.169588       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0914 18:13:29.170835       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-044534_7c3d1e0d-2f84-4629-82f0-d1eff9a375d1!
	I0914 18:13:29.172796       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"3e386ff4-ba65-44bc-ad68-ca726a1bd2ed", APIVersion:"v1", ResourceVersion:"391", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-044534_7c3d1e0d-2f84-4629-82f0-d1eff9a375d1 became leader
	I0914 18:13:29.271496       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-044534_7c3d1e0d-2f84-4629-82f0-d1eff9a375d1!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-044534 -n embed-certs-044534
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-044534 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-6867b74b74-rrfnt
helpers_test.go:274: ======> post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context embed-certs-044534 describe pod metrics-server-6867b74b74-rrfnt
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context embed-certs-044534 describe pod metrics-server-6867b74b74-rrfnt: exit status 1 (80.839369ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-6867b74b74-rrfnt" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context embed-certs-044534 describe pod metrics-server-6867b74b74-rrfnt: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (442.31s)
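
The kubelet log above shows metrics-server-6867b74b74-rrfnt repeatedly failing to pull fake.domain/registry.k8s.io/echoserver:1.4 (the host is intentionally unresolvable), and the post-mortem describe fails only because that pod had already been deleted by the time it ran. A minimal sketch for re-checking the state by hand, assuming the embed-certs-044534 profile is still running; the deployment name metrics-server is inferred from the ReplicaSet key kube-system/metrics-server-6867b74b74 seen above:

	# list any pods that are not Running, cluster-wide (same field selector the helper uses)
	kubectl --context embed-certs-044534 get pods -A --field-selector=status.phase!=Running
	# confirm which image the metrics-server deployment is actually configured to pull
	kubectl --context embed-certs-044534 -n kube-system get deploy metrics-server -o jsonpath='{.spec.template.spec.containers[0].image}'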

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (542.69s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:287: ***** TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-243449 -n default-k8s-diff-port-243449
start_stop_delete_test.go:287: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-09-14 18:31:48.652903005 +0000 UTC m=+6483.174636979
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-243449 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-243449 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (65.910237ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): namespaces "kubernetes-dashboard" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-diff-port-243449 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
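The describe step above fails because the kubernetes-dashboard namespace was never created, so there is no dashboard-metrics-scraper deployment to inspect. A minimal sketch for checking the dashboard addon by hand, assuming the default-k8s-diff-port-243449 profile is still running; these commands mirror what the test waits for but are not part of the test itself:

	# see whether the dashboard addon is enabled for this profile
	out/minikube-linux-amd64 -p default-k8s-diff-port-243449 addons list
	# enable it if needed, then look for the pods the test's label selector matches
	out/minikube-linux-amd64 -p default-k8s-diff-port-243449 addons enable dashboard
	kubectl --context default-k8s-diff-port-243449 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard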
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-243449 -n default-k8s-diff-port-243449
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-243449 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-243449 logs -n 25: (1.573898542s)
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------|-----------------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |        Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|-----------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p auto-691590 sudo cat                              | auto-691590           | jenkins | v1.34.0 | 14 Sep 24 18:31 UTC |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                       |         |         |                     |                     |
	| ssh     | -p auto-691590 sudo cat                              | auto-691590           | jenkins | v1.34.0 | 14 Sep 24 18:31 UTC | 14 Sep 24 18:31 UTC |
	|         | /usr/lib/systemd/system/cri-docker.service           |                       |         |         |                     |                     |
	| ssh     | -p auto-691590 sudo                                  | auto-691590           | jenkins | v1.34.0 | 14 Sep 24 18:31 UTC | 14 Sep 24 18:31 UTC |
	|         | cri-dockerd --version                                |                       |         |         |                     |                     |
	| ssh     | -p auto-691590 sudo systemctl                        | auto-691590           | jenkins | v1.34.0 | 14 Sep 24 18:31 UTC |                     |
	|         | status containerd --all --full                       |                       |         |         |                     |                     |
	|         | --no-pager                                           |                       |         |         |                     |                     |
	| ssh     | -p auto-691590 sudo systemctl                        | auto-691590           | jenkins | v1.34.0 | 14 Sep 24 18:31 UTC | 14 Sep 24 18:31 UTC |
	|         | cat containerd --no-pager                            |                       |         |         |                     |                     |
	| ssh     | -p auto-691590 sudo cat                              | auto-691590           | jenkins | v1.34.0 | 14 Sep 24 18:31 UTC | 14 Sep 24 18:31 UTC |
	|         | /lib/systemd/system/containerd.service               |                       |         |         |                     |                     |
	| ssh     | -p auto-691590 sudo cat                              | auto-691590           | jenkins | v1.34.0 | 14 Sep 24 18:31 UTC | 14 Sep 24 18:31 UTC |
	|         | /etc/containerd/config.toml                          |                       |         |         |                     |                     |
	| ssh     | -p auto-691590 sudo containerd                       | auto-691590           | jenkins | v1.34.0 | 14 Sep 24 18:31 UTC | 14 Sep 24 18:31 UTC |
	|         | config dump                                          |                       |         |         |                     |                     |
	| ssh     | -p auto-691590 sudo systemctl                        | auto-691590           | jenkins | v1.34.0 | 14 Sep 24 18:31 UTC | 14 Sep 24 18:31 UTC |
	|         | status crio --all --full                             |                       |         |         |                     |                     |
	|         | --no-pager                                           |                       |         |         |                     |                     |
	| ssh     | -p auto-691590 sudo systemctl                        | auto-691590           | jenkins | v1.34.0 | 14 Sep 24 18:31 UTC | 14 Sep 24 18:31 UTC |
	|         | cat crio --no-pager                                  |                       |         |         |                     |                     |
	| ssh     | -p auto-691590 sudo find                             | auto-691590           | jenkins | v1.34.0 | 14 Sep 24 18:31 UTC | 14 Sep 24 18:31 UTC |
	|         | /etc/crio -type f -exec sh -c                        |                       |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |                       |         |         |                     |                     |
	| ssh     | -p auto-691590 sudo crio                             | auto-691590           | jenkins | v1.34.0 | 14 Sep 24 18:31 UTC | 14 Sep 24 18:31 UTC |
	|         | config                                               |                       |         |         |                     |                     |
	| delete  | -p auto-691590                                       | auto-691590           | jenkins | v1.34.0 | 14 Sep 24 18:31 UTC | 14 Sep 24 18:31 UTC |
	| start   | -p custom-flannel-691590                             | custom-flannel-691590 | jenkins | v1.34.0 | 14 Sep 24 18:31 UTC |                     |
	|         | --memory=3072 --alsologtostderr                      |                       |         |         |                     |                     |
	|         | --wait=true --wait-timeout=15m                       |                       |         |         |                     |                     |
	|         | --cni=testdata/kube-flannel.yaml                     |                       |         |         |                     |                     |
	|         | --driver=kvm2                                        |                       |         |         |                     |                     |
	|         | --container-runtime=crio                             |                       |         |         |                     |                     |
	| ssh     | -p kindnet-691590 pgrep -a                           | kindnet-691590        | jenkins | v1.34.0 | 14 Sep 24 18:31 UTC | 14 Sep 24 18:31 UTC |
	|         | kubelet                                              |                       |         |         |                     |                     |
	| ssh     | -p kindnet-691590 sudo cat                           | kindnet-691590        | jenkins | v1.34.0 | 14 Sep 24 18:31 UTC | 14 Sep 24 18:31 UTC |
	|         | /etc/nsswitch.conf                                   |                       |         |         |                     |                     |
	| ssh     | -p kindnet-691590 sudo cat                           | kindnet-691590        | jenkins | v1.34.0 | 14 Sep 24 18:31 UTC | 14 Sep 24 18:31 UTC |
	|         | /etc/hosts                                           |                       |         |         |                     |                     |
	| ssh     | -p kindnet-691590 sudo cat                           | kindnet-691590        | jenkins | v1.34.0 | 14 Sep 24 18:31 UTC | 14 Sep 24 18:31 UTC |
	|         | /etc/resolv.conf                                     |                       |         |         |                     |                     |
	| ssh     | -p kindnet-691590 sudo crictl                        | kindnet-691590        | jenkins | v1.34.0 | 14 Sep 24 18:31 UTC | 14 Sep 24 18:31 UTC |
	|         | pods                                                 |                       |         |         |                     |                     |
	| ssh     | -p kindnet-691590 sudo crictl                        | kindnet-691590        | jenkins | v1.34.0 | 14 Sep 24 18:31 UTC | 14 Sep 24 18:31 UTC |
	|         | ps --all                                             |                       |         |         |                     |                     |
	| ssh     | -p kindnet-691590 sudo find                          | kindnet-691590        | jenkins | v1.34.0 | 14 Sep 24 18:31 UTC | 14 Sep 24 18:31 UTC |
	|         | /etc/cni -type f -exec sh -c                         |                       |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |                       |         |         |                     |                     |
	| ssh     | -p kindnet-691590 sudo ip a s                        | kindnet-691590        | jenkins | v1.34.0 | 14 Sep 24 18:31 UTC | 14 Sep 24 18:31 UTC |
	| ssh     | -p kindnet-691590 sudo ip r s                        | kindnet-691590        | jenkins | v1.34.0 | 14 Sep 24 18:31 UTC | 14 Sep 24 18:31 UTC |
	| ssh     | -p kindnet-691590 sudo                               | kindnet-691590        | jenkins | v1.34.0 | 14 Sep 24 18:31 UTC | 14 Sep 24 18:31 UTC |
	|         | iptables-save                                        |                       |         |         |                     |                     |
	| ssh     | -p kindnet-691590 sudo                               | kindnet-691590        | jenkins | v1.34.0 | 14 Sep 24 18:31 UTC | 14 Sep 24 18:31 UTC |
	|         | iptables -t nat -L -n -v                             |                       |         |         |                     |                     |
	|---------|------------------------------------------------------|-----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/14 18:31:26
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0914 18:31:26.280857   73571 out.go:345] Setting OutFile to fd 1 ...
	I0914 18:31:26.281006   73571 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 18:31:26.281016   73571 out.go:358] Setting ErrFile to fd 2...
	I0914 18:31:26.281022   73571 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 18:31:26.281248   73571 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19643-8806/.minikube/bin
	I0914 18:31:26.281952   73571 out.go:352] Setting JSON to false
	I0914 18:31:26.283159   73571 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":8030,"bootTime":1726330656,"procs":299,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0914 18:31:26.283272   73571 start.go:139] virtualization: kvm guest
	I0914 18:31:26.285119   73571 out.go:177] * [custom-flannel-691590] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0914 18:31:26.286544   73571 out.go:177]   - MINIKUBE_LOCATION=19643
	I0914 18:31:26.286544   73571 notify.go:220] Checking for updates...
	I0914 18:31:26.289306   73571 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0914 18:31:26.290986   73571 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19643-8806/kubeconfig
	I0914 18:31:26.292496   73571 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19643-8806/.minikube
	I0914 18:31:26.293888   73571 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0914 18:31:26.295238   73571 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0914 18:31:26.297014   73571 config.go:182] Loaded profile config "calico-691590": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0914 18:31:26.297177   73571 config.go:182] Loaded profile config "default-k8s-diff-port-243449": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0914 18:31:26.297280   73571 config.go:182] Loaded profile config "kindnet-691590": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0914 18:31:26.297359   73571 driver.go:394] Setting default libvirt URI to qemu:///system
	I0914 18:31:26.337421   73571 out.go:177] * Using the kvm2 driver based on user configuration
	I0914 18:31:26.338722   73571 start.go:297] selected driver: kvm2
	I0914 18:31:26.338742   73571 start.go:901] validating driver "kvm2" against <nil>
	I0914 18:31:26.338753   73571 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0914 18:31:26.339473   73571 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 18:31:26.339556   73571 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19643-8806/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0914 18:31:26.355782   73571 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0914 18:31:26.355828   73571 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0914 18:31:26.356120   73571 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0914 18:31:26.356173   73571 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I0914 18:31:26.356191   73571 start_flags.go:319] Found "testdata/kube-flannel.yaml" CNI - setting NetworkPlugin=cni
	I0914 18:31:26.356279   73571 start.go:340] cluster config:
	{Name:custom-flannel-691590 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726281268-19643@sha256:cce8be0c1ac4e3d852132008ef1cc1dcf5b79f708d025db83f146ae65db32e8e Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:custom-flannel-691590 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: So
cketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0914 18:31:26.356436   73571 iso.go:125] acquiring lock: {Name:mk538f7a4abc7956f82511183088cbfac1e66ebb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 18:31:26.358379   73571 out.go:177] * Starting "custom-flannel-691590" primary control-plane node in "custom-flannel-691590" cluster
	I0914 18:31:26.359736   73571 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0914 18:31:26.359785   73571 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19643-8806/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0914 18:31:26.359800   73571 cache.go:56] Caching tarball of preloaded images
	I0914 18:31:26.359905   73571 preload.go:172] Found /home/jenkins/minikube-integration/19643-8806/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0914 18:31:26.359918   73571 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0914 18:31:26.360026   73571 profile.go:143] Saving config to /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/custom-flannel-691590/config.json ...
	I0914 18:31:26.360052   73571 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/custom-flannel-691590/config.json: {Name:mk0bf1b31da692852b766c50218cdc7306db8650 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 18:31:26.360240   73571 start.go:360] acquireMachinesLock for custom-flannel-691590: {Name:mk26748ac63472c3f8b6f3848c12e76160c2970c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0914 18:31:26.360280   73571 start.go:364] duration metric: took 22.949µs to acquireMachinesLock for "custom-flannel-691590"
	I0914 18:31:26.360301   73571 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-691590 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19643/minikube-v1.34.0-1726281733-19643-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726281268-19643@sha256:cce8be0c1ac4e3d852132008ef1cc1dcf5b79f708d025db83f146ae65db32e8e Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfi
g:{KubernetesVersion:v1.31.1 ClusterName:custom-flannel-691590 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: Moun
tMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0914 18:31:26.360396   73571 start.go:125] createHost starting for "" (driver="kvm2")
	I0914 18:31:27.452449   71786 kubeadm.go:310] [api-check] The API server is healthy after 6.002932432s
	I0914 18:31:27.466033   71786 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0914 18:31:27.488605   71786 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0914 18:31:27.527908   71786 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0914 18:31:27.528195   71786 kubeadm.go:310] [mark-control-plane] Marking the node calico-691590 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0914 18:31:27.543137   71786 kubeadm.go:310] [bootstrap-token] Using token: 5re6ci.0txvhpgdiyg3wbwu
	I0914 18:31:27.544722   71786 out.go:235]   - Configuring RBAC rules ...
	I0914 18:31:27.544903   71786 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0914 18:31:27.555333   71786 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0914 18:31:27.565155   71786 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0914 18:31:27.571307   71786 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0914 18:31:27.577063   71786 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0914 18:31:27.586628   71786 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0914 18:31:27.863028   71786 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0914 18:31:28.322330   71786 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0914 18:31:28.864082   71786 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0914 18:31:28.864110   71786 kubeadm.go:310] 
	I0914 18:31:28.864186   71786 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0914 18:31:28.864193   71786 kubeadm.go:310] 
	I0914 18:31:28.864328   71786 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0914 18:31:28.864348   71786 kubeadm.go:310] 
	I0914 18:31:28.864371   71786 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0914 18:31:28.864422   71786 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0914 18:31:28.864467   71786 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0914 18:31:28.864474   71786 kubeadm.go:310] 
	I0914 18:31:28.864519   71786 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0914 18:31:28.864526   71786 kubeadm.go:310] 
	I0914 18:31:28.864566   71786 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0914 18:31:28.864575   71786 kubeadm.go:310] 
	I0914 18:31:28.864675   71786 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0914 18:31:28.864774   71786 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0914 18:31:28.864858   71786 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0914 18:31:28.864867   71786 kubeadm.go:310] 
	I0914 18:31:28.864964   71786 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0914 18:31:28.865062   71786 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0914 18:31:28.865071   71786 kubeadm.go:310] 
	I0914 18:31:28.865180   71786 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 5re6ci.0txvhpgdiyg3wbwu \
	I0914 18:31:28.865321   71786 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:30a8b6e98108730ec92a246ad3f4e746211e79f910581e46cd69c4cd78384600 \
	I0914 18:31:28.865362   71786 kubeadm.go:310] 	--control-plane 
	I0914 18:31:28.865371   71786 kubeadm.go:310] 
	I0914 18:31:28.865485   71786 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0914 18:31:28.865493   71786 kubeadm.go:310] 
	I0914 18:31:28.865604   71786 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 5re6ci.0txvhpgdiyg3wbwu \
	I0914 18:31:28.865735   71786 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:30a8b6e98108730ec92a246ad3f4e746211e79f910581e46cd69c4cd78384600 
	I0914 18:31:28.866463   71786 kubeadm.go:310] W0914 18:31:17.040927     833 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0914 18:31:28.866877   71786 kubeadm.go:310] W0914 18:31:17.041884     833 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0914 18:31:28.867044   71786 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0914 18:31:28.867072   71786 cni.go:84] Creating CNI manager for "calico"
	I0914 18:31:28.868930   71786 out.go:177] * Configuring Calico (Container Networking Interface) ...
	I0914 18:31:28.871208   71786 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.1/kubectl ...
	I0914 18:31:28.871229   71786 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (253923 bytes)
	I0914 18:31:28.892797   71786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0914 18:31:26.361909   73571 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0914 18:31:26.362061   73571 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 18:31:26.362112   73571 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 18:31:26.379000   73571 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41479
	I0914 18:31:26.379603   73571 main.go:141] libmachine: () Calling .GetVersion
	I0914 18:31:26.380270   73571 main.go:141] libmachine: Using API Version  1
	I0914 18:31:26.380287   73571 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 18:31:26.380692   73571 main.go:141] libmachine: () Calling .GetMachineName
	I0914 18:31:26.380928   73571 main.go:141] libmachine: (custom-flannel-691590) Calling .GetMachineName
	I0914 18:31:26.381136   73571 main.go:141] libmachine: (custom-flannel-691590) Calling .DriverName
	I0914 18:31:26.381504   73571 start.go:159] libmachine.API.Create for "custom-flannel-691590" (driver="kvm2")
	I0914 18:31:26.381535   73571 client.go:168] LocalClient.Create starting
	I0914 18:31:26.381571   73571 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca.pem
	I0914 18:31:26.381606   73571 main.go:141] libmachine: Decoding PEM data...
	I0914 18:31:26.381623   73571 main.go:141] libmachine: Parsing certificate...
	I0914 18:31:26.381677   73571 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19643-8806/.minikube/certs/cert.pem
	I0914 18:31:26.381704   73571 main.go:141] libmachine: Decoding PEM data...
	I0914 18:31:26.381722   73571 main.go:141] libmachine: Parsing certificate...
	I0914 18:31:26.381746   73571 main.go:141] libmachine: Running pre-create checks...
	I0914 18:31:26.381760   73571 main.go:141] libmachine: (custom-flannel-691590) Calling .PreCreateCheck
	I0914 18:31:26.382183   73571 main.go:141] libmachine: (custom-flannel-691590) Calling .GetConfigRaw
	I0914 18:31:26.382632   73571 main.go:141] libmachine: Creating machine...
	I0914 18:31:26.382645   73571 main.go:141] libmachine: (custom-flannel-691590) Calling .Create
	I0914 18:31:26.382829   73571 main.go:141] libmachine: (custom-flannel-691590) Creating KVM machine...
	I0914 18:31:26.384477   73571 main.go:141] libmachine: (custom-flannel-691590) DBG | found existing default KVM network
	I0914 18:31:26.386176   73571 main.go:141] libmachine: (custom-flannel-691590) DBG | I0914 18:31:26.385972   73594 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0001297f0}
	I0914 18:31:26.386208   73571 main.go:141] libmachine: (custom-flannel-691590) DBG | created network xml: 
	I0914 18:31:26.386222   73571 main.go:141] libmachine: (custom-flannel-691590) DBG | <network>
	I0914 18:31:26.386232   73571 main.go:141] libmachine: (custom-flannel-691590) DBG |   <name>mk-custom-flannel-691590</name>
	I0914 18:31:26.386241   73571 main.go:141] libmachine: (custom-flannel-691590) DBG |   <dns enable='no'/>
	I0914 18:31:26.386249   73571 main.go:141] libmachine: (custom-flannel-691590) DBG |   
	I0914 18:31:26.386271   73571 main.go:141] libmachine: (custom-flannel-691590) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0914 18:31:26.386278   73571 main.go:141] libmachine: (custom-flannel-691590) DBG |     <dhcp>
	I0914 18:31:26.386287   73571 main.go:141] libmachine: (custom-flannel-691590) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0914 18:31:26.386294   73571 main.go:141] libmachine: (custom-flannel-691590) DBG |     </dhcp>
	I0914 18:31:26.386303   73571 main.go:141] libmachine: (custom-flannel-691590) DBG |   </ip>
	I0914 18:31:26.386309   73571 main.go:141] libmachine: (custom-flannel-691590) DBG |   
	I0914 18:31:26.386321   73571 main.go:141] libmachine: (custom-flannel-691590) DBG | </network>
	I0914 18:31:26.386341   73571 main.go:141] libmachine: (custom-flannel-691590) DBG | 
	I0914 18:31:26.391802   73571 main.go:141] libmachine: (custom-flannel-691590) DBG | trying to create private KVM network mk-custom-flannel-691590 192.168.39.0/24...
	I0914 18:31:26.481248   73571 main.go:141] libmachine: (custom-flannel-691590) Setting up store path in /home/jenkins/minikube-integration/19643-8806/.minikube/machines/custom-flannel-691590 ...
	I0914 18:31:26.481303   73571 main.go:141] libmachine: (custom-flannel-691590) DBG | private KVM network mk-custom-flannel-691590 192.168.39.0/24 created
	I0914 18:31:26.481316   73571 main.go:141] libmachine: (custom-flannel-691590) Building disk image from file:///home/jenkins/minikube-integration/19643-8806/.minikube/cache/iso/amd64/minikube-v1.34.0-1726281733-19643-amd64.iso
	I0914 18:31:26.481337   73571 main.go:141] libmachine: (custom-flannel-691590) Downloading /home/jenkins/minikube-integration/19643-8806/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19643-8806/.minikube/cache/iso/amd64/minikube-v1.34.0-1726281733-19643-amd64.iso...
	I0914 18:31:26.481364   73571 main.go:141] libmachine: (custom-flannel-691590) DBG | I0914 18:31:26.481170   73594 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19643-8806/.minikube
	I0914 18:31:26.745281   73571 main.go:141] libmachine: (custom-flannel-691590) DBG | I0914 18:31:26.745169   73594 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19643-8806/.minikube/machines/custom-flannel-691590/id_rsa...
	I0914 18:31:26.911416   73571 main.go:141] libmachine: (custom-flannel-691590) DBG | I0914 18:31:26.911292   73594 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19643-8806/.minikube/machines/custom-flannel-691590/custom-flannel-691590.rawdisk...
	I0914 18:31:26.911445   73571 main.go:141] libmachine: (custom-flannel-691590) DBG | Writing magic tar header
	I0914 18:31:26.911462   73571 main.go:141] libmachine: (custom-flannel-691590) DBG | Writing SSH key tar header
	I0914 18:31:26.911478   73571 main.go:141] libmachine: (custom-flannel-691590) DBG | I0914 18:31:26.911417   73594 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19643-8806/.minikube/machines/custom-flannel-691590 ...
	I0914 18:31:26.911537   73571 main.go:141] libmachine: (custom-flannel-691590) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19643-8806/.minikube/machines/custom-flannel-691590
	I0914 18:31:26.911575   73571 main.go:141] libmachine: (custom-flannel-691590) Setting executable bit set on /home/jenkins/minikube-integration/19643-8806/.minikube/machines/custom-flannel-691590 (perms=drwx------)
	I0914 18:31:26.911589   73571 main.go:141] libmachine: (custom-flannel-691590) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19643-8806/.minikube/machines
	I0914 18:31:26.911605   73571 main.go:141] libmachine: (custom-flannel-691590) Setting executable bit set on /home/jenkins/minikube-integration/19643-8806/.minikube/machines (perms=drwxr-xr-x)
	I0914 18:31:26.911623   73571 main.go:141] libmachine: (custom-flannel-691590) Setting executable bit set on /home/jenkins/minikube-integration/19643-8806/.minikube (perms=drwxr-xr-x)
	I0914 18:31:26.911631   73571 main.go:141] libmachine: (custom-flannel-691590) Setting executable bit set on /home/jenkins/minikube-integration/19643-8806 (perms=drwxrwxr-x)
	I0914 18:31:26.911663   73571 main.go:141] libmachine: (custom-flannel-691590) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0914 18:31:26.911684   73571 main.go:141] libmachine: (custom-flannel-691590) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19643-8806/.minikube
	I0914 18:31:26.911704   73571 main.go:141] libmachine: (custom-flannel-691590) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0914 18:31:26.911719   73571 main.go:141] libmachine: (custom-flannel-691590) Creating domain...
	I0914 18:31:26.911739   73571 main.go:141] libmachine: (custom-flannel-691590) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19643-8806
	I0914 18:31:26.911755   73571 main.go:141] libmachine: (custom-flannel-691590) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0914 18:31:26.911774   73571 main.go:141] libmachine: (custom-flannel-691590) DBG | Checking permissions on dir: /home/jenkins
	I0914 18:31:26.911793   73571 main.go:141] libmachine: (custom-flannel-691590) DBG | Checking permissions on dir: /home
	I0914 18:31:26.911805   73571 main.go:141] libmachine: (custom-flannel-691590) DBG | Skipping /home - not owner
	I0914 18:31:26.912987   73571 main.go:141] libmachine: (custom-flannel-691590) define libvirt domain using xml: 
	I0914 18:31:26.913037   73571 main.go:141] libmachine: (custom-flannel-691590) <domain type='kvm'>
	I0914 18:31:26.913051   73571 main.go:141] libmachine: (custom-flannel-691590)   <name>custom-flannel-691590</name>
	I0914 18:31:26.913072   73571 main.go:141] libmachine: (custom-flannel-691590)   <memory unit='MiB'>3072</memory>
	I0914 18:31:26.913086   73571 main.go:141] libmachine: (custom-flannel-691590)   <vcpu>2</vcpu>
	I0914 18:31:26.913097   73571 main.go:141] libmachine: (custom-flannel-691590)   <features>
	I0914 18:31:26.913109   73571 main.go:141] libmachine: (custom-flannel-691590)     <acpi/>
	I0914 18:31:26.913125   73571 main.go:141] libmachine: (custom-flannel-691590)     <apic/>
	I0914 18:31:26.913137   73571 main.go:141] libmachine: (custom-flannel-691590)     <pae/>
	I0914 18:31:26.913159   73571 main.go:141] libmachine: (custom-flannel-691590)     
	I0914 18:31:26.913173   73571 main.go:141] libmachine: (custom-flannel-691590)   </features>
	I0914 18:31:26.913185   73571 main.go:141] libmachine: (custom-flannel-691590)   <cpu mode='host-passthrough'>
	I0914 18:31:26.913209   73571 main.go:141] libmachine: (custom-flannel-691590)   
	I0914 18:31:26.913228   73571 main.go:141] libmachine: (custom-flannel-691590)   </cpu>
	I0914 18:31:26.913241   73571 main.go:141] libmachine: (custom-flannel-691590)   <os>
	I0914 18:31:26.913251   73571 main.go:141] libmachine: (custom-flannel-691590)     <type>hvm</type>
	I0914 18:31:26.913259   73571 main.go:141] libmachine: (custom-flannel-691590)     <boot dev='cdrom'/>
	I0914 18:31:26.913269   73571 main.go:141] libmachine: (custom-flannel-691590)     <boot dev='hd'/>
	I0914 18:31:26.913282   73571 main.go:141] libmachine: (custom-flannel-691590)     <bootmenu enable='no'/>
	I0914 18:31:26.913296   73571 main.go:141] libmachine: (custom-flannel-691590)   </os>
	I0914 18:31:26.913308   73571 main.go:141] libmachine: (custom-flannel-691590)   <devices>
	I0914 18:31:26.913319   73571 main.go:141] libmachine: (custom-flannel-691590)     <disk type='file' device='cdrom'>
	I0914 18:31:26.913330   73571 main.go:141] libmachine: (custom-flannel-691590)       <source file='/home/jenkins/minikube-integration/19643-8806/.minikube/machines/custom-flannel-691590/boot2docker.iso'/>
	I0914 18:31:26.913339   73571 main.go:141] libmachine: (custom-flannel-691590)       <target dev='hdc' bus='scsi'/>
	I0914 18:31:26.913359   73571 main.go:141] libmachine: (custom-flannel-691590)       <readonly/>
	I0914 18:31:26.913369   73571 main.go:141] libmachine: (custom-flannel-691590)     </disk>
	I0914 18:31:26.913379   73571 main.go:141] libmachine: (custom-flannel-691590)     <disk type='file' device='disk'>
	I0914 18:31:26.913390   73571 main.go:141] libmachine: (custom-flannel-691590)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0914 18:31:26.913433   73571 main.go:141] libmachine: (custom-flannel-691590)       <source file='/home/jenkins/minikube-integration/19643-8806/.minikube/machines/custom-flannel-691590/custom-flannel-691590.rawdisk'/>
	I0914 18:31:26.913459   73571 main.go:141] libmachine: (custom-flannel-691590)       <target dev='hda' bus='virtio'/>
	I0914 18:31:26.913469   73571 main.go:141] libmachine: (custom-flannel-691590)     </disk>
	I0914 18:31:26.913480   73571 main.go:141] libmachine: (custom-flannel-691590)     <interface type='network'>
	I0914 18:31:26.913494   73571 main.go:141] libmachine: (custom-flannel-691590)       <source network='mk-custom-flannel-691590'/>
	I0914 18:31:26.913504   73571 main.go:141] libmachine: (custom-flannel-691590)       <model type='virtio'/>
	I0914 18:31:26.913515   73571 main.go:141] libmachine: (custom-flannel-691590)     </interface>
	I0914 18:31:26.913526   73571 main.go:141] libmachine: (custom-flannel-691590)     <interface type='network'>
	I0914 18:31:26.913538   73571 main.go:141] libmachine: (custom-flannel-691590)       <source network='default'/>
	I0914 18:31:26.913548   73571 main.go:141] libmachine: (custom-flannel-691590)       <model type='virtio'/>
	I0914 18:31:26.913556   73571 main.go:141] libmachine: (custom-flannel-691590)     </interface>
	I0914 18:31:26.913566   73571 main.go:141] libmachine: (custom-flannel-691590)     <serial type='pty'>
	I0914 18:31:26.913579   73571 main.go:141] libmachine: (custom-flannel-691590)       <target port='0'/>
	I0914 18:31:26.913592   73571 main.go:141] libmachine: (custom-flannel-691590)     </serial>
	I0914 18:31:26.913601   73571 main.go:141] libmachine: (custom-flannel-691590)     <console type='pty'>
	I0914 18:31:26.913608   73571 main.go:141] libmachine: (custom-flannel-691590)       <target type='serial' port='0'/>
	I0914 18:31:26.913620   73571 main.go:141] libmachine: (custom-flannel-691590)     </console>
	I0914 18:31:26.913627   73571 main.go:141] libmachine: (custom-flannel-691590)     <rng model='virtio'>
	I0914 18:31:26.913639   73571 main.go:141] libmachine: (custom-flannel-691590)       <backend model='random'>/dev/random</backend>
	I0914 18:31:26.913648   73571 main.go:141] libmachine: (custom-flannel-691590)     </rng>
	I0914 18:31:26.913655   73571 main.go:141] libmachine: (custom-flannel-691590)     
	I0914 18:31:26.913664   73571 main.go:141] libmachine: (custom-flannel-691590)     
	I0914 18:31:26.913672   73571 main.go:141] libmachine: (custom-flannel-691590)   </devices>
	I0914 18:31:26.913678   73571 main.go:141] libmachine: (custom-flannel-691590) </domain>
	I0914 18:31:26.913688   73571 main.go:141] libmachine: (custom-flannel-691590) 
	I0914 18:31:26.917458   73571 main.go:141] libmachine: (custom-flannel-691590) DBG | domain custom-flannel-691590 has defined MAC address 52:54:00:96:86:44 in network default
	I0914 18:31:26.918001   73571 main.go:141] libmachine: (custom-flannel-691590) Ensuring networks are active...
	I0914 18:31:26.918031   73571 main.go:141] libmachine: (custom-flannel-691590) DBG | domain custom-flannel-691590 has defined MAC address 52:54:00:ee:ff:f6 in network mk-custom-flannel-691590
	I0914 18:31:26.918812   73571 main.go:141] libmachine: (custom-flannel-691590) Ensuring network default is active
	I0914 18:31:26.919104   73571 main.go:141] libmachine: (custom-flannel-691590) Ensuring network mk-custom-flannel-691590 is active
	I0914 18:31:26.919737   73571 main.go:141] libmachine: (custom-flannel-691590) Getting domain xml...
	I0914 18:31:26.920572   73571 main.go:141] libmachine: (custom-flannel-691590) Creating domain...
	I0914 18:31:28.329222   73571 main.go:141] libmachine: (custom-flannel-691590) Waiting to get IP...
	I0914 18:31:28.330251   73571 main.go:141] libmachine: (custom-flannel-691590) DBG | domain custom-flannel-691590 has defined MAC address 52:54:00:ee:ff:f6 in network mk-custom-flannel-691590
	I0914 18:31:28.330748   73571 main.go:141] libmachine: (custom-flannel-691590) DBG | unable to find current IP address of domain custom-flannel-691590 in network mk-custom-flannel-691590
	I0914 18:31:28.330772   73571 main.go:141] libmachine: (custom-flannel-691590) DBG | I0914 18:31:28.330720   73594 retry.go:31] will retry after 288.534501ms: waiting for machine to come up
	I0914 18:31:28.621418   73571 main.go:141] libmachine: (custom-flannel-691590) DBG | domain custom-flannel-691590 has defined MAC address 52:54:00:ee:ff:f6 in network mk-custom-flannel-691590
	I0914 18:31:28.621996   73571 main.go:141] libmachine: (custom-flannel-691590) DBG | unable to find current IP address of domain custom-flannel-691590 in network mk-custom-flannel-691590
	I0914 18:31:28.622027   73571 main.go:141] libmachine: (custom-flannel-691590) DBG | I0914 18:31:28.621943   73594 retry.go:31] will retry after 302.844052ms: waiting for machine to come up
	I0914 18:31:28.926560   73571 main.go:141] libmachine: (custom-flannel-691590) DBG | domain custom-flannel-691590 has defined MAC address 52:54:00:ee:ff:f6 in network mk-custom-flannel-691590
	I0914 18:31:28.927061   73571 main.go:141] libmachine: (custom-flannel-691590) DBG | unable to find current IP address of domain custom-flannel-691590 in network mk-custom-flannel-691590
	I0914 18:31:28.927089   73571 main.go:141] libmachine: (custom-flannel-691590) DBG | I0914 18:31:28.927022   73594 retry.go:31] will retry after 371.228296ms: waiting for machine to come up
	I0914 18:31:29.299365   73571 main.go:141] libmachine: (custom-flannel-691590) DBG | domain custom-flannel-691590 has defined MAC address 52:54:00:ee:ff:f6 in network mk-custom-flannel-691590
	I0914 18:31:29.299998   73571 main.go:141] libmachine: (custom-flannel-691590) DBG | unable to find current IP address of domain custom-flannel-691590 in network mk-custom-flannel-691590
	I0914 18:31:29.300022   73571 main.go:141] libmachine: (custom-flannel-691590) DBG | I0914 18:31:29.299959   73594 retry.go:31] will retry after 367.010419ms: waiting for machine to come up
	I0914 18:31:29.668370   73571 main.go:141] libmachine: (custom-flannel-691590) DBG | domain custom-flannel-691590 has defined MAC address 52:54:00:ee:ff:f6 in network mk-custom-flannel-691590
	I0914 18:31:29.668957   73571 main.go:141] libmachine: (custom-flannel-691590) DBG | unable to find current IP address of domain custom-flannel-691590 in network mk-custom-flannel-691590
	I0914 18:31:29.668977   73571 main.go:141] libmachine: (custom-flannel-691590) DBG | I0914 18:31:29.668906   73594 retry.go:31] will retry after 692.82773ms: waiting for machine to come up
	I0914 18:31:30.363993   73571 main.go:141] libmachine: (custom-flannel-691590) DBG | domain custom-flannel-691590 has defined MAC address 52:54:00:ee:ff:f6 in network mk-custom-flannel-691590
	I0914 18:31:30.364385   73571 main.go:141] libmachine: (custom-flannel-691590) DBG | unable to find current IP address of domain custom-flannel-691590 in network mk-custom-flannel-691590
	I0914 18:31:30.364412   73571 main.go:141] libmachine: (custom-flannel-691590) DBG | I0914 18:31:30.364339   73594 retry.go:31] will retry after 781.738491ms: waiting for machine to come up
	I0914 18:31:31.147983   73571 main.go:141] libmachine: (custom-flannel-691590) DBG | domain custom-flannel-691590 has defined MAC address 52:54:00:ee:ff:f6 in network mk-custom-flannel-691590
	I0914 18:31:31.148366   73571 main.go:141] libmachine: (custom-flannel-691590) DBG | unable to find current IP address of domain custom-flannel-691590 in network mk-custom-flannel-691590
	I0914 18:31:31.148415   73571 main.go:141] libmachine: (custom-flannel-691590) DBG | I0914 18:31:31.148319   73594 retry.go:31] will retry after 1.029181669s: waiting for machine to come up
	I0914 18:31:30.562865   71786 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.31.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.670025935s)
	I0914 18:31:30.562928   71786 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0914 18:31:30.563009   71786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 18:31:30.563051   71786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes calico-691590 minikube.k8s.io/updated_at=2024_09_14T18_31_30_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=fbeeb744274463b05401c917e5ab21bbaf5ef95a minikube.k8s.io/name=calico-691590 minikube.k8s.io/primary=true
	I0914 18:31:30.726700   71786 ops.go:34] apiserver oom_adj: -16
	I0914 18:31:30.726737   71786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 18:31:31.227361   71786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 18:31:31.727811   71786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 18:31:32.227398   71786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 18:31:32.727394   71786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 18:31:33.227331   71786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 18:31:33.344584   71786 kubeadm.go:1113] duration metric: took 2.781626576s to wait for elevateKubeSystemPrivileges
	I0914 18:31:33.344613   71786 kubeadm.go:394] duration metric: took 16.51599322s to StartCluster
	I0914 18:31:33.344630   71786 settings.go:142] acquiring lock: {Name:mkee31cd57c3a91eff581c48ba7961460fb3e0b1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 18:31:33.344694   71786 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19643-8806/kubeconfig
	I0914 18:31:33.345832   71786 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19643-8806/kubeconfig: {Name:mk850f3e2f3c2ef1e81c795ae72e22e459e27140 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 18:31:33.346104   71786 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0914 18:31:33.346110   71786 start.go:235] Will wait 15m0s for node &{Name: IP:192.168.72.228 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0914 18:31:33.346199   71786 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0914 18:31:33.346300   71786 addons.go:69] Setting storage-provisioner=true in profile "calico-691590"
	I0914 18:31:33.346320   71786 addons.go:234] Setting addon storage-provisioner=true in "calico-691590"
	I0914 18:31:33.346323   71786 config.go:182] Loaded profile config "calico-691590": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0914 18:31:33.346349   71786 host.go:66] Checking if "calico-691590" exists ...
	I0914 18:31:33.346386   71786 addons.go:69] Setting default-storageclass=true in profile "calico-691590"
	I0914 18:31:33.346404   71786 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "calico-691590"
	I0914 18:31:33.346876   71786 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 18:31:33.346912   71786 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 18:31:33.346923   71786 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 18:31:33.346964   71786 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 18:31:33.347898   71786 out.go:177] * Verifying Kubernetes components...
	I0914 18:31:33.349463   71786 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 18:31:33.363022   71786 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46735
	I0914 18:31:33.363053   71786 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43873
	I0914 18:31:33.363493   71786 main.go:141] libmachine: () Calling .GetVersion
	I0914 18:31:33.363505   71786 main.go:141] libmachine: () Calling .GetVersion
	I0914 18:31:33.364079   71786 main.go:141] libmachine: Using API Version  1
	I0914 18:31:33.364094   71786 main.go:141] libmachine: Using API Version  1
	I0914 18:31:33.364096   71786 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 18:31:33.364111   71786 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 18:31:33.364479   71786 main.go:141] libmachine: () Calling .GetMachineName
	I0914 18:31:33.364480   71786 main.go:141] libmachine: () Calling .GetMachineName
	I0914 18:31:33.364659   71786 main.go:141] libmachine: (calico-691590) Calling .GetState
	I0914 18:31:33.365056   71786 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 18:31:33.365101   71786 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 18:31:33.368337   71786 addons.go:234] Setting addon default-storageclass=true in "calico-691590"
	I0914 18:31:33.368383   71786 host.go:66] Checking if "calico-691590" exists ...
	I0914 18:31:33.368780   71786 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 18:31:33.368823   71786 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 18:31:33.385343   71786 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46759
	I0914 18:31:33.386466   71786 main.go:141] libmachine: () Calling .GetVersion
	I0914 18:31:33.387121   71786 main.go:141] libmachine: Using API Version  1
	I0914 18:31:33.387145   71786 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 18:31:33.387224   71786 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45533
	I0914 18:31:33.387542   71786 main.go:141] libmachine: () Calling .GetMachineName
	I0914 18:31:33.388027   71786 main.go:141] libmachine: () Calling .GetVersion
	I0914 18:31:33.388174   71786 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 18:31:33.388216   71786 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 18:31:33.388715   71786 main.go:141] libmachine: Using API Version  1
	I0914 18:31:33.388732   71786 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 18:31:33.389032   71786 main.go:141] libmachine: () Calling .GetMachineName
	I0914 18:31:33.389265   71786 main.go:141] libmachine: (calico-691590) Calling .GetState
	I0914 18:31:33.390996   71786 main.go:141] libmachine: (calico-691590) Calling .DriverName
	I0914 18:31:33.392989   71786 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 18:31:33.394411   71786 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0914 18:31:33.394429   71786 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0914 18:31:33.394445   71786 main.go:141] libmachine: (calico-691590) Calling .GetSSHHostname
	I0914 18:31:33.397657   71786 main.go:141] libmachine: (calico-691590) DBG | domain calico-691590 has defined MAC address 52:54:00:5a:03:e9 in network mk-calico-691590
	I0914 18:31:33.398078   71786 main.go:141] libmachine: (calico-691590) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:03:e9", ip: ""} in network mk-calico-691590: {Iface:virbr1 ExpiryTime:2024-09-14 19:31:00 +0000 UTC Type:0 Mac:52:54:00:5a:03:e9 Iaid: IPaddr:192.168.72.228 Prefix:24 Hostname:calico-691590 Clientid:01:52:54:00:5a:03:e9}
	I0914 18:31:33.398110   71786 main.go:141] libmachine: (calico-691590) DBG | domain calico-691590 has defined IP address 192.168.72.228 and MAC address 52:54:00:5a:03:e9 in network mk-calico-691590
	I0914 18:31:33.398433   71786 main.go:141] libmachine: (calico-691590) Calling .GetSSHPort
	I0914 18:31:33.398617   71786 main.go:141] libmachine: (calico-691590) Calling .GetSSHKeyPath
	I0914 18:31:33.398811   71786 main.go:141] libmachine: (calico-691590) Calling .GetSSHUsername
	I0914 18:31:33.398921   71786 sshutil.go:53] new ssh client: &{IP:192.168.72.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/calico-691590/id_rsa Username:docker}
	I0914 18:31:33.407116   71786 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34977
	I0914 18:31:33.407460   71786 main.go:141] libmachine: () Calling .GetVersion
	I0914 18:31:33.408040   71786 main.go:141] libmachine: Using API Version  1
	I0914 18:31:33.408057   71786 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 18:31:33.408415   71786 main.go:141] libmachine: () Calling .GetMachineName
	I0914 18:31:33.408584   71786 main.go:141] libmachine: (calico-691590) Calling .GetState
	I0914 18:31:33.410004   71786 main.go:141] libmachine: (calico-691590) Calling .DriverName
	I0914 18:31:33.410239   71786 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0914 18:31:33.410255   71786 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0914 18:31:33.410268   71786 main.go:141] libmachine: (calico-691590) Calling .GetSSHHostname
	I0914 18:31:33.412677   71786 main.go:141] libmachine: (calico-691590) DBG | domain calico-691590 has defined MAC address 52:54:00:5a:03:e9 in network mk-calico-691590
	I0914 18:31:33.413049   71786 main.go:141] libmachine: (calico-691590) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:03:e9", ip: ""} in network mk-calico-691590: {Iface:virbr1 ExpiryTime:2024-09-14 19:31:00 +0000 UTC Type:0 Mac:52:54:00:5a:03:e9 Iaid: IPaddr:192.168.72.228 Prefix:24 Hostname:calico-691590 Clientid:01:52:54:00:5a:03:e9}
	I0914 18:31:33.413071   71786 main.go:141] libmachine: (calico-691590) DBG | domain calico-691590 has defined IP address 192.168.72.228 and MAC address 52:54:00:5a:03:e9 in network mk-calico-691590
	I0914 18:31:33.413310   71786 main.go:141] libmachine: (calico-691590) Calling .GetSSHPort
	I0914 18:31:33.413475   71786 main.go:141] libmachine: (calico-691590) Calling .GetSSHKeyPath
	I0914 18:31:33.413601   71786 main.go:141] libmachine: (calico-691590) Calling .GetSSHUsername
	I0914 18:31:33.413710   71786 sshutil.go:53] new ssh client: &{IP:192.168.72.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/calico-691590/id_rsa Username:docker}
	I0914 18:31:33.560923   71786 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0914 18:31:33.562565   71786 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0914 18:31:33.709670   71786 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0914 18:31:33.746290   71786 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0914 18:31:33.863336   71786 start.go:971] {"host.minikube.internal": 192.168.72.1} host record injected into CoreDNS's ConfigMap
	I0914 18:31:33.864845   71786 node_ready.go:35] waiting up to 15m0s for node "calico-691590" to be "Ready" ...
	I0914 18:31:34.099839   71786 main.go:141] libmachine: Making call to close driver server
	I0914 18:31:34.099876   71786 main.go:141] libmachine: (calico-691590) Calling .Close
	I0914 18:31:34.100306   71786 main.go:141] libmachine: Successfully made call to close driver server
	I0914 18:31:34.100314   71786 main.go:141] libmachine: (calico-691590) DBG | Closing plugin on server side
	I0914 18:31:34.100327   71786 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 18:31:34.100358   71786 main.go:141] libmachine: Making call to close driver server
	I0914 18:31:34.100377   71786 main.go:141] libmachine: (calico-691590) Calling .Close
	I0914 18:31:34.100632   71786 main.go:141] libmachine: Successfully made call to close driver server
	I0914 18:31:34.100648   71786 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 18:31:34.117716   71786 main.go:141] libmachine: Making call to close driver server
	I0914 18:31:34.117748   71786 main.go:141] libmachine: (calico-691590) Calling .Close
	I0914 18:31:34.119823   71786 main.go:141] libmachine: (calico-691590) DBG | Closing plugin on server side
	I0914 18:31:34.119830   71786 main.go:141] libmachine: Successfully made call to close driver server
	I0914 18:31:34.119859   71786 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 18:31:34.372902   71786 kapi.go:214] "coredns" deployment in "kube-system" namespace and "calico-691590" context rescaled to 1 replicas
	I0914 18:31:34.388561   71786 main.go:141] libmachine: Making call to close driver server
	I0914 18:31:34.388595   71786 main.go:141] libmachine: (calico-691590) Calling .Close
	I0914 18:31:34.388920   71786 main.go:141] libmachine: (calico-691590) DBG | Closing plugin on server side
	I0914 18:31:34.388961   71786 main.go:141] libmachine: Successfully made call to close driver server
	I0914 18:31:34.388982   71786 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 18:31:34.388992   71786 main.go:141] libmachine: Making call to close driver server
	I0914 18:31:34.389001   71786 main.go:141] libmachine: (calico-691590) Calling .Close
	I0914 18:31:34.390918   71786 main.go:141] libmachine: Successfully made call to close driver server
	I0914 18:31:34.390941   71786 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 18:31:34.390921   71786 main.go:141] libmachine: (calico-691590) DBG | Closing plugin on server side
	I0914 18:31:34.392841   71786 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0914 18:31:32.179162   73571 main.go:141] libmachine: (custom-flannel-691590) DBG | domain custom-flannel-691590 has defined MAC address 52:54:00:ee:ff:f6 in network mk-custom-flannel-691590
	I0914 18:31:32.179620   73571 main.go:141] libmachine: (custom-flannel-691590) DBG | unable to find current IP address of domain custom-flannel-691590 in network mk-custom-flannel-691590
	I0914 18:31:32.179652   73571 main.go:141] libmachine: (custom-flannel-691590) DBG | I0914 18:31:32.179558   73594 retry.go:31] will retry after 917.362844ms: waiting for machine to come up
	I0914 18:31:33.098040   73571 main.go:141] libmachine: (custom-flannel-691590) DBG | domain custom-flannel-691590 has defined MAC address 52:54:00:ee:ff:f6 in network mk-custom-flannel-691590
	I0914 18:31:33.098482   73571 main.go:141] libmachine: (custom-flannel-691590) DBG | unable to find current IP address of domain custom-flannel-691590 in network mk-custom-flannel-691590
	I0914 18:31:33.098509   73571 main.go:141] libmachine: (custom-flannel-691590) DBG | I0914 18:31:33.098448   73594 retry.go:31] will retry after 1.606129807s: waiting for machine to come up
	I0914 18:31:34.706066   73571 main.go:141] libmachine: (custom-flannel-691590) DBG | domain custom-flannel-691590 has defined MAC address 52:54:00:ee:ff:f6 in network mk-custom-flannel-691590
	I0914 18:31:34.706605   73571 main.go:141] libmachine: (custom-flannel-691590) DBG | unable to find current IP address of domain custom-flannel-691590 in network mk-custom-flannel-691590
	I0914 18:31:34.706633   73571 main.go:141] libmachine: (custom-flannel-691590) DBG | I0914 18:31:34.706575   73594 retry.go:31] will retry after 1.769161231s: waiting for machine to come up
	I0914 18:31:34.394278   71786 addons.go:510] duration metric: took 1.048085067s for enable addons: enabled=[default-storageclass storage-provisioner]
	I0914 18:31:35.869911   71786 node_ready.go:53] node "calico-691590" has status "Ready":"False"
	I0914 18:31:38.368678   71786 node_ready.go:53] node "calico-691590" has status "Ready":"False"
	I0914 18:31:36.477451   73571 main.go:141] libmachine: (custom-flannel-691590) DBG | domain custom-flannel-691590 has defined MAC address 52:54:00:ee:ff:f6 in network mk-custom-flannel-691590
	I0914 18:31:36.477984   73571 main.go:141] libmachine: (custom-flannel-691590) DBG | unable to find current IP address of domain custom-flannel-691590 in network mk-custom-flannel-691590
	I0914 18:31:36.478005   73571 main.go:141] libmachine: (custom-flannel-691590) DBG | I0914 18:31:36.477952   73594 retry.go:31] will retry after 2.397001352s: waiting for machine to come up
	I0914 18:31:38.876354   73571 main.go:141] libmachine: (custom-flannel-691590) DBG | domain custom-flannel-691590 has defined MAC address 52:54:00:ee:ff:f6 in network mk-custom-flannel-691590
	I0914 18:31:38.876864   73571 main.go:141] libmachine: (custom-flannel-691590) DBG | unable to find current IP address of domain custom-flannel-691590 in network mk-custom-flannel-691590
	I0914 18:31:38.876892   73571 main.go:141] libmachine: (custom-flannel-691590) DBG | I0914 18:31:38.876820   73594 retry.go:31] will retry after 2.221940246s: waiting for machine to come up
	I0914 18:31:41.100342   73571 main.go:141] libmachine: (custom-flannel-691590) DBG | domain custom-flannel-691590 has defined MAC address 52:54:00:ee:ff:f6 in network mk-custom-flannel-691590
	I0914 18:31:41.100829   73571 main.go:141] libmachine: (custom-flannel-691590) DBG | unable to find current IP address of domain custom-flannel-691590 in network mk-custom-flannel-691590
	I0914 18:31:41.100847   73571 main.go:141] libmachine: (custom-flannel-691590) DBG | I0914 18:31:41.100797   73594 retry.go:31] will retry after 4.170187497s: waiting for machine to come up
	I0914 18:31:40.868850   71786 node_ready.go:53] node "calico-691590" has status "Ready":"False"
	I0914 18:31:42.870484   71786 node_ready.go:49] node "calico-691590" has status "Ready":"True"
	I0914 18:31:42.870515   71786 node_ready.go:38] duration metric: took 9.005641202s for node "calico-691590" to be "Ready" ...
	I0914 18:31:42.870528   71786 pod_ready.go:36] extra waiting up to 15m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 18:31:42.878535   71786 pod_ready.go:79] waiting up to 15m0s for pod "calico-kube-controllers-7fbd86d5c5-dltnl" in "kube-system" namespace to be "Ready" ...
	I0914 18:31:45.275293   73571 main.go:141] libmachine: (custom-flannel-691590) DBG | domain custom-flannel-691590 has defined MAC address 52:54:00:ee:ff:f6 in network mk-custom-flannel-691590
	I0914 18:31:45.275800   73571 main.go:141] libmachine: (custom-flannel-691590) DBG | unable to find current IP address of domain custom-flannel-691590 in network mk-custom-flannel-691590
	I0914 18:31:45.275850   73571 main.go:141] libmachine: (custom-flannel-691590) DBG | I0914 18:31:45.275773   73594 retry.go:31] will retry after 3.558624643s: waiting for machine to come up
	
	
	==> CRI-O <==
	Sep 14 18:31:49 default-k8s-diff-port-243449 crio[695]: time="2024-09-14 18:31:49.526476529Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726338709526445282,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=23e355a3-c83d-47c1-a877-65082f6e3d19 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 14 18:31:49 default-k8s-diff-port-243449 crio[695]: time="2024-09-14 18:31:49.527172888Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e1038a55-e449-4dcb-a50b-527eb864d00c name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 18:31:49 default-k8s-diff-port-243449 crio[695]: time="2024-09-14 18:31:49.527255165Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e1038a55-e449-4dcb-a50b-527eb864d00c name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 18:31:49 default-k8s-diff-port-243449 crio[695]: time="2024-09-14 18:31:49.527624961Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:be0aa9c1761410495159b478552996da9670b728a9efd2985ec5cad6759e8277,PodSandboxId:e97ff06204d25546f83e01f0e32e0515cd39aacd8dfb182c7333fcb548a8dc63,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726337388273731160,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2e814601-a19a-4848-bed5-d9a29ffb3b5d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d38bdc2036d51c8e1266f8ec9d67b896ac39334fc9230073cc9692b6d4cc4ba8,PodSandboxId:5f11f2a59686989219bea6d342aaa6b2066beaaa8b9fb1012ef3accf0321c763,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1726337368646149694,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: fd23d806-4c0f-41fe-b5d0-4f5ee2f395f2,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02a31bf75666cfcbe85dcb42259279071420c3b26de20d6d667bcd8ffef77d86,PodSandboxId:65ce16275efd0d0f66d68f53237ee609f6658ca06ec7819baf35dd81d6aa6f8b,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726337365153990856,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-8v8s7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 896b4fde-d17e-43a3-b7c8-b710e2e70e2c,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\
"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b33f92ef722c8a6bde80bdbd3ff62a9b4b31f0bf548eec9aaadd4593a101017e,PodSandboxId:e97ff06204d25546f83e01f0e32e0515cd39aacd8dfb182c7333fcb548a8dc63,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726337357428003930,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 2e814601-a19a-4848-bed5-d9a29ffb3b5d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a5c3b65e96ba854d6ceb4a824ba5076d43cff0b7a1978617b69271d5b88cca4d,PodSandboxId:eafcf1e3a206737a4857e2484820954af00cac8e773a41f582b3a0947901d38d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726337357456834716,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gbkqm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4308aacf-ea0a-4bba-8598
-85ffaf959b7e,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7fb6567a7b9f36f21430774a159db944e9060eac3c8fdf9abc50b9c56a2b0377,PodSandboxId:24d93e1abe22063f7589090dc366060f160ca6781207e1f464897e6cc966085d,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726337353697170593,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-243449,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4de6eba14fda99aaa4a144ae5e6d52ec,},Annotations:map[
string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c532e45713d01c37f899bd4c9308e6c7fceb95accae13da1549ffb195e44bc4,PodSandboxId:662103c157493ef87ee240553f659322bef8401e12abbe9b1c5dc044a5a79696,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726337353689069860,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-243449,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c181fee58e194ba1e69efe4c4fb4841,},Annotations:map[st
ring]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:09627c963da76d1a34f891fa3edf6c8c82f652f7308541d6f9083f5888f4bf94,PodSandboxId:2d16ffab3061ac3b2945eb2607e6a8cec9877fab622b4a7a2da444779c004106,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726337353703011191,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-243449,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e467e9fb657a0ca4b355d6e3b1e3
2a7d,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a390e6c0153550b206976ab4e172cf3236aac7e9e5671c332d8c0f8c4e567e0b,PodSandboxId:f49c613c905737464d0e7690cb4171b24b253b5b11431a7908323c5b0e0f3a9b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726337353661910592,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-243449,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c5688fa5732dad3a9738f9b149e2c0
5f,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e1038a55-e449-4dcb-a50b-527eb864d00c name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 18:31:49 default-k8s-diff-port-243449 crio[695]: time="2024-09-14 18:31:49.578459470Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=02c0b7b6-bb49-4542-b7fb-679e42f2f0fb name=/runtime.v1.RuntimeService/Version
	Sep 14 18:31:49 default-k8s-diff-port-243449 crio[695]: time="2024-09-14 18:31:49.578553338Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=02c0b7b6-bb49-4542-b7fb-679e42f2f0fb name=/runtime.v1.RuntimeService/Version
	Sep 14 18:31:49 default-k8s-diff-port-243449 crio[695]: time="2024-09-14 18:31:49.581122116Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=49b7dab5-26d9-4d82-8bb3-ca74977c9262 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 14 18:31:49 default-k8s-diff-port-243449 crio[695]: time="2024-09-14 18:31:49.581631644Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726338709581602298,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=49b7dab5-26d9-4d82-8bb3-ca74977c9262 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 14 18:31:49 default-k8s-diff-port-243449 crio[695]: time="2024-09-14 18:31:49.582280601Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=af5ba83b-ab13-45ce-9548-4658253a958c name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 18:31:49 default-k8s-diff-port-243449 crio[695]: time="2024-09-14 18:31:49.582442918Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=af5ba83b-ab13-45ce-9548-4658253a958c name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 18:31:49 default-k8s-diff-port-243449 crio[695]: time="2024-09-14 18:31:49.582729117Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:be0aa9c1761410495159b478552996da9670b728a9efd2985ec5cad6759e8277,PodSandboxId:e97ff06204d25546f83e01f0e32e0515cd39aacd8dfb182c7333fcb548a8dc63,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726337388273731160,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2e814601-a19a-4848-bed5-d9a29ffb3b5d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d38bdc2036d51c8e1266f8ec9d67b896ac39334fc9230073cc9692b6d4cc4ba8,PodSandboxId:5f11f2a59686989219bea6d342aaa6b2066beaaa8b9fb1012ef3accf0321c763,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1726337368646149694,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: fd23d806-4c0f-41fe-b5d0-4f5ee2f395f2,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02a31bf75666cfcbe85dcb42259279071420c3b26de20d6d667bcd8ffef77d86,PodSandboxId:65ce16275efd0d0f66d68f53237ee609f6658ca06ec7819baf35dd81d6aa6f8b,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726337365153990856,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-8v8s7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 896b4fde-d17e-43a3-b7c8-b710e2e70e2c,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\
"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b33f92ef722c8a6bde80bdbd3ff62a9b4b31f0bf548eec9aaadd4593a101017e,PodSandboxId:e97ff06204d25546f83e01f0e32e0515cd39aacd8dfb182c7333fcb548a8dc63,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726337357428003930,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 2e814601-a19a-4848-bed5-d9a29ffb3b5d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a5c3b65e96ba854d6ceb4a824ba5076d43cff0b7a1978617b69271d5b88cca4d,PodSandboxId:eafcf1e3a206737a4857e2484820954af00cac8e773a41f582b3a0947901d38d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726337357456834716,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gbkqm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4308aacf-ea0a-4bba-8598
-85ffaf959b7e,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7fb6567a7b9f36f21430774a159db944e9060eac3c8fdf9abc50b9c56a2b0377,PodSandboxId:24d93e1abe22063f7589090dc366060f160ca6781207e1f464897e6cc966085d,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726337353697170593,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-243449,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4de6eba14fda99aaa4a144ae5e6d52ec,},Annotations:map[
string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c532e45713d01c37f899bd4c9308e6c7fceb95accae13da1549ffb195e44bc4,PodSandboxId:662103c157493ef87ee240553f659322bef8401e12abbe9b1c5dc044a5a79696,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726337353689069860,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-243449,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c181fee58e194ba1e69efe4c4fb4841,},Annotations:map[st
ring]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:09627c963da76d1a34f891fa3edf6c8c82f652f7308541d6f9083f5888f4bf94,PodSandboxId:2d16ffab3061ac3b2945eb2607e6a8cec9877fab622b4a7a2da444779c004106,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726337353703011191,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-243449,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e467e9fb657a0ca4b355d6e3b1e3
2a7d,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a390e6c0153550b206976ab4e172cf3236aac7e9e5671c332d8c0f8c4e567e0b,PodSandboxId:f49c613c905737464d0e7690cb4171b24b253b5b11431a7908323c5b0e0f3a9b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726337353661910592,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-243449,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c5688fa5732dad3a9738f9b149e2c0
5f,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=af5ba83b-ab13-45ce-9548-4658253a958c name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 18:31:49 default-k8s-diff-port-243449 crio[695]: time="2024-09-14 18:31:49.636242923Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=9a2a3cf0-2481-4f0b-879a-ec40c14779ba name=/runtime.v1.RuntimeService/Version
	Sep 14 18:31:49 default-k8s-diff-port-243449 crio[695]: time="2024-09-14 18:31:49.636407131Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=9a2a3cf0-2481-4f0b-879a-ec40c14779ba name=/runtime.v1.RuntimeService/Version
	Sep 14 18:31:49 default-k8s-diff-port-243449 crio[695]: time="2024-09-14 18:31:49.638528893Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=dc79ce46-f44e-4bde-918a-86e8e6422a1d name=/runtime.v1.ImageService/ImageFsInfo
	Sep 14 18:31:49 default-k8s-diff-port-243449 crio[695]: time="2024-09-14 18:31:49.639668640Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726338709639629472,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=dc79ce46-f44e-4bde-918a-86e8e6422a1d name=/runtime.v1.ImageService/ImageFsInfo
	Sep 14 18:31:49 default-k8s-diff-port-243449 crio[695]: time="2024-09-14 18:31:49.643001580Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d628b8de-a8b4-4dae-9a3d-950c436b53d5 name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 18:31:49 default-k8s-diff-port-243449 crio[695]: time="2024-09-14 18:31:49.643097944Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d628b8de-a8b4-4dae-9a3d-950c436b53d5 name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 18:31:49 default-k8s-diff-port-243449 crio[695]: time="2024-09-14 18:31:49.643398310Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:be0aa9c1761410495159b478552996da9670b728a9efd2985ec5cad6759e8277,PodSandboxId:e97ff06204d25546f83e01f0e32e0515cd39aacd8dfb182c7333fcb548a8dc63,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726337388273731160,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2e814601-a19a-4848-bed5-d9a29ffb3b5d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d38bdc2036d51c8e1266f8ec9d67b896ac39334fc9230073cc9692b6d4cc4ba8,PodSandboxId:5f11f2a59686989219bea6d342aaa6b2066beaaa8b9fb1012ef3accf0321c763,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1726337368646149694,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: fd23d806-4c0f-41fe-b5d0-4f5ee2f395f2,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02a31bf75666cfcbe85dcb42259279071420c3b26de20d6d667bcd8ffef77d86,PodSandboxId:65ce16275efd0d0f66d68f53237ee609f6658ca06ec7819baf35dd81d6aa6f8b,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726337365153990856,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-8v8s7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 896b4fde-d17e-43a3-b7c8-b710e2e70e2c,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\
"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b33f92ef722c8a6bde80bdbd3ff62a9b4b31f0bf548eec9aaadd4593a101017e,PodSandboxId:e97ff06204d25546f83e01f0e32e0515cd39aacd8dfb182c7333fcb548a8dc63,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726337357428003930,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 2e814601-a19a-4848-bed5-d9a29ffb3b5d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a5c3b65e96ba854d6ceb4a824ba5076d43cff0b7a1978617b69271d5b88cca4d,PodSandboxId:eafcf1e3a206737a4857e2484820954af00cac8e773a41f582b3a0947901d38d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726337357456834716,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gbkqm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4308aacf-ea0a-4bba-8598
-85ffaf959b7e,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7fb6567a7b9f36f21430774a159db944e9060eac3c8fdf9abc50b9c56a2b0377,PodSandboxId:24d93e1abe22063f7589090dc366060f160ca6781207e1f464897e6cc966085d,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726337353697170593,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-243449,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4de6eba14fda99aaa4a144ae5e6d52ec,},Annotations:map[
string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c532e45713d01c37f899bd4c9308e6c7fceb95accae13da1549ffb195e44bc4,PodSandboxId:662103c157493ef87ee240553f659322bef8401e12abbe9b1c5dc044a5a79696,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726337353689069860,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-243449,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c181fee58e194ba1e69efe4c4fb4841,},Annotations:map[st
ring]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:09627c963da76d1a34f891fa3edf6c8c82f652f7308541d6f9083f5888f4bf94,PodSandboxId:2d16ffab3061ac3b2945eb2607e6a8cec9877fab622b4a7a2da444779c004106,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726337353703011191,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-243449,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e467e9fb657a0ca4b355d6e3b1e3
2a7d,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a390e6c0153550b206976ab4e172cf3236aac7e9e5671c332d8c0f8c4e567e0b,PodSandboxId:f49c613c905737464d0e7690cb4171b24b253b5b11431a7908323c5b0e0f3a9b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726337353661910592,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-243449,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c5688fa5732dad3a9738f9b149e2c0
5f,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d628b8de-a8b4-4dae-9a3d-950c436b53d5 name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 18:31:49 default-k8s-diff-port-243449 crio[695]: time="2024-09-14 18:31:49.680660585Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=6fe72917-748e-42d5-b122-78a792e4b493 name=/runtime.v1.RuntimeService/Version
	Sep 14 18:31:49 default-k8s-diff-port-243449 crio[695]: time="2024-09-14 18:31:49.680740885Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=6fe72917-748e-42d5-b122-78a792e4b493 name=/runtime.v1.RuntimeService/Version
	Sep 14 18:31:49 default-k8s-diff-port-243449 crio[695]: time="2024-09-14 18:31:49.682108924Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b9809ec5-0cc5-4847-a39e-1e4c19ea14b0 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 14 18:31:49 default-k8s-diff-port-243449 crio[695]: time="2024-09-14 18:31:49.682868035Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726338709682841002,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b9809ec5-0cc5-4847-a39e-1e4c19ea14b0 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 14 18:31:49 default-k8s-diff-port-243449 crio[695]: time="2024-09-14 18:31:49.683508015Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8213370b-7f3b-46cc-8dc6-23bfc91a514e name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 18:31:49 default-k8s-diff-port-243449 crio[695]: time="2024-09-14 18:31:49.683582444Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8213370b-7f3b-46cc-8dc6-23bfc91a514e name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 18:31:49 default-k8s-diff-port-243449 crio[695]: time="2024-09-14 18:31:49.683761700Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:be0aa9c1761410495159b478552996da9670b728a9efd2985ec5cad6759e8277,PodSandboxId:e97ff06204d25546f83e01f0e32e0515cd39aacd8dfb182c7333fcb548a8dc63,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726337388273731160,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2e814601-a19a-4848-bed5-d9a29ffb3b5d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d38bdc2036d51c8e1266f8ec9d67b896ac39334fc9230073cc9692b6d4cc4ba8,PodSandboxId:5f11f2a59686989219bea6d342aaa6b2066beaaa8b9fb1012ef3accf0321c763,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1726337368646149694,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: fd23d806-4c0f-41fe-b5d0-4f5ee2f395f2,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02a31bf75666cfcbe85dcb42259279071420c3b26de20d6d667bcd8ffef77d86,PodSandboxId:65ce16275efd0d0f66d68f53237ee609f6658ca06ec7819baf35dd81d6aa6f8b,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726337365153990856,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-8v8s7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 896b4fde-d17e-43a3-b7c8-b710e2e70e2c,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\
"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b33f92ef722c8a6bde80bdbd3ff62a9b4b31f0bf548eec9aaadd4593a101017e,PodSandboxId:e97ff06204d25546f83e01f0e32e0515cd39aacd8dfb182c7333fcb548a8dc63,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726337357428003930,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 2e814601-a19a-4848-bed5-d9a29ffb3b5d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a5c3b65e96ba854d6ceb4a824ba5076d43cff0b7a1978617b69271d5b88cca4d,PodSandboxId:eafcf1e3a206737a4857e2484820954af00cac8e773a41f582b3a0947901d38d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726337357456834716,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gbkqm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4308aacf-ea0a-4bba-8598
-85ffaf959b7e,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7fb6567a7b9f36f21430774a159db944e9060eac3c8fdf9abc50b9c56a2b0377,PodSandboxId:24d93e1abe22063f7589090dc366060f160ca6781207e1f464897e6cc966085d,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726337353697170593,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-243449,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4de6eba14fda99aaa4a144ae5e6d52ec,},Annotations:map[
string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c532e45713d01c37f899bd4c9308e6c7fceb95accae13da1549ffb195e44bc4,PodSandboxId:662103c157493ef87ee240553f659322bef8401e12abbe9b1c5dc044a5a79696,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726337353689069860,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-243449,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c181fee58e194ba1e69efe4c4fb4841,},Annotations:map[st
ring]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:09627c963da76d1a34f891fa3edf6c8c82f652f7308541d6f9083f5888f4bf94,PodSandboxId:2d16ffab3061ac3b2945eb2607e6a8cec9877fab622b4a7a2da444779c004106,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726337353703011191,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-243449,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e467e9fb657a0ca4b355d6e3b1e3
2a7d,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a390e6c0153550b206976ab4e172cf3236aac7e9e5671c332d8c0f8c4e567e0b,PodSandboxId:f49c613c905737464d0e7690cb4171b24b253b5b11431a7908323c5b0e0f3a9b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726337353661910592,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-243449,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c5688fa5732dad3a9738f9b149e2c0
5f,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8213370b-7f3b-46cc-8dc6-23bfc91a514e name=/runtime.v1.RuntimeService/ListContainers
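The near-identical ListContainers and ImageFsInfo exchanges above are CRI-O's debug-level interceptor logging of the kubelet's periodic CRI polling; the container list itself does not change between requests. Assuming CRI-O runs as the standard systemd unit inside the minikube VM (an assumption, not shown in this capture), the same stream can presumably be tailed live with:

    # hypothetical follow-up, not part of the recorded run; the -p profile name is taken from this report
    minikube -p default-k8s-diff-port-243449 ssh -- sudo journalctl -u crio -f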
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	be0aa9c176141       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      22 minutes ago      Running             storage-provisioner       2                   e97ff06204d25       storage-provisioner
	d38bdc2036d51       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   22 minutes ago      Running             busybox                   1                   5f11f2a596869       busybox
	02a31bf75666c       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      22 minutes ago      Running             coredns                   1                   65ce16275efd0       coredns-7c65d6cfc9-8v8s7
	a5c3b65e96ba8       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      22 minutes ago      Running             kube-proxy                1                   eafcf1e3a2067       kube-proxy-gbkqm
	b33f92ef722c8       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      22 minutes ago      Exited              storage-provisioner       1                   e97ff06204d25       storage-provisioner
	09627c963da76       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      22 minutes ago      Running             kube-controller-manager   1                   2d16ffab3061a       kube-controller-manager-default-k8s-diff-port-243449
	7fb6567a7b9f3       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      22 minutes ago      Running             etcd                      1                   24d93e1abe220       etcd-default-k8s-diff-port-243449
	6c532e45713d0       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      22 minutes ago      Running             kube-apiserver            1                   662103c157493       kube-apiserver-default-k8s-diff-port-243449
	a390e6c015355       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      22 minutes ago      Running             kube-scheduler            1                   f49c613c90573       kube-scheduler-default-k8s-diff-port-243449
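The table above is the CRI's view of the node's containers. Assuming the usual minikube layout (a guess, not captured in this run), the same listing can typically be reproduced by shelling into the profile's VM and asking CRI-O directly:

    # hypothetical reproduction of the table above; the -p profile name is taken from this report
    minikube -p default-k8s-diff-port-243449 ssh -- sudo crictl ps -a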
	
	
	==> coredns [02a31bf75666cfcbe85dcb42259279071420c3b26de20d6d667bcd8ffef77d86] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 7996ca7cabdb2fd3e37b0463c78d5a492f8d30690ee66a90ae7ff24c50d9d936a24d239b3a5946771521ff70c09a796ffaf6ef8abe5753fd1ad5af38b6cdbb7f
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:49398 - 34282 "HINFO IN 2491328004879093116.776769339588687849. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.017624764s
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-243449
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-243449
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fbeeb744274463b05401c917e5ab21bbaf5ef95a
	                    minikube.k8s.io/name=default-k8s-diff-port-243449
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_14T18_03_45_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 14 Sep 2024 18:03:42 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-243449
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 14 Sep 2024 18:31:41 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 14 Sep 2024 18:30:09 +0000   Sat, 14 Sep 2024 18:03:40 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 14 Sep 2024 18:30:09 +0000   Sat, 14 Sep 2024 18:03:40 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 14 Sep 2024 18:30:09 +0000   Sat, 14 Sep 2024 18:03:40 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 14 Sep 2024 18:30:09 +0000   Sat, 14 Sep 2024 18:09:26 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.38
	  Hostname:    default-k8s-diff-port-243449
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 fd101a054f1f4ca78ef4db25ca66f4da
	  System UUID:                fd101a05-4f1f-4ca7-8ef4-db25ca66f4da
	  Boot ID:                    12942388-bced-4bfe-8a04-b38a566e7b58
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 coredns-7c65d6cfc9-8v8s7                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     28m
	  kube-system                 etcd-default-k8s-diff-port-243449                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         28m
	  kube-system                 kube-apiserver-default-k8s-diff-port-243449             250m (12%)    0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 kube-controller-manager-default-k8s-diff-port-243449    200m (10%)    0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 kube-proxy-gbkqm                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 kube-scheduler-default-k8s-diff-port-243449             100m (5%)     0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 metrics-server-6867b74b74-7v8dr                         100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         27m
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         27m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 27m                kube-proxy       
	  Normal  Starting                 22m                kube-proxy       
	  Normal  NodeHasSufficientPID     28m (x7 over 28m)  kubelet          Node default-k8s-diff-port-243449 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    28m (x8 over 28m)  kubelet          Node default-k8s-diff-port-243449 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  28m (x8 over 28m)  kubelet          Node default-k8s-diff-port-243449 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  28m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 28m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  28m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  28m                kubelet          Node default-k8s-diff-port-243449 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    28m                kubelet          Node default-k8s-diff-port-243449 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     28m                kubelet          Node default-k8s-diff-port-243449 status is now: NodeHasSufficientPID
	  Normal  NodeReady                28m                kubelet          Node default-k8s-diff-port-243449 status is now: NodeReady
	  Normal  RegisteredNode           28m                node-controller  Node default-k8s-diff-port-243449 event: Registered Node default-k8s-diff-port-243449 in Controller
	  Normal  Starting                 22m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  22m (x8 over 22m)  kubelet          Node default-k8s-diff-port-243449 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    22m (x8 over 22m)  kubelet          Node default-k8s-diff-port-243449 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     22m (x7 over 22m)  kubelet          Node default-k8s-diff-port-243449 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  22m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           22m                node-controller  Node default-k8s-diff-port-243449 event: Registered Node default-k8s-diff-port-243449 in Controller
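As a cross-check on the Allocated resources summary above, the per-pod requests listed earlier add up as expected: CPU requests 100m + 100m + 250m + 200m + 100m + 100m = 850m, i.e. 850m / 2000m ≈ 42% of the node's 2 CPUs; memory requests 70Mi + 100Mi + 200Mi = 370Mi ≈ 17% of the ~2113Mi allocatable (2164184Ki); and the single 170Mi limit ≈ 8%, matching the percentages shown.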
	
	
	==> dmesg <==
	[Sep14 18:08] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.055344] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.044178] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.979298] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.015370] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +2.349940] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Sep14 18:09] systemd-fstab-generator[618]: Ignoring "noauto" option for root device
	[  +0.135128] systemd-fstab-generator[630]: Ignoring "noauto" option for root device
	[  +0.181018] systemd-fstab-generator[644]: Ignoring "noauto" option for root device
	[  +0.132870] systemd-fstab-generator[656]: Ignoring "noauto" option for root device
	[  +0.302572] systemd-fstab-generator[686]: Ignoring "noauto" option for root device
	[  +4.117874] systemd-fstab-generator[775]: Ignoring "noauto" option for root device
	[  +2.013271] systemd-fstab-generator[895]: Ignoring "noauto" option for root device
	[  +0.068264] kauditd_printk_skb: 158 callbacks suppressed
	[  +5.536559] kauditd_printk_skb: 69 callbacks suppressed
	[  +4.405537] systemd-fstab-generator[1518]: Ignoring "noauto" option for root device
	[  +1.369927] kauditd_printk_skb: 64 callbacks suppressed
	[  +5.557021] kauditd_printk_skb: 44 callbacks suppressed
	[Sep14 18:30] hrtimer: interrupt took 3681954 ns
	
	
	==> etcd [7fb6567a7b9f36f21430774a159db944e9060eac3c8fdf9abc50b9c56a2b0377] <==
	{"level":"info","ts":"2024-09-14T18:09:15.178139Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-14T18:19:15.216882Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":853}
	{"level":"info","ts":"2024-09-14T18:19:15.226311Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":853,"took":"8.850309ms","hash":1517727158,"current-db-size-bytes":2691072,"current-db-size":"2.7 MB","current-db-size-in-use-bytes":2691072,"current-db-size-in-use":"2.7 MB"}
	{"level":"info","ts":"2024-09-14T18:19:15.226463Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1517727158,"revision":853,"compact-revision":-1}
	{"level":"info","ts":"2024-09-14T18:24:15.222777Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1095}
	{"level":"info","ts":"2024-09-14T18:24:15.226478Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1095,"took":"3.423605ms","hash":1637320494,"current-db-size-bytes":2691072,"current-db-size":"2.7 MB","current-db-size-in-use-bytes":1650688,"current-db-size-in-use":"1.7 MB"}
	{"level":"info","ts":"2024-09-14T18:24:15.226531Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1637320494,"revision":1095,"compact-revision":853}
	{"level":"info","ts":"2024-09-14T18:29:15.231577Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1339}
	{"level":"info","ts":"2024-09-14T18:29:15.235129Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1339,"took":"3.22761ms","hash":2318676823,"current-db-size-bytes":2691072,"current-db-size":"2.7 MB","current-db-size-in-use-bytes":1662976,"current-db-size-in-use":"1.7 MB"}
	{"level":"info","ts":"2024-09-14T18:29:15.235187Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2318676823,"revision":1339,"compact-revision":1095}
	{"level":"info","ts":"2024-09-14T18:29:23.969964Z","caller":"traceutil/trace.go:171","msg":"trace[1405511841] transaction","detail":"{read_only:false; response_revision:1589; number_of_response:1; }","duration":"413.739815ms","start":"2024-09-14T18:29:23.556193Z","end":"2024-09-14T18:29:23.969933Z","steps":["trace[1405511841] 'process raft request'  (duration: 413.603753ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-14T18:29:23.970694Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-14T18:29:23.556177Z","time spent":"413.854103ms","remote":"127.0.0.1:46370","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1113,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1588 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1040 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"warn","ts":"2024-09-14T18:30:04.539819Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"149.624659ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-14T18:30:04.539935Z","caller":"traceutil/trace.go:171","msg":"trace[973750389] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1622; }","duration":"149.771231ms","start":"2024-09-14T18:30:04.390148Z","end":"2024-09-14T18:30:04.539919Z","steps":["trace[973750389] 'range keys from in-memory index tree'  (duration: 149.566241ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-14T18:30:25.899137Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"394.77474ms","expected-duration":"100ms","prefix":"","request":"header:<ID:14601393409274535433 > lease_revoke:<id:4aa291f1b989dda8>","response":"size:29"}
	{"level":"info","ts":"2024-09-14T18:30:25.899372Z","caller":"traceutil/trace.go:171","msg":"trace[924155060] linearizableReadLoop","detail":"{readStateIndex:1933; appliedIndex:1932; }","duration":"617.812049ms","start":"2024-09-14T18:30:25.281512Z","end":"2024-09-14T18:30:25.899324Z","steps":["trace[924155060] 'read index received'  (duration: 222.598338ms)","trace[924155060] 'applied index is now lower than readState.Index'  (duration: 395.212754ms)"],"step_count":2}
	{"level":"warn","ts":"2024-09-14T18:30:25.899803Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"618.283502ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-14T18:30:25.900012Z","caller":"traceutil/trace.go:171","msg":"trace[370440541] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1639; }","duration":"618.498296ms","start":"2024-09-14T18:30:25.281504Z","end":"2024-09-14T18:30:25.900002Z","steps":["trace[370440541] 'agreement among raft nodes before linearized reading'  (duration: 618.260881ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-14T18:30:25.900125Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-14T18:30:25.281426Z","time spent":"618.685851ms","remote":"127.0.0.1:46388","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":29,"request content":"key:\"/registry/pods\" limit:1 "}
	{"level":"warn","ts":"2024-09-14T18:30:25.900036Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"616.862504ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/resourcequotas/\" range_end:\"/registry/resourcequotas0\" count_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-14T18:30:25.900257Z","caller":"traceutil/trace.go:171","msg":"trace[494726127] range","detail":"{range_begin:/registry/resourcequotas/; range_end:/registry/resourcequotas0; response_count:0; response_revision:1639; }","duration":"617.088208ms","start":"2024-09-14T18:30:25.283154Z","end":"2024-09-14T18:30:25.900243Z","steps":["trace[494726127] 'agreement among raft nodes before linearized reading'  (duration: 616.835799ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-14T18:30:25.900306Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-14T18:30:25.283120Z","time spent":"617.174439ms","remote":"127.0.0.1:46272","response type":"/etcdserverpb.KV/Range","request count":0,"request size":56,"response count":0,"response size":29,"request content":"key:\"/registry/resourcequotas/\" range_end:\"/registry/resourcequotas0\" count_only:true "}
	{"level":"warn","ts":"2024-09-14T18:30:25.900089Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"510.514179ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-14T18:30:25.900539Z","caller":"traceutil/trace.go:171","msg":"trace[1262747048] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1639; }","duration":"510.958328ms","start":"2024-09-14T18:30:25.389570Z","end":"2024-09-14T18:30:25.900529Z","steps":["trace[1262747048] 'agreement among raft nodes before linearized reading'  (duration: 510.502013ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-14T18:30:25.900610Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-14T18:30:25.389538Z","time spent":"511.061897ms","remote":"127.0.0.1:46142","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":29,"request content":"key:\"/registry/health\" "}
	
	
	==> kernel <==
	 18:31:50 up 23 min,  0 users,  load average: 0.25, 0.21, 0.18
	Linux default-k8s-diff-port-243449 5.10.207 #1 SMP Sat Sep 14 07:35:46 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [6c532e45713d01c37f899bd4c9308e6c7fceb95accae13da1549ffb195e44bc4] <==
	I0914 18:27:17.476527       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0914 18:27:17.476616       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0914 18:29:16.474061       1 handler_proxy.go:99] no RequestInfo found in the context
	E0914 18:29:16.474409       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0914 18:29:17.476200       1 handler_proxy.go:99] no RequestInfo found in the context
	E0914 18:29:17.476257       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W0914 18:29:17.476328       1 handler_proxy.go:99] no RequestInfo found in the context
	E0914 18:29:17.476413       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0914 18:29:17.477381       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0914 18:29:17.478478       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0914 18:30:17.477740       1 handler_proxy.go:99] no RequestInfo found in the context
	E0914 18:30:17.477887       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W0914 18:30:17.478876       1 handler_proxy.go:99] no RequestInfo found in the context
	E0914 18:30:17.479031       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0914 18:30:17.479102       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0914 18:30:17.481105       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
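The recurring 503s above are the aggregation layer failing to fetch the OpenAPI spec from the metrics-server APIService, which is consistent with the metrics-server pod never starting (see the kubelet section below). Assuming the kubectl context is named after the profile (an assumption), the registration status of the aggregated API could be checked with:

    # hypothetical follow-up; the APIService name is taken verbatim from the log lines above
    kubectl --context default-k8s-diff-port-243449 get apiservice v1beta1.metrics.k8s.io -o wide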
	
	
	==> kube-controller-manager [09627c963da76d1a34f891fa3edf6c8c82f652f7308541d6f9083f5888f4bf94] <==
	E0914 18:26:22.251985       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0914 18:26:22.693805       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0914 18:26:52.258128       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0914 18:26:52.702638       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0914 18:27:22.264312       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0914 18:27:22.710539       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0914 18:27:52.269987       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0914 18:27:52.717745       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0914 18:28:22.276183       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0914 18:28:22.725415       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0914 18:28:52.283756       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0914 18:28:52.734012       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0914 18:29:22.289989       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0914 18:29:22.741463       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0914 18:29:52.296032       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0914 18:29:52.749518       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0914 18:30:09.829794       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="default-k8s-diff-port-243449"
	E0914 18:30:22.303539       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0914 18:30:22.760177       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0914 18:30:35.075940       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="963.628µs"
	I0914 18:30:48.075751       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="81.354µs"
	E0914 18:30:52.310699       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0914 18:30:52.767794       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0914 18:31:22.318078       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0914 18:31:22.777843       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [a5c3b65e96ba854d6ceb4a824ba5076d43cff0b7a1978617b69271d5b88cca4d] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0914 18:09:17.739938       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0914 18:09:17.749668       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.61.38"]
	E0914 18:09:17.749828       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0914 18:09:17.781011       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0914 18:09:17.781041       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0914 18:09:17.781064       1 server_linux.go:169] "Using iptables Proxier"
	I0914 18:09:17.783295       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0914 18:09:17.783654       1 server.go:483] "Version info" version="v1.31.1"
	I0914 18:09:17.783703       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0914 18:09:17.785052       1 config.go:199] "Starting service config controller"
	I0914 18:09:17.785106       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0914 18:09:17.785168       1 config.go:105] "Starting endpoint slice config controller"
	I0914 18:09:17.785187       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0914 18:09:17.785776       1 config.go:328] "Starting node config controller"
	I0914 18:09:17.785918       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0914 18:09:17.885916       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0914 18:09:17.885952       1 shared_informer.go:320] Caches are synced for service config
	I0914 18:09:17.885966       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [a390e6c0153550b206976ab4e172cf3236aac7e9e5671c332d8c0f8c4e567e0b] <==
	I0914 18:09:14.630654       1 serving.go:386] Generated self-signed cert in-memory
	W0914 18:09:16.426610       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0914 18:09:16.426710       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0914 18:09:16.426746       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0914 18:09:16.426817       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0914 18:09:16.491673       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.1"
	I0914 18:09:16.491723       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0914 18:09:16.499152       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0914 18:09:16.499320       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0914 18:09:16.502992       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0914 18:09:16.502407       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0914 18:09:16.607208       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 14 18:30:35 default-k8s-diff-port-243449 kubelet[902]: E0914 18:30:35.055206     902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-7v8dr" podUID="90be95af-c779-4b31-b261-2c4020a34280"
	Sep 14 18:30:42 default-k8s-diff-port-243449 kubelet[902]: E0914 18:30:42.379924     902 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726338642379454420,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 18:30:42 default-k8s-diff-port-243449 kubelet[902]: E0914 18:30:42.380215     902 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726338642379454420,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 18:30:48 default-k8s-diff-port-243449 kubelet[902]: E0914 18:30:48.053833     902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-7v8dr" podUID="90be95af-c779-4b31-b261-2c4020a34280"
	Sep 14 18:30:52 default-k8s-diff-port-243449 kubelet[902]: E0914 18:30:52.382867     902 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726338652382093504,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 18:30:52 default-k8s-diff-port-243449 kubelet[902]: E0914 18:30:52.383248     902 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726338652382093504,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 18:31:01 default-k8s-diff-port-243449 kubelet[902]: E0914 18:31:01.054522     902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-7v8dr" podUID="90be95af-c779-4b31-b261-2c4020a34280"
	Sep 14 18:31:02 default-k8s-diff-port-243449 kubelet[902]: E0914 18:31:02.386015     902 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726338662385613011,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 18:31:02 default-k8s-diff-port-243449 kubelet[902]: E0914 18:31:02.386080     902 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726338662385613011,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 18:31:12 default-k8s-diff-port-243449 kubelet[902]: E0914 18:31:12.072640     902 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 14 18:31:12 default-k8s-diff-port-243449 kubelet[902]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 14 18:31:12 default-k8s-diff-port-243449 kubelet[902]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 14 18:31:12 default-k8s-diff-port-243449 kubelet[902]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 14 18:31:12 default-k8s-diff-port-243449 kubelet[902]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 14 18:31:12 default-k8s-diff-port-243449 kubelet[902]: E0914 18:31:12.388772     902 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726338672388181209,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 18:31:12 default-k8s-diff-port-243449 kubelet[902]: E0914 18:31:12.388825     902 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726338672388181209,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 18:31:16 default-k8s-diff-port-243449 kubelet[902]: E0914 18:31:16.054623     902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-7v8dr" podUID="90be95af-c779-4b31-b261-2c4020a34280"
	Sep 14 18:31:22 default-k8s-diff-port-243449 kubelet[902]: E0914 18:31:22.390903     902 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726338682390323257,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 18:31:22 default-k8s-diff-port-243449 kubelet[902]: E0914 18:31:22.391406     902 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726338682390323257,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 18:31:31 default-k8s-diff-port-243449 kubelet[902]: E0914 18:31:31.056549     902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-7v8dr" podUID="90be95af-c779-4b31-b261-2c4020a34280"
	Sep 14 18:31:32 default-k8s-diff-port-243449 kubelet[902]: E0914 18:31:32.393702     902 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726338692393045780,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 18:31:32 default-k8s-diff-port-243449 kubelet[902]: E0914 18:31:32.393975     902 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726338692393045780,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 18:31:42 default-k8s-diff-port-243449 kubelet[902]: E0914 18:31:42.053177     902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-7v8dr" podUID="90be95af-c779-4b31-b261-2c4020a34280"
	Sep 14 18:31:42 default-k8s-diff-port-243449 kubelet[902]: E0914 18:31:42.396736     902 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726338702396232141,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 18:31:42 default-k8s-diff-port-243449 kubelet[902]: E0914 18:31:42.396840     902 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726338702396232141,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [b33f92ef722c8a6bde80bdbd3ff62a9b4b31f0bf548eec9aaadd4593a101017e] <==
	I0914 18:09:17.674762       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0914 18:09:47.682923       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [be0aa9c1761410495159b478552996da9670b728a9efd2985ec5cad6759e8277] <==
	I0914 18:09:48.388316       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0914 18:09:48.403689       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0914 18:09:48.405275       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0914 18:10:05.807561       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0914 18:10:05.807905       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-243449_69ac6bff-7150-461e-8193-24eb67d1af3a!
	I0914 18:10:05.810676       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"66f83808-3ad1-43c7-89ed-fe5345d634d8", APIVersion:"v1", ResourceVersion:"636", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-243449_69ac6bff-7150-461e-8193-24eb67d1af3a became leader
	I0914 18:10:05.910464       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-243449_69ac6bff-7150-461e-8193-24eb67d1af3a!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-243449 -n default-k8s-diff-port-243449
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-diff-port-243449 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-6867b74b74-7v8dr
helpers_test.go:274: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context default-k8s-diff-port-243449 describe pod metrics-server-6867b74b74-7v8dr
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-243449 describe pod metrics-server-6867b74b74-7v8dr: exit status 1 (94.701924ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-6867b74b74-7v8dr" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context default-k8s-diff-port-243449 describe pod metrics-server-6867b74b74-7v8dr: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (542.69s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (342.6s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/no-preload/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-168587 -n no-preload-168587
start_stop_delete_test.go:287: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-09-14 18:29:32.199958436 +0000 UTC m=+6346.721692409
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-168587 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context no-preload-168587 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.557µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-168587 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-168587 -n no-preload-168587
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-168587 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p no-preload-168587 logs -n 25: (1.257031227s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| addons  | enable metrics-server -p no-preload-168587             | no-preload-168587            | jenkins | v1.34.0 | 14 Sep 24 18:00 UTC | 14 Sep 24 18:00 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-168587                                   | no-preload-168587            | jenkins | v1.34.0 | 14 Sep 24 18:00 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-044534            | embed-certs-044534           | jenkins | v1.34.0 | 14 Sep 24 18:01 UTC | 14 Sep 24 18:01 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-044534                                  | embed-certs-044534           | jenkins | v1.34.0 | 14 Sep 24 18:01 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-470019                           | kubernetes-upgrade-470019    | jenkins | v1.34.0 | 14 Sep 24 18:02 UTC | 14 Sep 24 18:02 UTC |
	| start   | -p kubernetes-upgrade-470019                           | kubernetes-upgrade-470019    | jenkins | v1.34.0 | 14 Sep 24 18:02 UTC | 14 Sep 24 18:02 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-470019                           | kubernetes-upgrade-470019    | jenkins | v1.34.0 | 14 Sep 24 18:02 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-470019                           | kubernetes-upgrade-470019    | jenkins | v1.34.0 | 14 Sep 24 18:02 UTC | 14 Sep 24 18:02 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-470019                           | kubernetes-upgrade-470019    | jenkins | v1.34.0 | 14 Sep 24 18:03 UTC | 14 Sep 24 18:03 UTC |
	| delete  | -p                                                     | disable-driver-mounts-444413 | jenkins | v1.34.0 | 14 Sep 24 18:03 UTC | 14 Sep 24 18:03 UTC |
	|         | disable-driver-mounts-444413                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-243449 | jenkins | v1.34.0 | 14 Sep 24 18:03 UTC | 14 Sep 24 18:03 UTC |
	|         | default-k8s-diff-port-243449                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-556121        | old-k8s-version-556121       | jenkins | v1.34.0 | 14 Sep 24 18:03 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-168587                  | no-preload-168587            | jenkins | v1.34.0 | 14 Sep 24 18:03 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-168587                                   | no-preload-168587            | jenkins | v1.34.0 | 14 Sep 24 18:03 UTC | 14 Sep 24 18:14 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-044534                 | embed-certs-044534           | jenkins | v1.34.0 | 14 Sep 24 18:03 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-044534                                  | embed-certs-044534           | jenkins | v1.34.0 | 14 Sep 24 18:04 UTC | 14 Sep 24 18:13 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-243449  | default-k8s-diff-port-243449 | jenkins | v1.34.0 | 14 Sep 24 18:04 UTC | 14 Sep 24 18:04 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-243449 | jenkins | v1.34.0 | 14 Sep 24 18:04 UTC |                     |
	|         | default-k8s-diff-port-243449                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-556121                              | old-k8s-version-556121       | jenkins | v1.34.0 | 14 Sep 24 18:05 UTC | 14 Sep 24 18:05 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-556121             | old-k8s-version-556121       | jenkins | v1.34.0 | 14 Sep 24 18:05 UTC | 14 Sep 24 18:05 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-556121                              | old-k8s-version-556121       | jenkins | v1.34.0 | 14 Sep 24 18:05 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-243449       | default-k8s-diff-port-243449 | jenkins | v1.34.0 | 14 Sep 24 18:06 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-243449 | jenkins | v1.34.0 | 14 Sep 24 18:06 UTC | 14 Sep 24 18:13 UTC |
	|         | default-k8s-diff-port-243449                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-556121                              | old-k8s-version-556121       | jenkins | v1.34.0 | 14 Sep 24 18:28 UTC | 14 Sep 24 18:28 UTC |
	| start   | -p newest-cni-019918 --memory=2200 --alsologtostderr   | newest-cni-019918            | jenkins | v1.34.0 | 14 Sep 24 18:28 UTC |                     |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/14 18:28:50
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0914 18:28:50.707025   69780 out.go:345] Setting OutFile to fd 1 ...
	I0914 18:28:50.707141   69780 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 18:28:50.707150   69780 out.go:358] Setting ErrFile to fd 2...
	I0914 18:28:50.707154   69780 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 18:28:50.707370   69780 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19643-8806/.minikube/bin
	I0914 18:28:50.707939   69780 out.go:352] Setting JSON to false
	I0914 18:28:50.708910   69780 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":7875,"bootTime":1726330656,"procs":205,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0914 18:28:50.709001   69780 start.go:139] virtualization: kvm guest
	I0914 18:28:50.711569   69780 out.go:177] * [newest-cni-019918] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0914 18:28:50.712925   69780 out.go:177]   - MINIKUBE_LOCATION=19643
	I0914 18:28:50.712955   69780 notify.go:220] Checking for updates...
	I0914 18:28:50.715172   69780 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0914 18:28:50.716297   69780 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19643-8806/kubeconfig
	I0914 18:28:50.717418   69780 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19643-8806/.minikube
	I0914 18:28:50.718639   69780 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0914 18:28:50.719711   69780 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0914 18:28:50.721347   69780 config.go:182] Loaded profile config "default-k8s-diff-port-243449": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0914 18:28:50.721449   69780 config.go:182] Loaded profile config "embed-certs-044534": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0914 18:28:50.721573   69780 config.go:182] Loaded profile config "no-preload-168587": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0914 18:28:50.721649   69780 driver.go:394] Setting default libvirt URI to qemu:///system
	I0914 18:28:50.759109   69780 out.go:177] * Using the kvm2 driver based on user configuration
	I0914 18:28:50.760292   69780 start.go:297] selected driver: kvm2
	I0914 18:28:50.760307   69780 start.go:901] validating driver "kvm2" against <nil>
	I0914 18:28:50.760338   69780 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0914 18:28:50.761081   69780 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 18:28:50.761168   69780 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19643-8806/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0914 18:28:50.776880   69780 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0914 18:28:50.776956   69780 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	W0914 18:28:50.777043   69780 out.go:270] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I0914 18:28:50.777339   69780 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0914 18:28:50.777375   69780 cni.go:84] Creating CNI manager for ""
	I0914 18:28:50.777432   69780 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0914 18:28:50.777444   69780 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0914 18:28:50.777521   69780 start.go:340] cluster config:
	{Name:newest-cni-019918 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726281268-19643@sha256:cce8be0c1ac4e3d852132008ef1cc1dcf5b79f708d025db83f146ae65db32e8e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:newest-cni-019918 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0914 18:28:50.777666   69780 iso.go:125] acquiring lock: {Name:mk538f7a4abc7956f82511183088cbfac1e66ebb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 18:28:50.780091   69780 out.go:177] * Starting "newest-cni-019918" primary control-plane node in "newest-cni-019918" cluster
	I0914 18:28:50.781628   69780 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0914 18:28:50.781699   69780 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19643-8806/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0914 18:28:50.781714   69780 cache.go:56] Caching tarball of preloaded images
	I0914 18:28:50.781829   69780 preload.go:172] Found /home/jenkins/minikube-integration/19643-8806/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0914 18:28:50.781843   69780 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0914 18:28:50.781975   69780 profile.go:143] Saving config to /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/newest-cni-019918/config.json ...
	I0914 18:28:50.782000   69780 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/newest-cni-019918/config.json: {Name:mkcf2346417161e10dd0f0e29fe692827d47edd2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 18:28:50.782207   69780 start.go:360] acquireMachinesLock for newest-cni-019918: {Name:mk26748ac63472c3f8b6f3848c12e76160c2970c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0914 18:28:50.782252   69780 start.go:364] duration metric: took 24.651µs to acquireMachinesLock for "newest-cni-019918"
	I0914 18:28:50.782275   69780 start.go:93] Provisioning new machine with config: &{Name:newest-cni-019918 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19643/minikube-v1.34.0-1726281733-19643-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726281268-19643@sha256:cce8be0c1ac4e3d852132008ef1cc1dcf5b79f708d025db83f146ae65db32e8e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:newest-cni-019918 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0914 18:28:50.782375   69780 start.go:125] createHost starting for "" (driver="kvm2")
	I0914 18:28:50.784928   69780 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0914 18:28:50.785075   69780 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 18:28:50.785112   69780 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 18:28:50.800498   69780 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42859
	I0914 18:28:50.800968   69780 main.go:141] libmachine: () Calling .GetVersion
	I0914 18:28:50.801515   69780 main.go:141] libmachine: Using API Version  1
	I0914 18:28:50.801535   69780 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 18:28:50.801864   69780 main.go:141] libmachine: () Calling .GetMachineName
	I0914 18:28:50.802064   69780 main.go:141] libmachine: (newest-cni-019918) Calling .GetMachineName
	I0914 18:28:50.802251   69780 main.go:141] libmachine: (newest-cni-019918) Calling .DriverName
	I0914 18:28:50.802413   69780 start.go:159] libmachine.API.Create for "newest-cni-019918" (driver="kvm2")
	I0914 18:28:50.802437   69780 client.go:168] LocalClient.Create starting
	I0914 18:28:50.802478   69780 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca.pem
	I0914 18:28:50.802522   69780 main.go:141] libmachine: Decoding PEM data...
	I0914 18:28:50.802542   69780 main.go:141] libmachine: Parsing certificate...
	I0914 18:28:50.802608   69780 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19643-8806/.minikube/certs/cert.pem
	I0914 18:28:50.802632   69780 main.go:141] libmachine: Decoding PEM data...
	I0914 18:28:50.802647   69780 main.go:141] libmachine: Parsing certificate...
	I0914 18:28:50.802677   69780 main.go:141] libmachine: Running pre-create checks...
	I0914 18:28:50.802690   69780 main.go:141] libmachine: (newest-cni-019918) Calling .PreCreateCheck
	I0914 18:28:50.803003   69780 main.go:141] libmachine: (newest-cni-019918) Calling .GetConfigRaw
	I0914 18:28:50.803428   69780 main.go:141] libmachine: Creating machine...
	I0914 18:28:50.803441   69780 main.go:141] libmachine: (newest-cni-019918) Calling .Create
	I0914 18:28:50.803577   69780 main.go:141] libmachine: (newest-cni-019918) Creating KVM machine...
	I0914 18:28:50.804901   69780 main.go:141] libmachine: (newest-cni-019918) DBG | found existing default KVM network
	I0914 18:28:50.806107   69780 main.go:141] libmachine: (newest-cni-019918) DBG | I0914 18:28:50.805950   69803 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:88:0b:22} reservation:<nil>}
	I0914 18:28:50.807161   69780 main.go:141] libmachine: (newest-cni-019918) DBG | I0914 18:28:50.807087   69803 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr3 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:be:5c:7a} reservation:<nil>}
	I0914 18:28:50.808024   69780 main.go:141] libmachine: (newest-cni-019918) DBG | I0914 18:28:50.807961   69803 network.go:211] skipping subnet 192.168.61.0/24 that is taken: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName:virbr4 IfaceIPv4:192.168.61.1 IfaceMTU:1500 IfaceMAC:52:54:00:f4:1d:a9} reservation:<nil>}
	I0914 18:28:50.809157   69780 main.go:141] libmachine: (newest-cni-019918) DBG | I0914 18:28:50.809077   69803 network.go:206] using free private subnet 192.168.72.0/24: &{IP:192.168.72.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.72.0/24 Gateway:192.168.72.1 ClientMin:192.168.72.2 ClientMax:192.168.72.254 Broadcast:192.168.72.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00039cfb0}
	I0914 18:28:50.809205   69780 main.go:141] libmachine: (newest-cni-019918) DBG | created network xml: 
	I0914 18:28:50.809224   69780 main.go:141] libmachine: (newest-cni-019918) DBG | <network>
	I0914 18:28:50.809234   69780 main.go:141] libmachine: (newest-cni-019918) DBG |   <name>mk-newest-cni-019918</name>
	I0914 18:28:50.809245   69780 main.go:141] libmachine: (newest-cni-019918) DBG |   <dns enable='no'/>
	I0914 18:28:50.809271   69780 main.go:141] libmachine: (newest-cni-019918) DBG |   
	I0914 18:28:50.809305   69780 main.go:141] libmachine: (newest-cni-019918) DBG |   <ip address='192.168.72.1' netmask='255.255.255.0'>
	I0914 18:28:50.809331   69780 main.go:141] libmachine: (newest-cni-019918) DBG |     <dhcp>
	I0914 18:28:50.809350   69780 main.go:141] libmachine: (newest-cni-019918) DBG |       <range start='192.168.72.2' end='192.168.72.253'/>
	I0914 18:28:50.809364   69780 main.go:141] libmachine: (newest-cni-019918) DBG |     </dhcp>
	I0914 18:28:50.809374   69780 main.go:141] libmachine: (newest-cni-019918) DBG |   </ip>
	I0914 18:28:50.809383   69780 main.go:141] libmachine: (newest-cni-019918) DBG |   
	I0914 18:28:50.809390   69780 main.go:141] libmachine: (newest-cni-019918) DBG | </network>
	I0914 18:28:50.809399   69780 main.go:141] libmachine: (newest-cni-019918) DBG | 
	I0914 18:28:50.815028   69780 main.go:141] libmachine: (newest-cni-019918) DBG | trying to create private KVM network mk-newest-cni-019918 192.168.72.0/24...
	I0914 18:28:50.887760   69780 main.go:141] libmachine: (newest-cni-019918) Setting up store path in /home/jenkins/minikube-integration/19643-8806/.minikube/machines/newest-cni-019918 ...
	I0914 18:28:50.887790   69780 main.go:141] libmachine: (newest-cni-019918) DBG | private KVM network mk-newest-cni-019918 192.168.72.0/24 created
	I0914 18:28:50.887819   69780 main.go:141] libmachine: (newest-cni-019918) Building disk image from file:///home/jenkins/minikube-integration/19643-8806/.minikube/cache/iso/amd64/minikube-v1.34.0-1726281733-19643-amd64.iso
	I0914 18:28:50.887848   69780 main.go:141] libmachine: (newest-cni-019918) Downloading /home/jenkins/minikube-integration/19643-8806/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19643-8806/.minikube/cache/iso/amd64/minikube-v1.34.0-1726281733-19643-amd64.iso...
	I0914 18:28:50.887869   69780 main.go:141] libmachine: (newest-cni-019918) DBG | I0914 18:28:50.887695   69803 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19643-8806/.minikube
	I0914 18:28:51.137674   69780 main.go:141] libmachine: (newest-cni-019918) DBG | I0914 18:28:51.137551   69803 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19643-8806/.minikube/machines/newest-cni-019918/id_rsa...
	I0914 18:28:51.231268   69780 main.go:141] libmachine: (newest-cni-019918) DBG | I0914 18:28:51.231147   69803 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19643-8806/.minikube/machines/newest-cni-019918/newest-cni-019918.rawdisk...
	I0914 18:28:51.231297   69780 main.go:141] libmachine: (newest-cni-019918) DBG | Writing magic tar header
	I0914 18:28:51.231309   69780 main.go:141] libmachine: (newest-cni-019918) DBG | Writing SSH key tar header
	I0914 18:28:51.231320   69780 main.go:141] libmachine: (newest-cni-019918) DBG | I0914 18:28:51.231289   69803 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19643-8806/.minikube/machines/newest-cni-019918 ...
	I0914 18:28:51.231475   69780 main.go:141] libmachine: (newest-cni-019918) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19643-8806/.minikube/machines/newest-cni-019918
	I0914 18:28:51.231505   69780 main.go:141] libmachine: (newest-cni-019918) Setting executable bit set on /home/jenkins/minikube-integration/19643-8806/.minikube/machines/newest-cni-019918 (perms=drwx------)
	I0914 18:28:51.231533   69780 main.go:141] libmachine: (newest-cni-019918) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19643-8806/.minikube/machines
	I0914 18:28:51.231554   69780 main.go:141] libmachine: (newest-cni-019918) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19643-8806/.minikube
	I0914 18:28:51.231568   69780 main.go:141] libmachine: (newest-cni-019918) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19643-8806
	I0914 18:28:51.231583   69780 main.go:141] libmachine: (newest-cni-019918) Setting executable bit set on /home/jenkins/minikube-integration/19643-8806/.minikube/machines (perms=drwxr-xr-x)
	I0914 18:28:51.231601   69780 main.go:141] libmachine: (newest-cni-019918) Setting executable bit set on /home/jenkins/minikube-integration/19643-8806/.minikube (perms=drwxr-xr-x)
	I0914 18:28:51.231613   69780 main.go:141] libmachine: (newest-cni-019918) Setting executable bit set on /home/jenkins/minikube-integration/19643-8806 (perms=drwxrwxr-x)
	I0914 18:28:51.231627   69780 main.go:141] libmachine: (newest-cni-019918) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0914 18:28:51.231643   69780 main.go:141] libmachine: (newest-cni-019918) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0914 18:28:51.231655   69780 main.go:141] libmachine: (newest-cni-019918) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0914 18:28:51.231668   69780 main.go:141] libmachine: (newest-cni-019918) Creating domain...
	I0914 18:28:51.231681   69780 main.go:141] libmachine: (newest-cni-019918) DBG | Checking permissions on dir: /home/jenkins
	I0914 18:28:51.231704   69780 main.go:141] libmachine: (newest-cni-019918) DBG | Checking permissions on dir: /home
	I0914 18:28:51.231724   69780 main.go:141] libmachine: (newest-cni-019918) DBG | Skipping /home - not owner
	I0914 18:28:51.232979   69780 main.go:141] libmachine: (newest-cni-019918) define libvirt domain using xml: 
	I0914 18:28:51.232995   69780 main.go:141] libmachine: (newest-cni-019918) <domain type='kvm'>
	I0914 18:28:51.233001   69780 main.go:141] libmachine: (newest-cni-019918)   <name>newest-cni-019918</name>
	I0914 18:28:51.233019   69780 main.go:141] libmachine: (newest-cni-019918)   <memory unit='MiB'>2200</memory>
	I0914 18:28:51.233025   69780 main.go:141] libmachine: (newest-cni-019918)   <vcpu>2</vcpu>
	I0914 18:28:51.233029   69780 main.go:141] libmachine: (newest-cni-019918)   <features>
	I0914 18:28:51.233034   69780 main.go:141] libmachine: (newest-cni-019918)     <acpi/>
	I0914 18:28:51.233038   69780 main.go:141] libmachine: (newest-cni-019918)     <apic/>
	I0914 18:28:51.233043   69780 main.go:141] libmachine: (newest-cni-019918)     <pae/>
	I0914 18:28:51.233046   69780 main.go:141] libmachine: (newest-cni-019918)     
	I0914 18:28:51.233051   69780 main.go:141] libmachine: (newest-cni-019918)   </features>
	I0914 18:28:51.233056   69780 main.go:141] libmachine: (newest-cni-019918)   <cpu mode='host-passthrough'>
	I0914 18:28:51.233063   69780 main.go:141] libmachine: (newest-cni-019918)   
	I0914 18:28:51.233067   69780 main.go:141] libmachine: (newest-cni-019918)   </cpu>
	I0914 18:28:51.233075   69780 main.go:141] libmachine: (newest-cni-019918)   <os>
	I0914 18:28:51.233079   69780 main.go:141] libmachine: (newest-cni-019918)     <type>hvm</type>
	I0914 18:28:51.233084   69780 main.go:141] libmachine: (newest-cni-019918)     <boot dev='cdrom'/>
	I0914 18:28:51.233092   69780 main.go:141] libmachine: (newest-cni-019918)     <boot dev='hd'/>
	I0914 18:28:51.233102   69780 main.go:141] libmachine: (newest-cni-019918)     <bootmenu enable='no'/>
	I0914 18:28:51.233110   69780 main.go:141] libmachine: (newest-cni-019918)   </os>
	I0914 18:28:51.233118   69780 main.go:141] libmachine: (newest-cni-019918)   <devices>
	I0914 18:28:51.233135   69780 main.go:141] libmachine: (newest-cni-019918)     <disk type='file' device='cdrom'>
	I0914 18:28:51.233144   69780 main.go:141] libmachine: (newest-cni-019918)       <source file='/home/jenkins/minikube-integration/19643-8806/.minikube/machines/newest-cni-019918/boot2docker.iso'/>
	I0914 18:28:51.233158   69780 main.go:141] libmachine: (newest-cni-019918)       <target dev='hdc' bus='scsi'/>
	I0914 18:28:51.233188   69780 main.go:141] libmachine: (newest-cni-019918)       <readonly/>
	I0914 18:28:51.233209   69780 main.go:141] libmachine: (newest-cni-019918)     </disk>
	I0914 18:28:51.233221   69780 main.go:141] libmachine: (newest-cni-019918)     <disk type='file' device='disk'>
	I0914 18:28:51.233233   69780 main.go:141] libmachine: (newest-cni-019918)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0914 18:28:51.233247   69780 main.go:141] libmachine: (newest-cni-019918)       <source file='/home/jenkins/minikube-integration/19643-8806/.minikube/machines/newest-cni-019918/newest-cni-019918.rawdisk'/>
	I0914 18:28:51.233255   69780 main.go:141] libmachine: (newest-cni-019918)       <target dev='hda' bus='virtio'/>
	I0914 18:28:51.233263   69780 main.go:141] libmachine: (newest-cni-019918)     </disk>
	I0914 18:28:51.233283   69780 main.go:141] libmachine: (newest-cni-019918)     <interface type='network'>
	I0914 18:28:51.233297   69780 main.go:141] libmachine: (newest-cni-019918)       <source network='mk-newest-cni-019918'/>
	I0914 18:28:51.233309   69780 main.go:141] libmachine: (newest-cni-019918)       <model type='virtio'/>
	I0914 18:28:51.233316   69780 main.go:141] libmachine: (newest-cni-019918)     </interface>
	I0914 18:28:51.233336   69780 main.go:141] libmachine: (newest-cni-019918)     <interface type='network'>
	I0914 18:28:51.233349   69780 main.go:141] libmachine: (newest-cni-019918)       <source network='default'/>
	I0914 18:28:51.233357   69780 main.go:141] libmachine: (newest-cni-019918)       <model type='virtio'/>
	I0914 18:28:51.233365   69780 main.go:141] libmachine: (newest-cni-019918)     </interface>
	I0914 18:28:51.233376   69780 main.go:141] libmachine: (newest-cni-019918)     <serial type='pty'>
	I0914 18:28:51.233384   69780 main.go:141] libmachine: (newest-cni-019918)       <target port='0'/>
	I0914 18:28:51.233393   69780 main.go:141] libmachine: (newest-cni-019918)     </serial>
	I0914 18:28:51.233401   69780 main.go:141] libmachine: (newest-cni-019918)     <console type='pty'>
	I0914 18:28:51.233412   69780 main.go:141] libmachine: (newest-cni-019918)       <target type='serial' port='0'/>
	I0914 18:28:51.233423   69780 main.go:141] libmachine: (newest-cni-019918)     </console>
	I0914 18:28:51.233433   69780 main.go:141] libmachine: (newest-cni-019918)     <rng model='virtio'>
	I0914 18:28:51.233448   69780 main.go:141] libmachine: (newest-cni-019918)       <backend model='random'>/dev/random</backend>
	I0914 18:28:51.233458   69780 main.go:141] libmachine: (newest-cni-019918)     </rng>
	I0914 18:28:51.233468   69780 main.go:141] libmachine: (newest-cni-019918)     
	I0914 18:28:51.233481   69780 main.go:141] libmachine: (newest-cni-019918)     
	I0914 18:28:51.233493   69780 main.go:141] libmachine: (newest-cni-019918)   </devices>
	I0914 18:28:51.233507   69780 main.go:141] libmachine: (newest-cni-019918) </domain>
	I0914 18:28:51.233521   69780 main.go:141] libmachine: (newest-cni-019918) 
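
The XML dump above is the libvirt domain definition the kvm2 driver hands to libvirtd before booting the VM. As a rough sketch of that step (not minikube's actual driver code; the import path and API are assumptions based on the current libvirt Go bindings), defining and starting a domain from such XML looks like this:

    // Minimal sketch: register a domain from XML and boot it via libvirt.
    // The connection URI matches the KVMQemuURI value shown later in this log.
    package main

    import (
        "log"

        libvirt "libvirt.org/go/libvirt"
    )

    func defineAndStart(domainXML string) error {
        conn, err := libvirt.NewConnect("qemu:///system")
        if err != nil {
            return err
        }
        defer conn.Close()

        // "define libvirt domain using xml" in the log.
        dom, err := conn.DomainDefineXML(domainXML)
        if err != nil {
            return err
        }
        defer dom.Free()

        // "Creating domain..." in the log: actually start the defined machine.
        return dom.Create()
    }

    func main() {
        // A real invocation would pass the full XML shown above.
        if err := defineAndStart("<domain type='kvm'>...</domain>"); err != nil {
            log.Fatal(err)
        }
    }
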
	I0914 18:28:51.239084   69780 main.go:141] libmachine: (newest-cni-019918) DBG | domain newest-cni-019918 has defined MAC address 52:54:00:3a:2a:9d in network default
	I0914 18:28:51.239959   69780 main.go:141] libmachine: (newest-cni-019918) Ensuring networks are active...
	I0914 18:28:51.239984   69780 main.go:141] libmachine: (newest-cni-019918) DBG | domain newest-cni-019918 has defined MAC address 52:54:00:f8:a8:64 in network mk-newest-cni-019918
	I0914 18:28:51.240955   69780 main.go:141] libmachine: (newest-cni-019918) Ensuring network default is active
	I0914 18:28:51.241526   69780 main.go:141] libmachine: (newest-cni-019918) Ensuring network mk-newest-cni-019918 is active
	I0914 18:28:51.242289   69780 main.go:141] libmachine: (newest-cni-019918) Getting domain xml...
	I0914 18:28:51.243248   69780 main.go:141] libmachine: (newest-cni-019918) Creating domain...
	I0914 18:28:52.530505   69780 main.go:141] libmachine: (newest-cni-019918) Waiting to get IP...
	I0914 18:28:52.531371   69780 main.go:141] libmachine: (newest-cni-019918) DBG | domain newest-cni-019918 has defined MAC address 52:54:00:f8:a8:64 in network mk-newest-cni-019918
	I0914 18:28:52.531823   69780 main.go:141] libmachine: (newest-cni-019918) DBG | unable to find current IP address of domain newest-cni-019918 in network mk-newest-cni-019918
	I0914 18:28:52.531883   69780 main.go:141] libmachine: (newest-cni-019918) DBG | I0914 18:28:52.531823   69803 retry.go:31] will retry after 276.17619ms: waiting for machine to come up
	I0914 18:28:52.809282   69780 main.go:141] libmachine: (newest-cni-019918) DBG | domain newest-cni-019918 has defined MAC address 52:54:00:f8:a8:64 in network mk-newest-cni-019918
	I0914 18:28:52.809800   69780 main.go:141] libmachine: (newest-cni-019918) DBG | unable to find current IP address of domain newest-cni-019918 in network mk-newest-cni-019918
	I0914 18:28:52.809829   69780 main.go:141] libmachine: (newest-cni-019918) DBG | I0914 18:28:52.809732   69803 retry.go:31] will retry after 260.99165ms: waiting for machine to come up
	I0914 18:28:53.072213   69780 main.go:141] libmachine: (newest-cni-019918) DBG | domain newest-cni-019918 has defined MAC address 52:54:00:f8:a8:64 in network mk-newest-cni-019918
	I0914 18:28:53.072626   69780 main.go:141] libmachine: (newest-cni-019918) DBG | unable to find current IP address of domain newest-cni-019918 in network mk-newest-cni-019918
	I0914 18:28:53.072647   69780 main.go:141] libmachine: (newest-cni-019918) DBG | I0914 18:28:53.072567   69803 retry.go:31] will retry after 395.186049ms: waiting for machine to come up
	I0914 18:28:53.469125   69780 main.go:141] libmachine: (newest-cni-019918) DBG | domain newest-cni-019918 has defined MAC address 52:54:00:f8:a8:64 in network mk-newest-cni-019918
	I0914 18:28:53.469577   69780 main.go:141] libmachine: (newest-cni-019918) DBG | unable to find current IP address of domain newest-cni-019918 in network mk-newest-cni-019918
	I0914 18:28:53.469604   69780 main.go:141] libmachine: (newest-cni-019918) DBG | I0914 18:28:53.469520   69803 retry.go:31] will retry after 554.460886ms: waiting for machine to come up
	I0914 18:28:54.025242   69780 main.go:141] libmachine: (newest-cni-019918) DBG | domain newest-cni-019918 has defined MAC address 52:54:00:f8:a8:64 in network mk-newest-cni-019918
	I0914 18:28:54.025717   69780 main.go:141] libmachine: (newest-cni-019918) DBG | unable to find current IP address of domain newest-cni-019918 in network mk-newest-cni-019918
	I0914 18:28:54.025742   69780 main.go:141] libmachine: (newest-cni-019918) DBG | I0914 18:28:54.025658   69803 retry.go:31] will retry after 502.338117ms: waiting for machine to come up
	I0914 18:28:54.529453   69780 main.go:141] libmachine: (newest-cni-019918) DBG | domain newest-cni-019918 has defined MAC address 52:54:00:f8:a8:64 in network mk-newest-cni-019918
	I0914 18:28:54.530086   69780 main.go:141] libmachine: (newest-cni-019918) DBG | unable to find current IP address of domain newest-cni-019918 in network mk-newest-cni-019918
	I0914 18:28:54.530135   69780 main.go:141] libmachine: (newest-cni-019918) DBG | I0914 18:28:54.530054   69803 retry.go:31] will retry after 813.538746ms: waiting for machine to come up
	I0914 18:28:55.344970   69780 main.go:141] libmachine: (newest-cni-019918) DBG | domain newest-cni-019918 has defined MAC address 52:54:00:f8:a8:64 in network mk-newest-cni-019918
	I0914 18:28:55.345420   69780 main.go:141] libmachine: (newest-cni-019918) DBG | unable to find current IP address of domain newest-cni-019918 in network mk-newest-cni-019918
	I0914 18:28:55.345449   69780 main.go:141] libmachine: (newest-cni-019918) DBG | I0914 18:28:55.345365   69803 retry.go:31] will retry after 972.089717ms: waiting for machine to come up
	I0914 18:28:56.319049   69780 main.go:141] libmachine: (newest-cni-019918) DBG | domain newest-cni-019918 has defined MAC address 52:54:00:f8:a8:64 in network mk-newest-cni-019918
	I0914 18:28:56.319507   69780 main.go:141] libmachine: (newest-cni-019918) DBG | unable to find current IP address of domain newest-cni-019918 in network mk-newest-cni-019918
	I0914 18:28:56.319535   69780 main.go:141] libmachine: (newest-cni-019918) DBG | I0914 18:28:56.319451   69803 retry.go:31] will retry after 989.270051ms: waiting for machine to come up
	I0914 18:28:57.310616   69780 main.go:141] libmachine: (newest-cni-019918) DBG | domain newest-cni-019918 has defined MAC address 52:54:00:f8:a8:64 in network mk-newest-cni-019918
	I0914 18:28:57.311094   69780 main.go:141] libmachine: (newest-cni-019918) DBG | unable to find current IP address of domain newest-cni-019918 in network mk-newest-cni-019918
	I0914 18:28:57.311122   69780 main.go:141] libmachine: (newest-cni-019918) DBG | I0914 18:28:57.311051   69803 retry.go:31] will retry after 1.411531076s: waiting for machine to come up
	I0914 18:28:58.723941   69780 main.go:141] libmachine: (newest-cni-019918) DBG | domain newest-cni-019918 has defined MAC address 52:54:00:f8:a8:64 in network mk-newest-cni-019918
	I0914 18:28:58.724521   69780 main.go:141] libmachine: (newest-cni-019918) DBG | unable to find current IP address of domain newest-cni-019918 in network mk-newest-cni-019918
	I0914 18:28:58.724547   69780 main.go:141] libmachine: (newest-cni-019918) DBG | I0914 18:28:58.724478   69803 retry.go:31] will retry after 2.144376373s: waiting for machine to come up
	I0914 18:29:00.870517   69780 main.go:141] libmachine: (newest-cni-019918) DBG | domain newest-cni-019918 has defined MAC address 52:54:00:f8:a8:64 in network mk-newest-cni-019918
	I0914 18:29:00.870910   69780 main.go:141] libmachine: (newest-cni-019918) DBG | unable to find current IP address of domain newest-cni-019918 in network mk-newest-cni-019918
	I0914 18:29:00.870935   69780 main.go:141] libmachine: (newest-cni-019918) DBG | I0914 18:29:00.870880   69803 retry.go:31] will retry after 2.623436153s: waiting for machine to come up
	I0914 18:29:03.495443   69780 main.go:141] libmachine: (newest-cni-019918) DBG | domain newest-cni-019918 has defined MAC address 52:54:00:f8:a8:64 in network mk-newest-cni-019918
	I0914 18:29:03.495954   69780 main.go:141] libmachine: (newest-cni-019918) DBG | unable to find current IP address of domain newest-cni-019918 in network mk-newest-cni-019918
	I0914 18:29:03.495991   69780 main.go:141] libmachine: (newest-cni-019918) DBG | I0914 18:29:03.495917   69803 retry.go:31] will retry after 3.523687503s: waiting for machine to come up
	I0914 18:29:07.021023   69780 main.go:141] libmachine: (newest-cni-019918) DBG | domain newest-cni-019918 has defined MAC address 52:54:00:f8:a8:64 in network mk-newest-cni-019918
	I0914 18:29:07.021465   69780 main.go:141] libmachine: (newest-cni-019918) DBG | unable to find current IP address of domain newest-cni-019918 in network mk-newest-cni-019918
	I0914 18:29:07.021495   69780 main.go:141] libmachine: (newest-cni-019918) DBG | I0914 18:29:07.021395   69803 retry.go:31] will retry after 2.865633365s: waiting for machine to come up
	I0914 18:29:09.888889   69780 main.go:141] libmachine: (newest-cni-019918) DBG | domain newest-cni-019918 has defined MAC address 52:54:00:f8:a8:64 in network mk-newest-cni-019918
	I0914 18:29:09.889365   69780 main.go:141] libmachine: (newest-cni-019918) DBG | unable to find current IP address of domain newest-cni-019918 in network mk-newest-cni-019918
	I0914 18:29:09.889394   69780 main.go:141] libmachine: (newest-cni-019918) DBG | I0914 18:29:09.889316   69803 retry.go:31] will retry after 4.971757917s: waiting for machine to come up
	I0914 18:29:14.863206   69780 main.go:141] libmachine: (newest-cni-019918) DBG | domain newest-cni-019918 has defined MAC address 52:54:00:f8:a8:64 in network mk-newest-cni-019918
	I0914 18:29:14.863662   69780 main.go:141] libmachine: (newest-cni-019918) Found IP for machine: 192.168.72.152
	I0914 18:29:14.863690   69780 main.go:141] libmachine: (newest-cni-019918) DBG | domain newest-cni-019918 has current primary IP address 192.168.72.152 and MAC address 52:54:00:f8:a8:64 in network mk-newest-cni-019918
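
The burst of "will retry after ..." lines above is a poll loop: the driver repeatedly asks the mk-newest-cni-019918 network for a DHCP lease matching the VM's MAC address and sleeps a growing, jittered interval between attempts until the lease appears (about 22 seconds in this run). A generic sketch of that pattern, with lookupIP as a hypothetical stand-in for the libvirt lease query:

    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    var errNoLease = errors.New("no DHCP lease yet")

    // lookupIP stands in for querying libvirt for the domain's current lease.
    func lookupIP() (string, error) { return "", errNoLease }

    // waitForIP polls until an address shows up or the deadline passes,
    // sleeping a jittered, growing interval between attempts.
    func waitForIP(timeout time.Duration) (string, error) {
        deadline := time.Now().Add(timeout)
        backoff := 250 * time.Millisecond
        for time.Now().Before(deadline) {
            if ip, err := lookupIP(); err == nil {
                return ip, nil
            }
            sleep := backoff + time.Duration(rand.Int63n(int64(backoff)))
            fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
            time.Sleep(sleep)
            if backoff < 3*time.Second {
                backoff *= 2 // grow then cap, roughly like the intervals in the log
            }
        }
        return "", fmt.Errorf("timed out after %v waiting for an IP", timeout)
    }

    func main() {
        if _, err := waitForIP(2 * time.Second); err != nil {
            fmt.Println(err)
        }
    }
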
	I0914 18:29:14.863700   69780 main.go:141] libmachine: (newest-cni-019918) Reserving static IP address...
	I0914 18:29:14.864014   69780 main.go:141] libmachine: (newest-cni-019918) DBG | unable to find host DHCP lease matching {name: "newest-cni-019918", mac: "52:54:00:f8:a8:64", ip: "192.168.72.152"} in network mk-newest-cni-019918
	I0914 18:29:14.943692   69780 main.go:141] libmachine: (newest-cni-019918) DBG | Getting to WaitForSSH function...
	I0914 18:29:14.943726   69780 main.go:141] libmachine: (newest-cni-019918) Reserved static IP address: 192.168.72.152
	I0914 18:29:14.943738   69780 main.go:141] libmachine: (newest-cni-019918) Waiting for SSH to be available...
	I0914 18:29:14.946496   69780 main.go:141] libmachine: (newest-cni-019918) DBG | domain newest-cni-019918 has defined MAC address 52:54:00:f8:a8:64 in network mk-newest-cni-019918
	I0914 18:29:14.946950   69780 main.go:141] libmachine: (newest-cni-019918) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:a8:64", ip: ""} in network mk-newest-cni-019918: {Iface:virbr1 ExpiryTime:2024-09-14 19:29:04 +0000 UTC Type:0 Mac:52:54:00:f8:a8:64 Iaid: IPaddr:192.168.72.152 Prefix:24 Hostname:minikube Clientid:01:52:54:00:f8:a8:64}
	I0914 18:29:14.946985   69780 main.go:141] libmachine: (newest-cni-019918) DBG | domain newest-cni-019918 has defined IP address 192.168.72.152 and MAC address 52:54:00:f8:a8:64 in network mk-newest-cni-019918
	I0914 18:29:14.947106   69780 main.go:141] libmachine: (newest-cni-019918) DBG | Using SSH client type: external
	I0914 18:29:14.947137   69780 main.go:141] libmachine: (newest-cni-019918) DBG | Using SSH private key: /home/jenkins/minikube-integration/19643-8806/.minikube/machines/newest-cni-019918/id_rsa (-rw-------)
	I0914 18:29:14.947160   69780 main.go:141] libmachine: (newest-cni-019918) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.152 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19643-8806/.minikube/machines/newest-cni-019918/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0914 18:29:14.947173   69780 main.go:141] libmachine: (newest-cni-019918) DBG | About to run SSH command:
	I0914 18:29:14.947182   69780 main.go:141] libmachine: (newest-cni-019918) DBG | exit 0
	I0914 18:29:15.074363   69780 main.go:141] libmachine: (newest-cni-019918) DBG | SSH cmd err, output: <nil>: 
	I0914 18:29:15.074653   69780 main.go:141] libmachine: (newest-cni-019918) KVM machine creation complete!
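
Machine creation is only declared complete once an SSH probe succeeds: the driver shells out to the system ssh binary with the options listed above and runs `exit 0` until the guest's sshd answers. A rough equivalent of that probe, with the address and key path copied from the log and the retry interval an assumption:

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // sshReady runs `exit 0` on the guest over SSH; a zero exit status means sshd is up.
    func sshReady(addr, keyPath string) bool {
        cmd := exec.Command("ssh",
            "-F", "/dev/null",
            "-o", "ConnectTimeout=10",
            "-o", "StrictHostKeyChecking=no",
            "-o", "UserKnownHostsFile=/dev/null",
            "-o", "IdentitiesOnly=yes",
            "-i", keyPath,
            "-p", "22",
            "docker@"+addr,
            "exit 0")
        return cmd.Run() == nil
    }

    func main() {
        addr := "192.168.72.152"
        key := "/home/jenkins/minikube-integration/19643-8806/.minikube/machines/newest-cni-019918/id_rsa"
        for !sshReady(addr, key) {
            time.Sleep(2 * time.Second)
        }
        fmt.Println("SSH is available")
    }
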
	I0914 18:29:15.074963   69780 main.go:141] libmachine: (newest-cni-019918) Calling .GetConfigRaw
	I0914 18:29:15.075525   69780 main.go:141] libmachine: (newest-cni-019918) Calling .DriverName
	I0914 18:29:15.075762   69780 main.go:141] libmachine: (newest-cni-019918) Calling .DriverName
	I0914 18:29:15.075952   69780 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0914 18:29:15.075967   69780 main.go:141] libmachine: (newest-cni-019918) Calling .GetState
	I0914 18:29:15.077196   69780 main.go:141] libmachine: Detecting operating system of created instance...
	I0914 18:29:15.077211   69780 main.go:141] libmachine: Waiting for SSH to be available...
	I0914 18:29:15.077217   69780 main.go:141] libmachine: Getting to WaitForSSH function...
	I0914 18:29:15.077226   69780 main.go:141] libmachine: (newest-cni-019918) Calling .GetSSHHostname
	I0914 18:29:15.079658   69780 main.go:141] libmachine: (newest-cni-019918) DBG | domain newest-cni-019918 has defined MAC address 52:54:00:f8:a8:64 in network mk-newest-cni-019918
	I0914 18:29:15.080105   69780 main.go:141] libmachine: (newest-cni-019918) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:a8:64", ip: ""} in network mk-newest-cni-019918: {Iface:virbr1 ExpiryTime:2024-09-14 19:29:04 +0000 UTC Type:0 Mac:52:54:00:f8:a8:64 Iaid: IPaddr:192.168.72.152 Prefix:24 Hostname:newest-cni-019918 Clientid:01:52:54:00:f8:a8:64}
	I0914 18:29:15.080126   69780 main.go:141] libmachine: (newest-cni-019918) DBG | domain newest-cni-019918 has defined IP address 192.168.72.152 and MAC address 52:54:00:f8:a8:64 in network mk-newest-cni-019918
	I0914 18:29:15.080291   69780 main.go:141] libmachine: (newest-cni-019918) Calling .GetSSHPort
	I0914 18:29:15.080487   69780 main.go:141] libmachine: (newest-cni-019918) Calling .GetSSHKeyPath
	I0914 18:29:15.080631   69780 main.go:141] libmachine: (newest-cni-019918) Calling .GetSSHKeyPath
	I0914 18:29:15.080776   69780 main.go:141] libmachine: (newest-cni-019918) Calling .GetSSHUsername
	I0914 18:29:15.080959   69780 main.go:141] libmachine: Using SSH client type: native
	I0914 18:29:15.081225   69780 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.72.152 22 <nil> <nil>}
	I0914 18:29:15.081240   69780 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0914 18:29:15.185448   69780 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0914 18:29:15.185469   69780 main.go:141] libmachine: Detecting the provisioner...
	I0914 18:29:15.185484   69780 main.go:141] libmachine: (newest-cni-019918) Calling .GetSSHHostname
	I0914 18:29:15.188457   69780 main.go:141] libmachine: (newest-cni-019918) DBG | domain newest-cni-019918 has defined MAC address 52:54:00:f8:a8:64 in network mk-newest-cni-019918
	I0914 18:29:15.188796   69780 main.go:141] libmachine: (newest-cni-019918) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:a8:64", ip: ""} in network mk-newest-cni-019918: {Iface:virbr1 ExpiryTime:2024-09-14 19:29:04 +0000 UTC Type:0 Mac:52:54:00:f8:a8:64 Iaid: IPaddr:192.168.72.152 Prefix:24 Hostname:newest-cni-019918 Clientid:01:52:54:00:f8:a8:64}
	I0914 18:29:15.188819   69780 main.go:141] libmachine: (newest-cni-019918) DBG | domain newest-cni-019918 has defined IP address 192.168.72.152 and MAC address 52:54:00:f8:a8:64 in network mk-newest-cni-019918
	I0914 18:29:15.189015   69780 main.go:141] libmachine: (newest-cni-019918) Calling .GetSSHPort
	I0914 18:29:15.189190   69780 main.go:141] libmachine: (newest-cni-019918) Calling .GetSSHKeyPath
	I0914 18:29:15.189372   69780 main.go:141] libmachine: (newest-cni-019918) Calling .GetSSHKeyPath
	I0914 18:29:15.189492   69780 main.go:141] libmachine: (newest-cni-019918) Calling .GetSSHUsername
	I0914 18:29:15.189620   69780 main.go:141] libmachine: Using SSH client type: native
	I0914 18:29:15.189824   69780 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.72.152 22 <nil> <nil>}
	I0914 18:29:15.189837   69780 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0914 18:29:15.294542   69780 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0914 18:29:15.294666   69780 main.go:141] libmachine: found compatible host: buildroot
	I0914 18:29:15.294681   69780 main.go:141] libmachine: Provisioning with buildroot...
	I0914 18:29:15.294693   69780 main.go:141] libmachine: (newest-cni-019918) Calling .GetMachineName
	I0914 18:29:15.294930   69780 buildroot.go:166] provisioning hostname "newest-cni-019918"
	I0914 18:29:15.294958   69780 main.go:141] libmachine: (newest-cni-019918) Calling .GetMachineName
	I0914 18:29:15.295091   69780 main.go:141] libmachine: (newest-cni-019918) Calling .GetSSHHostname
	I0914 18:29:15.297870   69780 main.go:141] libmachine: (newest-cni-019918) DBG | domain newest-cni-019918 has defined MAC address 52:54:00:f8:a8:64 in network mk-newest-cni-019918
	I0914 18:29:15.298318   69780 main.go:141] libmachine: (newest-cni-019918) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:a8:64", ip: ""} in network mk-newest-cni-019918: {Iface:virbr1 ExpiryTime:2024-09-14 19:29:04 +0000 UTC Type:0 Mac:52:54:00:f8:a8:64 Iaid: IPaddr:192.168.72.152 Prefix:24 Hostname:newest-cni-019918 Clientid:01:52:54:00:f8:a8:64}
	I0914 18:29:15.298359   69780 main.go:141] libmachine: (newest-cni-019918) DBG | domain newest-cni-019918 has defined IP address 192.168.72.152 and MAC address 52:54:00:f8:a8:64 in network mk-newest-cni-019918
	I0914 18:29:15.298542   69780 main.go:141] libmachine: (newest-cni-019918) Calling .GetSSHPort
	I0914 18:29:15.298751   69780 main.go:141] libmachine: (newest-cni-019918) Calling .GetSSHKeyPath
	I0914 18:29:15.298869   69780 main.go:141] libmachine: (newest-cni-019918) Calling .GetSSHKeyPath
	I0914 18:29:15.298979   69780 main.go:141] libmachine: (newest-cni-019918) Calling .GetSSHUsername
	I0914 18:29:15.299112   69780 main.go:141] libmachine: Using SSH client type: native
	I0914 18:29:15.299288   69780 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.72.152 22 <nil> <nil>}
	I0914 18:29:15.299305   69780 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-019918 && echo "newest-cni-019918" | sudo tee /etc/hostname
	I0914 18:29:15.421173   69780 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-019918
	
	I0914 18:29:15.421225   69780 main.go:141] libmachine: (newest-cni-019918) Calling .GetSSHHostname
	I0914 18:29:15.424059   69780 main.go:141] libmachine: (newest-cni-019918) DBG | domain newest-cni-019918 has defined MAC address 52:54:00:f8:a8:64 in network mk-newest-cni-019918
	I0914 18:29:15.424445   69780 main.go:141] libmachine: (newest-cni-019918) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:a8:64", ip: ""} in network mk-newest-cni-019918: {Iface:virbr1 ExpiryTime:2024-09-14 19:29:04 +0000 UTC Type:0 Mac:52:54:00:f8:a8:64 Iaid: IPaddr:192.168.72.152 Prefix:24 Hostname:newest-cni-019918 Clientid:01:52:54:00:f8:a8:64}
	I0914 18:29:15.424476   69780 main.go:141] libmachine: (newest-cni-019918) DBG | domain newest-cni-019918 has defined IP address 192.168.72.152 and MAC address 52:54:00:f8:a8:64 in network mk-newest-cni-019918
	I0914 18:29:15.424672   69780 main.go:141] libmachine: (newest-cni-019918) Calling .GetSSHPort
	I0914 18:29:15.424864   69780 main.go:141] libmachine: (newest-cni-019918) Calling .GetSSHKeyPath
	I0914 18:29:15.425022   69780 main.go:141] libmachine: (newest-cni-019918) Calling .GetSSHKeyPath
	I0914 18:29:15.425126   69780 main.go:141] libmachine: (newest-cni-019918) Calling .GetSSHUsername
	I0914 18:29:15.425281   69780 main.go:141] libmachine: Using SSH client type: native
	I0914 18:29:15.425524   69780 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.72.152 22 <nil> <nil>}
	I0914 18:29:15.425550   69780 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-019918' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-019918/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-019918' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0914 18:29:15.538716   69780 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0914 18:29:15.538745   69780 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19643-8806/.minikube CaCertPath:/home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19643-8806/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19643-8806/.minikube}
	I0914 18:29:15.538801   69780 buildroot.go:174] setting up certificates
	I0914 18:29:15.538829   69780 provision.go:84] configureAuth start
	I0914 18:29:15.538847   69780 main.go:141] libmachine: (newest-cni-019918) Calling .GetMachineName
	I0914 18:29:15.539117   69780 main.go:141] libmachine: (newest-cni-019918) Calling .GetIP
	I0914 18:29:15.541828   69780 main.go:141] libmachine: (newest-cni-019918) DBG | domain newest-cni-019918 has defined MAC address 52:54:00:f8:a8:64 in network mk-newest-cni-019918
	I0914 18:29:15.542201   69780 main.go:141] libmachine: (newest-cni-019918) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:a8:64", ip: ""} in network mk-newest-cni-019918: {Iface:virbr1 ExpiryTime:2024-09-14 19:29:04 +0000 UTC Type:0 Mac:52:54:00:f8:a8:64 Iaid: IPaddr:192.168.72.152 Prefix:24 Hostname:newest-cni-019918 Clientid:01:52:54:00:f8:a8:64}
	I0914 18:29:15.542229   69780 main.go:141] libmachine: (newest-cni-019918) DBG | domain newest-cni-019918 has defined IP address 192.168.72.152 and MAC address 52:54:00:f8:a8:64 in network mk-newest-cni-019918
	I0914 18:29:15.542486   69780 main.go:141] libmachine: (newest-cni-019918) Calling .GetSSHHostname
	I0914 18:29:15.544745   69780 main.go:141] libmachine: (newest-cni-019918) DBG | domain newest-cni-019918 has defined MAC address 52:54:00:f8:a8:64 in network mk-newest-cni-019918
	I0914 18:29:15.545066   69780 main.go:141] libmachine: (newest-cni-019918) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:a8:64", ip: ""} in network mk-newest-cni-019918: {Iface:virbr1 ExpiryTime:2024-09-14 19:29:04 +0000 UTC Type:0 Mac:52:54:00:f8:a8:64 Iaid: IPaddr:192.168.72.152 Prefix:24 Hostname:newest-cni-019918 Clientid:01:52:54:00:f8:a8:64}
	I0914 18:29:15.545096   69780 main.go:141] libmachine: (newest-cni-019918) DBG | domain newest-cni-019918 has defined IP address 192.168.72.152 and MAC address 52:54:00:f8:a8:64 in network mk-newest-cni-019918
	I0914 18:29:15.545268   69780 provision.go:143] copyHostCerts
	I0914 18:29:15.545354   69780 exec_runner.go:144] found /home/jenkins/minikube-integration/19643-8806/.minikube/ca.pem, removing ...
	I0914 18:29:15.545366   69780 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19643-8806/.minikube/ca.pem
	I0914 18:29:15.545447   69780 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19643-8806/.minikube/ca.pem (1082 bytes)
	I0914 18:29:15.545572   69780 exec_runner.go:144] found /home/jenkins/minikube-integration/19643-8806/.minikube/cert.pem, removing ...
	I0914 18:29:15.545582   69780 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19643-8806/.minikube/cert.pem
	I0914 18:29:15.545616   69780 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19643-8806/.minikube/cert.pem (1123 bytes)
	I0914 18:29:15.545700   69780 exec_runner.go:144] found /home/jenkins/minikube-integration/19643-8806/.minikube/key.pem, removing ...
	I0914 18:29:15.545712   69780 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19643-8806/.minikube/key.pem
	I0914 18:29:15.545752   69780 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19643-8806/.minikube/key.pem (1675 bytes)
	I0914 18:29:15.545825   69780 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19643-8806/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca-key.pem org=jenkins.newest-cni-019918 san=[127.0.0.1 192.168.72.152 localhost minikube newest-cni-019918]
	I0914 18:29:15.676892   69780 provision.go:177] copyRemoteCerts
	I0914 18:29:15.676973   69780 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0914 18:29:15.677002   69780 main.go:141] libmachine: (newest-cni-019918) Calling .GetSSHHostname
	I0914 18:29:15.679675   69780 main.go:141] libmachine: (newest-cni-019918) DBG | domain newest-cni-019918 has defined MAC address 52:54:00:f8:a8:64 in network mk-newest-cni-019918
	I0914 18:29:15.679975   69780 main.go:141] libmachine: (newest-cni-019918) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:a8:64", ip: ""} in network mk-newest-cni-019918: {Iface:virbr1 ExpiryTime:2024-09-14 19:29:04 +0000 UTC Type:0 Mac:52:54:00:f8:a8:64 Iaid: IPaddr:192.168.72.152 Prefix:24 Hostname:newest-cni-019918 Clientid:01:52:54:00:f8:a8:64}
	I0914 18:29:15.680002   69780 main.go:141] libmachine: (newest-cni-019918) DBG | domain newest-cni-019918 has defined IP address 192.168.72.152 and MAC address 52:54:00:f8:a8:64 in network mk-newest-cni-019918
	I0914 18:29:15.680229   69780 main.go:141] libmachine: (newest-cni-019918) Calling .GetSSHPort
	I0914 18:29:15.680415   69780 main.go:141] libmachine: (newest-cni-019918) Calling .GetSSHKeyPath
	I0914 18:29:15.680575   69780 main.go:141] libmachine: (newest-cni-019918) Calling .GetSSHUsername
	I0914 18:29:15.680692   69780 sshutil.go:53] new ssh client: &{IP:192.168.72.152 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/newest-cni-019918/id_rsa Username:docker}
	I0914 18:29:15.764186   69780 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0914 18:29:15.788765   69780 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0914 18:29:15.814883   69780 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0914 18:29:15.839708   69780 provision.go:87] duration metric: took 300.863979ms to configureAuth
	I0914 18:29:15.839734   69780 buildroot.go:189] setting minikube options for container-runtime
	I0914 18:29:15.839911   69780 config.go:182] Loaded profile config "newest-cni-019918": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0914 18:29:15.839988   69780 main.go:141] libmachine: (newest-cni-019918) Calling .GetSSHHostname
	I0914 18:29:15.843490   69780 main.go:141] libmachine: (newest-cni-019918) DBG | domain newest-cni-019918 has defined MAC address 52:54:00:f8:a8:64 in network mk-newest-cni-019918
	I0914 18:29:15.843918   69780 main.go:141] libmachine: (newest-cni-019918) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:a8:64", ip: ""} in network mk-newest-cni-019918: {Iface:virbr1 ExpiryTime:2024-09-14 19:29:04 +0000 UTC Type:0 Mac:52:54:00:f8:a8:64 Iaid: IPaddr:192.168.72.152 Prefix:24 Hostname:newest-cni-019918 Clientid:01:52:54:00:f8:a8:64}
	I0914 18:29:15.843950   69780 main.go:141] libmachine: (newest-cni-019918) DBG | domain newest-cni-019918 has defined IP address 192.168.72.152 and MAC address 52:54:00:f8:a8:64 in network mk-newest-cni-019918
	I0914 18:29:15.844105   69780 main.go:141] libmachine: (newest-cni-019918) Calling .GetSSHPort
	I0914 18:29:15.844303   69780 main.go:141] libmachine: (newest-cni-019918) Calling .GetSSHKeyPath
	I0914 18:29:15.844474   69780 main.go:141] libmachine: (newest-cni-019918) Calling .GetSSHKeyPath
	I0914 18:29:15.844575   69780 main.go:141] libmachine: (newest-cni-019918) Calling .GetSSHUsername
	I0914 18:29:15.844728   69780 main.go:141] libmachine: Using SSH client type: native
	I0914 18:29:15.844940   69780 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.72.152 22 <nil> <nil>}
	I0914 18:29:15.844969   69780 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0914 18:29:16.067444   69780 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0914 18:29:16.067471   69780 main.go:141] libmachine: Checking connection to Docker...
	I0914 18:29:16.067480   69780 main.go:141] libmachine: (newest-cni-019918) Calling .GetURL
	I0914 18:29:16.068786   69780 main.go:141] libmachine: (newest-cni-019918) DBG | Using libvirt version 6000000
	I0914 18:29:16.070999   69780 main.go:141] libmachine: (newest-cni-019918) DBG | domain newest-cni-019918 has defined MAC address 52:54:00:f8:a8:64 in network mk-newest-cni-019918
	I0914 18:29:16.071346   69780 main.go:141] libmachine: (newest-cni-019918) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:a8:64", ip: ""} in network mk-newest-cni-019918: {Iface:virbr1 ExpiryTime:2024-09-14 19:29:04 +0000 UTC Type:0 Mac:52:54:00:f8:a8:64 Iaid: IPaddr:192.168.72.152 Prefix:24 Hostname:newest-cni-019918 Clientid:01:52:54:00:f8:a8:64}
	I0914 18:29:16.071378   69780 main.go:141] libmachine: (newest-cni-019918) DBG | domain newest-cni-019918 has defined IP address 192.168.72.152 and MAC address 52:54:00:f8:a8:64 in network mk-newest-cni-019918
	I0914 18:29:16.071545   69780 main.go:141] libmachine: Docker is up and running!
	I0914 18:29:16.071570   69780 main.go:141] libmachine: Reticulating splines...
	I0914 18:29:16.071588   69780 client.go:171] duration metric: took 25.269133595s to LocalClient.Create
	I0914 18:29:16.071617   69780 start.go:167] duration metric: took 25.269203092s to libmachine.API.Create "newest-cni-019918"
	I0914 18:29:16.071628   69780 start.go:293] postStartSetup for "newest-cni-019918" (driver="kvm2")
	I0914 18:29:16.071641   69780 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0914 18:29:16.071665   69780 main.go:141] libmachine: (newest-cni-019918) Calling .DriverName
	I0914 18:29:16.071922   69780 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0914 18:29:16.071952   69780 main.go:141] libmachine: (newest-cni-019918) Calling .GetSSHHostname
	I0914 18:29:16.074214   69780 main.go:141] libmachine: (newest-cni-019918) DBG | domain newest-cni-019918 has defined MAC address 52:54:00:f8:a8:64 in network mk-newest-cni-019918
	I0914 18:29:16.074556   69780 main.go:141] libmachine: (newest-cni-019918) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:a8:64", ip: ""} in network mk-newest-cni-019918: {Iface:virbr1 ExpiryTime:2024-09-14 19:29:04 +0000 UTC Type:0 Mac:52:54:00:f8:a8:64 Iaid: IPaddr:192.168.72.152 Prefix:24 Hostname:newest-cni-019918 Clientid:01:52:54:00:f8:a8:64}
	I0914 18:29:16.074584   69780 main.go:141] libmachine: (newest-cni-019918) DBG | domain newest-cni-019918 has defined IP address 192.168.72.152 and MAC address 52:54:00:f8:a8:64 in network mk-newest-cni-019918
	I0914 18:29:16.074700   69780 main.go:141] libmachine: (newest-cni-019918) Calling .GetSSHPort
	I0914 18:29:16.074903   69780 main.go:141] libmachine: (newest-cni-019918) Calling .GetSSHKeyPath
	I0914 18:29:16.075097   69780 main.go:141] libmachine: (newest-cni-019918) Calling .GetSSHUsername
	I0914 18:29:16.075274   69780 sshutil.go:53] new ssh client: &{IP:192.168.72.152 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/newest-cni-019918/id_rsa Username:docker}
	I0914 18:29:16.160551   69780 ssh_runner.go:195] Run: cat /etc/os-release
	I0914 18:29:16.165053   69780 info.go:137] Remote host: Buildroot 2023.02.9
	I0914 18:29:16.165076   69780 filesync.go:126] Scanning /home/jenkins/minikube-integration/19643-8806/.minikube/addons for local assets ...
	I0914 18:29:16.165133   69780 filesync.go:126] Scanning /home/jenkins/minikube-integration/19643-8806/.minikube/files for local assets ...
	I0914 18:29:16.165237   69780 filesync.go:149] local asset: /home/jenkins/minikube-integration/19643-8806/.minikube/files/etc/ssl/certs/160162.pem -> 160162.pem in /etc/ssl/certs
	I0914 18:29:16.165324   69780 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0914 18:29:16.174649   69780 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/files/etc/ssl/certs/160162.pem --> /etc/ssl/certs/160162.pem (1708 bytes)
	I0914 18:29:16.200155   69780 start.go:296] duration metric: took 128.511058ms for postStartSetup
	I0914 18:29:16.200214   69780 main.go:141] libmachine: (newest-cni-019918) Calling .GetConfigRaw
	I0914 18:29:16.200883   69780 main.go:141] libmachine: (newest-cni-019918) Calling .GetIP
	I0914 18:29:16.203875   69780 main.go:141] libmachine: (newest-cni-019918) DBG | domain newest-cni-019918 has defined MAC address 52:54:00:f8:a8:64 in network mk-newest-cni-019918
	I0914 18:29:16.204401   69780 main.go:141] libmachine: (newest-cni-019918) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:a8:64", ip: ""} in network mk-newest-cni-019918: {Iface:virbr1 ExpiryTime:2024-09-14 19:29:04 +0000 UTC Type:0 Mac:52:54:00:f8:a8:64 Iaid: IPaddr:192.168.72.152 Prefix:24 Hostname:newest-cni-019918 Clientid:01:52:54:00:f8:a8:64}
	I0914 18:29:16.204440   69780 main.go:141] libmachine: (newest-cni-019918) DBG | domain newest-cni-019918 has defined IP address 192.168.72.152 and MAC address 52:54:00:f8:a8:64 in network mk-newest-cni-019918
	I0914 18:29:16.204703   69780 profile.go:143] Saving config to /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/newest-cni-019918/config.json ...
	I0914 18:29:16.204894   69780 start.go:128] duration metric: took 25.422507399s to createHost
	I0914 18:29:16.204920   69780 main.go:141] libmachine: (newest-cni-019918) Calling .GetSSHHostname
	I0914 18:29:16.207688   69780 main.go:141] libmachine: (newest-cni-019918) DBG | domain newest-cni-019918 has defined MAC address 52:54:00:f8:a8:64 in network mk-newest-cni-019918
	I0914 18:29:16.208151   69780 main.go:141] libmachine: (newest-cni-019918) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:a8:64", ip: ""} in network mk-newest-cni-019918: {Iface:virbr1 ExpiryTime:2024-09-14 19:29:04 +0000 UTC Type:0 Mac:52:54:00:f8:a8:64 Iaid: IPaddr:192.168.72.152 Prefix:24 Hostname:newest-cni-019918 Clientid:01:52:54:00:f8:a8:64}
	I0914 18:29:16.208182   69780 main.go:141] libmachine: (newest-cni-019918) DBG | domain newest-cni-019918 has defined IP address 192.168.72.152 and MAC address 52:54:00:f8:a8:64 in network mk-newest-cni-019918
	I0914 18:29:16.208411   69780 main.go:141] libmachine: (newest-cni-019918) Calling .GetSSHPort
	I0914 18:29:16.208600   69780 main.go:141] libmachine: (newest-cni-019918) Calling .GetSSHKeyPath
	I0914 18:29:16.208741   69780 main.go:141] libmachine: (newest-cni-019918) Calling .GetSSHKeyPath
	I0914 18:29:16.208863   69780 main.go:141] libmachine: (newest-cni-019918) Calling .GetSSHUsername
	I0914 18:29:16.209025   69780 main.go:141] libmachine: Using SSH client type: native
	I0914 18:29:16.209207   69780 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.72.152 22 <nil> <nil>}
	I0914 18:29:16.209218   69780 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0914 18:29:16.319403   69780 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726338556.299275414
	
	I0914 18:29:16.319426   69780 fix.go:216] guest clock: 1726338556.299275414
	I0914 18:29:16.319435   69780 fix.go:229] Guest: 2024-09-14 18:29:16.299275414 +0000 UTC Remote: 2024-09-14 18:29:16.204906279 +0000 UTC m=+25.532537387 (delta=94.369135ms)
	I0914 18:29:16.319459   69780 fix.go:200] guest clock delta is within tolerance: 94.369135ms
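
The fix.go lines above compare the guest clock (read via `date +%s.%N` over SSH) against the host clock and accept the machine when the delta is small, about 94ms here. A small sketch of that comparison using the two timestamps from the log; the 2s tolerance below is an assumption for illustration, not minikube's actual threshold:

    package main

    import (
        "fmt"
        "strconv"
        "time"
    )

    // clockDelta parses the guest's `date +%s.%N` output and returns the
    // absolute difference from the given host time.
    func clockDelta(guestOutput string, host time.Time) (time.Duration, error) {
        secs, err := strconv.ParseFloat(guestOutput, 64)
        if err != nil {
            return 0, err
        }
        guest := time.Unix(0, int64(secs*float64(time.Second)))
        d := guest.Sub(host)
        if d < 0 {
            d = -d
        }
        return d, nil
    }

    func main() {
        host := time.Unix(0, 1726338556204906279) // host-side timestamp from the log
        d, err := clockDelta("1726338556.299275414", host)
        if err != nil {
            panic(err)
        }
        const tolerance = 2 * time.Second // assumed threshold
        fmt.Printf("guest clock delta %v (within tolerance: %v)\n", d, d <= tolerance)
    }
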
	I0914 18:29:16.319467   69780 start.go:83] releasing machines lock for "newest-cni-019918", held for 25.53720361s
	I0914 18:29:16.319493   69780 main.go:141] libmachine: (newest-cni-019918) Calling .DriverName
	I0914 18:29:16.319777   69780 main.go:141] libmachine: (newest-cni-019918) Calling .GetIP
	I0914 18:29:16.322812   69780 main.go:141] libmachine: (newest-cni-019918) DBG | domain newest-cni-019918 has defined MAC address 52:54:00:f8:a8:64 in network mk-newest-cni-019918
	I0914 18:29:16.323223   69780 main.go:141] libmachine: (newest-cni-019918) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:a8:64", ip: ""} in network mk-newest-cni-019918: {Iface:virbr1 ExpiryTime:2024-09-14 19:29:04 +0000 UTC Type:0 Mac:52:54:00:f8:a8:64 Iaid: IPaddr:192.168.72.152 Prefix:24 Hostname:newest-cni-019918 Clientid:01:52:54:00:f8:a8:64}
	I0914 18:29:16.323252   69780 main.go:141] libmachine: (newest-cni-019918) DBG | domain newest-cni-019918 has defined IP address 192.168.72.152 and MAC address 52:54:00:f8:a8:64 in network mk-newest-cni-019918
	I0914 18:29:16.323389   69780 main.go:141] libmachine: (newest-cni-019918) Calling .DriverName
	I0914 18:29:16.323938   69780 main.go:141] libmachine: (newest-cni-019918) Calling .DriverName
	I0914 18:29:16.324134   69780 main.go:141] libmachine: (newest-cni-019918) Calling .DriverName
	I0914 18:29:16.324234   69780 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0914 18:29:16.324286   69780 main.go:141] libmachine: (newest-cni-019918) Calling .GetSSHHostname
	I0914 18:29:16.324362   69780 ssh_runner.go:195] Run: cat /version.json
	I0914 18:29:16.324390   69780 main.go:141] libmachine: (newest-cni-019918) Calling .GetSSHHostname
	I0914 18:29:16.327228   69780 main.go:141] libmachine: (newest-cni-019918) DBG | domain newest-cni-019918 has defined MAC address 52:54:00:f8:a8:64 in network mk-newest-cni-019918
	I0914 18:29:16.327429   69780 main.go:141] libmachine: (newest-cni-019918) DBG | domain newest-cni-019918 has defined MAC address 52:54:00:f8:a8:64 in network mk-newest-cni-019918
	I0914 18:29:16.327689   69780 main.go:141] libmachine: (newest-cni-019918) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:a8:64", ip: ""} in network mk-newest-cni-019918: {Iface:virbr1 ExpiryTime:2024-09-14 19:29:04 +0000 UTC Type:0 Mac:52:54:00:f8:a8:64 Iaid: IPaddr:192.168.72.152 Prefix:24 Hostname:newest-cni-019918 Clientid:01:52:54:00:f8:a8:64}
	I0914 18:29:16.327713   69780 main.go:141] libmachine: (newest-cni-019918) DBG | domain newest-cni-019918 has defined IP address 192.168.72.152 and MAC address 52:54:00:f8:a8:64 in network mk-newest-cni-019918
	I0914 18:29:16.327740   69780 main.go:141] libmachine: (newest-cni-019918) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:a8:64", ip: ""} in network mk-newest-cni-019918: {Iface:virbr1 ExpiryTime:2024-09-14 19:29:04 +0000 UTC Type:0 Mac:52:54:00:f8:a8:64 Iaid: IPaddr:192.168.72.152 Prefix:24 Hostname:newest-cni-019918 Clientid:01:52:54:00:f8:a8:64}
	I0914 18:29:16.327754   69780 main.go:141] libmachine: (newest-cni-019918) DBG | domain newest-cni-019918 has defined IP address 192.168.72.152 and MAC address 52:54:00:f8:a8:64 in network mk-newest-cni-019918
	I0914 18:29:16.328032   69780 main.go:141] libmachine: (newest-cni-019918) Calling .GetSSHPort
	I0914 18:29:16.328242   69780 main.go:141] libmachine: (newest-cni-019918) Calling .GetSSHKeyPath
	I0914 18:29:16.328255   69780 main.go:141] libmachine: (newest-cni-019918) Calling .GetSSHPort
	I0914 18:29:16.328386   69780 main.go:141] libmachine: (newest-cni-019918) Calling .GetSSHUsername
	I0914 18:29:16.328406   69780 main.go:141] libmachine: (newest-cni-019918) Calling .GetSSHKeyPath
	I0914 18:29:16.328494   69780 sshutil.go:53] new ssh client: &{IP:192.168.72.152 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/newest-cni-019918/id_rsa Username:docker}
	I0914 18:29:16.328566   69780 main.go:141] libmachine: (newest-cni-019918) Calling .GetSSHUsername
	I0914 18:29:16.328686   69780 sshutil.go:53] new ssh client: &{IP:192.168.72.152 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/newest-cni-019918/id_rsa Username:docker}
	I0914 18:29:16.434336   69780 ssh_runner.go:195] Run: systemctl --version
	I0914 18:29:16.440458   69780 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0914 18:29:16.599991   69780 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0914 18:29:16.606888   69780 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0914 18:29:16.607037   69780 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0914 18:29:16.623978   69780 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0914 18:29:16.624005   69780 start.go:495] detecting cgroup driver to use...
	I0914 18:29:16.624080   69780 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0914 18:29:16.640227   69780 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0914 18:29:16.655312   69780 docker.go:217] disabling cri-docker service (if available) ...
	I0914 18:29:16.655366   69780 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0914 18:29:16.669713   69780 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0914 18:29:16.684083   69780 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0914 18:29:16.808793   69780 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0914 18:29:16.948688   69780 docker.go:233] disabling docker service ...
	I0914 18:29:16.948765   69780 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0914 18:29:16.964098   69780 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0914 18:29:16.977593   69780 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0914 18:29:17.114105   69780 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0914 18:29:17.252252   69780 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0914 18:29:17.266511   69780 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0914 18:29:17.285853   69780 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0914 18:29:17.285925   69780 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 18:29:17.296396   69780 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0914 18:29:17.296473   69780 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 18:29:17.307115   69780 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 18:29:17.318153   69780 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 18:29:17.329219   69780 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0914 18:29:17.340110   69780 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 18:29:17.351840   69780 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 18:29:17.368920   69780 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
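
The run of sed commands above rewrites CRI-O's drop-in config before the restart: the pause image is pinned to registry.k8s.io/pause:3.10, the cgroup manager is switched to cgroupfs with conmon placed in the "pod" cgroup, any stale /etc/cni/net.mk directory is removed, and net.ipv4.ip_unprivileged_port_start=0 is added to default_sysctls. The two main substitutions, expressed as a small Go transformation (the starting values in conf are made up for illustration; on the VM the edits run sed against /etc/crio/crio.conf.d/02-crio.conf):

    package main

    import (
        "fmt"
        "regexp"
    )

    func main() {
        // Hypothetical pre-edit drop-in contents.
        conf := "pause_image = \"registry.k8s.io/pause:3.9\"\ncgroup_manager = \"systemd\"\n"

        // Equivalent of: sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|'
        conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
            ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10"`)

        // Equivalent of: sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|'
        conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
            ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)

        fmt.Print(conf)
    }
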
	I0914 18:29:17.380496   69780 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0914 18:29:17.390104   69780 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0914 18:29:17.390191   69780 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0914 18:29:17.403120   69780 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
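
The status-255 sysctl failure above is expected on a fresh guest: /proc/sys/net/bridge/bridge-nf-call-iptables only exists once the br_netfilter module is loaded, so the code falls back to modprobe and then enables IPv4 forwarding. A sketch of that fallback (both steps require root, and the real commands run over SSH on the guest):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    func main() {
        // Missing sysctl file means br_netfilter is not loaded yet
        // (the "cannot stat" / exit status 255 case in the log).
        if _, err := os.Stat("/proc/sys/net/bridge/bridge-nf-call-iptables"); err != nil {
            if err := exec.Command("modprobe", "br_netfilter").Run(); err != nil {
                fmt.Println("modprobe br_netfilter failed:", err)
                return
            }
        }
        // Equivalent of `echo 1 > /proc/sys/net/ipv4/ip_forward`.
        if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1"), 0644); err != nil {
            fmt.Println("enabling ip_forward failed:", err)
        }
    }
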
	I0914 18:29:17.413295   69780 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 18:29:17.549706   69780 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0914 18:29:17.640772   69780 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0914 18:29:17.640839   69780 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0914 18:29:17.645203   69780 start.go:563] Will wait 60s for crictl version
	I0914 18:29:17.645287   69780 ssh_runner.go:195] Run: which crictl
	I0914 18:29:17.648819   69780 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0914 18:29:17.689762   69780 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0914 18:29:17.689859   69780 ssh_runner.go:195] Run: crio --version
	I0914 18:29:17.719949   69780 ssh_runner.go:195] Run: crio --version
	I0914 18:29:17.748804   69780 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0914 18:29:17.749934   69780 main.go:141] libmachine: (newest-cni-019918) Calling .GetIP
	I0914 18:29:17.752300   69780 main.go:141] libmachine: (newest-cni-019918) DBG | domain newest-cni-019918 has defined MAC address 52:54:00:f8:a8:64 in network mk-newest-cni-019918
	I0914 18:29:17.752663   69780 main.go:141] libmachine: (newest-cni-019918) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:a8:64", ip: ""} in network mk-newest-cni-019918: {Iface:virbr1 ExpiryTime:2024-09-14 19:29:04 +0000 UTC Type:0 Mac:52:54:00:f8:a8:64 Iaid: IPaddr:192.168.72.152 Prefix:24 Hostname:newest-cni-019918 Clientid:01:52:54:00:f8:a8:64}
	I0914 18:29:17.752690   69780 main.go:141] libmachine: (newest-cni-019918) DBG | domain newest-cni-019918 has defined IP address 192.168.72.152 and MAC address 52:54:00:f8:a8:64 in network mk-newest-cni-019918
	I0914 18:29:17.752906   69780 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0914 18:29:17.756973   69780 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0914 18:29:17.772213   69780 out.go:177]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I0914 18:29:17.773490   69780 kubeadm.go:883] updating cluster {Name:newest-cni-019918 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19643/minikube-v1.34.0-1726281733-19643-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726281268-19643@sha256:cce8be0c1ac4e3d852132008ef1cc1dcf5b79f708d025db83f146ae65db32e8e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:newest-cni-019918 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.152 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0914 18:29:17.773617   69780 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0914 18:29:17.773679   69780 ssh_runner.go:195] Run: sudo crictl images --output json
	I0914 18:29:17.806874   69780 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
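
The preload check works by listing images through crictl and looking for the expected kube-apiserver tag; since it is missing here, the roughly 370MB preloaded-images tarball is copied in and unpacked instead, after which the second `crictl images` run below finds everything. A sketch of that check; the JSON field names are my reading of crictl's output format and should be treated as assumptions:

    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
    )

    // imageList mirrors the assumed shape of `crictl images --output json`.
    type imageList struct {
        Images []struct {
            RepoTags []string `json:"repoTags"`
        } `json:"images"`
    }

    // hasImage reports whether the runtime already has the given tag loaded.
    func hasImage(want string) (bool, error) {
        out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
        if err != nil {
            return false, err
        }
        var list imageList
        if err := json.Unmarshal(out, &list); err != nil {
            return false, err
        }
        for _, img := range list.Images {
            for _, tag := range img.RepoTags {
                if tag == want {
                    return true, nil
                }
            }
        }
        return false, nil
    }

    func main() {
        ok, err := hasImage("registry.k8s.io/kube-apiserver:v1.31.1")
        fmt.Println("preloaded:", ok, "err:", err)
    }
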
	I0914 18:29:17.806945   69780 ssh_runner.go:195] Run: which lz4
	I0914 18:29:17.811532   69780 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0914 18:29:17.815600   69780 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0914 18:29:17.815634   69780 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I0914 18:29:19.080283   69780 crio.go:462] duration metric: took 1.268800282s to copy over tarball
	I0914 18:29:19.080373   69780 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0914 18:29:21.089800   69780 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.009398295s)
	I0914 18:29:21.089830   69780 crio.go:469] duration metric: took 2.009515284s to extract the tarball
	I0914 18:29:21.089839   69780 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0914 18:29:21.127649   69780 ssh_runner.go:195] Run: sudo crictl images --output json
	I0914 18:29:21.173441   69780 crio.go:514] all images are preloaded for cri-o runtime.
	I0914 18:29:21.173462   69780 cache_images.go:84] Images are preloaded, skipping loading
	I0914 18:29:21.173469   69780 kubeadm.go:934] updating node { 192.168.72.152 8443 v1.31.1 crio true true} ...
	I0914 18:29:21.173565   69780 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --feature-gates=ServerSideApply=true --hostname-override=newest-cni-019918 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.152
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:newest-cni-019918 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0914 18:29:21.173634   69780 ssh_runner.go:195] Run: crio config
	I0914 18:29:21.225010   69780 cni.go:84] Creating CNI manager for ""
	I0914 18:29:21.225031   69780 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0914 18:29:21.225040   69780 kubeadm.go:84] Using pod CIDR: 10.42.0.0/16
	I0914 18:29:21.225060   69780 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.72.152 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-019918 NodeName:newest-cni-019918 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota feature-gates:ServerSideApply=true] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.152"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.152 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0914 18:29:21.225188   69780 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.152
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-019918"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.152
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.152"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	    feature-gates: "ServerSideApply=true"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0914 18:29:21.225262   69780 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0914 18:29:21.236201   69780 binaries.go:44] Found k8s binaries, skipping transfer
	I0914 18:29:21.236273   69780 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0914 18:29:21.246079   69780 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (354 bytes)
	I0914 18:29:21.263955   69780 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0914 18:29:21.281221   69780 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2285 bytes)
	I0914 18:29:21.299081   69780 ssh_runner.go:195] Run: grep 192.168.72.152	control-plane.minikube.internal$ /etc/hosts
	I0914 18:29:21.303004   69780 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.152	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0914 18:29:21.316123   69780 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 18:29:21.433478   69780 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0914 18:29:21.450756   69780 certs.go:68] Setting up /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/newest-cni-019918 for IP: 192.168.72.152
	I0914 18:29:21.450782   69780 certs.go:194] generating shared ca certs ...
	I0914 18:29:21.450803   69780 certs.go:226] acquiring lock for ca certs: {Name:mkb663a3180967f5f94f0c355b2cd55067394331 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 18:29:21.450994   69780 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19643-8806/.minikube/ca.key
	I0914 18:29:21.451056   69780 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19643-8806/.minikube/proxy-client-ca.key
	I0914 18:29:21.451072   69780 certs.go:256] generating profile certs ...
	I0914 18:29:21.451128   69780 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/newest-cni-019918/client.key
	I0914 18:29:21.451142   69780 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/newest-cni-019918/client.crt with IP's: []
	I0914 18:29:21.609674   69780 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/newest-cni-019918/client.crt ...
	I0914 18:29:21.609705   69780 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/newest-cni-019918/client.crt: {Name:mka7be01ee5ccea852b6e83caa6a045e9a9dccd9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 18:29:21.609880   69780 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/newest-cni-019918/client.key ...
	I0914 18:29:21.609891   69780 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/newest-cni-019918/client.key: {Name:mkadfbc79cc03c51f55a7c3f16760a4e61621567 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 18:29:21.609969   69780 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/newest-cni-019918/apiserver.key.5a1ba3c9
	I0914 18:29:21.609990   69780 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/newest-cni-019918/apiserver.crt.5a1ba3c9 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.72.152]
	I0914 18:29:21.852736   69780 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/newest-cni-019918/apiserver.crt.5a1ba3c9 ...
	I0914 18:29:21.852765   69780 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/newest-cni-019918/apiserver.crt.5a1ba3c9: {Name:mk646aacbac221857a5c77f026b5568440735bcd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 18:29:21.852924   69780 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/newest-cni-019918/apiserver.key.5a1ba3c9 ...
	I0914 18:29:21.852936   69780 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/newest-cni-019918/apiserver.key.5a1ba3c9: {Name:mk1c4f4dd4636087a5bc8b08df14857714c8743a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 18:29:21.853011   69780 certs.go:381] copying /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/newest-cni-019918/apiserver.crt.5a1ba3c9 -> /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/newest-cni-019918/apiserver.crt
	I0914 18:29:21.853081   69780 certs.go:385] copying /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/newest-cni-019918/apiserver.key.5a1ba3c9 -> /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/newest-cni-019918/apiserver.key
	I0914 18:29:21.853134   69780 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/newest-cni-019918/proxy-client.key
	I0914 18:29:21.853150   69780 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/newest-cni-019918/proxy-client.crt with IP's: []
	I0914 18:29:21.934331   69780 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/newest-cni-019918/proxy-client.crt ...
	I0914 18:29:21.934362   69780 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/newest-cni-019918/proxy-client.crt: {Name:mk6d35aa45ee20f7461817504ab705a9d6caf54a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 18:29:21.934523   69780 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/newest-cni-019918/proxy-client.key ...
	I0914 18:29:21.934536   69780 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/newest-cni-019918/proxy-client.key: {Name:mkd439c64ec1aba1cd8734bc09e0758d898a1edc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 18:29:21.934708   69780 certs.go:484] found cert: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/16016.pem (1338 bytes)
	W0914 18:29:21.934745   69780 certs.go:480] ignoring /home/jenkins/minikube-integration/19643-8806/.minikube/certs/16016_empty.pem, impossibly tiny 0 bytes
	I0914 18:29:21.934754   69780 certs.go:484] found cert: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca-key.pem (1679 bytes)
	I0914 18:29:21.934777   69780 certs.go:484] found cert: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca.pem (1082 bytes)
	I0914 18:29:21.934799   69780 certs.go:484] found cert: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/cert.pem (1123 bytes)
	I0914 18:29:21.934821   69780 certs.go:484] found cert: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/key.pem (1675 bytes)
	I0914 18:29:21.934856   69780 certs.go:484] found cert: /home/jenkins/minikube-integration/19643-8806/.minikube/files/etc/ssl/certs/160162.pem (1708 bytes)
	I0914 18:29:21.935579   69780 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0914 18:29:21.963732   69780 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0914 18:29:21.988702   69780 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0914 18:29:22.018058   69780 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0914 18:29:22.044413   69780 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/newest-cni-019918/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0914 18:29:22.069781   69780 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/newest-cni-019918/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0914 18:29:22.097548   69780 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/newest-cni-019918/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0914 18:29:22.122330   69780 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/newest-cni-019918/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0914 18:29:22.147971   69780 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/certs/16016.pem --> /usr/share/ca-certificates/16016.pem (1338 bytes)
	I0914 18:29:22.171899   69780 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/files/etc/ssl/certs/160162.pem --> /usr/share/ca-certificates/160162.pem (1708 bytes)
	I0914 18:29:22.195252   69780 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0914 18:29:22.221431   69780 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0914 18:29:22.245104   69780 ssh_runner.go:195] Run: openssl version
	I0914 18:29:22.252179   69780 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16016.pem && ln -fs /usr/share/ca-certificates/16016.pem /etc/ssl/certs/16016.pem"
	I0914 18:29:22.267419   69780 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16016.pem
	I0914 18:29:22.275149   69780 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 14 17:01 /usr/share/ca-certificates/16016.pem
	I0914 18:29:22.275219   69780 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16016.pem
	I0914 18:29:22.281577   69780 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16016.pem /etc/ssl/certs/51391683.0"
	I0914 18:29:22.291755   69780 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/160162.pem && ln -fs /usr/share/ca-certificates/160162.pem /etc/ssl/certs/160162.pem"
	I0914 18:29:22.301473   69780 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/160162.pem
	I0914 18:29:22.305547   69780 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 14 17:01 /usr/share/ca-certificates/160162.pem
	I0914 18:29:22.305600   69780 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/160162.pem
	I0914 18:29:22.310744   69780 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/160162.pem /etc/ssl/certs/3ec20f2e.0"
	I0914 18:29:22.320632   69780 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0914 18:29:22.330782   69780 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0914 18:29:22.334959   69780 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 14 16:44 /usr/share/ca-certificates/minikubeCA.pem
	I0914 18:29:22.335016   69780 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0914 18:29:22.340307   69780 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0914 18:29:22.350551   69780 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0914 18:29:22.354181   69780 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0914 18:29:22.354230   69780 kubeadm.go:392] StartCluster: {Name:newest-cni-019918 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19643/minikube-v1.34.0-1726281733-19643-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726281268-19643@sha256:cce8be0c1ac4e3d852132008ef1cc1dcf5b79f708d025db83f146ae65db32e8e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:newest-cni-019918 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.152 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0914 18:29:22.354305   69780 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0914 18:29:22.354348   69780 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0914 18:29:22.397133   69780 cri.go:89] found id: ""
	I0914 18:29:22.397201   69780 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0914 18:29:22.407381   69780 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0914 18:29:22.416984   69780 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0914 18:29:22.426595   69780 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0914 18:29:22.426613   69780 kubeadm.go:157] found existing configuration files:
	
	I0914 18:29:22.426661   69780 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0914 18:29:22.435028   69780 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0914 18:29:22.435097   69780 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0914 18:29:22.443909   69780 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0914 18:29:22.452928   69780 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0914 18:29:22.452993   69780 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0914 18:29:22.462153   69780 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0914 18:29:22.470781   69780 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0914 18:29:22.470848   69780 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0914 18:29:22.481664   69780 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0914 18:29:22.490394   69780 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0914 18:29:22.490459   69780 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0914 18:29:22.500819   69780 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0914 18:29:22.617460   69780 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0914 18:29:22.617571   69780 kubeadm.go:310] [preflight] Running pre-flight checks
	I0914 18:29:22.713517   69780 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0914 18:29:22.713663   69780 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0914 18:29:22.713801   69780 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0914 18:29:22.724795   69780 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0914 18:29:22.727239   69780 out.go:235]   - Generating certificates and keys ...
	I0914 18:29:22.727379   69780 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0914 18:29:22.727505   69780 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0914 18:29:22.904711   69780 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0914 18:29:23.022359   69780 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0914 18:29:23.139340   69780 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0914 18:29:23.463711   69780 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0914 18:29:23.564387   69780 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0914 18:29:23.564681   69780 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-019918] and IPs [192.168.72.152 127.0.0.1 ::1]
	I0914 18:29:23.775207   69780 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0914 18:29:23.775536   69780 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-019918] and IPs [192.168.72.152 127.0.0.1 ::1]
	I0914 18:29:23.888470   69780 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0914 18:29:23.962822   69780 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0914 18:29:24.083175   69780 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0914 18:29:24.083459   69780 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0914 18:29:24.645965   69780 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0914 18:29:24.756339   69780 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0914 18:29:24.859996   69780 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0914 18:29:24.989772   69780 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0914 18:29:25.191968   69780 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0914 18:29:25.192689   69780 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0914 18:29:25.196127   69780 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0914 18:29:25.198415   69780 out.go:235]   - Booting up control plane ...
	I0914 18:29:25.198525   69780 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0914 18:29:25.198616   69780 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0914 18:29:25.204194   69780 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0914 18:29:25.223812   69780 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0914 18:29:25.232643   69780 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0914 18:29:25.232757   69780 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0914 18:29:25.375737   69780 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0914 18:29:25.375884   69780 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0914 18:29:26.376369   69780 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001293896s
	I0914 18:29:26.376477   69780 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0914 18:29:31.376173   69780 kubeadm.go:310] [api-check] The API server is healthy after 5.001431153s
	I0914 18:29:31.389336   69780 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0914 18:29:31.414046   69780 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0914 18:29:31.452567   69780 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0914 18:29:31.452817   69780 kubeadm.go:310] [mark-control-plane] Marking the node newest-cni-019918 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0914 18:29:31.465548   69780 kubeadm.go:310] [bootstrap-token] Using token: ck0yro.40u5zsga9wo0laao
	
	
	==> CRI-O <==
	Sep 14 18:29:32 no-preload-168587 crio[707]: time="2024-09-14 18:29:32.882081014Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e989e804-9ea5-4460-971e-b1e66c94f089 name=/runtime.v1.RuntimeService/Version
	Sep 14 18:29:32 no-preload-168587 crio[707]: time="2024-09-14 18:29:32.883906888Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=eb2b0bcf-2a23-4a69-93a7-7f7a2324b2e6 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 14 18:29:32 no-preload-168587 crio[707]: time="2024-09-14 18:29:32.884611189Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726338572884567649,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=eb2b0bcf-2a23-4a69-93a7-7f7a2324b2e6 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 14 18:29:32 no-preload-168587 crio[707]: time="2024-09-14 18:29:32.885399624Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=33828cd0-7c60-4a95-91d5-2d19dc2d1e43 name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 18:29:32 no-preload-168587 crio[707]: time="2024-09-14 18:29:32.885493339Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=33828cd0-7c60-4a95-91d5-2d19dc2d1e43 name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 18:29:32 no-preload-168587 crio[707]: time="2024-09-14 18:29:32.885788719Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:cfaed3fc943fc19b68f4391b6cea58d5b9e862d6e30de59cece475d8eadcbab5,PodSandboxId:8acc590924839c9c21b63258dc7a84ee1142419a3d2da023aea8e27f4aeb6f08,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726337680429901464,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 57b6d85d-fc04-42da-9452-3f24824b8377,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f9d600e4a1dd9cf36a64408a6c099fa4e7dd7d2ec671638fdcb81460d530efe,PodSandboxId:1d79c7b4b2a16ebfe4d4525b7b629b785c9aab7528a8e48c0027baf882dc028a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726337680409230228,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-qrgr9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 31b611b3-d861-451f-8c17-30bed52994a0,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:95a7091fc8692fc8d77f2e7a0b62c45f40efa5364223130ee17c6feb309d604d,PodSandboxId:a23867fe0fc270298a52bdd674447946ce4f111b956833f35302b7278b86c368,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726337680314085708,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-nzpdb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac
d2d488-301e-4d00-a17a-0e06ea5d9691,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:feceee2bacff40291f6daff2ccdc08e3e51bd6da7fcc93d21080c7227693e751,PodSandboxId:ef42eca30406505665370799dde6c81e80f47bab6db2e3d116cfe42f9b232b06,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:
1726337680207515694,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xdj6b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d3080090-4f40-49e1-9c3e-ccceb37cc952,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1e840de5726f1c2bfcfbd50bd5ee12dcad4eb9761ec850513c1c9642ea3842f5,PodSandboxId:340d402dc2fd092e77871f6158ea373c89210922054918530870656d3eb0a518,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726337669078810507,Labels:map[string]s
tring{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-168587,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e3eb20d1ab71f721a56ab5915a453cf1,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5ce526810d6c8eea17470493ff66c9f49a70886febdf256562b75aa84d8444b2,PodSandboxId:71c69668af05fb2326b385785b321ab030c73efe64f54ec927e6949e75a54b1d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726337669015880668,Labels:map[string]string{io.kubernetes.container.nam
e: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-168587,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2086d13597e7c6d9765a66af4193169,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0197ffbc2979d4120aae294136ad27f9c345ca48e8355273231be9ae7240f7ff,PodSandboxId:a3c494d523c1990093f3bb667782a077a957e4b61f226e04383cc07c70ae8784,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726337669014676053,Labels:map[string]string{io.kubernetes.container.name: kube-sched
uler,io.kubernetes.pod.name: kube-scheduler-no-preload-168587,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d763ec12cbc7e3071dad1cec3727a213,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f5ee6161f59dddd87a981b174e9ed7d96412afb9c9ebb2bc51c9f1cc36ee11cf,PodSandboxId:1a3dc35d32452d59ef7fa5862f1818bfa7557ce9e7b6de0a736f2aefda6c3684,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726337668970001781,Labels:map[string]string{io.kubernetes.container.name: kube-controlle
r-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-168587,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 21e50b1820f085ff0a8dfee7f5214e80,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:daf75a85550981793df6004b119a63cf610a35685a49a69dd7a91ec0c826055c,PodSandboxId:29e5c57da77a7868e0b7af65eb646d0f1b15877f520654ab8a39e9a6d1145216,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726337383835129496,Labels:map[string]string{io.kubernetes.container.name: kube-apise
rver,io.kubernetes.pod.name: kube-apiserver-no-preload-168587,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2086d13597e7c6d9765a66af4193169,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=33828cd0-7c60-4a95-91d5-2d19dc2d1e43 name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 18:29:32 no-preload-168587 crio[707]: time="2024-09-14 18:29:32.927291648Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=8ee463a0-a21f-4d63-9549-da5e32f37c8f name=/runtime.v1.RuntimeService/Version
	Sep 14 18:29:32 no-preload-168587 crio[707]: time="2024-09-14 18:29:32.927438944Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=8ee463a0-a21f-4d63-9549-da5e32f37c8f name=/runtime.v1.RuntimeService/Version
	Sep 14 18:29:32 no-preload-168587 crio[707]: time="2024-09-14 18:29:32.929398251Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f3eced78-6368-40c3-a9a9-6de80c4a0978 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 14 18:29:32 no-preload-168587 crio[707]: time="2024-09-14 18:29:32.929894701Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726338572929854922,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f3eced78-6368-40c3-a9a9-6de80c4a0978 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 14 18:29:32 no-preload-168587 crio[707]: time="2024-09-14 18:29:32.930535432Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=be8364ad-907c-4ab9-87cb-856b5b4df34f name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 18:29:32 no-preload-168587 crio[707]: time="2024-09-14 18:29:32.930613806Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=be8364ad-907c-4ab9-87cb-856b5b4df34f name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 18:29:32 no-preload-168587 crio[707]: time="2024-09-14 18:29:32.930904725Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:cfaed3fc943fc19b68f4391b6cea58d5b9e862d6e30de59cece475d8eadcbab5,PodSandboxId:8acc590924839c9c21b63258dc7a84ee1142419a3d2da023aea8e27f4aeb6f08,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726337680429901464,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 57b6d85d-fc04-42da-9452-3f24824b8377,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f9d600e4a1dd9cf36a64408a6c099fa4e7dd7d2ec671638fdcb81460d530efe,PodSandboxId:1d79c7b4b2a16ebfe4d4525b7b629b785c9aab7528a8e48c0027baf882dc028a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726337680409230228,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-qrgr9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 31b611b3-d861-451f-8c17-30bed52994a0,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:95a7091fc8692fc8d77f2e7a0b62c45f40efa5364223130ee17c6feb309d604d,PodSandboxId:a23867fe0fc270298a52bdd674447946ce4f111b956833f35302b7278b86c368,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726337680314085708,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-nzpdb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac
d2d488-301e-4d00-a17a-0e06ea5d9691,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:feceee2bacff40291f6daff2ccdc08e3e51bd6da7fcc93d21080c7227693e751,PodSandboxId:ef42eca30406505665370799dde6c81e80f47bab6db2e3d116cfe42f9b232b06,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:
1726337680207515694,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xdj6b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d3080090-4f40-49e1-9c3e-ccceb37cc952,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1e840de5726f1c2bfcfbd50bd5ee12dcad4eb9761ec850513c1c9642ea3842f5,PodSandboxId:340d402dc2fd092e77871f6158ea373c89210922054918530870656d3eb0a518,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726337669078810507,Labels:map[string]s
tring{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-168587,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e3eb20d1ab71f721a56ab5915a453cf1,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5ce526810d6c8eea17470493ff66c9f49a70886febdf256562b75aa84d8444b2,PodSandboxId:71c69668af05fb2326b385785b321ab030c73efe64f54ec927e6949e75a54b1d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726337669015880668,Labels:map[string]string{io.kubernetes.container.nam
e: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-168587,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2086d13597e7c6d9765a66af4193169,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0197ffbc2979d4120aae294136ad27f9c345ca48e8355273231be9ae7240f7ff,PodSandboxId:a3c494d523c1990093f3bb667782a077a957e4b61f226e04383cc07c70ae8784,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726337669014676053,Labels:map[string]string{io.kubernetes.container.name: kube-sched
uler,io.kubernetes.pod.name: kube-scheduler-no-preload-168587,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d763ec12cbc7e3071dad1cec3727a213,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f5ee6161f59dddd87a981b174e9ed7d96412afb9c9ebb2bc51c9f1cc36ee11cf,PodSandboxId:1a3dc35d32452d59ef7fa5862f1818bfa7557ce9e7b6de0a736f2aefda6c3684,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726337668970001781,Labels:map[string]string{io.kubernetes.container.name: kube-controlle
r-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-168587,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 21e50b1820f085ff0a8dfee7f5214e80,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:daf75a85550981793df6004b119a63cf610a35685a49a69dd7a91ec0c826055c,PodSandboxId:29e5c57da77a7868e0b7af65eb646d0f1b15877f520654ab8a39e9a6d1145216,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726337383835129496,Labels:map[string]string{io.kubernetes.container.name: kube-apise
rver,io.kubernetes.pod.name: kube-apiserver-no-preload-168587,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2086d13597e7c6d9765a66af4193169,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=be8364ad-907c-4ab9-87cb-856b5b4df34f name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 18:29:32 no-preload-168587 crio[707]: time="2024-09-14 18:29:32.966894009Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=4d823061-a28d-4f67-91f5-ec78ee5f8eee name=/runtime.v1.RuntimeService/Version
	Sep 14 18:29:32 no-preload-168587 crio[707]: time="2024-09-14 18:29:32.966983842Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=4d823061-a28d-4f67-91f5-ec78ee5f8eee name=/runtime.v1.RuntimeService/Version
	Sep 14 18:29:32 no-preload-168587 crio[707]: time="2024-09-14 18:29:32.968175698Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=8da329c6-b899-420c-b2ef-6f729608024c name=/runtime.v1.ImageService/ImageFsInfo
	Sep 14 18:29:32 no-preload-168587 crio[707]: time="2024-09-14 18:29:32.968552562Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726338572968523354,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8da329c6-b899-420c-b2ef-6f729608024c name=/runtime.v1.ImageService/ImageFsInfo
	Sep 14 18:29:32 no-preload-168587 crio[707]: time="2024-09-14 18:29:32.969008622Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7300cf1a-76d1-4779-8279-cde23985c46a name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 18:29:32 no-preload-168587 crio[707]: time="2024-09-14 18:29:32.969149760Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7300cf1a-76d1-4779-8279-cde23985c46a name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 18:29:32 no-preload-168587 crio[707]: time="2024-09-14 18:29:32.969369752Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:cfaed3fc943fc19b68f4391b6cea58d5b9e862d6e30de59cece475d8eadcbab5,PodSandboxId:8acc590924839c9c21b63258dc7a84ee1142419a3d2da023aea8e27f4aeb6f08,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726337680429901464,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 57b6d85d-fc04-42da-9452-3f24824b8377,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f9d600e4a1dd9cf36a64408a6c099fa4e7dd7d2ec671638fdcb81460d530efe,PodSandboxId:1d79c7b4b2a16ebfe4d4525b7b629b785c9aab7528a8e48c0027baf882dc028a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726337680409230228,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-qrgr9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 31b611b3-d861-451f-8c17-30bed52994a0,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:95a7091fc8692fc8d77f2e7a0b62c45f40efa5364223130ee17c6feb309d604d,PodSandboxId:a23867fe0fc270298a52bdd674447946ce4f111b956833f35302b7278b86c368,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726337680314085708,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-nzpdb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac
d2d488-301e-4d00-a17a-0e06ea5d9691,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:feceee2bacff40291f6daff2ccdc08e3e51bd6da7fcc93d21080c7227693e751,PodSandboxId:ef42eca30406505665370799dde6c81e80f47bab6db2e3d116cfe42f9b232b06,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:
1726337680207515694,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xdj6b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d3080090-4f40-49e1-9c3e-ccceb37cc952,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1e840de5726f1c2bfcfbd50bd5ee12dcad4eb9761ec850513c1c9642ea3842f5,PodSandboxId:340d402dc2fd092e77871f6158ea373c89210922054918530870656d3eb0a518,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726337669078810507,Labels:map[string]s
tring{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-168587,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e3eb20d1ab71f721a56ab5915a453cf1,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5ce526810d6c8eea17470493ff66c9f49a70886febdf256562b75aa84d8444b2,PodSandboxId:71c69668af05fb2326b385785b321ab030c73efe64f54ec927e6949e75a54b1d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726337669015880668,Labels:map[string]string{io.kubernetes.container.nam
e: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-168587,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2086d13597e7c6d9765a66af4193169,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0197ffbc2979d4120aae294136ad27f9c345ca48e8355273231be9ae7240f7ff,PodSandboxId:a3c494d523c1990093f3bb667782a077a957e4b61f226e04383cc07c70ae8784,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726337669014676053,Labels:map[string]string{io.kubernetes.container.name: kube-sched
uler,io.kubernetes.pod.name: kube-scheduler-no-preload-168587,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d763ec12cbc7e3071dad1cec3727a213,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f5ee6161f59dddd87a981b174e9ed7d96412afb9c9ebb2bc51c9f1cc36ee11cf,PodSandboxId:1a3dc35d32452d59ef7fa5862f1818bfa7557ce9e7b6de0a736f2aefda6c3684,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726337668970001781,Labels:map[string]string{io.kubernetes.container.name: kube-controlle
r-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-168587,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 21e50b1820f085ff0a8dfee7f5214e80,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:daf75a85550981793df6004b119a63cf610a35685a49a69dd7a91ec0c826055c,PodSandboxId:29e5c57da77a7868e0b7af65eb646d0f1b15877f520654ab8a39e9a6d1145216,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726337383835129496,Labels:map[string]string{io.kubernetes.container.name: kube-apise
rver,io.kubernetes.pod.name: kube-apiserver-no-preload-168587,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2086d13597e7c6d9765a66af4193169,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7300cf1a-76d1-4779-8279-cde23985c46a name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 18:29:32 no-preload-168587 crio[707]: time="2024-09-14 18:29:32.985433909Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=f26f715e-9fec-4232-b80d-b840a7c43a04 name=/runtime.v1.RuntimeService/ListPodSandbox
	Sep 14 18:29:32 no-preload-168587 crio[707]: time="2024-09-14 18:29:32.985699498Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:26715b4b5d93abb245a302a4ed80c1ebcbd492e9d5842ea27ea7bfbe8dca54f8,Metadata:&PodSandboxMetadata{Name:metrics-server-6867b74b74-cmcz4,Uid:24cea6b3-a107-4110-ac29-88389b55bbdc,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1726337680568726038,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: metrics-server-6867b74b74-cmcz4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 24cea6b3-a107-4110-ac29-88389b55bbdc,k8s-app: metrics-server,pod-template-hash: 6867b74b74,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-14T18:14:39.954880125Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:8acc590924839c9c21b63258dc7a84ee1142419a3d2da023aea8e27f4aeb6f08,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:57b6d85d-fc04-42da-9452-3f24824b8377,Na
mespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1726337680176436329,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 57b6d85d-fc04-42da-9452-3f24824b8377,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volu
mes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-09-14T18:14:39.869863481Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:a23867fe0fc270298a52bdd674447946ce4f111b956833f35302b7278b86c368,Metadata:&PodSandboxMetadata{Name:coredns-7c65d6cfc9-nzpdb,Uid:acd2d488-301e-4d00-a17a-0e06ea5d9691,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1726337679883252072,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7c65d6cfc9-nzpdb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: acd2d488-301e-4d00-a17a-0e06ea5d9691,k8s-app: kube-dns,pod-template-hash: 7c65d6cfc9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-14T18:14:39.561428885Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:ef42eca30406505665370799dde6c81e80f47bab6db2e3d116cfe42f9b232b06,Metadata:&PodSandboxMetadata{Name:kube-proxy-xdj6b,Uid:d3080090-4f40-49e1-9c3e-ccc
eb37cc952,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1726337679861411649,Labels:map[string]string{controller-revision-hash: 648b489c5b,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-xdj6b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d3080090-4f40-49e1-9c3e-ccceb37cc952,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-14T18:14:38.945745004Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:1d79c7b4b2a16ebfe4d4525b7b629b785c9aab7528a8e48c0027baf882dc028a,Metadata:&PodSandboxMetadata{Name:coredns-7c65d6cfc9-qrgr9,Uid:31b611b3-d861-451f-8c17-30bed52994a0,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1726337679843511019,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7c65d6cfc9-qrgr9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 31b611b3-d861-451f-8c17-30bed52994a0,k8s-app: kube-dns,pod-templat
e-hash: 7c65d6cfc9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-14T18:14:39.535694815Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:a3c494d523c1990093f3bb667782a077a957e4b61f226e04383cc07c70ae8784,Metadata:&PodSandboxMetadata{Name:kube-scheduler-no-preload-168587,Uid:d763ec12cbc7e3071dad1cec3727a213,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1726337668840684807,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-no-preload-168587,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d763ec12cbc7e3071dad1cec3727a213,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: d763ec12cbc7e3071dad1cec3727a213,kubernetes.io/config.seen: 2024-09-14T18:14:28.383935371Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:340d402dc2fd092e77871f6158ea373c89210922054918530870656d3eb0a518,Metadata:&PodSandboxMetadata{Name:etcd-no-preload-1
68587,Uid:e3eb20d1ab71f721a56ab5915a453cf1,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1726337668839628861,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-no-preload-168587,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e3eb20d1ab71f721a56ab5915a453cf1,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.38:2379,kubernetes.io/config.hash: e3eb20d1ab71f721a56ab5915a453cf1,kubernetes.io/config.seen: 2024-09-14T18:14:28.383936310Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:1a3dc35d32452d59ef7fa5862f1818bfa7557ce9e7b6de0a736f2aefda6c3684,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-no-preload-168587,Uid:21e50b1820f085ff0a8dfee7f5214e80,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1726337668816547933,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kub
ernetes.pod.name: kube-controller-manager-no-preload-168587,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 21e50b1820f085ff0a8dfee7f5214e80,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 21e50b1820f085ff0a8dfee7f5214e80,kubernetes.io/config.seen: 2024-09-14T18:14:28.383934013Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:71c69668af05fb2326b385785b321ab030c73efe64f54ec927e6949e75a54b1d,Metadata:&PodSandboxMetadata{Name:kube-apiserver-no-preload-168587,Uid:b2086d13597e7c6d9765a66af4193169,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1726337668811727472,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-no-preload-168587,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2086d13597e7c6d9765a66af4193169,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.38:8443,
kubernetes.io/config.hash: b2086d13597e7c6d9765a66af4193169,kubernetes.io/config.seen: 2024-09-14T18:14:28.383929767Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:29e5c57da77a7868e0b7af65eb646d0f1b15877f520654ab8a39e9a6d1145216,Metadata:&PodSandboxMetadata{Name:kube-apiserver-no-preload-168587,Uid:b2086d13597e7c6d9765a66af4193169,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1726337383605192550,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-no-preload-168587,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2086d13597e7c6d9765a66af4193169,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.38:8443,kubernetes.io/config.hash: b2086d13597e7c6d9765a66af4193169,kubernetes.io/config.seen: 2024-09-14T18:09:43.123613246Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/intercep
tors.go:74" id=f26f715e-9fec-4232-b80d-b840a7c43a04 name=/runtime.v1.RuntimeService/ListPodSandbox
	Sep 14 18:29:32 no-preload-168587 crio[707]: time="2024-09-14 18:29:32.986432375Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=aff1e549-35b9-4f0a-9c49-7b5af9e9bdb7 name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 18:29:32 no-preload-168587 crio[707]: time="2024-09-14 18:29:32.986487233Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=aff1e549-35b9-4f0a-9c49-7b5af9e9bdb7 name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 18:29:32 no-preload-168587 crio[707]: time="2024-09-14 18:29:32.986696110Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:cfaed3fc943fc19b68f4391b6cea58d5b9e862d6e30de59cece475d8eadcbab5,PodSandboxId:8acc590924839c9c21b63258dc7a84ee1142419a3d2da023aea8e27f4aeb6f08,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726337680429901464,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 57b6d85d-fc04-42da-9452-3f24824b8377,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f9d600e4a1dd9cf36a64408a6c099fa4e7dd7d2ec671638fdcb81460d530efe,PodSandboxId:1d79c7b4b2a16ebfe4d4525b7b629b785c9aab7528a8e48c0027baf882dc028a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726337680409230228,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-qrgr9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 31b611b3-d861-451f-8c17-30bed52994a0,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:95a7091fc8692fc8d77f2e7a0b62c45f40efa5364223130ee17c6feb309d604d,PodSandboxId:a23867fe0fc270298a52bdd674447946ce4f111b956833f35302b7278b86c368,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726337680314085708,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-nzpdb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac
d2d488-301e-4d00-a17a-0e06ea5d9691,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:feceee2bacff40291f6daff2ccdc08e3e51bd6da7fcc93d21080c7227693e751,PodSandboxId:ef42eca30406505665370799dde6c81e80f47bab6db2e3d116cfe42f9b232b06,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:
1726337680207515694,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xdj6b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d3080090-4f40-49e1-9c3e-ccceb37cc952,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1e840de5726f1c2bfcfbd50bd5ee12dcad4eb9761ec850513c1c9642ea3842f5,PodSandboxId:340d402dc2fd092e77871f6158ea373c89210922054918530870656d3eb0a518,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726337669078810507,Labels:map[string]s
tring{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-168587,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e3eb20d1ab71f721a56ab5915a453cf1,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5ce526810d6c8eea17470493ff66c9f49a70886febdf256562b75aa84d8444b2,PodSandboxId:71c69668af05fb2326b385785b321ab030c73efe64f54ec927e6949e75a54b1d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726337669015880668,Labels:map[string]string{io.kubernetes.container.nam
e: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-168587,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2086d13597e7c6d9765a66af4193169,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0197ffbc2979d4120aae294136ad27f9c345ca48e8355273231be9ae7240f7ff,PodSandboxId:a3c494d523c1990093f3bb667782a077a957e4b61f226e04383cc07c70ae8784,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726337669014676053,Labels:map[string]string{io.kubernetes.container.name: kube-sched
uler,io.kubernetes.pod.name: kube-scheduler-no-preload-168587,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d763ec12cbc7e3071dad1cec3727a213,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f5ee6161f59dddd87a981b174e9ed7d96412afb9c9ebb2bc51c9f1cc36ee11cf,PodSandboxId:1a3dc35d32452d59ef7fa5862f1818bfa7557ce9e7b6de0a736f2aefda6c3684,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726337668970001781,Labels:map[string]string{io.kubernetes.container.name: kube-controlle
r-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-168587,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 21e50b1820f085ff0a8dfee7f5214e80,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:daf75a85550981793df6004b119a63cf610a35685a49a69dd7a91ec0c826055c,PodSandboxId:29e5c57da77a7868e0b7af65eb646d0f1b15877f520654ab8a39e9a6d1145216,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726337383835129496,Labels:map[string]string{io.kubernetes.container.name: kube-apise
rver,io.kubernetes.pod.name: kube-apiserver-no-preload-168587,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2086d13597e7c6d9765a66af4193169,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=aff1e549-35b9-4f0a-9c49-7b5af9e9bdb7 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	cfaed3fc943fc       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   14 minutes ago      Running             storage-provisioner       0                   8acc590924839       storage-provisioner
	2f9d600e4a1dd       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   14 minutes ago      Running             coredns                   0                   1d79c7b4b2a16       coredns-7c65d6cfc9-qrgr9
	95a7091fc8692       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   14 minutes ago      Running             coredns                   0                   a23867fe0fc27       coredns-7c65d6cfc9-nzpdb
	feceee2bacff4       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561   14 minutes ago      Running             kube-proxy                0                   ef42eca304065       kube-proxy-xdj6b
	1e840de5726f1       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   15 minutes ago      Running             etcd                      2                   340d402dc2fd0       etcd-no-preload-168587
	5ce526810d6c8       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee   15 minutes ago      Running             kube-apiserver            2                   71c69668af05f       kube-apiserver-no-preload-168587
	0197ffbc2979d       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b   15 minutes ago      Running             kube-scheduler            2                   a3c494d523c19       kube-scheduler-no-preload-168587
	f5ee6161f59dd       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1   15 minutes ago      Running             kube-controller-manager   2                   1a3dc35d32452       kube-controller-manager-no-preload-168587
	daf75a8555098       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee   19 minutes ago      Exited              kube-apiserver            1                   29e5c57da77a7       kube-apiserver-no-preload-168587
	
	
	==> coredns [2f9d600e4a1dd9cf36a64408a6c099fa4e7dd7d2ec671638fdcb81460d530efe] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> coredns [95a7091fc8692fc8d77f2e7a0b62c45f40efa5364223130ee17c6feb309d604d] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> describe nodes <==
	Name:               no-preload-168587
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-168587
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fbeeb744274463b05401c917e5ab21bbaf5ef95a
	                    minikube.k8s.io/name=no-preload-168587
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_14T18_14_35_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 14 Sep 2024 18:14:31 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-168587
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 14 Sep 2024 18:29:32 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 14 Sep 2024 18:24:58 +0000   Sat, 14 Sep 2024 18:14:29 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 14 Sep 2024 18:24:58 +0000   Sat, 14 Sep 2024 18:14:29 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 14 Sep 2024 18:24:58 +0000   Sat, 14 Sep 2024 18:14:29 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 14 Sep 2024 18:24:58 +0000   Sat, 14 Sep 2024 18:14:32 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.38
	  Hostname:    no-preload-168587
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 fdba1f0c25954cbfa58478c74a6c95ca
	  System UUID:                fdba1f0c-2595-4cbf-a584-78c74a6c95ca
	  Boot ID:                    de44ce6f-ef46-437b-b02c-11b6fc1227ec
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7c65d6cfc9-nzpdb                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     14m
	  kube-system                 coredns-7c65d6cfc9-qrgr9                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     14m
	  kube-system                 etcd-no-preload-168587                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         14m
	  kube-system                 kube-apiserver-no-preload-168587             250m (12%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-controller-manager-no-preload-168587    200m (10%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-proxy-xdj6b                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-scheduler-no-preload-168587             100m (5%)     0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 metrics-server-6867b74b74-cmcz4              100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         14m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 14m   kube-proxy       
	  Normal  Starting                 14m   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  14m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  14m   kubelet          Node no-preload-168587 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    14m   kubelet          Node no-preload-168587 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     14m   kubelet          Node no-preload-168587 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           14m   node-controller  Node no-preload-168587 event: Registered Node no-preload-168587 in Controller
	
	
	==> dmesg <==
	[  +0.037593] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.958316] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.912642] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +2.462754] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000009] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.346974] systemd-fstab-generator[629]: Ignoring "noauto" option for root device
	[  +0.061166] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.064543] systemd-fstab-generator[641]: Ignoring "noauto" option for root device
	[  +0.225904] systemd-fstab-generator[655]: Ignoring "noauto" option for root device
	[  +0.135807] systemd-fstab-generator[667]: Ignoring "noauto" option for root device
	[  +0.283852] systemd-fstab-generator[697]: Ignoring "noauto" option for root device
	[ +15.300876] systemd-fstab-generator[1233]: Ignoring "noauto" option for root device
	[  +0.064189] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.675224] systemd-fstab-generator[1355]: Ignoring "noauto" option for root device
	[  +3.939778] kauditd_printk_skb: 97 callbacks suppressed
	[  +5.200936] kauditd_printk_skb: 57 callbacks suppressed
	[Sep14 18:10] kauditd_printk_skb: 28 callbacks suppressed
	[Sep14 18:14] kauditd_printk_skb: 4 callbacks suppressed
	[  +1.403616] systemd-fstab-generator[3004]: Ignoring "noauto" option for root device
	[  +4.857084] kauditd_printk_skb: 56 callbacks suppressed
	[  +2.025689] systemd-fstab-generator[3326]: Ignoring "noauto" option for root device
	[  +4.346195] systemd-fstab-generator[3428]: Ignoring "noauto" option for root device
	[  +0.094058] kauditd_printk_skb: 14 callbacks suppressed
	[  +8.743720] kauditd_printk_skb: 82 callbacks suppressed
	
	
	==> etcd [1e840de5726f1c2bfcfbd50bd5ee12dcad4eb9761ec850513c1c9642ea3842f5] <==
	{"level":"info","ts":"2024-09-14T18:14:30.156780Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-14T18:14:30.156887Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-14T18:14:30.156913Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-14T18:14:30.157575Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-14T18:14:30.158321Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.38:2379"}
	{"level":"info","ts":"2024-09-14T18:24:30.190183Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":687}
	{"level":"info","ts":"2024-09-14T18:24:30.199132Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":687,"took":"8.530562ms","hash":3383118684,"current-db-size-bytes":2277376,"current-db-size":"2.3 MB","current-db-size-in-use-bytes":2277376,"current-db-size-in-use":"2.3 MB"}
	{"level":"info","ts":"2024-09-14T18:24:30.199216Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3383118684,"revision":687,"compact-revision":-1}
	{"level":"info","ts":"2024-09-14T18:29:23.703259Z","caller":"traceutil/trace.go:171","msg":"trace[1420586963] linearizableReadLoop","detail":"{readStateIndex:1360; appliedIndex:1359; }","duration":"224.81403ms","start":"2024-09-14T18:29:23.478409Z","end":"2024-09-14T18:29:23.703223Z","steps":["trace[1420586963] 'read index received'  (duration: 224.60929ms)","trace[1420586963] 'applied index is now lower than readState.Index'  (duration: 204.135µs)"],"step_count":2}
	{"level":"info","ts":"2024-09-14T18:29:23.703552Z","caller":"traceutil/trace.go:171","msg":"trace[515719430] transaction","detail":"{read_only:false; response_revision:1169; number_of_response:1; }","duration":"276.987247ms","start":"2024-09-14T18:29:23.426545Z","end":"2024-09-14T18:29:23.703532Z","steps":["trace[515719430] 'process raft request'  (duration: 276.564981ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-14T18:29:23.703708Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"225.223161ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-14T18:29:23.704773Z","caller":"traceutil/trace.go:171","msg":"trace[2042393005] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1169; }","duration":"226.377907ms","start":"2024-09-14T18:29:23.478382Z","end":"2024-09-14T18:29:23.704760Z","steps":["trace[2042393005] 'agreement among raft nodes before linearized reading'  (duration: 225.199409ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-14T18:29:23.704461Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"158.492832ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/masterleases/192.168.39.38\" ","response":"range_response_count:1 size:133"}
	{"level":"info","ts":"2024-09-14T18:29:23.704932Z","caller":"traceutil/trace.go:171","msg":"trace[778436712] range","detail":"{range_begin:/registry/masterleases/192.168.39.38; range_end:; response_count:1; response_revision:1169; }","duration":"158.972466ms","start":"2024-09-14T18:29:23.545953Z","end":"2024-09-14T18:29:23.704925Z","steps":["trace[778436712] 'agreement among raft nodes before linearized reading'  (duration: 158.428463ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-14T18:29:23.704491Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"152.343114ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-14T18:29:23.705116Z","caller":"traceutil/trace.go:171","msg":"trace[328181766] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1169; }","duration":"152.970492ms","start":"2024-09-14T18:29:23.552139Z","end":"2024-09-14T18:29:23.705110Z","steps":["trace[328181766] 'agreement among raft nodes before linearized reading'  (duration: 152.337025ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-14T18:29:24.083842Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"250.857431ms","expected-duration":"100ms","prefix":"","request":"header:<ID:16202423076885429916 username:\"kube-apiserver-etcd-client\" auth_revision:1 > lease_grant:<ttl:15-second id:60da91f1be58fe9b>","response":"size:40"}
	{"level":"warn","ts":"2024-09-14T18:29:24.083929Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-14T18:29:23.706503Z","time spent":"377.416464ms","remote":"127.0.0.1:59092","response type":"/etcdserverpb.Lease/LeaseGrant","request count":-1,"request size":-1,"response count":-1,"response size":-1,"request content":""}
	{"level":"warn","ts":"2024-09-14T18:29:24.340832Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"129.783118ms","expected-duration":"100ms","prefix":"","request":"header:<ID:16202423076885429917 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/masterleases/192.168.39.38\" mod_revision:1162 > success:<request_put:<key:\"/registry/masterleases/192.168.39.38\" value_size:66 lease:6979051040030654107 >> failure:<request_range:<key:\"/registry/masterleases/192.168.39.38\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-09-14T18:29:24.341214Z","caller":"traceutil/trace.go:171","msg":"trace[1196290945] transaction","detail":"{read_only:false; response_revision:1170; number_of_response:1; }","duration":"256.199524ms","start":"2024-09-14T18:29:24.084999Z","end":"2024-09-14T18:29:24.341199Z","steps":["trace[1196290945] 'process raft request'  (duration: 125.502625ms)","trace[1196290945] 'compare'  (duration: 129.670152ms)"],"step_count":2}
	{"level":"warn","ts":"2024-09-14T18:29:24.601093Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"122.057765ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-14T18:29:24.601216Z","caller":"traceutil/trace.go:171","msg":"trace[991541823] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1170; }","duration":"122.259838ms","start":"2024-09-14T18:29:24.478943Z","end":"2024-09-14T18:29:24.601203Z","steps":["trace[991541823] 'range keys from in-memory index tree'  (duration: 121.98236ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-14T18:29:30.197879Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":931}
	{"level":"info","ts":"2024-09-14T18:29:30.202422Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":931,"took":"3.883139ms","hash":2564102347,"current-db-size-bytes":2277376,"current-db-size":"2.3 MB","current-db-size-in-use-bytes":1593344,"current-db-size-in-use":"1.6 MB"}
	{"level":"info","ts":"2024-09-14T18:29:30.202529Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2564102347,"revision":931,"compact-revision":687}
	
	
	==> kernel <==
	 18:29:33 up 20 min,  0 users,  load average: 0.33, 0.28, 0.21
	Linux no-preload-168587 5.10.207 #1 SMP Sat Sep 14 07:35:46 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [5ce526810d6c8eea17470493ff66c9f49a70886febdf256562b75aa84d8444b2] <==
	I0914 18:25:32.694056       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0914 18:25:32.695289       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0914 18:27:32.694312       1 handler_proxy.go:99] no RequestInfo found in the context
	E0914 18:27:32.694473       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0914 18:27:32.696422       1 handler_proxy.go:99] no RequestInfo found in the context
	E0914 18:27:32.696549       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0914 18:27:32.696675       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0914 18:27:32.697764       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0914 18:29:31.693603       1 handler_proxy.go:99] no RequestInfo found in the context
	E0914 18:29:31.693762       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0914 18:29:32.695883       1 handler_proxy.go:99] no RequestInfo found in the context
	E0914 18:29:32.695918       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W0914 18:29:32.696079       1 handler_proxy.go:99] no RequestInfo found in the context
	E0914 18:29:32.696174       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0914 18:29:32.697687       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0914 18:29:32.697761       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-apiserver [daf75a85550981793df6004b119a63cf610a35685a49a69dd7a91ec0c826055c] <==
	W0914 18:14:23.489006       1 logging.go:55] [core] [Channel #28 SubChannel #29]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0914 18:14:23.498987       1 logging.go:55] [core] [Channel #85 SubChannel #86]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0914 18:14:23.571245       1 logging.go:55] [core] [Channel #109 SubChannel #110]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0914 18:14:23.587457       1 logging.go:55] [core] [Channel #76 SubChannel #77]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0914 18:14:23.593977       1 logging.go:55] [core] [Channel #31 SubChannel #32]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0914 18:14:23.625618       1 logging.go:55] [core] [Channel #40 SubChannel #41]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0914 18:14:23.636275       1 logging.go:55] [core] [Channel #88 SubChannel #89]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0914 18:14:23.643644       1 logging.go:55] [core] [Channel #121 SubChannel #122]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0914 18:14:23.674729       1 logging.go:55] [core] [Channel #115 SubChannel #116]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0914 18:14:23.729985       1 logging.go:55] [core] [Channel #37 SubChannel #38]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0914 18:14:23.765244       1 logging.go:55] [core] [Channel #103 SubChannel #104]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0914 18:14:23.771756       1 logging.go:55] [core] [Channel #151 SubChannel #152]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0914 18:14:23.817563       1 logging.go:55] [core] [Channel #22 SubChannel #23]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0914 18:14:23.861374       1 logging.go:55] [core] [Channel #169 SubChannel #170]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0914 18:14:23.902500       1 logging.go:55] [core] [Channel #175 SubChannel #176]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0914 18:14:23.903889       1 logging.go:55] [core] [Channel #118 SubChannel #119]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0914 18:14:23.953608       1 logging.go:55] [core] [Channel #172 SubChannel #173]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0914 18:14:23.993427       1 logging.go:55] [core] [Channel #49 SubChannel #50]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0914 18:14:24.020306       1 logging.go:55] [core] [Channel #205 SubChannel #206]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0914 18:14:24.057988       1 logging.go:55] [core] [Channel #16 SubChannel #18]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0914 18:14:24.282323       1 logging.go:55] [core] [Channel #163 SubChannel #164]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0914 18:14:24.444894       1 logging.go:55] [core] [Channel #181 SubChannel #182]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0914 18:14:24.629546       1 logging.go:55] [core] [Channel #73 SubChannel #74]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0914 18:14:24.760708       1 logging.go:55] [core] [Channel #94 SubChannel #95]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0914 18:14:25.938002       1 logging.go:55] [core] [Channel #205 SubChannel #206]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [f5ee6161f59dddd87a981b174e9ed7d96412afb9c9ebb2bc51c9f1cc36ee11cf] <==
	E0914 18:24:08.763679       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0914 18:24:09.244856       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0914 18:24:38.770742       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0914 18:24:39.253856       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0914 18:24:58.491353       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="no-preload-168587"
	E0914 18:25:08.776775       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0914 18:25:09.267019       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0914 18:25:38.783436       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0914 18:25:39.275341       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0914 18:25:48.738646       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="178.471µs"
	I0914 18:26:03.729835       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="64.41µs"
	E0914 18:26:08.789653       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0914 18:26:09.282401       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0914 18:26:38.796777       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0914 18:26:39.290463       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0914 18:27:08.803401       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0914 18:27:09.305257       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0914 18:27:38.810361       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0914 18:27:39.313949       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0914 18:28:08.817272       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0914 18:28:09.323941       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0914 18:28:38.824323       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0914 18:28:39.331966       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0914 18:29:08.830540       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0914 18:29:09.350686       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
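Note: the repeating "stale GroupVersion discovery: metrics.k8s.io/v1beta1" errors above mean the aggregated metrics API is registered but has no reachable backend, because the metrics-server pod never becomes ready (see the kubelet section below). A minimal way to confirm this against the same cluster, assuming the kubeconfig context no-preload-168587 used elsewhere in this report is still available, would be:

    kubectl --context no-preload-168587 get apiservice v1beta1.metrics.k8s.io
    kubectl --context no-preload-168587 -n kube-system describe pod metrics-server-6867b74b74-cmcz4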
	
	
	==> kube-proxy [feceee2bacff40291f6daff2ccdc08e3e51bd6da7fcc93d21080c7227693e751] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0914 18:14:40.809480       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0914 18:14:40.823253       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.38"]
	E0914 18:14:40.823466       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0914 18:14:40.937946       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0914 18:14:40.937985       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0914 18:14:40.938008       1 server_linux.go:169] "Using iptables Proxier"
	I0914 18:14:40.950712       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0914 18:14:40.954331       1 server.go:483] "Version info" version="v1.31.1"
	I0914 18:14:40.954433       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0914 18:14:40.956875       1 config.go:199] "Starting service config controller"
	I0914 18:14:40.956981       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0914 18:14:40.957087       1 config.go:105] "Starting endpoint slice config controller"
	I0914 18:14:40.957121       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0914 18:14:40.957908       1 config.go:328] "Starting node config controller"
	I0914 18:14:40.957952       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0914 18:14:41.057897       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0914 18:14:41.057957       1 shared_informer.go:320] Caches are synced for service config
	I0914 18:14:41.057982       1 shared_informer.go:320] Caches are synced for node config
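Note: the "Error cleaning up nftables rules ... Operation not supported" messages above indicate the guest kernel has no nftables table support; kube-proxy then falls back to the iptables proxier, as the later "Using iptables Proxier" line shows, so these warnings are expected with this VM image. A hedged spot-check from the host (assuming the no-preload-168587 profile is still running and the KUBE-SERVICES chain exists under the iptables proxier) might look like:

    out/minikube-linux-amd64 -p no-preload-168587 ssh -- "sudo nft list tables || true"
    out/minikube-linux-amd64 -p no-preload-168587 ssh -- "sudo iptables -t nat -L KUBE-SERVICES | head"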
	
	
	==> kube-scheduler [0197ffbc2979d4120aae294136ad27f9c345ca48e8355273231be9ae7240f7ff] <==
	W0914 18:14:32.631580       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0914 18:14:32.632344       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0914 18:14:32.675625       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0914 18:14:32.675738       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0914 18:14:32.682387       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0914 18:14:32.683804       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0914 18:14:32.757403       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0914 18:14:32.757536       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0914 18:14:32.793006       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0914 18:14:32.793161       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0914 18:14:32.875307       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0914 18:14:32.875470       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0914 18:14:32.921891       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0914 18:14:32.922089       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0914 18:14:32.994100       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0914 18:14:32.994216       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0914 18:14:33.047411       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0914 18:14:33.047559       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0914 18:14:33.069314       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0914 18:14:33.069417       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0914 18:14:33.093371       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0914 18:14:33.093468       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0914 18:14:33.093547       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0914 18:14:33.093605       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	I0914 18:14:35.813371       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
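Note: the burst of "forbidden: User \"system:kube-scheduler\" cannot list resource ..." errors above comes from the scheduler starting before its RBAC bindings are visible; the closing "Caches are synced" line shows the informers recovered once authorization caught up. If the errors had persisted instead of stopping, a hedged check (context name taken from this report) would be:

    kubectl --context no-preload-168587 auth can-i list pods --as=system:kube-scheduler
    kubectl --context no-preload-168587 get clusterrolebinding system:kube-scheduler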
	
	
	==> kubelet <==
	Sep 14 18:28:18 no-preload-168587 kubelet[3332]: E0914 18:28:18.722493    3332 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-cmcz4" podUID="24cea6b3-a107-4110-ac29-88389b55bbdc"
	Sep 14 18:28:25 no-preload-168587 kubelet[3332]: E0914 18:28:25.000435    3332 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726338504999960528,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 18:28:25 no-preload-168587 kubelet[3332]: E0914 18:28:25.000846    3332 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726338504999960528,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 18:28:31 no-preload-168587 kubelet[3332]: E0914 18:28:31.715521    3332 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-cmcz4" podUID="24cea6b3-a107-4110-ac29-88389b55bbdc"
	Sep 14 18:28:34 no-preload-168587 kubelet[3332]: E0914 18:28:34.744896    3332 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 14 18:28:34 no-preload-168587 kubelet[3332]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 14 18:28:34 no-preload-168587 kubelet[3332]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 14 18:28:34 no-preload-168587 kubelet[3332]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 14 18:28:34 no-preload-168587 kubelet[3332]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 14 18:28:35 no-preload-168587 kubelet[3332]: E0914 18:28:35.003632    3332 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726338515002935540,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 18:28:35 no-preload-168587 kubelet[3332]: E0914 18:28:35.003795    3332 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726338515002935540,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 18:28:42 no-preload-168587 kubelet[3332]: E0914 18:28:42.716388    3332 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-cmcz4" podUID="24cea6b3-a107-4110-ac29-88389b55bbdc"
	Sep 14 18:28:45 no-preload-168587 kubelet[3332]: E0914 18:28:45.006784    3332 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726338525006221836,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 18:28:45 no-preload-168587 kubelet[3332]: E0914 18:28:45.006834    3332 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726338525006221836,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 18:28:55 no-preload-168587 kubelet[3332]: E0914 18:28:55.008782    3332 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726338535008472867,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 18:28:55 no-preload-168587 kubelet[3332]: E0914 18:28:55.008814    3332 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726338535008472867,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 18:28:55 no-preload-168587 kubelet[3332]: E0914 18:28:55.715251    3332 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-cmcz4" podUID="24cea6b3-a107-4110-ac29-88389b55bbdc"
	Sep 14 18:29:05 no-preload-168587 kubelet[3332]: E0914 18:29:05.010479    3332 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726338545010008106,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 18:29:05 no-preload-168587 kubelet[3332]: E0914 18:29:05.010956    3332 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726338545010008106,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 18:29:06 no-preload-168587 kubelet[3332]: E0914 18:29:06.715561    3332 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-cmcz4" podUID="24cea6b3-a107-4110-ac29-88389b55bbdc"
	Sep 14 18:29:15 no-preload-168587 kubelet[3332]: E0914 18:29:15.013821    3332 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726338555013295404,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 18:29:15 no-preload-168587 kubelet[3332]: E0914 18:29:15.014286    3332 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726338555013295404,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 18:29:19 no-preload-168587 kubelet[3332]: E0914 18:29:19.715874    3332 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-cmcz4" podUID="24cea6b3-a107-4110-ac29-88389b55bbdc"
	Sep 14 18:29:25 no-preload-168587 kubelet[3332]: E0914 18:29:25.016590    3332 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726338565016095805,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 18:29:25 no-preload-168587 kubelet[3332]: E0914 18:29:25.016624    3332 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726338565016095805,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
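Note: every "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\"" entry above points at a registry host (fake.domain) that is not a real registry, so the pull can never succeed and the metrics-server-6867b74b74-cmcz4 pod stays non-running, matching the post-mortem output later in this section. The interleaved eviction-manager "missing image stats" lines are a separate, noisy condition (the ImageFsInfoResponse in the error text contains no ContainerFilesystems entry) and do not indicate the same failure. One hedged way to see the pull failures directly, using the same context and pod name as in the log, would be:

    kubectl --context no-preload-168587 -n kube-system get events --field-selector involvedObject.name=metrics-server-6867b74b74-cmcz4
    kubectl --context no-preload-168587 -n kube-system get pod metrics-server-6867b74b74-cmcz4 -o jsonpath='{.status.containerStatuses[0].state.waiting.reason}'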
	
	
	==> storage-provisioner [cfaed3fc943fc19b68f4391b6cea58d5b9e862d6e30de59cece475d8eadcbab5] <==
	I0914 18:14:40.704269       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0914 18:14:40.729132       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0914 18:14:40.729213       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0914 18:14:40.749413       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0914 18:14:40.749739       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-168587_611d2d49-08a6-4397-8515-7b32453c843a!
	I0914 18:14:40.761012       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"af481965-643e-4ba6-8fdf-07b2d1db4d95", APIVersion:"v1", ResourceVersion:"390", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-168587_611d2d49-08a6-4397-8515-7b32453c843a became leader
	I0914 18:14:40.850628       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-168587_611d2d49-08a6-4397-8515-7b32453c843a!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-168587 -n no-preload-168587
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-168587 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-6867b74b74-cmcz4
helpers_test.go:274: ======> post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context no-preload-168587 describe pod metrics-server-6867b74b74-cmcz4
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context no-preload-168587 describe pod metrics-server-6867b74b74-cmcz4: exit status 1 (70.516737ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-6867b74b74-cmcz4" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context no-preload-168587 describe pod metrics-server-6867b74b74-cmcz4: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (342.60s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (171.47s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.80:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.80:8443: connect: connection refused
(the identical warning above was logged 48 consecutive times while the API server at 192.168.83.80:8443 kept refusing connections)
E0914 18:26:45.626012   16016 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/addons-996992/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.80:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.80:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.80:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.80:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.80:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.80:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.80:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.80:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.80:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.80:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.80:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.80:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.80:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.80:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.80:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.80:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.80:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.80:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.80:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.80:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.80:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.80:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.80:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.80:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.80:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.80:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.80:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.80:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.80:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.80:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.80:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.80:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.80:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.80:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.80:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.80:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.80:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.80:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.80:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.80:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.80:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.80:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.80:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.80:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.80:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.80:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.80:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.80:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.80:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.80:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.80:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.80:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.80:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.80:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.80:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.80:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.80:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.80:8443: connect: connection refused
start_stop_delete_test.go:287: ***** TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-556121 -n old-k8s-version-556121
start_stop_delete_test.go:287: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-556121 -n old-k8s-version-556121: exit status 2 (233.448412ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:287: status error: exit status 2 (may be ok)
start_stop_delete_test.go:287: "old-k8s-version-556121" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-556121 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-556121 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.567µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-556121 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
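The image check at this point has nothing to inspect, because the describe call itself hit the context deadline. Purely as an illustration (none of this is the test's own code), a minimal client-go sketch of the kind of check being made, assuming a reachable apiserver and the default kubeconfig location:

```go
// Hypothetical illustration: list the container images of the
// dashboard-metrics-scraper Deployment and report whether any of them
// contains the expected echoserver image substring.
package main

import (
	"context"
	"fmt"
	"log"
	"strings"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		log.Fatal(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	deploy, err := client.AppsV1().Deployments("kubernetes-dashboard").
		Get(context.Background(), "dashboard-metrics-scraper", metav1.GetOptions{})
	if err != nil {
		log.Fatal(err) // e.g. context deadline exceeded while the apiserver is down
	}
	want := "registry.k8s.io/echoserver:1.4"
	for _, c := range deploy.Spec.Template.Spec.Containers {
		fmt.Printf("container %s uses image %s (contains expected: %t)\n",
			c.Name, c.Image, strings.Contains(c.Image, want))
	}
}
```

The expected custom image comes from the `--images=MetricsScraper=registry.k8s.io/echoserver:1.4` flag passed to `addons enable dashboard`, visible in the Audit table below.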
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-556121 -n old-k8s-version-556121
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-556121 -n old-k8s-version-556121: exit status 2 (221.914389ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-556121 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-556121 logs -n 25: (1.677721863s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p stopped-upgrade-319416                              | stopped-upgrade-319416       | jenkins | v1.34.0 | 14 Sep 24 18:00 UTC | 14 Sep 24 18:00 UTC |
	| start   | -p embed-certs-044534                                  | embed-certs-044534           | jenkins | v1.34.0 | 14 Sep 24 18:00 UTC | 14 Sep 24 18:01 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-168587             | no-preload-168587            | jenkins | v1.34.0 | 14 Sep 24 18:00 UTC | 14 Sep 24 18:00 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-168587                                   | no-preload-168587            | jenkins | v1.34.0 | 14 Sep 24 18:00 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-044534            | embed-certs-044534           | jenkins | v1.34.0 | 14 Sep 24 18:01 UTC | 14 Sep 24 18:01 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-044534                                  | embed-certs-044534           | jenkins | v1.34.0 | 14 Sep 24 18:01 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-470019                           | kubernetes-upgrade-470019    | jenkins | v1.34.0 | 14 Sep 24 18:02 UTC | 14 Sep 24 18:02 UTC |
	| start   | -p kubernetes-upgrade-470019                           | kubernetes-upgrade-470019    | jenkins | v1.34.0 | 14 Sep 24 18:02 UTC | 14 Sep 24 18:02 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-470019                           | kubernetes-upgrade-470019    | jenkins | v1.34.0 | 14 Sep 24 18:02 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-470019                           | kubernetes-upgrade-470019    | jenkins | v1.34.0 | 14 Sep 24 18:02 UTC | 14 Sep 24 18:02 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-470019                           | kubernetes-upgrade-470019    | jenkins | v1.34.0 | 14 Sep 24 18:03 UTC | 14 Sep 24 18:03 UTC |
	| delete  | -p                                                     | disable-driver-mounts-444413 | jenkins | v1.34.0 | 14 Sep 24 18:03 UTC | 14 Sep 24 18:03 UTC |
	|         | disable-driver-mounts-444413                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-243449 | jenkins | v1.34.0 | 14 Sep 24 18:03 UTC | 14 Sep 24 18:03 UTC |
	|         | default-k8s-diff-port-243449                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-556121        | old-k8s-version-556121       | jenkins | v1.34.0 | 14 Sep 24 18:03 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-168587                  | no-preload-168587            | jenkins | v1.34.0 | 14 Sep 24 18:03 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-168587                                   | no-preload-168587            | jenkins | v1.34.0 | 14 Sep 24 18:03 UTC | 14 Sep 24 18:14 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-044534                 | embed-certs-044534           | jenkins | v1.34.0 | 14 Sep 24 18:03 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-044534                                  | embed-certs-044534           | jenkins | v1.34.0 | 14 Sep 24 18:04 UTC | 14 Sep 24 18:13 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-243449  | default-k8s-diff-port-243449 | jenkins | v1.34.0 | 14 Sep 24 18:04 UTC | 14 Sep 24 18:04 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-243449 | jenkins | v1.34.0 | 14 Sep 24 18:04 UTC |                     |
	|         | default-k8s-diff-port-243449                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-556121                              | old-k8s-version-556121       | jenkins | v1.34.0 | 14 Sep 24 18:05 UTC | 14 Sep 24 18:05 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-556121             | old-k8s-version-556121       | jenkins | v1.34.0 | 14 Sep 24 18:05 UTC | 14 Sep 24 18:05 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-556121                              | old-k8s-version-556121       | jenkins | v1.34.0 | 14 Sep 24 18:05 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-243449       | default-k8s-diff-port-243449 | jenkins | v1.34.0 | 14 Sep 24 18:06 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-243449 | jenkins | v1.34.0 | 14 Sep 24 18:06 UTC | 14 Sep 24 18:13 UTC |
	|         | default-k8s-diff-port-243449                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/14 18:06:40
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0914 18:06:40.299903   63448 out.go:345] Setting OutFile to fd 1 ...
	I0914 18:06:40.300039   63448 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 18:06:40.300049   63448 out.go:358] Setting ErrFile to fd 2...
	I0914 18:06:40.300054   63448 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 18:06:40.300240   63448 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19643-8806/.minikube/bin
	I0914 18:06:40.300801   63448 out.go:352] Setting JSON to false
	I0914 18:06:40.301779   63448 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":6544,"bootTime":1726330656,"procs":200,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0914 18:06:40.301879   63448 start.go:139] virtualization: kvm guest
	I0914 18:06:40.303963   63448 out.go:177] * [default-k8s-diff-port-243449] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0914 18:06:40.305394   63448 out.go:177]   - MINIKUBE_LOCATION=19643
	I0914 18:06:40.305429   63448 notify.go:220] Checking for updates...
	I0914 18:06:40.308148   63448 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0914 18:06:40.309226   63448 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19643-8806/kubeconfig
	I0914 18:06:40.310360   63448 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19643-8806/.minikube
	I0914 18:06:40.311509   63448 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0914 18:06:40.312543   63448 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0914 18:06:40.314418   63448 config.go:182] Loaded profile config "default-k8s-diff-port-243449": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0914 18:06:40.315063   63448 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 18:06:40.315154   63448 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 18:06:40.330033   63448 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44655
	I0914 18:06:40.330502   63448 main.go:141] libmachine: () Calling .GetVersion
	I0914 18:06:40.331014   63448 main.go:141] libmachine: Using API Version  1
	I0914 18:06:40.331035   63448 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 18:06:40.331372   63448 main.go:141] libmachine: () Calling .GetMachineName
	I0914 18:06:40.331519   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .DriverName
	I0914 18:06:40.331729   63448 driver.go:394] Setting default libvirt URI to qemu:///system
	I0914 18:06:40.332043   63448 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 18:06:40.332089   63448 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 18:06:40.346598   63448 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37837
	I0914 18:06:40.347021   63448 main.go:141] libmachine: () Calling .GetVersion
	I0914 18:06:40.347501   63448 main.go:141] libmachine: Using API Version  1
	I0914 18:06:40.347536   63448 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 18:06:40.347863   63448 main.go:141] libmachine: () Calling .GetMachineName
	I0914 18:06:40.348042   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .DriverName
	I0914 18:06:40.380416   63448 out.go:177] * Using the kvm2 driver based on existing profile
	I0914 18:06:40.381578   63448 start.go:297] selected driver: kvm2
	I0914 18:06:40.381589   63448 start.go:901] validating driver "kvm2" against &{Name:default-k8s-diff-port-243449 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19643/minikube-v1.34.0-1726281733-19643-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726281268-19643@sha256:cce8be0c1ac4e3d852132008ef1cc1dcf5b79f708d025db83f146ae65db32e8e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-243449 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.38 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0914 18:06:40.381693   63448 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0914 18:06:40.382390   63448 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 18:06:40.382478   63448 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19643-8806/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0914 18:06:40.397521   63448 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0914 18:06:40.397921   63448 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0914 18:06:40.397959   63448 cni.go:84] Creating CNI manager for ""
	I0914 18:06:40.398002   63448 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0914 18:06:40.398040   63448 start.go:340] cluster config:
	{Name:default-k8s-diff-port-243449 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19643/minikube-v1.34.0-1726281733-19643-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726281268-19643@sha256:cce8be0c1ac4e3d852132008ef1cc1dcf5b79f708d025db83f146ae65db32e8e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-243449 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.38 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0914 18:06:40.398145   63448 iso.go:125] acquiring lock: {Name:mk538f7a4abc7956f82511183088cbfac1e66ebb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 18:06:40.399920   63448 out.go:177] * Starting "default-k8s-diff-port-243449" primary control-plane node in "default-k8s-diff-port-243449" cluster
	I0914 18:06:39.170425   62207 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0914 18:06:40.400913   63448 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0914 18:06:40.400954   63448 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19643-8806/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0914 18:06:40.400966   63448 cache.go:56] Caching tarball of preloaded images
	I0914 18:06:40.401038   63448 preload.go:172] Found /home/jenkins/minikube-integration/19643-8806/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0914 18:06:40.401055   63448 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0914 18:06:40.401185   63448 profile.go:143] Saving config to /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/default-k8s-diff-port-243449/config.json ...
	I0914 18:06:40.401421   63448 start.go:360] acquireMachinesLock for default-k8s-diff-port-243449: {Name:mk26748ac63472c3f8b6f3848c12e76160c2970c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0914 18:06:45.250426   62207 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0914 18:06:48.322531   62207 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0914 18:06:54.402441   62207 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0914 18:06:57.474440   62207 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0914 18:07:03.554541   62207 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0914 18:07:06.626472   62207 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0914 18:07:12.706430   62207 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0914 18:07:15.778448   62207 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0914 18:07:21.858453   62207 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0914 18:07:24.930473   62207 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0914 18:07:31.010432   62207 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0914 18:07:34.082423   62207 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0914 18:07:40.162417   62207 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0914 18:07:43.234501   62207 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0914 18:07:49.314533   62207 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0914 18:07:52.386453   62207 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0914 18:07:58.466444   62207 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0914 18:08:01.538476   62207 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0914 18:08:04.546206   62554 start.go:364] duration metric: took 3m59.524513317s to acquireMachinesLock for "embed-certs-044534"
	I0914 18:08:04.546263   62554 start.go:96] Skipping create...Using existing machine configuration
	I0914 18:08:04.546275   62554 fix.go:54] fixHost starting: 
	I0914 18:08:04.546585   62554 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 18:08:04.546636   62554 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 18:08:04.562182   62554 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46087
	I0914 18:08:04.562704   62554 main.go:141] libmachine: () Calling .GetVersion
	I0914 18:08:04.563264   62554 main.go:141] libmachine: Using API Version  1
	I0914 18:08:04.563300   62554 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 18:08:04.563714   62554 main.go:141] libmachine: () Calling .GetMachineName
	I0914 18:08:04.563947   62554 main.go:141] libmachine: (embed-certs-044534) Calling .DriverName
	I0914 18:08:04.564131   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetState
	I0914 18:08:04.566043   62554 fix.go:112] recreateIfNeeded on embed-certs-044534: state=Stopped err=<nil>
	I0914 18:08:04.566073   62554 main.go:141] libmachine: (embed-certs-044534) Calling .DriverName
	W0914 18:08:04.566289   62554 fix.go:138] unexpected machine state, will restart: <nil>
	I0914 18:08:04.567993   62554 out.go:177] * Restarting existing kvm2 VM for "embed-certs-044534" ...
	I0914 18:08:04.570182   62554 main.go:141] libmachine: (embed-certs-044534) Calling .Start
	I0914 18:08:04.570431   62554 main.go:141] libmachine: (embed-certs-044534) Ensuring networks are active...
	I0914 18:08:04.571374   62554 main.go:141] libmachine: (embed-certs-044534) Ensuring network default is active
	I0914 18:08:04.571748   62554 main.go:141] libmachine: (embed-certs-044534) Ensuring network mk-embed-certs-044534 is active
	I0914 18:08:04.572124   62554 main.go:141] libmachine: (embed-certs-044534) Getting domain xml...
	I0914 18:08:04.572852   62554 main.go:141] libmachine: (embed-certs-044534) Creating domain...
	I0914 18:08:04.540924   62207 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0914 18:08:04.540957   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetMachineName
	I0914 18:08:04.541310   62207 buildroot.go:166] provisioning hostname "no-preload-168587"
	I0914 18:08:04.541335   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetMachineName
	I0914 18:08:04.541586   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetSSHHostname
	I0914 18:08:04.546055   62207 machine.go:96] duration metric: took 4m34.63489942s to provisionDockerMachine
	I0914 18:08:04.546096   62207 fix.go:56] duration metric: took 4m34.662932355s for fixHost
	I0914 18:08:04.546102   62207 start.go:83] releasing machines lock for "no-preload-168587", held for 4m34.66297244s
	W0914 18:08:04.546122   62207 start.go:714] error starting host: provision: host is not running
	W0914 18:08:04.546220   62207 out.go:270] ! StartHost failed, but will try again: provision: host is not running
	I0914 18:08:04.546231   62207 start.go:729] Will try again in 5 seconds ...
	I0914 18:08:05.812076   62554 main.go:141] libmachine: (embed-certs-044534) Waiting to get IP...
	I0914 18:08:05.812955   62554 main.go:141] libmachine: (embed-certs-044534) DBG | domain embed-certs-044534 has defined MAC address 52:54:00:f7:d3:8e in network mk-embed-certs-044534
	I0914 18:08:05.813302   62554 main.go:141] libmachine: (embed-certs-044534) DBG | unable to find current IP address of domain embed-certs-044534 in network mk-embed-certs-044534
	I0914 18:08:05.813380   62554 main.go:141] libmachine: (embed-certs-044534) DBG | I0914 18:08:05.813279   63779 retry.go:31] will retry after 298.8389ms: waiting for machine to come up
	I0914 18:08:06.114130   62554 main.go:141] libmachine: (embed-certs-044534) DBG | domain embed-certs-044534 has defined MAC address 52:54:00:f7:d3:8e in network mk-embed-certs-044534
	I0914 18:08:06.114575   62554 main.go:141] libmachine: (embed-certs-044534) DBG | unable to find current IP address of domain embed-certs-044534 in network mk-embed-certs-044534
	I0914 18:08:06.114604   62554 main.go:141] libmachine: (embed-certs-044534) DBG | I0914 18:08:06.114530   63779 retry.go:31] will retry after 359.694721ms: waiting for machine to come up
	I0914 18:08:06.476183   62554 main.go:141] libmachine: (embed-certs-044534) DBG | domain embed-certs-044534 has defined MAC address 52:54:00:f7:d3:8e in network mk-embed-certs-044534
	I0914 18:08:06.476801   62554 main.go:141] libmachine: (embed-certs-044534) DBG | unable to find current IP address of domain embed-certs-044534 in network mk-embed-certs-044534
	I0914 18:08:06.476828   62554 main.go:141] libmachine: (embed-certs-044534) DBG | I0914 18:08:06.476745   63779 retry.go:31] will retry after 425.650219ms: waiting for machine to come up
	I0914 18:08:06.904358   62554 main.go:141] libmachine: (embed-certs-044534) DBG | domain embed-certs-044534 has defined MAC address 52:54:00:f7:d3:8e in network mk-embed-certs-044534
	I0914 18:08:06.904794   62554 main.go:141] libmachine: (embed-certs-044534) DBG | unable to find current IP address of domain embed-certs-044534 in network mk-embed-certs-044534
	I0914 18:08:06.904816   62554 main.go:141] libmachine: (embed-certs-044534) DBG | I0914 18:08:06.904749   63779 retry.go:31] will retry after 433.157325ms: waiting for machine to come up
	I0914 18:08:07.339139   62554 main.go:141] libmachine: (embed-certs-044534) DBG | domain embed-certs-044534 has defined MAC address 52:54:00:f7:d3:8e in network mk-embed-certs-044534
	I0914 18:08:07.339578   62554 main.go:141] libmachine: (embed-certs-044534) DBG | unable to find current IP address of domain embed-certs-044534 in network mk-embed-certs-044534
	I0914 18:08:07.339602   62554 main.go:141] libmachine: (embed-certs-044534) DBG | I0914 18:08:07.339512   63779 retry.go:31] will retry after 547.817102ms: waiting for machine to come up
	I0914 18:08:07.889390   62554 main.go:141] libmachine: (embed-certs-044534) DBG | domain embed-certs-044534 has defined MAC address 52:54:00:f7:d3:8e in network mk-embed-certs-044534
	I0914 18:08:07.889888   62554 main.go:141] libmachine: (embed-certs-044534) DBG | unable to find current IP address of domain embed-certs-044534 in network mk-embed-certs-044534
	I0914 18:08:07.889993   62554 main.go:141] libmachine: (embed-certs-044534) DBG | I0914 18:08:07.889820   63779 retry.go:31] will retry after 603.749753ms: waiting for machine to come up
	I0914 18:08:08.495673   62554 main.go:141] libmachine: (embed-certs-044534) DBG | domain embed-certs-044534 has defined MAC address 52:54:00:f7:d3:8e in network mk-embed-certs-044534
	I0914 18:08:08.496047   62554 main.go:141] libmachine: (embed-certs-044534) DBG | unable to find current IP address of domain embed-certs-044534 in network mk-embed-certs-044534
	I0914 18:08:08.496076   62554 main.go:141] libmachine: (embed-certs-044534) DBG | I0914 18:08:08.495995   63779 retry.go:31] will retry after 831.027535ms: waiting for machine to come up
	I0914 18:08:09.329209   62554 main.go:141] libmachine: (embed-certs-044534) DBG | domain embed-certs-044534 has defined MAC address 52:54:00:f7:d3:8e in network mk-embed-certs-044534
	I0914 18:08:09.329622   62554 main.go:141] libmachine: (embed-certs-044534) DBG | unable to find current IP address of domain embed-certs-044534 in network mk-embed-certs-044534
	I0914 18:08:09.329643   62554 main.go:141] libmachine: (embed-certs-044534) DBG | I0914 18:08:09.329591   63779 retry.go:31] will retry after 1.429850518s: waiting for machine to come up
	I0914 18:08:09.548738   62207 start.go:360] acquireMachinesLock for no-preload-168587: {Name:mk26748ac63472c3f8b6f3848c12e76160c2970c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0914 18:08:10.761510   62554 main.go:141] libmachine: (embed-certs-044534) DBG | domain embed-certs-044534 has defined MAC address 52:54:00:f7:d3:8e in network mk-embed-certs-044534
	I0914 18:08:10.761884   62554 main.go:141] libmachine: (embed-certs-044534) DBG | unable to find current IP address of domain embed-certs-044534 in network mk-embed-certs-044534
	I0914 18:08:10.761915   62554 main.go:141] libmachine: (embed-certs-044534) DBG | I0914 18:08:10.761839   63779 retry.go:31] will retry after 1.146619754s: waiting for machine to come up
	I0914 18:08:11.910130   62554 main.go:141] libmachine: (embed-certs-044534) DBG | domain embed-certs-044534 has defined MAC address 52:54:00:f7:d3:8e in network mk-embed-certs-044534
	I0914 18:08:11.910542   62554 main.go:141] libmachine: (embed-certs-044534) DBG | unable to find current IP address of domain embed-certs-044534 in network mk-embed-certs-044534
	I0914 18:08:11.910568   62554 main.go:141] libmachine: (embed-certs-044534) DBG | I0914 18:08:11.910500   63779 retry.go:31] will retry after 1.582382319s: waiting for machine to come up
	I0914 18:08:13.495352   62554 main.go:141] libmachine: (embed-certs-044534) DBG | domain embed-certs-044534 has defined MAC address 52:54:00:f7:d3:8e in network mk-embed-certs-044534
	I0914 18:08:13.495852   62554 main.go:141] libmachine: (embed-certs-044534) DBG | unable to find current IP address of domain embed-certs-044534 in network mk-embed-certs-044534
	I0914 18:08:13.495872   62554 main.go:141] libmachine: (embed-certs-044534) DBG | I0914 18:08:13.495808   63779 retry.go:31] will retry after 2.117717335s: waiting for machine to come up
	I0914 18:08:15.615461   62554 main.go:141] libmachine: (embed-certs-044534) DBG | domain embed-certs-044534 has defined MAC address 52:54:00:f7:d3:8e in network mk-embed-certs-044534
	I0914 18:08:15.615896   62554 main.go:141] libmachine: (embed-certs-044534) DBG | unable to find current IP address of domain embed-certs-044534 in network mk-embed-certs-044534
	I0914 18:08:15.615918   62554 main.go:141] libmachine: (embed-certs-044534) DBG | I0914 18:08:15.615846   63779 retry.go:31] will retry after 3.071486865s: waiting for machine to come up
	I0914 18:08:18.691109   62554 main.go:141] libmachine: (embed-certs-044534) DBG | domain embed-certs-044534 has defined MAC address 52:54:00:f7:d3:8e in network mk-embed-certs-044534
	I0914 18:08:18.691572   62554 main.go:141] libmachine: (embed-certs-044534) DBG | unable to find current IP address of domain embed-certs-044534 in network mk-embed-certs-044534
	I0914 18:08:18.691605   62554 main.go:141] libmachine: (embed-certs-044534) DBG | I0914 18:08:18.691513   63779 retry.go:31] will retry after 4.250544955s: waiting for machine to come up
	I0914 18:08:24.143036   62996 start.go:364] duration metric: took 3m18.692107902s to acquireMachinesLock for "old-k8s-version-556121"
	I0914 18:08:24.143089   62996 start.go:96] Skipping create...Using existing machine configuration
	I0914 18:08:24.143094   62996 fix.go:54] fixHost starting: 
	I0914 18:08:24.143474   62996 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 18:08:24.143527   62996 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 18:08:24.160421   62996 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44345
	I0914 18:08:24.160864   62996 main.go:141] libmachine: () Calling .GetVersion
	I0914 18:08:24.161467   62996 main.go:141] libmachine: Using API Version  1
	I0914 18:08:24.161495   62996 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 18:08:24.161913   62996 main.go:141] libmachine: () Calling .GetMachineName
	I0914 18:08:24.162137   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .DriverName
	I0914 18:08:24.162322   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetState
	I0914 18:08:24.163974   62996 fix.go:112] recreateIfNeeded on old-k8s-version-556121: state=Stopped err=<nil>
	I0914 18:08:24.164020   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .DriverName
	W0914 18:08:24.164197   62996 fix.go:138] unexpected machine state, will restart: <nil>
	I0914 18:08:24.166624   62996 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-556121" ...
	I0914 18:08:22.946247   62554 main.go:141] libmachine: (embed-certs-044534) DBG | domain embed-certs-044534 has defined MAC address 52:54:00:f7:d3:8e in network mk-embed-certs-044534
	I0914 18:08:22.946662   62554 main.go:141] libmachine: (embed-certs-044534) DBG | domain embed-certs-044534 has current primary IP address 192.168.50.126 and MAC address 52:54:00:f7:d3:8e in network mk-embed-certs-044534
	I0914 18:08:22.946687   62554 main.go:141] libmachine: (embed-certs-044534) Found IP for machine: 192.168.50.126
	I0914 18:08:22.946700   62554 main.go:141] libmachine: (embed-certs-044534) Reserving static IP address...
	I0914 18:08:22.947052   62554 main.go:141] libmachine: (embed-certs-044534) DBG | found host DHCP lease matching {name: "embed-certs-044534", mac: "52:54:00:f7:d3:8e", ip: "192.168.50.126"} in network mk-embed-certs-044534: {Iface:virbr3 ExpiryTime:2024-09-14 19:00:16 +0000 UTC Type:0 Mac:52:54:00:f7:d3:8e Iaid: IPaddr:192.168.50.126 Prefix:24 Hostname:embed-certs-044534 Clientid:01:52:54:00:f7:d3:8e}
	I0914 18:08:22.947068   62554 main.go:141] libmachine: (embed-certs-044534) Reserved static IP address: 192.168.50.126
	I0914 18:08:22.947080   62554 main.go:141] libmachine: (embed-certs-044534) DBG | skip adding static IP to network mk-embed-certs-044534 - found existing host DHCP lease matching {name: "embed-certs-044534", mac: "52:54:00:f7:d3:8e", ip: "192.168.50.126"}
	I0914 18:08:22.947093   62554 main.go:141] libmachine: (embed-certs-044534) DBG | Getting to WaitForSSH function...
	I0914 18:08:22.947108   62554 main.go:141] libmachine: (embed-certs-044534) Waiting for SSH to be available...
	I0914 18:08:22.949354   62554 main.go:141] libmachine: (embed-certs-044534) DBG | domain embed-certs-044534 has defined MAC address 52:54:00:f7:d3:8e in network mk-embed-certs-044534
	I0914 18:08:22.949623   62554 main.go:141] libmachine: (embed-certs-044534) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:d3:8e", ip: ""} in network mk-embed-certs-044534: {Iface:virbr3 ExpiryTime:2024-09-14 19:00:16 +0000 UTC Type:0 Mac:52:54:00:f7:d3:8e Iaid: IPaddr:192.168.50.126 Prefix:24 Hostname:embed-certs-044534 Clientid:01:52:54:00:f7:d3:8e}
	I0914 18:08:22.949645   62554 main.go:141] libmachine: (embed-certs-044534) DBG | domain embed-certs-044534 has defined IP address 192.168.50.126 and MAC address 52:54:00:f7:d3:8e in network mk-embed-certs-044534
	I0914 18:08:22.949798   62554 main.go:141] libmachine: (embed-certs-044534) DBG | Using SSH client type: external
	I0914 18:08:22.949822   62554 main.go:141] libmachine: (embed-certs-044534) DBG | Using SSH private key: /home/jenkins/minikube-integration/19643-8806/.minikube/machines/embed-certs-044534/id_rsa (-rw-------)
	I0914 18:08:22.949886   62554 main.go:141] libmachine: (embed-certs-044534) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.126 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19643-8806/.minikube/machines/embed-certs-044534/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0914 18:08:22.949911   62554 main.go:141] libmachine: (embed-certs-044534) DBG | About to run SSH command:
	I0914 18:08:22.949926   62554 main.go:141] libmachine: (embed-certs-044534) DBG | exit 0
	I0914 18:08:23.074248   62554 main.go:141] libmachine: (embed-certs-044534) DBG | SSH cmd err, output: <nil>: 
	I0914 18:08:23.074559   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetConfigRaw
	I0914 18:08:23.075190   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetIP
	I0914 18:08:23.077682   62554 main.go:141] libmachine: (embed-certs-044534) DBG | domain embed-certs-044534 has defined MAC address 52:54:00:f7:d3:8e in network mk-embed-certs-044534
	I0914 18:08:23.078007   62554 main.go:141] libmachine: (embed-certs-044534) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:d3:8e", ip: ""} in network mk-embed-certs-044534: {Iface:virbr3 ExpiryTime:2024-09-14 19:00:16 +0000 UTC Type:0 Mac:52:54:00:f7:d3:8e Iaid: IPaddr:192.168.50.126 Prefix:24 Hostname:embed-certs-044534 Clientid:01:52:54:00:f7:d3:8e}
	I0914 18:08:23.078040   62554 main.go:141] libmachine: (embed-certs-044534) DBG | domain embed-certs-044534 has defined IP address 192.168.50.126 and MAC address 52:54:00:f7:d3:8e in network mk-embed-certs-044534
	I0914 18:08:23.078309   62554 profile.go:143] Saving config to /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/embed-certs-044534/config.json ...
	I0914 18:08:23.078494   62554 machine.go:93] provisionDockerMachine start ...
	I0914 18:08:23.078510   62554 main.go:141] libmachine: (embed-certs-044534) Calling .DriverName
	I0914 18:08:23.078723   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetSSHHostname
	I0914 18:08:23.081444   62554 main.go:141] libmachine: (embed-certs-044534) DBG | domain embed-certs-044534 has defined MAC address 52:54:00:f7:d3:8e in network mk-embed-certs-044534
	I0914 18:08:23.081846   62554 main.go:141] libmachine: (embed-certs-044534) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:d3:8e", ip: ""} in network mk-embed-certs-044534: {Iface:virbr3 ExpiryTime:2024-09-14 19:00:16 +0000 UTC Type:0 Mac:52:54:00:f7:d3:8e Iaid: IPaddr:192.168.50.126 Prefix:24 Hostname:embed-certs-044534 Clientid:01:52:54:00:f7:d3:8e}
	I0914 18:08:23.081891   62554 main.go:141] libmachine: (embed-certs-044534) DBG | domain embed-certs-044534 has defined IP address 192.168.50.126 and MAC address 52:54:00:f7:d3:8e in network mk-embed-certs-044534
	I0914 18:08:23.082026   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetSSHPort
	I0914 18:08:23.082209   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetSSHKeyPath
	I0914 18:08:23.082398   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetSSHKeyPath
	I0914 18:08:23.082573   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetSSHUsername
	I0914 18:08:23.082739   62554 main.go:141] libmachine: Using SSH client type: native
	I0914 18:08:23.082961   62554 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.50.126 22 <nil> <nil>}
	I0914 18:08:23.082984   62554 main.go:141] libmachine: About to run SSH command:
	hostname
	I0914 18:08:23.186143   62554 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0914 18:08:23.186193   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetMachineName
	I0914 18:08:23.186424   62554 buildroot.go:166] provisioning hostname "embed-certs-044534"
	I0914 18:08:23.186447   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetMachineName
	I0914 18:08:23.186622   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetSSHHostname
	I0914 18:08:23.189085   62554 main.go:141] libmachine: (embed-certs-044534) DBG | domain embed-certs-044534 has defined MAC address 52:54:00:f7:d3:8e in network mk-embed-certs-044534
	I0914 18:08:23.189453   62554 main.go:141] libmachine: (embed-certs-044534) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:d3:8e", ip: ""} in network mk-embed-certs-044534: {Iface:virbr3 ExpiryTime:2024-09-14 19:00:16 +0000 UTC Type:0 Mac:52:54:00:f7:d3:8e Iaid: IPaddr:192.168.50.126 Prefix:24 Hostname:embed-certs-044534 Clientid:01:52:54:00:f7:d3:8e}
	I0914 18:08:23.189482   62554 main.go:141] libmachine: (embed-certs-044534) DBG | domain embed-certs-044534 has defined IP address 192.168.50.126 and MAC address 52:54:00:f7:d3:8e in network mk-embed-certs-044534
	I0914 18:08:23.189615   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetSSHPort
	I0914 18:08:23.189802   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetSSHKeyPath
	I0914 18:08:23.190032   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetSSHKeyPath
	I0914 18:08:23.190168   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetSSHUsername
	I0914 18:08:23.190422   62554 main.go:141] libmachine: Using SSH client type: native
	I0914 18:08:23.190587   62554 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.50.126 22 <nil> <nil>}
	I0914 18:08:23.190601   62554 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-044534 && echo "embed-certs-044534" | sudo tee /etc/hostname
	I0914 18:08:23.307484   62554 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-044534
	
	I0914 18:08:23.307512   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetSSHHostname
	I0914 18:08:23.310220   62554 main.go:141] libmachine: (embed-certs-044534) DBG | domain embed-certs-044534 has defined MAC address 52:54:00:f7:d3:8e in network mk-embed-certs-044534
	I0914 18:08:23.310634   62554 main.go:141] libmachine: (embed-certs-044534) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:d3:8e", ip: ""} in network mk-embed-certs-044534: {Iface:virbr3 ExpiryTime:2024-09-14 19:00:16 +0000 UTC Type:0 Mac:52:54:00:f7:d3:8e Iaid: IPaddr:192.168.50.126 Prefix:24 Hostname:embed-certs-044534 Clientid:01:52:54:00:f7:d3:8e}
	I0914 18:08:23.310664   62554 main.go:141] libmachine: (embed-certs-044534) DBG | domain embed-certs-044534 has defined IP address 192.168.50.126 and MAC address 52:54:00:f7:d3:8e in network mk-embed-certs-044534
	I0914 18:08:23.310764   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetSSHPort
	I0914 18:08:23.310969   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetSSHKeyPath
	I0914 18:08:23.311206   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetSSHKeyPath
	I0914 18:08:23.311438   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetSSHUsername
	I0914 18:08:23.311594   62554 main.go:141] libmachine: Using SSH client type: native
	I0914 18:08:23.311802   62554 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.50.126 22 <nil> <nil>}
	I0914 18:08:23.311820   62554 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-044534' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-044534/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-044534' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0914 18:08:23.422574   62554 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0914 18:08:23.422603   62554 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19643-8806/.minikube CaCertPath:/home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19643-8806/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19643-8806/.minikube}
	I0914 18:08:23.422623   62554 buildroot.go:174] setting up certificates
	I0914 18:08:23.422634   62554 provision.go:84] configureAuth start
	I0914 18:08:23.422643   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetMachineName
	I0914 18:08:23.422905   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetIP
	I0914 18:08:23.426201   62554 main.go:141] libmachine: (embed-certs-044534) DBG | domain embed-certs-044534 has defined MAC address 52:54:00:f7:d3:8e in network mk-embed-certs-044534
	I0914 18:08:23.426557   62554 main.go:141] libmachine: (embed-certs-044534) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:d3:8e", ip: ""} in network mk-embed-certs-044534: {Iface:virbr3 ExpiryTime:2024-09-14 19:00:16 +0000 UTC Type:0 Mac:52:54:00:f7:d3:8e Iaid: IPaddr:192.168.50.126 Prefix:24 Hostname:embed-certs-044534 Clientid:01:52:54:00:f7:d3:8e}
	I0914 18:08:23.426584   62554 main.go:141] libmachine: (embed-certs-044534) DBG | domain embed-certs-044534 has defined IP address 192.168.50.126 and MAC address 52:54:00:f7:d3:8e in network mk-embed-certs-044534
	I0914 18:08:23.426745   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetSSHHostname
	I0914 18:08:23.428607   62554 main.go:141] libmachine: (embed-certs-044534) DBG | domain embed-certs-044534 has defined MAC address 52:54:00:f7:d3:8e in network mk-embed-certs-044534
	I0914 18:08:23.428985   62554 main.go:141] libmachine: (embed-certs-044534) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:d3:8e", ip: ""} in network mk-embed-certs-044534: {Iface:virbr3 ExpiryTime:2024-09-14 19:00:16 +0000 UTC Type:0 Mac:52:54:00:f7:d3:8e Iaid: IPaddr:192.168.50.126 Prefix:24 Hostname:embed-certs-044534 Clientid:01:52:54:00:f7:d3:8e}
	I0914 18:08:23.429016   62554 main.go:141] libmachine: (embed-certs-044534) DBG | domain embed-certs-044534 has defined IP address 192.168.50.126 and MAC address 52:54:00:f7:d3:8e in network mk-embed-certs-044534
	I0914 18:08:23.429138   62554 provision.go:143] copyHostCerts
	I0914 18:08:23.429198   62554 exec_runner.go:144] found /home/jenkins/minikube-integration/19643-8806/.minikube/ca.pem, removing ...
	I0914 18:08:23.429211   62554 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19643-8806/.minikube/ca.pem
	I0914 18:08:23.429295   62554 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19643-8806/.minikube/ca.pem (1082 bytes)
	I0914 18:08:23.429437   62554 exec_runner.go:144] found /home/jenkins/minikube-integration/19643-8806/.minikube/cert.pem, removing ...
	I0914 18:08:23.429452   62554 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19643-8806/.minikube/cert.pem
	I0914 18:08:23.429498   62554 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19643-8806/.minikube/cert.pem (1123 bytes)
	I0914 18:08:23.429592   62554 exec_runner.go:144] found /home/jenkins/minikube-integration/19643-8806/.minikube/key.pem, removing ...
	I0914 18:08:23.429600   62554 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19643-8806/.minikube/key.pem
	I0914 18:08:23.429626   62554 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19643-8806/.minikube/key.pem (1675 bytes)
	I0914 18:08:23.429680   62554 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19643-8806/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca-key.pem org=jenkins.embed-certs-044534 san=[127.0.0.1 192.168.50.126 embed-certs-044534 localhost minikube]
	I0914 18:08:23.538590   62554 provision.go:177] copyRemoteCerts
	I0914 18:08:23.538662   62554 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0914 18:08:23.538689   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetSSHHostname
	I0914 18:08:23.541366   62554 main.go:141] libmachine: (embed-certs-044534) DBG | domain embed-certs-044534 has defined MAC address 52:54:00:f7:d3:8e in network mk-embed-certs-044534
	I0914 18:08:23.541723   62554 main.go:141] libmachine: (embed-certs-044534) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:d3:8e", ip: ""} in network mk-embed-certs-044534: {Iface:virbr3 ExpiryTime:2024-09-14 19:00:16 +0000 UTC Type:0 Mac:52:54:00:f7:d3:8e Iaid: IPaddr:192.168.50.126 Prefix:24 Hostname:embed-certs-044534 Clientid:01:52:54:00:f7:d3:8e}
	I0914 18:08:23.541746   62554 main.go:141] libmachine: (embed-certs-044534) DBG | domain embed-certs-044534 has defined IP address 192.168.50.126 and MAC address 52:54:00:f7:d3:8e in network mk-embed-certs-044534
	I0914 18:08:23.541938   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetSSHPort
	I0914 18:08:23.542120   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetSSHKeyPath
	I0914 18:08:23.542303   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetSSHUsername
	I0914 18:08:23.542413   62554 sshutil.go:53] new ssh client: &{IP:192.168.50.126 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/embed-certs-044534/id_rsa Username:docker}
	I0914 18:08:23.623698   62554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0914 18:08:23.647378   62554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0914 18:08:23.671327   62554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0914 18:08:23.694570   62554 provision.go:87] duration metric: took 271.923979ms to configureAuth
	I0914 18:08:23.694598   62554 buildroot.go:189] setting minikube options for container-runtime
	I0914 18:08:23.694779   62554 config.go:182] Loaded profile config "embed-certs-044534": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0914 18:08:23.694868   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetSSHHostname
	I0914 18:08:23.697467   62554 main.go:141] libmachine: (embed-certs-044534) DBG | domain embed-certs-044534 has defined MAC address 52:54:00:f7:d3:8e in network mk-embed-certs-044534
	I0914 18:08:23.697828   62554 main.go:141] libmachine: (embed-certs-044534) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:d3:8e", ip: ""} in network mk-embed-certs-044534: {Iface:virbr3 ExpiryTime:2024-09-14 19:00:16 +0000 UTC Type:0 Mac:52:54:00:f7:d3:8e Iaid: IPaddr:192.168.50.126 Prefix:24 Hostname:embed-certs-044534 Clientid:01:52:54:00:f7:d3:8e}
	I0914 18:08:23.697862   62554 main.go:141] libmachine: (embed-certs-044534) DBG | domain embed-certs-044534 has defined IP address 192.168.50.126 and MAC address 52:54:00:f7:d3:8e in network mk-embed-certs-044534
	I0914 18:08:23.698042   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetSSHPort
	I0914 18:08:23.698249   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetSSHKeyPath
	I0914 18:08:23.698421   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetSSHKeyPath
	I0914 18:08:23.698571   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetSSHUsername
	I0914 18:08:23.698692   62554 main.go:141] libmachine: Using SSH client type: native
	I0914 18:08:23.698945   62554 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.50.126 22 <nil> <nil>}
	I0914 18:08:23.698963   62554 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0914 18:08:23.911661   62554 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0914 18:08:23.911697   62554 machine.go:96] duration metric: took 833.189197ms to provisionDockerMachine
	I0914 18:08:23.911712   62554 start.go:293] postStartSetup for "embed-certs-044534" (driver="kvm2")
	I0914 18:08:23.911726   62554 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0914 18:08:23.911751   62554 main.go:141] libmachine: (embed-certs-044534) Calling .DriverName
	I0914 18:08:23.912134   62554 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0914 18:08:23.912169   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetSSHHostname
	I0914 18:08:23.914579   62554 main.go:141] libmachine: (embed-certs-044534) DBG | domain embed-certs-044534 has defined MAC address 52:54:00:f7:d3:8e in network mk-embed-certs-044534
	I0914 18:08:23.914974   62554 main.go:141] libmachine: (embed-certs-044534) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:d3:8e", ip: ""} in network mk-embed-certs-044534: {Iface:virbr3 ExpiryTime:2024-09-14 19:00:16 +0000 UTC Type:0 Mac:52:54:00:f7:d3:8e Iaid: IPaddr:192.168.50.126 Prefix:24 Hostname:embed-certs-044534 Clientid:01:52:54:00:f7:d3:8e}
	I0914 18:08:23.915011   62554 main.go:141] libmachine: (embed-certs-044534) DBG | domain embed-certs-044534 has defined IP address 192.168.50.126 and MAC address 52:54:00:f7:d3:8e in network mk-embed-certs-044534
	I0914 18:08:23.915121   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetSSHPort
	I0914 18:08:23.915322   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetSSHKeyPath
	I0914 18:08:23.915582   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetSSHUsername
	I0914 18:08:23.915710   62554 sshutil.go:53] new ssh client: &{IP:192.168.50.126 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/embed-certs-044534/id_rsa Username:docker}
	I0914 18:08:23.996910   62554 ssh_runner.go:195] Run: cat /etc/os-release
	I0914 18:08:24.000900   62554 info.go:137] Remote host: Buildroot 2023.02.9
	I0914 18:08:24.000926   62554 filesync.go:126] Scanning /home/jenkins/minikube-integration/19643-8806/.minikube/addons for local assets ...
	I0914 18:08:24.000998   62554 filesync.go:126] Scanning /home/jenkins/minikube-integration/19643-8806/.minikube/files for local assets ...
	I0914 18:08:24.001099   62554 filesync.go:149] local asset: /home/jenkins/minikube-integration/19643-8806/.minikube/files/etc/ssl/certs/160162.pem -> 160162.pem in /etc/ssl/certs
	I0914 18:08:24.001222   62554 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0914 18:08:24.010496   62554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/files/etc/ssl/certs/160162.pem --> /etc/ssl/certs/160162.pem (1708 bytes)
	I0914 18:08:24.033377   62554 start.go:296] duration metric: took 121.65145ms for postStartSetup
	I0914 18:08:24.033414   62554 fix.go:56] duration metric: took 19.487140172s for fixHost
	I0914 18:08:24.033434   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetSSHHostname
	I0914 18:08:24.036188   62554 main.go:141] libmachine: (embed-certs-044534) DBG | domain embed-certs-044534 has defined MAC address 52:54:00:f7:d3:8e in network mk-embed-certs-044534
	I0914 18:08:24.036494   62554 main.go:141] libmachine: (embed-certs-044534) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:d3:8e", ip: ""} in network mk-embed-certs-044534: {Iface:virbr3 ExpiryTime:2024-09-14 19:00:16 +0000 UTC Type:0 Mac:52:54:00:f7:d3:8e Iaid: IPaddr:192.168.50.126 Prefix:24 Hostname:embed-certs-044534 Clientid:01:52:54:00:f7:d3:8e}
	I0914 18:08:24.036524   62554 main.go:141] libmachine: (embed-certs-044534) DBG | domain embed-certs-044534 has defined IP address 192.168.50.126 and MAC address 52:54:00:f7:d3:8e in network mk-embed-certs-044534
	I0914 18:08:24.036672   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetSSHPort
	I0914 18:08:24.036886   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetSSHKeyPath
	I0914 18:08:24.037082   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetSSHKeyPath
	I0914 18:08:24.037216   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetSSHUsername
	I0914 18:08:24.037375   62554 main.go:141] libmachine: Using SSH client type: native
	I0914 18:08:24.037542   62554 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.50.126 22 <nil> <nil>}
	I0914 18:08:24.037554   62554 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0914 18:08:24.142822   62554 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726337304.118879777
	
	I0914 18:08:24.142851   62554 fix.go:216] guest clock: 1726337304.118879777
	I0914 18:08:24.142862   62554 fix.go:229] Guest: 2024-09-14 18:08:24.118879777 +0000 UTC Remote: 2024-09-14 18:08:24.03341777 +0000 UTC m=+259.160200473 (delta=85.462007ms)
	I0914 18:08:24.142936   62554 fix.go:200] guest clock delta is within tolerance: 85.462007ms
	I0914 18:08:24.142960   62554 start.go:83] releasing machines lock for "embed-certs-044534", held for 19.596720856s
	I0914 18:08:24.142992   62554 main.go:141] libmachine: (embed-certs-044534) Calling .DriverName
	I0914 18:08:24.143262   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetIP
	I0914 18:08:24.146122   62554 main.go:141] libmachine: (embed-certs-044534) DBG | domain embed-certs-044534 has defined MAC address 52:54:00:f7:d3:8e in network mk-embed-certs-044534
	I0914 18:08:24.146501   62554 main.go:141] libmachine: (embed-certs-044534) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:d3:8e", ip: ""} in network mk-embed-certs-044534: {Iface:virbr3 ExpiryTime:2024-09-14 19:00:16 +0000 UTC Type:0 Mac:52:54:00:f7:d3:8e Iaid: IPaddr:192.168.50.126 Prefix:24 Hostname:embed-certs-044534 Clientid:01:52:54:00:f7:d3:8e}
	I0914 18:08:24.146537   62554 main.go:141] libmachine: (embed-certs-044534) DBG | domain embed-certs-044534 has defined IP address 192.168.50.126 and MAC address 52:54:00:f7:d3:8e in network mk-embed-certs-044534
	I0914 18:08:24.146711   62554 main.go:141] libmachine: (embed-certs-044534) Calling .DriverName
	I0914 18:08:24.147204   62554 main.go:141] libmachine: (embed-certs-044534) Calling .DriverName
	I0914 18:08:24.147430   62554 main.go:141] libmachine: (embed-certs-044534) Calling .DriverName
	I0914 18:08:24.147532   62554 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0914 18:08:24.147589   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetSSHHostname
	I0914 18:08:24.147813   62554 ssh_runner.go:195] Run: cat /version.json
	I0914 18:08:24.147839   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetSSHHostname
	I0914 18:08:24.150691   62554 main.go:141] libmachine: (embed-certs-044534) DBG | domain embed-certs-044534 has defined MAC address 52:54:00:f7:d3:8e in network mk-embed-certs-044534
	I0914 18:08:24.150736   62554 main.go:141] libmachine: (embed-certs-044534) DBG | domain embed-certs-044534 has defined MAC address 52:54:00:f7:d3:8e in network mk-embed-certs-044534
	I0914 18:08:24.151012   62554 main.go:141] libmachine: (embed-certs-044534) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:d3:8e", ip: ""} in network mk-embed-certs-044534: {Iface:virbr3 ExpiryTime:2024-09-14 19:00:16 +0000 UTC Type:0 Mac:52:54:00:f7:d3:8e Iaid: IPaddr:192.168.50.126 Prefix:24 Hostname:embed-certs-044534 Clientid:01:52:54:00:f7:d3:8e}
	I0914 18:08:24.151056   62554 main.go:141] libmachine: (embed-certs-044534) DBG | domain embed-certs-044534 has defined IP address 192.168.50.126 and MAC address 52:54:00:f7:d3:8e in network mk-embed-certs-044534
	I0914 18:08:24.151149   62554 main.go:141] libmachine: (embed-certs-044534) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:d3:8e", ip: ""} in network mk-embed-certs-044534: {Iface:virbr3 ExpiryTime:2024-09-14 19:00:16 +0000 UTC Type:0 Mac:52:54:00:f7:d3:8e Iaid: IPaddr:192.168.50.126 Prefix:24 Hostname:embed-certs-044534 Clientid:01:52:54:00:f7:d3:8e}
	I0914 18:08:24.151179   62554 main.go:141] libmachine: (embed-certs-044534) DBG | domain embed-certs-044534 has defined IP address 192.168.50.126 and MAC address 52:54:00:f7:d3:8e in network mk-embed-certs-044534
	I0914 18:08:24.151431   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetSSHPort
	I0914 18:08:24.151468   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetSSHPort
	I0914 18:08:24.151586   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetSSHKeyPath
	I0914 18:08:24.151751   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetSSHKeyPath
	I0914 18:08:24.151772   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetSSHUsername
	I0914 18:08:24.151938   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetSSHUsername
	I0914 18:08:24.151944   62554 sshutil.go:53] new ssh client: &{IP:192.168.50.126 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/embed-certs-044534/id_rsa Username:docker}
	I0914 18:08:24.152034   62554 sshutil.go:53] new ssh client: &{IP:192.168.50.126 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/embed-certs-044534/id_rsa Username:docker}
	I0914 18:08:24.256821   62554 ssh_runner.go:195] Run: systemctl --version
	I0914 18:08:24.263249   62554 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0914 18:08:24.411996   62554 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0914 18:08:24.418685   62554 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0914 18:08:24.418759   62554 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0914 18:08:24.434541   62554 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0914 18:08:24.434569   62554 start.go:495] detecting cgroup driver to use...
	I0914 18:08:24.434655   62554 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0914 18:08:24.452550   62554 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0914 18:08:24.467548   62554 docker.go:217] disabling cri-docker service (if available) ...
	I0914 18:08:24.467602   62554 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0914 18:08:24.482556   62554 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0914 18:08:24.497198   62554 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0914 18:08:24.625300   62554 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0914 18:08:24.805163   62554 docker.go:233] disabling docker service ...
	I0914 18:08:24.805248   62554 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0914 18:08:24.821164   62554 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0914 18:08:24.834886   62554 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0914 18:08:24.167885   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .Start
	I0914 18:08:24.168096   62996 main.go:141] libmachine: (old-k8s-version-556121) Ensuring networks are active...
	I0914 18:08:24.169086   62996 main.go:141] libmachine: (old-k8s-version-556121) Ensuring network default is active
	I0914 18:08:24.169493   62996 main.go:141] libmachine: (old-k8s-version-556121) Ensuring network mk-old-k8s-version-556121 is active
	I0914 18:08:24.170025   62996 main.go:141] libmachine: (old-k8s-version-556121) Getting domain xml...
	I0914 18:08:24.170619   62996 main.go:141] libmachine: (old-k8s-version-556121) Creating domain...
	I0914 18:08:24.963694   62554 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0914 18:08:25.081720   62554 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0914 18:08:25.097176   62554 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0914 18:08:25.116611   62554 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0914 18:08:25.116677   62554 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 18:08:25.129500   62554 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0914 18:08:25.129586   62554 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 18:08:25.140281   62554 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 18:08:25.150925   62554 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 18:08:25.166139   62554 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0914 18:08:25.177340   62554 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 18:08:25.187662   62554 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 18:08:25.207019   62554 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 18:08:25.217207   62554 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0914 18:08:25.226988   62554 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0914 18:08:25.227065   62554 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0914 18:08:25.248357   62554 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0914 18:08:25.258467   62554 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 18:08:25.375359   62554 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0914 18:08:25.470389   62554 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0914 18:08:25.470470   62554 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0914 18:08:25.475526   62554 start.go:563] Will wait 60s for crictl version
	I0914 18:08:25.475589   62554 ssh_runner.go:195] Run: which crictl
	I0914 18:08:25.479131   62554 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0914 18:08:25.530371   62554 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0914 18:08:25.530461   62554 ssh_runner.go:195] Run: crio --version
	I0914 18:08:25.557035   62554 ssh_runner.go:195] Run: crio --version
	I0914 18:08:25.586883   62554 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0914 18:08:25.588117   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetIP
	I0914 18:08:25.591212   62554 main.go:141] libmachine: (embed-certs-044534) DBG | domain embed-certs-044534 has defined MAC address 52:54:00:f7:d3:8e in network mk-embed-certs-044534
	I0914 18:08:25.591600   62554 main.go:141] libmachine: (embed-certs-044534) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:d3:8e", ip: ""} in network mk-embed-certs-044534: {Iface:virbr3 ExpiryTime:2024-09-14 19:00:16 +0000 UTC Type:0 Mac:52:54:00:f7:d3:8e Iaid: IPaddr:192.168.50.126 Prefix:24 Hostname:embed-certs-044534 Clientid:01:52:54:00:f7:d3:8e}
	I0914 18:08:25.591628   62554 main.go:141] libmachine: (embed-certs-044534) DBG | domain embed-certs-044534 has defined IP address 192.168.50.126 and MAC address 52:54:00:f7:d3:8e in network mk-embed-certs-044534
	I0914 18:08:25.591816   62554 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0914 18:08:25.595706   62554 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0914 18:08:25.608009   62554 kubeadm.go:883] updating cluster {Name:embed-certs-044534 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19643/minikube-v1.34.0-1726281733-19643-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726281268-19643@sha256:cce8be0c1ac4e3d852132008ef1cc1dcf5b79f708d025db83f146ae65db32e8e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-044534 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.126 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0914 18:08:25.608141   62554 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0914 18:08:25.608194   62554 ssh_runner.go:195] Run: sudo crictl images --output json
	I0914 18:08:25.643422   62554 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0914 18:08:25.643515   62554 ssh_runner.go:195] Run: which lz4
	I0914 18:08:25.647471   62554 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0914 18:08:25.651573   62554 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0914 18:08:25.651607   62554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I0914 18:08:26.985357   62554 crio.go:462] duration metric: took 1.337911722s to copy over tarball
	I0914 18:08:26.985437   62554 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0914 18:08:29.111492   62554 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.126022567s)
	I0914 18:08:29.111524   62554 crio.go:469] duration metric: took 2.12613646s to extract the tarball
	I0914 18:08:29.111533   62554 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0914 18:08:29.148426   62554 ssh_runner.go:195] Run: sudo crictl images --output json
	I0914 18:08:29.190595   62554 crio.go:514] all images are preloaded for cri-o runtime.
	I0914 18:08:29.190620   62554 cache_images.go:84] Images are preloaded, skipping loading
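
The preload decision above (crio.go:510/514, cache_images.go:84) is made by listing the runtime's images through `sudo crictl images --output json` and looking for the expected image references. A minimal sketch of that check, with a hypothetical helper name and not minikube's actual code:

// hasImage reports whether the CRI runtime already knows about the given
// image reference, by parsing `crictl images --output json`.
// Illustrative sketch only.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
	"strings"
)

type criImages struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

func hasImage(ref string) (bool, error) {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		return false, err
	}
	var imgs criImages
	if err := json.Unmarshal(out, &imgs); err != nil {
		return false, err
	}
	for _, img := range imgs.Images {
		for _, tag := range img.RepoTags {
			if strings.Contains(tag, ref) {
				return true, nil
			}
		}
	}
	return false, nil
}

func main() {
	found, err := hasImage("registry.k8s.io/kube-apiserver:v1.31.1")
	fmt.Println(found, err)
}
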
	I0914 18:08:29.190628   62554 kubeadm.go:934] updating node { 192.168.50.126 8443 v1.31.1 crio true true} ...
	I0914 18:08:29.190751   62554 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-044534 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.126
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:embed-certs-044534 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0914 18:08:29.190823   62554 ssh_runner.go:195] Run: crio config
	I0914 18:08:29.234785   62554 cni.go:84] Creating CNI manager for ""
	I0914 18:08:29.234808   62554 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0914 18:08:29.234818   62554 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0914 18:08:29.234871   62554 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.126 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-044534 NodeName:embed-certs-044534 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.126"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.126 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0914 18:08:29.234996   62554 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.126
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-044534"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.126
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.126"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0914 18:08:29.235054   62554 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0914 18:08:29.244554   62554 binaries.go:44] Found k8s binaries, skipping transfer
	I0914 18:08:29.244631   62554 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0914 18:08:29.253622   62554 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0914 18:08:29.270046   62554 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0914 18:08:29.285751   62554 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
	I0914 18:08:29.303567   62554 ssh_runner.go:195] Run: grep 192.168.50.126	control-plane.minikube.internal$ /etc/hosts
	I0914 18:08:29.307335   62554 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.126	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0914 18:08:29.319510   62554 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 18:08:29.442649   62554 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0914 18:08:29.459657   62554 certs.go:68] Setting up /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/embed-certs-044534 for IP: 192.168.50.126
	I0914 18:08:29.459687   62554 certs.go:194] generating shared ca certs ...
	I0914 18:08:29.459709   62554 certs.go:226] acquiring lock for ca certs: {Name:mkb663a3180967f5f94f0c355b2cd55067394331 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 18:08:29.459908   62554 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19643-8806/.minikube/ca.key
	I0914 18:08:29.459976   62554 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19643-8806/.minikube/proxy-client-ca.key
	I0914 18:08:29.459995   62554 certs.go:256] generating profile certs ...
	I0914 18:08:29.460166   62554 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/embed-certs-044534/client.key
	I0914 18:08:29.460247   62554 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/embed-certs-044534/apiserver.key.15c978c5
	I0914 18:08:29.460301   62554 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/embed-certs-044534/proxy-client.key
	I0914 18:08:29.460447   62554 certs.go:484] found cert: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/16016.pem (1338 bytes)
	W0914 18:08:29.460491   62554 certs.go:480] ignoring /home/jenkins/minikube-integration/19643-8806/.minikube/certs/16016_empty.pem, impossibly tiny 0 bytes
	I0914 18:08:29.460505   62554 certs.go:484] found cert: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca-key.pem (1679 bytes)
	I0914 18:08:29.460537   62554 certs.go:484] found cert: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca.pem (1082 bytes)
	I0914 18:08:29.460581   62554 certs.go:484] found cert: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/cert.pem (1123 bytes)
	I0914 18:08:29.460605   62554 certs.go:484] found cert: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/key.pem (1675 bytes)
	I0914 18:08:29.460649   62554 certs.go:484] found cert: /home/jenkins/minikube-integration/19643-8806/.minikube/files/etc/ssl/certs/160162.pem (1708 bytes)
	I0914 18:08:29.461415   62554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0914 18:08:29.501260   62554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0914 18:08:29.531940   62554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0914 18:08:29.577959   62554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0914 18:08:29.604067   62554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/embed-certs-044534/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0914 18:08:29.635335   62554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/embed-certs-044534/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0914 18:08:29.658841   62554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/embed-certs-044534/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0914 18:08:29.684149   62554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/embed-certs-044534/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0914 18:08:29.709354   62554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0914 18:08:29.733812   62554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/certs/16016.pem --> /usr/share/ca-certificates/16016.pem (1338 bytes)
	I0914 18:08:29.758427   62554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/files/etc/ssl/certs/160162.pem --> /usr/share/ca-certificates/160162.pem (1708 bytes)
	I0914 18:08:29.783599   62554 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0914 18:08:29.802188   62554 ssh_runner.go:195] Run: openssl version
	I0914 18:08:29.808277   62554 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0914 18:08:29.821167   62554 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0914 18:08:29.825911   62554 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 14 16:44 /usr/share/ca-certificates/minikubeCA.pem
	I0914 18:08:29.825978   62554 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0914 18:08:29.832160   62554 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0914 18:08:29.844395   62554 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16016.pem && ln -fs /usr/share/ca-certificates/16016.pem /etc/ssl/certs/16016.pem"
	I0914 18:08:29.856943   62554 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16016.pem
	I0914 18:08:29.861671   62554 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 14 17:01 /usr/share/ca-certificates/16016.pem
	I0914 18:08:29.861730   62554 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16016.pem
	I0914 18:08:29.867506   62554 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16016.pem /etc/ssl/certs/51391683.0"
	I0914 18:08:29.878004   62554 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/160162.pem && ln -fs /usr/share/ca-certificates/160162.pem /etc/ssl/certs/160162.pem"
	I0914 18:08:29.890322   62554 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/160162.pem
	I0914 18:08:29.894985   62554 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 14 17:01 /usr/share/ca-certificates/160162.pem
	I0914 18:08:29.895053   62554 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/160162.pem
	I0914 18:08:29.900837   62554 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/160162.pem /etc/ssl/certs/3ec20f2e.0"
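
Each `openssl x509 -hash -noout` / `ln -fs ... /etc/ssl/certs/<hash>.0` pair above installs a CA certificate under its OpenSSL subject-hash name (e.g. b5213941.0), which is how OpenSSL locates trusted CAs. A minimal sketch of that hash-and-symlink step, with hypothetical function names and not minikube's actual implementation:

// installCACert links a CA certificate into /etc/ssl/certs under its OpenSSL
// subject-hash name, mirroring the commands in the log above. Sketch only.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func installCACert(pemPath string) error {
	// `openssl x509 -hash -noout -in <cert>` prints the subject hash, e.g. "b5213941".
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", pemPath, err)
	}
	hash := strings.TrimSpace(string(out))
	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
	// OpenSSL looks CAs up by <hash>.0, so the symlink name is what matters.
	return exec.Command("sudo", "ln", "-fs", pemPath, link).Run()
}

func main() {
	if err := installCACert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Println(err)
	}
}
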
	I0914 18:08:25.409780   62996 main.go:141] libmachine: (old-k8s-version-556121) Waiting to get IP...
	I0914 18:08:25.410880   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 18:08:25.411287   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | unable to find current IP address of domain old-k8s-version-556121 in network mk-old-k8s-version-556121
	I0914 18:08:25.411359   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | I0914 18:08:25.411268   63916 retry.go:31] will retry after 190.165859ms: waiting for machine to come up
	I0914 18:08:25.602661   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 18:08:25.603210   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | unable to find current IP address of domain old-k8s-version-556121 in network mk-old-k8s-version-556121
	I0914 18:08:25.603235   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | I0914 18:08:25.603161   63916 retry.go:31] will retry after 274.368109ms: waiting for machine to come up
	I0914 18:08:25.879976   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 18:08:25.880476   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | unable to find current IP address of domain old-k8s-version-556121 in network mk-old-k8s-version-556121
	I0914 18:08:25.880509   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | I0914 18:08:25.880412   63916 retry.go:31] will retry after 476.865698ms: waiting for machine to come up
	I0914 18:08:26.359279   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 18:08:26.359815   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | unable to find current IP address of domain old-k8s-version-556121 in network mk-old-k8s-version-556121
	I0914 18:08:26.359845   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | I0914 18:08:26.359775   63916 retry.go:31] will retry after 474.163339ms: waiting for machine to come up
	I0914 18:08:26.835268   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 18:08:26.835953   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | unable to find current IP address of domain old-k8s-version-556121 in network mk-old-k8s-version-556121
	I0914 18:08:26.835983   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | I0914 18:08:26.835914   63916 retry.go:31] will retry after 567.661702ms: waiting for machine to come up
	I0914 18:08:27.404884   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 18:08:27.405341   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | unable to find current IP address of domain old-k8s-version-556121 in network mk-old-k8s-version-556121
	I0914 18:08:27.405370   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | I0914 18:08:27.405297   63916 retry.go:31] will retry after 852.429203ms: waiting for machine to come up
	I0914 18:08:28.259542   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 18:08:28.260217   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | unable to find current IP address of domain old-k8s-version-556121 in network mk-old-k8s-version-556121
	I0914 18:08:28.260243   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | I0914 18:08:28.260154   63916 retry.go:31] will retry after 1.085703288s: waiting for machine to come up
	I0914 18:08:29.347849   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 18:08:29.348268   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | unable to find current IP address of domain old-k8s-version-556121 in network mk-old-k8s-version-556121
	I0914 18:08:29.348289   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | I0914 18:08:29.348235   63916 retry.go:31] will retry after 1.387665735s: waiting for machine to come up
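
The repeated "will retry after ..." lines from retry.go:31 show a retry helper waiting for the libvirt domain to obtain a DHCP lease, with delays that grow (and are jittered) between attempts. A minimal sketch of that pattern, under the assumption of a simple multiplicative backoff; names are hypothetical and this is not minikube's retry implementation:

// retryWithBackoff calls fn until it succeeds, sleeping a growing, slightly
// jittered delay between attempts, up to a deadline. Illustrative sketch.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

func retryWithBackoff(fn func() error, initial, deadline time.Duration) error {
	delay := initial
	start := time.Now()
	for {
		err := fn()
		if err == nil {
			return nil
		}
		if time.Since(start) > deadline {
			return fmt.Errorf("timed out: %w", err)
		}
		// Grow the delay and add a little jitter, as the increasing
		// "will retry after" intervals above suggest.
		sleep := delay + time.Duration(rand.Int63n(int64(delay/2)+1))
		fmt.Printf("will retry after %v\n", sleep)
		time.Sleep(sleep)
		delay = delay * 3 / 2
	}
}

func main() {
	attempts := 0
	err := retryWithBackoff(func() error {
		attempts++
		if attempts < 4 {
			return errors.New("waiting for machine to come up")
		}
		return nil
	}, 200*time.Millisecond, 30*time.Second)
	fmt.Println(err)
}
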
	I0914 18:08:29.911102   62554 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0914 18:08:29.915546   62554 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0914 18:08:29.921470   62554 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0914 18:08:29.927238   62554 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0914 18:08:29.933122   62554 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0914 18:08:29.938829   62554 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0914 18:08:29.944811   62554 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0914 18:08:29.950679   62554 kubeadm.go:392] StartCluster: {Name:embed-certs-044534 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19643/minikube-v1.34.0-1726281733-19643-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726281268-19643@sha256:cce8be0c1ac4e3d852132008ef1cc1dcf5b79f708d025db83f146ae65db32e8e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31
.1 ClusterName:embed-certs-044534 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.126 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fal
se MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0914 18:08:29.950762   62554 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0914 18:08:29.950866   62554 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0914 18:08:29.987553   62554 cri.go:89] found id: ""
	I0914 18:08:29.987626   62554 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0914 18:08:29.998690   62554 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0914 18:08:29.998713   62554 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0914 18:08:29.998765   62554 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0914 18:08:30.009411   62554 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0914 18:08:30.010804   62554 kubeconfig.go:125] found "embed-certs-044534" server: "https://192.168.50.126:8443"
	I0914 18:08:30.013635   62554 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0914 18:08:30.023903   62554 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.126
	I0914 18:08:30.023937   62554 kubeadm.go:1160] stopping kube-system containers ...
	I0914 18:08:30.023951   62554 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0914 18:08:30.024017   62554 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0914 18:08:30.067767   62554 cri.go:89] found id: ""
	I0914 18:08:30.067842   62554 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0914 18:08:30.087326   62554 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0914 18:08:30.098162   62554 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0914 18:08:30.098180   62554 kubeadm.go:157] found existing configuration files:
	
	I0914 18:08:30.098218   62554 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0914 18:08:30.108239   62554 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0914 18:08:30.108296   62554 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0914 18:08:30.118913   62554 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0914 18:08:30.129091   62554 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0914 18:08:30.129172   62554 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0914 18:08:30.139658   62554 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0914 18:08:30.148838   62554 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0914 18:08:30.148923   62554 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0914 18:08:30.158386   62554 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0914 18:08:30.167282   62554 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0914 18:08:30.167354   62554 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0914 18:08:30.176443   62554 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0914 18:08:30.185476   62554 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 18:08:30.310603   62554 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 18:08:31.243123   62554 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0914 18:08:31.457657   62554 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 18:08:31.531992   62554 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0914 18:08:31.625580   62554 api_server.go:52] waiting for apiserver process to appear ...
	I0914 18:08:31.625683   62554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:08:32.125744   62554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:08:32.626056   62554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:08:33.126817   62554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:08:33.146478   62554 api_server.go:72] duration metric: took 1.520896575s to wait for apiserver process to appear ...
	I0914 18:08:33.146517   62554 api_server.go:88] waiting for apiserver healthz status ...
	I0914 18:08:33.146543   62554 api_server.go:253] Checking apiserver healthz at https://192.168.50.126:8443/healthz ...
	I0914 18:08:33.147106   62554 api_server.go:269] stopped: https://192.168.50.126:8443/healthz: Get "https://192.168.50.126:8443/healthz": dial tcp 192.168.50.126:8443: connect: connection refused
	I0914 18:08:33.646672   62554 api_server.go:253] Checking apiserver healthz at https://192.168.50.126:8443/healthz ...
	I0914 18:08:30.737338   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 18:08:30.737792   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | unable to find current IP address of domain old-k8s-version-556121 in network mk-old-k8s-version-556121
	I0914 18:08:30.737844   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | I0914 18:08:30.737738   63916 retry.go:31] will retry after 1.803773185s: waiting for machine to come up
	I0914 18:08:32.543684   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 18:08:32.544156   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | unable to find current IP address of domain old-k8s-version-556121 in network mk-old-k8s-version-556121
	I0914 18:08:32.544182   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | I0914 18:08:32.544107   63916 retry.go:31] will retry after 1.828120666s: waiting for machine to come up
	I0914 18:08:34.373701   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 18:08:34.374182   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | unable to find current IP address of domain old-k8s-version-556121 in network mk-old-k8s-version-556121
	I0914 18:08:34.374211   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | I0914 18:08:34.374120   63916 retry.go:31] will retry after 2.720782735s: waiting for machine to come up
	I0914 18:08:35.687169   62554 api_server.go:279] https://192.168.50.126:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0914 18:08:35.687200   62554 api_server.go:103] status: https://192.168.50.126:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0914 18:08:35.687221   62554 api_server.go:253] Checking apiserver healthz at https://192.168.50.126:8443/healthz ...
	I0914 18:08:35.737352   62554 api_server.go:279] https://192.168.50.126:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0914 18:08:35.737410   62554 api_server.go:103] status: https://192.168.50.126:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0914 18:08:36.146777   62554 api_server.go:253] Checking apiserver healthz at https://192.168.50.126:8443/healthz ...
	I0914 18:08:36.151156   62554 api_server.go:279] https://192.168.50.126:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0914 18:08:36.151185   62554 api_server.go:103] status: https://192.168.50.126:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0914 18:08:36.647380   62554 api_server.go:253] Checking apiserver healthz at https://192.168.50.126:8443/healthz ...
	I0914 18:08:36.655444   62554 api_server.go:279] https://192.168.50.126:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0914 18:08:36.655477   62554 api_server.go:103] status: https://192.168.50.126:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0914 18:08:37.146971   62554 api_server.go:253] Checking apiserver healthz at https://192.168.50.126:8443/healthz ...
	I0914 18:08:37.151233   62554 api_server.go:279] https://192.168.50.126:8443/healthz returned 200:
	ok
	I0914 18:08:37.160642   62554 api_server.go:141] control plane version: v1.31.1
	I0914 18:08:37.160671   62554 api_server.go:131] duration metric: took 4.014146932s to wait for apiserver health ...
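
The healthz wait above polls https://<node-ip>:8443/healthz every ~500ms, treating connection refused, 403 (anonymous user before RBAC bootstrap completes) and 500 (post-start hooks still failing) as "not ready yet", until the endpoint returns 200 "ok". A minimal sketch of such a poll loop, assuming the apiserver's self-signed certificate is skipped for this probe; this is not api_server.go's actual code:

// waitForHealthz polls the apiserver /healthz endpoint until it returns
// HTTP 200 or the timeout elapses. Illustrative sketch only.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func waitForHealthz(url string, timeout time.Duration) error {
	// Assumption for the sketch: skip TLS verification for this probe only,
	// since the apiserver serves a cluster-local certificate.
	client := &http.Client{
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   2 * time.Second,
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // healthz returned 200 "ok"
			}
			// 403 and 500 mean the apiserver is up but not ready; keep polling.
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver at %s never became healthy", url)
}

func main() {
	fmt.Println(waitForHealthz("https://192.168.50.126:8443/healthz", 4*time.Minute))
}
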
	I0914 18:08:37.160679   62554 cni.go:84] Creating CNI manager for ""
	I0914 18:08:37.160686   62554 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0914 18:08:37.162836   62554 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0914 18:08:37.164378   62554 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0914 18:08:37.183377   62554 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
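
The 496-byte file copied to /etc/cni/net.d/1-k8s.conflist above configures the bridge CNI chain for the 10.244.0.0/16 pod CIDR. The log does not show the file's contents; the following is an illustrative example of a typical bridge + portmap conflist for this setup, not the exact file minikube writes:

{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/16",
        "routes": [{ "dst": "0.0.0.0/0" }]
      }
    },
    {
      "type": "portmap",
      "capabilities": { "portMappings": true }
    }
  ]
}
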
	I0914 18:08:37.210701   62554 system_pods.go:43] waiting for kube-system pods to appear ...
	I0914 18:08:37.222258   62554 system_pods.go:59] 8 kube-system pods found
	I0914 18:08:37.222304   62554 system_pods.go:61] "coredns-7c65d6cfc9-59dm5" [55e67ff8-cf54-41fc-af46-160085787f69] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0914 18:08:37.222316   62554 system_pods.go:61] "etcd-embed-certs-044534" [932ca8e3-a777-4bb3-bdc2-6c1f1d293d4a] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0914 18:08:37.222331   62554 system_pods.go:61] "kube-apiserver-embed-certs-044534" [f71e6720-c32c-426f-8620-b56eadf5e33b] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0914 18:08:37.222351   62554 system_pods.go:61] "kube-controller-manager-embed-certs-044534" [b93c261f-303f-43bb-8b33-4f97dc287809] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0914 18:08:37.222359   62554 system_pods.go:61] "kube-proxy-nkdth" [3762b613-c50f-4ba9-af52-371b139f9b6e] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0914 18:08:37.222368   62554 system_pods.go:61] "kube-scheduler-embed-certs-044534" [65da2ca2-0405-4726-a2dc-dd13519c336a] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0914 18:08:37.222377   62554 system_pods.go:61] "metrics-server-6867b74b74-stwfz" [ccc73057-4710-4e41-b643-d793d9b01175] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 18:08:37.222393   62554 system_pods.go:61] "storage-provisioner" [660fd3e3-ce57-4275-9fe1-bcceba75d8a0] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0914 18:08:37.222405   62554 system_pods.go:74] duration metric: took 11.676128ms to wait for pod list to return data ...
	I0914 18:08:37.222420   62554 node_conditions.go:102] verifying NodePressure condition ...
	I0914 18:08:37.227047   62554 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0914 18:08:37.227087   62554 node_conditions.go:123] node cpu capacity is 2
	I0914 18:08:37.227104   62554 node_conditions.go:105] duration metric: took 4.678826ms to run NodePressure ...
	I0914 18:08:37.227124   62554 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 18:08:37.510868   62554 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0914 18:08:37.515839   62554 kubeadm.go:739] kubelet initialised
	I0914 18:08:37.515863   62554 kubeadm.go:740] duration metric: took 4.967389ms waiting for restarted kubelet to initialise ...
	I0914 18:08:37.515871   62554 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 18:08:37.520412   62554 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-59dm5" in "kube-system" namespace to be "Ready" ...
	I0914 18:08:39.528469   62554 pod_ready.go:103] pod "coredns-7c65d6cfc9-59dm5" in "kube-system" namespace has status "Ready":"False"
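
pod_ready.go above keeps reporting "Ready":"False" for coredns because it inspects the pod's PodReady condition rather than just the phase. A minimal sketch of that condition check using the k8s.io/api/core/v1 types; the helper name is hypothetical and this is not minikube's code:

// isPodReady reports whether a pod's PodReady condition is True.
// Illustrative sketch of the "Ready" check performed above.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func isPodReady(pod *corev1.Pod) bool {
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			return cond.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	pod := &corev1.Pod{
		Status: corev1.PodStatus{
			Conditions: []corev1.PodCondition{
				{Type: corev1.PodReady, Status: corev1.ConditionFalse},
			},
		},
	}
	fmt.Println(isPodReady(pod)) // false, like the coredns pod above
}
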
	I0914 18:08:37.097976   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 18:08:37.098462   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | unable to find current IP address of domain old-k8s-version-556121 in network mk-old-k8s-version-556121
	I0914 18:08:37.098499   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | I0914 18:08:37.098402   63916 retry.go:31] will retry after 2.748765758s: waiting for machine to come up
	I0914 18:08:39.849058   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 18:08:39.849634   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | unable to find current IP address of domain old-k8s-version-556121 in network mk-old-k8s-version-556121
	I0914 18:08:39.849665   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | I0914 18:08:39.849559   63916 retry.go:31] will retry after 3.687679512s: waiting for machine to come up
	I0914 18:08:42.028017   62554 pod_ready.go:103] pod "coredns-7c65d6cfc9-59dm5" in "kube-system" namespace has status "Ready":"False"
	I0914 18:08:44.526502   62554 pod_ready.go:103] pod "coredns-7c65d6cfc9-59dm5" in "kube-system" namespace has status "Ready":"False"
	I0914 18:08:45.103061   63448 start.go:364] duration metric: took 2m4.701591278s to acquireMachinesLock for "default-k8s-diff-port-243449"
	I0914 18:08:45.103116   63448 start.go:96] Skipping create...Using existing machine configuration
	I0914 18:08:45.103124   63448 fix.go:54] fixHost starting: 
	I0914 18:08:45.103555   63448 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 18:08:45.103626   63448 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 18:08:45.120496   63448 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34337
	I0914 18:08:45.121098   63448 main.go:141] libmachine: () Calling .GetVersion
	I0914 18:08:45.122023   63448 main.go:141] libmachine: Using API Version  1
	I0914 18:08:45.122050   63448 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 18:08:45.122440   63448 main.go:141] libmachine: () Calling .GetMachineName
	I0914 18:08:45.122631   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .DriverName
	I0914 18:08:45.122792   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetState
	I0914 18:08:45.124473   63448 fix.go:112] recreateIfNeeded on default-k8s-diff-port-243449: state=Stopped err=<nil>
	I0914 18:08:45.124500   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .DriverName
	W0914 18:08:45.124633   63448 fix.go:138] unexpected machine state, will restart: <nil>
	I0914 18:08:45.126255   63448 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-243449" ...
	I0914 18:08:45.127296   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .Start
	I0914 18:08:45.127469   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Ensuring networks are active...
	I0914 18:08:45.128415   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Ensuring network default is active
	I0914 18:08:45.128823   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Ensuring network mk-default-k8s-diff-port-243449 is active
	I0914 18:08:45.129257   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Getting domain xml...
	I0914 18:08:45.130055   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Creating domain...
	I0914 18:08:43.541607   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 18:08:43.542188   62996 main.go:141] libmachine: (old-k8s-version-556121) Found IP for machine: 192.168.83.80
	I0914 18:08:43.542220   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has current primary IP address 192.168.83.80 and MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 18:08:43.542230   62996 main.go:141] libmachine: (old-k8s-version-556121) Reserving static IP address...
	I0914 18:08:43.542686   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | found host DHCP lease matching {name: "old-k8s-version-556121", mac: "52:54:00:76:25:ab", ip: "192.168.83.80"} in network mk-old-k8s-version-556121: {Iface:virbr1 ExpiryTime:2024-09-14 19:08:35 +0000 UTC Type:0 Mac:52:54:00:76:25:ab Iaid: IPaddr:192.168.83.80 Prefix:24 Hostname:old-k8s-version-556121 Clientid:01:52:54:00:76:25:ab}
	I0914 18:08:43.542711   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | skip adding static IP to network mk-old-k8s-version-556121 - found existing host DHCP lease matching {name: "old-k8s-version-556121", mac: "52:54:00:76:25:ab", ip: "192.168.83.80"}
	I0914 18:08:43.542728   62996 main.go:141] libmachine: (old-k8s-version-556121) Reserved static IP address: 192.168.83.80
	I0914 18:08:43.542748   62996 main.go:141] libmachine: (old-k8s-version-556121) Waiting for SSH to be available...
	I0914 18:08:43.542770   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | Getting to WaitForSSH function...
	I0914 18:08:43.545361   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 18:08:43.545798   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:25:ab", ip: ""} in network mk-old-k8s-version-556121: {Iface:virbr1 ExpiryTime:2024-09-14 19:08:35 +0000 UTC Type:0 Mac:52:54:00:76:25:ab Iaid: IPaddr:192.168.83.80 Prefix:24 Hostname:old-k8s-version-556121 Clientid:01:52:54:00:76:25:ab}
	I0914 18:08:43.545828   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined IP address 192.168.83.80 and MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 18:08:43.545984   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | Using SSH client type: external
	I0914 18:08:43.546021   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | Using SSH private key: /home/jenkins/minikube-integration/19643-8806/.minikube/machines/old-k8s-version-556121/id_rsa (-rw-------)
	I0914 18:08:43.546067   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.83.80 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19643-8806/.minikube/machines/old-k8s-version-556121/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0914 18:08:43.546091   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | About to run SSH command:
	I0914 18:08:43.546109   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | exit 0
	I0914 18:08:43.686605   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | SSH cmd err, output: <nil>: 
	I0914 18:08:43.687033   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetConfigRaw
	I0914 18:08:43.750102   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetIP
	I0914 18:08:43.753303   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 18:08:43.753653   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:25:ab", ip: ""} in network mk-old-k8s-version-556121: {Iface:virbr1 ExpiryTime:2024-09-14 19:08:35 +0000 UTC Type:0 Mac:52:54:00:76:25:ab Iaid: IPaddr:192.168.83.80 Prefix:24 Hostname:old-k8s-version-556121 Clientid:01:52:54:00:76:25:ab}
	I0914 18:08:43.753696   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined IP address 192.168.83.80 and MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 18:08:43.754107   62996 profile.go:143] Saving config to /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/old-k8s-version-556121/config.json ...
	I0914 18:08:43.802426   62996 machine.go:93] provisionDockerMachine start ...
	I0914 18:08:43.802497   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .DriverName
	I0914 18:08:43.802858   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHHostname
	I0914 18:08:43.805944   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 18:08:43.806303   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:25:ab", ip: ""} in network mk-old-k8s-version-556121: {Iface:virbr1 ExpiryTime:2024-09-14 19:08:35 +0000 UTC Type:0 Mac:52:54:00:76:25:ab Iaid: IPaddr:192.168.83.80 Prefix:24 Hostname:old-k8s-version-556121 Clientid:01:52:54:00:76:25:ab}
	I0914 18:08:43.806346   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined IP address 192.168.83.80 and MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 18:08:43.806722   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHPort
	I0914 18:08:43.806951   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHKeyPath
	I0914 18:08:43.807130   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHKeyPath
	I0914 18:08:43.807298   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHUsername
	I0914 18:08:43.807469   62996 main.go:141] libmachine: Using SSH client type: native
	I0914 18:08:43.807687   62996 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.83.80 22 <nil> <nil>}
	I0914 18:08:43.807700   62996 main.go:141] libmachine: About to run SSH command:
	hostname
	I0914 18:08:43.906427   62996 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0914 18:08:43.906467   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetMachineName
	I0914 18:08:43.906725   62996 buildroot.go:166] provisioning hostname "old-k8s-version-556121"
	I0914 18:08:43.906787   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetMachineName
	I0914 18:08:43.906978   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHHostname
	I0914 18:08:43.909891   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 18:08:43.910262   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:25:ab", ip: ""} in network mk-old-k8s-version-556121: {Iface:virbr1 ExpiryTime:2024-09-14 19:08:35 +0000 UTC Type:0 Mac:52:54:00:76:25:ab Iaid: IPaddr:192.168.83.80 Prefix:24 Hostname:old-k8s-version-556121 Clientid:01:52:54:00:76:25:ab}
	I0914 18:08:43.910295   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined IP address 192.168.83.80 and MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 18:08:43.910545   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHPort
	I0914 18:08:43.910771   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHKeyPath
	I0914 18:08:43.910908   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHKeyPath
	I0914 18:08:43.911062   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHUsername
	I0914 18:08:43.911221   62996 main.go:141] libmachine: Using SSH client type: native
	I0914 18:08:43.911418   62996 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.83.80 22 <nil> <nil>}
	I0914 18:08:43.911430   62996 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-556121 && echo "old-k8s-version-556121" | sudo tee /etc/hostname
	I0914 18:08:44.028748   62996 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-556121
	
	I0914 18:08:44.028774   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHHostname
	I0914 18:08:44.031512   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 18:08:44.031824   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:25:ab", ip: ""} in network mk-old-k8s-version-556121: {Iface:virbr1 ExpiryTime:2024-09-14 19:08:35 +0000 UTC Type:0 Mac:52:54:00:76:25:ab Iaid: IPaddr:192.168.83.80 Prefix:24 Hostname:old-k8s-version-556121 Clientid:01:52:54:00:76:25:ab}
	I0914 18:08:44.031848   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined IP address 192.168.83.80 and MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 18:08:44.032009   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHPort
	I0914 18:08:44.032145   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHKeyPath
	I0914 18:08:44.032311   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHKeyPath
	I0914 18:08:44.032445   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHUsername
	I0914 18:08:44.032583   62996 main.go:141] libmachine: Using SSH client type: native
	I0914 18:08:44.032792   62996 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.83.80 22 <nil> <nil>}
	I0914 18:08:44.032809   62996 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-556121' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-556121/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-556121' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0914 18:08:44.140041   62996 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0914 18:08:44.140068   62996 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19643-8806/.minikube CaCertPath:/home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19643-8806/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19643-8806/.minikube}
	I0914 18:08:44.140094   62996 buildroot.go:174] setting up certificates
	I0914 18:08:44.140103   62996 provision.go:84] configureAuth start
	I0914 18:08:44.140111   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetMachineName
	I0914 18:08:44.140439   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetIP
	I0914 18:08:44.143050   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 18:08:44.143454   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:25:ab", ip: ""} in network mk-old-k8s-version-556121: {Iface:virbr1 ExpiryTime:2024-09-14 19:08:35 +0000 UTC Type:0 Mac:52:54:00:76:25:ab Iaid: IPaddr:192.168.83.80 Prefix:24 Hostname:old-k8s-version-556121 Clientid:01:52:54:00:76:25:ab}
	I0914 18:08:44.143492   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined IP address 192.168.83.80 and MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 18:08:44.143678   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHHostname
	I0914 18:08:44.146487   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 18:08:44.146947   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:25:ab", ip: ""} in network mk-old-k8s-version-556121: {Iface:virbr1 ExpiryTime:2024-09-14 19:08:35 +0000 UTC Type:0 Mac:52:54:00:76:25:ab Iaid: IPaddr:192.168.83.80 Prefix:24 Hostname:old-k8s-version-556121 Clientid:01:52:54:00:76:25:ab}
	I0914 18:08:44.146971   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined IP address 192.168.83.80 and MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 18:08:44.147147   62996 provision.go:143] copyHostCerts
	I0914 18:08:44.147213   62996 exec_runner.go:144] found /home/jenkins/minikube-integration/19643-8806/.minikube/ca.pem, removing ...
	I0914 18:08:44.147224   62996 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19643-8806/.minikube/ca.pem
	I0914 18:08:44.147287   62996 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19643-8806/.minikube/ca.pem (1082 bytes)
	I0914 18:08:44.147440   62996 exec_runner.go:144] found /home/jenkins/minikube-integration/19643-8806/.minikube/cert.pem, removing ...
	I0914 18:08:44.147450   62996 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19643-8806/.minikube/cert.pem
	I0914 18:08:44.147475   62996 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19643-8806/.minikube/cert.pem (1123 bytes)
	I0914 18:08:44.147530   62996 exec_runner.go:144] found /home/jenkins/minikube-integration/19643-8806/.minikube/key.pem, removing ...
	I0914 18:08:44.147538   62996 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19643-8806/.minikube/key.pem
	I0914 18:08:44.147558   62996 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19643-8806/.minikube/key.pem (1675 bytes)
	I0914 18:08:44.147613   62996 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19643-8806/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-556121 san=[127.0.0.1 192.168.83.80 localhost minikube old-k8s-version-556121]
	I0914 18:08:44.500305   62996 provision.go:177] copyRemoteCerts
	I0914 18:08:44.500395   62996 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0914 18:08:44.500430   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHHostname
	I0914 18:08:44.503376   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 18:08:44.503790   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:25:ab", ip: ""} in network mk-old-k8s-version-556121: {Iface:virbr1 ExpiryTime:2024-09-14 19:08:35 +0000 UTC Type:0 Mac:52:54:00:76:25:ab Iaid: IPaddr:192.168.83.80 Prefix:24 Hostname:old-k8s-version-556121 Clientid:01:52:54:00:76:25:ab}
	I0914 18:08:44.503828   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined IP address 192.168.83.80 and MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 18:08:44.503972   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHPort
	I0914 18:08:44.504194   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHKeyPath
	I0914 18:08:44.504352   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHUsername
	I0914 18:08:44.504531   62996 sshutil.go:53] new ssh client: &{IP:192.168.83.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/old-k8s-version-556121/id_rsa Username:docker}
	I0914 18:08:44.584362   62996 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0914 18:08:44.607734   62996 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0914 18:08:44.630267   62996 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0914 18:08:44.653997   62996 provision.go:87] duration metric: took 513.857804ms to configureAuth
	I0914 18:08:44.654029   62996 buildroot.go:189] setting minikube options for container-runtime
	I0914 18:08:44.654259   62996 config.go:182] Loaded profile config "old-k8s-version-556121": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0914 18:08:44.654338   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHHostname
	I0914 18:08:44.657020   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 18:08:44.657416   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:25:ab", ip: ""} in network mk-old-k8s-version-556121: {Iface:virbr1 ExpiryTime:2024-09-14 19:08:35 +0000 UTC Type:0 Mac:52:54:00:76:25:ab Iaid: IPaddr:192.168.83.80 Prefix:24 Hostname:old-k8s-version-556121 Clientid:01:52:54:00:76:25:ab}
	I0914 18:08:44.657442   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined IP address 192.168.83.80 and MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 18:08:44.657676   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHPort
	I0914 18:08:44.657884   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHKeyPath
	I0914 18:08:44.658047   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHKeyPath
	I0914 18:08:44.658228   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHUsername
	I0914 18:08:44.658382   62996 main.go:141] libmachine: Using SSH client type: native
	I0914 18:08:44.658584   62996 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.83.80 22 <nil> <nil>}
	I0914 18:08:44.658602   62996 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0914 18:08:44.877074   62996 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0914 18:08:44.877103   62996 machine.go:96] duration metric: took 1.074648772s to provisionDockerMachine
	I0914 18:08:44.877117   62996 start.go:293] postStartSetup for "old-k8s-version-556121" (driver="kvm2")
	I0914 18:08:44.877128   62996 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0914 18:08:44.877155   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .DriverName
	I0914 18:08:44.877491   62996 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0914 18:08:44.877522   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHHostname
	I0914 18:08:44.880792   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 18:08:44.881167   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:25:ab", ip: ""} in network mk-old-k8s-version-556121: {Iface:virbr1 ExpiryTime:2024-09-14 19:08:35 +0000 UTC Type:0 Mac:52:54:00:76:25:ab Iaid: IPaddr:192.168.83.80 Prefix:24 Hostname:old-k8s-version-556121 Clientid:01:52:54:00:76:25:ab}
	I0914 18:08:44.881197   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined IP address 192.168.83.80 and MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 18:08:44.881472   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHPort
	I0914 18:08:44.881693   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHKeyPath
	I0914 18:08:44.881853   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHUsername
	I0914 18:08:44.881984   62996 sshutil.go:53] new ssh client: &{IP:192.168.83.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/old-k8s-version-556121/id_rsa Username:docker}
	I0914 18:08:44.961211   62996 ssh_runner.go:195] Run: cat /etc/os-release
	I0914 18:08:44.965472   62996 info.go:137] Remote host: Buildroot 2023.02.9
	I0914 18:08:44.965507   62996 filesync.go:126] Scanning /home/jenkins/minikube-integration/19643-8806/.minikube/addons for local assets ...
	I0914 18:08:44.965583   62996 filesync.go:126] Scanning /home/jenkins/minikube-integration/19643-8806/.minikube/files for local assets ...
	I0914 18:08:44.965671   62996 filesync.go:149] local asset: /home/jenkins/minikube-integration/19643-8806/.minikube/files/etc/ssl/certs/160162.pem -> 160162.pem in /etc/ssl/certs
	I0914 18:08:44.965765   62996 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0914 18:08:44.975476   62996 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/files/etc/ssl/certs/160162.pem --> /etc/ssl/certs/160162.pem (1708 bytes)
	I0914 18:08:45.000248   62996 start.go:296] duration metric: took 123.115178ms for postStartSetup
	I0914 18:08:45.000299   62996 fix.go:56] duration metric: took 20.85719914s for fixHost
	I0914 18:08:45.000326   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHHostname
	I0914 18:08:45.002894   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 18:08:45.003216   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:25:ab", ip: ""} in network mk-old-k8s-version-556121: {Iface:virbr1 ExpiryTime:2024-09-14 19:08:35 +0000 UTC Type:0 Mac:52:54:00:76:25:ab Iaid: IPaddr:192.168.83.80 Prefix:24 Hostname:old-k8s-version-556121 Clientid:01:52:54:00:76:25:ab}
	I0914 18:08:45.003247   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined IP address 192.168.83.80 and MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 18:08:45.003407   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHPort
	I0914 18:08:45.003585   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHKeyPath
	I0914 18:08:45.003749   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHKeyPath
	I0914 18:08:45.003880   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHUsername
	I0914 18:08:45.004041   62996 main.go:141] libmachine: Using SSH client type: native
	I0914 18:08:45.004211   62996 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.83.80 22 <nil> <nil>}
	I0914 18:08:45.004221   62996 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0914 18:08:45.102905   62996 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726337325.064071007
	
	I0914 18:08:45.102933   62996 fix.go:216] guest clock: 1726337325.064071007
	I0914 18:08:45.102944   62996 fix.go:229] Guest: 2024-09-14 18:08:45.064071007 +0000 UTC Remote: 2024-09-14 18:08:45.000305051 +0000 UTC m=+219.697616364 (delta=63.765956ms)
	I0914 18:08:45.102967   62996 fix.go:200] guest clock delta is within tolerance: 63.765956ms
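The three fix.go lines above reconcile the guest clock with the host clock: the guest timestamp comes from `date +%s.%N`, the host timestamp is taken when the command returns, and the difference is checked against a drift tolerance. The following is a minimal Go sketch of that arithmetic (not minikube's code), using the two timestamps from this log; the 1s tolerance is an assumption for illustration, not necessarily minikube's actual threshold.

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

func main() {
	// Guest-side output of `date +%s.%N`, copied from the log above.
	guestRaw := "1726337325.064071007"
	parts := strings.SplitN(guestRaw, ".", 2)
	sec, _ := strconv.ParseInt(parts[0], 10, 64)
	nsec, _ := strconv.ParseInt(parts[1], 10, 64)
	guest := time.Unix(sec, nsec).UTC()

	// Host-side timestamp recorded when the SSH command returned (also from the log).
	host := time.Date(2024, time.September, 14, 18, 8, 45, 305051, time.UTC)

	delta := guest.Sub(host)
	fmt.Printf("guest clock delta: %v\n", delta) // prints 63.765956ms, matching the log

	const tolerance = time.Second // assumed tolerance, for illustration only
	fmt.Println("within tolerance:", delta < tolerance && delta > -tolerance)
}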
	I0914 18:08:45.102973   62996 start.go:83] releasing machines lock for "old-k8s-version-556121", held for 20.959903428s
	I0914 18:08:45.102999   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .DriverName
	I0914 18:08:45.103277   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetIP
	I0914 18:08:45.105995   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 18:08:45.106435   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:25:ab", ip: ""} in network mk-old-k8s-version-556121: {Iface:virbr1 ExpiryTime:2024-09-14 19:08:35 +0000 UTC Type:0 Mac:52:54:00:76:25:ab Iaid: IPaddr:192.168.83.80 Prefix:24 Hostname:old-k8s-version-556121 Clientid:01:52:54:00:76:25:ab}
	I0914 18:08:45.106463   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined IP address 192.168.83.80 and MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 18:08:45.106684   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .DriverName
	I0914 18:08:45.107224   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .DriverName
	I0914 18:08:45.107415   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .DriverName
	I0914 18:08:45.107506   62996 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0914 18:08:45.107556   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHHostname
	I0914 18:08:45.107675   62996 ssh_runner.go:195] Run: cat /version.json
	I0914 18:08:45.107699   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHHostname
	I0914 18:08:45.110528   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 18:08:45.110558   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 18:08:45.110917   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:25:ab", ip: ""} in network mk-old-k8s-version-556121: {Iface:virbr1 ExpiryTime:2024-09-14 19:08:35 +0000 UTC Type:0 Mac:52:54:00:76:25:ab Iaid: IPaddr:192.168.83.80 Prefix:24 Hostname:old-k8s-version-556121 Clientid:01:52:54:00:76:25:ab}
	I0914 18:08:45.110947   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:25:ab", ip: ""} in network mk-old-k8s-version-556121: {Iface:virbr1 ExpiryTime:2024-09-14 19:08:35 +0000 UTC Type:0 Mac:52:54:00:76:25:ab Iaid: IPaddr:192.168.83.80 Prefix:24 Hostname:old-k8s-version-556121 Clientid:01:52:54:00:76:25:ab}
	I0914 18:08:45.110969   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined IP address 192.168.83.80 and MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 18:08:45.111062   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined IP address 192.168.83.80 and MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 18:08:45.111157   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHPort
	I0914 18:08:45.111388   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHKeyPath
	I0914 18:08:45.111407   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHPort
	I0914 18:08:45.111564   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHUsername
	I0914 18:08:45.111582   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHKeyPath
	I0914 18:08:45.111716   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetSSHUsername
	I0914 18:08:45.111758   62996 sshutil.go:53] new ssh client: &{IP:192.168.83.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/old-k8s-version-556121/id_rsa Username:docker}
	I0914 18:08:45.111829   62996 sshutil.go:53] new ssh client: &{IP:192.168.83.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/old-k8s-version-556121/id_rsa Username:docker}
	I0914 18:08:45.187315   62996 ssh_runner.go:195] Run: systemctl --version
	I0914 18:08:45.222737   62996 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0914 18:08:45.372449   62996 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0914 18:08:45.378337   62996 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0914 18:08:45.378395   62996 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0914 18:08:45.396041   62996 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0914 18:08:45.396072   62996 start.go:495] detecting cgroup driver to use...
	I0914 18:08:45.396148   62996 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0914 18:08:45.413530   62996 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0914 18:08:45.428876   62996 docker.go:217] disabling cri-docker service (if available) ...
	I0914 18:08:45.428950   62996 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0914 18:08:45.444066   62996 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0914 18:08:45.458976   62996 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0914 18:08:45.591808   62996 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0914 18:08:45.737299   62996 docker.go:233] disabling docker service ...
	I0914 18:08:45.737382   62996 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0914 18:08:45.752471   62996 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0914 18:08:45.770192   62996 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0914 18:08:45.923691   62996 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0914 18:08:46.054919   62996 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0914 18:08:46.068923   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0914 18:08:46.089366   62996 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0914 18:08:46.089441   62996 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 18:08:46.100025   62996 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0914 18:08:46.100100   62996 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 18:08:46.111015   62996 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 18:08:46.123133   62996 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 18:08:46.135582   62996 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0914 18:08:46.146937   62996 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0914 18:08:46.158542   62996 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0914 18:08:46.158618   62996 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0914 18:08:46.178181   62996 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0914 18:08:46.188291   62996 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 18:08:46.316875   62996 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0914 18:08:46.407391   62996 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0914 18:08:46.407470   62996 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0914 18:08:46.412103   62996 start.go:563] Will wait 60s for crictl version
	I0914 18:08:46.412164   62996 ssh_runner.go:195] Run: which crictl
	I0914 18:08:46.415903   62996 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0914 18:08:46.457124   62996 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0914 18:08:46.457224   62996 ssh_runner.go:195] Run: crio --version
	I0914 18:08:46.485380   62996 ssh_runner.go:195] Run: crio --version
	I0914 18:08:46.513525   62996 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
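Before the output switches back to the embed-certs (62554) process: the sed commands at 18:08:46.089-46.123 above rewrite /etc/crio/crio.conf.d/02-crio.conf to pin the pause image and force the cgroupfs cgroup manager, then re-add conmon_cgroup as "pod". Below is a small Go sketch of the same line rewrites applied to an in-memory config; the sample input is a hypothetical excerpt, and the real drop-in on the node may contain other keys.

package main

import (
	"fmt"
	"regexp"
)

func main() {
	// Hypothetical excerpt of /etc/crio/crio.conf.d/02-crio.conf before the edits.
	conf := "[crio.image]\npause_image = \"registry.k8s.io/pause:3.9\"\n\n[crio.runtime]\ncgroup_manager = \"systemd\"\nconmon_cgroup = \"system.slice\"\n"

	// Equivalent of: sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|'
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.2"`)

	// Equivalent of: sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|'
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)

	// Equivalent of deleting the conmon_cgroup line and re-adding it as "pod"
	// right after cgroup_manager, as the last two sed commands do.
	conf = regexp.MustCompile(`(?m)^conmon_cgroup = .*\n`).ReplaceAllString(conf, "")
	conf = regexp.MustCompile(`(?m)^(cgroup_manager = .*)$`).
		ReplaceAllString(conf, "${1}\nconmon_cgroup = \"pod\"")

	fmt.Print(conf)
}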
	I0914 18:08:46.027201   62554 pod_ready.go:93] pod "coredns-7c65d6cfc9-59dm5" in "kube-system" namespace has status "Ready":"True"
	I0914 18:08:46.027223   62554 pod_ready.go:82] duration metric: took 8.506784658s for pod "coredns-7c65d6cfc9-59dm5" in "kube-system" namespace to be "Ready" ...
	I0914 18:08:46.027232   62554 pod_ready.go:79] waiting up to 4m0s for pod "etcd-embed-certs-044534" in "kube-system" namespace to be "Ready" ...
	I0914 18:08:47.043468   62554 pod_ready.go:93] pod "etcd-embed-certs-044534" in "kube-system" namespace has status "Ready":"True"
	I0914 18:08:47.043499   62554 pod_ready.go:82] duration metric: took 1.016259668s for pod "etcd-embed-certs-044534" in "kube-system" namespace to be "Ready" ...
	I0914 18:08:47.043513   62554 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-embed-certs-044534" in "kube-system" namespace to be "Ready" ...
	I0914 18:08:47.050825   62554 pod_ready.go:93] pod "kube-apiserver-embed-certs-044534" in "kube-system" namespace has status "Ready":"True"
	I0914 18:08:47.050853   62554 pod_ready.go:82] duration metric: took 7.332421ms for pod "kube-apiserver-embed-certs-044534" in "kube-system" namespace to be "Ready" ...
	I0914 18:08:47.050869   62554 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-044534" in "kube-system" namespace to be "Ready" ...
	I0914 18:08:47.561389   62554 pod_ready.go:93] pod "kube-controller-manager-embed-certs-044534" in "kube-system" namespace has status "Ready":"True"
	I0914 18:08:47.561419   62554 pod_ready.go:82] duration metric: took 510.541663ms for pod "kube-controller-manager-embed-certs-044534" in "kube-system" namespace to be "Ready" ...
	I0914 18:08:47.561434   62554 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-nkdth" in "kube-system" namespace to be "Ready" ...
	I0914 18:08:47.568265   62554 pod_ready.go:93] pod "kube-proxy-nkdth" in "kube-system" namespace has status "Ready":"True"
	I0914 18:08:47.568298   62554 pod_ready.go:82] duration metric: took 6.854878ms for pod "kube-proxy-nkdth" in "kube-system" namespace to be "Ready" ...
	I0914 18:08:47.568312   62554 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-embed-certs-044534" in "kube-system" namespace to be "Ready" ...
	I0914 18:08:48.575898   62554 pod_ready.go:93] pod "kube-scheduler-embed-certs-044534" in "kube-system" namespace has status "Ready":"True"
	I0914 18:08:48.575924   62554 pod_ready.go:82] duration metric: took 1.00760412s for pod "kube-scheduler-embed-certs-044534" in "kube-system" namespace to be "Ready" ...
	I0914 18:08:48.575934   62554 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace to be "Ready" ...
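Each pod_ready.go line above is one iteration of a poll that fetches the pod and checks its Ready condition until the per-pod timeout (4m0s here) expires. The sketch below is not minikube's implementation, only a minimal client-go version of the same check, using the metrics-server pod and namespace named in the log; the kubeconfig path and 2s poll interval are assumptions.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's Ready condition is True.
func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile) // assumed kubeconfig location
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Poll every 2s for up to 4 minutes, mirroring the "waiting up to 4m0s" lines above.
	err = wait.PollUntilContextTimeout(context.Background(), 2*time.Second, 4*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			pod, err := client.CoreV1().Pods("kube-system").Get(ctx, "metrics-server-6867b74b74-stwfz", metav1.GetOptions{})
			if err != nil {
				return false, nil // treat transient errors as "not ready yet"
			}
			return podReady(pod), nil
		})
	fmt.Println("pod became ready:", err == nil)
}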
	I0914 18:08:46.464001   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Waiting to get IP...
	I0914 18:08:46.465004   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | domain default-k8s-diff-port-243449 has defined MAC address 52:54:00:6e:0b:a7 in network mk-default-k8s-diff-port-243449
	I0914 18:08:46.465408   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | unable to find current IP address of domain default-k8s-diff-port-243449 in network mk-default-k8s-diff-port-243449
	I0914 18:08:46.465512   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | I0914 18:08:46.465391   64066 retry.go:31] will retry after 283.185405ms: waiting for machine to come up
	I0914 18:08:46.751155   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | domain default-k8s-diff-port-243449 has defined MAC address 52:54:00:6e:0b:a7 in network mk-default-k8s-diff-port-243449
	I0914 18:08:46.751669   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | unable to find current IP address of domain default-k8s-diff-port-243449 in network mk-default-k8s-diff-port-243449
	I0914 18:08:46.751697   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | I0914 18:08:46.751622   64066 retry.go:31] will retry after 307.273139ms: waiting for machine to come up
	I0914 18:08:47.060812   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | domain default-k8s-diff-port-243449 has defined MAC address 52:54:00:6e:0b:a7 in network mk-default-k8s-diff-port-243449
	I0914 18:08:47.061855   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | unable to find current IP address of domain default-k8s-diff-port-243449 in network mk-default-k8s-diff-port-243449
	I0914 18:08:47.061889   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | I0914 18:08:47.061749   64066 retry.go:31] will retry after 420.077307ms: waiting for machine to come up
	I0914 18:08:47.483188   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | domain default-k8s-diff-port-243449 has defined MAC address 52:54:00:6e:0b:a7 in network mk-default-k8s-diff-port-243449
	I0914 18:08:47.483611   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | unable to find current IP address of domain default-k8s-diff-port-243449 in network mk-default-k8s-diff-port-243449
	I0914 18:08:47.483656   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | I0914 18:08:47.483567   64066 retry.go:31] will retry after 562.15435ms: waiting for machine to come up
	I0914 18:08:48.047428   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | domain default-k8s-diff-port-243449 has defined MAC address 52:54:00:6e:0b:a7 in network mk-default-k8s-diff-port-243449
	I0914 18:08:48.047944   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | unable to find current IP address of domain default-k8s-diff-port-243449 in network mk-default-k8s-diff-port-243449
	I0914 18:08:48.047971   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | I0914 18:08:48.047867   64066 retry.go:31] will retry after 744.523152ms: waiting for machine to come up
	I0914 18:08:48.793959   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | domain default-k8s-diff-port-243449 has defined MAC address 52:54:00:6e:0b:a7 in network mk-default-k8s-diff-port-243449
	I0914 18:08:48.794449   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | unable to find current IP address of domain default-k8s-diff-port-243449 in network mk-default-k8s-diff-port-243449
	I0914 18:08:48.794492   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | I0914 18:08:48.794393   64066 retry.go:31] will retry after 813.631617ms: waiting for machine to come up
	I0914 18:08:49.609483   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | domain default-k8s-diff-port-243449 has defined MAC address 52:54:00:6e:0b:a7 in network mk-default-k8s-diff-port-243449
	I0914 18:08:49.609946   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | unable to find current IP address of domain default-k8s-diff-port-243449 in network mk-default-k8s-diff-port-243449
	I0914 18:08:49.609969   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | I0914 18:08:49.609904   64066 retry.go:31] will retry after 941.244861ms: waiting for machine to come up
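The 63448 process above is blocked in a poll of the libvirt DHCP leases for the default-k8s-diff-port-243449 domain, sleeping a little longer (with jitter) after each miss. Below is a self-contained Go sketch of that retry shape; lookupLeaseIP is a hypothetical stand-in for the real lease query, and the backoff schedule is illustrative rather than minikube's exact one.

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// lookupLeaseIP is a hypothetical stand-in for querying the libvirt DHCP
// leases of the VM's network for a given MAC address.
func lookupLeaseIP(mac string) (string, error) {
	return "", errors.New("unable to find current IP address")
}

func main() {
	const mac = "52:54:00:6e:0b:a7" // MAC from the log above
	delay := 250 * time.Millisecond

	for attempt := 1; attempt <= 10; attempt++ {
		ip, err := lookupLeaseIP(mac)
		if err == nil {
			fmt.Println("machine is up at", ip)
			return
		}
		// Grow the delay and add jitter, similar to the "will retry after ..." lines.
		sleep := delay + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
		time.Sleep(sleep)
		delay = delay * 3 / 2
	}
	fmt.Println("gave up waiting for an IP address")
}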
	I0914 18:08:46.515031   62996 main.go:141] libmachine: (old-k8s-version-556121) Calling .GetIP
	I0914 18:08:46.517851   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 18:08:46.518301   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:25:ab", ip: ""} in network mk-old-k8s-version-556121: {Iface:virbr1 ExpiryTime:2024-09-14 19:08:35 +0000 UTC Type:0 Mac:52:54:00:76:25:ab Iaid: IPaddr:192.168.83.80 Prefix:24 Hostname:old-k8s-version-556121 Clientid:01:52:54:00:76:25:ab}
	I0914 18:08:46.518329   62996 main.go:141] libmachine: (old-k8s-version-556121) DBG | domain old-k8s-version-556121 has defined IP address 192.168.83.80 and MAC address 52:54:00:76:25:ab in network mk-old-k8s-version-556121
	I0914 18:08:46.518560   62996 ssh_runner.go:195] Run: grep 192.168.83.1	host.minikube.internal$ /etc/hosts
	I0914 18:08:46.522559   62996 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.83.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0914 18:08:46.536122   62996 kubeadm.go:883] updating cluster {Name:old-k8s-version-556121 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19643/minikube-v1.34.0-1726281733-19643-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726281268-19643@sha256:cce8be0c1ac4e3d852132008ef1cc1dcf5b79f708d025db83f146ae65db32e8e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-556121 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.83.80 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0914 18:08:46.536233   62996 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0914 18:08:46.536272   62996 ssh_runner.go:195] Run: sudo crictl images --output json
	I0914 18:08:46.582326   62996 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0914 18:08:46.582385   62996 ssh_runner.go:195] Run: which lz4
	I0914 18:08:46.586381   62996 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0914 18:08:46.590252   62996 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0914 18:08:46.590302   62996 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0914 18:08:48.262036   62996 crio.go:462] duration metric: took 1.6757003s to copy over tarball
	I0914 18:08:48.262113   62996 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0914 18:08:50.583860   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:08:52.826559   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:08:50.553210   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | domain default-k8s-diff-port-243449 has defined MAC address 52:54:00:6e:0b:a7 in network mk-default-k8s-diff-port-243449
	I0914 18:08:50.553735   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | unable to find current IP address of domain default-k8s-diff-port-243449 in network mk-default-k8s-diff-port-243449
	I0914 18:08:50.553764   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | I0914 18:08:50.553671   64066 retry.go:31] will retry after 1.107692241s: waiting for machine to come up
	I0914 18:08:51.663218   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | domain default-k8s-diff-port-243449 has defined MAC address 52:54:00:6e:0b:a7 in network mk-default-k8s-diff-port-243449
	I0914 18:08:51.663723   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | unable to find current IP address of domain default-k8s-diff-port-243449 in network mk-default-k8s-diff-port-243449
	I0914 18:08:51.663753   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | I0914 18:08:51.663681   64066 retry.go:31] will retry after 1.357435642s: waiting for machine to come up
	I0914 18:08:53.022246   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | domain default-k8s-diff-port-243449 has defined MAC address 52:54:00:6e:0b:a7 in network mk-default-k8s-diff-port-243449
	I0914 18:08:53.022695   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | unable to find current IP address of domain default-k8s-diff-port-243449 in network mk-default-k8s-diff-port-243449
	I0914 18:08:53.022726   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | I0914 18:08:53.022628   64066 retry.go:31] will retry after 2.045434586s: waiting for machine to come up
	I0914 18:08:55.070946   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | domain default-k8s-diff-port-243449 has defined MAC address 52:54:00:6e:0b:a7 in network mk-default-k8s-diff-port-243449
	I0914 18:08:55.071420   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | unable to find current IP address of domain default-k8s-diff-port-243449 in network mk-default-k8s-diff-port-243449
	I0914 18:08:55.071450   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | I0914 18:08:55.071362   64066 retry.go:31] will retry after 2.084823885s: waiting for machine to come up
	I0914 18:08:51.259991   62996 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.997823346s)
	I0914 18:08:51.260027   62996 crio.go:469] duration metric: took 2.997963105s to extract the tarball
	I0914 18:08:51.260037   62996 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0914 18:08:51.303210   62996 ssh_runner.go:195] Run: sudo crictl images --output json
	I0914 18:08:51.337655   62996 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0914 18:08:51.337685   62996 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0914 18:08:51.337793   62996 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0914 18:08:51.337910   62996 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0914 18:08:51.337941   62996 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0914 18:08:51.337950   62996 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0914 18:08:51.337800   62996 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0914 18:08:51.337803   62996 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0914 18:08:51.337791   62996 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 18:08:51.337823   62996 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0914 18:08:51.339846   62996 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0914 18:08:51.339855   62996 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0914 18:08:51.339875   62996 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0914 18:08:51.339865   62996 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 18:08:51.339901   62996 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0914 18:08:51.339935   62996 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0914 18:08:51.339958   62996 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0914 18:08:51.339949   62996 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0914 18:08:51.528665   62996 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0914 18:08:51.570817   62996 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0914 18:08:51.575861   62996 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0914 18:08:51.575917   62996 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0914 18:08:51.575968   62996 ssh_runner.go:195] Run: which crictl
	I0914 18:08:51.576612   62996 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0914 18:08:51.577804   62996 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0914 18:08:51.578496   62996 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0914 18:08:51.581833   62996 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0914 18:08:51.613046   62996 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0914 18:08:51.724554   62996 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0914 18:08:51.724608   62996 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0914 18:08:51.724611   62996 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0914 18:08:51.724713   62996 ssh_runner.go:195] Run: which crictl
	I0914 18:08:51.757578   62996 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0914 18:08:51.757628   62996 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0914 18:08:51.757677   62996 ssh_runner.go:195] Run: which crictl
	I0914 18:08:51.772578   62996 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0914 18:08:51.772597   62996 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0914 18:08:51.772629   62996 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0914 18:08:51.772634   62996 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0914 18:08:51.772659   62996 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0914 18:08:51.772690   62996 ssh_runner.go:195] Run: which crictl
	I0914 18:08:51.772704   62996 ssh_runner.go:195] Run: which crictl
	I0914 18:08:51.772633   62996 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0914 18:08:51.772748   62996 ssh_runner.go:195] Run: which crictl
	I0914 18:08:51.790305   62996 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0914 18:08:51.790442   62996 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0914 18:08:51.790492   62996 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0914 18:08:51.790534   62996 ssh_runner.go:195] Run: which crictl
	I0914 18:08:51.799286   62996 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0914 18:08:51.799338   62996 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0914 18:08:51.799395   62996 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0914 18:08:51.799446   62996 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0914 18:08:51.799486   62996 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0914 18:08:51.937830   62996 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0914 18:08:51.937839   62996 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0914 18:08:51.937918   62996 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0914 18:08:51.940605   62996 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0914 18:08:51.940670   62996 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0914 18:08:51.940723   62996 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0914 18:08:51.962218   62996 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0914 18:08:52.063106   62996 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0914 18:08:52.112424   62996 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0914 18:08:52.112498   62996 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0914 18:08:52.112521   62996 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0914 18:08:52.112602   62996 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19643-8806/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0914 18:08:52.112608   62996 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0914 18:08:52.112737   62996 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0914 18:08:52.149523   62996 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19643-8806/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0914 18:08:52.230998   62996 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0914 18:08:52.231015   62996 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19643-8806/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0914 18:08:52.234715   62996 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19643-8806/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0914 18:08:52.234737   62996 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19643-8806/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0914 18:08:52.234813   62996 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19643-8806/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0914 18:08:52.268145   62996 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19643-8806/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0914 18:08:52.500688   62996 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 18:08:52.641559   62996 cache_images.go:92] duration metric: took 1.303851383s to LoadCachedImages
	W0914 18:08:52.641671   62996 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19643-8806/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0: no such file or directory
	I0914 18:08:52.641690   62996 kubeadm.go:934] updating node { 192.168.83.80 8443 v1.20.0 crio true true} ...
	I0914 18:08:52.641822   62996 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-556121 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.83.80
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-556121 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0914 18:08:52.641918   62996 ssh_runner.go:195] Run: crio config
	I0914 18:08:52.691852   62996 cni.go:84] Creating CNI manager for ""
	I0914 18:08:52.691878   62996 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0914 18:08:52.691888   62996 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0914 18:08:52.691906   62996 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.83.80 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-556121 NodeName:old-k8s-version-556121 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.83.80"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.83.80 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0914 18:08:52.692037   62996 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.83.80
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-556121"
	  kubeletExtraArgs:
	    node-ip: 192.168.83.80
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.83.80"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0914 18:08:52.692122   62996 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0914 18:08:52.701735   62996 binaries.go:44] Found k8s binaries, skipping transfer
	I0914 18:08:52.701810   62996 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0914 18:08:52.711224   62996 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0914 18:08:52.728991   62996 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0914 18:08:52.746689   62996 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
	I0914 18:08:52.765724   62996 ssh_runner.go:195] Run: grep 192.168.83.80	control-plane.minikube.internal$ /etc/hosts
	I0914 18:08:52.769968   62996 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.83.80	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0914 18:08:52.782728   62996 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 18:08:52.910650   62996 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0914 18:08:52.927202   62996 certs.go:68] Setting up /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/old-k8s-version-556121 for IP: 192.168.83.80
	I0914 18:08:52.927226   62996 certs.go:194] generating shared ca certs ...
	I0914 18:08:52.927247   62996 certs.go:226] acquiring lock for ca certs: {Name:mkb663a3180967f5f94f0c355b2cd55067394331 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 18:08:52.927426   62996 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19643-8806/.minikube/ca.key
	I0914 18:08:52.927478   62996 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19643-8806/.minikube/proxy-client-ca.key
	I0914 18:08:52.927488   62996 certs.go:256] generating profile certs ...
	I0914 18:08:52.927584   62996 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/old-k8s-version-556121/client.key
	I0914 18:08:52.927642   62996 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/old-k8s-version-556121/apiserver.key.faf839ab
	I0914 18:08:52.927706   62996 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/old-k8s-version-556121/proxy-client.key
	I0914 18:08:52.927873   62996 certs.go:484] found cert: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/16016.pem (1338 bytes)
	W0914 18:08:52.927906   62996 certs.go:480] ignoring /home/jenkins/minikube-integration/19643-8806/.minikube/certs/16016_empty.pem, impossibly tiny 0 bytes
	I0914 18:08:52.927916   62996 certs.go:484] found cert: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca-key.pem (1679 bytes)
	I0914 18:08:52.927938   62996 certs.go:484] found cert: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca.pem (1082 bytes)
	I0914 18:08:52.927960   62996 certs.go:484] found cert: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/cert.pem (1123 bytes)
	I0914 18:08:52.927982   62996 certs.go:484] found cert: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/key.pem (1675 bytes)
	I0914 18:08:52.928018   62996 certs.go:484] found cert: /home/jenkins/minikube-integration/19643-8806/.minikube/files/etc/ssl/certs/160162.pem (1708 bytes)
	I0914 18:08:52.928623   62996 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0914 18:08:52.991610   62996 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0914 18:08:53.017660   62996 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0914 18:08:53.044552   62996 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0914 18:08:53.073612   62996 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/old-k8s-version-556121/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0914 18:08:53.125813   62996 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/old-k8s-version-556121/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0914 18:08:53.157202   62996 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/old-k8s-version-556121/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0914 18:08:53.201480   62996 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/old-k8s-version-556121/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0914 18:08:53.226725   62996 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/files/etc/ssl/certs/160162.pem --> /usr/share/ca-certificates/160162.pem (1708 bytes)
	I0914 18:08:53.250793   62996 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0914 18:08:53.275519   62996 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/certs/16016.pem --> /usr/share/ca-certificates/16016.pem (1338 bytes)
	I0914 18:08:53.300545   62996 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0914 18:08:53.317709   62996 ssh_runner.go:195] Run: openssl version
	I0914 18:08:53.323602   62996 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/160162.pem && ln -fs /usr/share/ca-certificates/160162.pem /etc/ssl/certs/160162.pem"
	I0914 18:08:53.335011   62996 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/160162.pem
	I0914 18:08:53.339838   62996 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 14 17:01 /usr/share/ca-certificates/160162.pem
	I0914 18:08:53.339909   62996 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/160162.pem
	I0914 18:08:53.346100   62996 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/160162.pem /etc/ssl/certs/3ec20f2e.0"
	I0914 18:08:53.359186   62996 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0914 18:08:53.370507   62996 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0914 18:08:53.375153   62996 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 14 16:44 /usr/share/ca-certificates/minikubeCA.pem
	I0914 18:08:53.375223   62996 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0914 18:08:53.380939   62996 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0914 18:08:53.392163   62996 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16016.pem && ln -fs /usr/share/ca-certificates/16016.pem /etc/ssl/certs/16016.pem"
	I0914 18:08:53.404356   62996 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16016.pem
	I0914 18:08:53.409052   62996 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 14 17:01 /usr/share/ca-certificates/16016.pem
	I0914 18:08:53.409134   62996 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16016.pem
	I0914 18:08:53.415280   62996 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16016.pem /etc/ssl/certs/51391683.0"
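
Note: the openssl x509 -hash calls above compute the subject hash that names the /etc/ssl/certs/<hash>.0 symlinks (for example b5213941.0 for minikubeCA.pem). A minimal Go sketch of the same lookup, assuming an openssl binary on PATH; subjectHash is a hypothetical helper, not a minikube function:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// subjectHash runs "openssl x509 -hash -noout -in <cert>", which prints the
// subject hash used to name the /etc/ssl/certs/<hash>.0 symlink.
func subjectHash(certPath string) (string, error) {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	h, err := subjectHash("/usr/share/ca-certificates/minikubeCA.pem")
	if err != nil {
		fmt.Println("openssl failed:", err)
		return
	}
	// The symlink the log creates so OpenSSL's cert directory lookup can find the CA.
	fmt.Printf("ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/%s.0\n", h)
}
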
	I0914 18:08:53.426864   62996 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0914 18:08:53.431690   62996 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0914 18:08:53.437920   62996 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0914 18:08:53.444244   62996 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0914 18:08:53.450762   62996 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0914 18:08:53.457107   62996 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0914 18:08:53.463041   62996 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
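
Note: the -checkend 86400 runs above ask whether each certificate is still valid 24 hours from now, which is how minikube decides whether to regenerate control-plane certs. Assuming a PEM-encoded certificate, the same check can be expressed with Go's standard library (expiresWithin is a hypothetical helper):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"errors"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the first certificate in the PEM file expires
// within d of now — the same question openssl's -checkend asks.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, errors.New("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Println("check failed:", err)
		return
	}
	fmt.Println("expires within 24h:", soon)
}
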
	I0914 18:08:53.469401   62996 kubeadm.go:392] StartCluster: {Name:old-k8s-version-556121 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19643/minikube-v1.34.0-1726281733-19643-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726281268-19643@sha256:cce8be0c1ac4e3d852132008ef1cc1dcf5b79f708d025db83f146ae65db32e8e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-556121 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.83.80 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0914 18:08:53.469509   62996 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0914 18:08:53.469568   62996 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0914 18:08:53.508602   62996 cri.go:89] found id: ""
	I0914 18:08:53.508668   62996 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0914 18:08:53.518645   62996 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0914 18:08:53.518666   62996 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0914 18:08:53.518719   62996 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0914 18:08:53.530459   62996 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0914 18:08:53.531439   62996 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-556121" does not appear in /home/jenkins/minikube-integration/19643-8806/kubeconfig
	I0914 18:08:53.532109   62996 kubeconfig.go:62] /home/jenkins/minikube-integration/19643-8806/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-556121" cluster setting kubeconfig missing "old-k8s-version-556121" context setting]
	I0914 18:08:53.532952   62996 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19643-8806/kubeconfig: {Name:mk850f3e2f3c2ef1e81c795ae72e22e459e27140 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 18:08:53.611765   62996 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0914 18:08:53.622817   62996 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.83.80
	I0914 18:08:53.622854   62996 kubeadm.go:1160] stopping kube-system containers ...
	I0914 18:08:53.622866   62996 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0914 18:08:53.622919   62996 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0914 18:08:53.659041   62996 cri.go:89] found id: ""
	I0914 18:08:53.659191   62996 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0914 18:08:53.680543   62996 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0914 18:08:53.693835   62996 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0914 18:08:53.693854   62996 kubeadm.go:157] found existing configuration files:
	
	I0914 18:08:53.693907   62996 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0914 18:08:53.704221   62996 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0914 18:08:53.704300   62996 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0914 18:08:53.713947   62996 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0914 18:08:53.722981   62996 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0914 18:08:53.723056   62996 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0914 18:08:53.733059   62996 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0914 18:08:53.742233   62996 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0914 18:08:53.742305   62996 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0914 18:08:53.752182   62996 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0914 18:08:53.761890   62996 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0914 18:08:53.761965   62996 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
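
Note: the four grep/rm pairs above are the stale-config check from kubeadm.go:163 — if a kubeconfig under /etc/kubernetes does not reference https://control-plane.minikube.internal:8443, it is removed so the kubeadm phases below can regenerate it. A simplified local sketch of that loop (minikube actually runs the grep and rm over SSH; the helper structure here is illustrative):

package main

import (
	"bytes"
	"fmt"
	"os"
)

const endpoint = "https://control-plane.minikube.internal:8443"

func main() {
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		data, err := os.ReadFile(f)
		if err != nil || !bytes.Contains(data, []byte(endpoint)) {
			// Missing file or wrong endpoint: drop it so "kubeadm init phase
			// kubeconfig all" can write a fresh one.
			_ = os.Remove(f)
			fmt.Println("removed stale", f)
		}
	}
}
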
	I0914 18:08:53.771448   62996 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0914 18:08:53.781385   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 18:08:53.911483   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 18:08:55.084673   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:08:57.582709   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:08:59.583340   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:08:57.158301   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | domain default-k8s-diff-port-243449 has defined MAC address 52:54:00:6e:0b:a7 in network mk-default-k8s-diff-port-243449
	I0914 18:08:57.158679   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | unable to find current IP address of domain default-k8s-diff-port-243449 in network mk-default-k8s-diff-port-243449
	I0914 18:08:57.158705   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | I0914 18:08:57.158640   64066 retry.go:31] will retry after 2.492994369s: waiting for machine to come up
	I0914 18:08:59.654137   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | domain default-k8s-diff-port-243449 has defined MAC address 52:54:00:6e:0b:a7 in network mk-default-k8s-diff-port-243449
	I0914 18:08:59.654550   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | unable to find current IP address of domain default-k8s-diff-port-243449 in network mk-default-k8s-diff-port-243449
	I0914 18:08:59.654585   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | I0914 18:08:59.654496   64066 retry.go:31] will retry after 3.393327124s: waiting for machine to come up
	I0914 18:08:55.409007   62996 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.497486764s)
	I0914 18:08:55.409041   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0914 18:08:55.640260   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 18:08:55.761785   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0914 18:08:55.873260   62996 api_server.go:52] waiting for apiserver process to appear ...
	I0914 18:08:55.873350   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:08:56.373512   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:08:56.874440   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:08:57.374464   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:08:57.874099   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:08:58.374014   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:08:58.873763   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:08:59.373845   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:08:59.873929   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
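
Note: the repeated pgrep runs above are the roughly 500ms polling loop from api_server.go, waiting for a kube-apiserver process to appear after the init phases. A stripped-down version of that wait, assuming pgrep is available on the target (waitForProcess is a hypothetical helper, not minikube's):

package main

import (
	"errors"
	"fmt"
	"os/exec"
	"time"
)

// waitForProcess polls pgrep until a process matching pattern exists or the
// timeout elapses, roughly like minikube's apiserver wait.
func waitForProcess(pattern string, interval, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if err := exec.Command("sudo", "pgrep", "-xnf", pattern).Run(); err == nil {
			return nil // pgrep exits 0 when a match is found
		}
		time.Sleep(interval)
	}
	return errors.New("timed out waiting for " + pattern)
}

func main() {
	err := waitForProcess("kube-apiserver.*minikube.*", 500*time.Millisecond, 2*time.Minute)
	fmt.Println("wait result:", err)
}
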
	I0914 18:09:04.466791   62207 start.go:364] duration metric: took 54.917996405s to acquireMachinesLock for "no-preload-168587"
	I0914 18:09:04.466845   62207 start.go:96] Skipping create...Using existing machine configuration
	I0914 18:09:04.466863   62207 fix.go:54] fixHost starting: 
	I0914 18:09:04.467265   62207 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 18:09:04.467303   62207 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 18:09:04.485295   62207 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41403
	I0914 18:09:04.485680   62207 main.go:141] libmachine: () Calling .GetVersion
	I0914 18:09:04.486195   62207 main.go:141] libmachine: Using API Version  1
	I0914 18:09:04.486221   62207 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 18:09:04.486625   62207 main.go:141] libmachine: () Calling .GetMachineName
	I0914 18:09:04.486825   62207 main.go:141] libmachine: (no-preload-168587) Calling .DriverName
	I0914 18:09:04.486985   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetState
	I0914 18:09:04.488546   62207 fix.go:112] recreateIfNeeded on no-preload-168587: state=Stopped err=<nil>
	I0914 18:09:04.488584   62207 main.go:141] libmachine: (no-preload-168587) Calling .DriverName
	W0914 18:09:04.488749   62207 fix.go:138] unexpected machine state, will restart: <nil>
	I0914 18:09:04.491638   62207 out.go:177] * Restarting existing kvm2 VM for "no-preload-168587" ...
	I0914 18:09:02.082684   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:09:04.582135   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:09:03.051442   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | domain default-k8s-diff-port-243449 has defined MAC address 52:54:00:6e:0b:a7 in network mk-default-k8s-diff-port-243449
	I0914 18:09:03.051882   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | domain default-k8s-diff-port-243449 has current primary IP address 192.168.61.38 and MAC address 52:54:00:6e:0b:a7 in network mk-default-k8s-diff-port-243449
	I0914 18:09:03.051904   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Found IP for machine: 192.168.61.38
	I0914 18:09:03.051946   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Reserving static IP address...
	I0914 18:09:03.052245   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-243449", mac: "52:54:00:6e:0b:a7", ip: "192.168.61.38"} in network mk-default-k8s-diff-port-243449: {Iface:virbr4 ExpiryTime:2024-09-14 19:08:56 +0000 UTC Type:0 Mac:52:54:00:6e:0b:a7 Iaid: IPaddr:192.168.61.38 Prefix:24 Hostname:default-k8s-diff-port-243449 Clientid:01:52:54:00:6e:0b:a7}
	I0914 18:09:03.052269   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | skip adding static IP to network mk-default-k8s-diff-port-243449 - found existing host DHCP lease matching {name: "default-k8s-diff-port-243449", mac: "52:54:00:6e:0b:a7", ip: "192.168.61.38"}
	I0914 18:09:03.052280   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Reserved static IP address: 192.168.61.38
	I0914 18:09:03.052289   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Waiting for SSH to be available...
	I0914 18:09:03.052306   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | Getting to WaitForSSH function...
	I0914 18:09:03.054154   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | domain default-k8s-diff-port-243449 has defined MAC address 52:54:00:6e:0b:a7 in network mk-default-k8s-diff-port-243449
	I0914 18:09:03.054555   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:0b:a7", ip: ""} in network mk-default-k8s-diff-port-243449: {Iface:virbr4 ExpiryTime:2024-09-14 19:08:56 +0000 UTC Type:0 Mac:52:54:00:6e:0b:a7 Iaid: IPaddr:192.168.61.38 Prefix:24 Hostname:default-k8s-diff-port-243449 Clientid:01:52:54:00:6e:0b:a7}
	I0914 18:09:03.054596   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | domain default-k8s-diff-port-243449 has defined IP address 192.168.61.38 and MAC address 52:54:00:6e:0b:a7 in network mk-default-k8s-diff-port-243449
	I0914 18:09:03.054745   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | Using SSH client type: external
	I0914 18:09:03.054777   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | Using SSH private key: /home/jenkins/minikube-integration/19643-8806/.minikube/machines/default-k8s-diff-port-243449/id_rsa (-rw-------)
	I0914 18:09:03.054813   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.38 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19643-8806/.minikube/machines/default-k8s-diff-port-243449/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0914 18:09:03.054828   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | About to run SSH command:
	I0914 18:09:03.054841   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | exit 0
	I0914 18:09:03.178065   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | SSH cmd err, output: <nil>: 
	I0914 18:09:03.178576   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetConfigRaw
	I0914 18:09:03.179198   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetIP
	I0914 18:09:03.181829   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | domain default-k8s-diff-port-243449 has defined MAC address 52:54:00:6e:0b:a7 in network mk-default-k8s-diff-port-243449
	I0914 18:09:03.182220   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:0b:a7", ip: ""} in network mk-default-k8s-diff-port-243449: {Iface:virbr4 ExpiryTime:2024-09-14 19:08:56 +0000 UTC Type:0 Mac:52:54:00:6e:0b:a7 Iaid: IPaddr:192.168.61.38 Prefix:24 Hostname:default-k8s-diff-port-243449 Clientid:01:52:54:00:6e:0b:a7}
	I0914 18:09:03.182242   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | domain default-k8s-diff-port-243449 has defined IP address 192.168.61.38 and MAC address 52:54:00:6e:0b:a7 in network mk-default-k8s-diff-port-243449
	I0914 18:09:03.182541   63448 profile.go:143] Saving config to /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/default-k8s-diff-port-243449/config.json ...
	I0914 18:09:03.182773   63448 machine.go:93] provisionDockerMachine start ...
	I0914 18:09:03.182796   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .DriverName
	I0914 18:09:03.182992   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetSSHHostname
	I0914 18:09:03.185635   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | domain default-k8s-diff-port-243449 has defined MAC address 52:54:00:6e:0b:a7 in network mk-default-k8s-diff-port-243449
	I0914 18:09:03.186027   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:0b:a7", ip: ""} in network mk-default-k8s-diff-port-243449: {Iface:virbr4 ExpiryTime:2024-09-14 19:08:56 +0000 UTC Type:0 Mac:52:54:00:6e:0b:a7 Iaid: IPaddr:192.168.61.38 Prefix:24 Hostname:default-k8s-diff-port-243449 Clientid:01:52:54:00:6e:0b:a7}
	I0914 18:09:03.186056   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | domain default-k8s-diff-port-243449 has defined IP address 192.168.61.38 and MAC address 52:54:00:6e:0b:a7 in network mk-default-k8s-diff-port-243449
	I0914 18:09:03.186213   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetSSHPort
	I0914 18:09:03.186416   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetSSHKeyPath
	I0914 18:09:03.186602   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetSSHKeyPath
	I0914 18:09:03.186756   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetSSHUsername
	I0914 18:09:03.186882   63448 main.go:141] libmachine: Using SSH client type: native
	I0914 18:09:03.187123   63448 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.61.38 22 <nil> <nil>}
	I0914 18:09:03.187137   63448 main.go:141] libmachine: About to run SSH command:
	hostname
	I0914 18:09:03.290288   63448 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0914 18:09:03.290332   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetMachineName
	I0914 18:09:03.290592   63448 buildroot.go:166] provisioning hostname "default-k8s-diff-port-243449"
	I0914 18:09:03.290620   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetMachineName
	I0914 18:09:03.290779   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetSSHHostname
	I0914 18:09:03.293587   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | domain default-k8s-diff-port-243449 has defined MAC address 52:54:00:6e:0b:a7 in network mk-default-k8s-diff-port-243449
	I0914 18:09:03.293981   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:0b:a7", ip: ""} in network mk-default-k8s-diff-port-243449: {Iface:virbr4 ExpiryTime:2024-09-14 19:08:56 +0000 UTC Type:0 Mac:52:54:00:6e:0b:a7 Iaid: IPaddr:192.168.61.38 Prefix:24 Hostname:default-k8s-diff-port-243449 Clientid:01:52:54:00:6e:0b:a7}
	I0914 18:09:03.294012   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | domain default-k8s-diff-port-243449 has defined IP address 192.168.61.38 and MAC address 52:54:00:6e:0b:a7 in network mk-default-k8s-diff-port-243449
	I0914 18:09:03.294120   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetSSHPort
	I0914 18:09:03.294307   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetSSHKeyPath
	I0914 18:09:03.294450   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetSSHKeyPath
	I0914 18:09:03.294576   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetSSHUsername
	I0914 18:09:03.294708   63448 main.go:141] libmachine: Using SSH client type: native
	I0914 18:09:03.294926   63448 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.61.38 22 <nil> <nil>}
	I0914 18:09:03.294944   63448 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-243449 && echo "default-k8s-diff-port-243449" | sudo tee /etc/hostname
	I0914 18:09:03.418148   63448 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-243449
	
	I0914 18:09:03.418198   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetSSHHostname
	I0914 18:09:03.421059   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | domain default-k8s-diff-port-243449 has defined MAC address 52:54:00:6e:0b:a7 in network mk-default-k8s-diff-port-243449
	I0914 18:09:03.421501   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:0b:a7", ip: ""} in network mk-default-k8s-diff-port-243449: {Iface:virbr4 ExpiryTime:2024-09-14 19:08:56 +0000 UTC Type:0 Mac:52:54:00:6e:0b:a7 Iaid: IPaddr:192.168.61.38 Prefix:24 Hostname:default-k8s-diff-port-243449 Clientid:01:52:54:00:6e:0b:a7}
	I0914 18:09:03.421536   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | domain default-k8s-diff-port-243449 has defined IP address 192.168.61.38 and MAC address 52:54:00:6e:0b:a7 in network mk-default-k8s-diff-port-243449
	I0914 18:09:03.421733   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetSSHPort
	I0914 18:09:03.421925   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetSSHKeyPath
	I0914 18:09:03.422075   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetSSHKeyPath
	I0914 18:09:03.422243   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetSSHUsername
	I0914 18:09:03.422394   63448 main.go:141] libmachine: Using SSH client type: native
	I0914 18:09:03.422581   63448 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.61.38 22 <nil> <nil>}
	I0914 18:09:03.422609   63448 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-243449' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-243449/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-243449' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0914 18:09:03.538785   63448 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0914 18:09:03.538812   63448 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19643-8806/.minikube CaCertPath:/home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19643-8806/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19643-8806/.minikube}
	I0914 18:09:03.538851   63448 buildroot.go:174] setting up certificates
	I0914 18:09:03.538866   63448 provision.go:84] configureAuth start
	I0914 18:09:03.538875   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetMachineName
	I0914 18:09:03.539230   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetIP
	I0914 18:09:03.541811   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | domain default-k8s-diff-port-243449 has defined MAC address 52:54:00:6e:0b:a7 in network mk-default-k8s-diff-port-243449
	I0914 18:09:03.542129   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:0b:a7", ip: ""} in network mk-default-k8s-diff-port-243449: {Iface:virbr4 ExpiryTime:2024-09-14 19:08:56 +0000 UTC Type:0 Mac:52:54:00:6e:0b:a7 Iaid: IPaddr:192.168.61.38 Prefix:24 Hostname:default-k8s-diff-port-243449 Clientid:01:52:54:00:6e:0b:a7}
	I0914 18:09:03.542183   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | domain default-k8s-diff-port-243449 has defined IP address 192.168.61.38 and MAC address 52:54:00:6e:0b:a7 in network mk-default-k8s-diff-port-243449
	I0914 18:09:03.542393   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetSSHHostname
	I0914 18:09:03.544635   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | domain default-k8s-diff-port-243449 has defined MAC address 52:54:00:6e:0b:a7 in network mk-default-k8s-diff-port-243449
	I0914 18:09:03.544933   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:0b:a7", ip: ""} in network mk-default-k8s-diff-port-243449: {Iface:virbr4 ExpiryTime:2024-09-14 19:08:56 +0000 UTC Type:0 Mac:52:54:00:6e:0b:a7 Iaid: IPaddr:192.168.61.38 Prefix:24 Hostname:default-k8s-diff-port-243449 Clientid:01:52:54:00:6e:0b:a7}
	I0914 18:09:03.544969   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | domain default-k8s-diff-port-243449 has defined IP address 192.168.61.38 and MAC address 52:54:00:6e:0b:a7 in network mk-default-k8s-diff-port-243449
	I0914 18:09:03.545099   63448 provision.go:143] copyHostCerts
	I0914 18:09:03.545156   63448 exec_runner.go:144] found /home/jenkins/minikube-integration/19643-8806/.minikube/ca.pem, removing ...
	I0914 18:09:03.545167   63448 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19643-8806/.minikube/ca.pem
	I0914 18:09:03.545239   63448 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19643-8806/.minikube/ca.pem (1082 bytes)
	I0914 18:09:03.545362   63448 exec_runner.go:144] found /home/jenkins/minikube-integration/19643-8806/.minikube/cert.pem, removing ...
	I0914 18:09:03.545374   63448 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19643-8806/.minikube/cert.pem
	I0914 18:09:03.545410   63448 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19643-8806/.minikube/cert.pem (1123 bytes)
	I0914 18:09:03.545489   63448 exec_runner.go:144] found /home/jenkins/minikube-integration/19643-8806/.minikube/key.pem, removing ...
	I0914 18:09:03.545498   63448 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19643-8806/.minikube/key.pem
	I0914 18:09:03.545533   63448 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19643-8806/.minikube/key.pem (1675 bytes)
	I0914 18:09:03.545619   63448 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19643-8806/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-243449 san=[127.0.0.1 192.168.61.38 default-k8s-diff-port-243449 localhost minikube]
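
Note: provision.go:117 above generates the machine's server certificate with the SAN list shown (loopback, the VM IP, the profile name, localhost, minikube), signed by the shared CA. A self-contained sketch of issuing a certificate with those SANs using Go's standard library — simplified to a self-signed cert here, which is not what minikube does (it signs with its CA key):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.default-k8s-diff-port-243449"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs taken from the log line above.
		DNSNames:    []string{"default-k8s-diff-port-243449", "localhost", "minikube"},
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.61.38")},
	}
	// Self-signed for brevity; the real provisioner uses the CA as parent.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	fmt.Print(string(pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})))
}
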
	I0914 18:09:03.858341   63448 provision.go:177] copyRemoteCerts
	I0914 18:09:03.858415   63448 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0914 18:09:03.858453   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetSSHHostname
	I0914 18:09:03.861376   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | domain default-k8s-diff-port-243449 has defined MAC address 52:54:00:6e:0b:a7 in network mk-default-k8s-diff-port-243449
	I0914 18:09:03.861659   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:0b:a7", ip: ""} in network mk-default-k8s-diff-port-243449: {Iface:virbr4 ExpiryTime:2024-09-14 19:08:56 +0000 UTC Type:0 Mac:52:54:00:6e:0b:a7 Iaid: IPaddr:192.168.61.38 Prefix:24 Hostname:default-k8s-diff-port-243449 Clientid:01:52:54:00:6e:0b:a7}
	I0914 18:09:03.861687   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | domain default-k8s-diff-port-243449 has defined IP address 192.168.61.38 and MAC address 52:54:00:6e:0b:a7 in network mk-default-k8s-diff-port-243449
	I0914 18:09:03.861863   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetSSHPort
	I0914 18:09:03.862062   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetSSHKeyPath
	I0914 18:09:03.862231   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetSSHUsername
	I0914 18:09:03.862359   63448 sshutil.go:53] new ssh client: &{IP:192.168.61.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/default-k8s-diff-port-243449/id_rsa Username:docker}
	I0914 18:09:03.944043   63448 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0914 18:09:03.968175   63448 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0914 18:09:03.990621   63448 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0914 18:09:04.012163   63448 provision.go:87] duration metric: took 473.28607ms to configureAuth
	I0914 18:09:04.012190   63448 buildroot.go:189] setting minikube options for container-runtime
	I0914 18:09:04.012364   63448 config.go:182] Loaded profile config "default-k8s-diff-port-243449": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0914 18:09:04.012431   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetSSHHostname
	I0914 18:09:04.015021   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | domain default-k8s-diff-port-243449 has defined MAC address 52:54:00:6e:0b:a7 in network mk-default-k8s-diff-port-243449
	I0914 18:09:04.015505   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:0b:a7", ip: ""} in network mk-default-k8s-diff-port-243449: {Iface:virbr4 ExpiryTime:2024-09-14 19:08:56 +0000 UTC Type:0 Mac:52:54:00:6e:0b:a7 Iaid: IPaddr:192.168.61.38 Prefix:24 Hostname:default-k8s-diff-port-243449 Clientid:01:52:54:00:6e:0b:a7}
	I0914 18:09:04.015553   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | domain default-k8s-diff-port-243449 has defined IP address 192.168.61.38 and MAC address 52:54:00:6e:0b:a7 in network mk-default-k8s-diff-port-243449
	I0914 18:09:04.015693   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetSSHPort
	I0914 18:09:04.015866   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetSSHKeyPath
	I0914 18:09:04.016035   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetSSHKeyPath
	I0914 18:09:04.016157   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetSSHUsername
	I0914 18:09:04.016277   63448 main.go:141] libmachine: Using SSH client type: native
	I0914 18:09:04.016479   63448 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.61.38 22 <nil> <nil>}
	I0914 18:09:04.016511   63448 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0914 18:09:04.234672   63448 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0914 18:09:04.234697   63448 machine.go:96] duration metric: took 1.051909541s to provisionDockerMachine
	I0914 18:09:04.234710   63448 start.go:293] postStartSetup for "default-k8s-diff-port-243449" (driver="kvm2")
	I0914 18:09:04.234721   63448 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0914 18:09:04.234766   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .DriverName
	I0914 18:09:04.235108   63448 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0914 18:09:04.235139   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetSSHHostname
	I0914 18:09:04.237583   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | domain default-k8s-diff-port-243449 has defined MAC address 52:54:00:6e:0b:a7 in network mk-default-k8s-diff-port-243449
	I0914 18:09:04.237964   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:0b:a7", ip: ""} in network mk-default-k8s-diff-port-243449: {Iface:virbr4 ExpiryTime:2024-09-14 19:08:56 +0000 UTC Type:0 Mac:52:54:00:6e:0b:a7 Iaid: IPaddr:192.168.61.38 Prefix:24 Hostname:default-k8s-diff-port-243449 Clientid:01:52:54:00:6e:0b:a7}
	I0914 18:09:04.237997   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | domain default-k8s-diff-port-243449 has defined IP address 192.168.61.38 and MAC address 52:54:00:6e:0b:a7 in network mk-default-k8s-diff-port-243449
	I0914 18:09:04.238237   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetSSHPort
	I0914 18:09:04.238491   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetSSHKeyPath
	I0914 18:09:04.238667   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetSSHUsername
	I0914 18:09:04.238798   63448 sshutil.go:53] new ssh client: &{IP:192.168.61.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/default-k8s-diff-port-243449/id_rsa Username:docker}
	I0914 18:09:04.320785   63448 ssh_runner.go:195] Run: cat /etc/os-release
	I0914 18:09:04.324837   63448 info.go:137] Remote host: Buildroot 2023.02.9
	I0914 18:09:04.324863   63448 filesync.go:126] Scanning /home/jenkins/minikube-integration/19643-8806/.minikube/addons for local assets ...
	I0914 18:09:04.324920   63448 filesync.go:126] Scanning /home/jenkins/minikube-integration/19643-8806/.minikube/files for local assets ...
	I0914 18:09:04.325001   63448 filesync.go:149] local asset: /home/jenkins/minikube-integration/19643-8806/.minikube/files/etc/ssl/certs/160162.pem -> 160162.pem in /etc/ssl/certs
	I0914 18:09:04.325091   63448 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0914 18:09:04.334235   63448 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/files/etc/ssl/certs/160162.pem --> /etc/ssl/certs/160162.pem (1708 bytes)
	I0914 18:09:04.357310   63448 start.go:296] duration metric: took 122.582935ms for postStartSetup
	I0914 18:09:04.357352   63448 fix.go:56] duration metric: took 19.25422843s for fixHost
	I0914 18:09:04.357373   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetSSHHostname
	I0914 18:09:04.360190   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | domain default-k8s-diff-port-243449 has defined MAC address 52:54:00:6e:0b:a7 in network mk-default-k8s-diff-port-243449
	I0914 18:09:04.360574   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:0b:a7", ip: ""} in network mk-default-k8s-diff-port-243449: {Iface:virbr4 ExpiryTime:2024-09-14 19:08:56 +0000 UTC Type:0 Mac:52:54:00:6e:0b:a7 Iaid: IPaddr:192.168.61.38 Prefix:24 Hostname:default-k8s-diff-port-243449 Clientid:01:52:54:00:6e:0b:a7}
	I0914 18:09:04.360601   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | domain default-k8s-diff-port-243449 has defined IP address 192.168.61.38 and MAC address 52:54:00:6e:0b:a7 in network mk-default-k8s-diff-port-243449
	I0914 18:09:04.360774   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetSSHPort
	I0914 18:09:04.360973   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetSSHKeyPath
	I0914 18:09:04.361163   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetSSHKeyPath
	I0914 18:09:04.361291   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetSSHUsername
	I0914 18:09:04.361479   63448 main.go:141] libmachine: Using SSH client type: native
	I0914 18:09:04.361658   63448 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.61.38 22 <nil> <nil>}
	I0914 18:09:04.361667   63448 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0914 18:09:04.466610   63448 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726337344.436836920
	
	I0914 18:09:04.466654   63448 fix.go:216] guest clock: 1726337344.436836920
	I0914 18:09:04.466665   63448 fix.go:229] Guest: 2024-09-14 18:09:04.43683692 +0000 UTC Remote: 2024-09-14 18:09:04.357356624 +0000 UTC m=+144.091633354 (delta=79.480296ms)
	I0914 18:09:04.466691   63448 fix.go:200] guest clock delta is within tolerance: 79.480296ms
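
Note: fix.go above runs "date +%s.%N" on the guest and compares it against the host clock; the 79ms delta is inside tolerance, so no clock resync is needed. A sketch of parsing that output and computing the delta (parseGuestClock is a hypothetical helper; in practice the string is read live over SSH rather than hard-coded):

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// parseGuestClock turns "date +%s.%N" output such as "1726337344.436836920"
// into a time.Time.
func parseGuestClock(s string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(s), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		nsec, err = strconv.ParseInt(parts[1], 10, 64)
		if err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	// Literal value copied from the log; a live check would read it from the VM.
	guest, err := parseGuestClock("1726337344.436836920")
	if err != nil {
		panic(err)
	}
	// minikube only resyncs the guest clock when this delta leaves a small tolerance.
	fmt.Println("guest/host clock delta:", time.Since(guest))
}
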
	I0914 18:09:04.466702   63448 start.go:83] releasing machines lock for "default-k8s-diff-port-243449", held for 19.363604776s
	I0914 18:09:04.466737   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .DriverName
	I0914 18:09:04.466992   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetIP
	I0914 18:09:04.469873   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | domain default-k8s-diff-port-243449 has defined MAC address 52:54:00:6e:0b:a7 in network mk-default-k8s-diff-port-243449
	I0914 18:09:04.470148   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:0b:a7", ip: ""} in network mk-default-k8s-diff-port-243449: {Iface:virbr4 ExpiryTime:2024-09-14 19:08:56 +0000 UTC Type:0 Mac:52:54:00:6e:0b:a7 Iaid: IPaddr:192.168.61.38 Prefix:24 Hostname:default-k8s-diff-port-243449 Clientid:01:52:54:00:6e:0b:a7}
	I0914 18:09:04.470198   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | domain default-k8s-diff-port-243449 has defined IP address 192.168.61.38 and MAC address 52:54:00:6e:0b:a7 in network mk-default-k8s-diff-port-243449
	I0914 18:09:04.470364   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .DriverName
	I0914 18:09:04.470877   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .DriverName
	I0914 18:09:04.471098   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .DriverName
	I0914 18:09:04.471215   63448 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0914 18:09:04.471270   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetSSHHostname
	I0914 18:09:04.471322   63448 ssh_runner.go:195] Run: cat /version.json
	I0914 18:09:04.471346   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetSSHHostname
	I0914 18:09:04.474023   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | domain default-k8s-diff-port-243449 has defined MAC address 52:54:00:6e:0b:a7 in network mk-default-k8s-diff-port-243449
	I0914 18:09:04.474144   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | domain default-k8s-diff-port-243449 has defined MAC address 52:54:00:6e:0b:a7 in network mk-default-k8s-diff-port-243449
	I0914 18:09:04.474345   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:0b:a7", ip: ""} in network mk-default-k8s-diff-port-243449: {Iface:virbr4 ExpiryTime:2024-09-14 19:08:56 +0000 UTC Type:0 Mac:52:54:00:6e:0b:a7 Iaid: IPaddr:192.168.61.38 Prefix:24 Hostname:default-k8s-diff-port-243449 Clientid:01:52:54:00:6e:0b:a7}
	I0914 18:09:04.474374   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | domain default-k8s-diff-port-243449 has defined IP address 192.168.61.38 and MAC address 52:54:00:6e:0b:a7 in network mk-default-k8s-diff-port-243449
	I0914 18:09:04.474471   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetSSHPort
	I0914 18:09:04.474616   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:0b:a7", ip: ""} in network mk-default-k8s-diff-port-243449: {Iface:virbr4 ExpiryTime:2024-09-14 19:08:56 +0000 UTC Type:0 Mac:52:54:00:6e:0b:a7 Iaid: IPaddr:192.168.61.38 Prefix:24 Hostname:default-k8s-diff-port-243449 Clientid:01:52:54:00:6e:0b:a7}
	I0914 18:09:04.474631   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetSSHKeyPath
	I0914 18:09:04.474637   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | domain default-k8s-diff-port-243449 has defined IP address 192.168.61.38 and MAC address 52:54:00:6e:0b:a7 in network mk-default-k8s-diff-port-243449
	I0914 18:09:04.474812   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetSSHUsername
	I0914 18:09:04.474816   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetSSHPort
	I0914 18:09:04.474996   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetSSHKeyPath
	I0914 18:09:04.474987   63448 sshutil.go:53] new ssh client: &{IP:192.168.61.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/default-k8s-diff-port-243449/id_rsa Username:docker}
	I0914 18:09:04.475136   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetSSHUsername
	I0914 18:09:04.475274   63448 sshutil.go:53] new ssh client: &{IP:192.168.61.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/default-k8s-diff-port-243449/id_rsa Username:docker}
	I0914 18:09:04.587233   63448 ssh_runner.go:195] Run: systemctl --version
	I0914 18:09:04.593065   63448 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0914 18:09:04.738721   63448 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0914 18:09:04.745472   63448 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0914 18:09:04.745539   63448 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0914 18:09:04.765742   63448 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0914 18:09:04.765806   63448 start.go:495] detecting cgroup driver to use...
	I0914 18:09:04.765909   63448 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0914 18:09:04.782234   63448 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0914 18:09:04.797259   63448 docker.go:217] disabling cri-docker service (if available) ...
	I0914 18:09:04.797322   63448 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0914 18:09:04.811794   63448 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0914 18:09:04.826487   63448 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0914 18:09:04.953417   63448 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0914 18:09:05.102410   63448 docker.go:233] disabling docker service ...
	I0914 18:09:05.102491   63448 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0914 18:09:05.117443   63448 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0914 18:09:05.131147   63448 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0914 18:09:05.278483   63448 ssh_runner.go:195] Run: sudo systemctl mask docker.service
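
Note: docker.go:217/233 above stop and mask the cri-docker and docker units so CRI-O is the only runtime the kubelet can reach. The same sequence expressed as plain systemctl invocations from Go (a sketch; minikube issues these through its SSH runner, and units that do not exist are tolerated):

package main

import (
	"fmt"
	"os/exec"
)

func systemctl(args ...string) error {
	cmd := exec.Command("sudo", append([]string{"systemctl"}, args...)...)
	out, err := cmd.CombinedOutput()
	if err != nil {
		return fmt.Errorf("systemctl %v: %v: %s", args, err, out)
	}
	return nil
}

func main() {
	// Mirrors the order in the log above.
	steps := [][]string{
		{"stop", "-f", "cri-docker.socket"},
		{"stop", "-f", "cri-docker.service"},
		{"disable", "cri-docker.socket"},
		{"mask", "cri-docker.service"},
		{"stop", "-f", "docker.socket"},
		{"stop", "-f", "docker.service"},
		{"disable", "docker.socket"},
		{"mask", "docker.service"},
	}
	for _, s := range steps {
		if err := systemctl(s...); err != nil {
			fmt.Println("ignoring:", err) // missing units are not fatal
		}
	}
}
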
	I0914 18:09:00.373968   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:00.874316   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:01.373792   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:01.873684   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:02.373524   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:02.874399   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:03.373728   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:03.874267   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:04.374018   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:04.873685   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:05.401195   63448 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0914 18:09:05.415794   63448 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0914 18:09:05.434594   63448 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0914 18:09:05.434660   63448 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 18:09:05.445566   63448 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0914 18:09:05.445643   63448 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 18:09:05.456690   63448 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 18:09:05.468044   63448 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 18:09:05.479719   63448 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0914 18:09:05.491019   63448 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 18:09:05.501739   63448 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 18:09:05.520582   63448 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 18:09:05.531469   63448 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0914 18:09:05.541741   63448 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0914 18:09:05.541809   63448 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0914 18:09:05.561648   63448 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0914 18:09:05.571882   63448 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 18:09:05.706592   63448 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0914 18:09:05.811522   63448 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0914 18:09:05.811599   63448 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0914 18:09:05.816676   63448 start.go:563] Will wait 60s for crictl version
	I0914 18:09:05.816745   63448 ssh_runner.go:195] Run: which crictl
	I0914 18:09:05.820367   63448 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0914 18:09:05.862564   63448 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0914 18:09:05.862637   63448 ssh_runner.go:195] Run: crio --version
	I0914 18:09:05.893106   63448 ssh_runner.go:195] Run: crio --version
	I0914 18:09:05.927136   63448 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0914 18:09:04.492847   62207 main.go:141] libmachine: (no-preload-168587) Calling .Start
	I0914 18:09:04.493070   62207 main.go:141] libmachine: (no-preload-168587) Ensuring networks are active...
	I0914 18:09:04.493844   62207 main.go:141] libmachine: (no-preload-168587) Ensuring network default is active
	I0914 18:09:04.494193   62207 main.go:141] libmachine: (no-preload-168587) Ensuring network mk-no-preload-168587 is active
	I0914 18:09:04.494614   62207 main.go:141] libmachine: (no-preload-168587) Getting domain xml...
	I0914 18:09:04.495434   62207 main.go:141] libmachine: (no-preload-168587) Creating domain...
	I0914 18:09:05.801470   62207 main.go:141] libmachine: (no-preload-168587) Waiting to get IP...
	I0914 18:09:05.802621   62207 main.go:141] libmachine: (no-preload-168587) DBG | domain no-preload-168587 has defined MAC address 52:54:00:4c:40:8a in network mk-no-preload-168587
	I0914 18:09:05.803215   62207 main.go:141] libmachine: (no-preload-168587) DBG | unable to find current IP address of domain no-preload-168587 in network mk-no-preload-168587
	I0914 18:09:05.803351   62207 main.go:141] libmachine: (no-preload-168587) DBG | I0914 18:09:05.803229   64231 retry.go:31] will retry after 206.528002ms: waiting for machine to come up
	I0914 18:09:06.011556   62207 main.go:141] libmachine: (no-preload-168587) DBG | domain no-preload-168587 has defined MAC address 52:54:00:4c:40:8a in network mk-no-preload-168587
	I0914 18:09:06.012027   62207 main.go:141] libmachine: (no-preload-168587) DBG | unable to find current IP address of domain no-preload-168587 in network mk-no-preload-168587
	I0914 18:09:06.012063   62207 main.go:141] libmachine: (no-preload-168587) DBG | I0914 18:09:06.011977   64231 retry.go:31] will retry after 252.283679ms: waiting for machine to come up
	I0914 18:09:06.266621   62207 main.go:141] libmachine: (no-preload-168587) DBG | domain no-preload-168587 has defined MAC address 52:54:00:4c:40:8a in network mk-no-preload-168587
	I0914 18:09:06.267145   62207 main.go:141] libmachine: (no-preload-168587) DBG | unable to find current IP address of domain no-preload-168587 in network mk-no-preload-168587
	I0914 18:09:06.267178   62207 main.go:141] libmachine: (no-preload-168587) DBG | I0914 18:09:06.267093   64231 retry.go:31] will retry after 376.426781ms: waiting for machine to come up
	I0914 18:09:06.644639   62207 main.go:141] libmachine: (no-preload-168587) DBG | domain no-preload-168587 has defined MAC address 52:54:00:4c:40:8a in network mk-no-preload-168587
	I0914 18:09:06.645212   62207 main.go:141] libmachine: (no-preload-168587) DBG | unable to find current IP address of domain no-preload-168587 in network mk-no-preload-168587
	I0914 18:09:06.645245   62207 main.go:141] libmachine: (no-preload-168587) DBG | I0914 18:09:06.645161   64231 retry.go:31] will retry after 518.904946ms: waiting for machine to come up
	I0914 18:09:06.584604   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:09:09.085179   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:09:05.928171   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetIP
	I0914 18:09:05.931131   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | domain default-k8s-diff-port-243449 has defined MAC address 52:54:00:6e:0b:a7 in network mk-default-k8s-diff-port-243449
	I0914 18:09:05.931584   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:0b:a7", ip: ""} in network mk-default-k8s-diff-port-243449: {Iface:virbr4 ExpiryTime:2024-09-14 19:08:56 +0000 UTC Type:0 Mac:52:54:00:6e:0b:a7 Iaid: IPaddr:192.168.61.38 Prefix:24 Hostname:default-k8s-diff-port-243449 Clientid:01:52:54:00:6e:0b:a7}
	I0914 18:09:05.931662   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | domain default-k8s-diff-port-243449 has defined IP address 192.168.61.38 and MAC address 52:54:00:6e:0b:a7 in network mk-default-k8s-diff-port-243449
	I0914 18:09:05.931826   63448 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0914 18:09:05.935729   63448 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0914 18:09:05.947741   63448 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-243449 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19643/minikube-v1.34.0-1726281733-19643-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726281268-19643@sha256:cce8be0c1ac4e3d852132008ef1cc1dcf5b79f708d025db83f146ae65db32e8e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-243449 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.38 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0914 18:09:05.947872   63448 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0914 18:09:05.947935   63448 ssh_runner.go:195] Run: sudo crictl images --output json
	I0914 18:09:05.984371   63448 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0914 18:09:05.984473   63448 ssh_runner.go:195] Run: which lz4
	I0914 18:09:05.988311   63448 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0914 18:09:05.992088   63448 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0914 18:09:05.992123   63448 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I0914 18:09:07.311157   63448 crio.go:462] duration metric: took 1.322885925s to copy over tarball
	I0914 18:09:07.311297   63448 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0914 18:09:09.472639   63448 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.161311106s)
	I0914 18:09:09.472663   63448 crio.go:469] duration metric: took 2.161473132s to extract the tarball
	I0914 18:09:09.472670   63448 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0914 18:09:09.508740   63448 ssh_runner.go:195] Run: sudo crictl images --output json
	I0914 18:09:09.554508   63448 crio.go:514] all images are preloaded for cri-o runtime.
	I0914 18:09:09.554533   63448 cache_images.go:84] Images are preloaded, skipping loading
	I0914 18:09:09.554548   63448 kubeadm.go:934] updating node { 192.168.61.38 8444 v1.31.1 crio true true} ...
	I0914 18:09:09.554657   63448 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-243449 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.38
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-243449 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0914 18:09:09.554722   63448 ssh_runner.go:195] Run: crio config
	I0914 18:09:09.603693   63448 cni.go:84] Creating CNI manager for ""
	I0914 18:09:09.603715   63448 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0914 18:09:09.603727   63448 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0914 18:09:09.603745   63448 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.38 APIServerPort:8444 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-243449 NodeName:default-k8s-diff-port-243449 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.38"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.38 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0914 18:09:09.603879   63448 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.38
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-243449"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.38
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.38"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0914 18:09:09.603935   63448 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0914 18:09:09.613786   63448 binaries.go:44] Found k8s binaries, skipping transfer
	I0914 18:09:09.613863   63448 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0914 18:09:09.623172   63448 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (327 bytes)
	I0914 18:09:09.641437   63448 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0914 18:09:09.657677   63448 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2169 bytes)
	I0914 18:09:09.675042   63448 ssh_runner.go:195] Run: grep 192.168.61.38	control-plane.minikube.internal$ /etc/hosts
	I0914 18:09:09.678885   63448 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.38	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0914 18:09:09.694466   63448 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 18:09:09.823504   63448 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0914 18:09:09.840638   63448 certs.go:68] Setting up /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/default-k8s-diff-port-243449 for IP: 192.168.61.38
	I0914 18:09:09.840658   63448 certs.go:194] generating shared ca certs ...
	I0914 18:09:09.840677   63448 certs.go:226] acquiring lock for ca certs: {Name:mkb663a3180967f5f94f0c355b2cd55067394331 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 18:09:09.840827   63448 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19643-8806/.minikube/ca.key
	I0914 18:09:09.840869   63448 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19643-8806/.minikube/proxy-client-ca.key
	I0914 18:09:09.840879   63448 certs.go:256] generating profile certs ...
	I0914 18:09:09.841046   63448 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/default-k8s-diff-port-243449/client.key
	I0914 18:09:09.841147   63448 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/default-k8s-diff-port-243449/apiserver.key.68770133
	I0914 18:09:09.841231   63448 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/default-k8s-diff-port-243449/proxy-client.key
	I0914 18:09:09.841342   63448 certs.go:484] found cert: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/16016.pem (1338 bytes)
	W0914 18:09:09.841370   63448 certs.go:480] ignoring /home/jenkins/minikube-integration/19643-8806/.minikube/certs/16016_empty.pem, impossibly tiny 0 bytes
	I0914 18:09:09.841377   63448 certs.go:484] found cert: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca-key.pem (1679 bytes)
	I0914 18:09:09.841398   63448 certs.go:484] found cert: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca.pem (1082 bytes)
	I0914 18:09:09.841425   63448 certs.go:484] found cert: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/cert.pem (1123 bytes)
	I0914 18:09:09.841447   63448 certs.go:484] found cert: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/key.pem (1675 bytes)
	I0914 18:09:09.841499   63448 certs.go:484] found cert: /home/jenkins/minikube-integration/19643-8806/.minikube/files/etc/ssl/certs/160162.pem (1708 bytes)
	I0914 18:09:09.842211   63448 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0914 18:09:09.883406   63448 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0914 18:09:09.914134   63448 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0914 18:09:09.941343   63448 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0914 18:09:09.990870   63448 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/default-k8s-diff-port-243449/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0914 18:09:10.040821   63448 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/default-k8s-diff-port-243449/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0914 18:09:10.065238   63448 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/default-k8s-diff-port-243449/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0914 18:09:10.089901   63448 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/default-k8s-diff-port-243449/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0914 18:09:10.114440   63448 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/files/etc/ssl/certs/160162.pem --> /usr/share/ca-certificates/160162.pem (1708 bytes)
	I0914 18:09:10.138963   63448 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0914 18:09:10.162828   63448 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/certs/16016.pem --> /usr/share/ca-certificates/16016.pem (1338 bytes)
	I0914 18:09:10.185702   63448 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0914 18:09:10.201251   63448 ssh_runner.go:195] Run: openssl version
	I0914 18:09:10.206904   63448 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/160162.pem && ln -fs /usr/share/ca-certificates/160162.pem /etc/ssl/certs/160162.pem"
	I0914 18:09:10.216966   63448 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/160162.pem
	I0914 18:09:10.221437   63448 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 14 17:01 /usr/share/ca-certificates/160162.pem
	I0914 18:09:10.221506   63448 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/160162.pem
	I0914 18:09:10.227033   63448 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/160162.pem /etc/ssl/certs/3ec20f2e.0"
	I0914 18:09:10.237039   63448 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0914 18:09:10.247244   63448 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0914 18:09:10.251434   63448 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 14 16:44 /usr/share/ca-certificates/minikubeCA.pem
	I0914 18:09:10.251494   63448 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0914 18:09:10.257187   63448 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0914 18:09:10.267490   63448 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16016.pem && ln -fs /usr/share/ca-certificates/16016.pem /etc/ssl/certs/16016.pem"
	I0914 18:09:10.277622   63448 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16016.pem
	I0914 18:09:10.281705   63448 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 14 17:01 /usr/share/ca-certificates/16016.pem
	I0914 18:09:10.281789   63448 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16016.pem
	I0914 18:09:10.287013   63448 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16016.pem /etc/ssl/certs/51391683.0"
	I0914 18:09:10.296942   63448 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0914 18:09:05.374034   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:05.873992   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:06.374407   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:06.873737   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:07.373665   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:07.874486   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:08.374017   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:08.874365   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:09.374221   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:09.874108   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:07.165576   62207 main.go:141] libmachine: (no-preload-168587) DBG | domain no-preload-168587 has defined MAC address 52:54:00:4c:40:8a in network mk-no-preload-168587
	I0914 18:09:07.166187   62207 main.go:141] libmachine: (no-preload-168587) DBG | unable to find current IP address of domain no-preload-168587 in network mk-no-preload-168587
	I0914 18:09:07.166219   62207 main.go:141] libmachine: (no-preload-168587) DBG | I0914 18:09:07.166125   64231 retry.go:31] will retry after 631.376012ms: waiting for machine to come up
	I0914 18:09:07.798978   62207 main.go:141] libmachine: (no-preload-168587) DBG | domain no-preload-168587 has defined MAC address 52:54:00:4c:40:8a in network mk-no-preload-168587
	I0914 18:09:07.799450   62207 main.go:141] libmachine: (no-preload-168587) DBG | unable to find current IP address of domain no-preload-168587 in network mk-no-preload-168587
	I0914 18:09:07.799478   62207 main.go:141] libmachine: (no-preload-168587) DBG | I0914 18:09:07.799404   64231 retry.go:31] will retry after 668.764795ms: waiting for machine to come up
	I0914 18:09:08.470207   62207 main.go:141] libmachine: (no-preload-168587) DBG | domain no-preload-168587 has defined MAC address 52:54:00:4c:40:8a in network mk-no-preload-168587
	I0914 18:09:08.470613   62207 main.go:141] libmachine: (no-preload-168587) DBG | unable to find current IP address of domain no-preload-168587 in network mk-no-preload-168587
	I0914 18:09:08.470640   62207 main.go:141] libmachine: (no-preload-168587) DBG | I0914 18:09:08.470559   64231 retry.go:31] will retry after 943.595216ms: waiting for machine to come up
	I0914 18:09:09.415274   62207 main.go:141] libmachine: (no-preload-168587) DBG | domain no-preload-168587 has defined MAC address 52:54:00:4c:40:8a in network mk-no-preload-168587
	I0914 18:09:09.415721   62207 main.go:141] libmachine: (no-preload-168587) DBG | unable to find current IP address of domain no-preload-168587 in network mk-no-preload-168587
	I0914 18:09:09.415751   62207 main.go:141] libmachine: (no-preload-168587) DBG | I0914 18:09:09.415675   64231 retry.go:31] will retry after 956.638818ms: waiting for machine to come up
	I0914 18:09:10.374297   62207 main.go:141] libmachine: (no-preload-168587) DBG | domain no-preload-168587 has defined MAC address 52:54:00:4c:40:8a in network mk-no-preload-168587
	I0914 18:09:10.374875   62207 main.go:141] libmachine: (no-preload-168587) DBG | unable to find current IP address of domain no-preload-168587 in network mk-no-preload-168587
	I0914 18:09:10.374902   62207 main.go:141] libmachine: (no-preload-168587) DBG | I0914 18:09:10.374822   64231 retry.go:31] will retry after 1.703915418s: waiting for machine to come up
	I0914 18:09:11.583370   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:09:14.082919   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:09:10.301352   63448 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0914 18:09:10.307276   63448 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0914 18:09:10.313391   63448 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0914 18:09:10.319883   63448 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0914 18:09:10.325671   63448 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0914 18:09:10.331445   63448 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0914 18:09:10.336855   63448 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-243449 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19643/minikube-v1.34.0-1726281733-19643-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726281268-19643@sha256:cce8be0c1ac4e3d852132008ef1cc1dcf5b79f708d025db83f146ae65db32e8e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-243449 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.38 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0914 18:09:10.336953   63448 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0914 18:09:10.337019   63448 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0914 18:09:10.372899   63448 cri.go:89] found id: ""
	I0914 18:09:10.372988   63448 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0914 18:09:10.386897   63448 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0914 18:09:10.386920   63448 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0914 18:09:10.386978   63448 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0914 18:09:10.399165   63448 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0914 18:09:10.400212   63448 kubeconfig.go:125] found "default-k8s-diff-port-243449" server: "https://192.168.61.38:8444"
	I0914 18:09:10.402449   63448 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0914 18:09:10.414129   63448 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.38
	I0914 18:09:10.414192   63448 kubeadm.go:1160] stopping kube-system containers ...
	I0914 18:09:10.414207   63448 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0914 18:09:10.414276   63448 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0914 18:09:10.454549   63448 cri.go:89] found id: ""
	I0914 18:09:10.454627   63448 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0914 18:09:10.472261   63448 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0914 18:09:10.481693   63448 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0914 18:09:10.481724   63448 kubeadm.go:157] found existing configuration files:
	
	I0914 18:09:10.481772   63448 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0914 18:09:10.492205   63448 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0914 18:09:10.492283   63448 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0914 18:09:10.502923   63448 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0914 18:09:10.511620   63448 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0914 18:09:10.511688   63448 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0914 18:09:10.520978   63448 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0914 18:09:10.529590   63448 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0914 18:09:10.529652   63448 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0914 18:09:10.538602   63448 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0914 18:09:10.546968   63448 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0914 18:09:10.547037   63448 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0914 18:09:10.556280   63448 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0914 18:09:10.565471   63448 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 18:09:10.670297   63448 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 18:09:11.611646   63448 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0914 18:09:11.858308   63448 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 18:09:11.942761   63448 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0914 18:09:12.018144   63448 api_server.go:52] waiting for apiserver process to appear ...
	I0914 18:09:12.018251   63448 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:12.518933   63448 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:13.019098   63448 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:13.518297   63448 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:14.018327   63448 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:14.033874   63448 api_server.go:72] duration metric: took 2.015718891s to wait for apiserver process to appear ...
	I0914 18:09:14.033902   63448 api_server.go:88] waiting for apiserver healthz status ...
	I0914 18:09:14.033926   63448 api_server.go:253] Checking apiserver healthz at https://192.168.61.38:8444/healthz ...
	I0914 18:09:14.034534   63448 api_server.go:269] stopped: https://192.168.61.38:8444/healthz: Get "https://192.168.61.38:8444/healthz": dial tcp 192.168.61.38:8444: connect: connection refused
	I0914 18:09:14.534065   63448 api_server.go:253] Checking apiserver healthz at https://192.168.61.38:8444/healthz ...
	I0914 18:09:10.373394   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:10.873498   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:11.373841   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:11.873492   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:12.374179   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:12.873586   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:13.374405   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:13.873518   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:14.374018   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:14.873905   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:12.080547   62207 main.go:141] libmachine: (no-preload-168587) DBG | domain no-preload-168587 has defined MAC address 52:54:00:4c:40:8a in network mk-no-preload-168587
	I0914 18:09:12.081149   62207 main.go:141] libmachine: (no-preload-168587) DBG | unable to find current IP address of domain no-preload-168587 in network mk-no-preload-168587
	I0914 18:09:12.081174   62207 main.go:141] libmachine: (no-preload-168587) DBG | I0914 18:09:12.081095   64231 retry.go:31] will retry after 1.634645735s: waiting for machine to come up
	I0914 18:09:13.717239   62207 main.go:141] libmachine: (no-preload-168587) DBG | domain no-preload-168587 has defined MAC address 52:54:00:4c:40:8a in network mk-no-preload-168587
	I0914 18:09:13.717787   62207 main.go:141] libmachine: (no-preload-168587) DBG | unable to find current IP address of domain no-preload-168587 in network mk-no-preload-168587
	I0914 18:09:13.717821   62207 main.go:141] libmachine: (no-preload-168587) DBG | I0914 18:09:13.717731   64231 retry.go:31] will retry after 2.524549426s: waiting for machine to come up
	I0914 18:09:16.244729   62207 main.go:141] libmachine: (no-preload-168587) DBG | domain no-preload-168587 has defined MAC address 52:54:00:4c:40:8a in network mk-no-preload-168587
	I0914 18:09:16.245132   62207 main.go:141] libmachine: (no-preload-168587) DBG | unable to find current IP address of domain no-preload-168587 in network mk-no-preload-168587
	I0914 18:09:16.245162   62207 main.go:141] libmachine: (no-preload-168587) DBG | I0914 18:09:16.245072   64231 retry.go:31] will retry after 2.539965892s: waiting for machine to come up
	I0914 18:09:16.083603   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:09:18.581965   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:09:16.427071   63448 api_server.go:279] https://192.168.61.38:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0914 18:09:16.427109   63448 api_server.go:103] status: https://192.168.61.38:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0914 18:09:16.427156   63448 api_server.go:253] Checking apiserver healthz at https://192.168.61.38:8444/healthz ...
	I0914 18:09:16.440812   63448 api_server.go:279] https://192.168.61.38:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0914 18:09:16.440848   63448 api_server.go:103] status: https://192.168.61.38:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0914 18:09:16.534060   63448 api_server.go:253] Checking apiserver healthz at https://192.168.61.38:8444/healthz ...
	I0914 18:09:16.593356   63448 api_server.go:279] https://192.168.61.38:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0914 18:09:16.593412   63448 api_server.go:103] status: https://192.168.61.38:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0914 18:09:17.034545   63448 api_server.go:253] Checking apiserver healthz at https://192.168.61.38:8444/healthz ...
	I0914 18:09:17.039094   63448 api_server.go:279] https://192.168.61.38:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0914 18:09:17.039131   63448 api_server.go:103] status: https://192.168.61.38:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0914 18:09:17.534668   63448 api_server.go:253] Checking apiserver healthz at https://192.168.61.38:8444/healthz ...
	I0914 18:09:17.543018   63448 api_server.go:279] https://192.168.61.38:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0914 18:09:17.543053   63448 api_server.go:103] status: https://192.168.61.38:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0914 18:09:18.034612   63448 api_server.go:253] Checking apiserver healthz at https://192.168.61.38:8444/healthz ...
	I0914 18:09:18.039042   63448 api_server.go:279] https://192.168.61.38:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0914 18:09:18.039071   63448 api_server.go:103] status: https://192.168.61.38:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0914 18:09:18.534675   63448 api_server.go:253] Checking apiserver healthz at https://192.168.61.38:8444/healthz ...
	I0914 18:09:18.540612   63448 api_server.go:279] https://192.168.61.38:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0914 18:09:18.540637   63448 api_server.go:103] status: https://192.168.61.38:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0914 18:09:19.034196   63448 api_server.go:253] Checking apiserver healthz at https://192.168.61.38:8444/healthz ...
	I0914 18:09:19.040397   63448 api_server.go:279] https://192.168.61.38:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0914 18:09:19.040429   63448 api_server.go:103] status: https://192.168.61.38:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0914 18:09:19.535035   63448 api_server.go:253] Checking apiserver healthz at https://192.168.61.38:8444/healthz ...
	I0914 18:09:19.540910   63448 api_server.go:279] https://192.168.61.38:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0914 18:09:19.540940   63448 api_server.go:103] status: https://192.168.61.38:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0914 18:09:20.034275   63448 api_server.go:253] Checking apiserver healthz at https://192.168.61.38:8444/healthz ...
	I0914 18:09:20.038541   63448 api_server.go:279] https://192.168.61.38:8444/healthz returned 200:
	ok
	I0914 18:09:20.044704   63448 api_server.go:141] control plane version: v1.31.1
	I0914 18:09:20.044734   63448 api_server.go:131] duration metric: took 6.010822563s to wait for apiserver health ...
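The block above is the api_server.go poller: it hits /healthz roughly every 500ms, logs each 500 response (failing poststarthooks are the `[-]` lines), and stops once a 200/`ok` comes back, here after ~6s. A minimal sketch of that poll-until-healthy pattern, assuming a self-signed apiserver certificate (hence InsecureSkipVerify) and a hypothetical `waitForHealthz` helper; this is illustrative, not minikube's actual implementation:

```go
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls url until it returns 200/"ok" or the timeout expires.
// Hypothetical helper mirroring the poll-every-500ms pattern in the log above.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		// The apiserver cert is self-signed for the VM IP, so skip verification
		// for this liveness probe only.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   2 * time.Second,
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // healthz returned 200: ok
			}
			// 500 responses carry the per-poststarthook [+]/[-] report seen above.
			fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver %s never became healthy within %s", url, timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.61.38:8444/healthz", time.Minute); err != nil {
		fmt.Println(err)
	}
}
```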
	I0914 18:09:20.044744   63448 cni.go:84] Creating CNI manager for ""
	I0914 18:09:20.044752   63448 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0914 18:09:20.046616   63448 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0914 18:09:20.047724   63448 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0914 18:09:20.058152   63448 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
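The `scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)` step writes the bridge CNI config to the node after cni.go recommends the bridge CNI for the kvm2 + crio combination. A representative conflist for the standard `bridge` + `host-local` plugins is sketched below as a Go string; the field values (subnet, bridge name, extra portmap plugin) are illustrative assumptions, not the exact 496-byte file minikube generated here:

```go
package main

import "os"

// Representative bridge CNI conflist (values are illustrative, not minikube's
// exact generated file). The reference bridge and host-local plugins accept
// this shape.
const bridgeConflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isGateway": true,
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/16"
      }
    },
    {
      "type": "portmap",
      "capabilities": { "portMappings": true }
    }
  ]
}
`

func main() {
	// On the node this content would land in /etc/cni/net.d/1-k8s.conflist.
	_ = os.WriteFile("1-k8s.conflist", []byte(bridgeConflist), 0o644)
}
```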
	I0914 18:09:20.077880   63448 system_pods.go:43] waiting for kube-system pods to appear ...
	I0914 18:09:20.090089   63448 system_pods.go:59] 8 kube-system pods found
	I0914 18:09:20.090135   63448 system_pods.go:61] "coredns-7c65d6cfc9-8v8s7" [896b4fde-d17e-43a3-b7c8-b710e2e70e2c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0914 18:09:20.090148   63448 system_pods.go:61] "etcd-default-k8s-diff-port-243449" [9201493d-45db-44f4-948d-34e1d1ddee8f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0914 18:09:20.090178   63448 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-243449" [052b85bf-0f3b-4ace-9301-99e53f91cfcf] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0914 18:09:20.090192   63448 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-243449" [51214b37-f5e8-4037-83ff-1fd09b93e008] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0914 18:09:20.090199   63448 system_pods.go:61] "kube-proxy-gbkqm" [4308aacf-ea0a-4bba-8598-85ffaf959b7e] Running
	I0914 18:09:20.090210   63448 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-243449" [b6eacf30-47ec-4f8d-968c-195661a2a732] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0914 18:09:20.090219   63448 system_pods.go:61] "metrics-server-6867b74b74-7v8dr" [90be95af-c779-4b31-b261-2c4020a34280] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 18:09:20.090226   63448 system_pods.go:61] "storage-provisioner" [2e814601-a19a-4848-bed5-d9a29ffb3b5d] Running
	I0914 18:09:20.090236   63448 system_pods.go:74] duration metric: took 12.327834ms to wait for pod list to return data ...
	I0914 18:09:20.090248   63448 node_conditions.go:102] verifying NodePressure condition ...
	I0914 18:09:20.094429   63448 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0914 18:09:20.094455   63448 node_conditions.go:123] node cpu capacity is 2
	I0914 18:09:20.094468   63448 node_conditions.go:105] duration metric: took 4.21448ms to run NodePressure ...
	I0914 18:09:20.094486   63448 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 18:09:15.374447   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:15.873830   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:16.373497   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:16.874326   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:17.373994   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:17.873394   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:18.373596   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:18.874350   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:19.374434   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:19.873774   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:20.357111   63448 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0914 18:09:20.361447   63448 kubeadm.go:739] kubelet initialised
	I0914 18:09:20.361469   63448 kubeadm.go:740] duration metric: took 4.331134ms waiting for restarted kubelet to initialise ...
	I0914 18:09:20.361479   63448 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 18:09:20.367027   63448 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-8v8s7" in "kube-system" namespace to be "Ready" ...
	I0914 18:09:20.371669   63448 pod_ready.go:98] node "default-k8s-diff-port-243449" hosting pod "coredns-7c65d6cfc9-8v8s7" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-243449" has status "Ready":"False"
	I0914 18:09:20.371697   63448 pod_ready.go:82] duration metric: took 4.644689ms for pod "coredns-7c65d6cfc9-8v8s7" in "kube-system" namespace to be "Ready" ...
	E0914 18:09:20.371706   63448 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-243449" hosting pod "coredns-7c65d6cfc9-8v8s7" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-243449" has status "Ready":"False"
	I0914 18:09:20.371714   63448 pod_ready.go:79] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-243449" in "kube-system" namespace to be "Ready" ...
	I0914 18:09:20.376461   63448 pod_ready.go:98] node "default-k8s-diff-port-243449" hosting pod "etcd-default-k8s-diff-port-243449" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-243449" has status "Ready":"False"
	I0914 18:09:20.376486   63448 pod_ready.go:82] duration metric: took 4.764316ms for pod "etcd-default-k8s-diff-port-243449" in "kube-system" namespace to be "Ready" ...
	E0914 18:09:20.376497   63448 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-243449" hosting pod "etcd-default-k8s-diff-port-243449" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-243449" has status "Ready":"False"
	I0914 18:09:20.376506   63448 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-243449" in "kube-system" namespace to be "Ready" ...
	I0914 18:09:20.380607   63448 pod_ready.go:98] node "default-k8s-diff-port-243449" hosting pod "kube-apiserver-default-k8s-diff-port-243449" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-243449" has status "Ready":"False"
	I0914 18:09:20.380632   63448 pod_ready.go:82] duration metric: took 4.117696ms for pod "kube-apiserver-default-k8s-diff-port-243449" in "kube-system" namespace to be "Ready" ...
	E0914 18:09:20.380642   63448 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-243449" hosting pod "kube-apiserver-default-k8s-diff-port-243449" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-243449" has status "Ready":"False"
	I0914 18:09:20.380649   63448 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-243449" in "kube-system" namespace to be "Ready" ...
	I0914 18:09:20.481883   63448 pod_ready.go:98] node "default-k8s-diff-port-243449" hosting pod "kube-controller-manager-default-k8s-diff-port-243449" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-243449" has status "Ready":"False"
	I0914 18:09:20.481920   63448 pod_ready.go:82] duration metric: took 101.262101ms for pod "kube-controller-manager-default-k8s-diff-port-243449" in "kube-system" namespace to be "Ready" ...
	E0914 18:09:20.481935   63448 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-243449" hosting pod "kube-controller-manager-default-k8s-diff-port-243449" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-243449" has status "Ready":"False"
	I0914 18:09:20.481965   63448 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-gbkqm" in "kube-system" namespace to be "Ready" ...
	I0914 18:09:20.881501   63448 pod_ready.go:98] node "default-k8s-diff-port-243449" hosting pod "kube-proxy-gbkqm" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-243449" has status "Ready":"False"
	I0914 18:09:20.881541   63448 pod_ready.go:82] duration metric: took 399.559576ms for pod "kube-proxy-gbkqm" in "kube-system" namespace to be "Ready" ...
	E0914 18:09:20.881556   63448 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-243449" hosting pod "kube-proxy-gbkqm" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-243449" has status "Ready":"False"
	I0914 18:09:20.881566   63448 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-243449" in "kube-system" namespace to be "Ready" ...
	I0914 18:09:21.282414   63448 pod_ready.go:98] node "default-k8s-diff-port-243449" hosting pod "kube-scheduler-default-k8s-diff-port-243449" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-243449" has status "Ready":"False"
	I0914 18:09:21.282446   63448 pod_ready.go:82] duration metric: took 400.860884ms for pod "kube-scheduler-default-k8s-diff-port-243449" in "kube-system" namespace to be "Ready" ...
	E0914 18:09:21.282463   63448 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-243449" hosting pod "kube-scheduler-default-k8s-diff-port-243449" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-243449" has status "Ready":"False"
	I0914 18:09:21.282472   63448 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace to be "Ready" ...
	I0914 18:09:21.681717   63448 pod_ready.go:98] node "default-k8s-diff-port-243449" hosting pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-243449" has status "Ready":"False"
	I0914 18:09:21.681757   63448 pod_ready.go:82] duration metric: took 399.273892ms for pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace to be "Ready" ...
	E0914 18:09:21.681773   63448 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-243449" hosting pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-243449" has status "Ready":"False"
	I0914 18:09:21.681783   63448 pod_ready.go:39] duration metric: took 1.320292845s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
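pod_ready.go above waits up to 4m for each system-critical pod to report the Ready condition, and short-circuits with the `skipping!` errors while the node itself still has `Ready:"False"`. A minimal client-go sketch of the per-pod half of that check, i.e. reading a pod's PodReady condition; the kubeconfig path is a placeholder and this is not minikube's own pod_ready.go code:

```go
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the pod's PodReady condition is True.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Placeholder kubeconfig path; the run above uses the profile's kubeconfig.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(),
		"coredns-7c65d6cfc9-8v8s7", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("%s Ready=%v\n", pod.Name, isPodReady(pod))
}
```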
	I0914 18:09:21.681825   63448 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0914 18:09:21.693644   63448 ops.go:34] apiserver oom_adj: -16
	I0914 18:09:21.693682   63448 kubeadm.go:597] duration metric: took 11.306754096s to restartPrimaryControlPlane
	I0914 18:09:21.693696   63448 kubeadm.go:394] duration metric: took 11.356851178s to StartCluster
	I0914 18:09:21.693719   63448 settings.go:142] acquiring lock: {Name:mkee31cd57c3a91eff581c48ba7961460fb3e0b1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 18:09:21.693820   63448 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19643-8806/kubeconfig
	I0914 18:09:21.695521   63448 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19643-8806/kubeconfig: {Name:mk850f3e2f3c2ef1e81c795ae72e22e459e27140 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 18:09:21.695793   63448 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.38 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0914 18:09:21.695903   63448 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0914 18:09:21.695982   63448 config.go:182] Loaded profile config "default-k8s-diff-port-243449": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0914 18:09:21.696003   63448 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-243449"
	I0914 18:09:21.696021   63448 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-243449"
	I0914 18:09:21.696029   63448 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-243449"
	W0914 18:09:21.696041   63448 addons.go:243] addon storage-provisioner should already be in state true
	I0914 18:09:21.696044   63448 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-243449"
	I0914 18:09:21.696063   63448 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-243449"
	I0914 18:09:21.696094   63448 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-243449"
	W0914 18:09:21.696108   63448 addons.go:243] addon metrics-server should already be in state true
	I0914 18:09:21.696149   63448 host.go:66] Checking if "default-k8s-diff-port-243449" exists ...
	I0914 18:09:21.696074   63448 host.go:66] Checking if "default-k8s-diff-port-243449" exists ...
	I0914 18:09:21.696411   63448 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 18:09:21.696455   63448 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 18:09:21.696543   63448 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 18:09:21.696562   63448 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 18:09:21.696693   63448 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 18:09:21.696735   63448 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 18:09:21.697719   63448 out.go:177] * Verifying Kubernetes components...
	I0914 18:09:21.699171   63448 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 18:09:21.712479   63448 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36733
	I0914 18:09:21.712563   63448 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40265
	I0914 18:09:21.713050   63448 main.go:141] libmachine: () Calling .GetVersion
	I0914 18:09:21.713065   63448 main.go:141] libmachine: () Calling .GetVersion
	I0914 18:09:21.713585   63448 main.go:141] libmachine: Using API Version  1
	I0914 18:09:21.713601   63448 main.go:141] libmachine: Using API Version  1
	I0914 18:09:21.713613   63448 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 18:09:21.713633   63448 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 18:09:21.713940   63448 main.go:141] libmachine: () Calling .GetMachineName
	I0914 18:09:21.714122   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetState
	I0914 18:09:21.714135   63448 main.go:141] libmachine: () Calling .GetMachineName
	I0914 18:09:21.714737   63448 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 18:09:21.714789   63448 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 18:09:21.716503   63448 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33627
	I0914 18:09:21.716977   63448 main.go:141] libmachine: () Calling .GetVersion
	I0914 18:09:21.717490   63448 main.go:141] libmachine: Using API Version  1
	I0914 18:09:21.717514   63448 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 18:09:21.717872   63448 main.go:141] libmachine: () Calling .GetMachineName
	I0914 18:09:21.718055   63448 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-243449"
	W0914 18:09:21.718075   63448 addons.go:243] addon default-storageclass should already be in state true
	I0914 18:09:21.718105   63448 host.go:66] Checking if "default-k8s-diff-port-243449" exists ...
	I0914 18:09:21.718432   63448 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 18:09:21.718484   63448 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 18:09:21.718491   63448 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 18:09:21.718527   63448 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 18:09:21.737248   63448 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45909
	I0914 18:09:21.738874   63448 main.go:141] libmachine: () Calling .GetVersion
	I0914 18:09:21.739437   63448 main.go:141] libmachine: Using API Version  1
	I0914 18:09:21.739460   63448 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 18:09:21.739865   63448 main.go:141] libmachine: () Calling .GetMachineName
	I0914 18:09:21.740121   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetState
	I0914 18:09:21.742251   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .DriverName
	I0914 18:09:21.744281   63448 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 18:09:21.745631   63448 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0914 18:09:21.745656   63448 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0914 18:09:21.745682   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetSSHHostname
	I0914 18:09:21.749856   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | domain default-k8s-diff-port-243449 has defined MAC address 52:54:00:6e:0b:a7 in network mk-default-k8s-diff-port-243449
	I0914 18:09:21.750398   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:0b:a7", ip: ""} in network mk-default-k8s-diff-port-243449: {Iface:virbr4 ExpiryTime:2024-09-14 19:08:56 +0000 UTC Type:0 Mac:52:54:00:6e:0b:a7 Iaid: IPaddr:192.168.61.38 Prefix:24 Hostname:default-k8s-diff-port-243449 Clientid:01:52:54:00:6e:0b:a7}
	I0914 18:09:21.750424   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | domain default-k8s-diff-port-243449 has defined IP address 192.168.61.38 and MAC address 52:54:00:6e:0b:a7 in network mk-default-k8s-diff-port-243449
	I0914 18:09:21.750659   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetSSHPort
	I0914 18:09:21.750886   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetSSHKeyPath
	I0914 18:09:21.751029   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetSSHUsername
	I0914 18:09:21.751187   63448 sshutil.go:53] new ssh client: &{IP:192.168.61.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/default-k8s-diff-port-243449/id_rsa Username:docker}
	I0914 18:09:21.756605   63448 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33055
	I0914 18:09:21.756825   63448 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39257
	I0914 18:09:21.757040   63448 main.go:141] libmachine: () Calling .GetVersion
	I0914 18:09:21.757293   63448 main.go:141] libmachine: () Calling .GetVersion
	I0914 18:09:21.757562   63448 main.go:141] libmachine: Using API Version  1
	I0914 18:09:21.757588   63448 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 18:09:21.758058   63448 main.go:141] libmachine: () Calling .GetMachineName
	I0914 18:09:21.758301   63448 main.go:141] libmachine: Using API Version  1
	I0914 18:09:21.758322   63448 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 18:09:21.758325   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetState
	I0914 18:09:21.758717   63448 main.go:141] libmachine: () Calling .GetMachineName
	I0914 18:09:21.759300   63448 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 18:09:21.759342   63448 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 18:09:21.760557   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .DriverName
	I0914 18:09:21.762845   63448 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0914 18:09:18.787883   62207 main.go:141] libmachine: (no-preload-168587) DBG | domain no-preload-168587 has defined MAC address 52:54:00:4c:40:8a in network mk-no-preload-168587
	I0914 18:09:18.788270   62207 main.go:141] libmachine: (no-preload-168587) DBG | unable to find current IP address of domain no-preload-168587 in network mk-no-preload-168587
	I0914 18:09:18.788298   62207 main.go:141] libmachine: (no-preload-168587) DBG | I0914 18:09:18.788225   64231 retry.go:31] will retry after 4.53698887s: waiting for machine to come up
	I0914 18:09:21.764071   63448 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0914 18:09:21.764092   63448 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0914 18:09:21.764116   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetSSHHostname
	I0914 18:09:21.767725   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | domain default-k8s-diff-port-243449 has defined MAC address 52:54:00:6e:0b:a7 in network mk-default-k8s-diff-port-243449
	I0914 18:09:21.768255   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:0b:a7", ip: ""} in network mk-default-k8s-diff-port-243449: {Iface:virbr4 ExpiryTime:2024-09-14 19:08:56 +0000 UTC Type:0 Mac:52:54:00:6e:0b:a7 Iaid: IPaddr:192.168.61.38 Prefix:24 Hostname:default-k8s-diff-port-243449 Clientid:01:52:54:00:6e:0b:a7}
	I0914 18:09:21.768367   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | domain default-k8s-diff-port-243449 has defined IP address 192.168.61.38 and MAC address 52:54:00:6e:0b:a7 in network mk-default-k8s-diff-port-243449
	I0914 18:09:21.768503   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetSSHPort
	I0914 18:09:21.768681   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetSSHKeyPath
	I0914 18:09:21.768856   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetSSHUsername
	I0914 18:09:21.769030   63448 sshutil.go:53] new ssh client: &{IP:192.168.61.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/default-k8s-diff-port-243449/id_rsa Username:docker}
	I0914 18:09:21.776783   63448 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38967
	I0914 18:09:21.777226   63448 main.go:141] libmachine: () Calling .GetVersion
	I0914 18:09:21.777736   63448 main.go:141] libmachine: Using API Version  1
	I0914 18:09:21.777754   63448 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 18:09:21.778113   63448 main.go:141] libmachine: () Calling .GetMachineName
	I0914 18:09:21.778345   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetState
	I0914 18:09:21.780215   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .DriverName
	I0914 18:09:21.780421   63448 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0914 18:09:21.780436   63448 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0914 18:09:21.780458   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetSSHHostname
	I0914 18:09:21.783243   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | domain default-k8s-diff-port-243449 has defined MAC address 52:54:00:6e:0b:a7 in network mk-default-k8s-diff-port-243449
	I0914 18:09:21.783671   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:0b:a7", ip: ""} in network mk-default-k8s-diff-port-243449: {Iface:virbr4 ExpiryTime:2024-09-14 19:08:56 +0000 UTC Type:0 Mac:52:54:00:6e:0b:a7 Iaid: IPaddr:192.168.61.38 Prefix:24 Hostname:default-k8s-diff-port-243449 Clientid:01:52:54:00:6e:0b:a7}
	I0914 18:09:21.783698   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | domain default-k8s-diff-port-243449 has defined IP address 192.168.61.38 and MAC address 52:54:00:6e:0b:a7 in network mk-default-k8s-diff-port-243449
	I0914 18:09:21.783857   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetSSHPort
	I0914 18:09:21.784023   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetSSHKeyPath
	I0914 18:09:21.784138   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .GetSSHUsername
	I0914 18:09:21.784256   63448 sshutil.go:53] new ssh client: &{IP:192.168.61.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/default-k8s-diff-port-243449/id_rsa Username:docker}
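The `scp memory --> /etc/kubernetes/addons/...` lines that follow are minikube streaming in-memory manifest bytes to the node over the SSH clients just created (user docker, the profile's id_rsa). A minimal sketch of that push-bytes-over-SSH pattern using golang.org/x/crypto/ssh; the `sudo tee` trick and the helper name are illustrative assumptions, not ssh_runner's exact transfer protocol:

```go
package main

import (
	"bytes"
	"os"

	"golang.org/x/crypto/ssh"
)

// pushFile writes data to remotePath on the host, mimicking the
// "scp memory --> <path>" steps in the log (illustrative, not ssh_runner.go).
func pushFile(addr, keyPath, remotePath string, data []byte) error {
	key, err := os.ReadFile(keyPath)
	if err != nil {
		return err
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return err
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // test VMs have throwaway host keys
	}
	client, err := ssh.Dial("tcp", addr, cfg)
	if err != nil {
		return err
	}
	defer client.Close()

	session, err := client.NewSession()
	if err != nil {
		return err
	}
	defer session.Close()

	// Stream the bytes into a sudo'd tee so the file can land under /etc.
	session.Stdin = bytes.NewReader(data)
	return session.Run("sudo tee " + remotePath + " >/dev/null")
}

func main() {
	manifest := []byte("# storage-provisioner.yaml contents would go here\n")
	_ = pushFile("192.168.61.38:22",
		"/home/jenkins/minikube-integration/19643-8806/.minikube/machines/default-k8s-diff-port-243449/id_rsa",
		"/etc/kubernetes/addons/storage-provisioner.yaml", manifest)
}
```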
	I0914 18:09:21.919649   63448 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0914 18:09:21.945515   63448 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-243449" to be "Ready" ...
	I0914 18:09:22.020487   63448 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0914 18:09:22.020509   63448 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0914 18:09:22.041265   63448 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0914 18:09:22.072169   63448 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0914 18:09:22.072199   63448 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0914 18:09:22.112117   63448 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0914 18:09:22.112148   63448 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0914 18:09:22.146636   63448 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0914 18:09:22.162248   63448 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0914 18:09:22.520416   63448 main.go:141] libmachine: Making call to close driver server
	I0914 18:09:22.520448   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .Close
	I0914 18:09:22.520793   63448 main.go:141] libmachine: Successfully made call to close driver server
	I0914 18:09:22.520815   63448 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 18:09:22.520831   63448 main.go:141] libmachine: Making call to close driver server
	I0914 18:09:22.520833   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | Closing plugin on server side
	I0914 18:09:22.520840   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .Close
	I0914 18:09:22.521074   63448 main.go:141] libmachine: Successfully made call to close driver server
	I0914 18:09:22.521119   63448 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 18:09:22.527992   63448 main.go:141] libmachine: Making call to close driver server
	I0914 18:09:22.528030   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .Close
	I0914 18:09:22.528578   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | Closing plugin on server side
	I0914 18:09:22.528581   63448 main.go:141] libmachine: Successfully made call to close driver server
	I0914 18:09:22.528605   63448 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 18:09:23.246463   63448 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.084175525s)
	I0914 18:09:23.246520   63448 main.go:141] libmachine: Making call to close driver server
	I0914 18:09:23.246535   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .Close
	I0914 18:09:23.246564   63448 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.099889297s)
	I0914 18:09:23.246609   63448 main.go:141] libmachine: Making call to close driver server
	I0914 18:09:23.246621   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .Close
	I0914 18:09:23.246835   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | Closing plugin on server side
	I0914 18:09:23.246876   63448 main.go:141] libmachine: Successfully made call to close driver server
	I0914 18:09:23.246888   63448 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 18:09:23.246897   63448 main.go:141] libmachine: Making call to close driver server
	I0914 18:09:23.246910   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .Close
	I0914 18:09:23.246944   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | Closing plugin on server side
	I0914 18:09:23.246958   63448 main.go:141] libmachine: Successfully made call to close driver server
	I0914 18:09:23.247002   63448 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 18:09:23.247021   63448 main.go:141] libmachine: Making call to close driver server
	I0914 18:09:23.247046   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) Calling .Close
	I0914 18:09:23.247156   63448 main.go:141] libmachine: (default-k8s-diff-port-243449) DBG | Closing plugin on server side
	I0914 18:09:23.247192   63448 main.go:141] libmachine: Successfully made call to close driver server
	I0914 18:09:23.247227   63448 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 18:09:23.247241   63448 main.go:141] libmachine: Successfully made call to close driver server
	I0914 18:09:23.247260   63448 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 18:09:23.247241   63448 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-243449"
	I0914 18:09:23.250385   63448 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0914 18:09:20.583600   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:09:23.083187   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:09:23.251609   63448 addons.go:510] duration metric: took 1.555716144s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
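With the three addons applied, addons.go:475 reports that it is verifying metrics-server in this profile. One way to spot-check that by hand is to confirm the metrics-server deployment rolls out and the aggregated metrics APIService is registered; a hedged sketch shelling out to kubectl against the profile's kubeconfig (standard kubectl commands, but not minikube's own verification code):

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
)

// run executes a kubectl command against the test profile's kubeconfig and
// prints its combined output. Purely illustrative of a manual check.
func run(args ...string) {
	cmd := exec.Command("kubectl", args...)
	cmd.Env = append(os.Environ(),
		"KUBECONFIG=/home/jenkins/minikube-integration/19643-8806/kubeconfig")
	out, err := cmd.CombinedOutput()
	fmt.Printf("$ kubectl %v\n%s(err=%v)\n", args, out, err)
}

func main() {
	// Wait for the metrics-server rollout, then check the aggregated API object.
	run("--context", "default-k8s-diff-port-243449",
		"-n", "kube-system", "rollout", "status", "deployment/metrics-server", "--timeout=120s")
	run("--context", "default-k8s-diff-port-243449",
		"get", "apiservice", "v1beta1.metrics.k8s.io")
}
```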
	I0914 18:09:23.949715   63448 node_ready.go:53] node "default-k8s-diff-port-243449" has status "Ready":"False"
	I0914 18:09:20.374108   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:20.874167   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:21.374108   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:21.873539   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:22.374451   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:22.874481   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:23.374533   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:23.873433   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:24.374284   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:24.873466   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:23.327287   62207 main.go:141] libmachine: (no-preload-168587) DBG | domain no-preload-168587 has defined MAC address 52:54:00:4c:40:8a in network mk-no-preload-168587
	I0914 18:09:23.327775   62207 main.go:141] libmachine: (no-preload-168587) DBG | domain no-preload-168587 has current primary IP address 192.168.39.38 and MAC address 52:54:00:4c:40:8a in network mk-no-preload-168587
	I0914 18:09:23.327803   62207 main.go:141] libmachine: (no-preload-168587) Found IP for machine: 192.168.39.38
	I0914 18:09:23.327822   62207 main.go:141] libmachine: (no-preload-168587) Reserving static IP address...
	I0914 18:09:23.328197   62207 main.go:141] libmachine: (no-preload-168587) DBG | found host DHCP lease matching {name: "no-preload-168587", mac: "52:54:00:4c:40:8a", ip: "192.168.39.38"} in network mk-no-preload-168587: {Iface:virbr2 ExpiryTime:2024-09-14 18:59:50 +0000 UTC Type:0 Mac:52:54:00:4c:40:8a Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:no-preload-168587 Clientid:01:52:54:00:4c:40:8a}
	I0914 18:09:23.328221   62207 main.go:141] libmachine: (no-preload-168587) Reserved static IP address: 192.168.39.38
	I0914 18:09:23.328264   62207 main.go:141] libmachine: (no-preload-168587) DBG | skip adding static IP to network mk-no-preload-168587 - found existing host DHCP lease matching {name: "no-preload-168587", mac: "52:54:00:4c:40:8a", ip: "192.168.39.38"}
	I0914 18:09:23.328283   62207 main.go:141] libmachine: (no-preload-168587) DBG | Getting to WaitForSSH function...
	I0914 18:09:23.328295   62207 main.go:141] libmachine: (no-preload-168587) Waiting for SSH to be available...
	I0914 18:09:23.330598   62207 main.go:141] libmachine: (no-preload-168587) DBG | domain no-preload-168587 has defined MAC address 52:54:00:4c:40:8a in network mk-no-preload-168587
	I0914 18:09:23.330954   62207 main.go:141] libmachine: (no-preload-168587) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:40:8a", ip: ""} in network mk-no-preload-168587: {Iface:virbr2 ExpiryTime:2024-09-14 18:59:50 +0000 UTC Type:0 Mac:52:54:00:4c:40:8a Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:no-preload-168587 Clientid:01:52:54:00:4c:40:8a}
	I0914 18:09:23.330985   62207 main.go:141] libmachine: (no-preload-168587) DBG | domain no-preload-168587 has defined IP address 192.168.39.38 and MAC address 52:54:00:4c:40:8a in network mk-no-preload-168587
	I0914 18:09:23.331105   62207 main.go:141] libmachine: (no-preload-168587) DBG | Using SSH client type: external
	I0914 18:09:23.331132   62207 main.go:141] libmachine: (no-preload-168587) DBG | Using SSH private key: /home/jenkins/minikube-integration/19643-8806/.minikube/machines/no-preload-168587/id_rsa (-rw-------)
	I0914 18:09:23.331184   62207 main.go:141] libmachine: (no-preload-168587) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.38 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19643-8806/.minikube/machines/no-preload-168587/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0914 18:09:23.331196   62207 main.go:141] libmachine: (no-preload-168587) DBG | About to run SSH command:
	I0914 18:09:23.331208   62207 main.go:141] libmachine: (no-preload-168587) DBG | exit 0
	I0914 18:09:23.454525   62207 main.go:141] libmachine: (no-preload-168587) DBG | SSH cmd err, output: <nil>: 
	I0914 18:09:23.454883   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetConfigRaw
	I0914 18:09:23.455505   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetIP
	I0914 18:09:23.457696   62207 main.go:141] libmachine: (no-preload-168587) DBG | domain no-preload-168587 has defined MAC address 52:54:00:4c:40:8a in network mk-no-preload-168587
	I0914 18:09:23.458030   62207 main.go:141] libmachine: (no-preload-168587) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:40:8a", ip: ""} in network mk-no-preload-168587: {Iface:virbr2 ExpiryTime:2024-09-14 18:59:50 +0000 UTC Type:0 Mac:52:54:00:4c:40:8a Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:no-preload-168587 Clientid:01:52:54:00:4c:40:8a}
	I0914 18:09:23.458069   62207 main.go:141] libmachine: (no-preload-168587) DBG | domain no-preload-168587 has defined IP address 192.168.39.38 and MAC address 52:54:00:4c:40:8a in network mk-no-preload-168587
	I0914 18:09:23.458372   62207 profile.go:143] Saving config to /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/no-preload-168587/config.json ...
	I0914 18:09:23.458611   62207 machine.go:93] provisionDockerMachine start ...
	I0914 18:09:23.458633   62207 main.go:141] libmachine: (no-preload-168587) Calling .DriverName
	I0914 18:09:23.458828   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetSSHHostname
	I0914 18:09:23.461199   62207 main.go:141] libmachine: (no-preload-168587) DBG | domain no-preload-168587 has defined MAC address 52:54:00:4c:40:8a in network mk-no-preload-168587
	I0914 18:09:23.461540   62207 main.go:141] libmachine: (no-preload-168587) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:40:8a", ip: ""} in network mk-no-preload-168587: {Iface:virbr2 ExpiryTime:2024-09-14 18:59:50 +0000 UTC Type:0 Mac:52:54:00:4c:40:8a Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:no-preload-168587 Clientid:01:52:54:00:4c:40:8a}
	I0914 18:09:23.461576   62207 main.go:141] libmachine: (no-preload-168587) DBG | domain no-preload-168587 has defined IP address 192.168.39.38 and MAC address 52:54:00:4c:40:8a in network mk-no-preload-168587
	I0914 18:09:23.461705   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetSSHPort
	I0914 18:09:23.461895   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetSSHKeyPath
	I0914 18:09:23.462006   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetSSHKeyPath
	I0914 18:09:23.462153   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetSSHUsername
	I0914 18:09:23.462314   62207 main.go:141] libmachine: Using SSH client type: native
	I0914 18:09:23.462477   62207 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.38 22 <nil> <nil>}
	I0914 18:09:23.462488   62207 main.go:141] libmachine: About to run SSH command:
	hostname
	I0914 18:09:23.566278   62207 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0914 18:09:23.566310   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetMachineName
	I0914 18:09:23.566559   62207 buildroot.go:166] provisioning hostname "no-preload-168587"
	I0914 18:09:23.566581   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetMachineName
	I0914 18:09:23.566742   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetSSHHostname
	I0914 18:09:23.569254   62207 main.go:141] libmachine: (no-preload-168587) DBG | domain no-preload-168587 has defined MAC address 52:54:00:4c:40:8a in network mk-no-preload-168587
	I0914 18:09:23.569590   62207 main.go:141] libmachine: (no-preload-168587) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:40:8a", ip: ""} in network mk-no-preload-168587: {Iface:virbr2 ExpiryTime:2024-09-14 18:59:50 +0000 UTC Type:0 Mac:52:54:00:4c:40:8a Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:no-preload-168587 Clientid:01:52:54:00:4c:40:8a}
	I0914 18:09:23.569617   62207 main.go:141] libmachine: (no-preload-168587) DBG | domain no-preload-168587 has defined IP address 192.168.39.38 and MAC address 52:54:00:4c:40:8a in network mk-no-preload-168587
	I0914 18:09:23.569713   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetSSHPort
	I0914 18:09:23.569888   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetSSHKeyPath
	I0914 18:09:23.570009   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetSSHKeyPath
	I0914 18:09:23.570174   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetSSHUsername
	I0914 18:09:23.570344   62207 main.go:141] libmachine: Using SSH client type: native
	I0914 18:09:23.570556   62207 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.38 22 <nil> <nil>}
	I0914 18:09:23.570575   62207 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-168587 && echo "no-preload-168587" | sudo tee /etc/hostname
	I0914 18:09:23.687805   62207 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-168587
	
	I0914 18:09:23.687848   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetSSHHostname
	I0914 18:09:23.690447   62207 main.go:141] libmachine: (no-preload-168587) DBG | domain no-preload-168587 has defined MAC address 52:54:00:4c:40:8a in network mk-no-preload-168587
	I0914 18:09:23.690787   62207 main.go:141] libmachine: (no-preload-168587) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:40:8a", ip: ""} in network mk-no-preload-168587: {Iface:virbr2 ExpiryTime:2024-09-14 18:59:50 +0000 UTC Type:0 Mac:52:54:00:4c:40:8a Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:no-preload-168587 Clientid:01:52:54:00:4c:40:8a}
	I0914 18:09:23.690824   62207 main.go:141] libmachine: (no-preload-168587) DBG | domain no-preload-168587 has defined IP address 192.168.39.38 and MAC address 52:54:00:4c:40:8a in network mk-no-preload-168587
	I0914 18:09:23.690955   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetSSHPort
	I0914 18:09:23.691135   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetSSHKeyPath
	I0914 18:09:23.691279   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetSSHKeyPath
	I0914 18:09:23.691427   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetSSHUsername
	I0914 18:09:23.691590   62207 main.go:141] libmachine: Using SSH client type: native
	I0914 18:09:23.691768   62207 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.38 22 <nil> <nil>}
	I0914 18:09:23.691790   62207 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-168587' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-168587/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-168587' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0914 18:09:23.805502   62207 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0914 18:09:23.805527   62207 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19643-8806/.minikube CaCertPath:/home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19643-8806/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19643-8806/.minikube}
	I0914 18:09:23.805545   62207 buildroot.go:174] setting up certificates
	I0914 18:09:23.805553   62207 provision.go:84] configureAuth start
	I0914 18:09:23.805561   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetMachineName
	I0914 18:09:23.805798   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetIP
	I0914 18:09:23.808306   62207 main.go:141] libmachine: (no-preload-168587) DBG | domain no-preload-168587 has defined MAC address 52:54:00:4c:40:8a in network mk-no-preload-168587
	I0914 18:09:23.808643   62207 main.go:141] libmachine: (no-preload-168587) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:40:8a", ip: ""} in network mk-no-preload-168587: {Iface:virbr2 ExpiryTime:2024-09-14 18:59:50 +0000 UTC Type:0 Mac:52:54:00:4c:40:8a Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:no-preload-168587 Clientid:01:52:54:00:4c:40:8a}
	I0914 18:09:23.808668   62207 main.go:141] libmachine: (no-preload-168587) DBG | domain no-preload-168587 has defined IP address 192.168.39.38 and MAC address 52:54:00:4c:40:8a in network mk-no-preload-168587
	I0914 18:09:23.808819   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetSSHHostname
	I0914 18:09:23.811055   62207 main.go:141] libmachine: (no-preload-168587) DBG | domain no-preload-168587 has defined MAC address 52:54:00:4c:40:8a in network mk-no-preload-168587
	I0914 18:09:23.811374   62207 main.go:141] libmachine: (no-preload-168587) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:40:8a", ip: ""} in network mk-no-preload-168587: {Iface:virbr2 ExpiryTime:2024-09-14 18:59:50 +0000 UTC Type:0 Mac:52:54:00:4c:40:8a Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:no-preload-168587 Clientid:01:52:54:00:4c:40:8a}
	I0914 18:09:23.811401   62207 main.go:141] libmachine: (no-preload-168587) DBG | domain no-preload-168587 has defined IP address 192.168.39.38 and MAC address 52:54:00:4c:40:8a in network mk-no-preload-168587
	I0914 18:09:23.811586   62207 provision.go:143] copyHostCerts
	I0914 18:09:23.811647   62207 exec_runner.go:144] found /home/jenkins/minikube-integration/19643-8806/.minikube/ca.pem, removing ...
	I0914 18:09:23.811657   62207 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19643-8806/.minikube/ca.pem
	I0914 18:09:23.811712   62207 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19643-8806/.minikube/ca.pem (1082 bytes)
	I0914 18:09:23.811800   62207 exec_runner.go:144] found /home/jenkins/minikube-integration/19643-8806/.minikube/cert.pem, removing ...
	I0914 18:09:23.811808   62207 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19643-8806/.minikube/cert.pem
	I0914 18:09:23.811829   62207 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19643-8806/.minikube/cert.pem (1123 bytes)
	I0914 18:09:23.811880   62207 exec_runner.go:144] found /home/jenkins/minikube-integration/19643-8806/.minikube/key.pem, removing ...
	I0914 18:09:23.811887   62207 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19643-8806/.minikube/key.pem
	I0914 18:09:23.811908   62207 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19643-8806/.minikube/key.pem (1675 bytes)
	I0914 18:09:23.811956   62207 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19643-8806/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca-key.pem org=jenkins.no-preload-168587 san=[127.0.0.1 192.168.39.38 localhost minikube no-preload-168587]
	I0914 18:09:24.051868   62207 provision.go:177] copyRemoteCerts
	I0914 18:09:24.051936   62207 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0914 18:09:24.051958   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetSSHHostname
	I0914 18:09:24.054842   62207 main.go:141] libmachine: (no-preload-168587) DBG | domain no-preload-168587 has defined MAC address 52:54:00:4c:40:8a in network mk-no-preload-168587
	I0914 18:09:24.055107   62207 main.go:141] libmachine: (no-preload-168587) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:40:8a", ip: ""} in network mk-no-preload-168587: {Iface:virbr2 ExpiryTime:2024-09-14 18:59:50 +0000 UTC Type:0 Mac:52:54:00:4c:40:8a Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:no-preload-168587 Clientid:01:52:54:00:4c:40:8a}
	I0914 18:09:24.055138   62207 main.go:141] libmachine: (no-preload-168587) DBG | domain no-preload-168587 has defined IP address 192.168.39.38 and MAC address 52:54:00:4c:40:8a in network mk-no-preload-168587
	I0914 18:09:24.055321   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetSSHPort
	I0914 18:09:24.055514   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetSSHKeyPath
	I0914 18:09:24.055664   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetSSHUsername
	I0914 18:09:24.055804   62207 sshutil.go:53] new ssh client: &{IP:192.168.39.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/no-preload-168587/id_rsa Username:docker}
	I0914 18:09:24.140378   62207 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0914 18:09:24.168422   62207 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0914 18:09:24.194540   62207 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0914 18:09:24.217910   62207 provision.go:87] duration metric: took 412.343545ms to configureAuth
	I0914 18:09:24.217942   62207 buildroot.go:189] setting minikube options for container-runtime
	I0914 18:09:24.218180   62207 config.go:182] Loaded profile config "no-preload-168587": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0914 18:09:24.218255   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetSSHHostname
	I0914 18:09:24.220788   62207 main.go:141] libmachine: (no-preload-168587) DBG | domain no-preload-168587 has defined MAC address 52:54:00:4c:40:8a in network mk-no-preload-168587
	I0914 18:09:24.221216   62207 main.go:141] libmachine: (no-preload-168587) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:40:8a", ip: ""} in network mk-no-preload-168587: {Iface:virbr2 ExpiryTime:2024-09-14 18:59:50 +0000 UTC Type:0 Mac:52:54:00:4c:40:8a Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:no-preload-168587 Clientid:01:52:54:00:4c:40:8a}
	I0914 18:09:24.221259   62207 main.go:141] libmachine: (no-preload-168587) DBG | domain no-preload-168587 has defined IP address 192.168.39.38 and MAC address 52:54:00:4c:40:8a in network mk-no-preload-168587
	I0914 18:09:24.221408   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetSSHPort
	I0914 18:09:24.221678   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetSSHKeyPath
	I0914 18:09:24.221842   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetSSHKeyPath
	I0914 18:09:24.222033   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetSSHUsername
	I0914 18:09:24.222218   62207 main.go:141] libmachine: Using SSH client type: native
	I0914 18:09:24.222399   62207 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.38 22 <nil> <nil>}
	I0914 18:09:24.222417   62207 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0914 18:09:24.433203   62207 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0914 18:09:24.433230   62207 machine.go:96] duration metric: took 974.605605ms to provisionDockerMachine
	I0914 18:09:24.433241   62207 start.go:293] postStartSetup for "no-preload-168587" (driver="kvm2")
	I0914 18:09:24.433253   62207 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0914 18:09:24.433282   62207 main.go:141] libmachine: (no-preload-168587) Calling .DriverName
	I0914 18:09:24.433595   62207 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0914 18:09:24.433625   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetSSHHostname
	I0914 18:09:24.436247   62207 main.go:141] libmachine: (no-preload-168587) DBG | domain no-preload-168587 has defined MAC address 52:54:00:4c:40:8a in network mk-no-preload-168587
	I0914 18:09:24.436710   62207 main.go:141] libmachine: (no-preload-168587) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:40:8a", ip: ""} in network mk-no-preload-168587: {Iface:virbr2 ExpiryTime:2024-09-14 18:59:50 +0000 UTC Type:0 Mac:52:54:00:4c:40:8a Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:no-preload-168587 Clientid:01:52:54:00:4c:40:8a}
	I0914 18:09:24.436746   62207 main.go:141] libmachine: (no-preload-168587) DBG | domain no-preload-168587 has defined IP address 192.168.39.38 and MAC address 52:54:00:4c:40:8a in network mk-no-preload-168587
	I0914 18:09:24.436855   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetSSHPort
	I0914 18:09:24.437015   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetSSHKeyPath
	I0914 18:09:24.437189   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetSSHUsername
	I0914 18:09:24.437305   62207 sshutil.go:53] new ssh client: &{IP:192.168.39.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/no-preload-168587/id_rsa Username:docker}
	I0914 18:09:24.516493   62207 ssh_runner.go:195] Run: cat /etc/os-release
	I0914 18:09:24.520486   62207 info.go:137] Remote host: Buildroot 2023.02.9
	I0914 18:09:24.520518   62207 filesync.go:126] Scanning /home/jenkins/minikube-integration/19643-8806/.minikube/addons for local assets ...
	I0914 18:09:24.520612   62207 filesync.go:126] Scanning /home/jenkins/minikube-integration/19643-8806/.minikube/files for local assets ...
	I0914 18:09:24.520687   62207 filesync.go:149] local asset: /home/jenkins/minikube-integration/19643-8806/.minikube/files/etc/ssl/certs/160162.pem -> 160162.pem in /etc/ssl/certs
	I0914 18:09:24.520775   62207 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0914 18:09:24.530274   62207 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/files/etc/ssl/certs/160162.pem --> /etc/ssl/certs/160162.pem (1708 bytes)
	I0914 18:09:24.553381   62207 start.go:296] duration metric: took 120.123302ms for postStartSetup
	I0914 18:09:24.553422   62207 fix.go:56] duration metric: took 20.086564499s for fixHost
	I0914 18:09:24.553445   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetSSHHostname
	I0914 18:09:24.555832   62207 main.go:141] libmachine: (no-preload-168587) DBG | domain no-preload-168587 has defined MAC address 52:54:00:4c:40:8a in network mk-no-preload-168587
	I0914 18:09:24.556100   62207 main.go:141] libmachine: (no-preload-168587) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:40:8a", ip: ""} in network mk-no-preload-168587: {Iface:virbr2 ExpiryTime:2024-09-14 18:59:50 +0000 UTC Type:0 Mac:52:54:00:4c:40:8a Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:no-preload-168587 Clientid:01:52:54:00:4c:40:8a}
	I0914 18:09:24.556133   62207 main.go:141] libmachine: (no-preload-168587) DBG | domain no-preload-168587 has defined IP address 192.168.39.38 and MAC address 52:54:00:4c:40:8a in network mk-no-preload-168587
	I0914 18:09:24.556376   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetSSHPort
	I0914 18:09:24.556605   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetSSHKeyPath
	I0914 18:09:24.556772   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetSSHKeyPath
	I0914 18:09:24.556922   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetSSHUsername
	I0914 18:09:24.557062   62207 main.go:141] libmachine: Using SSH client type: native
	I0914 18:09:24.557275   62207 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.38 22 <nil> <nil>}
	I0914 18:09:24.557285   62207 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0914 18:09:24.659101   62207 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726337364.632455119
	
	I0914 18:09:24.659128   62207 fix.go:216] guest clock: 1726337364.632455119
	I0914 18:09:24.659139   62207 fix.go:229] Guest: 2024-09-14 18:09:24.632455119 +0000 UTC Remote: 2024-09-14 18:09:24.553426386 +0000 UTC m=+357.567907862 (delta=79.028733ms)
	I0914 18:09:24.659165   62207 fix.go:200] guest clock delta is within tolerance: 79.028733ms
	I0914 18:09:24.659171   62207 start.go:83] releasing machines lock for "no-preload-168587", held for 20.192350446s
	I0914 18:09:24.659209   62207 main.go:141] libmachine: (no-preload-168587) Calling .DriverName
	I0914 18:09:24.659445   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetIP
	I0914 18:09:24.662626   62207 main.go:141] libmachine: (no-preload-168587) DBG | domain no-preload-168587 has defined MAC address 52:54:00:4c:40:8a in network mk-no-preload-168587
	I0914 18:09:24.663051   62207 main.go:141] libmachine: (no-preload-168587) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:40:8a", ip: ""} in network mk-no-preload-168587: {Iface:virbr2 ExpiryTime:2024-09-14 18:59:50 +0000 UTC Type:0 Mac:52:54:00:4c:40:8a Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:no-preload-168587 Clientid:01:52:54:00:4c:40:8a}
	I0914 18:09:24.663082   62207 main.go:141] libmachine: (no-preload-168587) DBG | domain no-preload-168587 has defined IP address 192.168.39.38 and MAC address 52:54:00:4c:40:8a in network mk-no-preload-168587
	I0914 18:09:24.663225   62207 main.go:141] libmachine: (no-preload-168587) Calling .DriverName
	I0914 18:09:24.663802   62207 main.go:141] libmachine: (no-preload-168587) Calling .DriverName
	I0914 18:09:24.663972   62207 main.go:141] libmachine: (no-preload-168587) Calling .DriverName
	I0914 18:09:24.664063   62207 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0914 18:09:24.664114   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetSSHHostname
	I0914 18:09:24.664195   62207 ssh_runner.go:195] Run: cat /version.json
	I0914 18:09:24.664221   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetSSHHostname
	I0914 18:09:24.666971   62207 main.go:141] libmachine: (no-preload-168587) DBG | domain no-preload-168587 has defined MAC address 52:54:00:4c:40:8a in network mk-no-preload-168587
	I0914 18:09:24.667255   62207 main.go:141] libmachine: (no-preload-168587) DBG | domain no-preload-168587 has defined MAC address 52:54:00:4c:40:8a in network mk-no-preload-168587
	I0914 18:09:24.667398   62207 main.go:141] libmachine: (no-preload-168587) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:40:8a", ip: ""} in network mk-no-preload-168587: {Iface:virbr2 ExpiryTime:2024-09-14 18:59:50 +0000 UTC Type:0 Mac:52:54:00:4c:40:8a Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:no-preload-168587 Clientid:01:52:54:00:4c:40:8a}
	I0914 18:09:24.667433   62207 main.go:141] libmachine: (no-preload-168587) DBG | domain no-preload-168587 has defined IP address 192.168.39.38 and MAC address 52:54:00:4c:40:8a in network mk-no-preload-168587
	I0914 18:09:24.667555   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetSSHPort
	I0914 18:09:24.667753   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetSSHKeyPath
	I0914 18:09:24.667787   62207 main.go:141] libmachine: (no-preload-168587) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:40:8a", ip: ""} in network mk-no-preload-168587: {Iface:virbr2 ExpiryTime:2024-09-14 18:59:50 +0000 UTC Type:0 Mac:52:54:00:4c:40:8a Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:no-preload-168587 Clientid:01:52:54:00:4c:40:8a}
	I0914 18:09:24.667816   62207 main.go:141] libmachine: (no-preload-168587) DBG | domain no-preload-168587 has defined IP address 192.168.39.38 and MAC address 52:54:00:4c:40:8a in network mk-no-preload-168587
	I0914 18:09:24.667913   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetSSHUsername
	I0914 18:09:24.667988   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetSSHPort
	I0914 18:09:24.668058   62207 sshutil.go:53] new ssh client: &{IP:192.168.39.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/no-preload-168587/id_rsa Username:docker}
	I0914 18:09:24.668109   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetSSHKeyPath
	I0914 18:09:24.668236   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetSSHUsername
	I0914 18:09:24.668356   62207 sshutil.go:53] new ssh client: &{IP:192.168.39.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/no-preload-168587/id_rsa Username:docker}
	I0914 18:09:24.743805   62207 ssh_runner.go:195] Run: systemctl --version
	I0914 18:09:24.776583   62207 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0914 18:09:24.924635   62207 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0914 18:09:24.930891   62207 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0914 18:09:24.930979   62207 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0914 18:09:24.952228   62207 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0914 18:09:24.952258   62207 start.go:495] detecting cgroup driver to use...
	I0914 18:09:24.952344   62207 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0914 18:09:24.967770   62207 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0914 18:09:24.983218   62207 docker.go:217] disabling cri-docker service (if available) ...
	I0914 18:09:24.983280   62207 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0914 18:09:24.997311   62207 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0914 18:09:25.011736   62207 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0914 18:09:25.135920   62207 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0914 18:09:25.323727   62207 docker.go:233] disabling docker service ...
	I0914 18:09:25.323793   62207 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0914 18:09:25.341243   62207 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0914 18:09:25.358703   62207 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0914 18:09:25.495826   62207 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0914 18:09:25.621684   62207 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0914 18:09:25.637386   62207 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0914 18:09:25.655826   62207 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0914 18:09:25.655947   62207 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 18:09:25.669204   62207 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0914 18:09:25.669266   62207 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 18:09:25.680265   62207 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 18:09:25.690860   62207 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 18:09:25.702002   62207 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0914 18:09:25.713256   62207 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 18:09:25.724125   62207 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 18:09:25.742195   62207 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 18:09:25.752680   62207 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0914 18:09:25.762842   62207 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0914 18:09:25.762920   62207 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0914 18:09:25.775680   62207 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0914 18:09:25.785190   62207 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 18:09:25.907175   62207 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0914 18:09:25.995654   62207 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0914 18:09:25.995731   62207 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0914 18:09:26.000829   62207 start.go:563] Will wait 60s for crictl version
	I0914 18:09:26.000896   62207 ssh_runner.go:195] Run: which crictl
	I0914 18:09:26.004522   62207 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0914 18:09:26.041674   62207 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0914 18:09:26.041745   62207 ssh_runner.go:195] Run: crio --version
	I0914 18:09:26.069091   62207 ssh_runner.go:195] Run: crio --version
	I0914 18:09:26.107475   62207 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0914 18:09:26.108650   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetIP
	I0914 18:09:26.111782   62207 main.go:141] libmachine: (no-preload-168587) DBG | domain no-preload-168587 has defined MAC address 52:54:00:4c:40:8a in network mk-no-preload-168587
	I0914 18:09:26.112110   62207 main.go:141] libmachine: (no-preload-168587) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:40:8a", ip: ""} in network mk-no-preload-168587: {Iface:virbr2 ExpiryTime:2024-09-14 18:59:50 +0000 UTC Type:0 Mac:52:54:00:4c:40:8a Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:no-preload-168587 Clientid:01:52:54:00:4c:40:8a}
	I0914 18:09:26.112139   62207 main.go:141] libmachine: (no-preload-168587) DBG | domain no-preload-168587 has defined IP address 192.168.39.38 and MAC address 52:54:00:4c:40:8a in network mk-no-preload-168587
	I0914 18:09:26.112279   62207 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0914 18:09:26.116339   62207 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0914 18:09:26.128616   62207 kubeadm.go:883] updating cluster {Name:no-preload-168587 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19643/minikube-v1.34.0-1726281733-19643-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726281268-19643@sha256:cce8be0c1ac4e3d852132008ef1cc1dcf5b79f708d025db83f146ae65db32e8e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-168587 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.38 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0914 18:09:26.128755   62207 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0914 18:09:26.128796   62207 ssh_runner.go:195] Run: sudo crictl images --output json
	I0914 18:09:26.165175   62207 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0914 18:09:26.165197   62207 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.1 registry.k8s.io/kube-controller-manager:v1.31.1 registry.k8s.io/kube-scheduler:v1.31.1 registry.k8s.io/kube-proxy:v1.31.1 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.15-0 registry.k8s.io/coredns/coredns:v1.11.3 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0914 18:09:26.165282   62207 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.1
	I0914 18:09:26.165301   62207 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I0914 18:09:26.165302   62207 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I0914 18:09:26.165276   62207 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 18:09:26.165346   62207 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.1
	I0914 18:09:26.165309   62207 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.3
	I0914 18:09:26.165443   62207 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.1
	I0914 18:09:26.165451   62207 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.1
	I0914 18:09:26.166853   62207 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0914 18:09:26.166858   62207 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.1
	I0914 18:09:26.166864   62207 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.1
	I0914 18:09:26.166873   62207 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I0914 18:09:26.166852   62207 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 18:09:26.166852   62207 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.3: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.3
	I0914 18:09:26.166911   62207 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.1
	I0914 18:09:26.166928   62207 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.1
	I0914 18:09:26.366393   62207 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.1
	I0914 18:09:26.398127   62207 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I0914 18:09:26.401173   62207 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.1
	I0914 18:09:26.405861   62207 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.1" does not exist at hash "175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1" in container runtime
	I0914 18:09:26.405910   62207 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.1
	I0914 18:09:26.405982   62207 ssh_runner.go:195] Run: which crictl
	I0914 18:09:26.410513   62207 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.15-0
	I0914 18:09:26.411414   62207 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.3
	I0914 18:09:26.416692   62207 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.1
	I0914 18:09:26.417710   62207 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.1
	I0914 18:09:26.643066   62207 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.1" does not exist at hash "9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b" in container runtime
	I0914 18:09:26.643114   62207 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.1
	I0914 18:09:26.643177   62207 ssh_runner.go:195] Run: which crictl
	I0914 18:09:26.643195   62207 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I0914 18:09:26.643242   62207 cache_images.go:116] "registry.k8s.io/etcd:3.5.15-0" needs transfer: "registry.k8s.io/etcd:3.5.15-0" does not exist at hash "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" in container runtime
	I0914 18:09:26.643278   62207 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.3" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.3" does not exist at hash "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6" in container runtime
	I0914 18:09:26.643293   62207 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.1" does not exist at hash "6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee" in container runtime
	I0914 18:09:26.643282   62207 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.15-0
	I0914 18:09:26.643307   62207 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.3
	I0914 18:09:26.643323   62207 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.1
	I0914 18:09:26.643328   62207 ssh_runner.go:195] Run: which crictl
	I0914 18:09:26.643351   62207 ssh_runner.go:195] Run: which crictl
	I0914 18:09:26.643366   62207 ssh_runner.go:195] Run: which crictl
	I0914 18:09:26.643386   62207 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.1" needs transfer: "registry.k8s.io/kube-proxy:v1.31.1" does not exist at hash "60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561" in container runtime
	I0914 18:09:26.643412   62207 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.1
	I0914 18:09:26.643436   62207 ssh_runner.go:195] Run: which crictl
	I0914 18:09:26.654984   62207 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I0914 18:09:26.655016   62207 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I0914 18:09:26.655035   62207 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0914 18:09:26.655016   62207 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I0914 18:09:26.733881   62207 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I0914 18:09:26.733967   62207 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I0914 18:09:26.769624   62207 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I0914 18:09:26.778708   62207 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0914 18:09:26.778836   62207 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I0914 18:09:26.778855   62207 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I0914 18:09:26.821344   62207 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I0914 18:09:26.821358   62207 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I0914 18:09:26.899012   62207 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I0914 18:09:26.906693   62207 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0914 18:09:26.909875   62207 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I0914 18:09:26.916458   62207 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I0914 18:09:26.944355   62207 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I0914 18:09:26.949250   62207 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19643-8806/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.1
	I0914 18:09:26.949400   62207 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I0914 18:09:25.582447   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:09:28.084142   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:09:25.949851   63448 node_ready.go:53] node "default-k8s-diff-port-243449" has status "Ready":"False"
	I0914 18:09:26.950390   63448 node_ready.go:49] node "default-k8s-diff-port-243449" has status "Ready":"True"
	I0914 18:09:26.950418   63448 node_ready.go:38] duration metric: took 5.004868966s for node "default-k8s-diff-port-243449" to be "Ready" ...
	I0914 18:09:26.950430   63448 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 18:09:26.956875   63448 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-8v8s7" in "kube-system" namespace to be "Ready" ...
	I0914 18:09:26.963909   63448 pod_ready.go:93] pod "coredns-7c65d6cfc9-8v8s7" in "kube-system" namespace has status "Ready":"True"
	I0914 18:09:26.963935   63448 pod_ready.go:82] duration metric: took 7.027533ms for pod "coredns-7c65d6cfc9-8v8s7" in "kube-system" namespace to be "Ready" ...
	I0914 18:09:26.963945   63448 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-243449" in "kube-system" namespace to be "Ready" ...
	I0914 18:09:28.971297   63448 pod_ready.go:93] pod "etcd-default-k8s-diff-port-243449" in "kube-system" namespace has status "Ready":"True"
	I0914 18:09:28.971327   63448 pod_ready.go:82] duration metric: took 2.007374825s for pod "etcd-default-k8s-diff-port-243449" in "kube-system" namespace to be "Ready" ...
	I0914 18:09:28.971340   63448 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-243449" in "kube-system" namespace to be "Ready" ...
	I0914 18:09:28.977510   63448 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-243449" in "kube-system" namespace has status "Ready":"True"
	I0914 18:09:28.977535   63448 pod_ready.go:82] duration metric: took 6.18573ms for pod "kube-apiserver-default-k8s-diff-port-243449" in "kube-system" namespace to be "Ready" ...
	I0914 18:09:28.977557   63448 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-243449" in "kube-system" namespace to be "Ready" ...
	I0914 18:09:25.374144   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:25.874109   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:26.374422   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:26.873444   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:27.373615   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:27.873395   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:28.373886   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:28.873510   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:29.374027   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:29.873502   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:27.035840   62207 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19643-8806/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.1
	I0914 18:09:27.035956   62207 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.1
	I0914 18:09:27.040828   62207 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19643-8806/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3
	I0914 18:09:27.040939   62207 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19643-8806/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I0914 18:09:27.040941   62207 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.3
	I0914 18:09:27.041026   62207 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0
	I0914 18:09:27.048278   62207 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19643-8806/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.1
	I0914 18:09:27.048345   62207 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19643-8806/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.1
	I0914 18:09:27.048388   62207 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.1
	I0914 18:09:27.048390   62207 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.1 (exists)
	I0914 18:09:27.048446   62207 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I0914 18:09:27.048423   62207 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.1 (exists)
	I0914 18:09:27.048482   62207 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I0914 18:09:27.048431   62207 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.1
	I0914 18:09:27.052221   62207 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.15-0 (exists)
	I0914 18:09:27.052401   62207 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.1 (exists)
	I0914 18:09:27.052585   62207 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.3 (exists)
	I0914 18:09:27.330779   62207 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 18:09:29.721998   62207 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.1: (2.673483443s)
	I0914 18:09:29.722035   62207 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19643-8806/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.1 from cache
	I0914 18:09:29.722064   62207 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.1
	I0914 18:09:29.722076   62207 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.1: (2.673496811s)
	I0914 18:09:29.722112   62207 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.1 (exists)
	I0914 18:09:29.722112   62207 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.1
	I0914 18:09:29.722194   62207 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (2.391387893s)
	I0914 18:09:29.722236   62207 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0914 18:09:29.722257   62207 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 18:09:29.722297   62207 ssh_runner.go:195] Run: which crictl
	I0914 18:09:31.485714   62207 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.1: (1.76356866s)
	I0914 18:09:31.485744   62207 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19643-8806/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.1 from cache
	I0914 18:09:31.485764   62207 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.15-0
	I0914 18:09:31.485817   62207 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0
	I0914 18:09:31.485820   62207 ssh_runner.go:235] Completed: which crictl: (1.763506603s)
	I0914 18:09:31.485862   62207 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 18:09:30.583013   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:09:33.083597   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:09:30.985230   63448 pod_ready.go:103] pod "kube-controller-manager-default-k8s-diff-port-243449" in "kube-system" namespace has status "Ready":"False"
	I0914 18:09:31.984182   63448 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-243449" in "kube-system" namespace has status "Ready":"True"
	I0914 18:09:31.984203   63448 pod_ready.go:82] duration metric: took 3.006637599s for pod "kube-controller-manager-default-k8s-diff-port-243449" in "kube-system" namespace to be "Ready" ...
	I0914 18:09:31.984212   63448 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-gbkqm" in "kube-system" namespace to be "Ready" ...
	I0914 18:09:31.989786   63448 pod_ready.go:93] pod "kube-proxy-gbkqm" in "kube-system" namespace has status "Ready":"True"
	I0914 18:09:31.989812   63448 pod_ready.go:82] duration metric: took 5.592466ms for pod "kube-proxy-gbkqm" in "kube-system" namespace to be "Ready" ...
	I0914 18:09:31.989823   63448 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-243449" in "kube-system" namespace to be "Ready" ...
	I0914 18:09:31.994224   63448 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-243449" in "kube-system" namespace has status "Ready":"True"
	I0914 18:09:31.994246   63448 pod_ready.go:82] duration metric: took 4.414059ms for pod "kube-scheduler-default-k8s-diff-port-243449" in "kube-system" namespace to be "Ready" ...
	I0914 18:09:31.994258   63448 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace to be "Ready" ...
	I0914 18:09:34.001035   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:09:30.373878   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:30.874351   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:31.373651   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:31.873914   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:32.373522   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:32.874439   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:33.373991   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:33.874056   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:34.373566   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:34.874140   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:34.781678   62207 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (3.295763296s)
	I0914 18:09:34.781783   62207 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 18:09:34.781814   62207 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0: (3.295968995s)
	I0914 18:09:34.781840   62207 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19643-8806/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 from cache
	I0914 18:09:34.781868   62207 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.1
	I0914 18:09:34.781900   62207 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.1
	I0914 18:09:36.744459   62207 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.962646981s)
	I0914 18:09:36.744514   62207 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.1: (1.962587733s)
	I0914 18:09:36.744551   62207 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19643-8806/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.1 from cache
	I0914 18:09:36.744576   62207 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 18:09:36.744590   62207 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.3
	I0914 18:09:36.744658   62207 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3
	I0914 18:09:35.582596   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:09:38.083260   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:09:36.002284   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:09:38.002962   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:09:35.374151   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:35.873725   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:36.373500   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:36.873617   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:37.373826   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:37.874068   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:38.373459   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:38.873666   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:39.373936   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:39.873551   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:38.848091   62207 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3: (2.103407014s)
	I0914 18:09:38.848126   62207 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19643-8806/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 from cache
	I0914 18:09:38.848152   62207 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.1
	I0914 18:09:38.848217   62207 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.1
	I0914 18:09:38.848153   62207 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.103554199s)
	I0914 18:09:38.848283   62207 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19643-8806/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0914 18:09:38.848368   62207 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0914 18:09:40.307247   62207 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.1: (1.459002378s)
	I0914 18:09:40.307287   62207 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19643-8806/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.1 from cache
	I0914 18:09:40.307269   62207 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.458886581s)
	I0914 18:09:40.307327   62207 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0914 18:09:40.307334   62207 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0914 18:09:40.307382   62207 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0914 18:09:40.958177   62207 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19643-8806/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0914 18:09:40.958222   62207 cache_images.go:123] Successfully loaded all cached images
	I0914 18:09:40.958228   62207 cache_images.go:92] duration metric: took 14.793018447s to LoadCachedImages
	I0914 18:09:40.958241   62207 kubeadm.go:934] updating node { 192.168.39.38 8443 v1.31.1 crio true true} ...
	I0914 18:09:40.958347   62207 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-168587 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.38
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:no-preload-168587 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0914 18:09:40.958415   62207 ssh_runner.go:195] Run: crio config
	I0914 18:09:41.003620   62207 cni.go:84] Creating CNI manager for ""
	I0914 18:09:41.003643   62207 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0914 18:09:41.003653   62207 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0914 18:09:41.003674   62207 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.38 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-168587 NodeName:no-preload-168587 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.38"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.38 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0914 18:09:41.003850   62207 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.38
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-168587"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.38
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.38"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
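	The kubeadm config dumped above is a single multi-document YAML file (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) that is copied to /var/tmp/minikube/kubeadm.yaml.new on the VM. A minimal sketch of reading such a multi-document file and printing each document's kind, assuming gopkg.in/yaml.v3 and a local copy named kubeadm.yaml (both are assumptions of the sketch, not part of the test harness):

	package main

	import (
		"fmt"
		"io"
		"os"

		"gopkg.in/yaml.v3"
	)

	func main() {
		// Hypothetical local copy of the generated config shown in the log above.
		f, err := os.Open("kubeadm.yaml")
		if err != nil {
			panic(err)
		}
		defer f.Close()

		// yaml.Decoder steps through "---"-separated documents one at a time.
		dec := yaml.NewDecoder(f)
		for {
			var doc map[string]interface{}
			if err := dec.Decode(&doc); err == io.EOF {
				break
			} else if err != nil {
				panic(err)
			}
			// Each document carries apiVersion and kind, e.g. kubeadm.k8s.io/v1beta3 / ClusterConfiguration.
			fmt.Printf("%v / %v\n", doc["apiVersion"], doc["kind"])
		}
	}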
	
	I0914 18:09:41.003920   62207 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0914 18:09:41.014462   62207 binaries.go:44] Found k8s binaries, skipping transfer
	I0914 18:09:41.014541   62207 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0914 18:09:41.023964   62207 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0914 18:09:41.040206   62207 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0914 18:09:41.055630   62207 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2158 bytes)
	I0914 18:09:41.072881   62207 ssh_runner.go:195] Run: grep 192.168.39.38	control-plane.minikube.internal$ /etc/hosts
	I0914 18:09:41.076449   62207 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.38	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0914 18:09:41.090075   62207 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 18:09:41.210405   62207 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0914 18:09:41.228173   62207 certs.go:68] Setting up /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/no-preload-168587 for IP: 192.168.39.38
	I0914 18:09:41.228197   62207 certs.go:194] generating shared ca certs ...
	I0914 18:09:41.228213   62207 certs.go:226] acquiring lock for ca certs: {Name:mkb663a3180967f5f94f0c355b2cd55067394331 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 18:09:41.228383   62207 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19643-8806/.minikube/ca.key
	I0914 18:09:41.228443   62207 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19643-8806/.minikube/proxy-client-ca.key
	I0914 18:09:41.228457   62207 certs.go:256] generating profile certs ...
	I0914 18:09:41.228586   62207 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/no-preload-168587/client.key
	I0914 18:09:41.228667   62207 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/no-preload-168587/apiserver.key.d11ec263
	I0914 18:09:41.228731   62207 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/no-preload-168587/proxy-client.key
	I0914 18:09:41.228889   62207 certs.go:484] found cert: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/16016.pem (1338 bytes)
	W0914 18:09:41.228932   62207 certs.go:480] ignoring /home/jenkins/minikube-integration/19643-8806/.minikube/certs/16016_empty.pem, impossibly tiny 0 bytes
	I0914 18:09:41.228944   62207 certs.go:484] found cert: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca-key.pem (1679 bytes)
	I0914 18:09:41.228976   62207 certs.go:484] found cert: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/ca.pem (1082 bytes)
	I0914 18:09:41.229008   62207 certs.go:484] found cert: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/cert.pem (1123 bytes)
	I0914 18:09:41.229045   62207 certs.go:484] found cert: /home/jenkins/minikube-integration/19643-8806/.minikube/certs/key.pem (1675 bytes)
	I0914 18:09:41.229102   62207 certs.go:484] found cert: /home/jenkins/minikube-integration/19643-8806/.minikube/files/etc/ssl/certs/160162.pem (1708 bytes)
	I0914 18:09:41.229913   62207 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0914 18:09:41.259871   62207 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0914 18:09:41.286359   62207 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0914 18:09:41.315410   62207 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0914 18:09:41.345541   62207 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/no-preload-168587/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0914 18:09:41.380128   62207 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/no-preload-168587/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0914 18:09:41.411130   62207 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/no-preload-168587/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0914 18:09:41.442136   62207 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/no-preload-168587/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0914 18:09:41.464823   62207 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0914 18:09:41.488153   62207 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/certs/16016.pem --> /usr/share/ca-certificates/16016.pem (1338 bytes)
	I0914 18:09:41.513788   62207 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-8806/.minikube/files/etc/ssl/certs/160162.pem --> /usr/share/ca-certificates/160162.pem (1708 bytes)
	I0914 18:09:41.537256   62207 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0914 18:09:41.553550   62207 ssh_runner.go:195] Run: openssl version
	I0914 18:09:41.559366   62207 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/160162.pem && ln -fs /usr/share/ca-certificates/160162.pem /etc/ssl/certs/160162.pem"
	I0914 18:09:41.571498   62207 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/160162.pem
	I0914 18:09:41.576889   62207 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 14 17:01 /usr/share/ca-certificates/160162.pem
	I0914 18:09:41.576947   62207 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/160162.pem
	I0914 18:09:41.583651   62207 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/160162.pem /etc/ssl/certs/3ec20f2e.0"
	I0914 18:09:41.594743   62207 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0914 18:09:41.605811   62207 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0914 18:09:41.610034   62207 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 14 16:44 /usr/share/ca-certificates/minikubeCA.pem
	I0914 18:09:41.610103   62207 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0914 18:09:41.615810   62207 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0914 18:09:41.627145   62207 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16016.pem && ln -fs /usr/share/ca-certificates/16016.pem /etc/ssl/certs/16016.pem"
	I0914 18:09:41.639956   62207 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16016.pem
	I0914 18:09:41.644647   62207 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 14 17:01 /usr/share/ca-certificates/16016.pem
	I0914 18:09:41.644705   62207 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16016.pem
	I0914 18:09:41.650281   62207 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16016.pem /etc/ssl/certs/51391683.0"
	I0914 18:09:41.662354   62207 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0914 18:09:41.667150   62207 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0914 18:09:41.673263   62207 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0914 18:09:41.680660   62207 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0914 18:09:41.687283   62207 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0914 18:09:41.693256   62207 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0914 18:09:41.698969   62207 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
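	The -checkend 86400 calls above ask openssl whether each control-plane certificate remains valid for at least the next 24 hours. A minimal standard-library sketch of the same check; the path below is the apiserver-kubelet-client cert stat'd above, and any of the other checked certs works the same way:

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	func main() {
		// Same idea as: openssl x509 -noout -in <cert> -checkend 86400
		data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
		if err != nil {
			panic(err)
		}
		block, _ := pem.Decode(data)
		if block == nil {
			panic("no PEM block found")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			panic(err)
		}
		// Fail if the certificate expires within the next 24 hours.
		if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
			fmt.Println("certificate expires within 24h")
			os.Exit(1)
		}
		fmt.Println("certificate valid for at least 24h")
	}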
	I0914 18:09:41.704543   62207 kubeadm.go:392] StartCluster: {Name:no-preload-168587 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19643/minikube-v1.34.0-1726281733-19643-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726281268-19643@sha256:cce8be0c1ac4e3d852132008ef1cc1dcf5b79f708d025db83f146ae65db32e8e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-168587 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.38 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0914 18:09:41.704671   62207 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0914 18:09:41.704750   62207 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0914 18:09:41.741255   62207 cri.go:89] found id: ""
	I0914 18:09:41.741354   62207 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0914 18:09:41.751360   62207 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0914 18:09:41.751377   62207 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0914 18:09:41.751417   62207 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0914 18:09:41.761492   62207 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0914 18:09:41.762591   62207 kubeconfig.go:125] found "no-preload-168587" server: "https://192.168.39.38:8443"
	I0914 18:09:41.764876   62207 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0914 18:09:41.774868   62207 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.38
	I0914 18:09:41.774901   62207 kubeadm.go:1160] stopping kube-system containers ...
	I0914 18:09:41.774913   62207 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0914 18:09:41.774969   62207 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0914 18:09:41.810189   62207 cri.go:89] found id: ""
	I0914 18:09:41.810248   62207 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0914 18:09:41.827903   62207 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0914 18:09:41.837504   62207 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0914 18:09:41.837532   62207 kubeadm.go:157] found existing configuration files:
	
	I0914 18:09:41.837585   62207 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0914 18:09:41.846260   62207 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0914 18:09:41.846322   62207 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0914 18:09:41.855350   62207 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0914 18:09:41.864096   62207 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0914 18:09:41.864153   62207 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0914 18:09:41.874772   62207 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0914 18:09:41.885427   62207 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0914 18:09:41.885502   62207 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0914 18:09:41.897121   62207 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0914 18:09:41.906955   62207 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0914 18:09:41.907020   62207 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0914 18:09:41.918253   62207 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0914 18:09:41.930134   62207 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 18:09:40.084800   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:09:42.581757   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:09:44.583611   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:09:40.502272   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:09:43.001471   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:09:40.374231   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:40.873955   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:41.374306   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:41.873511   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:42.373419   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:42.874077   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:43.374329   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:43.873782   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:44.373478   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:44.874120   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:42.054830   62207 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 18:09:42.754174   62207 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0914 18:09:42.973037   62207 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 18:09:43.043041   62207 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0914 18:09:43.119704   62207 api_server.go:52] waiting for apiserver process to appear ...
	I0914 18:09:43.119805   62207 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:43.620541   62207 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:44.120849   62207 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:44.139382   62207 api_server.go:72] duration metric: took 1.019679094s to wait for apiserver process to appear ...
	I0914 18:09:44.139406   62207 api_server.go:88] waiting for apiserver healthz status ...
	I0914 18:09:44.139424   62207 api_server.go:253] Checking apiserver healthz at https://192.168.39.38:8443/healthz ...
	I0914 18:09:44.139876   62207 api_server.go:269] stopped: https://192.168.39.38:8443/healthz: Get "https://192.168.39.38:8443/healthz": dial tcp 192.168.39.38:8443: connect: connection refused
	I0914 18:09:44.639981   62207 api_server.go:253] Checking apiserver healthz at https://192.168.39.38:8443/healthz ...
	I0914 18:09:47.262096   62207 api_server.go:279] https://192.168.39.38:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0914 18:09:47.262132   62207 api_server.go:103] status: https://192.168.39.38:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0914 18:09:47.262151   62207 api_server.go:253] Checking apiserver healthz at https://192.168.39.38:8443/healthz ...
	I0914 18:09:47.280626   62207 api_server.go:279] https://192.168.39.38:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0914 18:09:47.280652   62207 api_server.go:103] status: https://192.168.39.38:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0914 18:09:47.640152   62207 api_server.go:253] Checking apiserver healthz at https://192.168.39.38:8443/healthz ...
	I0914 18:09:47.646640   62207 api_server.go:279] https://192.168.39.38:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0914 18:09:47.646676   62207 api_server.go:103] status: https://192.168.39.38:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0914 18:09:48.140256   62207 api_server.go:253] Checking apiserver healthz at https://192.168.39.38:8443/healthz ...
	I0914 18:09:48.145520   62207 api_server.go:279] https://192.168.39.38:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0914 18:09:48.145557   62207 api_server.go:103] status: https://192.168.39.38:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0914 18:09:48.640147   62207 api_server.go:253] Checking apiserver healthz at https://192.168.39.38:8443/healthz ...
	I0914 18:09:48.645032   62207 api_server.go:279] https://192.168.39.38:8443/healthz returned 200:
	ok
	I0914 18:09:48.654567   62207 api_server.go:141] control plane version: v1.31.1
	I0914 18:09:48.654600   62207 api_server.go:131] duration metric: took 4.515188826s to wait for apiserver health ...
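	The healthz wait above tolerates 403 (the probe is anonymous) and 500 (post-start hooks such as rbac/bootstrap-roles still pending) and keeps polling until /healthz returns 200 with body "ok". A rough sketch of that loop; skipping TLS verification is an assumption of the sketch only, the real client trusts the cluster CA:

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout: 5 * time.Second,
			// Sketch-only shortcut: skip cert verification instead of loading
			// /var/lib/minikube/certs/ca.crt.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(4 * time.Minute)
		for time.Now().Before(deadline) {
			resp, err := client.Get("https://192.168.39.38:8443/healthz")
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					fmt.Println("apiserver healthy:", string(body))
					return
				}
				// 403 (anonymous user) and 500 (hooks not finished) are retried.
				fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
			}
			time.Sleep(500 * time.Millisecond)
		}
		fmt.Println("timed out waiting for apiserver health")
	}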
	I0914 18:09:48.654609   62207 cni.go:84] Creating CNI manager for ""
	I0914 18:09:48.654615   62207 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0914 18:09:48.656828   62207 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0914 18:09:47.082431   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:09:49.582001   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:09:45.500938   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:09:48.002332   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:09:45.374173   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:45.873537   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:46.373462   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:46.874196   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:47.374297   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:47.874112   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:48.373627   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:48.873473   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:49.374289   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:49.873411   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:48.658151   62207 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0914 18:09:48.692232   62207 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0914 18:09:48.734461   62207 system_pods.go:43] waiting for kube-system pods to appear ...
	I0914 18:09:48.746689   62207 system_pods.go:59] 8 kube-system pods found
	I0914 18:09:48.746723   62207 system_pods.go:61] "coredns-7c65d6cfc9-mwhvh" [38800077-a7ff-4c8c-8375-4efac2ae40b8] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0914 18:09:48.746733   62207 system_pods.go:61] "etcd-no-preload-168587" [bdb166bb-8c07-448c-a97c-2146e84f139b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0914 18:09:48.746744   62207 system_pods.go:61] "kube-apiserver-no-preload-168587" [8ad59d56-cb86-4028-bf16-3733eb32ad8d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0914 18:09:48.746752   62207 system_pods.go:61] "kube-controller-manager-no-preload-168587" [fd66d0aa-cc35-4330-aa6b-571dbeaa6490] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0914 18:09:48.746761   62207 system_pods.go:61] "kube-proxy-lvp9h" [75c154d8-c76d-49eb-9497-dd17199e9d20] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0914 18:09:48.746771   62207 system_pods.go:61] "kube-scheduler-no-preload-168587" [858c948b-9025-48ab-907a-5b69aefbb24c] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0914 18:09:48.746782   62207 system_pods.go:61] "metrics-server-6867b74b74-n276z" [69e25ed4-dc8e-4c68-955e-e7226d066ac4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 18:09:48.746790   62207 system_pods.go:61] "storage-provisioner" [41c92694-2d3a-4025-8e28-ddea7b9b9c5b] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0914 18:09:48.746801   62207 system_pods.go:74] duration metric: took 12.315296ms to wait for pod list to return data ...
	I0914 18:09:48.746811   62207 node_conditions.go:102] verifying NodePressure condition ...
	I0914 18:09:48.751399   62207 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0914 18:09:48.751428   62207 node_conditions.go:123] node cpu capacity is 2
	I0914 18:09:48.751440   62207 node_conditions.go:105] duration metric: took 4.625335ms to run NodePressure ...
	I0914 18:09:48.751461   62207 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 18:09:49.051211   62207 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0914 18:09:49.057333   62207 kubeadm.go:739] kubelet initialised
	I0914 18:09:49.057366   62207 kubeadm.go:740] duration metric: took 6.124032ms waiting for restarted kubelet to initialise ...
	I0914 18:09:49.057379   62207 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 18:09:49.062570   62207 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-mwhvh" in "kube-system" namespace to be "Ready" ...
	I0914 18:09:51.069219   62207 pod_ready.go:103] pod "coredns-7c65d6cfc9-mwhvh" in "kube-system" namespace has status "Ready":"False"
	I0914 18:09:51.588043   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:09:54.082931   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:09:50.499759   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:09:52.502450   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:09:55.000767   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:09:50.374229   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:50.873429   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:51.373547   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:51.874090   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:52.373513   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:52.874222   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:53.374123   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:53.873893   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:54.373451   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:54.873583   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:53.069338   62207 pod_ready.go:103] pod "coredns-7c65d6cfc9-mwhvh" in "kube-system" namespace has status "Ready":"False"
	I0914 18:09:53.570290   62207 pod_ready.go:93] pod "coredns-7c65d6cfc9-mwhvh" in "kube-system" namespace has status "Ready":"True"
	I0914 18:09:53.570323   62207 pod_ready.go:82] duration metric: took 4.507716999s for pod "coredns-7c65d6cfc9-mwhvh" in "kube-system" namespace to be "Ready" ...
	I0914 18:09:53.570333   62207 pod_ready.go:79] waiting up to 4m0s for pod "etcd-no-preload-168587" in "kube-system" namespace to be "Ready" ...
	I0914 18:09:55.577317   62207 pod_ready.go:103] pod "etcd-no-preload-168587" in "kube-system" namespace has status "Ready":"False"
	I0914 18:09:56.581937   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:09:58.583632   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:09:57.000913   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:09:59.001429   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:09:55.374078   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:55.873810   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:09:55.873965   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:09:55.913981   62996 cri.go:89] found id: ""
	I0914 18:09:55.914011   62996 logs.go:276] 0 containers: []
	W0914 18:09:55.914023   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:09:55.914030   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:09:55.914091   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:09:55.948423   62996 cri.go:89] found id: ""
	I0914 18:09:55.948459   62996 logs.go:276] 0 containers: []
	W0914 18:09:55.948467   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:09:55.948472   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:09:55.948530   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:09:55.986470   62996 cri.go:89] found id: ""
	I0914 18:09:55.986507   62996 logs.go:276] 0 containers: []
	W0914 18:09:55.986520   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:09:55.986530   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:09:55.986598   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:09:56.022172   62996 cri.go:89] found id: ""
	I0914 18:09:56.022200   62996 logs.go:276] 0 containers: []
	W0914 18:09:56.022214   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:09:56.022220   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:09:56.022267   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:09:56.065503   62996 cri.go:89] found id: ""
	I0914 18:09:56.065552   62996 logs.go:276] 0 containers: []
	W0914 18:09:56.065564   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:09:56.065572   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:09:56.065632   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:09:56.101043   62996 cri.go:89] found id: ""
	I0914 18:09:56.101072   62996 logs.go:276] 0 containers: []
	W0914 18:09:56.101082   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:09:56.101089   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:09:56.101156   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:09:56.133820   62996 cri.go:89] found id: ""
	I0914 18:09:56.133852   62996 logs.go:276] 0 containers: []
	W0914 18:09:56.133864   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:09:56.133872   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:09:56.133925   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:09:56.172334   62996 cri.go:89] found id: ""
	I0914 18:09:56.172358   62996 logs.go:276] 0 containers: []
	W0914 18:09:56.172369   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:09:56.172380   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:09:56.172398   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:09:56.186476   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:09:56.186513   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:09:56.308336   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:09:56.308366   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:09:56.308388   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:09:56.386374   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:09:56.386410   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:09:56.426333   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:09:56.426360   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
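	Each retry round in the 62996 stream above is the same sweep: pgrep for a running kube-apiserver, then crictl ps -a --quiet --name=<component> for every control-plane component (all returning no container IDs here), followed by log gathering. A minimal os/exec sketch of that crictl sweep; with --quiet, crictl prints one container ID per line when it finds matches:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// listContainerIDs mirrors "sudo crictl ps -a --quiet --name=<name>":
	// it returns whatever container IDs crictl prints, one per line.
	func listContainerIDs(name string) ([]string, error) {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		if err != nil {
			return nil, err
		}
		return strings.Fields(string(out)), nil
	}

	func main() {
		components := []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
			"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard"}
		for _, name := range components {
			ids, err := listContainerIDs(name)
			if err != nil {
				fmt.Printf("%s: crictl failed: %v\n", name, err)
				continue
			}
			fmt.Printf("%s: found %d container(s): %v\n", name, len(ids), ids)
		}
	}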
	I0914 18:09:58.978306   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:09:58.991093   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:09:58.991175   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:09:59.029861   62996 cri.go:89] found id: ""
	I0914 18:09:59.029890   62996 logs.go:276] 0 containers: []
	W0914 18:09:59.029899   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:09:59.029905   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:09:59.029962   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:09:59.067744   62996 cri.go:89] found id: ""
	I0914 18:09:59.067772   62996 logs.go:276] 0 containers: []
	W0914 18:09:59.067783   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:09:59.067791   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:09:59.067973   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:09:59.105666   62996 cri.go:89] found id: ""
	I0914 18:09:59.105695   62996 logs.go:276] 0 containers: []
	W0914 18:09:59.105707   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:09:59.105714   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:09:59.105796   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:09:59.153884   62996 cri.go:89] found id: ""
	I0914 18:09:59.153916   62996 logs.go:276] 0 containers: []
	W0914 18:09:59.153929   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:09:59.153937   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:09:59.154007   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:09:59.191462   62996 cri.go:89] found id: ""
	I0914 18:09:59.191492   62996 logs.go:276] 0 containers: []
	W0914 18:09:59.191503   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:09:59.191509   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:09:59.191574   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:09:59.246299   62996 cri.go:89] found id: ""
	I0914 18:09:59.246326   62996 logs.go:276] 0 containers: []
	W0914 18:09:59.246336   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:09:59.246357   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:09:59.246413   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:09:59.292821   62996 cri.go:89] found id: ""
	I0914 18:09:59.292847   62996 logs.go:276] 0 containers: []
	W0914 18:09:59.292856   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:09:59.292862   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:09:59.292918   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:09:59.334130   62996 cri.go:89] found id: ""
	I0914 18:09:59.334176   62996 logs.go:276] 0 containers: []
	W0914 18:09:59.334187   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:09:59.334198   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:09:59.334211   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:09:59.386847   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:09:59.386884   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:09:59.400163   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:09:59.400193   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:09:59.476375   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:09:59.476400   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:09:59.476416   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:09:59.554564   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:09:59.554599   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:09:57.578803   62207 pod_ready.go:103] pod "etcd-no-preload-168587" in "kube-system" namespace has status "Ready":"False"
	I0914 18:09:59.576525   62207 pod_ready.go:93] pod "etcd-no-preload-168587" in "kube-system" namespace has status "Ready":"True"
	I0914 18:09:59.576547   62207 pod_ready.go:82] duration metric: took 6.006207927s for pod "etcd-no-preload-168587" in "kube-system" namespace to be "Ready" ...
	I0914 18:09:59.576556   62207 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-no-preload-168587" in "kube-system" namespace to be "Ready" ...
	I0914 18:10:00.084027   62207 pod_ready.go:93] pod "kube-apiserver-no-preload-168587" in "kube-system" namespace has status "Ready":"True"
	I0914 18:10:00.084054   62207 pod_ready.go:82] duration metric: took 507.490867ms for pod "kube-apiserver-no-preload-168587" in "kube-system" namespace to be "Ready" ...
	I0914 18:10:00.084067   62207 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-no-preload-168587" in "kube-system" namespace to be "Ready" ...
	I0914 18:10:00.089044   62207 pod_ready.go:93] pod "kube-controller-manager-no-preload-168587" in "kube-system" namespace has status "Ready":"True"
	I0914 18:10:00.089068   62207 pod_ready.go:82] duration metric: took 4.991847ms for pod "kube-controller-manager-no-preload-168587" in "kube-system" namespace to be "Ready" ...
	I0914 18:10:00.089079   62207 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-lvp9h" in "kube-system" namespace to be "Ready" ...
	I0914 18:10:00.093160   62207 pod_ready.go:93] pod "kube-proxy-lvp9h" in "kube-system" namespace has status "Ready":"True"
	I0914 18:10:00.093179   62207 pod_ready.go:82] duration metric: took 4.093257ms for pod "kube-proxy-lvp9h" in "kube-system" namespace to be "Ready" ...
	I0914 18:10:00.093198   62207 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-no-preload-168587" in "kube-system" namespace to be "Ready" ...
	I0914 18:10:00.096786   62207 pod_ready.go:93] pod "kube-scheduler-no-preload-168587" in "kube-system" namespace has status "Ready":"True"
	I0914 18:10:00.096800   62207 pod_ready.go:82] duration metric: took 3.594996ms for pod "kube-scheduler-no-preload-168587" in "kube-system" namespace to be "Ready" ...
	I0914 18:10:00.096807   62207 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace to be "Ready" ...
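	The pod_ready.go waits above poll each system-critical pod until its Ready condition reports True; in this run the metrics-server pod never gets there. A minimal client-go sketch of that check, assuming the kubeconfig path and pod name shown below (the harness uses its own per-profile kubeconfig and pod lists):

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// isPodReady reports whether the PodReady condition is True.
	func isPodReady(pod *corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		// Assumed kubeconfig location; adjust to the profile being checked.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		deadline := time.Now().Add(4 * time.Minute)
		for time.Now().Before(deadline) {
			pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "coredns-7c65d6cfc9-mwhvh", metav1.GetOptions{})
			if err == nil && isPodReady(pod) {
				fmt.Println("pod is Ready")
				return
			}
			time.Sleep(2 * time.Second)
		}
		fmt.Println("timed out waiting for pod to become Ready")
	}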
	I0914 18:10:01.082601   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:03.581290   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:01.502864   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:04.001645   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:02.095079   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:10:02.108933   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:10:02.109003   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:10:02.141838   62996 cri.go:89] found id: ""
	I0914 18:10:02.141861   62996 logs.go:276] 0 containers: []
	W0914 18:10:02.141869   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:10:02.141875   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:10:02.141934   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:10:02.176437   62996 cri.go:89] found id: ""
	I0914 18:10:02.176460   62996 logs.go:276] 0 containers: []
	W0914 18:10:02.176467   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:10:02.176472   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:10:02.176516   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:10:02.210341   62996 cri.go:89] found id: ""
	I0914 18:10:02.210369   62996 logs.go:276] 0 containers: []
	W0914 18:10:02.210381   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:10:02.210388   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:10:02.210434   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:10:02.243343   62996 cri.go:89] found id: ""
	I0914 18:10:02.243373   62996 logs.go:276] 0 containers: []
	W0914 18:10:02.243384   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:10:02.243391   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:10:02.243461   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:10:02.276630   62996 cri.go:89] found id: ""
	I0914 18:10:02.276657   62996 logs.go:276] 0 containers: []
	W0914 18:10:02.276668   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:10:02.276675   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:10:02.276736   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:10:02.311626   62996 cri.go:89] found id: ""
	I0914 18:10:02.311659   62996 logs.go:276] 0 containers: []
	W0914 18:10:02.311674   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:10:02.311682   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:10:02.311748   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:10:02.345868   62996 cri.go:89] found id: ""
	I0914 18:10:02.345892   62996 logs.go:276] 0 containers: []
	W0914 18:10:02.345901   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:10:02.345908   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:10:02.345966   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:10:02.380111   62996 cri.go:89] found id: ""
	I0914 18:10:02.380139   62996 logs.go:276] 0 containers: []
	W0914 18:10:02.380147   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:10:02.380156   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:10:02.380167   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:10:02.421061   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:10:02.421094   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:10:02.474596   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:10:02.474633   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:10:02.487460   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:10:02.487491   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:10:02.554178   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:10:02.554206   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:10:02.554218   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:10:05.138863   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:10:05.152233   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:10:05.152299   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:10:05.187891   62996 cri.go:89] found id: ""
	I0914 18:10:05.187918   62996 logs.go:276] 0 containers: []
	W0914 18:10:05.187929   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:10:05.187936   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:10:05.188000   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:10:05.231634   62996 cri.go:89] found id: ""
	I0914 18:10:05.231667   62996 logs.go:276] 0 containers: []
	W0914 18:10:05.231679   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:10:05.231686   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:10:05.231748   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:10:05.273445   62996 cri.go:89] found id: ""
	I0914 18:10:05.273469   62996 logs.go:276] 0 containers: []
	W0914 18:10:05.273478   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:10:05.273492   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:10:05.273551   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:10:05.308168   62996 cri.go:89] found id: ""
	I0914 18:10:05.308205   62996 logs.go:276] 0 containers: []
	W0914 18:10:05.308216   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:10:05.308224   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:10:05.308285   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:10:02.103118   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:04.103451   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:06.603049   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:05.582900   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:08.083020   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:06.500670   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:08.500752   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:05.343292   62996 cri.go:89] found id: ""
	I0914 18:10:05.343325   62996 logs.go:276] 0 containers: []
	W0914 18:10:05.343336   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:10:05.343343   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:10:05.343404   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:10:05.380420   62996 cri.go:89] found id: ""
	I0914 18:10:05.380445   62996 logs.go:276] 0 containers: []
	W0914 18:10:05.380452   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:10:05.380458   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:10:05.380503   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:10:05.415585   62996 cri.go:89] found id: ""
	I0914 18:10:05.415609   62996 logs.go:276] 0 containers: []
	W0914 18:10:05.415617   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:10:05.415623   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:10:05.415687   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:10:05.457170   62996 cri.go:89] found id: ""
	I0914 18:10:05.457198   62996 logs.go:276] 0 containers: []
	W0914 18:10:05.457208   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:10:05.457219   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:10:05.457234   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:10:05.495647   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:10:05.495681   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:10:05.543775   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:10:05.543813   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:10:05.556717   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:10:05.556750   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:10:05.624690   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:10:05.624713   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:10:05.624728   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:10:08.205292   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:10:08.217720   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:10:08.217786   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:10:08.250560   62996 cri.go:89] found id: ""
	I0914 18:10:08.250590   62996 logs.go:276] 0 containers: []
	W0914 18:10:08.250598   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:10:08.250604   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:10:08.250669   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:10:08.282085   62996 cri.go:89] found id: ""
	I0914 18:10:08.282115   62996 logs.go:276] 0 containers: []
	W0914 18:10:08.282123   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:10:08.282129   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:10:08.282202   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:10:08.314350   62996 cri.go:89] found id: ""
	I0914 18:10:08.314379   62996 logs.go:276] 0 containers: []
	W0914 18:10:08.314391   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:10:08.314398   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:10:08.314461   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:10:08.347672   62996 cri.go:89] found id: ""
	I0914 18:10:08.347703   62996 logs.go:276] 0 containers: []
	W0914 18:10:08.347714   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:10:08.347721   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:10:08.347780   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:10:08.385583   62996 cri.go:89] found id: ""
	I0914 18:10:08.385616   62996 logs.go:276] 0 containers: []
	W0914 18:10:08.385628   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:10:08.385636   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:10:08.385717   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:10:08.421135   62996 cri.go:89] found id: ""
	I0914 18:10:08.421166   62996 logs.go:276] 0 containers: []
	W0914 18:10:08.421176   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:10:08.421184   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:10:08.421242   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:10:08.456784   62996 cri.go:89] found id: ""
	I0914 18:10:08.456811   62996 logs.go:276] 0 containers: []
	W0914 18:10:08.456821   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:10:08.456828   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:10:08.456890   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:10:08.491658   62996 cri.go:89] found id: ""
	I0914 18:10:08.491690   62996 logs.go:276] 0 containers: []
	W0914 18:10:08.491698   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:10:08.491707   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:10:08.491718   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:10:08.544008   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:10:08.544045   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:10:08.557780   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:10:08.557813   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:10:08.631319   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:10:08.631354   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:10:08.631371   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:10:08.709845   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:10:08.709882   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:10:08.604603   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:11.103035   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:10.581739   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:12.582523   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:14.582676   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:11.000857   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:13.000915   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:15.001474   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:11.248034   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:10:11.261403   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:10:11.261471   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:10:11.294260   62996 cri.go:89] found id: ""
	I0914 18:10:11.294287   62996 logs.go:276] 0 containers: []
	W0914 18:10:11.294298   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:10:11.294305   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:10:11.294376   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:10:11.326784   62996 cri.go:89] found id: ""
	I0914 18:10:11.326811   62996 logs.go:276] 0 containers: []
	W0914 18:10:11.326822   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:10:11.326829   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:10:11.326878   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:10:11.359209   62996 cri.go:89] found id: ""
	I0914 18:10:11.359234   62996 logs.go:276] 0 containers: []
	W0914 18:10:11.359242   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:10:11.359247   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:10:11.359316   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:10:11.393800   62996 cri.go:89] found id: ""
	I0914 18:10:11.393828   62996 logs.go:276] 0 containers: []
	W0914 18:10:11.393836   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:10:11.393842   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:10:11.393889   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:10:11.425772   62996 cri.go:89] found id: ""
	I0914 18:10:11.425798   62996 logs.go:276] 0 containers: []
	W0914 18:10:11.425808   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:10:11.425815   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:10:11.425877   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:10:11.464139   62996 cri.go:89] found id: ""
	I0914 18:10:11.464165   62996 logs.go:276] 0 containers: []
	W0914 18:10:11.464174   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:10:11.464180   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:10:11.464230   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:10:11.498822   62996 cri.go:89] found id: ""
	I0914 18:10:11.498848   62996 logs.go:276] 0 containers: []
	W0914 18:10:11.498859   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:10:11.498869   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:10:11.498925   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:10:11.532591   62996 cri.go:89] found id: ""
	I0914 18:10:11.532623   62996 logs.go:276] 0 containers: []
	W0914 18:10:11.532634   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:10:11.532646   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:10:11.532660   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:10:11.608873   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:10:11.608892   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:10:11.608903   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:10:11.684622   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:10:11.684663   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:10:11.726639   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:10:11.726667   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:10:11.780380   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:10:11.780415   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:10:14.294514   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:10:14.308716   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:10:14.308779   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:10:14.348399   62996 cri.go:89] found id: ""
	I0914 18:10:14.348423   62996 logs.go:276] 0 containers: []
	W0914 18:10:14.348431   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:10:14.348437   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:10:14.348485   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:10:14.387040   62996 cri.go:89] found id: ""
	I0914 18:10:14.387071   62996 logs.go:276] 0 containers: []
	W0914 18:10:14.387082   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:10:14.387088   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:10:14.387144   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:10:14.424704   62996 cri.go:89] found id: ""
	I0914 18:10:14.424733   62996 logs.go:276] 0 containers: []
	W0914 18:10:14.424741   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:10:14.424746   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:10:14.424793   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:10:14.464395   62996 cri.go:89] found id: ""
	I0914 18:10:14.464431   62996 logs.go:276] 0 containers: []
	W0914 18:10:14.464442   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:10:14.464450   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:10:14.464511   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:10:14.495895   62996 cri.go:89] found id: ""
	I0914 18:10:14.495921   62996 logs.go:276] 0 containers: []
	W0914 18:10:14.495931   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:10:14.495938   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:10:14.496001   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:10:14.532877   62996 cri.go:89] found id: ""
	I0914 18:10:14.532904   62996 logs.go:276] 0 containers: []
	W0914 18:10:14.532914   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:10:14.532921   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:10:14.532987   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:10:14.568381   62996 cri.go:89] found id: ""
	I0914 18:10:14.568408   62996 logs.go:276] 0 containers: []
	W0914 18:10:14.568423   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:10:14.568430   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:10:14.568491   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:10:14.603867   62996 cri.go:89] found id: ""
	I0914 18:10:14.603897   62996 logs.go:276] 0 containers: []
	W0914 18:10:14.603908   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:10:14.603917   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:10:14.603933   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:10:14.616681   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:10:14.616705   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:10:14.687817   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:10:14.687852   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:10:14.687866   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:10:14.761672   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:10:14.761714   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:10:14.802676   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:10:14.802705   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:10:13.103818   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:15.602921   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:17.082737   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:19.082771   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:17.501947   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:20.000464   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:17.353218   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:10:17.366139   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:10:17.366224   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:10:17.404478   62996 cri.go:89] found id: ""
	I0914 18:10:17.404511   62996 logs.go:276] 0 containers: []
	W0914 18:10:17.404522   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:10:17.404530   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:10:17.404608   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:10:17.437553   62996 cri.go:89] found id: ""
	I0914 18:10:17.437579   62996 logs.go:276] 0 containers: []
	W0914 18:10:17.437588   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:10:17.437593   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:10:17.437648   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:10:17.473815   62996 cri.go:89] found id: ""
	I0914 18:10:17.473842   62996 logs.go:276] 0 containers: []
	W0914 18:10:17.473850   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:10:17.473855   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:10:17.473919   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:10:17.518593   62996 cri.go:89] found id: ""
	I0914 18:10:17.518617   62996 logs.go:276] 0 containers: []
	W0914 18:10:17.518625   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:10:17.518631   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:10:17.518679   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:10:17.554631   62996 cri.go:89] found id: ""
	I0914 18:10:17.554663   62996 logs.go:276] 0 containers: []
	W0914 18:10:17.554675   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:10:17.554682   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:10:17.554742   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:10:17.591485   62996 cri.go:89] found id: ""
	I0914 18:10:17.591512   62996 logs.go:276] 0 containers: []
	W0914 18:10:17.591520   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:10:17.591525   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:10:17.591582   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:10:17.629883   62996 cri.go:89] found id: ""
	I0914 18:10:17.629910   62996 logs.go:276] 0 containers: []
	W0914 18:10:17.629918   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:10:17.629925   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:10:17.629973   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:10:17.670639   62996 cri.go:89] found id: ""
	I0914 18:10:17.670666   62996 logs.go:276] 0 containers: []
	W0914 18:10:17.670677   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:10:17.670688   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:10:17.670700   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:10:17.725056   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:10:17.725095   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:10:17.738236   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:10:17.738267   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:10:17.812931   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:10:17.812963   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:10:17.812978   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:10:17.896394   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:10:17.896426   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:10:18.102598   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:20.104053   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:21.085272   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:23.583185   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:22.001396   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:24.500424   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:20.434465   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:10:20.448801   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:10:20.448878   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:10:20.482909   62996 cri.go:89] found id: ""
	I0914 18:10:20.482937   62996 logs.go:276] 0 containers: []
	W0914 18:10:20.482949   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:10:20.482956   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:10:20.483017   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:10:20.516865   62996 cri.go:89] found id: ""
	I0914 18:10:20.516888   62996 logs.go:276] 0 containers: []
	W0914 18:10:20.516896   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:10:20.516902   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:10:20.516961   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:10:20.556131   62996 cri.go:89] found id: ""
	I0914 18:10:20.556164   62996 logs.go:276] 0 containers: []
	W0914 18:10:20.556174   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:10:20.556182   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:10:20.556246   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:10:20.594755   62996 cri.go:89] found id: ""
	I0914 18:10:20.594779   62996 logs.go:276] 0 containers: []
	W0914 18:10:20.594787   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:10:20.594795   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:10:20.594841   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:10:20.630259   62996 cri.go:89] found id: ""
	I0914 18:10:20.630290   62996 logs.go:276] 0 containers: []
	W0914 18:10:20.630300   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:10:20.630307   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:10:20.630379   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:10:20.667721   62996 cri.go:89] found id: ""
	I0914 18:10:20.667754   62996 logs.go:276] 0 containers: []
	W0914 18:10:20.667763   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:10:20.667769   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:10:20.667826   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:10:20.706358   62996 cri.go:89] found id: ""
	I0914 18:10:20.706387   62996 logs.go:276] 0 containers: []
	W0914 18:10:20.706396   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:10:20.706401   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:10:20.706462   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:10:20.738514   62996 cri.go:89] found id: ""
	I0914 18:10:20.738541   62996 logs.go:276] 0 containers: []
	W0914 18:10:20.738549   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:10:20.738557   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:10:20.738576   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:10:20.775075   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:10:20.775105   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:10:20.825988   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:10:20.826026   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:10:20.839157   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:10:20.839194   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:10:20.915730   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:10:20.915750   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:10:20.915762   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:10:23.497427   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:10:23.511559   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:10:23.511633   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:10:23.546913   62996 cri.go:89] found id: ""
	I0914 18:10:23.546945   62996 logs.go:276] 0 containers: []
	W0914 18:10:23.546958   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:10:23.546969   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:10:23.547034   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:10:23.584438   62996 cri.go:89] found id: ""
	I0914 18:10:23.584457   62996 logs.go:276] 0 containers: []
	W0914 18:10:23.584463   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:10:23.584469   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:10:23.584517   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:10:23.618777   62996 cri.go:89] found id: ""
	I0914 18:10:23.618804   62996 logs.go:276] 0 containers: []
	W0914 18:10:23.618812   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:10:23.618817   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:10:23.618876   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:10:23.652197   62996 cri.go:89] found id: ""
	I0914 18:10:23.652225   62996 logs.go:276] 0 containers: []
	W0914 18:10:23.652236   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:10:23.652244   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:10:23.652304   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:10:23.687678   62996 cri.go:89] found id: ""
	I0914 18:10:23.687712   62996 logs.go:276] 0 containers: []
	W0914 18:10:23.687725   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:10:23.687733   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:10:23.687790   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:10:23.720884   62996 cri.go:89] found id: ""
	I0914 18:10:23.720918   62996 logs.go:276] 0 containers: []
	W0914 18:10:23.720929   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:10:23.720936   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:10:23.721004   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:10:23.753335   62996 cri.go:89] found id: ""
	I0914 18:10:23.753365   62996 logs.go:276] 0 containers: []
	W0914 18:10:23.753376   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:10:23.753384   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:10:23.753431   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:10:23.787177   62996 cri.go:89] found id: ""
	I0914 18:10:23.787209   62996 logs.go:276] 0 containers: []
	W0914 18:10:23.787230   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:10:23.787241   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:10:23.787254   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:10:23.864763   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:10:23.864802   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:10:23.903394   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:10:23.903424   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:10:23.952696   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:10:23.952734   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:10:23.967115   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:10:23.967142   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:10:24.035394   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:10:22.602815   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:24.603230   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:26.604416   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:26.082291   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:28.582007   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:26.501088   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:29.001400   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:26.536361   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:10:26.550666   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:10:26.550746   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:10:26.588940   62996 cri.go:89] found id: ""
	I0914 18:10:26.588974   62996 logs.go:276] 0 containers: []
	W0914 18:10:26.588988   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:10:26.588997   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:10:26.589064   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:10:26.627475   62996 cri.go:89] found id: ""
	I0914 18:10:26.627523   62996 logs.go:276] 0 containers: []
	W0914 18:10:26.627537   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:10:26.627546   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:10:26.627619   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:10:26.664995   62996 cri.go:89] found id: ""
	I0914 18:10:26.665021   62996 logs.go:276] 0 containers: []
	W0914 18:10:26.665029   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:10:26.665034   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:10:26.665087   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:10:26.699195   62996 cri.go:89] found id: ""
	I0914 18:10:26.699223   62996 logs.go:276] 0 containers: []
	W0914 18:10:26.699234   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:10:26.699241   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:10:26.699300   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:10:26.735746   62996 cri.go:89] found id: ""
	I0914 18:10:26.735781   62996 logs.go:276] 0 containers: []
	W0914 18:10:26.735790   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:10:26.735795   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:10:26.735857   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:10:26.772220   62996 cri.go:89] found id: ""
	I0914 18:10:26.772251   62996 logs.go:276] 0 containers: []
	W0914 18:10:26.772261   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:10:26.772270   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:10:26.772331   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:10:26.808301   62996 cri.go:89] found id: ""
	I0914 18:10:26.808330   62996 logs.go:276] 0 containers: []
	W0914 18:10:26.808339   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:10:26.808346   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:10:26.808412   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:10:26.844824   62996 cri.go:89] found id: ""
	I0914 18:10:26.844858   62996 logs.go:276] 0 containers: []
	W0914 18:10:26.844870   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:10:26.844880   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:10:26.844916   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:10:26.899960   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:10:26.899999   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:10:26.914413   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:10:26.914438   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:10:26.990599   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:10:26.990620   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:10:26.990632   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:10:27.067822   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:10:27.067872   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:10:29.610959   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:10:29.625456   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:10:29.625517   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:10:29.662963   62996 cri.go:89] found id: ""
	I0914 18:10:29.662990   62996 logs.go:276] 0 containers: []
	W0914 18:10:29.663002   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:10:29.663009   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:10:29.663078   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:10:29.702141   62996 cri.go:89] found id: ""
	I0914 18:10:29.702189   62996 logs.go:276] 0 containers: []
	W0914 18:10:29.702201   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:10:29.702208   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:10:29.702265   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:10:29.737559   62996 cri.go:89] found id: ""
	I0914 18:10:29.737584   62996 logs.go:276] 0 containers: []
	W0914 18:10:29.737592   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:10:29.737598   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:10:29.737644   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:10:29.773544   62996 cri.go:89] found id: ""
	I0914 18:10:29.773570   62996 logs.go:276] 0 containers: []
	W0914 18:10:29.773578   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:10:29.773586   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:10:29.773639   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:10:29.815355   62996 cri.go:89] found id: ""
	I0914 18:10:29.815401   62996 logs.go:276] 0 containers: []
	W0914 18:10:29.815414   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:10:29.815422   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:10:29.815490   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:10:29.855729   62996 cri.go:89] found id: ""
	I0914 18:10:29.855755   62996 logs.go:276] 0 containers: []
	W0914 18:10:29.855765   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:10:29.855772   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:10:29.855835   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:10:29.894023   62996 cri.go:89] found id: ""
	I0914 18:10:29.894048   62996 logs.go:276] 0 containers: []
	W0914 18:10:29.894056   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:10:29.894063   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:10:29.894120   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:10:29.928873   62996 cri.go:89] found id: ""
	I0914 18:10:29.928900   62996 logs.go:276] 0 containers: []
	W0914 18:10:29.928910   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:10:29.928921   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:10:29.928937   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:10:30.005879   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:10:30.005904   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:10:30.005917   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:10:30.087160   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:10:30.087196   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:10:30.126027   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:10:30.126058   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:10:30.178901   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:10:30.178941   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:10:28.604725   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:31.103833   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:30.582800   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:33.082884   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:31.001447   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:33.501525   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:32.692789   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:10:32.708884   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:10:32.708942   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:10:32.744684   62996 cri.go:89] found id: ""
	I0914 18:10:32.744711   62996 logs.go:276] 0 containers: []
	W0914 18:10:32.744722   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:10:32.744729   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:10:32.744789   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:10:32.778311   62996 cri.go:89] found id: ""
	I0914 18:10:32.778345   62996 logs.go:276] 0 containers: []
	W0914 18:10:32.778355   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:10:32.778362   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:10:32.778421   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:10:32.820122   62996 cri.go:89] found id: ""
	I0914 18:10:32.820150   62996 logs.go:276] 0 containers: []
	W0914 18:10:32.820158   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:10:32.820163   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:10:32.820213   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:10:32.856507   62996 cri.go:89] found id: ""
	I0914 18:10:32.856541   62996 logs.go:276] 0 containers: []
	W0914 18:10:32.856552   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:10:32.856559   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:10:32.856622   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:10:32.891891   62996 cri.go:89] found id: ""
	I0914 18:10:32.891922   62996 logs.go:276] 0 containers: []
	W0914 18:10:32.891934   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:10:32.891942   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:10:32.892001   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:10:32.936666   62996 cri.go:89] found id: ""
	I0914 18:10:32.936696   62996 logs.go:276] 0 containers: []
	W0914 18:10:32.936708   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:10:32.936715   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:10:32.936783   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:10:32.972287   62996 cri.go:89] found id: ""
	I0914 18:10:32.972321   62996 logs.go:276] 0 containers: []
	W0914 18:10:32.972333   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:10:32.972341   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:10:32.972406   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:10:33.028398   62996 cri.go:89] found id: ""
	I0914 18:10:33.028423   62996 logs.go:276] 0 containers: []
	W0914 18:10:33.028430   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:10:33.028438   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:10:33.028447   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:10:33.041604   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:10:33.041631   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:10:33.116278   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:10:33.116310   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:10:33.116325   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:10:33.194720   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:10:33.194755   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:10:33.235741   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:10:33.235778   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:10:33.603121   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:35.604573   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:35.083689   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:37.583721   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:36.000829   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:38.001022   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:40.002742   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:35.787601   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:10:35.801819   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:10:35.801895   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:10:35.837381   62996 cri.go:89] found id: ""
	I0914 18:10:35.837409   62996 logs.go:276] 0 containers: []
	W0914 18:10:35.837417   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:10:35.837423   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:10:35.837473   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:10:35.872876   62996 cri.go:89] found id: ""
	I0914 18:10:35.872907   62996 logs.go:276] 0 containers: []
	W0914 18:10:35.872915   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:10:35.872921   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:10:35.872972   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:10:35.908885   62996 cri.go:89] found id: ""
	I0914 18:10:35.908912   62996 logs.go:276] 0 containers: []
	W0914 18:10:35.908927   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:10:35.908932   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:10:35.908991   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:10:35.943358   62996 cri.go:89] found id: ""
	I0914 18:10:35.943386   62996 logs.go:276] 0 containers: []
	W0914 18:10:35.943395   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:10:35.943400   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:10:35.943450   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:10:35.978387   62996 cri.go:89] found id: ""
	I0914 18:10:35.978416   62996 logs.go:276] 0 containers: []
	W0914 18:10:35.978427   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:10:35.978434   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:10:35.978486   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:10:36.012836   62996 cri.go:89] found id: ""
	I0914 18:10:36.012863   62996 logs.go:276] 0 containers: []
	W0914 18:10:36.012874   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:10:36.012881   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:10:36.012931   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:10:36.048243   62996 cri.go:89] found id: ""
	I0914 18:10:36.048272   62996 logs.go:276] 0 containers: []
	W0914 18:10:36.048283   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:10:36.048290   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:10:36.048378   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:10:36.089415   62996 cri.go:89] found id: ""
	I0914 18:10:36.089449   62996 logs.go:276] 0 containers: []
	W0914 18:10:36.089460   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:10:36.089471   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:10:36.089484   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:10:36.141287   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:10:36.141324   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:10:36.154418   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:10:36.154444   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:10:36.228454   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:10:36.228483   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:10:36.228500   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:10:36.302020   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:10:36.302063   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:10:38.841946   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:10:38.855010   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:10:38.855072   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:10:38.890835   62996 cri.go:89] found id: ""
	I0914 18:10:38.890867   62996 logs.go:276] 0 containers: []
	W0914 18:10:38.890878   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:10:38.890886   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:10:38.890945   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:10:38.924675   62996 cri.go:89] found id: ""
	I0914 18:10:38.924700   62996 logs.go:276] 0 containers: []
	W0914 18:10:38.924708   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:10:38.924713   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:10:38.924761   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:10:38.959999   62996 cri.go:89] found id: ""
	I0914 18:10:38.960024   62996 logs.go:276] 0 containers: []
	W0914 18:10:38.960032   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:10:38.960038   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:10:38.960097   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:10:38.995718   62996 cri.go:89] found id: ""
	I0914 18:10:38.995747   62996 logs.go:276] 0 containers: []
	W0914 18:10:38.995755   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:10:38.995761   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:10:38.995810   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:10:39.031178   62996 cri.go:89] found id: ""
	I0914 18:10:39.031208   62996 logs.go:276] 0 containers: []
	W0914 18:10:39.031224   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:10:39.031232   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:10:39.031292   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:10:39.065511   62996 cri.go:89] found id: ""
	I0914 18:10:39.065540   62996 logs.go:276] 0 containers: []
	W0914 18:10:39.065560   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:10:39.065569   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:10:39.065628   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:10:39.103625   62996 cri.go:89] found id: ""
	I0914 18:10:39.103655   62996 logs.go:276] 0 containers: []
	W0914 18:10:39.103671   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:10:39.103678   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:10:39.103772   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:10:39.140140   62996 cri.go:89] found id: ""
	I0914 18:10:39.140169   62996 logs.go:276] 0 containers: []
	W0914 18:10:39.140179   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:10:39.140189   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:10:39.140205   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:10:39.154953   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:10:39.154980   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:10:39.226745   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:10:39.226778   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:10:39.226794   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:10:39.305268   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:10:39.305310   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:10:39.345363   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:10:39.345389   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:10:38.102910   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:40.103826   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:40.082907   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:42.083587   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:44.582457   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:42.500851   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:45.001069   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:41.897635   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:10:41.910895   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:10:41.910962   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:10:41.946302   62996 cri.go:89] found id: ""
	I0914 18:10:41.946327   62996 logs.go:276] 0 containers: []
	W0914 18:10:41.946338   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:10:41.946345   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:10:41.946405   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:10:41.983180   62996 cri.go:89] found id: ""
	I0914 18:10:41.983210   62996 logs.go:276] 0 containers: []
	W0914 18:10:41.983221   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:10:41.983231   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:10:41.983294   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:10:42.017923   62996 cri.go:89] found id: ""
	I0914 18:10:42.017946   62996 logs.go:276] 0 containers: []
	W0914 18:10:42.017954   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:10:42.017959   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:10:42.018006   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:10:42.052086   62996 cri.go:89] found id: ""
	I0914 18:10:42.052122   62996 logs.go:276] 0 containers: []
	W0914 18:10:42.052133   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:10:42.052140   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:10:42.052206   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:10:42.092000   62996 cri.go:89] found id: ""
	I0914 18:10:42.092029   62996 logs.go:276] 0 containers: []
	W0914 18:10:42.092040   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:10:42.092048   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:10:42.092114   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:10:42.130402   62996 cri.go:89] found id: ""
	I0914 18:10:42.130436   62996 logs.go:276] 0 containers: []
	W0914 18:10:42.130447   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:10:42.130455   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:10:42.130505   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:10:42.166614   62996 cri.go:89] found id: ""
	I0914 18:10:42.166639   62996 logs.go:276] 0 containers: []
	W0914 18:10:42.166647   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:10:42.166653   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:10:42.166704   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:10:42.199763   62996 cri.go:89] found id: ""
	I0914 18:10:42.199795   62996 logs.go:276] 0 containers: []
	W0914 18:10:42.199808   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:10:42.199820   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:10:42.199835   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:10:42.251564   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:10:42.251597   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:10:42.264771   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:10:42.264806   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:10:42.335441   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:10:42.335465   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:10:42.335489   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:10:42.417678   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:10:42.417715   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:10:44.956372   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:10:44.970643   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:10:44.970717   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:10:45.011625   62996 cri.go:89] found id: ""
	I0914 18:10:45.011659   62996 logs.go:276] 0 containers: []
	W0914 18:10:45.011671   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:10:45.011678   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:10:45.011738   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:10:45.047489   62996 cri.go:89] found id: ""
	I0914 18:10:45.047515   62996 logs.go:276] 0 containers: []
	W0914 18:10:45.047526   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:10:45.047541   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:10:45.047610   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:10:45.084909   62996 cri.go:89] found id: ""
	I0914 18:10:45.084935   62996 logs.go:276] 0 containers: []
	W0914 18:10:45.084957   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:10:45.084964   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:10:45.085035   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:10:45.120074   62996 cri.go:89] found id: ""
	I0914 18:10:45.120104   62996 logs.go:276] 0 containers: []
	W0914 18:10:45.120115   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:10:45.120123   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:10:45.120181   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:10:45.164010   62996 cri.go:89] found id: ""
	I0914 18:10:45.164039   62996 logs.go:276] 0 containers: []
	W0914 18:10:45.164050   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:10:45.164058   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:10:45.164128   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:10:45.209565   62996 cri.go:89] found id: ""
	I0914 18:10:45.209590   62996 logs.go:276] 0 containers: []
	W0914 18:10:45.209598   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:10:45.209604   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:10:45.209651   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:10:45.265484   62996 cri.go:89] found id: ""
	I0914 18:10:45.265513   62996 logs.go:276] 0 containers: []
	W0914 18:10:45.265521   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:10:45.265527   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:10:45.265593   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:10:45.300671   62996 cri.go:89] found id: ""
	I0914 18:10:45.300700   62996 logs.go:276] 0 containers: []
	W0914 18:10:45.300711   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:10:45.300722   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:10:45.300739   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:10:42.603017   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:45.104603   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:47.082010   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:49.082648   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:47.500917   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:50.001192   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:45.352657   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:10:45.352699   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:10:45.366347   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:10:45.366381   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:10:45.442993   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:10:45.443013   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:10:45.443024   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:10:45.523475   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:10:45.523522   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:10:48.062222   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:10:48.075764   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:10:48.075832   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:10:48.111836   62996 cri.go:89] found id: ""
	I0914 18:10:48.111864   62996 logs.go:276] 0 containers: []
	W0914 18:10:48.111876   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:10:48.111884   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:10:48.111942   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:10:48.144440   62996 cri.go:89] found id: ""
	I0914 18:10:48.144471   62996 logs.go:276] 0 containers: []
	W0914 18:10:48.144483   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:10:48.144490   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:10:48.144553   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:10:48.179694   62996 cri.go:89] found id: ""
	I0914 18:10:48.179724   62996 logs.go:276] 0 containers: []
	W0914 18:10:48.179732   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:10:48.179738   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:10:48.179799   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:10:48.217290   62996 cri.go:89] found id: ""
	I0914 18:10:48.217320   62996 logs.go:276] 0 containers: []
	W0914 18:10:48.217331   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:10:48.217337   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:10:48.217384   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:10:48.252071   62996 cri.go:89] found id: ""
	I0914 18:10:48.252098   62996 logs.go:276] 0 containers: []
	W0914 18:10:48.252105   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:10:48.252111   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:10:48.252172   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:10:48.285372   62996 cri.go:89] found id: ""
	I0914 18:10:48.285399   62996 logs.go:276] 0 containers: []
	W0914 18:10:48.285407   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:10:48.285414   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:10:48.285461   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:10:48.318015   62996 cri.go:89] found id: ""
	I0914 18:10:48.318040   62996 logs.go:276] 0 containers: []
	W0914 18:10:48.318048   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:10:48.318054   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:10:48.318099   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:10:48.350976   62996 cri.go:89] found id: ""
	I0914 18:10:48.351006   62996 logs.go:276] 0 containers: []
	W0914 18:10:48.351018   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:10:48.351027   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:10:48.351040   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:10:48.364707   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:10:48.364731   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:10:48.436438   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:10:48.436472   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:10:48.436488   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:10:48.517132   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:10:48.517165   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:10:48.555153   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:10:48.555182   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:10:47.603610   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:50.104612   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:51.083246   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:53.582120   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:52.001273   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:54.001308   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:51.108066   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:10:51.121176   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:10:51.121254   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:10:51.155641   62996 cri.go:89] found id: ""
	I0914 18:10:51.155675   62996 logs.go:276] 0 containers: []
	W0914 18:10:51.155687   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:10:51.155693   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:10:51.155744   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:10:51.189642   62996 cri.go:89] found id: ""
	I0914 18:10:51.189677   62996 logs.go:276] 0 containers: []
	W0914 18:10:51.189691   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:10:51.189698   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:10:51.189763   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:10:51.223337   62996 cri.go:89] found id: ""
	I0914 18:10:51.223365   62996 logs.go:276] 0 containers: []
	W0914 18:10:51.223375   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:10:51.223383   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:10:51.223446   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:10:51.259524   62996 cri.go:89] found id: ""
	I0914 18:10:51.259549   62996 logs.go:276] 0 containers: []
	W0914 18:10:51.259557   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:10:51.259568   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:10:51.259625   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:10:51.295307   62996 cri.go:89] found id: ""
	I0914 18:10:51.295336   62996 logs.go:276] 0 containers: []
	W0914 18:10:51.295347   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:10:51.295354   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:10:51.295419   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:10:51.330619   62996 cri.go:89] found id: ""
	I0914 18:10:51.330658   62996 logs.go:276] 0 containers: []
	W0914 18:10:51.330670   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:10:51.330677   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:10:51.330741   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:10:51.365146   62996 cri.go:89] found id: ""
	I0914 18:10:51.365178   62996 logs.go:276] 0 containers: []
	W0914 18:10:51.365191   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:10:51.365200   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:10:51.365263   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:10:51.403295   62996 cri.go:89] found id: ""
	I0914 18:10:51.403330   62996 logs.go:276] 0 containers: []
	W0914 18:10:51.403342   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:10:51.403353   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:10:51.403369   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:10:51.467426   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:10:51.467452   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:10:51.467471   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:10:51.552003   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:10:51.552037   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:10:51.591888   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:10:51.591921   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:10:51.645437   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:10:51.645472   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:10:54.160542   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:10:54.173965   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:10:54.174040   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:10:54.209242   62996 cri.go:89] found id: ""
	I0914 18:10:54.209270   62996 logs.go:276] 0 containers: []
	W0914 18:10:54.209281   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:10:54.209288   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:10:54.209365   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:10:54.242345   62996 cri.go:89] found id: ""
	I0914 18:10:54.242374   62996 logs.go:276] 0 containers: []
	W0914 18:10:54.242384   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:10:54.242392   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:10:54.242453   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:10:54.278677   62996 cri.go:89] found id: ""
	I0914 18:10:54.278707   62996 logs.go:276] 0 containers: []
	W0914 18:10:54.278718   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:10:54.278725   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:10:54.278793   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:10:54.314802   62996 cri.go:89] found id: ""
	I0914 18:10:54.314831   62996 logs.go:276] 0 containers: []
	W0914 18:10:54.314842   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:10:54.314849   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:10:54.314920   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:10:54.349075   62996 cri.go:89] found id: ""
	I0914 18:10:54.349100   62996 logs.go:276] 0 containers: []
	W0914 18:10:54.349120   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:10:54.349127   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:10:54.349189   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:10:54.382337   62996 cri.go:89] found id: ""
	I0914 18:10:54.382363   62996 logs.go:276] 0 containers: []
	W0914 18:10:54.382371   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:10:54.382376   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:10:54.382423   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:10:54.416613   62996 cri.go:89] found id: ""
	I0914 18:10:54.416640   62996 logs.go:276] 0 containers: []
	W0914 18:10:54.416649   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:10:54.416654   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:10:54.416701   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:10:54.449563   62996 cri.go:89] found id: ""
	I0914 18:10:54.449596   62996 logs.go:276] 0 containers: []
	W0914 18:10:54.449606   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:10:54.449617   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:10:54.449631   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:10:54.487454   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:10:54.487489   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:10:54.541679   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:10:54.541720   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:10:54.555267   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:10:54.555299   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:10:54.630280   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:10:54.630313   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:10:54.630323   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:10:52.603604   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:55.104734   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:55.582258   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:58.081905   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:56.002210   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:58.499961   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:10:57.215606   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:10:57.228469   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:10:57.228550   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:10:57.260643   62996 cri.go:89] found id: ""
	I0914 18:10:57.260675   62996 logs.go:276] 0 containers: []
	W0914 18:10:57.260684   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:10:57.260690   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:10:57.260750   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:10:57.294125   62996 cri.go:89] found id: ""
	I0914 18:10:57.294174   62996 logs.go:276] 0 containers: []
	W0914 18:10:57.294186   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:10:57.294196   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:10:57.294259   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:10:57.328078   62996 cri.go:89] found id: ""
	I0914 18:10:57.328101   62996 logs.go:276] 0 containers: []
	W0914 18:10:57.328108   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:10:57.328114   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:10:57.328173   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:10:57.362451   62996 cri.go:89] found id: ""
	I0914 18:10:57.362476   62996 logs.go:276] 0 containers: []
	W0914 18:10:57.362483   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:10:57.362489   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:10:57.362556   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:10:57.398273   62996 cri.go:89] found id: ""
	I0914 18:10:57.398298   62996 logs.go:276] 0 containers: []
	W0914 18:10:57.398306   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:10:57.398311   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:10:57.398374   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:10:57.431112   62996 cri.go:89] found id: ""
	I0914 18:10:57.431137   62996 logs.go:276] 0 containers: []
	W0914 18:10:57.431145   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:10:57.431151   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:10:57.431197   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:10:57.464930   62996 cri.go:89] found id: ""
	I0914 18:10:57.464956   62996 logs.go:276] 0 containers: []
	W0914 18:10:57.464966   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:10:57.464973   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:10:57.465033   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:10:57.501233   62996 cri.go:89] found id: ""
	I0914 18:10:57.501263   62996 logs.go:276] 0 containers: []
	W0914 18:10:57.501276   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:10:57.501287   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:10:57.501302   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:10:57.550798   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:10:57.550836   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:10:57.564238   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:10:57.564263   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:10:57.634387   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:10:57.634414   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:10:57.634424   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:10:57.714218   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:10:57.714253   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:11:00.251944   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:11:00.264817   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:11:00.264881   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:11:00.306613   62996 cri.go:89] found id: ""
	I0914 18:11:00.306641   62996 logs.go:276] 0 containers: []
	W0914 18:11:00.306651   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:11:00.306658   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:11:00.306717   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:11:00.340297   62996 cri.go:89] found id: ""
	I0914 18:11:00.340327   62996 logs.go:276] 0 containers: []
	W0914 18:11:00.340338   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:11:00.340346   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:11:00.340404   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:10:57.604025   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:00.104193   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:00.083208   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:02.582299   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:04.583803   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:00.500596   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:02.501405   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:04.501527   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:00.373553   62996 cri.go:89] found id: ""
	I0914 18:11:00.373594   62996 logs.go:276] 0 containers: []
	W0914 18:11:00.373603   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:11:00.373609   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:11:00.373657   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:11:00.407351   62996 cri.go:89] found id: ""
	I0914 18:11:00.407381   62996 logs.go:276] 0 containers: []
	W0914 18:11:00.407392   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:11:00.407400   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:11:00.407461   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:11:00.440976   62996 cri.go:89] found id: ""
	I0914 18:11:00.441005   62996 logs.go:276] 0 containers: []
	W0914 18:11:00.441016   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:11:00.441024   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:11:00.441085   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:11:00.478138   62996 cri.go:89] found id: ""
	I0914 18:11:00.478180   62996 logs.go:276] 0 containers: []
	W0914 18:11:00.478193   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:11:00.478201   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:11:00.478264   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:11:00.513861   62996 cri.go:89] found id: ""
	I0914 18:11:00.513885   62996 logs.go:276] 0 containers: []
	W0914 18:11:00.513897   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:11:00.513905   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:11:00.513955   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:11:00.547295   62996 cri.go:89] found id: ""
	I0914 18:11:00.547338   62996 logs.go:276] 0 containers: []
	W0914 18:11:00.547348   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:11:00.547357   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:11:00.547367   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:11:00.598108   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:11:00.598146   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:11:00.611751   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:11:00.611778   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:11:00.688767   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:11:00.688788   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:11:00.688803   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:11:00.771892   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:11:00.771929   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:11:03.310816   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:11:03.323773   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:11:03.323838   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:11:03.357873   62996 cri.go:89] found id: ""
	I0914 18:11:03.357910   62996 logs.go:276] 0 containers: []
	W0914 18:11:03.357922   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:11:03.357934   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:11:03.357995   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:11:03.394978   62996 cri.go:89] found id: ""
	I0914 18:11:03.395012   62996 logs.go:276] 0 containers: []
	W0914 18:11:03.395024   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:11:03.395032   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:11:03.395098   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:11:03.429699   62996 cri.go:89] found id: ""
	I0914 18:11:03.429725   62996 logs.go:276] 0 containers: []
	W0914 18:11:03.429734   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:11:03.429740   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:11:03.429794   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:11:03.462616   62996 cri.go:89] found id: ""
	I0914 18:11:03.462648   62996 logs.go:276] 0 containers: []
	W0914 18:11:03.462660   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:11:03.462692   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:11:03.462759   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:11:03.496464   62996 cri.go:89] found id: ""
	I0914 18:11:03.496495   62996 logs.go:276] 0 containers: []
	W0914 18:11:03.496506   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:11:03.496513   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:11:03.496573   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:11:03.529655   62996 cri.go:89] found id: ""
	I0914 18:11:03.529687   62996 logs.go:276] 0 containers: []
	W0914 18:11:03.529697   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:11:03.529704   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:11:03.529767   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:11:03.563025   62996 cri.go:89] found id: ""
	I0914 18:11:03.563055   62996 logs.go:276] 0 containers: []
	W0914 18:11:03.563064   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:11:03.563069   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:11:03.563123   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:11:03.604066   62996 cri.go:89] found id: ""
	I0914 18:11:03.604088   62996 logs.go:276] 0 containers: []
	W0914 18:11:03.604095   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:11:03.604103   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:11:03.604114   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:11:03.656607   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:11:03.656647   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:11:03.669974   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:11:03.670004   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:11:03.742295   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:11:03.742324   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:11:03.742343   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:11:03.817527   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:11:03.817566   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:11:02.602818   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:05.103061   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:07.083161   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:09.585702   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:06.999885   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:09.001611   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:06.355023   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:11:06.368376   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:11:06.368445   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:11:06.403876   62996 cri.go:89] found id: ""
	I0914 18:11:06.403904   62996 logs.go:276] 0 containers: []
	W0914 18:11:06.403916   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:11:06.403924   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:11:06.403997   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:11:06.438187   62996 cri.go:89] found id: ""
	I0914 18:11:06.438217   62996 logs.go:276] 0 containers: []
	W0914 18:11:06.438229   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:11:06.438236   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:11:06.438302   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:11:06.477599   62996 cri.go:89] found id: ""
	I0914 18:11:06.477628   62996 logs.go:276] 0 containers: []
	W0914 18:11:06.477639   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:11:06.477646   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:11:06.477718   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:11:06.514878   62996 cri.go:89] found id: ""
	I0914 18:11:06.514905   62996 logs.go:276] 0 containers: []
	W0914 18:11:06.514914   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:11:06.514920   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:11:06.514979   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:11:06.552228   62996 cri.go:89] found id: ""
	I0914 18:11:06.552260   62996 logs.go:276] 0 containers: []
	W0914 18:11:06.552272   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:11:06.552279   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:11:06.552346   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:11:06.594600   62996 cri.go:89] found id: ""
	I0914 18:11:06.594630   62996 logs.go:276] 0 containers: []
	W0914 18:11:06.594641   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:11:06.594649   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:11:06.594713   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:11:06.630977   62996 cri.go:89] found id: ""
	I0914 18:11:06.631017   62996 logs.go:276] 0 containers: []
	W0914 18:11:06.631029   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:11:06.631036   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:11:06.631095   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:11:06.666717   62996 cri.go:89] found id: ""
	I0914 18:11:06.666749   62996 logs.go:276] 0 containers: []
	W0914 18:11:06.666760   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:11:06.666771   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:11:06.666784   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:11:06.720438   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:11:06.720474   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:11:06.734264   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:11:06.734299   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:11:06.802999   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:11:06.803020   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:11:06.803039   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:11:06.881422   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:11:06.881462   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:11:09.420948   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:11:09.435498   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:11:09.435582   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:11:09.470441   62996 cri.go:89] found id: ""
	I0914 18:11:09.470473   62996 logs.go:276] 0 containers: []
	W0914 18:11:09.470485   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:11:09.470493   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:11:09.470568   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:11:09.506101   62996 cri.go:89] found id: ""
	I0914 18:11:09.506124   62996 logs.go:276] 0 containers: []
	W0914 18:11:09.506142   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:11:09.506147   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:11:09.506227   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:11:09.541518   62996 cri.go:89] found id: ""
	I0914 18:11:09.541545   62996 logs.go:276] 0 containers: []
	W0914 18:11:09.541553   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:11:09.541558   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:11:09.541618   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:11:09.582697   62996 cri.go:89] found id: ""
	I0914 18:11:09.582725   62996 logs.go:276] 0 containers: []
	W0914 18:11:09.582735   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:11:09.582743   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:11:09.582805   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:11:09.621060   62996 cri.go:89] found id: ""
	I0914 18:11:09.621088   62996 logs.go:276] 0 containers: []
	W0914 18:11:09.621097   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:11:09.621102   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:11:09.621161   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:11:09.657967   62996 cri.go:89] found id: ""
	I0914 18:11:09.657994   62996 logs.go:276] 0 containers: []
	W0914 18:11:09.658003   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:11:09.658008   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:11:09.658060   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:11:09.693397   62996 cri.go:89] found id: ""
	I0914 18:11:09.693432   62996 logs.go:276] 0 containers: []
	W0914 18:11:09.693444   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:11:09.693451   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:11:09.693505   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:11:09.730819   62996 cri.go:89] found id: ""
	I0914 18:11:09.730850   62996 logs.go:276] 0 containers: []
	W0914 18:11:09.730860   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:11:09.730871   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:11:09.730887   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:11:09.745106   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:11:09.745146   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:11:09.817032   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:11:09.817059   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:11:09.817085   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:11:09.897335   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:11:09.897383   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:11:09.939036   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:11:09.939081   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:11:07.603634   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:09.605513   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:12.082145   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:14.082616   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:11.500951   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:14.001238   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:12.493075   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:11:12.506832   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:11:12.506889   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:11:12.545417   62996 cri.go:89] found id: ""
	I0914 18:11:12.545448   62996 logs.go:276] 0 containers: []
	W0914 18:11:12.545456   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:11:12.545464   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:11:12.545516   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:11:12.580346   62996 cri.go:89] found id: ""
	I0914 18:11:12.580379   62996 logs.go:276] 0 containers: []
	W0914 18:11:12.580389   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:11:12.580397   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:11:12.580457   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:11:12.616540   62996 cri.go:89] found id: ""
	I0914 18:11:12.616570   62996 logs.go:276] 0 containers: []
	W0914 18:11:12.616577   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:11:12.616586   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:11:12.616644   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:11:12.649673   62996 cri.go:89] found id: ""
	I0914 18:11:12.649700   62996 logs.go:276] 0 containers: []
	W0914 18:11:12.649709   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:11:12.649714   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:11:12.649767   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:11:12.683840   62996 cri.go:89] found id: ""
	I0914 18:11:12.683868   62996 logs.go:276] 0 containers: []
	W0914 18:11:12.683879   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:11:12.683886   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:11:12.683946   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:11:12.716862   62996 cri.go:89] found id: ""
	I0914 18:11:12.716889   62996 logs.go:276] 0 containers: []
	W0914 18:11:12.716897   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:11:12.716903   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:11:12.716952   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:11:12.751364   62996 cri.go:89] found id: ""
	I0914 18:11:12.751395   62996 logs.go:276] 0 containers: []
	W0914 18:11:12.751406   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:11:12.751414   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:11:12.751471   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:11:12.786425   62996 cri.go:89] found id: ""
	I0914 18:11:12.786457   62996 logs.go:276] 0 containers: []
	W0914 18:11:12.786468   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:11:12.786477   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:11:12.786487   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:11:12.853890   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:11:12.853920   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:11:12.853936   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:11:12.938058   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:11:12.938107   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:11:12.985406   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:11:12.985441   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:11:13.039040   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:11:13.039077   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:11:12.103165   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:14.103338   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:16.103440   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:16.083173   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:18.582225   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:16.001344   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:18.501001   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:15.554110   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:11:15.567977   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:11:15.568051   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:11:15.604851   62996 cri.go:89] found id: ""
	I0914 18:11:15.604879   62996 logs.go:276] 0 containers: []
	W0914 18:11:15.604887   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:11:15.604892   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:11:15.604945   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:11:15.641180   62996 cri.go:89] found id: ""
	I0914 18:11:15.641209   62996 logs.go:276] 0 containers: []
	W0914 18:11:15.641221   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:11:15.641229   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:11:15.641324   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:11:15.680284   62996 cri.go:89] found id: ""
	I0914 18:11:15.680310   62996 logs.go:276] 0 containers: []
	W0914 18:11:15.680327   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:11:15.680334   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:11:15.680395   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:11:15.718118   62996 cri.go:89] found id: ""
	I0914 18:11:15.718152   62996 logs.go:276] 0 containers: []
	W0914 18:11:15.718173   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:11:15.718181   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:11:15.718237   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:11:15.753998   62996 cri.go:89] found id: ""
	I0914 18:11:15.754020   62996 logs.go:276] 0 containers: []
	W0914 18:11:15.754028   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:11:15.754033   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:11:15.754081   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:11:15.790026   62996 cri.go:89] found id: ""
	I0914 18:11:15.790066   62996 logs.go:276] 0 containers: []
	W0914 18:11:15.790084   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:11:15.790093   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:11:15.790179   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:11:15.828050   62996 cri.go:89] found id: ""
	I0914 18:11:15.828078   62996 logs.go:276] 0 containers: []
	W0914 18:11:15.828086   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:11:15.828094   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:11:15.828162   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:11:15.861289   62996 cri.go:89] found id: ""
	I0914 18:11:15.861322   62996 logs.go:276] 0 containers: []
	W0914 18:11:15.861330   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:11:15.861338   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:11:15.861348   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:11:15.875023   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:11:15.875054   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:11:15.943002   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:11:15.943025   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:11:15.943038   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:11:16.027747   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:11:16.027785   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:11:16.067097   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:11:16.067133   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:11:18.621376   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:11:18.634005   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:11:18.634093   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:11:18.667089   62996 cri.go:89] found id: ""
	I0914 18:11:18.667118   62996 logs.go:276] 0 containers: []
	W0914 18:11:18.667127   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:11:18.667132   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:11:18.667184   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:11:18.700518   62996 cri.go:89] found id: ""
	I0914 18:11:18.700547   62996 logs.go:276] 0 containers: []
	W0914 18:11:18.700563   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:11:18.700571   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:11:18.700643   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:11:18.733724   62996 cri.go:89] found id: ""
	I0914 18:11:18.733755   62996 logs.go:276] 0 containers: []
	W0914 18:11:18.733767   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:11:18.733778   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:11:18.733840   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:11:18.768696   62996 cri.go:89] found id: ""
	I0914 18:11:18.768739   62996 logs.go:276] 0 containers: []
	W0914 18:11:18.768750   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:11:18.768757   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:11:18.768816   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:11:18.803603   62996 cri.go:89] found id: ""
	I0914 18:11:18.803636   62996 logs.go:276] 0 containers: []
	W0914 18:11:18.803647   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:11:18.803653   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:11:18.803707   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:11:18.837019   62996 cri.go:89] found id: ""
	I0914 18:11:18.837044   62996 logs.go:276] 0 containers: []
	W0914 18:11:18.837052   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:11:18.837058   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:11:18.837107   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:11:18.871470   62996 cri.go:89] found id: ""
	I0914 18:11:18.871496   62996 logs.go:276] 0 containers: []
	W0914 18:11:18.871504   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:11:18.871515   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:11:18.871573   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:11:18.904439   62996 cri.go:89] found id: ""
	I0914 18:11:18.904474   62996 logs.go:276] 0 containers: []
	W0914 18:11:18.904485   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:11:18.904494   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:11:18.904504   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:11:18.978025   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:11:18.978065   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:11:19.031667   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:11:19.031709   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:11:19.083360   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:11:19.083398   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:11:19.097770   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:11:19.097796   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:11:19.167712   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:11:18.603529   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:20.607347   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:20.583176   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:23.082414   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:20.501464   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:23.000161   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:25.000597   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:21.668470   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:11:21.681917   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:11:21.681994   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:11:21.717243   62996 cri.go:89] found id: ""
	I0914 18:11:21.717272   62996 logs.go:276] 0 containers: []
	W0914 18:11:21.717281   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:11:21.717286   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:11:21.717341   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:11:21.748801   62996 cri.go:89] found id: ""
	I0914 18:11:21.748853   62996 logs.go:276] 0 containers: []
	W0914 18:11:21.748863   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:11:21.748871   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:11:21.748930   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:11:21.785146   62996 cri.go:89] found id: ""
	I0914 18:11:21.785171   62996 logs.go:276] 0 containers: []
	W0914 18:11:21.785180   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:11:21.785185   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:11:21.785242   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:11:21.819949   62996 cri.go:89] found id: ""
	I0914 18:11:21.819977   62996 logs.go:276] 0 containers: []
	W0914 18:11:21.819984   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:11:21.819990   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:11:21.820039   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:11:21.852418   62996 cri.go:89] found id: ""
	I0914 18:11:21.852451   62996 logs.go:276] 0 containers: []
	W0914 18:11:21.852461   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:11:21.852468   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:11:21.852535   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:11:21.890170   62996 cri.go:89] found id: ""
	I0914 18:11:21.890205   62996 logs.go:276] 0 containers: []
	W0914 18:11:21.890216   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:11:21.890223   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:11:21.890283   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:11:21.924386   62996 cri.go:89] found id: ""
	I0914 18:11:21.924420   62996 logs.go:276] 0 containers: []
	W0914 18:11:21.924432   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:11:21.924439   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:11:21.924505   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:11:21.960302   62996 cri.go:89] found id: ""
	I0914 18:11:21.960328   62996 logs.go:276] 0 containers: []
	W0914 18:11:21.960337   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:11:21.960346   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:11:21.960360   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:11:22.038804   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:11:22.038839   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:11:22.082411   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:11:22.082444   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:11:22.134306   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:11:22.134339   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:11:22.147891   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:11:22.147919   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:11:22.216582   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:11:24.716879   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:11:24.729436   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:11:24.729506   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:11:24.782796   62996 cri.go:89] found id: ""
	I0914 18:11:24.782822   62996 logs.go:276] 0 containers: []
	W0914 18:11:24.782833   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:11:24.782842   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:11:24.782897   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:11:24.819075   62996 cri.go:89] found id: ""
	I0914 18:11:24.819101   62996 logs.go:276] 0 containers: []
	W0914 18:11:24.819108   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:11:24.819113   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:11:24.819157   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:11:24.852976   62996 cri.go:89] found id: ""
	I0914 18:11:24.853003   62996 logs.go:276] 0 containers: []
	W0914 18:11:24.853013   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:11:24.853020   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:11:24.853083   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:11:24.888010   62996 cri.go:89] found id: ""
	I0914 18:11:24.888041   62996 logs.go:276] 0 containers: []
	W0914 18:11:24.888053   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:11:24.888061   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:11:24.888140   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:11:24.923467   62996 cri.go:89] found id: ""
	I0914 18:11:24.923500   62996 logs.go:276] 0 containers: []
	W0914 18:11:24.923514   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:11:24.923522   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:11:24.923575   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:11:24.961976   62996 cri.go:89] found id: ""
	I0914 18:11:24.962003   62996 logs.go:276] 0 containers: []
	W0914 18:11:24.962011   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:11:24.962018   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:11:24.962079   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:11:24.995831   62996 cri.go:89] found id: ""
	I0914 18:11:24.995854   62996 logs.go:276] 0 containers: []
	W0914 18:11:24.995862   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:11:24.995868   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:11:24.995929   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:11:25.034793   62996 cri.go:89] found id: ""
	I0914 18:11:25.034822   62996 logs.go:276] 0 containers: []
	W0914 18:11:25.034832   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:11:25.034840   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:11:25.034855   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:11:25.048500   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:11:25.048531   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:11:25.120313   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:11:25.120346   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:11:25.120361   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:11:25.200361   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:11:25.200395   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:11:25.238898   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:11:25.238928   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:11:23.103266   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:25.104091   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:25.082804   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:27.582345   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:29.582482   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:27.001813   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:29.500751   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:27.791098   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:11:27.803729   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:11:27.803785   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:11:27.840688   62996 cri.go:89] found id: ""
	I0914 18:11:27.840711   62996 logs.go:276] 0 containers: []
	W0914 18:11:27.840719   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:11:27.840725   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:11:27.840775   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:11:27.874108   62996 cri.go:89] found id: ""
	I0914 18:11:27.874140   62996 logs.go:276] 0 containers: []
	W0914 18:11:27.874151   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:11:27.874176   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:11:27.874241   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:11:27.909352   62996 cri.go:89] found id: ""
	I0914 18:11:27.909392   62996 logs.go:276] 0 containers: []
	W0914 18:11:27.909403   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:11:27.909410   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:11:27.909460   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:11:27.942751   62996 cri.go:89] found id: ""
	I0914 18:11:27.942777   62996 logs.go:276] 0 containers: []
	W0914 18:11:27.942786   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:11:27.942791   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:11:27.942852   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:11:27.977714   62996 cri.go:89] found id: ""
	I0914 18:11:27.977745   62996 logs.go:276] 0 containers: []
	W0914 18:11:27.977756   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:11:27.977764   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:11:27.977830   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:11:28.013681   62996 cri.go:89] found id: ""
	I0914 18:11:28.013711   62996 logs.go:276] 0 containers: []
	W0914 18:11:28.013722   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:11:28.013730   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:11:28.013791   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:11:28.047112   62996 cri.go:89] found id: ""
	I0914 18:11:28.047138   62996 logs.go:276] 0 containers: []
	W0914 18:11:28.047146   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:11:28.047152   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:11:28.047199   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:11:28.084290   62996 cri.go:89] found id: ""
	I0914 18:11:28.084317   62996 logs.go:276] 0 containers: []
	W0914 18:11:28.084331   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:11:28.084340   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:11:28.084351   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:11:28.097720   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:11:28.097756   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:11:28.172054   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:11:28.172074   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:11:28.172085   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:11:28.253611   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:11:28.253644   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:11:28.289904   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:11:28.289938   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:11:27.105655   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:29.602893   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:32.082229   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:34.082649   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:31.501202   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:34.001997   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:30.839215   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:11:30.851580   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:11:30.851654   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:11:30.891232   62996 cri.go:89] found id: ""
	I0914 18:11:30.891261   62996 logs.go:276] 0 containers: []
	W0914 18:11:30.891272   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:11:30.891279   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:11:30.891346   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:11:30.930144   62996 cri.go:89] found id: ""
	I0914 18:11:30.930187   62996 logs.go:276] 0 containers: []
	W0914 18:11:30.930197   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:11:30.930204   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:11:30.930265   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:11:30.965034   62996 cri.go:89] found id: ""
	I0914 18:11:30.965068   62996 logs.go:276] 0 containers: []
	W0914 18:11:30.965080   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:11:30.965087   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:11:30.965150   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:11:30.998927   62996 cri.go:89] found id: ""
	I0914 18:11:30.998955   62996 logs.go:276] 0 containers: []
	W0914 18:11:30.998966   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:11:30.998974   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:11:30.999039   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:11:31.033789   62996 cri.go:89] found id: ""
	I0914 18:11:31.033820   62996 logs.go:276] 0 containers: []
	W0914 18:11:31.033830   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:11:31.033838   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:11:31.033892   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:11:31.068988   62996 cri.go:89] found id: ""
	I0914 18:11:31.069020   62996 logs.go:276] 0 containers: []
	W0914 18:11:31.069029   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:11:31.069035   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:11:31.069085   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:11:31.105904   62996 cri.go:89] found id: ""
	I0914 18:11:31.105932   62996 logs.go:276] 0 containers: []
	W0914 18:11:31.105944   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:11:31.105951   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:11:31.106018   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:11:31.147560   62996 cri.go:89] found id: ""
	I0914 18:11:31.147593   62996 logs.go:276] 0 containers: []
	W0914 18:11:31.147606   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:11:31.147618   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:11:31.147633   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:11:31.237347   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:11:31.237373   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:11:31.237389   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:11:31.322978   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:11:31.323012   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:11:31.361464   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:11:31.361495   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:11:31.417255   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:11:31.417299   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:11:33.930962   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:11:33.944431   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:11:33.944514   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:11:33.979727   62996 cri.go:89] found id: ""
	I0914 18:11:33.979761   62996 logs.go:276] 0 containers: []
	W0914 18:11:33.979772   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:11:33.979779   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:11:33.979840   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:11:34.015069   62996 cri.go:89] found id: ""
	I0914 18:11:34.015100   62996 logs.go:276] 0 containers: []
	W0914 18:11:34.015111   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:11:34.015117   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:11:34.015168   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:11:34.049230   62996 cri.go:89] found id: ""
	I0914 18:11:34.049262   62996 logs.go:276] 0 containers: []
	W0914 18:11:34.049274   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:11:34.049282   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:11:34.049345   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:11:34.086175   62996 cri.go:89] found id: ""
	I0914 18:11:34.086205   62996 logs.go:276] 0 containers: []
	W0914 18:11:34.086216   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:11:34.086225   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:11:34.086286   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:11:34.123534   62996 cri.go:89] found id: ""
	I0914 18:11:34.123563   62996 logs.go:276] 0 containers: []
	W0914 18:11:34.123573   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:11:34.123581   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:11:34.123645   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:11:34.160782   62996 cri.go:89] found id: ""
	I0914 18:11:34.160812   62996 logs.go:276] 0 containers: []
	W0914 18:11:34.160822   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:11:34.160830   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:11:34.160891   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:11:34.193240   62996 cri.go:89] found id: ""
	I0914 18:11:34.193264   62996 logs.go:276] 0 containers: []
	W0914 18:11:34.193272   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:11:34.193278   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:11:34.193336   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:11:34.232788   62996 cri.go:89] found id: ""
	I0914 18:11:34.232816   62996 logs.go:276] 0 containers: []
	W0914 18:11:34.232827   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:11:34.232838   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:11:34.232851   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:11:34.284953   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:11:34.284993   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:11:34.299462   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:11:34.299491   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:11:34.370596   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:11:34.370623   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:11:34.370638   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:11:34.450082   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:11:34.450118   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:11:32.103194   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:34.103615   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:36.603139   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:36.083120   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:38.582197   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:36.500663   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:38.501005   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:36.991625   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:11:37.009170   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:11:37.009229   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:11:37.044035   62996 cri.go:89] found id: ""
	I0914 18:11:37.044058   62996 logs.go:276] 0 containers: []
	W0914 18:11:37.044066   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:11:37.044072   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:11:37.044126   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:11:37.076288   62996 cri.go:89] found id: ""
	I0914 18:11:37.076318   62996 logs.go:276] 0 containers: []
	W0914 18:11:37.076328   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:11:37.076336   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:11:37.076399   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:11:37.110509   62996 cri.go:89] found id: ""
	I0914 18:11:37.110533   62996 logs.go:276] 0 containers: []
	W0914 18:11:37.110541   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:11:37.110553   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:11:37.110603   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:11:37.143688   62996 cri.go:89] found id: ""
	I0914 18:11:37.143713   62996 logs.go:276] 0 containers: []
	W0914 18:11:37.143721   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:11:37.143726   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:11:37.143781   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:11:37.180802   62996 cri.go:89] found id: ""
	I0914 18:11:37.180828   62996 logs.go:276] 0 containers: []
	W0914 18:11:37.180839   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:11:37.180846   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:11:37.180907   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:11:37.214590   62996 cri.go:89] found id: ""
	I0914 18:11:37.214615   62996 logs.go:276] 0 containers: []
	W0914 18:11:37.214623   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:11:37.214628   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:11:37.214674   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:11:37.246039   62996 cri.go:89] found id: ""
	I0914 18:11:37.246067   62996 logs.go:276] 0 containers: []
	W0914 18:11:37.246078   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:11:37.246085   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:11:37.246152   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:11:37.278258   62996 cri.go:89] found id: ""
	I0914 18:11:37.278299   62996 logs.go:276] 0 containers: []
	W0914 18:11:37.278307   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:11:37.278315   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:11:37.278325   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:11:37.315788   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:11:37.315817   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:11:37.367286   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:11:37.367322   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:11:37.380863   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:11:37.380894   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:11:37.447925   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:11:37.447948   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:11:37.447959   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:11:40.025419   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:11:40.038279   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:11:40.038361   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:11:40.072986   62996 cri.go:89] found id: ""
	I0914 18:11:40.073021   62996 logs.go:276] 0 containers: []
	W0914 18:11:40.073033   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:11:40.073041   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:11:40.073102   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:11:40.107636   62996 cri.go:89] found id: ""
	I0914 18:11:40.107657   62996 logs.go:276] 0 containers: []
	W0914 18:11:40.107665   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:11:40.107670   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:11:40.107723   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:11:40.145308   62996 cri.go:89] found id: ""
	I0914 18:11:40.145347   62996 logs.go:276] 0 containers: []
	W0914 18:11:40.145359   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:11:40.145366   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:11:40.145412   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:11:40.182409   62996 cri.go:89] found id: ""
	I0914 18:11:40.182439   62996 logs.go:276] 0 containers: []
	W0914 18:11:40.182449   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:11:40.182457   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:11:40.182522   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:11:40.217621   62996 cri.go:89] found id: ""
	I0914 18:11:40.217655   62996 logs.go:276] 0 containers: []
	W0914 18:11:40.217667   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:11:40.217675   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:11:40.217738   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:11:40.253159   62996 cri.go:89] found id: ""
	I0914 18:11:40.253186   62996 logs.go:276] 0 containers: []
	W0914 18:11:40.253197   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:11:40.253205   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:11:40.253263   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:11:40.286808   62996 cri.go:89] found id: ""
	I0914 18:11:40.286838   62996 logs.go:276] 0 containers: []
	W0914 18:11:40.286847   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:11:40.286855   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:11:40.286910   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:11:40.324265   62996 cri.go:89] found id: ""
	I0914 18:11:40.324292   62996 logs.go:276] 0 containers: []
	W0914 18:11:40.324299   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:11:40.324307   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:11:40.324318   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:11:38.603823   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:41.102313   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:40.583132   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:43.082387   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:40.501996   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:43.000447   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:40.376962   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:11:40.376996   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:11:40.390564   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:11:40.390594   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:11:40.460934   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:11:40.460956   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:11:40.460967   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:11:40.537058   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:11:40.537099   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:11:43.075401   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:11:43.088488   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:11:43.088559   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:11:43.122777   62996 cri.go:89] found id: ""
	I0914 18:11:43.122802   62996 logs.go:276] 0 containers: []
	W0914 18:11:43.122811   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:11:43.122818   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:11:43.122878   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:11:43.155343   62996 cri.go:89] found id: ""
	I0914 18:11:43.155369   62996 logs.go:276] 0 containers: []
	W0914 18:11:43.155378   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:11:43.155383   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:11:43.155443   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:11:43.190350   62996 cri.go:89] found id: ""
	I0914 18:11:43.190379   62996 logs.go:276] 0 containers: []
	W0914 18:11:43.190390   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:11:43.190398   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:11:43.190460   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:11:43.222930   62996 cri.go:89] found id: ""
	I0914 18:11:43.222961   62996 logs.go:276] 0 containers: []
	W0914 18:11:43.222972   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:11:43.222979   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:11:43.223042   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:11:43.256931   62996 cri.go:89] found id: ""
	I0914 18:11:43.256959   62996 logs.go:276] 0 containers: []
	W0914 18:11:43.256971   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:11:43.256977   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:11:43.257044   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:11:43.287691   62996 cri.go:89] found id: ""
	I0914 18:11:43.287720   62996 logs.go:276] 0 containers: []
	W0914 18:11:43.287729   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:11:43.287734   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:11:43.287790   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:11:43.320633   62996 cri.go:89] found id: ""
	I0914 18:11:43.320658   62996 logs.go:276] 0 containers: []
	W0914 18:11:43.320666   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:11:43.320677   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:11:43.320738   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:11:43.354230   62996 cri.go:89] found id: ""
	I0914 18:11:43.354269   62996 logs.go:276] 0 containers: []
	W0914 18:11:43.354280   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:11:43.354291   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:11:43.354304   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:11:43.429256   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:11:43.429293   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:11:43.467929   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:11:43.467957   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:11:43.521266   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:11:43.521305   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:11:43.536471   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:11:43.536511   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:11:43.607588   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:11:43.103756   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:45.602971   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:45.082762   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:47.582353   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:49.584026   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:45.500451   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:47.501831   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:50.001778   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:46.108756   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:11:46.121231   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:11:46.121314   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:11:46.156499   62996 cri.go:89] found id: ""
	I0914 18:11:46.156528   62996 logs.go:276] 0 containers: []
	W0914 18:11:46.156537   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:11:46.156543   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:11:46.156591   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:11:46.192161   62996 cri.go:89] found id: ""
	I0914 18:11:46.192188   62996 logs.go:276] 0 containers: []
	W0914 18:11:46.192197   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:11:46.192203   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:11:46.192263   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:11:46.222784   62996 cri.go:89] found id: ""
	I0914 18:11:46.222816   62996 logs.go:276] 0 containers: []
	W0914 18:11:46.222826   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:11:46.222834   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:11:46.222894   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:11:46.261551   62996 cri.go:89] found id: ""
	I0914 18:11:46.261577   62996 logs.go:276] 0 containers: []
	W0914 18:11:46.261587   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:11:46.261594   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:11:46.261659   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:11:46.298263   62996 cri.go:89] found id: ""
	I0914 18:11:46.298293   62996 logs.go:276] 0 containers: []
	W0914 18:11:46.298303   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:11:46.298311   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:11:46.298387   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:11:46.333477   62996 cri.go:89] found id: ""
	I0914 18:11:46.333502   62996 logs.go:276] 0 containers: []
	W0914 18:11:46.333510   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:11:46.333516   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:11:46.333581   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:11:46.367975   62996 cri.go:89] found id: ""
	I0914 18:11:46.367998   62996 logs.go:276] 0 containers: []
	W0914 18:11:46.368005   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:11:46.368011   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:11:46.368063   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:11:46.402252   62996 cri.go:89] found id: ""
	I0914 18:11:46.402281   62996 logs.go:276] 0 containers: []
	W0914 18:11:46.402293   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:11:46.402310   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:11:46.402329   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:11:46.477212   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:11:46.477252   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:11:46.515542   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:11:46.515568   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:11:46.570108   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:11:46.570146   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:11:46.585989   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:11:46.586019   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:11:46.658769   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:11:49.159920   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:11:49.172748   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:11:49.172810   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:11:49.213555   62996 cri.go:89] found id: ""
	I0914 18:11:49.213585   62996 logs.go:276] 0 containers: []
	W0914 18:11:49.213595   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:11:49.213601   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:11:49.213660   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:11:49.246022   62996 cri.go:89] found id: ""
	I0914 18:11:49.246050   62996 logs.go:276] 0 containers: []
	W0914 18:11:49.246061   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:11:49.246068   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:11:49.246132   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:11:49.279131   62996 cri.go:89] found id: ""
	I0914 18:11:49.279157   62996 logs.go:276] 0 containers: []
	W0914 18:11:49.279167   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:11:49.279175   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:11:49.279236   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:11:49.313159   62996 cri.go:89] found id: ""
	I0914 18:11:49.313187   62996 logs.go:276] 0 containers: []
	W0914 18:11:49.313199   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:11:49.313207   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:11:49.313272   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:11:49.347837   62996 cri.go:89] found id: ""
	I0914 18:11:49.347861   62996 logs.go:276] 0 containers: []
	W0914 18:11:49.347870   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:11:49.347875   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:11:49.347932   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:11:49.381478   62996 cri.go:89] found id: ""
	I0914 18:11:49.381507   62996 logs.go:276] 0 containers: []
	W0914 18:11:49.381516   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:11:49.381522   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:11:49.381577   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:11:49.417197   62996 cri.go:89] found id: ""
	I0914 18:11:49.417224   62996 logs.go:276] 0 containers: []
	W0914 18:11:49.417238   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:11:49.417244   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:11:49.417313   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:11:49.450806   62996 cri.go:89] found id: ""
	I0914 18:11:49.450843   62996 logs.go:276] 0 containers: []
	W0914 18:11:49.450857   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:11:49.450870   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:11:49.450889   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:11:49.519573   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:11:49.519620   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:11:49.519639   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:11:49.595525   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:11:49.595565   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:11:49.633229   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:11:49.633259   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:11:49.688667   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:11:49.688710   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:11:47.605117   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:50.103023   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:52.082751   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:54.582016   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:52.501977   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:55.000564   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:52.206555   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:11:52.218920   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:11:52.218996   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:11:52.253986   62996 cri.go:89] found id: ""
	I0914 18:11:52.254010   62996 logs.go:276] 0 containers: []
	W0914 18:11:52.254018   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:11:52.254023   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:11:52.254070   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:11:52.286590   62996 cri.go:89] found id: ""
	I0914 18:11:52.286618   62996 logs.go:276] 0 containers: []
	W0914 18:11:52.286629   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:11:52.286636   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:11:52.286698   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:11:52.325419   62996 cri.go:89] found id: ""
	I0914 18:11:52.325454   62996 logs.go:276] 0 containers: []
	W0914 18:11:52.325464   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:11:52.325471   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:11:52.325533   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:11:52.363050   62996 cri.go:89] found id: ""
	I0914 18:11:52.363079   62996 logs.go:276] 0 containers: []
	W0914 18:11:52.363091   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:11:52.363098   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:11:52.363160   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:11:52.400107   62996 cri.go:89] found id: ""
	I0914 18:11:52.400142   62996 logs.go:276] 0 containers: []
	W0914 18:11:52.400153   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:11:52.400162   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:11:52.400229   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:11:52.435711   62996 cri.go:89] found id: ""
	I0914 18:11:52.435735   62996 logs.go:276] 0 containers: []
	W0914 18:11:52.435744   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:11:52.435752   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:11:52.435806   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:11:52.470761   62996 cri.go:89] found id: ""
	I0914 18:11:52.470789   62996 logs.go:276] 0 containers: []
	W0914 18:11:52.470800   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:11:52.470808   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:11:52.470875   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:11:52.505680   62996 cri.go:89] found id: ""
	I0914 18:11:52.505705   62996 logs.go:276] 0 containers: []
	W0914 18:11:52.505714   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:11:52.505725   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:11:52.505745   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:11:52.557577   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:11:52.557616   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:11:52.571785   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:11:52.571817   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:11:52.639759   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:11:52.639790   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:11:52.639805   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:11:52.727022   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:11:52.727072   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:11:55.266381   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:11:55.279300   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:11:55.279376   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:11:55.315414   62996 cri.go:89] found id: ""
	I0914 18:11:55.315455   62996 logs.go:276] 0 containers: []
	W0914 18:11:55.315463   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:11:55.315472   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:11:55.315539   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:11:52.603110   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:54.603267   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:56.582121   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:58.583277   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:57.001624   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:59.501328   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:55.350153   62996 cri.go:89] found id: ""
	I0914 18:11:55.350203   62996 logs.go:276] 0 containers: []
	W0914 18:11:55.350213   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:11:55.350218   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:11:55.350296   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:11:55.387403   62996 cri.go:89] found id: ""
	I0914 18:11:55.387437   62996 logs.go:276] 0 containers: []
	W0914 18:11:55.387459   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:11:55.387467   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:11:55.387522   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:11:55.424532   62996 cri.go:89] found id: ""
	I0914 18:11:55.424558   62996 logs.go:276] 0 containers: []
	W0914 18:11:55.424566   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:11:55.424575   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:11:55.424664   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:11:55.462423   62996 cri.go:89] found id: ""
	I0914 18:11:55.462458   62996 logs.go:276] 0 containers: []
	W0914 18:11:55.462468   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:11:55.462475   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:11:55.462536   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:11:55.496865   62996 cri.go:89] found id: ""
	I0914 18:11:55.496900   62996 logs.go:276] 0 containers: []
	W0914 18:11:55.496911   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:11:55.496921   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:11:55.496986   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:11:55.531524   62996 cri.go:89] found id: ""
	I0914 18:11:55.531566   62996 logs.go:276] 0 containers: []
	W0914 18:11:55.531577   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:11:55.531598   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:11:55.531663   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:11:55.566579   62996 cri.go:89] found id: ""
	I0914 18:11:55.566606   62996 logs.go:276] 0 containers: []
	W0914 18:11:55.566615   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:11:55.566623   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:11:55.566635   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:11:55.621074   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:11:55.621122   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:11:55.635805   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:11:55.635832   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:11:55.702346   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:11:55.702373   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:11:55.702387   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:11:55.778589   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:11:55.778639   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:11:58.317118   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:11:58.330312   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:11:58.330382   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:11:58.363550   62996 cri.go:89] found id: ""
	I0914 18:11:58.363587   62996 logs.go:276] 0 containers: []
	W0914 18:11:58.363598   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:11:58.363606   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:11:58.363669   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:11:58.397152   62996 cri.go:89] found id: ""
	I0914 18:11:58.397183   62996 logs.go:276] 0 containers: []
	W0914 18:11:58.397194   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:11:58.397201   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:11:58.397259   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:11:58.435076   62996 cri.go:89] found id: ""
	I0914 18:11:58.435102   62996 logs.go:276] 0 containers: []
	W0914 18:11:58.435111   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:11:58.435116   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:11:58.435184   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:11:58.471455   62996 cri.go:89] found id: ""
	I0914 18:11:58.471479   62996 logs.go:276] 0 containers: []
	W0914 18:11:58.471487   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:11:58.471493   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:11:58.471551   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:11:58.504545   62996 cri.go:89] found id: ""
	I0914 18:11:58.504586   62996 logs.go:276] 0 containers: []
	W0914 18:11:58.504596   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:11:58.504603   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:11:58.504662   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:11:58.539335   62996 cri.go:89] found id: ""
	I0914 18:11:58.539362   62996 logs.go:276] 0 containers: []
	W0914 18:11:58.539376   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:11:58.539383   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:11:58.539431   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:11:58.579707   62996 cri.go:89] found id: ""
	I0914 18:11:58.579737   62996 logs.go:276] 0 containers: []
	W0914 18:11:58.579747   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:11:58.579755   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:11:58.579814   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:11:58.614227   62996 cri.go:89] found id: ""
	I0914 18:11:58.614250   62996 logs.go:276] 0 containers: []
	W0914 18:11:58.614259   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:11:58.614266   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:11:58.614279   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:11:58.699846   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:11:58.699888   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:11:58.738513   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:11:58.738542   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:11:58.787858   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:11:58.787895   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:11:58.801103   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:11:58.801137   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:11:58.868291   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:11:57.102934   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:11:59.103345   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:01.604125   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:01.083045   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:03.582885   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:01.501890   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:04.001023   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:01.368810   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:12:01.381287   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:12:01.381359   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:12:01.414556   62996 cri.go:89] found id: ""
	I0914 18:12:01.414587   62996 logs.go:276] 0 containers: []
	W0914 18:12:01.414599   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:12:01.414611   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:12:01.414661   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:12:01.447765   62996 cri.go:89] found id: ""
	I0914 18:12:01.447795   62996 logs.go:276] 0 containers: []
	W0914 18:12:01.447806   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:12:01.447813   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:12:01.447875   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:12:01.481012   62996 cri.go:89] found id: ""
	I0914 18:12:01.481045   62996 logs.go:276] 0 containers: []
	W0914 18:12:01.481057   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:12:01.481065   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:12:01.481126   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:12:01.516999   62996 cri.go:89] found id: ""
	I0914 18:12:01.517024   62996 logs.go:276] 0 containers: []
	W0914 18:12:01.517031   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:12:01.517037   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:12:01.517088   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:12:01.555520   62996 cri.go:89] found id: ""
	I0914 18:12:01.555548   62996 logs.go:276] 0 containers: []
	W0914 18:12:01.555559   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:12:01.555566   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:12:01.555642   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:12:01.589581   62996 cri.go:89] found id: ""
	I0914 18:12:01.589606   62996 logs.go:276] 0 containers: []
	W0914 18:12:01.589616   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:12:01.589624   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:12:01.589691   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:12:01.623955   62996 cri.go:89] found id: ""
	I0914 18:12:01.623983   62996 logs.go:276] 0 containers: []
	W0914 18:12:01.623995   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:12:01.624002   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:12:01.624067   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:12:01.659136   62996 cri.go:89] found id: ""
	I0914 18:12:01.659166   62996 logs.go:276] 0 containers: []
	W0914 18:12:01.659177   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:12:01.659187   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:12:01.659206   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:12:01.711812   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:12:01.711849   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:12:01.724934   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:12:01.724968   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:12:01.793052   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:12:01.793079   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:12:01.793091   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:12:01.866761   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:12:01.866799   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:12:04.406435   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:12:04.419756   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:12:04.419818   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:12:04.456593   62996 cri.go:89] found id: ""
	I0914 18:12:04.456621   62996 logs.go:276] 0 containers: []
	W0914 18:12:04.456632   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:12:04.456639   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:12:04.456689   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:12:04.489281   62996 cri.go:89] found id: ""
	I0914 18:12:04.489314   62996 logs.go:276] 0 containers: []
	W0914 18:12:04.489326   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:12:04.489333   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:12:04.489399   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:12:04.525353   62996 cri.go:89] found id: ""
	I0914 18:12:04.525381   62996 logs.go:276] 0 containers: []
	W0914 18:12:04.525391   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:12:04.525398   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:12:04.525464   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:12:04.558495   62996 cri.go:89] found id: ""
	I0914 18:12:04.558520   62996 logs.go:276] 0 containers: []
	W0914 18:12:04.558531   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:12:04.558539   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:12:04.558598   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:12:04.594815   62996 cri.go:89] found id: ""
	I0914 18:12:04.594837   62996 logs.go:276] 0 containers: []
	W0914 18:12:04.594845   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:12:04.594851   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:12:04.594899   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:12:04.630198   62996 cri.go:89] found id: ""
	I0914 18:12:04.630224   62996 logs.go:276] 0 containers: []
	W0914 18:12:04.630232   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:12:04.630238   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:12:04.630294   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:12:04.665328   62996 cri.go:89] found id: ""
	I0914 18:12:04.665358   62996 logs.go:276] 0 containers: []
	W0914 18:12:04.665368   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:12:04.665373   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:12:04.665432   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:12:04.699778   62996 cri.go:89] found id: ""
	I0914 18:12:04.699801   62996 logs.go:276] 0 containers: []
	W0914 18:12:04.699809   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:12:04.699816   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:12:04.699877   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:12:04.750978   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:12:04.751022   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:12:04.764968   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:12:04.764998   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:12:04.839464   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:12:04.839494   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:12:04.839509   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:12:04.917939   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:12:04.917979   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
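The block above is one iteration of the retry loop: it probes CRI-O for each expected control-plane container by name, and every probe returns `found id: ""` / `0 containers`, i.e. no kube-apiserver, etcd, scheduler or controller-manager container was ever created. The same probes can be repeated by hand on the node; the loop below is only a sketch of that (the crictl command is the one shown verbatim in the log, the loop and shell access to the minikube VM are assumptions):

    # check whether any control-plane container exists in CRI-O
    for name in kube-apiserver etcd kube-scheduler kube-controller-manager coredns kube-proxy; do
      echo "== $name =="
      sudo crictl ps -a --quiet --name=$name   # empty output = container never created
    done
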
	I0914 18:12:04.103388   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:06.103725   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:06.083003   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:08.581415   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:06.002052   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:08.500393   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:07.459389   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:12:07.472630   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:12:07.472691   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:12:07.507993   62996 cri.go:89] found id: ""
	I0914 18:12:07.508029   62996 logs.go:276] 0 containers: []
	W0914 18:12:07.508040   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:12:07.508047   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:12:07.508110   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:12:07.541083   62996 cri.go:89] found id: ""
	I0914 18:12:07.541108   62996 logs.go:276] 0 containers: []
	W0914 18:12:07.541116   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:12:07.541121   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:12:07.541184   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:12:07.574973   62996 cri.go:89] found id: ""
	I0914 18:12:07.574995   62996 logs.go:276] 0 containers: []
	W0914 18:12:07.575003   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:12:07.575008   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:12:07.575052   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:12:07.610166   62996 cri.go:89] found id: ""
	I0914 18:12:07.610189   62996 logs.go:276] 0 containers: []
	W0914 18:12:07.610196   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:12:07.610202   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:12:07.610247   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:12:07.643090   62996 cri.go:89] found id: ""
	I0914 18:12:07.643118   62996 logs.go:276] 0 containers: []
	W0914 18:12:07.643129   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:12:07.643140   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:12:07.643201   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:12:07.676788   62996 cri.go:89] found id: ""
	I0914 18:12:07.676814   62996 logs.go:276] 0 containers: []
	W0914 18:12:07.676825   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:12:07.676832   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:12:07.676895   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:12:07.714122   62996 cri.go:89] found id: ""
	I0914 18:12:07.714147   62996 logs.go:276] 0 containers: []
	W0914 18:12:07.714173   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:12:07.714179   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:12:07.714226   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:12:07.748168   62996 cri.go:89] found id: ""
	I0914 18:12:07.748193   62996 logs.go:276] 0 containers: []
	W0914 18:12:07.748204   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:12:07.748214   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:12:07.748230   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:12:07.784739   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:12:07.784766   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:12:07.833431   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:12:07.833467   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:12:07.846072   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:12:07.846100   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:12:07.912540   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:12:07.912560   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:12:07.912584   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:12:08.602880   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:10.604231   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:10.582647   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:13.082818   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:10.500953   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:13.001310   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:10.488543   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:12:10.502119   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:12:10.502203   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:12:10.535390   62996 cri.go:89] found id: ""
	I0914 18:12:10.535420   62996 logs.go:276] 0 containers: []
	W0914 18:12:10.535429   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:12:10.535435   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:12:10.535487   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:12:10.572013   62996 cri.go:89] found id: ""
	I0914 18:12:10.572044   62996 logs.go:276] 0 containers: []
	W0914 18:12:10.572052   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:12:10.572057   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:12:10.572105   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:12:10.613597   62996 cri.go:89] found id: ""
	I0914 18:12:10.613621   62996 logs.go:276] 0 containers: []
	W0914 18:12:10.613628   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:12:10.613634   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:12:10.613693   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:12:10.646086   62996 cri.go:89] found id: ""
	I0914 18:12:10.646116   62996 logs.go:276] 0 containers: []
	W0914 18:12:10.646127   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:12:10.646134   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:12:10.646219   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:12:10.679228   62996 cri.go:89] found id: ""
	I0914 18:12:10.679261   62996 logs.go:276] 0 containers: []
	W0914 18:12:10.679273   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:12:10.679281   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:12:10.679340   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:12:10.713321   62996 cri.go:89] found id: ""
	I0914 18:12:10.713350   62996 logs.go:276] 0 containers: []
	W0914 18:12:10.713359   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:12:10.713365   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:12:10.713413   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:12:10.757767   62996 cri.go:89] found id: ""
	I0914 18:12:10.757794   62996 logs.go:276] 0 containers: []
	W0914 18:12:10.757802   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:12:10.757809   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:12:10.757854   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:12:10.797709   62996 cri.go:89] found id: ""
	I0914 18:12:10.797731   62996 logs.go:276] 0 containers: []
	W0914 18:12:10.797739   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:12:10.797747   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:12:10.797757   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:12:10.848431   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:12:10.848474   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:12:10.862205   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:12:10.862239   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:12:10.935215   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:12:10.935242   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:12:10.935260   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:12:11.019021   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:12:11.019056   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
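Every "describe nodes" attempt in this log fails the same way: the apiserver endpoint on localhost:8443 refuses connections, which is consistent with the empty crictl listings above (no kube-apiserver container is running, so nothing is listening on 8443). The failing command can be rerun directly on the node to confirm; this is the exact command from the log, not a new check:

    sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes \
      --kubeconfig=/var/lib/minikube/kubeconfig
    # expected while the apiserver is down:
    # The connection to the server localhost:8443 was refused - did you specify the right host or port?
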
	I0914 18:12:13.560773   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:12:13.574835   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:12:13.574899   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:12:13.613543   62996 cri.go:89] found id: ""
	I0914 18:12:13.613569   62996 logs.go:276] 0 containers: []
	W0914 18:12:13.613582   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:12:13.613587   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:12:13.613646   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:12:13.650721   62996 cri.go:89] found id: ""
	I0914 18:12:13.650755   62996 logs.go:276] 0 containers: []
	W0914 18:12:13.650767   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:12:13.650775   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:12:13.650836   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:12:13.684269   62996 cri.go:89] found id: ""
	I0914 18:12:13.684299   62996 logs.go:276] 0 containers: []
	W0914 18:12:13.684310   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:12:13.684317   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:12:13.684376   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:12:13.726440   62996 cri.go:89] found id: ""
	I0914 18:12:13.726474   62996 logs.go:276] 0 containers: []
	W0914 18:12:13.726486   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:12:13.726503   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:12:13.726567   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:12:13.760835   62996 cri.go:89] found id: ""
	I0914 18:12:13.760865   62996 logs.go:276] 0 containers: []
	W0914 18:12:13.760876   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:12:13.760884   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:12:13.760957   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:12:13.801341   62996 cri.go:89] found id: ""
	I0914 18:12:13.801375   62996 logs.go:276] 0 containers: []
	W0914 18:12:13.801386   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:12:13.801394   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:12:13.801456   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:12:13.834307   62996 cri.go:89] found id: ""
	I0914 18:12:13.834332   62996 logs.go:276] 0 containers: []
	W0914 18:12:13.834350   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:12:13.834357   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:12:13.834439   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:12:13.868838   62996 cri.go:89] found id: ""
	I0914 18:12:13.868871   62996 logs.go:276] 0 containers: []
	W0914 18:12:13.868880   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:12:13.868889   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:12:13.868900   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:12:13.919867   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:12:13.919906   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:12:13.933383   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:12:13.933423   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:12:14.010559   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:12:14.010592   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:12:14.010606   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:12:14.087876   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:12:14.087913   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:12:13.103254   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:15.103641   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:15.083238   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:17.582387   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:15.501029   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:17.505028   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:20.001929   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
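The interleaved pod_ready.go:103 lines come from the other StartStop clusters running in parallel (PIDs 62207, 62554 and 63448); each is polling its metrics-server pod in kube-system, and the pod never reports Ready. A roughly equivalent manual check is sketched below; the command is an assumption for illustration (the log only shows the poller's output, not the command it runs), the pod name is taken from the log, and the cluster context is a placeholder:

    # hypothetical equivalent of the pod_ready poll; cluster context is illustrative
    kubectl --context <cluster> -n kube-system get pod metrics-server-6867b74b74-n276z \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
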
	I0914 18:12:16.630473   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:12:16.643114   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:12:16.643196   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:12:16.680922   62996 cri.go:89] found id: ""
	I0914 18:12:16.680954   62996 logs.go:276] 0 containers: []
	W0914 18:12:16.680962   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:12:16.680968   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:12:16.681015   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:12:16.715549   62996 cri.go:89] found id: ""
	I0914 18:12:16.715582   62996 logs.go:276] 0 containers: []
	W0914 18:12:16.715592   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:12:16.715598   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:12:16.715666   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:12:16.753928   62996 cri.go:89] found id: ""
	I0914 18:12:16.753951   62996 logs.go:276] 0 containers: []
	W0914 18:12:16.753962   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:12:16.753969   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:12:16.754033   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:12:16.787677   62996 cri.go:89] found id: ""
	I0914 18:12:16.787705   62996 logs.go:276] 0 containers: []
	W0914 18:12:16.787716   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:12:16.787723   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:12:16.787776   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:12:16.823638   62996 cri.go:89] found id: ""
	I0914 18:12:16.823667   62996 logs.go:276] 0 containers: []
	W0914 18:12:16.823678   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:12:16.823686   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:12:16.823748   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:12:16.860204   62996 cri.go:89] found id: ""
	I0914 18:12:16.860238   62996 logs.go:276] 0 containers: []
	W0914 18:12:16.860249   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:12:16.860257   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:12:16.860329   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:12:16.898802   62996 cri.go:89] found id: ""
	I0914 18:12:16.898827   62996 logs.go:276] 0 containers: []
	W0914 18:12:16.898837   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:12:16.898854   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:12:16.898941   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:12:16.932719   62996 cri.go:89] found id: ""
	I0914 18:12:16.932745   62996 logs.go:276] 0 containers: []
	W0914 18:12:16.932753   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:12:16.932762   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:12:16.932779   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:12:16.986217   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:12:16.986257   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:12:17.003243   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:12:17.003278   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:12:17.071374   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:12:17.071397   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:12:17.071409   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:12:17.152058   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:12:17.152112   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:12:19.717782   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:12:19.731122   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:12:19.731199   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:12:19.769042   62996 cri.go:89] found id: ""
	I0914 18:12:19.769070   62996 logs.go:276] 0 containers: []
	W0914 18:12:19.769079   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:12:19.769084   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:12:19.769154   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:12:19.804666   62996 cri.go:89] found id: ""
	I0914 18:12:19.804691   62996 logs.go:276] 0 containers: []
	W0914 18:12:19.804698   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:12:19.804704   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:12:19.804761   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:12:19.838705   62996 cri.go:89] found id: ""
	I0914 18:12:19.838729   62996 logs.go:276] 0 containers: []
	W0914 18:12:19.838738   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:12:19.838744   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:12:19.838790   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:12:19.873412   62996 cri.go:89] found id: ""
	I0914 18:12:19.873441   62996 logs.go:276] 0 containers: []
	W0914 18:12:19.873449   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:12:19.873455   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:12:19.873535   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:12:19.917706   62996 cri.go:89] found id: ""
	I0914 18:12:19.917734   62996 logs.go:276] 0 containers: []
	W0914 18:12:19.917746   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:12:19.917754   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:12:19.917813   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:12:19.956149   62996 cri.go:89] found id: ""
	I0914 18:12:19.956177   62996 logs.go:276] 0 containers: []
	W0914 18:12:19.956188   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:12:19.956196   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:12:19.956255   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:12:19.988903   62996 cri.go:89] found id: ""
	I0914 18:12:19.988926   62996 logs.go:276] 0 containers: []
	W0914 18:12:19.988934   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:12:19.988939   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:12:19.988988   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:12:20.023785   62996 cri.go:89] found id: ""
	I0914 18:12:20.023814   62996 logs.go:276] 0 containers: []
	W0914 18:12:20.023823   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:12:20.023833   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:12:20.023846   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:12:20.036891   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:12:20.036918   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:12:20.112397   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:12:20.112422   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:12:20.112437   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:12:20.195767   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:12:20.195801   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:12:20.235439   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:12:20.235467   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:12:17.103996   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:19.603109   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:21.603150   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:20.083547   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:22.586009   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:22.002367   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:24.500394   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:22.784765   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:12:22.799193   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:12:22.799267   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:12:22.840939   62996 cri.go:89] found id: ""
	I0914 18:12:22.840974   62996 logs.go:276] 0 containers: []
	W0914 18:12:22.840983   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:12:22.840990   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:12:22.841051   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:12:22.878920   62996 cri.go:89] found id: ""
	I0914 18:12:22.878951   62996 logs.go:276] 0 containers: []
	W0914 18:12:22.878962   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:12:22.878970   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:12:22.879021   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:12:22.926127   62996 cri.go:89] found id: ""
	I0914 18:12:22.926175   62996 logs.go:276] 0 containers: []
	W0914 18:12:22.926187   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:12:22.926195   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:12:22.926250   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:12:22.972041   62996 cri.go:89] found id: ""
	I0914 18:12:22.972068   62996 logs.go:276] 0 containers: []
	W0914 18:12:22.972076   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:12:22.972082   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:12:22.972137   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:12:23.012662   62996 cri.go:89] found id: ""
	I0914 18:12:23.012694   62996 logs.go:276] 0 containers: []
	W0914 18:12:23.012705   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:12:23.012712   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:12:23.012772   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:12:23.058923   62996 cri.go:89] found id: ""
	I0914 18:12:23.058950   62996 logs.go:276] 0 containers: []
	W0914 18:12:23.058958   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:12:23.058963   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:12:23.059011   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:12:23.098275   62996 cri.go:89] found id: ""
	I0914 18:12:23.098308   62996 logs.go:276] 0 containers: []
	W0914 18:12:23.098320   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:12:23.098327   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:12:23.098380   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:12:23.133498   62996 cri.go:89] found id: ""
	I0914 18:12:23.133525   62996 logs.go:276] 0 containers: []
	W0914 18:12:23.133534   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:12:23.133542   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:12:23.133554   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:12:23.201430   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:12:23.201456   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:12:23.201470   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:12:23.282388   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:12:23.282424   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:12:23.319896   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:12:23.319924   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:12:23.373629   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:12:23.373664   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:12:23.603351   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:26.103668   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:25.082824   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:27.582534   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:27.001617   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:29.002224   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:25.887183   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:12:25.901089   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:12:25.901168   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:12:25.934112   62996 cri.go:89] found id: ""
	I0914 18:12:25.934138   62996 logs.go:276] 0 containers: []
	W0914 18:12:25.934147   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:12:25.934153   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:12:25.934210   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:12:25.969202   62996 cri.go:89] found id: ""
	I0914 18:12:25.969228   62996 logs.go:276] 0 containers: []
	W0914 18:12:25.969236   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:12:25.969242   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:12:25.969300   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:12:26.005516   62996 cri.go:89] found id: ""
	I0914 18:12:26.005537   62996 logs.go:276] 0 containers: []
	W0914 18:12:26.005545   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:12:26.005551   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:12:26.005622   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:12:26.039162   62996 cri.go:89] found id: ""
	I0914 18:12:26.039189   62996 logs.go:276] 0 containers: []
	W0914 18:12:26.039199   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:12:26.039206   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:12:26.039266   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:12:26.073626   62996 cri.go:89] found id: ""
	I0914 18:12:26.073660   62996 logs.go:276] 0 containers: []
	W0914 18:12:26.073674   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:12:26.073682   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:12:26.073752   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:12:26.112057   62996 cri.go:89] found id: ""
	I0914 18:12:26.112086   62996 logs.go:276] 0 containers: []
	W0914 18:12:26.112097   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:12:26.112104   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:12:26.112168   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:12:26.145874   62996 cri.go:89] found id: ""
	I0914 18:12:26.145903   62996 logs.go:276] 0 containers: []
	W0914 18:12:26.145915   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:12:26.145923   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:12:26.145978   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:12:26.178959   62996 cri.go:89] found id: ""
	I0914 18:12:26.178989   62996 logs.go:276] 0 containers: []
	W0914 18:12:26.178997   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:12:26.179005   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:12:26.179018   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:12:26.251132   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:12:26.251156   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:12:26.251174   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:12:26.327488   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:12:26.327528   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:12:26.368444   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:12:26.368471   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:12:26.422676   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:12:26.422715   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
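Besides the container probes, each iteration also gathers kubelet, dmesg, CRI-O and container-status output. To pull the same material off the node in one pass, the commands from the log can be run back to back; the individual commands are verbatim from the log, only the grouping into one snippet is new:

    sudo journalctl -u kubelet -n 400        # kubelet: why the static pods are not starting
    sudo journalctl -u crio -n 400           # CRI-O runtime log
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a   # all containers, any state
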
	I0914 18:12:28.936784   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:12:28.960435   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:12:28.960515   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:12:29.012679   62996 cri.go:89] found id: ""
	I0914 18:12:29.012710   62996 logs.go:276] 0 containers: []
	W0914 18:12:29.012721   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:12:29.012729   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:12:29.012786   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:12:29.045058   62996 cri.go:89] found id: ""
	I0914 18:12:29.045091   62996 logs.go:276] 0 containers: []
	W0914 18:12:29.045102   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:12:29.045115   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:12:29.045180   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:12:29.079176   62996 cri.go:89] found id: ""
	I0914 18:12:29.079202   62996 logs.go:276] 0 containers: []
	W0914 18:12:29.079209   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:12:29.079216   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:12:29.079279   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:12:29.114288   62996 cri.go:89] found id: ""
	I0914 18:12:29.114317   62996 logs.go:276] 0 containers: []
	W0914 18:12:29.114337   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:12:29.114344   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:12:29.114404   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:12:29.147554   62996 cri.go:89] found id: ""
	I0914 18:12:29.147578   62996 logs.go:276] 0 containers: []
	W0914 18:12:29.147586   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:12:29.147592   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:12:29.147653   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:12:29.181739   62996 cri.go:89] found id: ""
	I0914 18:12:29.181767   62996 logs.go:276] 0 containers: []
	W0914 18:12:29.181775   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:12:29.181781   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:12:29.181825   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:12:29.220328   62996 cri.go:89] found id: ""
	I0914 18:12:29.220356   62996 logs.go:276] 0 containers: []
	W0914 18:12:29.220364   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:12:29.220373   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:12:29.220429   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:12:29.250900   62996 cri.go:89] found id: ""
	I0914 18:12:29.250929   62996 logs.go:276] 0 containers: []
	W0914 18:12:29.250941   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:12:29.250951   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:12:29.250966   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:12:29.287790   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:12:29.287820   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:12:29.338153   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:12:29.338194   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:12:29.351520   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:12:29.351547   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:12:29.421429   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:12:29.421457   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:12:29.421471   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:12:28.104044   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:30.602717   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:30.083027   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:32.083454   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:34.582698   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:31.002459   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:33.500924   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:31.997578   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:12:32.011256   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:12:32.011331   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:12:32.043761   62996 cri.go:89] found id: ""
	I0914 18:12:32.043793   62996 logs.go:276] 0 containers: []
	W0914 18:12:32.043801   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:12:32.043806   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:12:32.043859   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:12:32.076497   62996 cri.go:89] found id: ""
	I0914 18:12:32.076526   62996 logs.go:276] 0 containers: []
	W0914 18:12:32.076536   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:12:32.076543   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:12:32.076609   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:12:32.115059   62996 cri.go:89] found id: ""
	I0914 18:12:32.115084   62996 logs.go:276] 0 containers: []
	W0914 18:12:32.115094   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:12:32.115100   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:12:32.115159   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:12:32.153078   62996 cri.go:89] found id: ""
	I0914 18:12:32.153109   62996 logs.go:276] 0 containers: []
	W0914 18:12:32.153124   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:12:32.153130   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:12:32.153179   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:12:32.190539   62996 cri.go:89] found id: ""
	I0914 18:12:32.190621   62996 logs.go:276] 0 containers: []
	W0914 18:12:32.190638   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:12:32.190647   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:12:32.190700   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:12:32.231917   62996 cri.go:89] found id: ""
	I0914 18:12:32.231941   62996 logs.go:276] 0 containers: []
	W0914 18:12:32.231949   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:12:32.231955   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:12:32.232013   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:12:32.266197   62996 cri.go:89] found id: ""
	I0914 18:12:32.266227   62996 logs.go:276] 0 containers: []
	W0914 18:12:32.266238   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:12:32.266245   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:12:32.266312   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:12:32.299357   62996 cri.go:89] found id: ""
	I0914 18:12:32.299387   62996 logs.go:276] 0 containers: []
	W0914 18:12:32.299398   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:12:32.299409   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:12:32.299424   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:12:32.353225   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:12:32.353268   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:12:32.368228   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:12:32.368280   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:12:32.447802   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:12:32.447829   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:12:32.447847   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:12:32.523749   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:12:32.523788   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:12:35.063750   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:12:35.078487   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:12:35.078565   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:12:35.112949   62996 cri.go:89] found id: ""
	I0914 18:12:35.112994   62996 logs.go:276] 0 containers: []
	W0914 18:12:35.113008   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:12:35.113015   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:12:35.113068   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:12:35.146890   62996 cri.go:89] found id: ""
	I0914 18:12:35.146921   62996 logs.go:276] 0 containers: []
	W0914 18:12:35.146933   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:12:35.146941   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:12:35.147019   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:12:35.181077   62996 cri.go:89] found id: ""
	I0914 18:12:35.181106   62996 logs.go:276] 0 containers: []
	W0914 18:12:35.181116   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:12:35.181123   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:12:35.181194   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:12:35.214142   62996 cri.go:89] found id: ""
	I0914 18:12:35.214191   62996 logs.go:276] 0 containers: []
	W0914 18:12:35.214203   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:12:35.214215   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:12:35.214275   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:12:35.246615   62996 cri.go:89] found id: ""
	I0914 18:12:35.246644   62996 logs.go:276] 0 containers: []
	W0914 18:12:35.246655   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:12:35.246662   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:12:35.246722   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:12:35.278996   62996 cri.go:89] found id: ""
	I0914 18:12:35.279027   62996 logs.go:276] 0 containers: []
	W0914 18:12:35.279038   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:12:35.279047   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:12:35.279104   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:12:35.312612   62996 cri.go:89] found id: ""
	I0914 18:12:35.312641   62996 logs.go:276] 0 containers: []
	W0914 18:12:35.312650   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:12:35.312655   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:12:35.312711   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:12:32.603673   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:35.103528   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:37.081632   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:39.082269   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:35.501391   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:38.000592   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:40.001479   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:35.347717   62996 cri.go:89] found id: ""
	I0914 18:12:35.347741   62996 logs.go:276] 0 containers: []
	W0914 18:12:35.347749   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:12:35.347757   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:12:35.347767   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:12:35.389062   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:12:35.389090   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:12:35.437235   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:12:35.437277   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:12:35.452236   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:12:35.452275   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:12:35.523334   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:12:35.523371   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:12:35.523396   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:12:38.105613   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:12:38.119147   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:12:38.119214   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:12:38.158373   62996 cri.go:89] found id: ""
	I0914 18:12:38.158397   62996 logs.go:276] 0 containers: []
	W0914 18:12:38.158404   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:12:38.158410   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:12:38.158467   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:12:38.192376   62996 cri.go:89] found id: ""
	I0914 18:12:38.192409   62996 logs.go:276] 0 containers: []
	W0914 18:12:38.192421   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:12:38.192429   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:12:38.192490   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:12:38.230390   62996 cri.go:89] found id: ""
	I0914 18:12:38.230413   62996 logs.go:276] 0 containers: []
	W0914 18:12:38.230422   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:12:38.230427   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:12:38.230476   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:12:38.266608   62996 cri.go:89] found id: ""
	I0914 18:12:38.266634   62996 logs.go:276] 0 containers: []
	W0914 18:12:38.266642   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:12:38.266648   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:12:38.266704   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:12:38.299437   62996 cri.go:89] found id: ""
	I0914 18:12:38.299462   62996 logs.go:276] 0 containers: []
	W0914 18:12:38.299471   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:12:38.299477   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:12:38.299548   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:12:38.331092   62996 cri.go:89] found id: ""
	I0914 18:12:38.331119   62996 logs.go:276] 0 containers: []
	W0914 18:12:38.331128   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:12:38.331135   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:12:38.331194   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:12:38.364447   62996 cri.go:89] found id: ""
	I0914 18:12:38.364475   62996 logs.go:276] 0 containers: []
	W0914 18:12:38.364485   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:12:38.364491   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:12:38.364564   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:12:38.396977   62996 cri.go:89] found id: ""
	I0914 18:12:38.397001   62996 logs.go:276] 0 containers: []
	W0914 18:12:38.397011   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:12:38.397022   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:12:38.397036   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:12:38.477413   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:12:38.477449   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:12:38.515003   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:12:38.515031   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:12:38.567177   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:12:38.567222   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:12:38.580840   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:12:38.580876   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:12:38.654520   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:12:37.602537   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:39.603422   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:41.082861   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:43.583680   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:42.002259   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:44.500927   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:41.154728   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:12:41.167501   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:12:41.167578   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:12:41.200209   62996 cri.go:89] found id: ""
	I0914 18:12:41.200243   62996 logs.go:276] 0 containers: []
	W0914 18:12:41.200254   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:12:41.200260   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:12:41.200309   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:12:41.232386   62996 cri.go:89] found id: ""
	I0914 18:12:41.232415   62996 logs.go:276] 0 containers: []
	W0914 18:12:41.232425   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:12:41.232432   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:12:41.232515   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:12:41.268259   62996 cri.go:89] found id: ""
	I0914 18:12:41.268285   62996 logs.go:276] 0 containers: []
	W0914 18:12:41.268295   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:12:41.268303   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:12:41.268374   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:12:41.299952   62996 cri.go:89] found id: ""
	I0914 18:12:41.299984   62996 logs.go:276] 0 containers: []
	W0914 18:12:41.299992   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:12:41.299998   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:12:41.300055   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:12:41.331851   62996 cri.go:89] found id: ""
	I0914 18:12:41.331877   62996 logs.go:276] 0 containers: []
	W0914 18:12:41.331886   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:12:41.331892   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:12:41.331941   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:12:41.373747   62996 cri.go:89] found id: ""
	I0914 18:12:41.373778   62996 logs.go:276] 0 containers: []
	W0914 18:12:41.373789   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:12:41.373797   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:12:41.373847   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:12:41.410186   62996 cri.go:89] found id: ""
	I0914 18:12:41.410217   62996 logs.go:276] 0 containers: []
	W0914 18:12:41.410228   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:12:41.410235   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:12:41.410296   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:12:41.443926   62996 cri.go:89] found id: ""
	I0914 18:12:41.443961   62996 logs.go:276] 0 containers: []
	W0914 18:12:41.443972   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:12:41.443983   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:12:41.443998   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:12:41.457188   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:12:41.457226   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:12:41.525140   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:12:41.525165   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:12:41.525179   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:12:41.603829   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:12:41.603858   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:12:41.641462   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:12:41.641495   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:12:44.194009   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:12:44.207043   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:12:44.207112   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:12:44.240082   62996 cri.go:89] found id: ""
	I0914 18:12:44.240104   62996 logs.go:276] 0 containers: []
	W0914 18:12:44.240112   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:12:44.240117   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:12:44.240177   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:12:44.271608   62996 cri.go:89] found id: ""
	I0914 18:12:44.271642   62996 logs.go:276] 0 containers: []
	W0914 18:12:44.271653   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:12:44.271660   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:12:44.271721   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:12:44.308447   62996 cri.go:89] found id: ""
	I0914 18:12:44.308475   62996 logs.go:276] 0 containers: []
	W0914 18:12:44.308484   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:12:44.308490   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:12:44.308552   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:12:44.340399   62996 cri.go:89] found id: ""
	I0914 18:12:44.340430   62996 logs.go:276] 0 containers: []
	W0914 18:12:44.340440   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:12:44.340446   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:12:44.340502   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:12:44.374078   62996 cri.go:89] found id: ""
	I0914 18:12:44.374104   62996 logs.go:276] 0 containers: []
	W0914 18:12:44.374112   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:12:44.374118   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:12:44.374190   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:12:44.408933   62996 cri.go:89] found id: ""
	I0914 18:12:44.408963   62996 logs.go:276] 0 containers: []
	W0914 18:12:44.408974   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:12:44.408982   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:12:44.409040   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:12:44.444019   62996 cri.go:89] found id: ""
	I0914 18:12:44.444046   62996 logs.go:276] 0 containers: []
	W0914 18:12:44.444063   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:12:44.444070   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:12:44.444126   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:12:44.477033   62996 cri.go:89] found id: ""
	I0914 18:12:44.477058   62996 logs.go:276] 0 containers: []
	W0914 18:12:44.477066   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:12:44.477075   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:12:44.477086   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:12:44.530118   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:12:44.530151   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:12:44.543295   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:12:44.543327   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:12:44.614448   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:12:44.614474   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:12:44.614488   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:12:44.690708   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:12:44.690744   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:12:42.103521   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:44.603744   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:46.082955   62554 pod_ready.go:103] pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:48.576914   62554 pod_ready.go:82] duration metric: took 4m0.000963266s for pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace to be "Ready" ...
	E0914 18:12:48.576953   62554 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-stwfz" in "kube-system" namespace to be "Ready" (will not retry!)
	I0914 18:12:48.576972   62554 pod_ready.go:39] duration metric: took 4m11.061091965s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 18:12:48.576996   62554 kubeadm.go:597] duration metric: took 4m18.578277603s to restartPrimaryControlPlane
	W0914 18:12:48.577052   62554 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0914 18:12:48.577082   62554 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0914 18:12:46.501278   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:49.001649   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:47.229658   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:12:47.242715   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:12:47.242785   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:12:47.278275   62996 cri.go:89] found id: ""
	I0914 18:12:47.278298   62996 logs.go:276] 0 containers: []
	W0914 18:12:47.278305   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:12:47.278311   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:12:47.278365   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:12:47.313954   62996 cri.go:89] found id: ""
	I0914 18:12:47.313977   62996 logs.go:276] 0 containers: []
	W0914 18:12:47.313985   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:12:47.313991   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:12:47.314045   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:12:47.350944   62996 cri.go:89] found id: ""
	I0914 18:12:47.350972   62996 logs.go:276] 0 containers: []
	W0914 18:12:47.350983   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:12:47.350990   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:12:47.351052   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:12:47.384810   62996 cri.go:89] found id: ""
	I0914 18:12:47.384838   62996 logs.go:276] 0 containers: []
	W0914 18:12:47.384850   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:12:47.384857   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:12:47.384918   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:12:47.420380   62996 cri.go:89] found id: ""
	I0914 18:12:47.420406   62996 logs.go:276] 0 containers: []
	W0914 18:12:47.420419   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:12:47.420425   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:12:47.420476   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:12:47.453967   62996 cri.go:89] found id: ""
	I0914 18:12:47.453995   62996 logs.go:276] 0 containers: []
	W0914 18:12:47.454003   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:12:47.454009   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:12:47.454060   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:12:47.488588   62996 cri.go:89] found id: ""
	I0914 18:12:47.488616   62996 logs.go:276] 0 containers: []
	W0914 18:12:47.488627   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:12:47.488633   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:12:47.488696   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:12:47.522970   62996 cri.go:89] found id: ""
	I0914 18:12:47.523004   62996 logs.go:276] 0 containers: []
	W0914 18:12:47.523015   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:12:47.523025   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:12:47.523039   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:12:47.575977   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:12:47.576026   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:12:47.590854   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:12:47.590884   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:12:47.662149   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:12:47.662200   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:12:47.662215   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:12:47.740447   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:12:47.740482   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:12:50.279512   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:12:50.292294   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:12:50.292377   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:12:50.330928   62996 cri.go:89] found id: ""
	I0914 18:12:50.330960   62996 logs.go:276] 0 containers: []
	W0914 18:12:50.330972   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:12:50.330980   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:12:50.331036   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:12:47.103834   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:49.104052   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:51.603479   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:51.500469   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:53.500885   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:50.363656   62996 cri.go:89] found id: ""
	I0914 18:12:50.363687   62996 logs.go:276] 0 containers: []
	W0914 18:12:50.363696   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:12:50.363702   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:12:50.363756   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:12:50.395071   62996 cri.go:89] found id: ""
	I0914 18:12:50.395096   62996 logs.go:276] 0 containers: []
	W0914 18:12:50.395107   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:12:50.395113   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:12:50.395172   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:12:50.428461   62996 cri.go:89] found id: ""
	I0914 18:12:50.428487   62996 logs.go:276] 0 containers: []
	W0914 18:12:50.428495   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:12:50.428502   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:12:50.428549   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:12:50.461059   62996 cri.go:89] found id: ""
	I0914 18:12:50.461089   62996 logs.go:276] 0 containers: []
	W0914 18:12:50.461098   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:12:50.461105   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:12:50.461155   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:12:50.495447   62996 cri.go:89] found id: ""
	I0914 18:12:50.495481   62996 logs.go:276] 0 containers: []
	W0914 18:12:50.495492   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:12:50.495500   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:12:50.495574   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:12:50.529535   62996 cri.go:89] found id: ""
	I0914 18:12:50.529563   62996 logs.go:276] 0 containers: []
	W0914 18:12:50.529573   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:12:50.529580   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:12:50.529640   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:12:50.564648   62996 cri.go:89] found id: ""
	I0914 18:12:50.564679   62996 logs.go:276] 0 containers: []
	W0914 18:12:50.564689   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:12:50.564699   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:12:50.564710   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:12:50.639039   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:12:50.639066   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:12:50.639081   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:12:50.715636   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:12:50.715675   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:12:50.752973   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:12:50.753002   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:12:50.804654   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:12:50.804692   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:12:53.319420   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:12:53.332322   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:12:53.332414   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:12:53.370250   62996 cri.go:89] found id: ""
	I0914 18:12:53.370287   62996 logs.go:276] 0 containers: []
	W0914 18:12:53.370298   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:12:53.370306   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:12:53.370359   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:12:53.405394   62996 cri.go:89] found id: ""
	I0914 18:12:53.405422   62996 logs.go:276] 0 containers: []
	W0914 18:12:53.405434   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:12:53.405442   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:12:53.405501   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:12:53.439653   62996 cri.go:89] found id: ""
	I0914 18:12:53.439684   62996 logs.go:276] 0 containers: []
	W0914 18:12:53.439693   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:12:53.439699   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:12:53.439747   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:12:53.472491   62996 cri.go:89] found id: ""
	I0914 18:12:53.472520   62996 logs.go:276] 0 containers: []
	W0914 18:12:53.472531   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:12:53.472537   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:12:53.472598   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:12:53.506837   62996 cri.go:89] found id: ""
	I0914 18:12:53.506862   62996 logs.go:276] 0 containers: []
	W0914 18:12:53.506870   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:12:53.506877   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:12:53.506940   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:12:53.538229   62996 cri.go:89] found id: ""
	I0914 18:12:53.538256   62996 logs.go:276] 0 containers: []
	W0914 18:12:53.538267   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:12:53.538274   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:12:53.538340   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:12:53.570628   62996 cri.go:89] found id: ""
	I0914 18:12:53.570654   62996 logs.go:276] 0 containers: []
	W0914 18:12:53.570665   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:12:53.570672   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:12:53.570736   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:12:53.606147   62996 cri.go:89] found id: ""
	I0914 18:12:53.606188   62996 logs.go:276] 0 containers: []
	W0914 18:12:53.606199   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:12:53.606210   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:12:53.606236   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:12:53.675807   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:12:53.675829   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:12:53.675844   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:12:53.758491   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:12:53.758530   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:12:53.796006   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:12:53.796038   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:12:53.844935   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:12:53.844972   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:12:53.604109   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:56.104639   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:56.360696   62996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:12:56.374916   62996 kubeadm.go:597] duration metric: took 4m2.856242026s to restartPrimaryControlPlane
	W0914 18:12:56.374982   62996 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0914 18:12:56.375003   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0914 18:12:57.043509   62996 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 18:12:57.059022   62996 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0914 18:12:57.070295   62996 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0914 18:12:57.080854   62996 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0914 18:12:57.080875   62996 kubeadm.go:157] found existing configuration files:
	
	I0914 18:12:57.080917   62996 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0914 18:12:57.091221   62996 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0914 18:12:57.091320   62996 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0914 18:12:57.102011   62996 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0914 18:12:57.111389   62996 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0914 18:12:57.111451   62996 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0914 18:12:57.120508   62996 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0914 18:12:57.129086   62996 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0914 18:12:57.129162   62996 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0914 18:12:57.138193   62996 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0914 18:12:57.146637   62996 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0914 18:12:57.146694   62996 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0914 18:12:57.155659   62996 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0914 18:12:57.230872   62996 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0914 18:12:57.230955   62996 kubeadm.go:310] [preflight] Running pre-flight checks
	I0914 18:12:57.369118   62996 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0914 18:12:57.369267   62996 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0914 18:12:57.369422   62996 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0914 18:12:57.560020   62996 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0914 18:12:57.561972   62996 out.go:235]   - Generating certificates and keys ...
	I0914 18:12:57.562086   62996 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0914 18:12:57.562180   62996 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0914 18:12:57.562311   62996 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0914 18:12:57.562370   62996 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0914 18:12:57.562426   62996 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0914 18:12:57.562473   62996 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0914 18:12:57.562562   62996 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0914 18:12:57.562654   62996 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0914 18:12:57.563036   62996 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0914 18:12:57.563429   62996 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0914 18:12:57.563514   62996 kubeadm.go:310] [certs] Using the existing "sa" key
	I0914 18:12:57.563592   62996 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0914 18:12:57.677534   62996 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0914 18:12:57.910852   62996 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0914 18:12:58.037495   62996 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0914 18:12:58.325552   62996 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0914 18:12:58.339574   62996 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0914 18:12:58.340671   62996 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0914 18:12:58.340740   62996 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0914 18:12:58.485582   62996 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0914 18:12:55.501202   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:57.501413   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:13:00.000020   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:12:58.488706   62996 out.go:235]   - Booting up control plane ...
	I0914 18:12:58.488863   62996 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0914 18:12:58.496924   62996 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0914 18:12:58.499125   62996 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0914 18:12:58.500762   62996 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0914 18:12:58.504049   62996 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0914 18:12:58.604461   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:13:01.102988   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:13:02.001195   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:13:04.001938   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:13:03.603700   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:13:06.103294   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:13:06.501564   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:13:09.002049   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:13:08.604408   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:13:11.103401   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:13:14.788734   62554 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.2116254s)
	I0914 18:13:14.788816   62554 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 18:13:14.810488   62554 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0914 18:13:14.827773   62554 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0914 18:13:14.846933   62554 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0914 18:13:14.846958   62554 kubeadm.go:157] found existing configuration files:
	
	I0914 18:13:14.847011   62554 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0914 18:13:14.859886   62554 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0914 18:13:14.859954   62554 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0914 18:13:14.882400   62554 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0914 18:13:14.896700   62554 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0914 18:13:14.896779   62554 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0914 18:13:14.908567   62554 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0914 18:13:14.920718   62554 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0914 18:13:14.920791   62554 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0914 18:13:14.930849   62554 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0914 18:13:14.940757   62554 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0914 18:13:14.940829   62554 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0914 18:13:14.950828   62554 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0914 18:13:15.000219   62554 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0914 18:13:15.000292   62554 kubeadm.go:310] [preflight] Running pre-flight checks
	I0914 18:13:15.116662   62554 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0914 18:13:15.116830   62554 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0914 18:13:15.116937   62554 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0914 18:13:15.128493   62554 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0914 18:13:11.002219   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:13:13.500397   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:13:15.130231   62554 out.go:235]   - Generating certificates and keys ...
	I0914 18:13:15.130322   62554 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0914 18:13:15.130412   62554 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0914 18:13:15.130513   62554 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0914 18:13:15.130642   62554 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0914 18:13:15.130762   62554 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0914 18:13:15.130842   62554 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0914 18:13:15.130927   62554 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0914 18:13:15.131020   62554 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0914 18:13:15.131131   62554 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0914 18:13:15.131235   62554 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0914 18:13:15.131325   62554 kubeadm.go:310] [certs] Using the existing "sa" key
	I0914 18:13:15.131417   62554 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0914 18:13:15.454691   62554 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0914 18:13:15.653046   62554 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0914 18:13:15.704029   62554 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0914 18:13:15.846280   62554 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0914 18:13:15.926881   62554 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0914 18:13:15.927633   62554 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0914 18:13:15.932596   62554 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0914 18:13:13.602971   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:13:15.603335   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:13:15.934499   62554 out.go:235]   - Booting up control plane ...
	I0914 18:13:15.934626   62554 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0914 18:13:15.934761   62554 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0914 18:13:15.934913   62554 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0914 18:13:15.952982   62554 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0914 18:13:15.961449   62554 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0914 18:13:15.961526   62554 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0914 18:13:16.102126   62554 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0914 18:13:16.102335   62554 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0914 18:13:16.604217   62554 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.082287ms
	I0914 18:13:16.604330   62554 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
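	(For reference: the kubelet-check above polls a fixed localhost endpoint. As a rough illustration, not captured in this log, the same probe can be reproduced by hand on the node:)

	    curl -s http://127.0.0.1:10248/healthz ; echo    # kubelet health endpoint polled above; prints "ok" once the kubelet is healthy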
	I0914 18:13:15.501231   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:13:17.501427   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:13:19.501641   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:13:21.609408   62554 kubeadm.go:310] [api-check] The API server is healthy after 5.002255971s
	I0914 18:13:21.622798   62554 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0914 18:13:21.637103   62554 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0914 18:13:21.676498   62554 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0914 18:13:21.676739   62554 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-044534 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0914 18:13:21.697522   62554 kubeadm.go:310] [bootstrap-token] Using token: oo4rrp.xx4py1wjxiu1i6la
	I0914 18:13:17.604060   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:13:20.103115   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:13:21.699311   62554 out.go:235]   - Configuring RBAC rules ...
	I0914 18:13:21.699462   62554 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0914 18:13:21.711614   62554 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0914 18:13:21.721449   62554 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0914 18:13:21.727812   62554 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0914 18:13:21.733486   62554 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0914 18:13:21.747521   62554 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0914 18:13:22.014670   62554 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0914 18:13:22.463865   62554 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0914 18:13:23.016165   62554 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0914 18:13:23.016195   62554 kubeadm.go:310] 
	I0914 18:13:23.016257   62554 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0914 18:13:23.016265   62554 kubeadm.go:310] 
	I0914 18:13:23.016385   62554 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0914 18:13:23.016415   62554 kubeadm.go:310] 
	I0914 18:13:23.016456   62554 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0914 18:13:23.016542   62554 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0914 18:13:23.016627   62554 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0914 18:13:23.016637   62554 kubeadm.go:310] 
	I0914 18:13:23.016753   62554 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0914 18:13:23.016778   62554 kubeadm.go:310] 
	I0914 18:13:23.016850   62554 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0914 18:13:23.016860   62554 kubeadm.go:310] 
	I0914 18:13:23.016937   62554 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0914 18:13:23.017051   62554 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0914 18:13:23.017142   62554 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0914 18:13:23.017156   62554 kubeadm.go:310] 
	I0914 18:13:23.017284   62554 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0914 18:13:23.017403   62554 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0914 18:13:23.017419   62554 kubeadm.go:310] 
	I0914 18:13:23.017533   62554 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token oo4rrp.xx4py1wjxiu1i6la \
	I0914 18:13:23.017664   62554 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:30a8b6e98108730ec92a246ad3f4e746211e79f910581e46cd69c4cd78384600 \
	I0914 18:13:23.017700   62554 kubeadm.go:310] 	--control-plane 
	I0914 18:13:23.017710   62554 kubeadm.go:310] 
	I0914 18:13:23.017821   62554 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0914 18:13:23.017832   62554 kubeadm.go:310] 
	I0914 18:13:23.017944   62554 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token oo4rrp.xx4py1wjxiu1i6la \
	I0914 18:13:23.018104   62554 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:30a8b6e98108730ec92a246ad3f4e746211e79f910581e46cd69c4cd78384600 
	I0914 18:13:23.019098   62554 kubeadm.go:310] W0914 18:13:14.968906    2543 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0914 18:13:23.019512   62554 kubeadm.go:310] W0914 18:13:14.970621    2543 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0914 18:13:23.019672   62554 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
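	(The deprecation warnings above name their own remedy; on a machine with the kubeadm binary, the suggested migration is simply the quoted command, shown here for convenience and not run in this test:)

	    kubeadm config migrate --old-config old.yaml --new-config new.yaml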
	I0914 18:13:23.019690   62554 cni.go:84] Creating CNI manager for ""
	I0914 18:13:23.019704   62554 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0914 18:13:23.021459   62554 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0914 18:13:23.022517   62554 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0914 18:13:23.037352   62554 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
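	(The 496-byte file copied above is a bridge CNI conflist. A minimal sketch of that file format follows; the actual contents of 1-k8s.conflist are not reproduced in this log, and the subnet shown is an assumed example value:)

	    sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
	    {
	      "cniVersion": "0.3.1",
	      "name": "bridge",
	      "plugins": [
	        {
	          "type": "bridge",
	          "bridge": "bridge",
	          "isDefaultGateway": true,
	          "ipMasq": true,
	          "hairpinMode": true,
	          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	        },
	        { "type": "portmap", "capabilities": { "portMappings": true } }
	      ]
	    }
	    EOF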
	I0914 18:13:23.062037   62554 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0914 18:13:23.062132   62554 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 18:13:23.062202   62554 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-044534 minikube.k8s.io/updated_at=2024_09_14T18_13_23_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=fbeeb744274463b05401c917e5ab21bbaf5ef95a minikube.k8s.io/name=embed-certs-044534 minikube.k8s.io/primary=true
	I0914 18:13:23.089789   62554 ops.go:34] apiserver oom_adj: -16
	I0914 18:13:23.246478   62554 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 18:13:23.747419   62554 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 18:13:24.247388   62554 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 18:13:24.746913   62554 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 18:13:21.502222   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:13:24.001757   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:13:25.247445   62554 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 18:13:25.747417   62554 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 18:13:26.247440   62554 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 18:13:26.747262   62554 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 18:13:26.847454   62554 kubeadm.go:1113] duration metric: took 3.78538549s to wait for elevateKubeSystemPrivileges
	I0914 18:13:26.847496   62554 kubeadm.go:394] duration metric: took 4m56.896825398s to StartCluster
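	(The repeated "kubectl get sa default" runs above are a poll waiting for the default ServiceAccount to be created; a rough shell equivalent, not the code minikube actually executes, would be:)

	    until sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default \
	        --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	      sleep 0.5    # retry until the controller-manager has created the default ServiceAccount
	    done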
	I0914 18:13:26.847521   62554 settings.go:142] acquiring lock: {Name:mkee31cd57c3a91eff581c48ba7961460fb3e0b1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 18:13:26.847618   62554 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19643-8806/kubeconfig
	I0914 18:13:26.850148   62554 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19643-8806/kubeconfig: {Name:mk850f3e2f3c2ef1e81c795ae72e22e459e27140 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 18:13:26.850488   62554 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.126 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0914 18:13:26.850562   62554 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0914 18:13:26.850672   62554 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-044534"
	I0914 18:13:26.850690   62554 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-044534"
	W0914 18:13:26.850703   62554 addons.go:243] addon storage-provisioner should already be in state true
	I0914 18:13:26.850715   62554 addons.go:69] Setting default-storageclass=true in profile "embed-certs-044534"
	I0914 18:13:26.850734   62554 host.go:66] Checking if "embed-certs-044534" exists ...
	I0914 18:13:26.850753   62554 config.go:182] Loaded profile config "embed-certs-044534": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0914 18:13:26.850752   62554 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-044534"
	I0914 18:13:26.850716   62554 addons.go:69] Setting metrics-server=true in profile "embed-certs-044534"
	I0914 18:13:26.850844   62554 addons.go:234] Setting addon metrics-server=true in "embed-certs-044534"
	W0914 18:13:26.850860   62554 addons.go:243] addon metrics-server should already be in state true
	I0914 18:13:26.850898   62554 host.go:66] Checking if "embed-certs-044534" exists ...
	I0914 18:13:26.851174   62554 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 18:13:26.851204   62554 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 18:13:26.851214   62554 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 18:13:26.851235   62554 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 18:13:26.851250   62554 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 18:13:26.851273   62554 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 18:13:26.852030   62554 out.go:177] * Verifying Kubernetes components...
	I0914 18:13:26.853580   62554 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 18:13:26.868084   62554 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33879
	I0914 18:13:26.868135   62554 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44049
	I0914 18:13:26.868700   62554 main.go:141] libmachine: () Calling .GetVersion
	I0914 18:13:26.868787   62554 main.go:141] libmachine: () Calling .GetVersion
	I0914 18:13:26.869251   62554 main.go:141] libmachine: Using API Version  1
	I0914 18:13:26.869282   62554 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 18:13:26.869637   62554 main.go:141] libmachine: () Calling .GetMachineName
	I0914 18:13:26.869650   62554 main.go:141] libmachine: Using API Version  1
	I0914 18:13:26.869714   62554 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 18:13:26.870039   62554 main.go:141] libmachine: () Calling .GetMachineName
	I0914 18:13:26.870232   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetState
	I0914 18:13:26.870396   62554 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 18:13:26.870454   62554 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 18:13:26.871718   62554 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33295
	I0914 18:13:26.872337   62554 main.go:141] libmachine: () Calling .GetVersion
	I0914 18:13:26.872842   62554 main.go:141] libmachine: Using API Version  1
	I0914 18:13:26.872870   62554 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 18:13:26.873227   62554 main.go:141] libmachine: () Calling .GetMachineName
	I0914 18:13:26.873942   62554 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 18:13:26.873989   62554 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 18:13:26.874235   62554 addons.go:234] Setting addon default-storageclass=true in "embed-certs-044534"
	W0914 18:13:26.874257   62554 addons.go:243] addon default-storageclass should already be in state true
	I0914 18:13:26.874287   62554 host.go:66] Checking if "embed-certs-044534" exists ...
	I0914 18:13:26.874674   62554 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 18:13:26.874721   62554 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 18:13:26.887685   62554 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38275
	I0914 18:13:26.888211   62554 main.go:141] libmachine: () Calling .GetVersion
	I0914 18:13:26.888735   62554 main.go:141] libmachine: Using API Version  1
	I0914 18:13:26.888753   62554 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 18:13:26.889060   62554 main.go:141] libmachine: () Calling .GetMachineName
	I0914 18:13:26.889233   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetState
	I0914 18:13:26.891040   62554 main.go:141] libmachine: (embed-certs-044534) Calling .DriverName
	I0914 18:13:26.892012   62554 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41983
	I0914 18:13:26.892352   62554 main.go:141] libmachine: () Calling .GetVersion
	I0914 18:13:26.892798   62554 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 18:13:26.892812   62554 main.go:141] libmachine: Using API Version  1
	I0914 18:13:26.892845   62554 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 18:13:26.893321   62554 main.go:141] libmachine: () Calling .GetMachineName
	I0914 18:13:26.893987   62554 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 18:13:26.894040   62554 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 18:13:26.894059   62554 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0914 18:13:26.894078   62554 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0914 18:13:26.894102   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetSSHHostname
	I0914 18:13:26.897218   62554 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39143
	I0914 18:13:26.897776   62554 main.go:141] libmachine: () Calling .GetVersion
	I0914 18:13:26.897932   62554 main.go:141] libmachine: (embed-certs-044534) DBG | domain embed-certs-044534 has defined MAC address 52:54:00:f7:d3:8e in network mk-embed-certs-044534
	I0914 18:13:26.898631   62554 main.go:141] libmachine: (embed-certs-044534) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:d3:8e", ip: ""} in network mk-embed-certs-044534: {Iface:virbr3 ExpiryTime:2024-09-14 19:00:16 +0000 UTC Type:0 Mac:52:54:00:f7:d3:8e Iaid: IPaddr:192.168.50.126 Prefix:24 Hostname:embed-certs-044534 Clientid:01:52:54:00:f7:d3:8e}
	I0914 18:13:26.898669   62554 main.go:141] libmachine: (embed-certs-044534) DBG | domain embed-certs-044534 has defined IP address 192.168.50.126 and MAC address 52:54:00:f7:d3:8e in network mk-embed-certs-044534
	I0914 18:13:26.899315   62554 main.go:141] libmachine: Using API Version  1
	I0914 18:13:26.899382   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetSSHPort
	I0914 18:13:26.899395   62554 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 18:13:26.899557   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetSSHKeyPath
	I0914 18:13:26.899698   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetSSHUsername
	I0914 18:13:26.899873   62554 sshutil.go:53] new ssh client: &{IP:192.168.50.126 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/embed-certs-044534/id_rsa Username:docker}
	I0914 18:13:26.900433   62554 main.go:141] libmachine: () Calling .GetMachineName
	I0914 18:13:26.900668   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetState
	I0914 18:13:26.902863   62554 main.go:141] libmachine: (embed-certs-044534) Calling .DriverName
	I0914 18:13:26.904569   62554 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0914 18:13:22.104620   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:13:24.603793   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:13:26.604247   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:13:26.905708   62554 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0914 18:13:26.905729   62554 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0914 18:13:26.905755   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetSSHHostname
	I0914 18:13:26.910848   62554 main.go:141] libmachine: (embed-certs-044534) DBG | domain embed-certs-044534 has defined MAC address 52:54:00:f7:d3:8e in network mk-embed-certs-044534
	I0914 18:13:26.911333   62554 main.go:141] libmachine: (embed-certs-044534) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:d3:8e", ip: ""} in network mk-embed-certs-044534: {Iface:virbr3 ExpiryTime:2024-09-14 19:00:16 +0000 UTC Type:0 Mac:52:54:00:f7:d3:8e Iaid: IPaddr:192.168.50.126 Prefix:24 Hostname:embed-certs-044534 Clientid:01:52:54:00:f7:d3:8e}
	I0914 18:13:26.911430   62554 main.go:141] libmachine: (embed-certs-044534) DBG | domain embed-certs-044534 has defined IP address 192.168.50.126 and MAC address 52:54:00:f7:d3:8e in network mk-embed-certs-044534
	I0914 18:13:26.911568   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetSSHPort
	I0914 18:13:26.911840   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetSSHKeyPath
	I0914 18:13:26.912025   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetSSHUsername
	I0914 18:13:26.912238   62554 sshutil.go:53] new ssh client: &{IP:192.168.50.126 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/embed-certs-044534/id_rsa Username:docker}
	I0914 18:13:26.912625   62554 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33289
	I0914 18:13:26.913014   62554 main.go:141] libmachine: () Calling .GetVersion
	I0914 18:13:26.913653   62554 main.go:141] libmachine: Using API Version  1
	I0914 18:13:26.913668   62554 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 18:13:26.914116   62554 main.go:141] libmachine: () Calling .GetMachineName
	I0914 18:13:26.914342   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetState
	I0914 18:13:26.916119   62554 main.go:141] libmachine: (embed-certs-044534) Calling .DriverName
	I0914 18:13:26.916332   62554 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0914 18:13:26.916350   62554 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0914 18:13:26.916369   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetSSHHostname
	I0914 18:13:26.920129   62554 main.go:141] libmachine: (embed-certs-044534) DBG | domain embed-certs-044534 has defined MAC address 52:54:00:f7:d3:8e in network mk-embed-certs-044534
	I0914 18:13:26.920769   62554 main.go:141] libmachine: (embed-certs-044534) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:d3:8e", ip: ""} in network mk-embed-certs-044534: {Iface:virbr3 ExpiryTime:2024-09-14 19:00:16 +0000 UTC Type:0 Mac:52:54:00:f7:d3:8e Iaid: IPaddr:192.168.50.126 Prefix:24 Hostname:embed-certs-044534 Clientid:01:52:54:00:f7:d3:8e}
	I0914 18:13:26.920791   62554 main.go:141] libmachine: (embed-certs-044534) DBG | domain embed-certs-044534 has defined IP address 192.168.50.126 and MAC address 52:54:00:f7:d3:8e in network mk-embed-certs-044534
	I0914 18:13:26.920971   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetSSHPort
	I0914 18:13:26.921170   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetSSHKeyPath
	I0914 18:13:26.921291   62554 main.go:141] libmachine: (embed-certs-044534) Calling .GetSSHUsername
	I0914 18:13:26.921413   62554 sshutil.go:53] new ssh client: &{IP:192.168.50.126 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/embed-certs-044534/id_rsa Username:docker}
	I0914 18:13:27.055184   62554 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0914 18:13:27.072683   62554 node_ready.go:35] waiting up to 6m0s for node "embed-certs-044534" to be "Ready" ...
	I0914 18:13:27.084289   62554 node_ready.go:49] node "embed-certs-044534" has status "Ready":"True"
	I0914 18:13:27.084317   62554 node_ready.go:38] duration metric: took 11.599354ms for node "embed-certs-044534" to be "Ready" ...
	I0914 18:13:27.084326   62554 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 18:13:27.090428   62554 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-044534" in "kube-system" namespace to be "Ready" ...
	I0914 18:13:27.258854   62554 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0914 18:13:27.260576   62554 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0914 18:13:27.261092   62554 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0914 18:13:27.261115   62554 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0914 18:13:27.332882   62554 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0914 18:13:27.332914   62554 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0914 18:13:27.400159   62554 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0914 18:13:27.400193   62554 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0914 18:13:27.486731   62554 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0914 18:13:28.164139   62554 main.go:141] libmachine: Making call to close driver server
	I0914 18:13:28.164171   62554 main.go:141] libmachine: (embed-certs-044534) Calling .Close
	I0914 18:13:28.164215   62554 main.go:141] libmachine: Making call to close driver server
	I0914 18:13:28.164242   62554 main.go:141] libmachine: (embed-certs-044534) Calling .Close
	I0914 18:13:28.164581   62554 main.go:141] libmachine: Successfully made call to close driver server
	I0914 18:13:28.164593   62554 main.go:141] libmachine: Successfully made call to close driver server
	I0914 18:13:28.164596   62554 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 18:13:28.164597   62554 main.go:141] libmachine: (embed-certs-044534) DBG | Closing plugin on server side
	I0914 18:13:28.164608   62554 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 18:13:28.164569   62554 main.go:141] libmachine: (embed-certs-044534) DBG | Closing plugin on server side
	I0914 18:13:28.164619   62554 main.go:141] libmachine: Making call to close driver server
	I0914 18:13:28.164621   62554 main.go:141] libmachine: Making call to close driver server
	I0914 18:13:28.164627   62554 main.go:141] libmachine: (embed-certs-044534) Calling .Close
	I0914 18:13:28.164629   62554 main.go:141] libmachine: (embed-certs-044534) Calling .Close
	I0914 18:13:28.164874   62554 main.go:141] libmachine: Successfully made call to close driver server
	I0914 18:13:28.164897   62554 main.go:141] libmachine: (embed-certs-044534) DBG | Closing plugin on server side
	I0914 18:13:28.164911   62554 main.go:141] libmachine: (embed-certs-044534) DBG | Closing plugin on server side
	I0914 18:13:28.164902   62554 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 18:13:28.164929   62554 main.go:141] libmachine: Successfully made call to close driver server
	I0914 18:13:28.164941   62554 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 18:13:28.196171   62554 main.go:141] libmachine: Making call to close driver server
	I0914 18:13:28.196197   62554 main.go:141] libmachine: (embed-certs-044534) Calling .Close
	I0914 18:13:28.196530   62554 main.go:141] libmachine: Successfully made call to close driver server
	I0914 18:13:28.196590   62554 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 18:13:28.196634   62554 main.go:141] libmachine: (embed-certs-044534) DBG | Closing plugin on server side
	I0914 18:13:28.509915   62554 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.023114908s)
	I0914 18:13:28.509973   62554 main.go:141] libmachine: Making call to close driver server
	I0914 18:13:28.509989   62554 main.go:141] libmachine: (embed-certs-044534) Calling .Close
	I0914 18:13:28.510276   62554 main.go:141] libmachine: Successfully made call to close driver server
	I0914 18:13:28.510329   62554 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 18:13:28.510348   62554 main.go:141] libmachine: (embed-certs-044534) DBG | Closing plugin on server side
	I0914 18:13:28.510365   62554 main.go:141] libmachine: Making call to close driver server
	I0914 18:13:28.510374   62554 main.go:141] libmachine: (embed-certs-044534) Calling .Close
	I0914 18:13:28.510614   62554 main.go:141] libmachine: (embed-certs-044534) DBG | Closing plugin on server side
	I0914 18:13:28.510653   62554 main.go:141] libmachine: Successfully made call to close driver server
	I0914 18:13:28.510665   62554 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 18:13:28.510678   62554 addons.go:475] Verifying addon metrics-server=true in "embed-certs-044534"
	I0914 18:13:28.512283   62554 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0914 18:13:28.513593   62554 addons.go:510] duration metric: took 1.663035459s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
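	(With the three addons reported enabled above, a quick spot-check from the host, not part of this captured run, might look like:)

	    minikube -p embed-certs-044534 addons list | grep -E 'storage-provisioner|metrics-server|default-storageclass'
	    kubectl --context embed-certs-044534 -n kube-system get pods -o wide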
	I0914 18:13:29.103964   62554 pod_ready.go:103] pod "etcd-embed-certs-044534" in "kube-system" namespace has status "Ready":"False"
	I0914 18:13:26.501135   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:13:28.502181   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:13:28.605176   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:13:31.102817   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:13:31.596452   62554 pod_ready.go:103] pod "etcd-embed-certs-044534" in "kube-system" namespace has status "Ready":"False"
	I0914 18:13:33.596699   62554 pod_ready.go:103] pod "etcd-embed-certs-044534" in "kube-system" namespace has status "Ready":"False"
	I0914 18:13:31.001070   63448 pod_ready.go:103] pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace has status "Ready":"False"
	I0914 18:13:32.001946   63448 pod_ready.go:82] duration metric: took 4m0.00767403s for pod "metrics-server-6867b74b74-7v8dr" in "kube-system" namespace to be "Ready" ...
	E0914 18:13:32.001975   63448 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0914 18:13:32.001987   63448 pod_ready.go:39] duration metric: took 4m5.051544016s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
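	(When a metrics-server pod stays NotReady past the 4m0s wait as above, the usual next step is to inspect it directly. A hedged example follows; the k8s-app=metrics-server label selector is assumed from the minikube addon manifests:)

	    kubectl -n kube-system get pods -l k8s-app=metrics-server
	    kubectl -n kube-system describe pod -l k8s-app=metrics-server   # check Events for image-pull or probe failures
	    kubectl -n kube-system logs -l k8s-app=metrics-server --tail=50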
	I0914 18:13:32.002004   63448 api_server.go:52] waiting for apiserver process to appear ...
	I0914 18:13:32.002037   63448 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:13:32.002093   63448 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:13:32.053241   63448 cri.go:89] found id: "6c532e45713d01c37f899bd4c9308e6c7fceb95accae13da1549ffb195e44bc4"
	I0914 18:13:32.053276   63448 cri.go:89] found id: ""
	I0914 18:13:32.053287   63448 logs.go:276] 1 containers: [6c532e45713d01c37f899bd4c9308e6c7fceb95accae13da1549ffb195e44bc4]
	I0914 18:13:32.053349   63448 ssh_runner.go:195] Run: which crictl
	I0914 18:13:32.057854   63448 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:13:32.057921   63448 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:13:32.099294   63448 cri.go:89] found id: "7fb6567a7b9f36f21430774a159db944e9060eac3c8fdf9abc50b9c56a2b0377"
	I0914 18:13:32.099318   63448 cri.go:89] found id: ""
	I0914 18:13:32.099328   63448 logs.go:276] 1 containers: [7fb6567a7b9f36f21430774a159db944e9060eac3c8fdf9abc50b9c56a2b0377]
	I0914 18:13:32.099375   63448 ssh_runner.go:195] Run: which crictl
	I0914 18:13:32.103674   63448 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:13:32.103745   63448 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:13:32.144190   63448 cri.go:89] found id: "02a31bf75666cfcbe85dcb42259279071420c3b26de20d6d667bcd8ffef77d86"
	I0914 18:13:32.144219   63448 cri.go:89] found id: ""
	I0914 18:13:32.144228   63448 logs.go:276] 1 containers: [02a31bf75666cfcbe85dcb42259279071420c3b26de20d6d667bcd8ffef77d86]
	I0914 18:13:32.144275   63448 ssh_runner.go:195] Run: which crictl
	I0914 18:13:32.148382   63448 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:13:32.148443   63448 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:13:32.185779   63448 cri.go:89] found id: "a390e6c0153550b206976ab4e172cf3236aac7e9e5671c332d8c0f8c4e567e0b"
	I0914 18:13:32.185807   63448 cri.go:89] found id: ""
	I0914 18:13:32.185814   63448 logs.go:276] 1 containers: [a390e6c0153550b206976ab4e172cf3236aac7e9e5671c332d8c0f8c4e567e0b]
	I0914 18:13:32.185864   63448 ssh_runner.go:195] Run: which crictl
	I0914 18:13:32.189478   63448 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:13:32.189545   63448 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:13:32.224657   63448 cri.go:89] found id: "a5c3b65e96ba854d6ceb4a824ba5076d43cff0b7a1978617b69271d5b88cca4d"
	I0914 18:13:32.224681   63448 cri.go:89] found id: ""
	I0914 18:13:32.224690   63448 logs.go:276] 1 containers: [a5c3b65e96ba854d6ceb4a824ba5076d43cff0b7a1978617b69271d5b88cca4d]
	I0914 18:13:32.224745   63448 ssh_runner.go:195] Run: which crictl
	I0914 18:13:32.228421   63448 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:13:32.228494   63448 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:13:32.262491   63448 cri.go:89] found id: "09627c963da76d1a34f891fa3edf6c8c82f652f7308541d6f9083f5888f4bf94"
	I0914 18:13:32.262513   63448 cri.go:89] found id: ""
	I0914 18:13:32.262519   63448 logs.go:276] 1 containers: [09627c963da76d1a34f891fa3edf6c8c82f652f7308541d6f9083f5888f4bf94]
	I0914 18:13:32.262579   63448 ssh_runner.go:195] Run: which crictl
	I0914 18:13:32.266135   63448 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:13:32.266213   63448 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:13:32.300085   63448 cri.go:89] found id: ""
	I0914 18:13:32.300111   63448 logs.go:276] 0 containers: []
	W0914 18:13:32.300119   63448 logs.go:278] No container was found matching "kindnet"
	I0914 18:13:32.300124   63448 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0914 18:13:32.300181   63448 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0914 18:13:32.335359   63448 cri.go:89] found id: "be0aa9c1761410495159b478552996da9670b728a9efd2985ec5cad6759e8277"
	I0914 18:13:32.335379   63448 cri.go:89] found id: "b33f92ef722c8a6bde80bdbd3ff62a9b4b31f0bf548eec9aaadd4593a101017e"
	I0914 18:13:32.335387   63448 cri.go:89] found id: ""
	I0914 18:13:32.335393   63448 logs.go:276] 2 containers: [be0aa9c1761410495159b478552996da9670b728a9efd2985ec5cad6759e8277 b33f92ef722c8a6bde80bdbd3ff62a9b4b31f0bf548eec9aaadd4593a101017e]
	I0914 18:13:32.335451   63448 ssh_runner.go:195] Run: which crictl
	I0914 18:13:32.339404   63448 ssh_runner.go:195] Run: which crictl
	I0914 18:13:32.343173   63448 logs.go:123] Gathering logs for kube-proxy [a5c3b65e96ba854d6ceb4a824ba5076d43cff0b7a1978617b69271d5b88cca4d] ...
	I0914 18:13:32.343203   63448 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a5c3b65e96ba854d6ceb4a824ba5076d43cff0b7a1978617b69271d5b88cca4d"
	I0914 18:13:32.378987   63448 logs.go:123] Gathering logs for storage-provisioner [b33f92ef722c8a6bde80bdbd3ff62a9b4b31f0bf548eec9aaadd4593a101017e] ...
	I0914 18:13:32.379016   63448 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b33f92ef722c8a6bde80bdbd3ff62a9b4b31f0bf548eec9aaadd4593a101017e"
	I0914 18:13:32.418829   63448 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:13:32.418855   63448 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:13:32.941046   63448 logs.go:123] Gathering logs for kube-apiserver [6c532e45713d01c37f899bd4c9308e6c7fceb95accae13da1549ffb195e44bc4] ...
	I0914 18:13:32.941102   63448 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6c532e45713d01c37f899bd4c9308e6c7fceb95accae13da1549ffb195e44bc4"
	I0914 18:13:32.998148   63448 logs.go:123] Gathering logs for coredns [02a31bf75666cfcbe85dcb42259279071420c3b26de20d6d667bcd8ffef77d86] ...
	I0914 18:13:32.998209   63448 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 02a31bf75666cfcbe85dcb42259279071420c3b26de20d6d667bcd8ffef77d86"
	I0914 18:13:33.041208   63448 logs.go:123] Gathering logs for kube-scheduler [a390e6c0153550b206976ab4e172cf3236aac7e9e5671c332d8c0f8c4e567e0b] ...
	I0914 18:13:33.041241   63448 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a390e6c0153550b206976ab4e172cf3236aac7e9e5671c332d8c0f8c4e567e0b"
	I0914 18:13:33.080774   63448 logs.go:123] Gathering logs for etcd [7fb6567a7b9f36f21430774a159db944e9060eac3c8fdf9abc50b9c56a2b0377] ...
	I0914 18:13:33.080806   63448 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7fb6567a7b9f36f21430774a159db944e9060eac3c8fdf9abc50b9c56a2b0377"
	I0914 18:13:33.130519   63448 logs.go:123] Gathering logs for kube-controller-manager [09627c963da76d1a34f891fa3edf6c8c82f652f7308541d6f9083f5888f4bf94] ...
	I0914 18:13:33.130552   63448 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 09627c963da76d1a34f891fa3edf6c8c82f652f7308541d6f9083f5888f4bf94"
	I0914 18:13:33.182751   63448 logs.go:123] Gathering logs for storage-provisioner [be0aa9c1761410495159b478552996da9670b728a9efd2985ec5cad6759e8277] ...
	I0914 18:13:33.182788   63448 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 be0aa9c1761410495159b478552996da9670b728a9efd2985ec5cad6759e8277"
	I0914 18:13:33.222008   63448 logs.go:123] Gathering logs for container status ...
	I0914 18:13:33.222053   63448 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:13:33.263100   63448 logs.go:123] Gathering logs for kubelet ...
	I0914 18:13:33.263137   63448 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:13:33.330307   63448 logs.go:123] Gathering logs for dmesg ...
	I0914 18:13:33.330343   63448 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:13:33.344658   63448 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:13:33.344687   63448 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 18:13:35.597157   62554 pod_ready.go:93] pod "etcd-embed-certs-044534" in "kube-system" namespace has status "Ready":"True"
	I0914 18:13:35.597179   62554 pod_ready.go:82] duration metric: took 8.50672651s for pod "etcd-embed-certs-044534" in "kube-system" namespace to be "Ready" ...
	I0914 18:13:35.597189   62554 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-044534" in "kube-system" namespace to be "Ready" ...
	I0914 18:13:36.604147   62554 pod_ready.go:93] pod "kube-apiserver-embed-certs-044534" in "kube-system" namespace has status "Ready":"True"
	I0914 18:13:36.604179   62554 pod_ready.go:82] duration metric: took 1.006982094s for pod "kube-apiserver-embed-certs-044534" in "kube-system" namespace to be "Ready" ...
	I0914 18:13:36.604192   62554 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-044534" in "kube-system" namespace to be "Ready" ...
	I0914 18:13:36.610278   62554 pod_ready.go:93] pod "kube-controller-manager-embed-certs-044534" in "kube-system" namespace has status "Ready":"True"
	I0914 18:13:36.610302   62554 pod_ready.go:82] duration metric: took 6.101843ms for pod "kube-controller-manager-embed-certs-044534" in "kube-system" namespace to be "Ready" ...
	I0914 18:13:36.610315   62554 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-044534" in "kube-system" namespace to be "Ready" ...
	I0914 18:13:36.615527   62554 pod_ready.go:93] pod "kube-scheduler-embed-certs-044534" in "kube-system" namespace has status "Ready":"True"
	I0914 18:13:36.615549   62554 pod_ready.go:82] duration metric: took 5.226206ms for pod "kube-scheduler-embed-certs-044534" in "kube-system" namespace to be "Ready" ...
	I0914 18:13:36.615559   62554 pod_ready.go:39] duration metric: took 9.531222215s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 18:13:36.615587   62554 api_server.go:52] waiting for apiserver process to appear ...
	I0914 18:13:36.615642   62554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:13:36.630381   62554 api_server.go:72] duration metric: took 9.779851335s to wait for apiserver process to appear ...
	I0914 18:13:36.630414   62554 api_server.go:88] waiting for apiserver healthz status ...
	I0914 18:13:36.630438   62554 api_server.go:253] Checking apiserver healthz at https://192.168.50.126:8443/healthz ...
	I0914 18:13:36.637559   62554 api_server.go:279] https://192.168.50.126:8443/healthz returned 200:
	ok
	I0914 18:13:36.639973   62554 api_server.go:141] control plane version: v1.31.1
	I0914 18:13:36.639999   62554 api_server.go:131] duration metric: took 9.577574ms to wait for apiserver health ...
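	(The healthz probe above hits the apiserver URL directly; the same check can be made by hand against the endpoint shown, or through kubectl with the configured kubeconfig:)

	    curl -sk https://192.168.50.126:8443/healthz ; echo
	    kubectl --context embed-certs-044534 get --raw /healthz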
	I0914 18:13:36.640006   62554 system_pods.go:43] waiting for kube-system pods to appear ...
	I0914 18:13:36.647412   62554 system_pods.go:59] 9 kube-system pods found
	I0914 18:13:36.647443   62554 system_pods.go:61] "coredns-7c65d6cfc9-67dsl" [480fe6ea-838d-4048-9893-947d43e7b5c9] Running
	I0914 18:13:36.647448   62554 system_pods.go:61] "coredns-7c65d6cfc9-9j6sv" [87c28a4b-015e-46b8-a462-9dc6ed06d914] Running
	I0914 18:13:36.647452   62554 system_pods.go:61] "etcd-embed-certs-044534" [a9533b38-298e-4435-aaf2-262bdf629832] Running
	I0914 18:13:36.647456   62554 system_pods.go:61] "kube-apiserver-embed-certs-044534" [28ee0ec6-8ede-447a-b4eb-1db9eaeb76fb] Running
	I0914 18:13:36.647459   62554 system_pods.go:61] "kube-controller-manager-embed-certs-044534" [0fd6fe49-8994-48de-8e57-a2420f12b47d] Running
	I0914 18:13:36.647463   62554 system_pods.go:61] "kube-proxy-26fx6" [1cb48201-6caf-4787-9e27-a55885a8ae2a] Running
	I0914 18:13:36.647465   62554 system_pods.go:61] "kube-scheduler-embed-certs-044534" [6c1535f8-267b-401c-a93e-0ac057c75047] Running
	I0914 18:13:36.647471   62554 system_pods.go:61] "metrics-server-6867b74b74-rrfnt" [a1deacaa-9b90-49ac-8b8b-0bd909b5f6e0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 18:13:36.647475   62554 system_pods.go:61] "storage-provisioner" [dec7a14c-b6f7-464f-86b3-5f7d8063d8e0] Running
	I0914 18:13:36.647483   62554 system_pods.go:74] duration metric: took 7.47115ms to wait for pod list to return data ...
	I0914 18:13:36.647490   62554 default_sa.go:34] waiting for default service account to be created ...
	I0914 18:13:36.650678   62554 default_sa.go:45] found service account: "default"
	I0914 18:13:36.650722   62554 default_sa.go:55] duration metric: took 3.225438ms for default service account to be created ...
	I0914 18:13:36.650733   62554 system_pods.go:116] waiting for k8s-apps to be running ...
	I0914 18:13:36.656461   62554 system_pods.go:86] 9 kube-system pods found
	I0914 18:13:36.656489   62554 system_pods.go:89] "coredns-7c65d6cfc9-67dsl" [480fe6ea-838d-4048-9893-947d43e7b5c9] Running
	I0914 18:13:36.656495   62554 system_pods.go:89] "coredns-7c65d6cfc9-9j6sv" [87c28a4b-015e-46b8-a462-9dc6ed06d914] Running
	I0914 18:13:36.656499   62554 system_pods.go:89] "etcd-embed-certs-044534" [a9533b38-298e-4435-aaf2-262bdf629832] Running
	I0914 18:13:36.656503   62554 system_pods.go:89] "kube-apiserver-embed-certs-044534" [28ee0ec6-8ede-447a-b4eb-1db9eaeb76fb] Running
	I0914 18:13:36.656507   62554 system_pods.go:89] "kube-controller-manager-embed-certs-044534" [0fd6fe49-8994-48de-8e57-a2420f12b47d] Running
	I0914 18:13:36.656512   62554 system_pods.go:89] "kube-proxy-26fx6" [1cb48201-6caf-4787-9e27-a55885a8ae2a] Running
	I0914 18:13:36.656516   62554 system_pods.go:89] "kube-scheduler-embed-certs-044534" [6c1535f8-267b-401c-a93e-0ac057c75047] Running
	I0914 18:13:36.656522   62554 system_pods.go:89] "metrics-server-6867b74b74-rrfnt" [a1deacaa-9b90-49ac-8b8b-0bd909b5f6e0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 18:13:36.656525   62554 system_pods.go:89] "storage-provisioner" [dec7a14c-b6f7-464f-86b3-5f7d8063d8e0] Running
	I0914 18:13:36.656534   62554 system_pods.go:126] duration metric: took 5.795433ms to wait for k8s-apps to be running ...
	I0914 18:13:36.656541   62554 system_svc.go:44] waiting for kubelet service to be running ....
	I0914 18:13:36.656586   62554 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 18:13:36.673166   62554 system_svc.go:56] duration metric: took 16.609444ms WaitForService to wait for kubelet
	I0914 18:13:36.673205   62554 kubeadm.go:582] duration metric: took 9.822681909s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0914 18:13:36.673227   62554 node_conditions.go:102] verifying NodePressure condition ...
	I0914 18:13:36.794984   62554 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0914 18:13:36.795013   62554 node_conditions.go:123] node cpu capacity is 2
	I0914 18:13:36.795024   62554 node_conditions.go:105] duration metric: took 121.79122ms to run NodePressure ...
	I0914 18:13:36.795038   62554 start.go:241] waiting for startup goroutines ...
	I0914 18:13:36.795047   62554 start.go:246] waiting for cluster config update ...
	I0914 18:13:36.795060   62554 start.go:255] writing updated cluster config ...
	I0914 18:13:36.795406   62554 ssh_runner.go:195] Run: rm -f paused
	I0914 18:13:36.847454   62554 start.go:600] kubectl: 1.31.0, cluster: 1.31.1 (minor skew: 0)
	I0914 18:13:36.849605   62554 out.go:177] * Done! kubectl is now configured to use "embed-certs-044534" cluster and "default" namespace by default
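	(With kubectl pointed at the new cluster as reported above, typical first commands, not part of this run, would be:)

	    kubectl config current-context      # should print embed-certs-044534
	    kubectl get nodes -o wide
	    kubectl -n kube-system get pods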
	I0914 18:13:33.105197   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:13:35.604458   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:13:35.989800   63448 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:13:36.006371   63448 api_server.go:72] duration metric: took 4m14.310539233s to wait for apiserver process to appear ...
	I0914 18:13:36.006405   63448 api_server.go:88] waiting for apiserver healthz status ...
	I0914 18:13:36.006446   63448 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:13:36.006508   63448 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:13:36.044973   63448 cri.go:89] found id: "6c532e45713d01c37f899bd4c9308e6c7fceb95accae13da1549ffb195e44bc4"
	I0914 18:13:36.044992   63448 cri.go:89] found id: ""
	I0914 18:13:36.045000   63448 logs.go:276] 1 containers: [6c532e45713d01c37f899bd4c9308e6c7fceb95accae13da1549ffb195e44bc4]
	I0914 18:13:36.045055   63448 ssh_runner.go:195] Run: which crictl
	I0914 18:13:36.049371   63448 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:13:36.049449   63448 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:13:36.097114   63448 cri.go:89] found id: "7fb6567a7b9f36f21430774a159db944e9060eac3c8fdf9abc50b9c56a2b0377"
	I0914 18:13:36.097139   63448 cri.go:89] found id: ""
	I0914 18:13:36.097148   63448 logs.go:276] 1 containers: [7fb6567a7b9f36f21430774a159db944e9060eac3c8fdf9abc50b9c56a2b0377]
	I0914 18:13:36.097212   63448 ssh_runner.go:195] Run: which crictl
	I0914 18:13:36.102084   63448 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:13:36.102153   63448 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:13:36.140640   63448 cri.go:89] found id: "02a31bf75666cfcbe85dcb42259279071420c3b26de20d6d667bcd8ffef77d86"
	I0914 18:13:36.140662   63448 cri.go:89] found id: ""
	I0914 18:13:36.140671   63448 logs.go:276] 1 containers: [02a31bf75666cfcbe85dcb42259279071420c3b26de20d6d667bcd8ffef77d86]
	I0914 18:13:36.140728   63448 ssh_runner.go:195] Run: which crictl
	I0914 18:13:36.144624   63448 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:13:36.144696   63448 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:13:36.179135   63448 cri.go:89] found id: "a390e6c0153550b206976ab4e172cf3236aac7e9e5671c332d8c0f8c4e567e0b"
	I0914 18:13:36.179156   63448 cri.go:89] found id: ""
	I0914 18:13:36.179163   63448 logs.go:276] 1 containers: [a390e6c0153550b206976ab4e172cf3236aac7e9e5671c332d8c0f8c4e567e0b]
	I0914 18:13:36.179216   63448 ssh_runner.go:195] Run: which crictl
	I0914 18:13:36.183050   63448 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:13:36.183110   63448 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:13:36.222739   63448 cri.go:89] found id: "a5c3b65e96ba854d6ceb4a824ba5076d43cff0b7a1978617b69271d5b88cca4d"
	I0914 18:13:36.222758   63448 cri.go:89] found id: ""
	I0914 18:13:36.222765   63448 logs.go:276] 1 containers: [a5c3b65e96ba854d6ceb4a824ba5076d43cff0b7a1978617b69271d5b88cca4d]
	I0914 18:13:36.222812   63448 ssh_runner.go:195] Run: which crictl
	I0914 18:13:36.226715   63448 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:13:36.226782   63448 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:13:36.261587   63448 cri.go:89] found id: "09627c963da76d1a34f891fa3edf6c8c82f652f7308541d6f9083f5888f4bf94"
	I0914 18:13:36.261610   63448 cri.go:89] found id: ""
	I0914 18:13:36.261617   63448 logs.go:276] 1 containers: [09627c963da76d1a34f891fa3edf6c8c82f652f7308541d6f9083f5888f4bf94]
	I0914 18:13:36.261664   63448 ssh_runner.go:195] Run: which crictl
	I0914 18:13:36.265541   63448 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:13:36.265614   63448 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:13:36.301521   63448 cri.go:89] found id: ""
	I0914 18:13:36.301546   63448 logs.go:276] 0 containers: []
	W0914 18:13:36.301554   63448 logs.go:278] No container was found matching "kindnet"
	I0914 18:13:36.301560   63448 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0914 18:13:36.301622   63448 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0914 18:13:36.335332   63448 cri.go:89] found id: "be0aa9c1761410495159b478552996da9670b728a9efd2985ec5cad6759e8277"
	I0914 18:13:36.335355   63448 cri.go:89] found id: "b33f92ef722c8a6bde80bdbd3ff62a9b4b31f0bf548eec9aaadd4593a101017e"
	I0914 18:13:36.335358   63448 cri.go:89] found id: ""
	I0914 18:13:36.335365   63448 logs.go:276] 2 containers: [be0aa9c1761410495159b478552996da9670b728a9efd2985ec5cad6759e8277 b33f92ef722c8a6bde80bdbd3ff62a9b4b31f0bf548eec9aaadd4593a101017e]
	I0914 18:13:36.335415   63448 ssh_runner.go:195] Run: which crictl
	I0914 18:13:36.339542   63448 ssh_runner.go:195] Run: which crictl
	I0914 18:13:36.343543   63448 logs.go:123] Gathering logs for etcd [7fb6567a7b9f36f21430774a159db944e9060eac3c8fdf9abc50b9c56a2b0377] ...
	I0914 18:13:36.343570   63448 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7fb6567a7b9f36f21430774a159db944e9060eac3c8fdf9abc50b9c56a2b0377"
	I0914 18:13:36.384224   63448 logs.go:123] Gathering logs for coredns [02a31bf75666cfcbe85dcb42259279071420c3b26de20d6d667bcd8ffef77d86] ...
	I0914 18:13:36.384259   63448 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 02a31bf75666cfcbe85dcb42259279071420c3b26de20d6d667bcd8ffef77d86"
	I0914 18:13:36.428010   63448 logs.go:123] Gathering logs for kube-scheduler [a390e6c0153550b206976ab4e172cf3236aac7e9e5671c332d8c0f8c4e567e0b] ...
	I0914 18:13:36.428041   63448 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a390e6c0153550b206976ab4e172cf3236aac7e9e5671c332d8c0f8c4e567e0b"
	I0914 18:13:36.469679   63448 logs.go:123] Gathering logs for storage-provisioner [be0aa9c1761410495159b478552996da9670b728a9efd2985ec5cad6759e8277] ...
	I0914 18:13:36.469708   63448 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 be0aa9c1761410495159b478552996da9670b728a9efd2985ec5cad6759e8277"
	I0914 18:13:36.507570   63448 logs.go:123] Gathering logs for storage-provisioner [b33f92ef722c8a6bde80bdbd3ff62a9b4b31f0bf548eec9aaadd4593a101017e] ...
	I0914 18:13:36.507597   63448 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b33f92ef722c8a6bde80bdbd3ff62a9b4b31f0bf548eec9aaadd4593a101017e"
	I0914 18:13:36.543300   63448 logs.go:123] Gathering logs for kubelet ...
	I0914 18:13:36.543335   63448 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:13:36.619060   63448 logs.go:123] Gathering logs for dmesg ...
	I0914 18:13:36.619084   63448 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:13:36.633542   63448 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:13:36.633572   63448 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 18:13:36.741334   63448 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:13:36.741370   63448 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:13:37.231208   63448 logs.go:123] Gathering logs for container status ...
	I0914 18:13:37.231255   63448 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:13:37.278835   63448 logs.go:123] Gathering logs for kube-apiserver [6c532e45713d01c37f899bd4c9308e6c7fceb95accae13da1549ffb195e44bc4] ...
	I0914 18:13:37.278863   63448 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6c532e45713d01c37f899bd4c9308e6c7fceb95accae13da1549ffb195e44bc4"
	I0914 18:13:37.320359   63448 logs.go:123] Gathering logs for kube-proxy [a5c3b65e96ba854d6ceb4a824ba5076d43cff0b7a1978617b69271d5b88cca4d] ...
	I0914 18:13:37.320399   63448 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a5c3b65e96ba854d6ceb4a824ba5076d43cff0b7a1978617b69271d5b88cca4d"
	I0914 18:13:37.357940   63448 logs.go:123] Gathering logs for kube-controller-manager [09627c963da76d1a34f891fa3edf6c8c82f652f7308541d6f9083f5888f4bf94] ...
	I0914 18:13:37.357974   63448 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 09627c963da76d1a34f891fa3edf6c8c82f652f7308541d6f9083f5888f4bf94"
	I0914 18:13:39.913586   63448 api_server.go:253] Checking apiserver healthz at https://192.168.61.38:8444/healthz ...
	I0914 18:13:39.917590   63448 api_server.go:279] https://192.168.61.38:8444/healthz returned 200:
	ok
	I0914 18:13:39.918633   63448 api_server.go:141] control plane version: v1.31.1
	I0914 18:13:39.918653   63448 api_server.go:131] duration metric: took 3.912241678s to wait for apiserver health ...
	I0914 18:13:39.918660   63448 system_pods.go:43] waiting for kube-system pods to appear ...
	I0914 18:13:39.918682   63448 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:13:39.918727   63448 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:13:39.961919   63448 cri.go:89] found id: "6c532e45713d01c37f899bd4c9308e6c7fceb95accae13da1549ffb195e44bc4"
	I0914 18:13:39.961947   63448 cri.go:89] found id: ""
	I0914 18:13:39.961956   63448 logs.go:276] 1 containers: [6c532e45713d01c37f899bd4c9308e6c7fceb95accae13da1549ffb195e44bc4]
	I0914 18:13:39.962012   63448 ssh_runner.go:195] Run: which crictl
	I0914 18:13:39.965756   63448 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:13:39.965838   63448 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:13:40.008044   63448 cri.go:89] found id: "7fb6567a7b9f36f21430774a159db944e9060eac3c8fdf9abc50b9c56a2b0377"
	I0914 18:13:40.008066   63448 cri.go:89] found id: ""
	I0914 18:13:40.008074   63448 logs.go:276] 1 containers: [7fb6567a7b9f36f21430774a159db944e9060eac3c8fdf9abc50b9c56a2b0377]
	I0914 18:13:40.008117   63448 ssh_runner.go:195] Run: which crictl
	I0914 18:13:40.012505   63448 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:13:40.012569   63448 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:13:40.059166   63448 cri.go:89] found id: "02a31bf75666cfcbe85dcb42259279071420c3b26de20d6d667bcd8ffef77d86"
	I0914 18:13:40.059194   63448 cri.go:89] found id: ""
	I0914 18:13:40.059204   63448 logs.go:276] 1 containers: [02a31bf75666cfcbe85dcb42259279071420c3b26de20d6d667bcd8ffef77d86]
	I0914 18:13:40.059267   63448 ssh_runner.go:195] Run: which crictl
	I0914 18:13:40.063135   63448 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:13:40.063197   63448 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:13:40.105220   63448 cri.go:89] found id: "a390e6c0153550b206976ab4e172cf3236aac7e9e5671c332d8c0f8c4e567e0b"
	I0914 18:13:40.105245   63448 cri.go:89] found id: ""
	I0914 18:13:40.105255   63448 logs.go:276] 1 containers: [a390e6c0153550b206976ab4e172cf3236aac7e9e5671c332d8c0f8c4e567e0b]
	I0914 18:13:40.105308   63448 ssh_runner.go:195] Run: which crictl
	I0914 18:13:40.109907   63448 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:13:40.109978   63448 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:13:40.146307   63448 cri.go:89] found id: "a5c3b65e96ba854d6ceb4a824ba5076d43cff0b7a1978617b69271d5b88cca4d"
	I0914 18:13:40.146337   63448 cri.go:89] found id: ""
	I0914 18:13:40.146349   63448 logs.go:276] 1 containers: [a5c3b65e96ba854d6ceb4a824ba5076d43cff0b7a1978617b69271d5b88cca4d]
	I0914 18:13:40.146396   63448 ssh_runner.go:195] Run: which crictl
	I0914 18:13:40.150369   63448 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:13:40.150436   63448 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:13:40.185274   63448 cri.go:89] found id: "09627c963da76d1a34f891fa3edf6c8c82f652f7308541d6f9083f5888f4bf94"
	I0914 18:13:40.185301   63448 cri.go:89] found id: ""
	I0914 18:13:40.185312   63448 logs.go:276] 1 containers: [09627c963da76d1a34f891fa3edf6c8c82f652f7308541d6f9083f5888f4bf94]
	I0914 18:13:40.185374   63448 ssh_runner.go:195] Run: which crictl
	I0914 18:13:40.189425   63448 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:13:40.189499   63448 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:13:40.223289   63448 cri.go:89] found id: ""
	I0914 18:13:40.223311   63448 logs.go:276] 0 containers: []
	W0914 18:13:40.223319   63448 logs.go:278] No container was found matching "kindnet"
	I0914 18:13:40.223324   63448 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0914 18:13:40.223369   63448 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0914 18:13:40.257779   63448 cri.go:89] found id: "be0aa9c1761410495159b478552996da9670b728a9efd2985ec5cad6759e8277"
	I0914 18:13:40.257805   63448 cri.go:89] found id: "b33f92ef722c8a6bde80bdbd3ff62a9b4b31f0bf548eec9aaadd4593a101017e"
	I0914 18:13:40.257811   63448 cri.go:89] found id: ""
	I0914 18:13:40.257820   63448 logs.go:276] 2 containers: [be0aa9c1761410495159b478552996da9670b728a9efd2985ec5cad6759e8277 b33f92ef722c8a6bde80bdbd3ff62a9b4b31f0bf548eec9aaadd4593a101017e]
	I0914 18:13:40.257880   63448 ssh_runner.go:195] Run: which crictl
	I0914 18:13:40.262388   63448 ssh_runner.go:195] Run: which crictl
	I0914 18:13:40.266233   63448 logs.go:123] Gathering logs for kube-apiserver [6c532e45713d01c37f899bd4c9308e6c7fceb95accae13da1549ffb195e44bc4] ...
	I0914 18:13:40.266258   63448 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6c532e45713d01c37f899bd4c9308e6c7fceb95accae13da1549ffb195e44bc4"
	I0914 18:13:38.505090   62996 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0914 18:13:38.505605   62996 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0914 18:13:38.505837   62996 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0914 18:13:38.105234   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:13:40.604049   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:13:40.310145   63448 logs.go:123] Gathering logs for etcd [7fb6567a7b9f36f21430774a159db944e9060eac3c8fdf9abc50b9c56a2b0377] ...
	I0914 18:13:40.310188   63448 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7fb6567a7b9f36f21430774a159db944e9060eac3c8fdf9abc50b9c56a2b0377"
	I0914 18:13:40.358651   63448 logs.go:123] Gathering logs for coredns [02a31bf75666cfcbe85dcb42259279071420c3b26de20d6d667bcd8ffef77d86] ...
	I0914 18:13:40.358686   63448 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 02a31bf75666cfcbe85dcb42259279071420c3b26de20d6d667bcd8ffef77d86"
	I0914 18:13:40.398107   63448 logs.go:123] Gathering logs for kube-scheduler [a390e6c0153550b206976ab4e172cf3236aac7e9e5671c332d8c0f8c4e567e0b] ...
	I0914 18:13:40.398144   63448 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a390e6c0153550b206976ab4e172cf3236aac7e9e5671c332d8c0f8c4e567e0b"
	I0914 18:13:40.450540   63448 logs.go:123] Gathering logs for dmesg ...
	I0914 18:13:40.450573   63448 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:13:40.465987   63448 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:13:40.466013   63448 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 18:13:40.573299   63448 logs.go:123] Gathering logs for kube-proxy [a5c3b65e96ba854d6ceb4a824ba5076d43cff0b7a1978617b69271d5b88cca4d] ...
	I0914 18:13:40.573333   63448 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a5c3b65e96ba854d6ceb4a824ba5076d43cff0b7a1978617b69271d5b88cca4d"
	I0914 18:13:40.618201   63448 logs.go:123] Gathering logs for kube-controller-manager [09627c963da76d1a34f891fa3edf6c8c82f652f7308541d6f9083f5888f4bf94] ...
	I0914 18:13:40.618247   63448 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 09627c963da76d1a34f891fa3edf6c8c82f652f7308541d6f9083f5888f4bf94"
	I0914 18:13:40.671259   63448 logs.go:123] Gathering logs for storage-provisioner [be0aa9c1761410495159b478552996da9670b728a9efd2985ec5cad6759e8277] ...
	I0914 18:13:40.671304   63448 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 be0aa9c1761410495159b478552996da9670b728a9efd2985ec5cad6759e8277"
	I0914 18:13:40.708455   63448 logs.go:123] Gathering logs for storage-provisioner [b33f92ef722c8a6bde80bdbd3ff62a9b4b31f0bf548eec9aaadd4593a101017e] ...
	I0914 18:13:40.708488   63448 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b33f92ef722c8a6bde80bdbd3ff62a9b4b31f0bf548eec9aaadd4593a101017e"
	I0914 18:13:40.746662   63448 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:13:40.746696   63448 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:13:41.108968   63448 logs.go:123] Gathering logs for container status ...
	I0914 18:13:41.109017   63448 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:13:41.150925   63448 logs.go:123] Gathering logs for kubelet ...
	I0914 18:13:41.150968   63448 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:13:43.725606   63448 system_pods.go:59] 8 kube-system pods found
	I0914 18:13:43.725642   63448 system_pods.go:61] "coredns-7c65d6cfc9-8v8s7" [896b4fde-d17e-43a3-b7c8-b710e2e70e2c] Running
	I0914 18:13:43.725650   63448 system_pods.go:61] "etcd-default-k8s-diff-port-243449" [9201493d-45db-44f4-948d-34e1d1ddee8f] Running
	I0914 18:13:43.725656   63448 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-243449" [052b85bf-0f3b-4ace-9301-99e53f91cfcf] Running
	I0914 18:13:43.725661   63448 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-243449" [51214b37-f5e8-4037-83ff-1fd09b93e008] Running
	I0914 18:13:43.725665   63448 system_pods.go:61] "kube-proxy-gbkqm" [4308aacf-ea0a-4bba-8598-85ffaf959b7e] Running
	I0914 18:13:43.725670   63448 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-243449" [b6eacf30-47ec-4f8d-968c-195661a2a732] Running
	I0914 18:13:43.725680   63448 system_pods.go:61] "metrics-server-6867b74b74-7v8dr" [90be95af-c779-4b31-b261-2c4020a34280] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 18:13:43.725687   63448 system_pods.go:61] "storage-provisioner" [2e814601-a19a-4848-bed5-d9a29ffb3b5d] Running
	I0914 18:13:43.725699   63448 system_pods.go:74] duration metric: took 3.807031642s to wait for pod list to return data ...
	I0914 18:13:43.725710   63448 default_sa.go:34] waiting for default service account to be created ...
	I0914 18:13:43.728384   63448 default_sa.go:45] found service account: "default"
	I0914 18:13:43.728409   63448 default_sa.go:55] duration metric: took 2.691817ms for default service account to be created ...
	I0914 18:13:43.728417   63448 system_pods.go:116] waiting for k8s-apps to be running ...
	I0914 18:13:43.732884   63448 system_pods.go:86] 8 kube-system pods found
	I0914 18:13:43.732913   63448 system_pods.go:89] "coredns-7c65d6cfc9-8v8s7" [896b4fde-d17e-43a3-b7c8-b710e2e70e2c] Running
	I0914 18:13:43.732918   63448 system_pods.go:89] "etcd-default-k8s-diff-port-243449" [9201493d-45db-44f4-948d-34e1d1ddee8f] Running
	I0914 18:13:43.732922   63448 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-243449" [052b85bf-0f3b-4ace-9301-99e53f91cfcf] Running
	I0914 18:13:43.732926   63448 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-243449" [51214b37-f5e8-4037-83ff-1fd09b93e008] Running
	I0914 18:13:43.732931   63448 system_pods.go:89] "kube-proxy-gbkqm" [4308aacf-ea0a-4bba-8598-85ffaf959b7e] Running
	I0914 18:13:43.732935   63448 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-243449" [b6eacf30-47ec-4f8d-968c-195661a2a732] Running
	I0914 18:13:43.732942   63448 system_pods.go:89] "metrics-server-6867b74b74-7v8dr" [90be95af-c779-4b31-b261-2c4020a34280] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 18:13:43.732947   63448 system_pods.go:89] "storage-provisioner" [2e814601-a19a-4848-bed5-d9a29ffb3b5d] Running
	I0914 18:13:43.732954   63448 system_pods.go:126] duration metric: took 4.531761ms to wait for k8s-apps to be running ...
	I0914 18:13:43.732960   63448 system_svc.go:44] waiting for kubelet service to be running ....
	I0914 18:13:43.733001   63448 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 18:13:43.749535   63448 system_svc.go:56] duration metric: took 16.566498ms WaitForService to wait for kubelet
	I0914 18:13:43.749567   63448 kubeadm.go:582] duration metric: took 4m22.053742257s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0914 18:13:43.749587   63448 node_conditions.go:102] verifying NodePressure condition ...
	I0914 18:13:43.752493   63448 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0914 18:13:43.752514   63448 node_conditions.go:123] node cpu capacity is 2
	I0914 18:13:43.752523   63448 node_conditions.go:105] duration metric: took 2.931821ms to run NodePressure ...
	I0914 18:13:43.752534   63448 start.go:241] waiting for startup goroutines ...
	I0914 18:13:43.752548   63448 start.go:246] waiting for cluster config update ...
	I0914 18:13:43.752560   63448 start.go:255] writing updated cluster config ...
	I0914 18:13:43.752815   63448 ssh_runner.go:195] Run: rm -f paused
	I0914 18:13:43.803181   63448 start.go:600] kubectl: 1.31.0, cluster: 1.31.1 (minor skew: 0)
	I0914 18:13:43.805150   63448 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-243449" cluster and "default" namespace by default
	I0914 18:13:43.506241   62996 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0914 18:13:43.506502   62996 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0914 18:13:43.103780   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:13:45.603666   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:13:47.603988   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:13:50.104811   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:13:53.506772   62996 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0914 18:13:53.506959   62996 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0914 18:13:52.604411   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:13:55.103339   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:13:57.103716   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:13:59.603423   62207 pod_ready.go:103] pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace has status "Ready":"False"
	I0914 18:14:00.097180   62207 pod_ready.go:82] duration metric: took 4m0.000345486s for pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace to be "Ready" ...
	E0914 18:14:00.097209   62207 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-n276z" in "kube-system" namespace to be "Ready" (will not retry!)
	I0914 18:14:00.097230   62207 pod_ready.go:39] duration metric: took 4m11.039838973s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 18:14:00.097260   62207 kubeadm.go:597] duration metric: took 4m18.345876583s to restartPrimaryControlPlane
	W0914 18:14:00.097328   62207 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0914 18:14:00.097360   62207 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0914 18:14:13.507627   62996 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0914 18:14:13.507840   62996 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0914 18:14:26.392001   62207 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.294613232s)
	I0914 18:14:26.392082   62207 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 18:14:26.410558   62207 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0914 18:14:26.421178   62207 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0914 18:14:26.430786   62207 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0914 18:14:26.430808   62207 kubeadm.go:157] found existing configuration files:
	
	I0914 18:14:26.430858   62207 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0914 18:14:26.440193   62207 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0914 18:14:26.440253   62207 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0914 18:14:26.449848   62207 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0914 18:14:26.459589   62207 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0914 18:14:26.459651   62207 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0914 18:14:26.469556   62207 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0914 18:14:26.478722   62207 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0914 18:14:26.478782   62207 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0914 18:14:26.488694   62207 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0914 18:14:26.498478   62207 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0914 18:14:26.498542   62207 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0914 18:14:26.509455   62207 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0914 18:14:26.552295   62207 kubeadm.go:310] W0914 18:14:26.530603    2977 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0914 18:14:26.552908   62207 kubeadm.go:310] W0914 18:14:26.531307    2977 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0914 18:14:26.665962   62207 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0914 18:14:35.406074   62207 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0914 18:14:35.406150   62207 kubeadm.go:310] [preflight] Running pre-flight checks
	I0914 18:14:35.406251   62207 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0914 18:14:35.406372   62207 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0914 18:14:35.406503   62207 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0914 18:14:35.406611   62207 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0914 18:14:35.408167   62207 out.go:235]   - Generating certificates and keys ...
	I0914 18:14:35.408257   62207 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0914 18:14:35.408337   62207 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0914 18:14:35.408451   62207 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0914 18:14:35.408550   62207 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0914 18:14:35.408655   62207 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0914 18:14:35.408733   62207 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0914 18:14:35.408823   62207 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0914 18:14:35.408916   62207 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0914 18:14:35.409022   62207 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0914 18:14:35.409133   62207 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0914 18:14:35.409176   62207 kubeadm.go:310] [certs] Using the existing "sa" key
	I0914 18:14:35.409225   62207 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0914 18:14:35.409269   62207 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0914 18:14:35.409328   62207 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0914 18:14:35.409374   62207 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0914 18:14:35.409440   62207 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0914 18:14:35.409507   62207 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0914 18:14:35.409633   62207 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0914 18:14:35.409734   62207 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0914 18:14:35.411984   62207 out.go:235]   - Booting up control plane ...
	I0914 18:14:35.412099   62207 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0914 18:14:35.412212   62207 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0914 18:14:35.412276   62207 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0914 18:14:35.412371   62207 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0914 18:14:35.412444   62207 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0914 18:14:35.412479   62207 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0914 18:14:35.412597   62207 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0914 18:14:35.412686   62207 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0914 18:14:35.412737   62207 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.002422188s
	I0914 18:14:35.412801   62207 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0914 18:14:35.412863   62207 kubeadm.go:310] [api-check] The API server is healthy after 5.002046359s
	I0914 18:14:35.412986   62207 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0914 18:14:35.413129   62207 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0914 18:14:35.413208   62207 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0914 18:14:35.413427   62207 kubeadm.go:310] [mark-control-plane] Marking the node no-preload-168587 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0914 18:14:35.413510   62207 kubeadm.go:310] [bootstrap-token] Using token: 2jk8ol.l80z6l7tm2nt4pl7
	I0914 18:14:35.414838   62207 out.go:235]   - Configuring RBAC rules ...
	I0914 18:14:35.414968   62207 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0914 18:14:35.415069   62207 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0914 18:14:35.415291   62207 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0914 18:14:35.415482   62207 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0914 18:14:35.415615   62207 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0914 18:14:35.415725   62207 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0914 18:14:35.415867   62207 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0914 18:14:35.415930   62207 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0914 18:14:35.415990   62207 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0914 18:14:35.415999   62207 kubeadm.go:310] 
	I0914 18:14:35.416077   62207 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0914 18:14:35.416086   62207 kubeadm.go:310] 
	I0914 18:14:35.416187   62207 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0914 18:14:35.416198   62207 kubeadm.go:310] 
	I0914 18:14:35.416232   62207 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0914 18:14:35.416314   62207 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0914 18:14:35.416388   62207 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0914 18:14:35.416397   62207 kubeadm.go:310] 
	I0914 18:14:35.416474   62207 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0914 18:14:35.416484   62207 kubeadm.go:310] 
	I0914 18:14:35.416525   62207 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0914 18:14:35.416529   62207 kubeadm.go:310] 
	I0914 18:14:35.416597   62207 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0914 18:14:35.416701   62207 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0914 18:14:35.416781   62207 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0914 18:14:35.416796   62207 kubeadm.go:310] 
	I0914 18:14:35.416899   62207 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0914 18:14:35.416998   62207 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0914 18:14:35.417007   62207 kubeadm.go:310] 
	I0914 18:14:35.417125   62207 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 2jk8ol.l80z6l7tm2nt4pl7 \
	I0914 18:14:35.417247   62207 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:30a8b6e98108730ec92a246ad3f4e746211e79f910581e46cd69c4cd78384600 \
	I0914 18:14:35.417272   62207 kubeadm.go:310] 	--control-plane 
	I0914 18:14:35.417276   62207 kubeadm.go:310] 
	I0914 18:14:35.417399   62207 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0914 18:14:35.417422   62207 kubeadm.go:310] 
	I0914 18:14:35.417530   62207 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 2jk8ol.l80z6l7tm2nt4pl7 \
	I0914 18:14:35.417686   62207 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:30a8b6e98108730ec92a246ad3f4e746211e79f910581e46cd69c4cd78384600 
	I0914 18:14:35.417705   62207 cni.go:84] Creating CNI manager for ""
	I0914 18:14:35.417713   62207 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0914 18:14:35.420023   62207 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0914 18:14:35.421095   62207 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0914 18:14:35.432619   62207 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0914 18:14:35.451720   62207 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0914 18:14:35.451790   62207 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 18:14:35.451836   62207 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-168587 minikube.k8s.io/updated_at=2024_09_14T18_14_35_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=fbeeb744274463b05401c917e5ab21bbaf5ef95a minikube.k8s.io/name=no-preload-168587 minikube.k8s.io/primary=true
	I0914 18:14:35.654681   62207 ops.go:34] apiserver oom_adj: -16
	I0914 18:14:35.654714   62207 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 18:14:36.155376   62207 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 18:14:36.655468   62207 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 18:14:37.155741   62207 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 18:14:37.655416   62207 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 18:14:38.154935   62207 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 18:14:38.655465   62207 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 18:14:38.740860   62207 kubeadm.go:1113] duration metric: took 3.289121705s to wait for elevateKubeSystemPrivileges
	I0914 18:14:38.740912   62207 kubeadm.go:394] duration metric: took 4m57.036377829s to StartCluster
	I0914 18:14:38.740939   62207 settings.go:142] acquiring lock: {Name:mkee31cd57c3a91eff581c48ba7961460fb3e0b1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 18:14:38.741029   62207 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19643-8806/kubeconfig
	I0914 18:14:38.742754   62207 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19643-8806/kubeconfig: {Name:mk850f3e2f3c2ef1e81c795ae72e22e459e27140 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 18:14:38.742977   62207 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.38 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0914 18:14:38.743138   62207 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0914 18:14:38.743260   62207 addons.go:69] Setting storage-provisioner=true in profile "no-preload-168587"
	I0914 18:14:38.743271   62207 addons.go:69] Setting default-storageclass=true in profile "no-preload-168587"
	I0914 18:14:38.743282   62207 addons.go:234] Setting addon storage-provisioner=true in "no-preload-168587"
	I0914 18:14:38.743290   62207 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-168587"
	W0914 18:14:38.743295   62207 addons.go:243] addon storage-provisioner should already be in state true
	I0914 18:14:38.743306   62207 addons.go:69] Setting metrics-server=true in profile "no-preload-168587"
	I0914 18:14:38.743329   62207 host.go:66] Checking if "no-preload-168587" exists ...
	I0914 18:14:38.743334   62207 addons.go:234] Setting addon metrics-server=true in "no-preload-168587"
	I0914 18:14:38.743362   62207 config.go:182] Loaded profile config "no-preload-168587": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	W0914 18:14:38.743365   62207 addons.go:243] addon metrics-server should already be in state true
	I0914 18:14:38.743442   62207 host.go:66] Checking if "no-preload-168587" exists ...
	I0914 18:14:38.743775   62207 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 18:14:38.743814   62207 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 18:14:38.743843   62207 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 18:14:38.743821   62207 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 18:14:38.743775   62207 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 18:14:38.744070   62207 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 18:14:38.744427   62207 out.go:177] * Verifying Kubernetes components...
	I0914 18:14:38.745716   62207 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 18:14:38.760250   62207 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43795
	I0914 18:14:38.760329   62207 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41097
	I0914 18:14:38.760788   62207 main.go:141] libmachine: () Calling .GetVersion
	I0914 18:14:38.760810   62207 main.go:141] libmachine: () Calling .GetVersion
	I0914 18:14:38.761416   62207 main.go:141] libmachine: Using API Version  1
	I0914 18:14:38.761438   62207 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 18:14:38.761554   62207 main.go:141] libmachine: Using API Version  1
	I0914 18:14:38.761581   62207 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 18:14:38.761829   62207 main.go:141] libmachine: () Calling .GetMachineName
	I0914 18:14:38.761980   62207 main.go:141] libmachine: () Calling .GetMachineName
	I0914 18:14:38.762333   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetState
	I0914 18:14:38.762445   62207 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 18:14:38.762495   62207 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 18:14:38.763295   62207 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44059
	I0914 18:14:38.763767   62207 main.go:141] libmachine: () Calling .GetVersion
	I0914 18:14:38.764256   62207 main.go:141] libmachine: Using API Version  1
	I0914 18:14:38.764285   62207 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 18:14:38.764616   62207 main.go:141] libmachine: () Calling .GetMachineName
	I0914 18:14:38.765095   62207 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 18:14:38.765131   62207 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 18:14:38.765525   62207 addons.go:234] Setting addon default-storageclass=true in "no-preload-168587"
	W0914 18:14:38.765544   62207 addons.go:243] addon default-storageclass should already be in state true
	I0914 18:14:38.765568   62207 host.go:66] Checking if "no-preload-168587" exists ...
	I0914 18:14:38.765798   62207 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 18:14:38.765837   62207 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 18:14:38.782208   62207 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42659
	I0914 18:14:38.782527   62207 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41381
	I0914 18:14:38.782564   62207 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33835
	I0914 18:14:38.782679   62207 main.go:141] libmachine: () Calling .GetVersion
	I0914 18:14:38.782943   62207 main.go:141] libmachine: () Calling .GetVersion
	I0914 18:14:38.782973   62207 main.go:141] libmachine: () Calling .GetVersion
	I0914 18:14:38.783413   62207 main.go:141] libmachine: Using API Version  1
	I0914 18:14:38.783433   62207 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 18:14:38.783554   62207 main.go:141] libmachine: Using API Version  1
	I0914 18:14:38.783566   62207 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 18:14:38.783573   62207 main.go:141] libmachine: Using API Version  1
	I0914 18:14:38.783585   62207 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 18:14:38.783956   62207 main.go:141] libmachine: () Calling .GetMachineName
	I0914 18:14:38.783964   62207 main.go:141] libmachine: () Calling .GetMachineName
	I0914 18:14:38.784444   62207 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 18:14:38.784482   62207 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 18:14:38.784639   62207 main.go:141] libmachine: () Calling .GetMachineName
	I0914 18:14:38.784666   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetState
	I0914 18:14:38.784806   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetState
	I0914 18:14:38.786340   62207 main.go:141] libmachine: (no-preload-168587) Calling .DriverName
	I0914 18:14:38.786797   62207 main.go:141] libmachine: (no-preload-168587) Calling .DriverName
	I0914 18:14:38.788188   62207 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 18:14:38.788195   62207 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0914 18:14:38.789239   62207 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0914 18:14:38.789254   62207 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0914 18:14:38.789273   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetSSHHostname
	I0914 18:14:38.789338   62207 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0914 18:14:38.789347   62207 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0914 18:14:38.789358   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetSSHHostname
	I0914 18:14:38.792968   62207 main.go:141] libmachine: (no-preload-168587) DBG | domain no-preload-168587 has defined MAC address 52:54:00:4c:40:8a in network mk-no-preload-168587
	I0914 18:14:38.793521   62207 main.go:141] libmachine: (no-preload-168587) DBG | domain no-preload-168587 has defined MAC address 52:54:00:4c:40:8a in network mk-no-preload-168587
	I0914 18:14:38.793853   62207 main.go:141] libmachine: (no-preload-168587) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:40:8a", ip: ""} in network mk-no-preload-168587: {Iface:virbr2 ExpiryTime:2024-09-14 18:59:50 +0000 UTC Type:0 Mac:52:54:00:4c:40:8a Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:no-preload-168587 Clientid:01:52:54:00:4c:40:8a}
	I0914 18:14:38.793894   62207 main.go:141] libmachine: (no-preload-168587) DBG | domain no-preload-168587 has defined IP address 192.168.39.38 and MAC address 52:54:00:4c:40:8a in network mk-no-preload-168587
	I0914 18:14:38.794037   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetSSHPort
	I0914 18:14:38.794097   62207 main.go:141] libmachine: (no-preload-168587) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:40:8a", ip: ""} in network mk-no-preload-168587: {Iface:virbr2 ExpiryTime:2024-09-14 18:59:50 +0000 UTC Type:0 Mac:52:54:00:4c:40:8a Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:no-preload-168587 Clientid:01:52:54:00:4c:40:8a}
	I0914 18:14:38.794107   62207 main.go:141] libmachine: (no-preload-168587) DBG | domain no-preload-168587 has defined IP address 192.168.39.38 and MAC address 52:54:00:4c:40:8a in network mk-no-preload-168587
	I0914 18:14:38.794258   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetSSHKeyPath
	I0914 18:14:38.794351   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetSSHPort
	I0914 18:14:38.794499   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetSSHKeyPath
	I0914 18:14:38.794531   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetSSHUsername
	I0914 18:14:38.794635   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetSSHUsername
	I0914 18:14:38.794716   62207 sshutil.go:53] new ssh client: &{IP:192.168.39.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/no-preload-168587/id_rsa Username:docker}
	I0914 18:14:38.794777   62207 sshutil.go:53] new ssh client: &{IP:192.168.39.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/no-preload-168587/id_rsa Username:docker}
	I0914 18:14:38.827254   62207 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38643
	I0914 18:14:38.827852   62207 main.go:141] libmachine: () Calling .GetVersion
	I0914 18:14:38.828434   62207 main.go:141] libmachine: Using API Version  1
	I0914 18:14:38.828460   62207 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 18:14:38.828837   62207 main.go:141] libmachine: () Calling .GetMachineName
	I0914 18:14:38.829088   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetState
	I0914 18:14:38.830820   62207 main.go:141] libmachine: (no-preload-168587) Calling .DriverName
	I0914 18:14:38.831031   62207 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0914 18:14:38.831048   62207 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0914 18:14:38.831067   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetSSHHostname
	I0914 18:14:38.833822   62207 main.go:141] libmachine: (no-preload-168587) DBG | domain no-preload-168587 has defined MAC address 52:54:00:4c:40:8a in network mk-no-preload-168587
	I0914 18:14:38.834242   62207 main.go:141] libmachine: (no-preload-168587) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:40:8a", ip: ""} in network mk-no-preload-168587: {Iface:virbr2 ExpiryTime:2024-09-14 18:59:50 +0000 UTC Type:0 Mac:52:54:00:4c:40:8a Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:no-preload-168587 Clientid:01:52:54:00:4c:40:8a}
	I0914 18:14:38.834282   62207 main.go:141] libmachine: (no-preload-168587) DBG | domain no-preload-168587 has defined IP address 192.168.39.38 and MAC address 52:54:00:4c:40:8a in network mk-no-preload-168587
	I0914 18:14:38.834453   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetSSHPort
	I0914 18:14:38.834641   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetSSHKeyPath
	I0914 18:14:38.834794   62207 main.go:141] libmachine: (no-preload-168587) Calling .GetSSHUsername
	I0914 18:14:38.834963   62207 sshutil.go:53] new ssh client: &{IP:192.168.39.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/no-preload-168587/id_rsa Username:docker}
	I0914 18:14:38.920627   62207 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0914 18:14:38.941951   62207 node_ready.go:35] waiting up to 6m0s for node "no-preload-168587" to be "Ready" ...
	I0914 18:14:38.973102   62207 node_ready.go:49] node "no-preload-168587" has status "Ready":"True"
	I0914 18:14:38.973124   62207 node_ready.go:38] duration metric: took 31.146661ms for node "no-preload-168587" to be "Ready" ...
	I0914 18:14:38.973132   62207 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 18:14:38.989712   62207 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-168587" in "kube-system" namespace to be "Ready" ...
	I0914 18:14:39.018196   62207 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0914 18:14:39.018223   62207 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0914 18:14:39.045691   62207 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0914 18:14:39.066249   62207 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0914 18:14:39.066277   62207 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0914 18:14:39.073017   62207 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0914 18:14:39.118360   62207 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0914 18:14:39.118385   62207 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0914 18:14:39.195268   62207 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0914 18:14:39.874924   62207 main.go:141] libmachine: Making call to close driver server
	I0914 18:14:39.874953   62207 main.go:141] libmachine: (no-preload-168587) Calling .Close
	I0914 18:14:39.874950   62207 main.go:141] libmachine: Making call to close driver server
	I0914 18:14:39.875004   62207 main.go:141] libmachine: (no-preload-168587) Calling .Close
	I0914 18:14:39.875398   62207 main.go:141] libmachine: (no-preload-168587) DBG | Closing plugin on server side
	I0914 18:14:39.875406   62207 main.go:141] libmachine: Successfully made call to close driver server
	I0914 18:14:39.875457   62207 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 18:14:39.875466   62207 main.go:141] libmachine: Making call to close driver server
	I0914 18:14:39.875476   62207 main.go:141] libmachine: (no-preload-168587) Calling .Close
	I0914 18:14:39.875406   62207 main.go:141] libmachine: (no-preload-168587) DBG | Closing plugin on server side
	I0914 18:14:39.875430   62207 main.go:141] libmachine: Successfully made call to close driver server
	I0914 18:14:39.875598   62207 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 18:14:39.875609   62207 main.go:141] libmachine: Making call to close driver server
	I0914 18:14:39.875631   62207 main.go:141] libmachine: (no-preload-168587) Calling .Close
	I0914 18:14:39.875914   62207 main.go:141] libmachine: (no-preload-168587) DBG | Closing plugin on server side
	I0914 18:14:39.875916   62207 main.go:141] libmachine: Successfully made call to close driver server
	I0914 18:14:39.875934   62207 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 18:14:39.875939   62207 main.go:141] libmachine: (no-preload-168587) DBG | Closing plugin on server side
	I0914 18:14:39.875959   62207 main.go:141] libmachine: Successfully made call to close driver server
	I0914 18:14:39.875966   62207 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 18:14:39.929860   62207 main.go:141] libmachine: Making call to close driver server
	I0914 18:14:39.929881   62207 main.go:141] libmachine: (no-preload-168587) Calling .Close
	I0914 18:14:39.930191   62207 main.go:141] libmachine: Successfully made call to close driver server
	I0914 18:14:39.930211   62207 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 18:14:40.139888   62207 main.go:141] libmachine: Making call to close driver server
	I0914 18:14:40.139918   62207 main.go:141] libmachine: (no-preload-168587) Calling .Close
	I0914 18:14:40.140256   62207 main.go:141] libmachine: Successfully made call to close driver server
	I0914 18:14:40.140273   62207 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 18:14:40.140282   62207 main.go:141] libmachine: Making call to close driver server
	I0914 18:14:40.140289   62207 main.go:141] libmachine: (no-preload-168587) Calling .Close
	I0914 18:14:40.140608   62207 main.go:141] libmachine: Successfully made call to close driver server
	I0914 18:14:40.140620   62207 main.go:141] libmachine: (no-preload-168587) DBG | Closing plugin on server side
	I0914 18:14:40.140630   62207 main.go:141] libmachine: Making call to close connection to plugin binary
	I0914 18:14:40.140646   62207 addons.go:475] Verifying addon metrics-server=true in "no-preload-168587"
	I0914 18:14:40.142461   62207 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0914 18:14:40.143818   62207 addons.go:510] duration metric: took 1.400695696s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0914 18:14:40.996599   62207 pod_ready.go:103] pod "etcd-no-preload-168587" in "kube-system" namespace has status "Ready":"False"
	I0914 18:14:43.498584   62207 pod_ready.go:103] pod "etcd-no-preload-168587" in "kube-system" namespace has status "Ready":"False"
	I0914 18:14:45.995938   62207 pod_ready.go:93] pod "etcd-no-preload-168587" in "kube-system" namespace has status "Ready":"True"
	I0914 18:14:45.995971   62207 pod_ready.go:82] duration metric: took 7.006220602s for pod "etcd-no-preload-168587" in "kube-system" namespace to be "Ready" ...
	I0914 18:14:45.995984   62207 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-168587" in "kube-system" namespace to be "Ready" ...
	I0914 18:14:46.000589   62207 pod_ready.go:93] pod "kube-apiserver-no-preload-168587" in "kube-system" namespace has status "Ready":"True"
	I0914 18:14:46.000609   62207 pod_ready.go:82] duration metric: took 4.618617ms for pod "kube-apiserver-no-preload-168587" in "kube-system" namespace to be "Ready" ...
	I0914 18:14:46.000620   62207 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-168587" in "kube-system" namespace to be "Ready" ...
	I0914 18:14:46.004865   62207 pod_ready.go:93] pod "kube-controller-manager-no-preload-168587" in "kube-system" namespace has status "Ready":"True"
	I0914 18:14:46.004886   62207 pod_ready.go:82] duration metric: took 4.259787ms for pod "kube-controller-manager-no-preload-168587" in "kube-system" namespace to be "Ready" ...
	I0914 18:14:46.004895   62207 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-xdj6b" in "kube-system" namespace to be "Ready" ...
	I0914 18:14:46.009225   62207 pod_ready.go:93] pod "kube-proxy-xdj6b" in "kube-system" namespace has status "Ready":"True"
	I0914 18:14:46.009243   62207 pod_ready.go:82] duration metric: took 4.343161ms for pod "kube-proxy-xdj6b" in "kube-system" namespace to be "Ready" ...
	I0914 18:14:46.009250   62207 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-168587" in "kube-system" namespace to be "Ready" ...
	I0914 18:14:46.013312   62207 pod_ready.go:93] pod "kube-scheduler-no-preload-168587" in "kube-system" namespace has status "Ready":"True"
	I0914 18:14:46.013330   62207 pod_ready.go:82] duration metric: took 4.073817ms for pod "kube-scheduler-no-preload-168587" in "kube-system" namespace to be "Ready" ...
	I0914 18:14:46.013337   62207 pod_ready.go:39] duration metric: took 7.040196066s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 18:14:46.013358   62207 api_server.go:52] waiting for apiserver process to appear ...
	I0914 18:14:46.013403   62207 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:14:46.029881   62207 api_server.go:72] duration metric: took 7.286871398s to wait for apiserver process to appear ...
	I0914 18:14:46.029912   62207 api_server.go:88] waiting for apiserver healthz status ...
	I0914 18:14:46.029937   62207 api_server.go:253] Checking apiserver healthz at https://192.168.39.38:8443/healthz ...
	I0914 18:14:46.034236   62207 api_server.go:279] https://192.168.39.38:8443/healthz returned 200:
	ok
	I0914 18:14:46.035287   62207 api_server.go:141] control plane version: v1.31.1
	I0914 18:14:46.035305   62207 api_server.go:131] duration metric: took 5.385499ms to wait for apiserver health ...
	I0914 18:14:46.035314   62207 system_pods.go:43] waiting for kube-system pods to appear ...
	I0914 18:14:46.196765   62207 system_pods.go:59] 9 kube-system pods found
	I0914 18:14:46.196796   62207 system_pods.go:61] "coredns-7c65d6cfc9-nzpdb" [acd2d488-301e-4d00-a17a-0e06ea5d9691] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0914 18:14:46.196804   62207 system_pods.go:61] "coredns-7c65d6cfc9-qrgr9" [31b611b3-d861-451f-8c17-30bed52994a0] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0914 18:14:46.196810   62207 system_pods.go:61] "etcd-no-preload-168587" [8d3dc146-eb39-4ed3-9409-63ea08fffd39] Running
	I0914 18:14:46.196816   62207 system_pods.go:61] "kube-apiserver-no-preload-168587" [9a68e520-b40c-42d2-9052-757c7c99a958] Running
	I0914 18:14:46.196821   62207 system_pods.go:61] "kube-controller-manager-no-preload-168587" [47d23e53-0f6b-45bd-8288-3101a35b1827] Running
	I0914 18:14:46.196824   62207 system_pods.go:61] "kube-proxy-xdj6b" [d3080090-4f40-49e1-9c3e-ccceb37cc952] Running
	I0914 18:14:46.196827   62207 system_pods.go:61] "kube-scheduler-no-preload-168587" [e7fe55f8-d203-4de2-9594-15f858453434] Running
	I0914 18:14:46.196832   62207 system_pods.go:61] "metrics-server-6867b74b74-cmcz4" [24cea6b3-a107-4110-ac29-88389b55bbdc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 18:14:46.196835   62207 system_pods.go:61] "storage-provisioner" [57b6d85d-fc04-42da-9452-3f24824b8377] Running
	I0914 18:14:46.196842   62207 system_pods.go:74] duration metric: took 161.510322ms to wait for pod list to return data ...
	I0914 18:14:46.196853   62207 default_sa.go:34] waiting for default service account to be created ...
	I0914 18:14:46.394399   62207 default_sa.go:45] found service account: "default"
	I0914 18:14:46.394428   62207 default_sa.go:55] duration metric: took 197.566762ms for default service account to be created ...
	I0914 18:14:46.394443   62207 system_pods.go:116] waiting for k8s-apps to be running ...
	I0914 18:14:46.596421   62207 system_pods.go:86] 9 kube-system pods found
	I0914 18:14:46.596454   62207 system_pods.go:89] "coredns-7c65d6cfc9-nzpdb" [acd2d488-301e-4d00-a17a-0e06ea5d9691] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0914 18:14:46.596462   62207 system_pods.go:89] "coredns-7c65d6cfc9-qrgr9" [31b611b3-d861-451f-8c17-30bed52994a0] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0914 18:14:46.596468   62207 system_pods.go:89] "etcd-no-preload-168587" [8d3dc146-eb39-4ed3-9409-63ea08fffd39] Running
	I0914 18:14:46.596473   62207 system_pods.go:89] "kube-apiserver-no-preload-168587" [9a68e520-b40c-42d2-9052-757c7c99a958] Running
	I0914 18:14:46.596477   62207 system_pods.go:89] "kube-controller-manager-no-preload-168587" [47d23e53-0f6b-45bd-8288-3101a35b1827] Running
	I0914 18:14:46.596480   62207 system_pods.go:89] "kube-proxy-xdj6b" [d3080090-4f40-49e1-9c3e-ccceb37cc952] Running
	I0914 18:14:46.596483   62207 system_pods.go:89] "kube-scheduler-no-preload-168587" [e7fe55f8-d203-4de2-9594-15f858453434] Running
	I0914 18:14:46.596502   62207 system_pods.go:89] "metrics-server-6867b74b74-cmcz4" [24cea6b3-a107-4110-ac29-88389b55bbdc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 18:14:46.596509   62207 system_pods.go:89] "storage-provisioner" [57b6d85d-fc04-42da-9452-3f24824b8377] Running
	I0914 18:14:46.596517   62207 system_pods.go:126] duration metric: took 202.067078ms to wait for k8s-apps to be running ...
	I0914 18:14:46.596527   62207 system_svc.go:44] waiting for kubelet service to be running ....
	I0914 18:14:46.596571   62207 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 18:14:46.611796   62207 system_svc.go:56] duration metric: took 15.259464ms WaitForService to wait for kubelet
	I0914 18:14:46.611837   62207 kubeadm.go:582] duration metric: took 7.868833105s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0914 18:14:46.611858   62207 node_conditions.go:102] verifying NodePressure condition ...
	I0914 18:14:46.794731   62207 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0914 18:14:46.794758   62207 node_conditions.go:123] node cpu capacity is 2
	I0914 18:14:46.794767   62207 node_conditions.go:105] duration metric: took 182.903835ms to run NodePressure ...
	I0914 18:14:46.794777   62207 start.go:241] waiting for startup goroutines ...
	I0914 18:14:46.794783   62207 start.go:246] waiting for cluster config update ...
	I0914 18:14:46.794793   62207 start.go:255] writing updated cluster config ...
	I0914 18:14:46.795051   62207 ssh_runner.go:195] Run: rm -f paused
	I0914 18:14:46.845803   62207 start.go:600] kubectl: 1.31.0, cluster: 1.31.1 (minor skew: 0)
	I0914 18:14:46.847399   62207 out.go:177] * Done! kubectl is now configured to use "no-preload-168587" cluster and "default" namespace by default
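	(At this point the no-preload cluster started cleanly and kubectl was pointed at it. A minimal sketch of how that result could be verified by hand; the profile name no-preload-168587, namespace, and apiserver address 192.168.39.38:8443 are taken from the log above, and pod names will differ per run:
	
		# confirm kubectl is using the context minikube just wrote
		kubectl config current-context
	
		# list the kube-system pods the log was waiting on
		kubectl --context no-preload-168587 -n kube-system get pods
	
		# re-check the apiserver health endpoint minikube polled
		curl -k https://192.168.39.38:8443/healthz
	
	This is only an illustration of the checks minikube itself performed above, not part of the captured test output.)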
	I0914 18:14:53.509475   62996 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0914 18:14:53.509669   62996 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0914 18:14:53.509699   62996 kubeadm.go:310] 
	I0914 18:14:53.509778   62996 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0914 18:14:53.509838   62996 kubeadm.go:310] 		timed out waiting for the condition
	I0914 18:14:53.509849   62996 kubeadm.go:310] 
	I0914 18:14:53.509901   62996 kubeadm.go:310] 	This error is likely caused by:
	I0914 18:14:53.509966   62996 kubeadm.go:310] 		- The kubelet is not running
	I0914 18:14:53.510115   62996 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0914 18:14:53.510126   62996 kubeadm.go:310] 
	I0914 18:14:53.510293   62996 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0914 18:14:53.510346   62996 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0914 18:14:53.510386   62996 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0914 18:14:53.510394   62996 kubeadm.go:310] 
	I0914 18:14:53.510487   62996 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0914 18:14:53.510567   62996 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0914 18:14:53.510582   62996 kubeadm.go:310] 
	I0914 18:14:53.510758   62996 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0914 18:14:53.510852   62996 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0914 18:14:53.510953   62996 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0914 18:14:53.511074   62996 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0914 18:14:53.511085   62996 kubeadm.go:310] 
	I0914 18:14:53.511727   62996 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0914 18:14:53.511824   62996 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0914 18:14:53.511904   62996 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0914 18:14:53.512051   62996 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0914 18:14:53.512098   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0914 18:14:53.965324   62996 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 18:14:53.982028   62996 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0914 18:14:53.993640   62996 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0914 18:14:53.993674   62996 kubeadm.go:157] found existing configuration files:
	
	I0914 18:14:53.993745   62996 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0914 18:14:54.004600   62996 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0914 18:14:54.004669   62996 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0914 18:14:54.015315   62996 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0914 18:14:54.025727   62996 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0914 18:14:54.025795   62996 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0914 18:14:54.035619   62996 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0914 18:14:54.044936   62996 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0914 18:14:54.045003   62996 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0914 18:14:54.055091   62996 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0914 18:14:54.064576   62996 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0914 18:14:54.064630   62996 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0914 18:14:54.074698   62996 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0914 18:14:54.143625   62996 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0914 18:14:54.143712   62996 kubeadm.go:310] [preflight] Running pre-flight checks
	I0914 18:14:54.289361   62996 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0914 18:14:54.289488   62996 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0914 18:14:54.289629   62996 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0914 18:14:54.479052   62996 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0914 18:14:54.481175   62996 out.go:235]   - Generating certificates and keys ...
	I0914 18:14:54.481284   62996 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0914 18:14:54.481391   62996 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0914 18:14:54.481469   62996 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0914 18:14:54.481522   62996 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0914 18:14:54.481585   62996 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0914 18:14:54.481631   62996 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0914 18:14:54.481685   62996 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0914 18:14:54.481737   62996 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0914 18:14:54.481829   62996 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0914 18:14:54.481926   62996 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0914 18:14:54.481977   62996 kubeadm.go:310] [certs] Using the existing "sa" key
	I0914 18:14:54.482063   62996 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0914 18:14:54.695002   62996 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0914 18:14:54.850598   62996 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0914 18:14:54.964590   62996 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0914 18:14:55.108047   62996 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0914 18:14:55.126530   62996 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0914 18:14:55.128690   62996 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0914 18:14:55.128760   62996 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0914 18:14:55.272139   62996 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0914 18:14:55.274365   62996 out.go:235]   - Booting up control plane ...
	I0914 18:14:55.274529   62996 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0914 18:14:55.279796   62996 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0914 18:14:55.281097   62996 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0914 18:14:55.281998   62996 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0914 18:14:55.285620   62996 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0914 18:15:35.288294   62996 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0914 18:15:35.288485   62996 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0914 18:15:35.288693   62996 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0914 18:15:40.289032   62996 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0914 18:15:40.289327   62996 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0914 18:15:50.289795   62996 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0914 18:15:50.290023   62996 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0914 18:16:10.291201   62996 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0914 18:16:10.291427   62996 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0914 18:16:50.292253   62996 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0914 18:16:50.292481   62996 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0914 18:16:50.292503   62996 kubeadm.go:310] 
	I0914 18:16:50.292554   62996 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0914 18:16:50.292606   62996 kubeadm.go:310] 		timed out waiting for the condition
	I0914 18:16:50.292615   62996 kubeadm.go:310] 
	I0914 18:16:50.292654   62996 kubeadm.go:310] 	This error is likely caused by:
	I0914 18:16:50.292685   62996 kubeadm.go:310] 		- The kubelet is not running
	I0914 18:16:50.292773   62996 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0914 18:16:50.292780   62996 kubeadm.go:310] 
	I0914 18:16:50.292912   62996 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0914 18:16:50.292953   62996 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0914 18:16:50.292993   62996 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0914 18:16:50.293022   62996 kubeadm.go:310] 
	I0914 18:16:50.293176   62996 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0914 18:16:50.293293   62996 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0914 18:16:50.293308   62996 kubeadm.go:310] 
	I0914 18:16:50.293470   62996 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0914 18:16:50.293602   62996 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0914 18:16:50.293709   62996 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0914 18:16:50.293810   62996 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0914 18:16:50.293830   62996 kubeadm.go:310] 
	I0914 18:16:50.294646   62996 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0914 18:16:50.294759   62996 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0914 18:16:50.294871   62996 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0914 18:16:50.294910   62996 kubeadm.go:394] duration metric: took 7m56.82551772s to StartCluster
	I0914 18:16:50.294961   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:16:50.295021   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:16:50.341859   62996 cri.go:89] found id: ""
	I0914 18:16:50.341894   62996 logs.go:276] 0 containers: []
	W0914 18:16:50.341908   62996 logs.go:278] No container was found matching "kube-apiserver"
	I0914 18:16:50.341916   62996 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 18:16:50.341983   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:16:50.380725   62996 cri.go:89] found id: ""
	I0914 18:16:50.380755   62996 logs.go:276] 0 containers: []
	W0914 18:16:50.380766   62996 logs.go:278] No container was found matching "etcd"
	I0914 18:16:50.380773   62996 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 18:16:50.380842   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:16:50.415978   62996 cri.go:89] found id: ""
	I0914 18:16:50.416003   62996 logs.go:276] 0 containers: []
	W0914 18:16:50.416012   62996 logs.go:278] No container was found matching "coredns"
	I0914 18:16:50.416017   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:16:50.416065   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:16:50.452823   62996 cri.go:89] found id: ""
	I0914 18:16:50.452859   62996 logs.go:276] 0 containers: []
	W0914 18:16:50.452872   62996 logs.go:278] No container was found matching "kube-scheduler"
	I0914 18:16:50.452882   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:16:50.452939   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:16:50.487240   62996 cri.go:89] found id: ""
	I0914 18:16:50.487272   62996 logs.go:276] 0 containers: []
	W0914 18:16:50.487283   62996 logs.go:278] No container was found matching "kube-proxy"
	I0914 18:16:50.487291   62996 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:16:50.487353   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:16:50.520690   62996 cri.go:89] found id: ""
	I0914 18:16:50.520719   62996 logs.go:276] 0 containers: []
	W0914 18:16:50.520728   62996 logs.go:278] No container was found matching "kube-controller-manager"
	I0914 18:16:50.520735   62996 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 18:16:50.520783   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:16:50.558150   62996 cri.go:89] found id: ""
	I0914 18:16:50.558191   62996 logs.go:276] 0 containers: []
	W0914 18:16:50.558200   62996 logs.go:278] No container was found matching "kindnet"
	I0914 18:16:50.558206   62996 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:16:50.558266   62996 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:16:50.595843   62996 cri.go:89] found id: ""
	I0914 18:16:50.595879   62996 logs.go:276] 0 containers: []
	W0914 18:16:50.595893   62996 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0914 18:16:50.595905   62996 logs.go:123] Gathering logs for kubelet ...
	I0914 18:16:50.595920   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 18:16:50.650623   62996 logs.go:123] Gathering logs for dmesg ...
	I0914 18:16:50.650659   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:16:50.664991   62996 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:16:50.665018   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0914 18:16:50.747876   62996 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0914 18:16:50.747899   62996 logs.go:123] Gathering logs for CRI-O ...
	I0914 18:16:50.747915   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 18:16:50.849314   62996 logs.go:123] Gathering logs for container status ...
	I0914 18:16:50.849354   62996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0914 18:16:50.889101   62996 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0914 18:16:50.889181   62996 out.go:270] * 
	W0914 18:16:50.889263   62996 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0914 18:16:50.889287   62996 out.go:270] * 
	W0914 18:16:50.890531   62996 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0914 18:16:50.893666   62996 out.go:201] 
	W0914 18:16:50.894916   62996 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0914 18:16:50.894958   62996 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0914 18:16:50.894991   62996 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0914 18:16:50.896591   62996 out.go:201] 
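	(The kubeadm output and the final suggestion above reduce to a few manual checks. A rough sketch, assuming the kvm2 driver and the profile name old-k8s-version-556121 seen in the CRI-O log below; apart from --extra-config=kubelet.cgroup-driver=systemd and --kubernetes-version=v1.20.0, which come from this run, the other flags are assumptions:
	
		# inspect the kubelet that never became healthy (run inside the VM, e.g. via `minikube ssh -p old-k8s-version-556121`)
		sudo systemctl status kubelet
		sudo journalctl -xeu kubelet | tail -n 100
	
		# look for crashed control-plane containers under CRI-O
		sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	
		# retry the start with the cgroup-driver override the suggestion mentions
		minikube start -p old-k8s-version-556121 --driver=kvm2 --container-runtime=crio \
		  --kubernetes-version=v1.20.0 --extra-config=kubelet.cgroup-driver=systemd
	
	These commands mirror the troubleshooting steps printed by kubeadm and minikube above; they are not part of the captured test output.)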
	
	
	==> CRI-O <==
	Sep 14 18:28:47 old-k8s-version-556121 crio[630]: time="2024-09-14 18:28:47.999241389Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726338527999205124,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4b42285f-4dfc-4a4a-99c3-2ddb47240269 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 14 18:28:47 old-k8s-version-556121 crio[630]: time="2024-09-14 18:28:47.999677519Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c62f9889-4756-4cd9-ad69-5616ceb13935 name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 18:28:47 old-k8s-version-556121 crio[630]: time="2024-09-14 18:28:47.999725857Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c62f9889-4756-4cd9-ad69-5616ceb13935 name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 18:28:47 old-k8s-version-556121 crio[630]: time="2024-09-14 18:28:47.999757166Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=c62f9889-4756-4cd9-ad69-5616ceb13935 name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 18:28:48 old-k8s-version-556121 crio[630]: time="2024-09-14 18:28:48.031428857Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e990f047-93e8-4310-96eb-5ba03b718fdd name=/runtime.v1.RuntimeService/Version
	Sep 14 18:28:48 old-k8s-version-556121 crio[630]: time="2024-09-14 18:28:48.031504777Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e990f047-93e8-4310-96eb-5ba03b718fdd name=/runtime.v1.RuntimeService/Version
	Sep 14 18:28:48 old-k8s-version-556121 crio[630]: time="2024-09-14 18:28:48.032846806Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=2efc4bdf-a1ec-4610-9cbf-3bb7fdf74ec3 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 14 18:28:48 old-k8s-version-556121 crio[630]: time="2024-09-14 18:28:48.033464092Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726338528033422003,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2efc4bdf-a1ec-4610-9cbf-3bb7fdf74ec3 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 14 18:28:48 old-k8s-version-556121 crio[630]: time="2024-09-14 18:28:48.034129245Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=87c99193-edbd-4e66-8773-21e87d5fcde8 name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 18:28:48 old-k8s-version-556121 crio[630]: time="2024-09-14 18:28:48.034180617Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=87c99193-edbd-4e66-8773-21e87d5fcde8 name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 18:28:48 old-k8s-version-556121 crio[630]: time="2024-09-14 18:28:48.034211798Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=87c99193-edbd-4e66-8773-21e87d5fcde8 name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 18:28:48 old-k8s-version-556121 crio[630]: time="2024-09-14 18:28:48.067389132Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=67f72732-095b-4799-9571-c536d4f8e86a name=/runtime.v1.RuntimeService/Version
	Sep 14 18:28:48 old-k8s-version-556121 crio[630]: time="2024-09-14 18:28:48.067483910Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=67f72732-095b-4799-9571-c536d4f8e86a name=/runtime.v1.RuntimeService/Version
	Sep 14 18:28:48 old-k8s-version-556121 crio[630]: time="2024-09-14 18:28:48.071245660Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=8b30c396-083a-4ef0-9673-e801cfc0a09f name=/runtime.v1.ImageService/ImageFsInfo
	Sep 14 18:28:48 old-k8s-version-556121 crio[630]: time="2024-09-14 18:28:48.071701628Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726338528071663093,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8b30c396-083a-4ef0-9673-e801cfc0a09f name=/runtime.v1.ImageService/ImageFsInfo
	Sep 14 18:28:48 old-k8s-version-556121 crio[630]: time="2024-09-14 18:28:48.072524664Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=72510943-2ecb-4184-89a8-f322a8139345 name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 18:28:48 old-k8s-version-556121 crio[630]: time="2024-09-14 18:28:48.072619421Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=72510943-2ecb-4184-89a8-f322a8139345 name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 18:28:48 old-k8s-version-556121 crio[630]: time="2024-09-14 18:28:48.072697980Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=72510943-2ecb-4184-89a8-f322a8139345 name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 18:28:48 old-k8s-version-556121 crio[630]: time="2024-09-14 18:28:48.109035854Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=76376292-f01a-4ba3-8de0-e313139f1bc2 name=/runtime.v1.RuntimeService/Version
	Sep 14 18:28:48 old-k8s-version-556121 crio[630]: time="2024-09-14 18:28:48.109156150Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=76376292-f01a-4ba3-8de0-e313139f1bc2 name=/runtime.v1.RuntimeService/Version
	Sep 14 18:28:48 old-k8s-version-556121 crio[630]: time="2024-09-14 18:28:48.110606666Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=97fe470e-f2b2-412e-bc5b-362e37e2f3e1 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 14 18:28:48 old-k8s-version-556121 crio[630]: time="2024-09-14 18:28:48.111267161Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726338528111233584,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=97fe470e-f2b2-412e-bc5b-362e37e2f3e1 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 14 18:28:48 old-k8s-version-556121 crio[630]: time="2024-09-14 18:28:48.112062215Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=35c6c16e-50a7-4572-8024-b4e9a916a702 name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 18:28:48 old-k8s-version-556121 crio[630]: time="2024-09-14 18:28:48.112146996Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=35c6c16e-50a7-4572-8024-b4e9a916a702 name=/runtime.v1.RuntimeService/ListContainers
	Sep 14 18:28:48 old-k8s-version-556121 crio[630]: time="2024-09-14 18:28:48.112197352Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=35c6c16e-50a7-4572-8024-b4e9a916a702 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Sep14 18:08] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.051703] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.041033] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.818277] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.926515] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.580247] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000014] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +9.280362] systemd-fstab-generator[556]: Ignoring "noauto" option for root device
	[  +0.069665] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.058885] systemd-fstab-generator[569]: Ignoring "noauto" option for root device
	[  +0.193036] systemd-fstab-generator[583]: Ignoring "noauto" option for root device
	[  +0.156845] systemd-fstab-generator[595]: Ignoring "noauto" option for root device
	[  +0.249799] systemd-fstab-generator[620]: Ignoring "noauto" option for root device
	[  +6.598174] systemd-fstab-generator[874]: Ignoring "noauto" option for root device
	[  +0.066263] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.657757] systemd-fstab-generator[999]: Ignoring "noauto" option for root device
	[Sep14 18:09] kauditd_printk_skb: 46 callbacks suppressed
	[Sep14 18:12] systemd-fstab-generator[5028]: Ignoring "noauto" option for root device
	[Sep14 18:14] systemd-fstab-generator[5317]: Ignoring "noauto" option for root device
	[  +0.068697] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 18:28:48 up 20 min,  0 users,  load average: 0.09, 0.06, 0.04
	Linux old-k8s-version-556121 5.10.207 #1 SMP Sat Sep 14 07:35:46 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Sep 14 18:28:43 old-k8s-version-556121 kubelet[6844]: internal/singleflight.(*Group).doCall(0x70c5750, 0xc000bfeb90, 0xc000b0e2a0, 0x23, 0xc000c86580)
	Sep 14 18:28:43 old-k8s-version-556121 kubelet[6844]:         /usr/local/go/src/internal/singleflight/singleflight.go:95 +0x2e
	Sep 14 18:28:43 old-k8s-version-556121 kubelet[6844]: created by internal/singleflight.(*Group).DoChan
	Sep 14 18:28:43 old-k8s-version-556121 kubelet[6844]:         /usr/local/go/src/internal/singleflight/singleflight.go:88 +0x2cc
	Sep 14 18:28:43 old-k8s-version-556121 kubelet[6844]: goroutine 170 [syscall]:
	Sep 14 18:28:43 old-k8s-version-556121 kubelet[6844]: net._C2func_getaddrinfo(0xc000bb6580, 0x0, 0xc000bfdef0, 0xc000ae4198, 0x0, 0x0, 0x0)
	Sep 14 18:28:43 old-k8s-version-556121 kubelet[6844]:         _cgo_gotypes.go:94 +0x55
	Sep 14 18:28:43 old-k8s-version-556121 kubelet[6844]: net.cgoLookupIPCNAME.func1(0xc000bb6580, 0x20, 0x20, 0xc000bfdef0, 0xc000ae4198, 0x4e4a5a0, 0xc00059fea0, 0x57a492)
	Sep 14 18:28:43 old-k8s-version-556121 kubelet[6844]:         /usr/local/go/src/net/cgo_unix.go:161 +0xc5
	Sep 14 18:28:43 old-k8s-version-556121 kubelet[6844]: net.cgoLookupIPCNAME(0x48ab5d6, 0x3, 0xc000b0e270, 0x1f, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
	Sep 14 18:28:43 old-k8s-version-556121 kubelet[6844]:         /usr/local/go/src/net/cgo_unix.go:161 +0x16b
	Sep 14 18:28:43 old-k8s-version-556121 kubelet[6844]: net.cgoIPLookup(0xc000c859e0, 0x48ab5d6, 0x3, 0xc000b0e270, 0x1f)
	Sep 14 18:28:43 old-k8s-version-556121 kubelet[6844]:         /usr/local/go/src/net/cgo_unix.go:218 +0x67
	Sep 14 18:28:43 old-k8s-version-556121 kubelet[6844]: created by net.cgoLookupIP
	Sep 14 18:28:43 old-k8s-version-556121 kubelet[6844]:         /usr/local/go/src/net/cgo_unix.go:228 +0xc7
	Sep 14 18:28:43 old-k8s-version-556121 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Sep 14 18:28:43 old-k8s-version-556121 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Sep 14 18:28:44 old-k8s-version-556121 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 144.
	Sep 14 18:28:44 old-k8s-version-556121 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Sep 14 18:28:44 old-k8s-version-556121 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Sep 14 18:28:44 old-k8s-version-556121 kubelet[6853]: I0914 18:28:44.480993    6853 server.go:416] Version: v1.20.0
	Sep 14 18:28:44 old-k8s-version-556121 kubelet[6853]: I0914 18:28:44.481331    6853 server.go:837] Client rotation is on, will bootstrap in background
	Sep 14 18:28:44 old-k8s-version-556121 kubelet[6853]: I0914 18:28:44.483608    6853 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Sep 14 18:28:44 old-k8s-version-556121 kubelet[6853]: W0914 18:28:44.484566    6853 manager.go:159] Cannot detect current cgroup on cgroup v2
	Sep 14 18:28:44 old-k8s-version-556121 kubelet[6853]: I0914 18:28:44.484809    6853 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-556121 -n old-k8s-version-556121
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-556121 -n old-k8s-version-556121: exit status 2 (222.716074ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-556121" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (171.47s)
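
The logs above show why the assertion could not run: the kubelet on old-k8s-version-556121 is crash-looping (systemd restart counter at 144, exit status 255) and the apiserver on localhost:8443 refuses connections, so the kubectl-based checks were skipped. A minimal debugging sketch for this situation, assuming the profile's VM is still reachable over SSH (standard minikube/journalctl/crictl invocations, not commands taken from this report):

	out/minikube-linux-amd64 ssh -p old-k8s-version-556121 "sudo journalctl -u kubelet --no-pager | tail -n 100"   # inspect the kubelet crash loop
	out/minikube-linux-amd64 ssh -p old-k8s-version-556121 "sudo crictl ps -a"                                     # confirm no control-plane containers are running
	out/minikube-linux-amd64 logs -p old-k8s-version-556121                                                        # collect the same log bundle shown above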

                                                
                                    

Test pass (254/320)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 28.03
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.06
9 TestDownloadOnly/v1.20.0/DeleteAll 0.13
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.13
12 TestDownloadOnly/v1.31.1/json-events 12.53
13 TestDownloadOnly/v1.31.1/preload-exists 0
17 TestDownloadOnly/v1.31.1/LogsDuration 0.06
18 TestDownloadOnly/v1.31.1/DeleteAll 0.14
19 TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds 0.13
21 TestBinaryMirror 0.59
22 TestOffline 111
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.05
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.05
27 TestAddons/Setup 137.63
31 TestAddons/serial/GCPAuth/Namespaces 0.16
35 TestAddons/parallel/InspektorGadget 11.76
37 TestAddons/parallel/HelmTiller 13.17
39 TestAddons/parallel/CSI 57.31
40 TestAddons/parallel/Headlamp 19.59
41 TestAddons/parallel/CloudSpanner 5.65
42 TestAddons/parallel/LocalPath 56.55
43 TestAddons/parallel/NvidiaDevicePlugin 5.57
44 TestAddons/parallel/Yakd 12.25
45 TestAddons/StoppedEnableDisable 7.56
46 TestCertOptions 64.24
47 TestCertExpiration 261.61
49 TestForceSystemdFlag 45.25
50 TestForceSystemdEnv 44.59
52 TestKVMDriverInstallOrUpdate 4.5
56 TestErrorSpam/setup 43.09
57 TestErrorSpam/start 0.33
58 TestErrorSpam/status 0.72
59 TestErrorSpam/pause 1.55
60 TestErrorSpam/unpause 1.7
61 TestErrorSpam/stop 5.45
64 TestFunctional/serial/CopySyncFile 0
65 TestFunctional/serial/StartWithProxy 47.39
66 TestFunctional/serial/AuditLog 0
67 TestFunctional/serial/SoftStart 53.08
68 TestFunctional/serial/KubeContext 0.04
69 TestFunctional/serial/KubectlGetPods 0.08
72 TestFunctional/serial/CacheCmd/cache/add_remote 3.78
73 TestFunctional/serial/CacheCmd/cache/add_local 2.1
74 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.05
75 TestFunctional/serial/CacheCmd/cache/list 0.04
76 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.22
77 TestFunctional/serial/CacheCmd/cache/cache_reload 1.73
78 TestFunctional/serial/CacheCmd/cache/delete 0.09
79 TestFunctional/serial/MinikubeKubectlCmd 0.1
80 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.1
81 TestFunctional/serial/ExtraConfig 36.71
82 TestFunctional/serial/ComponentHealth 0.06
83 TestFunctional/serial/LogsCmd 1.35
84 TestFunctional/serial/LogsFileCmd 1.42
85 TestFunctional/serial/InvalidService 4.44
87 TestFunctional/parallel/ConfigCmd 0.32
88 TestFunctional/parallel/DashboardCmd 15.42
89 TestFunctional/parallel/DryRun 0.28
90 TestFunctional/parallel/InternationalLanguage 0.15
91 TestFunctional/parallel/StatusCmd 1.17
95 TestFunctional/parallel/ServiceCmdConnect 6.55
96 TestFunctional/parallel/AddonsCmd 0.12
97 TestFunctional/parallel/PersistentVolumeClaim 46.15
99 TestFunctional/parallel/SSHCmd 0.47
100 TestFunctional/parallel/CpCmd 1.3
101 TestFunctional/parallel/MySQL 25.53
102 TestFunctional/parallel/FileSync 0.2
103 TestFunctional/parallel/CertSync 1.18
107 TestFunctional/parallel/NodeLabels 0.06
109 TestFunctional/parallel/NonActiveRuntimeDisabled 0.41
111 TestFunctional/parallel/License 1.12
112 TestFunctional/parallel/ServiceCmd/DeployApp 11.19
113 TestFunctional/parallel/ProfileCmd/profile_not_create 0.31
114 TestFunctional/parallel/ProfileCmd/profile_list 0.3
115 TestFunctional/parallel/ProfileCmd/profile_json_output 0.31
116 TestFunctional/parallel/MountCmd/any-port 9.51
117 TestFunctional/parallel/MountCmd/specific-port 1.96
118 TestFunctional/parallel/ServiceCmd/List 0.44
119 TestFunctional/parallel/ServiceCmd/JSONOutput 0.46
120 TestFunctional/parallel/ServiceCmd/HTTPS 0.39
121 TestFunctional/parallel/ServiceCmd/Format 0.76
122 TestFunctional/parallel/MountCmd/VerifyCleanup 1.72
123 TestFunctional/parallel/ServiceCmd/URL 0.4
124 TestFunctional/parallel/Version/short 0.05
125 TestFunctional/parallel/Version/components 0.64
126 TestFunctional/parallel/ImageCommands/ImageListShort 0.54
127 TestFunctional/parallel/ImageCommands/ImageListTable 0.6
128 TestFunctional/parallel/ImageCommands/ImageListJson 0.45
129 TestFunctional/parallel/ImageCommands/ImageListYaml 0.46
130 TestFunctional/parallel/ImageCommands/ImageBuild 11.06
131 TestFunctional/parallel/ImageCommands/Setup 1.76
132 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.9
133 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.91
134 TestFunctional/parallel/UpdateContextCmd/no_changes 0.1
135 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.1
136 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.09
137 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 2
138 TestFunctional/parallel/ImageCommands/ImageSaveToFile 3.67
148 TestFunctional/parallel/ImageCommands/ImageRemove 0.52
149 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.78
150 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.57
151 TestFunctional/delete_echo-server_images 0.04
152 TestFunctional/delete_my-image_image 0.01
153 TestFunctional/delete_minikube_cached_images 0.02
157 TestMultiControlPlane/serial/StartCluster 193.91
158 TestMultiControlPlane/serial/DeployApp 6.79
159 TestMultiControlPlane/serial/PingHostFromPods 1.2
160 TestMultiControlPlane/serial/AddWorkerNode 82.23
161 TestMultiControlPlane/serial/NodeLabels 0.06
162 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.54
163 TestMultiControlPlane/serial/CopyFile 12.63
165 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 3.48
167 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.4
169 TestMultiControlPlane/serial/DeleteSecondaryNode 16.81
170 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.38
172 TestMultiControlPlane/serial/RestartCluster 316.4
173 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.36
174 TestMultiControlPlane/serial/AddSecondaryNode 76.77
175 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.53
179 TestJSONOutput/start/Command 78.19
180 TestJSONOutput/start/Audit 0
182 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
183 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
185 TestJSONOutput/pause/Command 0.67
186 TestJSONOutput/pause/Audit 0
188 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
189 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
191 TestJSONOutput/unpause/Command 0.6
192 TestJSONOutput/unpause/Audit 0
194 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
195 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
197 TestJSONOutput/stop/Command 6.67
198 TestJSONOutput/stop/Audit 0
200 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
201 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
202 TestErrorJSONOutput 0.2
207 TestMainNoArgs 0.04
208 TestMinikubeProfile 87.42
211 TestMountStart/serial/StartWithMountFirst 24.82
212 TestMountStart/serial/VerifyMountFirst 0.38
213 TestMountStart/serial/StartWithMountSecond 27.2
214 TestMountStart/serial/VerifyMountSecond 0.36
215 TestMountStart/serial/DeleteFirst 0.7
216 TestMountStart/serial/VerifyMountPostDelete 0.37
217 TestMountStart/serial/Stop 1.27
218 TestMountStart/serial/RestartStopped 22.02
219 TestMountStart/serial/VerifyMountPostStop 0.37
222 TestMultiNode/serial/FreshStart2Nodes 109.59
223 TestMultiNode/serial/DeployApp2Nodes 6.6
224 TestMultiNode/serial/PingHostFrom2Pods 0.79
225 TestMultiNode/serial/AddNode 50.1
226 TestMultiNode/serial/MultiNodeLabels 0.06
227 TestMultiNode/serial/ProfileList 0.21
228 TestMultiNode/serial/CopyFile 7.1
229 TestMultiNode/serial/StopNode 2.2
230 TestMultiNode/serial/StartAfterStop 38.98
232 TestMultiNode/serial/DeleteNode 2.23
234 TestMultiNode/serial/RestartMultiNode 200.53
235 TestMultiNode/serial/ValidateNameConflict 40.64
242 TestScheduledStopUnix 109.52
246 TestRunningBinaryUpgrade 211.95
251 TestNoKubernetes/serial/StartNoK8sWithVersion 0.08
252 TestNoKubernetes/serial/StartWithK8s 90.08
261 TestPause/serial/Start 126.11
262 TestNoKubernetes/serial/StartWithStopK8s 39.22
263 TestNoKubernetes/serial/Start 52.3
264 TestPause/serial/SecondStartNoReconfiguration 48.6
265 TestNoKubernetes/serial/VerifyK8sNotRunning 0.2
266 TestNoKubernetes/serial/ProfileList 34.34
267 TestNoKubernetes/serial/Stop 1.3
268 TestNoKubernetes/serial/StartNoArgs 39.46
269 TestPause/serial/Pause 0.75
270 TestPause/serial/VerifyStatus 0.24
271 TestPause/serial/Unpause 0.66
272 TestPause/serial/PauseAgain 0.77
273 TestPause/serial/DeletePaused 0.83
274 TestPause/serial/VerifyDeletedResources 0.29
275 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.21
283 TestNetworkPlugins/group/false 3.01
287 TestStoppedBinaryUpgrade/Setup 2.26
288 TestStoppedBinaryUpgrade/Upgrade 131.26
292 TestStartStop/group/no-preload/serial/FirstStart 71.25
293 TestStoppedBinaryUpgrade/MinikubeLogs 0.9
295 TestStartStop/group/embed-certs/serial/FirstStart 81.79
296 TestStartStop/group/no-preload/serial/DeployApp 11.3
297 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.95
299 TestStartStop/group/embed-certs/serial/DeployApp 9.28
300 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.98
303 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 56.62
307 TestStartStop/group/no-preload/serial/SecondStart 680.12
309 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.27
310 TestStartStop/group/embed-certs/serial/SecondStart 572.27
311 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.93
313 TestStartStop/group/old-k8s-version/serial/Stop 4.29
314 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.18
317 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 423.8
327 TestStartStop/group/newest-cni/serial/FirstStart 47.47
328 TestNetworkPlugins/group/auto/Start 83.56
329 TestStartStop/group/newest-cni/serial/DeployApp 0
330 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.2
331 TestStartStop/group/newest-cni/serial/Stop 10.48
332 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.22
333 TestStartStop/group/newest-cni/serial/SecondStart 45.09
334 TestNetworkPlugins/group/kindnet/Start 76.26
335 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
336 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
337 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.31
338 TestStartStop/group/newest-cni/serial/Pause 2.62
339 TestNetworkPlugins/group/calico/Start 85.68
340 TestNetworkPlugins/group/auto/KubeletFlags 0.22
341 TestNetworkPlugins/group/auto/NetCatPod 10.25
342 TestNetworkPlugins/group/auto/DNS 0.16
343 TestNetworkPlugins/group/auto/Localhost 0.14
344 TestNetworkPlugins/group/auto/HairPin 0.14
345 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
346 TestNetworkPlugins/group/custom-flannel/Start 60.07
347 TestNetworkPlugins/group/kindnet/KubeletFlags 0.21
348 TestNetworkPlugins/group/kindnet/NetCatPod 11.28
349 TestNetworkPlugins/group/kindnet/DNS 0.18
350 TestNetworkPlugins/group/kindnet/Localhost 0.16
351 TestNetworkPlugins/group/kindnet/HairPin 0.16
352 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.27
353 TestStartStop/group/default-k8s-diff-port/serial/Pause 3.3
354 TestNetworkPlugins/group/enable-default-cni/Start 89.79
355 TestNetworkPlugins/group/flannel/Start 96.03
356 TestNetworkPlugins/group/calico/ControllerPod 6.01
357 TestNetworkPlugins/group/calico/KubeletFlags 0.2
358 TestNetworkPlugins/group/calico/NetCatPod 10.22
359 TestNetworkPlugins/group/calico/DNS 0.18
360 TestNetworkPlugins/group/calico/Localhost 0.17
361 TestNetworkPlugins/group/calico/HairPin 0.15
362 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.24
363 TestNetworkPlugins/group/custom-flannel/NetCatPod 14.13
364 TestNetworkPlugins/group/bridge/Start 65.95
365 TestNetworkPlugins/group/custom-flannel/DNS 0.19
366 TestNetworkPlugins/group/custom-flannel/Localhost 0.15
367 TestNetworkPlugins/group/custom-flannel/HairPin 0.14
368 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.2
369 TestNetworkPlugins/group/enable-default-cni/NetCatPod 10.22
370 TestNetworkPlugins/group/flannel/ControllerPod 6.01
371 TestNetworkPlugins/group/enable-default-cni/DNS 0.16
372 TestNetworkPlugins/group/enable-default-cni/Localhost 0.14
373 TestNetworkPlugins/group/enable-default-cni/HairPin 0.14
374 TestNetworkPlugins/group/flannel/KubeletFlags 0.21
375 TestNetworkPlugins/group/flannel/NetCatPod 11.24
376 TestNetworkPlugins/group/bridge/KubeletFlags 0.23
377 TestNetworkPlugins/group/bridge/NetCatPod 11.54
378 TestNetworkPlugins/group/flannel/DNS 0.16
379 TestNetworkPlugins/group/flannel/Localhost 0.12
380 TestNetworkPlugins/group/flannel/HairPin 0.12
381 TestNetworkPlugins/group/bridge/DNS 0.15
382 TestNetworkPlugins/group/bridge/Localhost 0.15
383 TestNetworkPlugins/group/bridge/HairPin 0.14
x
+
TestDownloadOnly/v1.20.0/json-events (28.03s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-119677 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-119677 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (28.032023193s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (28.03s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-119677
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-119677: exit status 85 (59.292955ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-119677 | jenkins | v1.34.0 | 14 Sep 24 16:43 UTC |          |
	|         | -p download-only-119677        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/14 16:43:45
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0914 16:43:45.552391   16028 out.go:345] Setting OutFile to fd 1 ...
	I0914 16:43:45.552491   16028 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 16:43:45.552495   16028 out.go:358] Setting ErrFile to fd 2...
	I0914 16:43:45.552500   16028 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 16:43:45.552666   16028 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19643-8806/.minikube/bin
	W0914 16:43:45.552778   16028 root.go:314] Error reading config file at /home/jenkins/minikube-integration/19643-8806/.minikube/config/config.json: open /home/jenkins/minikube-integration/19643-8806/.minikube/config/config.json: no such file or directory
	I0914 16:43:45.553325   16028 out.go:352] Setting JSON to true
	I0914 16:43:45.554246   16028 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":1570,"bootTime":1726330656,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0914 16:43:45.554376   16028 start.go:139] virtualization: kvm guest
	I0914 16:43:45.556972   16028 out.go:97] [download-only-119677] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	W0914 16:43:45.557094   16028 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19643-8806/.minikube/cache/preloaded-tarball: no such file or directory
	I0914 16:43:45.557144   16028 notify.go:220] Checking for updates...
	I0914 16:43:45.558396   16028 out.go:169] MINIKUBE_LOCATION=19643
	I0914 16:43:45.559922   16028 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0914 16:43:45.561487   16028 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19643-8806/kubeconfig
	I0914 16:43:45.563089   16028 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19643-8806/.minikube
	I0914 16:43:45.564498   16028 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0914 16:43:45.567064   16028 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0914 16:43:45.567300   16028 driver.go:394] Setting default libvirt URI to qemu:///system
	I0914 16:43:45.668370   16028 out.go:97] Using the kvm2 driver based on user configuration
	I0914 16:43:45.668399   16028 start.go:297] selected driver: kvm2
	I0914 16:43:45.668407   16028 start.go:901] validating driver "kvm2" against <nil>
	I0914 16:43:45.668837   16028 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 16:43:45.668990   16028 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19643-8806/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0914 16:43:45.684825   16028 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0914 16:43:45.684883   16028 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0914 16:43:45.685416   16028 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0914 16:43:45.685563   16028 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0914 16:43:45.685591   16028 cni.go:84] Creating CNI manager for ""
	I0914 16:43:45.685632   16028 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0914 16:43:45.685642   16028 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0914 16:43:45.685692   16028 start.go:340] cluster config:
	{Name:download-only-119677 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726281268-19643@sha256:cce8be0c1ac4e3d852132008ef1cc1dcf5b79f708d025db83f146ae65db32e8e Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-119677 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0914 16:43:45.685861   16028 iso.go:125] acquiring lock: {Name:mk538f7a4abc7956f82511183088cbfac1e66ebb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 16:43:45.688011   16028 out.go:97] Downloading VM boot image ...
	I0914 16:43:45.688056   16028 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19643/minikube-v1.34.0-1726281733-19643-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19643/minikube-v1.34.0-1726281733-19643-amd64.iso.sha256 -> /home/jenkins/minikube-integration/19643-8806/.minikube/cache/iso/amd64/minikube-v1.34.0-1726281733-19643-amd64.iso
	I0914 16:43:59.463900   16028 out.go:97] Starting "download-only-119677" primary control-plane node in "download-only-119677" cluster
	I0914 16:43:59.463920   16028 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0914 16:43:59.569309   16028 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0914 16:43:59.569359   16028 cache.go:56] Caching tarball of preloaded images
	I0914 16:43:59.569538   16028 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0914 16:43:59.571675   16028 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0914 16:43:59.571707   16028 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0914 16:43:59.670685   16028 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:f93b07cde9c3289306cbaeb7a1803c19 -> /home/jenkins/minikube-integration/19643-8806/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-119677 host does not exist
	  To start a cluster, run: "minikube start -p download-only-119677"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.06s)
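
The Last Start log above illustrates what --download-only does: it fetches the VM boot ISO and the v1.20.0 cri-o preload tarball (with an md5 checksum embedded in the download URL) into the minikube cache without ever creating the cluster, which is also why "minikube logs" exits 85 for the non-existent host. A hedged sketch of verifying the cached artifacts afterwards, with the Jenkins-specific prefix generalized to a default ~/.minikube layout (paths are assumptions, not output from this run):

	ls ~/.minikube/cache/iso/amd64/
	ls ~/.minikube/cache/preloaded-tarball/
	# compare against the md5 carried in the download URL above (f93b07cde9c3289306cbaeb7a1803c19)
	md5sum ~/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4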

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/DeleteAll (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.13s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-119677
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.1/json-events (12.53s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-357716 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-357716 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (12.529685564s)
--- PASS: TestDownloadOnly/v1.31.1/json-events (12.53s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.1/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/preload-exists
--- PASS: TestDownloadOnly/v1.31.1/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.1/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-357716
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-357716: exit status 85 (59.125008ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-119677 | jenkins | v1.34.0 | 14 Sep 24 16:43 UTC |                     |
	|         | -p download-only-119677        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.34.0 | 14 Sep 24 16:44 UTC | 14 Sep 24 16:44 UTC |
	| delete  | -p download-only-119677        | download-only-119677 | jenkins | v1.34.0 | 14 Sep 24 16:44 UTC | 14 Sep 24 16:44 UTC |
	| start   | -o=json --download-only        | download-only-357716 | jenkins | v1.34.0 | 14 Sep 24 16:44 UTC |                     |
	|         | -p download-only-357716        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/14 16:44:13
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0914 16:44:13.903687   16282 out.go:345] Setting OutFile to fd 1 ...
	I0914 16:44:13.903919   16282 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 16:44:13.903928   16282 out.go:358] Setting ErrFile to fd 2...
	I0914 16:44:13.903933   16282 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 16:44:13.904112   16282 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19643-8806/.minikube/bin
	I0914 16:44:13.904654   16282 out.go:352] Setting JSON to true
	I0914 16:44:13.905443   16282 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":1598,"bootTime":1726330656,"procs":173,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0914 16:44:13.905538   16282 start.go:139] virtualization: kvm guest
	I0914 16:44:13.907678   16282 out.go:97] [download-only-357716] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0914 16:44:13.907797   16282 notify.go:220] Checking for updates...
	I0914 16:44:13.909196   16282 out.go:169] MINIKUBE_LOCATION=19643
	I0914 16:44:13.910514   16282 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0914 16:44:13.911801   16282 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19643-8806/kubeconfig
	I0914 16:44:13.912980   16282 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19643-8806/.minikube
	I0914 16:44:13.914147   16282 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0914 16:44:13.916846   16282 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0914 16:44:13.917069   16282 driver.go:394] Setting default libvirt URI to qemu:///system
	I0914 16:44:13.949903   16282 out.go:97] Using the kvm2 driver based on user configuration
	I0914 16:44:13.949933   16282 start.go:297] selected driver: kvm2
	I0914 16:44:13.949938   16282 start.go:901] validating driver "kvm2" against <nil>
	I0914 16:44:13.950296   16282 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 16:44:13.950379   16282 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19643-8806/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0914 16:44:13.965508   16282 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0914 16:44:13.965559   16282 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0914 16:44:13.966046   16282 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0914 16:44:13.966206   16282 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0914 16:44:13.966239   16282 cni.go:84] Creating CNI manager for ""
	I0914 16:44:13.966283   16282 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0914 16:44:13.966293   16282 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0914 16:44:13.966355   16282 start.go:340] cluster config:
	{Name:download-only-357716 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726281268-19643@sha256:cce8be0c1ac4e3d852132008ef1cc1dcf5b79f708d025db83f146ae65db32e8e Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:download-only-357716 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0914 16:44:13.966447   16282 iso.go:125] acquiring lock: {Name:mk538f7a4abc7956f82511183088cbfac1e66ebb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 16:44:13.968325   16282 out.go:97] Starting "download-only-357716" primary control-plane node in "download-only-357716" cluster
	I0914 16:44:13.968349   16282 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0914 16:44:14.065047   16282 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.1/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0914 16:44:14.065083   16282 cache.go:56] Caching tarball of preloaded images
	I0914 16:44:14.065269   16282 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0914 16:44:14.067349   16282 out.go:97] Downloading Kubernetes v1.31.1 preload ...
	I0914 16:44:14.067371   16282 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 ...
	I0914 16:44:14.164541   16282 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.1/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4?checksum=md5:aa79045e4550b9510ee496fee0d50abb -> /home/jenkins/minikube-integration/19643-8806/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-357716 host does not exist
	  To start a cluster, run: "minikube start -p download-only-357716"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.1/LogsDuration (0.06s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.1/DeleteAll (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.31.1/DeleteAll (0.14s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-357716
--- PASS: TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
x
+
TestBinaryMirror (0.59s)

                                                
                                                
=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-539617 --alsologtostderr --binary-mirror http://127.0.0.1:35769 --driver=kvm2  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-539617" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-539617
--- PASS: TestBinaryMirror (0.59s)

                                                
                                    
x
+
TestOffline (111s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-696123 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-696123 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio: (1m50.204455689s)
helpers_test.go:175: Cleaning up "offline-crio-696123" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-696123
--- PASS: TestOffline (111.00s)

                                                
                                    
x
+
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1037: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-996992
addons_test.go:1037: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-996992: exit status 85 (54.485404ms)

                                                
                                                
-- stdout --
	* Profile "addons-996992" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-996992"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

                                                
                                    
x
+
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1048: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-996992
addons_test.go:1048: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-996992: exit status 85 (54.715918ms)

                                                
                                                
-- stdout --
	* Profile "addons-996992" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-996992"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)
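
Both PreSetup checks rely on minikube exiting with status 85 when the target profile does not exist, while still printing the "Profile ... not found" hint. A small shell sketch of the same check outside the test harness (treating 85 as "profile missing" is inferred from this report, not from minikube documentation):

	out/minikube-linux-amd64 addons enable dashboard -p addons-996992
	rc=$?
	if [ "$rc" -eq 85 ]; then
	  echo "profile addons-996992 not found; run: minikube start -p addons-996992"
	fi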

                                                
                                    
x
+
TestAddons/Setup (137.63s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-linux-amd64 start -p addons-996992 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:110: (dbg) Done: out/minikube-linux-amd64 start -p addons-996992 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller: (2m17.634675617s)
--- PASS: TestAddons/Setup (137.63s)
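
Setup enables the whole addon list in a single "minikube start" invocation; once the profile is up, individual addons from that list can be inspected and toggled on their own, as the parallel tests below do. A short sketch using the same profile and addon names (standard minikube subcommands, not commands copied from this report):

	out/minikube-linux-amd64 addons list -p addons-996992
	out/minikube-linux-amd64 -p addons-996992 addons disable helm-tiller --alsologtostderr -v=1
	out/minikube-linux-amd64 addons enable inspektor-gadget -p addons-996992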

                                                
                                    
x
+
TestAddons/serial/GCPAuth/Namespaces (0.16s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:656: (dbg) Run:  kubectl --context addons-996992 create ns new-namespace
addons_test.go:670: (dbg) Run:  kubectl --context addons-996992 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.16s)

                                                
                                    
x
+
TestAddons/parallel/InspektorGadget (11.76s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-h8s8m" [4ef10d47-f009-491e-a7b1-67ed74b1754b] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.004371813s
addons_test.go:851: (dbg) Run:  out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-996992
addons_test.go:851: (dbg) Done: out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-996992: (5.758865851s)
--- PASS: TestAddons/parallel/InspektorGadget (11.76s)

                                                
                                    
x
+
TestAddons/parallel/HelmTiller (13.17s)

                                                
                                                
=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:458: tiller-deploy stabilized in 2.327613ms
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-b48cc5f79-z2hbn" [62ae1fe8-58f5-422e-b2b8-abcdaf2e7693] Running
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.005130216s
addons_test.go:475: (dbg) Run:  kubectl --context addons-996992 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:475: (dbg) Done: kubectl --context addons-996992 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (6.767185596s)
addons_test.go:492: (dbg) Run:  out/minikube-linux-amd64 -p addons-996992 addons disable helm-tiller --alsologtostderr -v=1
addons_test.go:492: (dbg) Done: out/minikube-linux-amd64 -p addons-996992 addons disable helm-tiller --alsologtostderr -v=1: (1.396507153s)
--- PASS: TestAddons/parallel/HelmTiller (13.17s)

                                                
                                    
x
+
TestAddons/parallel/CSI (57.31s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
addons_test.go:567: csi-hostpath-driver pods stabilized in 6.655324ms
addons_test.go:570: (dbg) Run:  kubectl --context addons-996992 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:575: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-996992 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-996992 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-996992 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-996992 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-996992 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-996992 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-996992 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-996992 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-996992 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-996992 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-996992 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-996992 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-996992 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-996992 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-996992 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:580: (dbg) Run:  kubectl --context addons-996992 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:585: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [8e53a335-a631-45f1-a0a8-ab4a6be6f3d7] Pending
helpers_test.go:344: "task-pv-pod" [8e53a335-a631-45f1-a0a8-ab4a6be6f3d7] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [8e53a335-a631-45f1-a0a8-ab4a6be6f3d7] Running
addons_test.go:585: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 15.004014173s
addons_test.go:590: (dbg) Run:  kubectl --context addons-996992 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:595: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-996992 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-996992 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:600: (dbg) Run:  kubectl --context addons-996992 delete pod task-pv-pod
addons_test.go:606: (dbg) Run:  kubectl --context addons-996992 delete pvc hpvc
addons_test.go:612: (dbg) Run:  kubectl --context addons-996992 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:617: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-996992 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-996992 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-996992 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-996992 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-996992 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-996992 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-996992 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-996992 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-996992 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:622: (dbg) Run:  kubectl --context addons-996992 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:627: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [b259e885-ff70-4428-b444-5278775740a7] Pending
helpers_test.go:344: "task-pv-pod-restore" [b259e885-ff70-4428-b444-5278775740a7] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [b259e885-ff70-4428-b444-5278775740a7] Running
addons_test.go:627: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 9.004108414s
addons_test.go:632: (dbg) Run:  kubectl --context addons-996992 delete pod task-pv-pod-restore
addons_test.go:636: (dbg) Run:  kubectl --context addons-996992 delete pvc hpvc-restore
addons_test.go:640: (dbg) Run:  kubectl --context addons-996992 delete volumesnapshot new-snapshot-demo
addons_test.go:644: (dbg) Run:  out/minikube-linux-amd64 -p addons-996992 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:644: (dbg) Done: out/minikube-linux-amd64 -p addons-996992 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.784306796s)
addons_test.go:648: (dbg) Run:  out/minikube-linux-amd64 -p addons-996992 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (57.31s)
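
For reference, the create/snapshot/restore sequence exercised above can be reproduced by hand against the same addon. The manifests below are a minimal sketch using the object names from the log; the storage class (csi-hostpath-sc) and snapshot class (csi-hostpath-snapclass) names are assumptions, since the testdata files themselves are not shown in this report.

# source claim backed by the csi-hostpath driver
kubectl --context addons-996992 apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: hpvc
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: csi-hostpath-sc        # assumed class created by the addon
  resources:
    requests:
      storage: 1Gi
EOF

# snapshot of the claim, then a second claim restored from that snapshot
kubectl --context addons-996992 apply -f - <<'EOF'
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: new-snapshot-demo
spec:
  volumeSnapshotClassName: csi-hostpath-snapclass   # assumed snapshot class
  source:
    persistentVolumeClaimName: hpvc
EOF

kubectl --context addons-996992 apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: hpvc-restore
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: csi-hostpath-sc
  dataSource:
    apiGroup: snapshot.storage.k8s.io
    kind: VolumeSnapshot
    name: new-snapshot-demo
  resources:
    requests:
      storage: 1Gi
EOF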

                                                
                                    
TestAddons/parallel/Headlamp (19.59s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:830: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-996992 --alsologtostderr -v=1
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-57fb76fcdb-zjhfp" [5c6c90aa-35d8-48a2-b0d0-eb4bf05b23e5] Pending
helpers_test.go:344: "headlamp-57fb76fcdb-zjhfp" [5c6c90aa-35d8-48a2-b0d0-eb4bf05b23e5] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-57fb76fcdb-zjhfp" [5c6c90aa-35d8-48a2-b0d0-eb4bf05b23e5] Running
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 13.004432498s
addons_test.go:839: (dbg) Run:  out/minikube-linux-amd64 -p addons-996992 addons disable headlamp --alsologtostderr -v=1
addons_test.go:839: (dbg) Done: out/minikube-linux-amd64 -p addons-996992 addons disable headlamp --alsologtostderr -v=1: (5.685089644s)
--- PASS: TestAddons/parallel/Headlamp (19.59s)

                                                
                                    
TestAddons/parallel/CloudSpanner (5.65s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-769b77f747-k7sbs" [d686895c-fee3-4221-b5f8-4bf0cca257e7] Running
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.004323474s
addons_test.go:870: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p addons-996992
--- PASS: TestAddons/parallel/CloudSpanner (5.65s)

                                                
                                    
TestAddons/parallel/LocalPath (56.55s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:982: (dbg) Run:  kubectl --context addons-996992 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:988: (dbg) Run:  kubectl --context addons-996992 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:992: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-996992 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-996992 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-996992 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-996992 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-996992 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-996992 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-996992 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [02c1a415-c265-4d13-a217-483bd4e8a3f1] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [02c1a415-c265-4d13-a217-483bd4e8a3f1] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [02c1a415-c265-4d13-a217-483bd4e8a3f1] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 6.004061616s
addons_test.go:1000: (dbg) Run:  kubectl --context addons-996992 get pvc test-pvc -o=json
addons_test.go:1009: (dbg) Run:  out/minikube-linux-amd64 -p addons-996992 ssh "cat /opt/local-path-provisioner/pvc-065cb3df-7fd3-4993-9a34-5c093c32d00a_default_test-pvc/file1"
addons_test.go:1021: (dbg) Run:  kubectl --context addons-996992 delete pod test-local-path
addons_test.go:1025: (dbg) Run:  kubectl --context addons-996992 delete pvc test-pvc
addons_test.go:1029: (dbg) Run:  out/minikube-linux-amd64 -p addons-996992 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1029: (dbg) Done: out/minikube-linux-amd64 -p addons-996992 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.741637167s)
--- PASS: TestAddons/parallel/LocalPath (56.55s)
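
The local-path flow above boils down to a claim plus a one-shot writer pod. A minimal sketch, assuming the default "local-path" storage class installed by the storage-provisioner-rancher addon (the real testdata manifests are not shown in this report):

kubectl --context addons-996992 apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-pvc
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: local-path              # assumed class name
  resources:
    requests:
      storage: 64Mi
---
apiVersion: v1
kind: Pod
metadata:
  name: test-local-path
  labels:
    run: test-local-path
spec:
  restartPolicy: Never
  containers:
  - name: busybox
    image: busybox
    command: ["sh", "-c", "echo local-path-test > /data/file1"]
    volumeMounts:
    - name: data
      mountPath: /data
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: test-pvc
EOF
# the provisioner materialises the volume under /opt/local-path-provisioner/<pv-name>_default_test-pvc
# on the node, which is why the "minikube ssh cat .../file1" step above can read the file back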

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (5.57s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-v9pgt" [3f1896cc-99c7-4c98-8b64-9e40965c553b] Running
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.003689523s
addons_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 addons disable nvidia-device-plugin -p addons-996992
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.57s)

                                                
                                    
TestAddons/parallel/Yakd (12.25s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-67d98fc6b-6w892" [345e6c36-623a-477e-9c8c-38b577dc887d] Running
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.004114255s
addons_test.go:1076: (dbg) Run:  out/minikube-linux-amd64 -p addons-996992 addons disable yakd --alsologtostderr -v=1
addons_test.go:1076: (dbg) Done: out/minikube-linux-amd64 -p addons-996992 addons disable yakd --alsologtostderr -v=1: (6.248677378s)
--- PASS: TestAddons/parallel/Yakd (12.25s)

                                                
                                    
TestAddons/StoppedEnableDisable (7.56s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-996992
addons_test.go:174: (dbg) Done: out/minikube-linux-amd64 stop -p addons-996992: (7.29113291s)
addons_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-996992
addons_test.go:182: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-996992
addons_test.go:187: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-996992
--- PASS: TestAddons/StoppedEnableDisable (7.56s)

                                                
                                    
TestCertOptions (64.24s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-476980 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-476980 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio: (1m2.817050662s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-476980 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-476980 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-476980 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-476980" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-476980
--- PASS: TestCertOptions (64.24s)
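
What the test asserts, spelled out as a manual check (a sketch; the exact strings the test greps for are not shown in this report):

# the extra --apiserver-ips / --apiserver-names should appear as SANs in the generated cert
minikube -p cert-options-476980 ssh -- \
  "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" \
  | grep -A1 "Subject Alternative Name"
# and the kubeconfig entry should use the non-default --apiserver-port
kubectl config view -o jsonpath='{.clusters[?(@.name=="cert-options-476980")].cluster.server}'
# expected to end in :8555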

                                                
                                    
TestCertExpiration (261.61s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-724454 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-724454 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio: (1m0.647937218s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-724454 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-724454 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio: (19.940283516s)
helpers_test.go:175: Cleaning up "cert-expiration-724454" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-724454
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-724454: (1.019066945s)
--- PASS: TestCertExpiration (261.61s)
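
The test relies on the 3-minute certificates actually lapsing before the second start; the rotation can be observed from the host using the same cert path checked in TestCertOptions (a sketch):

minikube -p cert-expiration-724454 ssh -- \
  "sudo openssl x509 -noout -enddate -in /var/lib/minikube/certs/apiserver.crt"
# notAfter is ~3 minutes out after the first start; after the restart with
# --cert-expiration=8760h it should move roughly one year into the future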

                                                
                                    
TestForceSystemdFlag (45.25s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-213182 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-213182 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (43.07781169s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-213182 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-213182" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-213182
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-213182: (1.946752182s)
--- PASS: TestForceSystemdFlag (45.25s)
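
The drop-in the test reads is where minikube records the cgroup manager for CRI-O. A sketch of the same check (the exact key the test looks for is an assumption):

minikube -p force-systemd-flag-213182 ssh -- "cat /etc/crio/crio.conf.d/02-crio.conf" \
  | grep -i cgroup_manager
# with --force-systemd the expected value is: cgroup_manager = "systemd"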

                                                
                                    
TestForceSystemdEnv (44.59s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-738857 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-738857 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (43.60960694s)
helpers_test.go:175: Cleaning up "force-systemd-env-738857" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-738857
--- PASS: TestForceSystemdEnv (44.59s)

                                                
                                    
TestKVMDriverInstallOrUpdate (4.5s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

                                                
                                                

                                                
                                                
=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (4.50s)

                                                
                                    
TestErrorSpam/setup (43.09s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-616235 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-616235 --driver=kvm2  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-616235 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-616235 --driver=kvm2  --container-runtime=crio: (43.085881321s)
--- PASS: TestErrorSpam/setup (43.09s)

                                                
                                    
TestErrorSpam/start (0.33s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-616235 --log_dir /tmp/nospam-616235 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-616235 --log_dir /tmp/nospam-616235 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-616235 --log_dir /tmp/nospam-616235 start --dry-run
--- PASS: TestErrorSpam/start (0.33s)

                                                
                                    
TestErrorSpam/status (0.72s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-616235 --log_dir /tmp/nospam-616235 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-616235 --log_dir /tmp/nospam-616235 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-616235 --log_dir /tmp/nospam-616235 status
--- PASS: TestErrorSpam/status (0.72s)

                                                
                                    
TestErrorSpam/pause (1.55s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-616235 --log_dir /tmp/nospam-616235 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-616235 --log_dir /tmp/nospam-616235 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-616235 --log_dir /tmp/nospam-616235 pause
--- PASS: TestErrorSpam/pause (1.55s)

                                                
                                    
TestErrorSpam/unpause (1.7s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-616235 --log_dir /tmp/nospam-616235 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-616235 --log_dir /tmp/nospam-616235 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-616235 --log_dir /tmp/nospam-616235 unpause
--- PASS: TestErrorSpam/unpause (1.70s)

                                                
                                    
TestErrorSpam/stop (5.45s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-616235 --log_dir /tmp/nospam-616235 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-616235 --log_dir /tmp/nospam-616235 stop: (1.619722663s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-616235 --log_dir /tmp/nospam-616235 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-616235 --log_dir /tmp/nospam-616235 stop: (1.809469934s)
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-616235 --log_dir /tmp/nospam-616235 stop
error_spam_test.go:182: (dbg) Done: out/minikube-linux-amd64 -p nospam-616235 --log_dir /tmp/nospam-616235 stop: (2.017802331s)
--- PASS: TestErrorSpam/stop (5.45s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /home/jenkins/minikube-integration/19643-8806/.minikube/files/etc/test/nested/copy/16016/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (47.39s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-linux-amd64 start -p functional-764671 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio
E0914 17:01:45.625319   16016 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/addons-996992/client.crt: no such file or directory" logger="UnhandledError"
E0914 17:01:45.632009   16016 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/addons-996992/client.crt: no such file or directory" logger="UnhandledError"
E0914 17:01:45.643333   16016 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/addons-996992/client.crt: no such file or directory" logger="UnhandledError"
E0914 17:01:45.664718   16016 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/addons-996992/client.crt: no such file or directory" logger="UnhandledError"
E0914 17:01:45.706138   16016 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/addons-996992/client.crt: no such file or directory" logger="UnhandledError"
E0914 17:01:45.787591   16016 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/addons-996992/client.crt: no such file or directory" logger="UnhandledError"
E0914 17:01:45.949187   16016 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/addons-996992/client.crt: no such file or directory" logger="UnhandledError"
E0914 17:01:46.270843   16016 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/addons-996992/client.crt: no such file or directory" logger="UnhandledError"
E0914 17:01:46.912893   16016 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/addons-996992/client.crt: no such file or directory" logger="UnhandledError"
E0914 17:01:48.194318   16016 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/addons-996992/client.crt: no such file or directory" logger="UnhandledError"
E0914 17:01:50.757324   16016 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/addons-996992/client.crt: no such file or directory" logger="UnhandledError"
E0914 17:01:55.879248   16016 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/addons-996992/client.crt: no such file or directory" logger="UnhandledError"
E0914 17:02:06.121368   16016 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/addons-996992/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:2234: (dbg) Done: out/minikube-linux-amd64 start -p functional-764671 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio: (47.393743967s)
--- PASS: TestFunctional/serial/StartWithProxy (47.39s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (53.08s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:659: (dbg) Run:  out/minikube-linux-amd64 start -p functional-764671 --alsologtostderr -v=8
E0914 17:02:26.602886   16016 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/addons-996992/client.crt: no such file or directory" logger="UnhandledError"
E0914 17:03:07.565249   16016 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/addons-996992/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:659: (dbg) Done: out/minikube-linux-amd64 start -p functional-764671 --alsologtostderr -v=8: (53.082504859s)
functional_test.go:663: soft start took 53.083280493s for "functional-764671" cluster.
--- PASS: TestFunctional/serial/SoftStart (53.08s)

                                                
                                    
TestFunctional/serial/KubeContext (0.04s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.08s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-764671 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.08s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (3.78s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-764671 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-764671 cache add registry.k8s.io/pause:3.1: (1.246936113s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-764671 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-764671 cache add registry.k8s.io/pause:3.3: (1.270711125s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-764671 cache add registry.k8s.io/pause:latest
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-764671 cache add registry.k8s.io/pause:latest: (1.265133714s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.78s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (2.1s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-764671 /tmp/TestFunctionalserialCacheCmdcacheadd_local252180883/001
functional_test.go:1089: (dbg) Run:  out/minikube-linux-amd64 -p functional-764671 cache add minikube-local-cache-test:functional-764671
functional_test.go:1089: (dbg) Done: out/minikube-linux-amd64 -p functional-764671 cache add minikube-local-cache-test:functional-764671: (1.768524777s)
functional_test.go:1094: (dbg) Run:  out/minikube-linux-amd64 -p functional-764671 cache delete minikube-local-cache-test:functional-764671
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-764671
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (2.10s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.04s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.04s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.22s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-linux-amd64 -p functional-764671 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.22s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (1.73s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-linux-amd64 -p functional-764671 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-linux-amd64 -p functional-764671 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-764671 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (202.824681ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-linux-amd64 -p functional-764671 cache reload
functional_test.go:1158: (dbg) Done: out/minikube-linux-amd64 -p functional-764671 cache reload: (1.055872426s)
functional_test.go:1163: (dbg) Run:  out/minikube-linux-amd64 -p functional-764671 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.73s)
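
The round trip above, condensed into one annotated sequence (same commands as the log, run by hand):

# 1. remove the image from the node's container runtime
minikube -p functional-764671 ssh sudo crictl rmi registry.k8s.io/pause:latest
# 2. confirm it is gone (crictl inspecti exits non-zero)
minikube -p functional-764671 ssh sudo crictl inspecti registry.k8s.io/pause:latest || echo "image removed"
# 3. push everything in minikube's local cache back onto the node
minikube -p functional-764671 cache reload
# 4. the image is present again
minikube -p functional-764671 ssh sudo crictl inspecti registry.k8s.io/pause:latest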

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.09s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.09s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-linux-amd64 -p functional-764671 kubectl -- --context functional-764671 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.10s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-764671 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.10s)

                                                
                                    
TestFunctional/serial/ExtraConfig (36.71s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-linux-amd64 start -p functional-764671 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:757: (dbg) Done: out/minikube-linux-amd64 start -p functional-764671 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (36.708889964s)
functional_test.go:761: restart took 36.708993008s for "functional-764671" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (36.71s)
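
A quick way to confirm the --extra-config flag actually reached the control plane after the restart (a sketch; jsonpath quoting may need adjusting for your shell):

kubectl --context functional-764671 -n kube-system get pod -l component=kube-apiserver \
  -o jsonpath='{.items[0].spec.containers[0].command}' | tr ',' '\n' | grep enable-admission-plugins
# expected to include NamespaceAutoProvision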

                                                
                                    
TestFunctional/serial/ComponentHealth (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-764671 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.06s)

                                                
                                    
TestFunctional/serial/LogsCmd (1.35s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-linux-amd64 -p functional-764671 logs
functional_test.go:1236: (dbg) Done: out/minikube-linux-amd64 -p functional-764671 logs: (1.350815675s)
--- PASS: TestFunctional/serial/LogsCmd (1.35s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.42s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-linux-amd64 -p functional-764671 logs --file /tmp/TestFunctionalserialLogsFileCmd3963838819/001/logs.txt
functional_test.go:1250: (dbg) Done: out/minikube-linux-amd64 -p functional-764671 logs --file /tmp/TestFunctionalserialLogsFileCmd3963838819/001/logs.txt: (1.419872202s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.42s)

                                                
                                    
TestFunctional/serial/InvalidService (4.44s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-764671 apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-764671
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-764671: exit status 115 (275.816524ms)

                                                
                                                
-- stdout --
	|-----------|-------------|-------------|-----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |             URL             |
	|-----------|-------------|-------------|-----------------------------|
	| default   | invalid-svc |          80 | http://192.168.39.215:32385 |
	|-----------|-------------|-------------|-----------------------------|
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-764671 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.44s)
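
The error message ("no running pod for service invalid-svc found") implies a Service with no live endpoints. The actual testdata/invalidsvc.yaml is not shown in this report; the manifest below is a hypothetical way to provoke the same SVC_UNREACHABLE exit:

kubectl --context functional-764671 apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: invalid-svc
spec:
  type: NodePort
  selector:
    app: no-such-pod         # nothing carries this label, so the service never gets endpoints
  ports:
  - port: 80
    targetPort: 80
EOF
minikube service invalid-svc -p functional-764671   # exits 115 with SVC_UNREACHABLE, as above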

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.32s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-764671 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-764671 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-764671 config get cpus: exit status 14 (53.797808ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-764671 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-764671 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-764671 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-764671 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-764671 config get cpus: exit status 14 (44.041367ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.32s)

                                                
                                    
TestFunctional/parallel/DashboardCmd (15.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-764671 --alsologtostderr -v=1]
functional_test.go:910: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-764671 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 25335: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (15.42s)

                                                
                                    
TestFunctional/parallel/DryRun (0.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-linux-amd64 start -p functional-764671 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:974: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-764671 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (148.46813ms)

                                                
                                                
-- stdout --
	* [functional-764671] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19643
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19643-8806/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19643-8806/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0914 17:04:07.610514   25235 out.go:345] Setting OutFile to fd 1 ...
	I0914 17:04:07.610617   25235 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 17:04:07.610625   25235 out.go:358] Setting ErrFile to fd 2...
	I0914 17:04:07.610630   25235 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 17:04:07.610827   25235 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19643-8806/.minikube/bin
	I0914 17:04:07.611327   25235 out.go:352] Setting JSON to false
	I0914 17:04:07.612240   25235 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":2792,"bootTime":1726330656,"procs":232,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0914 17:04:07.612337   25235 start.go:139] virtualization: kvm guest
	I0914 17:04:07.614444   25235 out.go:177] * [functional-764671] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0914 17:04:07.615943   25235 notify.go:220] Checking for updates...
	I0914 17:04:07.618182   25235 out.go:177]   - MINIKUBE_LOCATION=19643
	I0914 17:04:07.619675   25235 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0914 17:04:07.621100   25235 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19643-8806/kubeconfig
	I0914 17:04:07.622475   25235 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19643-8806/.minikube
	I0914 17:04:07.624111   25235 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0914 17:04:07.625342   25235 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0914 17:04:07.626870   25235 config.go:182] Loaded profile config "functional-764671": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0914 17:04:07.627220   25235 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 17:04:07.627280   25235 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 17:04:07.645123   25235 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44355
	I0914 17:04:07.645663   25235 main.go:141] libmachine: () Calling .GetVersion
	I0914 17:04:07.646419   25235 main.go:141] libmachine: Using API Version  1
	I0914 17:04:07.646461   25235 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 17:04:07.647509   25235 main.go:141] libmachine: () Calling .GetMachineName
	I0914 17:04:07.647731   25235 main.go:141] libmachine: (functional-764671) Calling .DriverName
	I0914 17:04:07.648006   25235 driver.go:394] Setting default libvirt URI to qemu:///system
	I0914 17:04:07.648444   25235 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 17:04:07.648494   25235 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 17:04:07.664174   25235 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33365
	I0914 17:04:07.664691   25235 main.go:141] libmachine: () Calling .GetVersion
	I0914 17:04:07.665223   25235 main.go:141] libmachine: Using API Version  1
	I0914 17:04:07.665251   25235 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 17:04:07.665664   25235 main.go:141] libmachine: () Calling .GetMachineName
	I0914 17:04:07.665886   25235 main.go:141] libmachine: (functional-764671) Calling .DriverName
	I0914 17:04:07.701561   25235 out.go:177] * Using the kvm2 driver based on existing profile
	I0914 17:04:07.702678   25235 start.go:297] selected driver: kvm2
	I0914 17:04:07.702694   25235 start.go:901] validating driver "kvm2" against &{Name:functional-764671 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19643/minikube-v1.34.0-1726281733-19643-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726281268-19643@sha256:cce8be0c1ac4e3d852132008ef1cc1dcf5b79f708d025db83f146ae65db32e8e Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-764671 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.215 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0914 17:04:07.702838   25235 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0914 17:04:07.704852   25235 out.go:201] 
	W0914 17:04:07.706147   25235 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0914 17:04:07.707514   25235 out.go:201] 

                                                
                                                
** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-linux-amd64 start -p functional-764671 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.28s)

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.15s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-linux-amd64 start -p functional-764671 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-764671 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (148.535324ms)

                                                
                                                
-- stdout --
	* [functional-764671] minikube v1.34.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19643
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19643-8806/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19643-8806/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0914 17:04:07.467865   25203 out.go:345] Setting OutFile to fd 1 ...
	I0914 17:04:07.468112   25203 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 17:04:07.468121   25203 out.go:358] Setting ErrFile to fd 2...
	I0914 17:04:07.468125   25203 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 17:04:07.468383   25203 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19643-8806/.minikube/bin
	I0914 17:04:07.468911   25203 out.go:352] Setting JSON to false
	I0914 17:04:07.469844   25203 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":2791,"bootTime":1726330656,"procs":230,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0914 17:04:07.469943   25203 start.go:139] virtualization: kvm guest
	I0914 17:04:07.472524   25203 out.go:177] * [functional-764671] minikube v1.34.0 sur Ubuntu 20.04 (kvm/amd64)
	I0914 17:04:07.474302   25203 out.go:177]   - MINIKUBE_LOCATION=19643
	I0914 17:04:07.474292   25203 notify.go:220] Checking for updates...
	I0914 17:04:07.477012   25203 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0914 17:04:07.478252   25203 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19643-8806/kubeconfig
	I0914 17:04:07.479601   25203 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19643-8806/.minikube
	I0914 17:04:07.481292   25203 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0914 17:04:07.482647   25203 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0914 17:04:07.484400   25203 config.go:182] Loaded profile config "functional-764671": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0914 17:04:07.484780   25203 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 17:04:07.484851   25203 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 17:04:07.500450   25203 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42541
	I0914 17:04:07.500966   25203 main.go:141] libmachine: () Calling .GetVersion
	I0914 17:04:07.501608   25203 main.go:141] libmachine: Using API Version  1
	I0914 17:04:07.501634   25203 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 17:04:07.501966   25203 main.go:141] libmachine: () Calling .GetMachineName
	I0914 17:04:07.502264   25203 main.go:141] libmachine: (functional-764671) Calling .DriverName
	I0914 17:04:07.502483   25203 driver.go:394] Setting default libvirt URI to qemu:///system
	I0914 17:04:07.502787   25203 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 17:04:07.502821   25203 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 17:04:07.518669   25203 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40449
	I0914 17:04:07.519156   25203 main.go:141] libmachine: () Calling .GetVersion
	I0914 17:04:07.519691   25203 main.go:141] libmachine: Using API Version  1
	I0914 17:04:07.519718   25203 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 17:04:07.520061   25203 main.go:141] libmachine: () Calling .GetMachineName
	I0914 17:04:07.520230   25203 main.go:141] libmachine: (functional-764671) Calling .DriverName
	I0914 17:04:07.552565   25203 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I0914 17:04:07.553675   25203 start.go:297] selected driver: kvm2
	I0914 17:04:07.553691   25203 start.go:901] validating driver "kvm2" against &{Name:functional-764671 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19643/minikube-v1.34.0-1726281733-19643-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726281268-19643@sha256:cce8be0c1ac4e3d852132008ef1cc1dcf5b79f708d025db83f146ae65db32e8e Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.31.1 ClusterName:functional-764671 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.215 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mo
unt:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0914 17:04:07.553824   25203 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0914 17:04:07.556021   25203 out.go:201] 
	W0914 17:04:07.557351   25203 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0914 17:04:07.558539   25203 out.go:201] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.15s)
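
For reference, the localized output above can be reproduced by forcing a French locale before a dry-run start. This is a minimal sketch, assuming minikube picks its translations up from the standard LC_ALL/LANG environment variables; the profile name and memory value are copied from the command in the log:

# Force a French locale, then repeat the dry-run with an undersized memory request.
LC_ALL=fr_FR.UTF-8 out/minikube-linux-amd64 start -p functional-764671 --dry-run --memory 250MB --alsologtostderr --driver=kvm2 --container-runtime=crio
# Expected: exit status 23 and the RSRC_INSUFFICIENT_REQ_MEMORY message rendered in French, as captured above.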

                                                
                                    
x
+
TestFunctional/parallel/StatusCmd (1.17s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-linux-amd64 -p functional-764671 status
functional_test.go:860: (dbg) Run:  out/minikube-linux-amd64 -p functional-764671 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-linux-amd64 -p functional-764671 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.17s)
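
The status subcommand accepts a Go template, so individual fields can be pulled out directly. A minimal sketch using the same field names the test exercises (Host, Kubelet, APIServer, Kubeconfig):

# Print selected status fields via a Go template, or the whole status as JSON.
out/minikube-linux-amd64 -p functional-764671 status -f 'host:{{.Host}},kubelet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}'
out/minikube-linux-amd64 -p functional-764671 status -o json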

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmdConnect (6.55s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1629: (dbg) Run:  kubectl --context functional-764671 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-764671 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-jw5kq" [15c4b02d-9dc0-4cba-a4c0-e70f2ecd0be2] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-jw5kq" [15c4b02d-9dc0-4cba-a4c0-e70f2ecd0be2] Running
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 6.005697875s
functional_test.go:1649: (dbg) Run:  out/minikube-linux-amd64 -p functional-764671 service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://192.168.39.215:30322
functional_test.go:1675: http://192.168.39.215:30322: success! body:

                                                
                                                

                                                
                                                
Hostname: hello-node-connect-67bdd5bbb4-jw5kq

                                                
                                                
Pod Information:
	-no pod information available-

                                                
                                                
Server values:
	server_version=nginx: 1.13.3 - lua: 10008

                                                
                                                
Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.39.215:8080/

                                                
                                                
Request Headers:
	accept-encoding=gzip
	host=192.168.39.215:30322
	user-agent=Go-http-client/1.1

                                                
                                                
Request Body:
	-no body in request-

                                                
                                                
--- PASS: TestFunctional/parallel/ServiceCmdConnect (6.55s)
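
The flow above (create a Deployment, expose it as a NodePort Service, probe the URL that minikube reports) can be replayed by hand. The commands are copied from the log; the wget probe is an illustrative stand-in for the test's HTTP check:

# Deploy echoserver, expose it on a NodePort, then fetch the URL minikube reports.
kubectl --context functional-764671 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
kubectl --context functional-764671 expose deployment hello-node-connect --type=NodePort --port=8080
URL=$(out/minikube-linux-amd64 -p functional-764671 service hello-node-connect --url)
wget -qO- "$URL"   # echoserver answers with the pod hostname and request details, as shown above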

                                                
                                    
x
+
TestFunctional/parallel/AddonsCmd (0.12s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-linux-amd64 -p functional-764671 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-linux-amd64 -p functional-764671 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.12s)

                                                
                                    
x
+
TestFunctional/parallel/PersistentVolumeClaim (46.15s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [1545587e-be82-414e-921c-790ee5384d2e] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.003519332s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-764671 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-764671 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-764671 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-764671 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [5e5eb4f7-ea5f-4542-aa39-8664cb2f26eb] Pending
helpers_test.go:344: "sp-pod" [5e5eb4f7-ea5f-4542-aa39-8664cb2f26eb] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [5e5eb4f7-ea5f-4542-aa39-8664cb2f26eb] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 25.085916433s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-764671 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-764671 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-764671 delete -f testdata/storage-provisioner/pod.yaml: (1.180980408s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-764671 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [44660a33-1d4d-4c3c-9c13-f96f40613fd9] Pending
helpers_test.go:344: "sp-pod" [44660a33-1d4d-4c3c-9c13-f96f40613fd9] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [44660a33-1d4d-4c3c-9c13-f96f40613fd9] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 13.004729134s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-764671 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (46.15s)
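
The PVC test applies a claim and a pod from testdata, writes a marker file, recreates the pod, and checks the file survived. The manifests themselves are not reproduced in this report, so the following is only a rough stand-in: the claim name is taken from the log, while the size and access mode are hypothetical.

# Create a claim roughly like testdata/storage-provisioner/pvc.yaml (size/access mode assumed).
kubectl --context functional-764671 apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myclaim
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 500Mi
EOF
# After a pod mounts the claim: write, recreate the pod, then verify the data persisted.
kubectl --context functional-764671 exec sp-pod -- touch /tmp/mount/foo
kubectl --context functional-764671 exec sp-pod -- ls /tmp/mount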

                                                
                                    
x
+
TestFunctional/parallel/SSHCmd (0.47s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-linux-amd64 -p functional-764671 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-linux-amd64 -p functional-764671 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.47s)

                                                
                                    
x
+
TestFunctional/parallel/CpCmd (1.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-764671 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-764671 ssh -n functional-764671 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-764671 cp functional-764671:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd2250508568/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-764671 ssh -n functional-764671 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-764671 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-764671 ssh -n functional-764671 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.30s)

                                                
                                    
x
+
TestFunctional/parallel/MySQL (25.53s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1793: (dbg) Run:  kubectl --context functional-764671 replace --force -f testdata/mysql.yaml
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-6cdb49bbb-hxc82" [4a3423e7-f148-4fbb-b5d9-6358d62642d9] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-6cdb49bbb-hxc82" [4a3423e7-f148-4fbb-b5d9-6358d62642d9] Running
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 24.004469422s
functional_test.go:1807: (dbg) Run:  kubectl --context functional-764671 exec mysql-6cdb49bbb-hxc82 -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-764671 exec mysql-6cdb49bbb-hxc82 -- mysql -ppassword -e "show databases;": exit status 1 (129.812686ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1807: (dbg) Run:  kubectl --context functional-764671 exec mysql-6cdb49bbb-hxc82 -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (25.53s)
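
The first exec above fails with ERROR 2002 because mysqld inside the pod is still initializing even though the pod is already Running; the test simply runs the query again. A minimal sketch of doing that wait explicitly (pod name copied from the log):

# Poll until mysqld accepts connections instead of failing on the first attempt.
until kubectl --context functional-764671 exec mysql-6cdb49bbb-hxc82 -- mysql -ppassword -e "show databases;" >/dev/null 2>&1; do
  sleep 2   # /var/run/mysqld/mysqld.sock is not ready immediately after the pod turns Running
done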

                                                
                                    
x
+
TestFunctional/parallel/FileSync (0.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/16016/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-linux-amd64 -p functional-764671 ssh "sudo cat /etc/test/nested/copy/16016/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.20s)

                                                
                                    
x
+
TestFunctional/parallel/CertSync (1.18s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/16016.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-764671 ssh "sudo cat /etc/ssl/certs/16016.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/16016.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-764671 ssh "sudo cat /usr/share/ca-certificates/16016.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-764671 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/160162.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-764671 ssh "sudo cat /etc/ssl/certs/160162.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/160162.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-764671 ssh "sudo cat /usr/share/ca-certificates/160162.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-764671 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.18s)

                                                
                                    
x
+
TestFunctional/parallel/NodeLabels (0.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-764671 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.06s)

                                                
                                    
x
+
TestFunctional/parallel/NonActiveRuntimeDisabled (0.41s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-linux-amd64 -p functional-764671 ssh "sudo systemctl is-active docker"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-764671 ssh "sudo systemctl is-active docker": exit status 1 (220.613211ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2027: (dbg) Run:  out/minikube-linux-amd64 -p functional-764671 ssh "sudo systemctl is-active containerd"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-764671 ssh "sudo systemctl is-active containerd": exit status 1 (193.66141ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.41s)
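
systemctl is-active exits 0 only for active units and prints the unit state otherwise, so on a crio node the stopped docker and containerd units produce "inactive" plus exit status 3, which ssh propagates; the non-zero exits above are the expected result. A short sketch of the same check (the crio line is an added assumption about the unit name on the node):

# The active runtime should report "active"; the disabled ones report "inactive" with exit status 3.
out/minikube-linux-amd64 -p functional-764671 ssh "sudo systemctl is-active crio"
out/minikube-linux-amd64 -p functional-764671 ssh "sudo systemctl is-active docker"
out/minikube-linux-amd64 -p functional-764671 ssh "sudo systemctl is-active containerd"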

                                                
                                    
x
+
TestFunctional/parallel/License (1.12s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-linux-amd64 license
functional_test.go:2288: (dbg) Done: out/minikube-linux-amd64 license: (1.117756668s)
--- PASS: TestFunctional/parallel/License (1.12s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/DeployApp (11.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1439: (dbg) Run:  kubectl --context functional-764671 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-764671 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-6b9f76b5c7-pf2k4" [62a451e5-e39a-4487-aa54-65b760d9d3d9] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-6b9f76b5c7-pf2k4" [62a451e5-e39a-4487-aa54-65b760d9d3d9] Running
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 11.003349671s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (11.19s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_not_create (0.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.31s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_list (0.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1315: Took "249.541497ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1329: Took "51.098717ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.30s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_json_output (0.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1366: Took "247.82707ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1379: Took "58.606488ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.31s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/any-port (9.51s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-764671 /tmp/TestFunctionalparallelMountCmdany-port2780518349/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1726333446184576222" to /tmp/TestFunctionalparallelMountCmdany-port2780518349/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1726333446184576222" to /tmp/TestFunctionalparallelMountCmdany-port2780518349/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1726333446184576222" to /tmp/TestFunctionalparallelMountCmdany-port2780518349/001/test-1726333446184576222
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-764671 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-764671 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (241.527962ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-764671 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-764671 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Sep 14 17:04 created-by-test
-rw-r--r-- 1 docker docker 24 Sep 14 17:04 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Sep 14 17:04 test-1726333446184576222
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-764671 ssh cat /mount-9p/test-1726333446184576222
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-764671 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [f1174bcd-824b-4207-90f1-8ed8dc79525a] Pending
helpers_test.go:344: "busybox-mount" [f1174bcd-824b-4207-90f1-8ed8dc79525a] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [f1174bcd-824b-4207-90f1-8ed8dc79525a] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [f1174bcd-824b-4207-90f1-8ed8dc79525a] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 7.004473641s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-764671 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-764671 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-764671 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-764671 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-764671 /tmp/TestFunctionalparallelMountCmdany-port2780518349/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (9.51s)
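
The mount test runs "minikube mount" as a background process and then checks the 9p mount from inside the VM; the first findmnt can race the mount becoming visible, which is why a single non-zero exit followed by a retry appears above. A hand-run sketch of the same steps (the host directory is a hypothetical stand-in):

# Start the 9p mount in the background, verify it from the guest, then tear it down.
out/minikube-linux-amd64 mount -p functional-764671 /tmp/hostdir:/mount-9p &
MOUNT_PID=$!
out/minikube-linux-amd64 -p functional-764671 ssh "findmnt -T /mount-9p | grep 9p"   # may need one retry right after mounting
out/minikube-linux-amd64 -p functional-764671 ssh -- ls -la /mount-9p
kill "$MOUNT_PID"   # cleanup, as the test's stopping step does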

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/specific-port (1.96s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-764671 /tmp/TestFunctionalparallelMountCmdspecific-port3255061670/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-764671 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-764671 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (258.677882ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-764671 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-764671 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-764671 /tmp/TestFunctionalparallelMountCmdspecific-port3255061670/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-764671 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-764671 ssh "sudo umount -f /mount-9p": exit status 1 (293.932894ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-764671 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-764671 /tmp/TestFunctionalparallelMountCmdspecific-port3255061670/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.96s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/List (0.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-linux-amd64 -p functional-764671 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.44s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/JSONOutput (0.46s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-linux-amd64 -p functional-764671 service list -o json
functional_test.go:1494: Took "462.455461ms" to run "out/minikube-linux-amd64 -p functional-764671 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.46s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/HTTPS (0.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-linux-amd64 -p functional-764671 service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://192.168.39.215:30163
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.39s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/Format (0.76s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-linux-amd64 -p functional-764671 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.76s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/VerifyCleanup (1.72s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-764671 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3416641347/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-764671 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3416641347/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-764671 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3416641347/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-764671 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-764671 ssh "findmnt -T" /mount1: exit status 1 (250.535787ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-764671 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-764671 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-764671 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-764671 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-764671 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3416641347/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-764671 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3416641347/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-764671 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3416641347/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.72s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/URL (0.4s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-linux-amd64 -p functional-764671 service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://192.168.39.215:30163
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.40s)

                                                
                                    
x
+
TestFunctional/parallel/Version/short (0.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-linux-amd64 -p functional-764671 version --short
--- PASS: TestFunctional/parallel/Version/short (0.05s)

                                                
                                    
x
+
TestFunctional/parallel/Version/components (0.64s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-linux-amd64 -p functional-764671 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.64s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListShort (0.54s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-764671 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-764671 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.31.1
registry.k8s.io/kube-proxy:v1.31.1
registry.k8s.io/kube-controller-manager:v1.31.1
registry.k8s.io/kube-apiserver:v1.31.1
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.3
localhost/minikube-local-cache-test:functional-764671
localhost/kicbase/echo-server:functional-764671
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/kindest/kindnetd:v20240813-c6f155d6
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-764671 image ls --format short --alsologtostderr:
I0914 17:04:31.940681   27027 out.go:345] Setting OutFile to fd 1 ...
I0914 17:04:31.940937   27027 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0914 17:04:31.940951   27027 out.go:358] Setting ErrFile to fd 2...
I0914 17:04:31.940958   27027 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0914 17:04:31.941229   27027 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19643-8806/.minikube/bin
I0914 17:04:31.942039   27027 config.go:182] Loaded profile config "functional-764671": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0914 17:04:31.942206   27027 config.go:182] Loaded profile config "functional-764671": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0914 17:04:31.942779   27027 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0914 17:04:31.942818   27027 main.go:141] libmachine: Launching plugin server for driver kvm2
I0914 17:04:31.961086   27027 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33529
I0914 17:04:31.961531   27027 main.go:141] libmachine: () Calling .GetVersion
I0914 17:04:31.962181   27027 main.go:141] libmachine: Using API Version  1
I0914 17:04:31.962215   27027 main.go:141] libmachine: () Calling .SetConfigRaw
I0914 17:04:31.962545   27027 main.go:141] libmachine: () Calling .GetMachineName
I0914 17:04:31.962763   27027 main.go:141] libmachine: (functional-764671) Calling .GetState
I0914 17:04:31.964554   27027 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0914 17:04:31.964627   27027 main.go:141] libmachine: Launching plugin server for driver kvm2
I0914 17:04:31.979973   27027 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34083
I0914 17:04:31.980303   27027 main.go:141] libmachine: () Calling .GetVersion
I0914 17:04:31.981183   27027 main.go:141] libmachine: Using API Version  1
I0914 17:04:31.981200   27027 main.go:141] libmachine: () Calling .SetConfigRaw
I0914 17:04:31.981551   27027 main.go:141] libmachine: () Calling .GetMachineName
I0914 17:04:31.981860   27027 main.go:141] libmachine: (functional-764671) Calling .DriverName
I0914 17:04:31.982059   27027 ssh_runner.go:195] Run: systemctl --version
I0914 17:04:31.982081   27027 main.go:141] libmachine: (functional-764671) Calling .GetSSHHostname
I0914 17:04:31.984900   27027 main.go:141] libmachine: (functional-764671) DBG | domain functional-764671 has defined MAC address 52:54:00:b8:5e:d5 in network mk-functional-764671
I0914 17:04:31.985364   27027 main.go:141] libmachine: (functional-764671) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b8:5e:d5", ip: ""} in network mk-functional-764671: {Iface:virbr1 ExpiryTime:2024-09-14 18:01:46 +0000 UTC Type:0 Mac:52:54:00:b8:5e:d5 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:functional-764671 Clientid:01:52:54:00:b8:5e:d5}
I0914 17:04:31.985418   27027 main.go:141] libmachine: (functional-764671) DBG | domain functional-764671 has defined IP address 192.168.39.215 and MAC address 52:54:00:b8:5e:d5 in network mk-functional-764671
I0914 17:04:31.985711   27027 main.go:141] libmachine: (functional-764671) Calling .GetSSHPort
I0914 17:04:31.986353   27027 main.go:141] libmachine: (functional-764671) Calling .GetSSHKeyPath
I0914 17:04:31.986551   27027 main.go:141] libmachine: (functional-764671) Calling .GetSSHUsername
I0914 17:04:31.986695   27027 sshutil.go:53] new ssh client: &{IP:192.168.39.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/functional-764671/id_rsa Username:docker}
I0914 17:04:32.133691   27027 ssh_runner.go:195] Run: sudo crictl images --output json
I0914 17:04:32.424233   27027 main.go:141] libmachine: Making call to close driver server
I0914 17:04:32.424252   27027 main.go:141] libmachine: (functional-764671) Calling .Close
I0914 17:04:32.424525   27027 main.go:141] libmachine: (functional-764671) DBG | Closing plugin on server side
I0914 17:04:32.424593   27027 main.go:141] libmachine: Successfully made call to close driver server
I0914 17:04:32.424605   27027 main.go:141] libmachine: Making call to close connection to plugin binary
I0914 17:04:32.424614   27027 main.go:141] libmachine: Making call to close driver server
I0914 17:04:32.424625   27027 main.go:141] libmachine: (functional-764671) Calling .Close
I0914 17:04:32.424904   27027 main.go:141] libmachine: Successfully made call to close driver server
I0914 17:04:32.424921   27027 main.go:141] libmachine: Making call to close connection to plugin binary
I0914 17:04:32.424901   27027 main.go:141] libmachine: (functional-764671) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.54s)
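
The image listing is served by crictl on the node (the "sudo crictl images --output json" call is visible in the stderr above); the same inventory can be pulled in the other formats this suite exercises:

# List images in table or JSON format, or query crictl directly on the node.
out/minikube-linux-amd64 -p functional-764671 image ls --format table
out/minikube-linux-amd64 -p functional-764671 image ls --format json
out/minikube-linux-amd64 -p functional-764671 ssh "sudo crictl images --output json"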

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListTable (0.6s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-764671 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-764671 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| registry.k8s.io/pause                   | 3.1                | da86e6ba6ca19 | 747kB  |
| registry.k8s.io/pause                   | 3.3                | 0184c1613d929 | 686kB  |
| docker.io/kindest/kindnetd              | v20240813-c6f155d6 | 12968670680f4 | 87.2MB |
| docker.io/library/nginx                 | latest             | 39286ab8a5e14 | 192MB  |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 56cc512116c8f | 4.63MB |
| registry.k8s.io/kube-proxy              | v1.31.1            | 60c005f310ff3 | 92.7MB |
| registry.k8s.io/kube-scheduler          | v1.31.1            | 9aa1fad941575 | 68.4MB |
| registry.k8s.io/echoserver              | 1.8                | 82e4c8a736a4f | 97.8MB |
| registry.k8s.io/kube-apiserver          | v1.31.1            | 6bab7719df100 | 95.2MB |
| registry.k8s.io/kube-controller-manager | v1.31.1            | 175ffd71cce3d | 89.4MB |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | 6e38f40d628db | 31.5MB |
| registry.k8s.io/etcd                    | 3.5.15-0           | 2e96e5913fc06 | 149MB  |
| registry.k8s.io/pause                   | latest             | 350b164e7ae1d | 247kB  |
| localhost/kicbase/echo-server           | functional-764671  | 9056ab77afb8e | 4.94MB |
| localhost/minikube-local-cache-test     | functional-764671  | 5203a3b6c6722 | 3.33kB |
| registry.k8s.io/coredns/coredns         | v1.11.3            | c69fa2e9cbf5f | 63.3MB |
| registry.k8s.io/pause                   | 3.10               | 873ed75102791 | 742kB  |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-764671 image ls --format table --alsologtostderr:
I0914 17:04:32.930008   27147 out.go:345] Setting OutFile to fd 1 ...
I0914 17:04:32.930284   27147 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0914 17:04:32.930295   27147 out.go:358] Setting ErrFile to fd 2...
I0914 17:04:32.930301   27147 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0914 17:04:32.930469   27147 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19643-8806/.minikube/bin
I0914 17:04:32.931044   27147 config.go:182] Loaded profile config "functional-764671": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0914 17:04:32.931144   27147 config.go:182] Loaded profile config "functional-764671": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0914 17:04:32.931520   27147 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0914 17:04:32.931562   27147 main.go:141] libmachine: Launching plugin server for driver kvm2
I0914 17:04:32.946538   27147 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34469
I0914 17:04:32.947046   27147 main.go:141] libmachine: () Calling .GetVersion
I0914 17:04:32.947734   27147 main.go:141] libmachine: Using API Version  1
I0914 17:04:32.947763   27147 main.go:141] libmachine: () Calling .SetConfigRaw
I0914 17:04:32.948097   27147 main.go:141] libmachine: () Calling .GetMachineName
I0914 17:04:32.948300   27147 main.go:141] libmachine: (functional-764671) Calling .GetState
I0914 17:04:32.950919   27147 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0914 17:04:32.950969   27147 main.go:141] libmachine: Launching plugin server for driver kvm2
I0914 17:04:32.965929   27147 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44863
I0914 17:04:32.966337   27147 main.go:141] libmachine: () Calling .GetVersion
I0914 17:04:32.966835   27147 main.go:141] libmachine: Using API Version  1
I0914 17:04:32.966854   27147 main.go:141] libmachine: () Calling .SetConfigRaw
I0914 17:04:32.967263   27147 main.go:141] libmachine: () Calling .GetMachineName
I0914 17:04:32.967483   27147 main.go:141] libmachine: (functional-764671) Calling .DriverName
I0914 17:04:32.967716   27147 ssh_runner.go:195] Run: systemctl --version
I0914 17:04:32.967758   27147 main.go:141] libmachine: (functional-764671) Calling .GetSSHHostname
I0914 17:04:32.971087   27147 main.go:141] libmachine: (functional-764671) DBG | domain functional-764671 has defined MAC address 52:54:00:b8:5e:d5 in network mk-functional-764671
I0914 17:04:32.971457   27147 main.go:141] libmachine: (functional-764671) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b8:5e:d5", ip: ""} in network mk-functional-764671: {Iface:virbr1 ExpiryTime:2024-09-14 18:01:46 +0000 UTC Type:0 Mac:52:54:00:b8:5e:d5 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:functional-764671 Clientid:01:52:54:00:b8:5e:d5}
I0914 17:04:32.971486   27147 main.go:141] libmachine: (functional-764671) DBG | domain functional-764671 has defined IP address 192.168.39.215 and MAC address 52:54:00:b8:5e:d5 in network mk-functional-764671
I0914 17:04:32.971640   27147 main.go:141] libmachine: (functional-764671) Calling .GetSSHPort
I0914 17:04:32.971820   27147 main.go:141] libmachine: (functional-764671) Calling .GetSSHKeyPath
I0914 17:04:32.971972   27147 main.go:141] libmachine: (functional-764671) Calling .GetSSHUsername
I0914 17:04:32.972210   27147 sshutil.go:53] new ssh client: &{IP:192.168.39.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/functional-764671/id_rsa Username:docker}
I0914 17:04:33.115802   27147 ssh_runner.go:195] Run: sudo crictl images --output json
I0914 17:04:33.470984   27147 main.go:141] libmachine: Making call to close driver server
I0914 17:04:33.471014   27147 main.go:141] libmachine: (functional-764671) Calling .Close
I0914 17:04:33.471389   27147 main.go:141] libmachine: Successfully made call to close driver server
I0914 17:04:33.471431   27147 main.go:141] libmachine: Making call to close connection to plugin binary
I0914 17:04:33.471441   27147 main.go:141] libmachine: Making call to close driver server
I0914 17:04:33.471449   27147 main.go:141] libmachine: (functional-764671) Calling .Close
I0914 17:04:33.471725   27147 main.go:141] libmachine: (functional-764671) DBG | Closing plugin on server side
I0914 17:04:33.471752   27147 main.go:141] libmachine: Successfully made call to close driver server
I0914 17:04:33.471781   27147 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.60s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListJson (0.45s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-764671 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-764671 image ls --format json --alsologtostderr:
[{"id":"6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee","repoDigests":["registry.k8s.io/kube-apiserver@sha256:1f30d71692d2ab71ce2c1dd5fab86e0cb00ce888d21de18806f5482021d18771","registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb"],"repoTags":["registry.k8s.io/kube-apiserver:v1.31.1"],"size":"95237600"},{"id":"873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136","repoDigests":["registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a","registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"],"repoTags":["registry.k8s.io/pause:3.10"],"size":"742080"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":["localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf"],"repoTags":["localhost/kicbase/echo-server:functional-764671"],"size":"4943877"},{"id":"5203a3b6c67221e23fb95f
9f026d90b5fd962439c00e326faebca9c330b779a8","repoDigests":["localhost/minikube-local-cache-test@sha256:5a2a02a6c7044de1ff26cbb803dd4d0fcb7486cb1c64a7b2455924e56f7f63f0"],"repoTags":["localhost/minikube-local-cache-test:functional-764671"],"size":"3330"},{"id":"c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6","repoDigests":["registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e","registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.3"],"size":"63273227"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"97846543"},{"id":"60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561","repoDigests":["registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a
90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44","registry.k8s.io/kube-proxy@sha256:bb26bcf4490a4653ecb77ceb883c0fd8dd876f104f776aa0a6cbf9df68b16af2"],"repoTags":["registry.k8s.io/kube-proxy:v1.31.1"],"size":"92733849"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558"
,"repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029"],"repoTags":[],"size":"249229937"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a","docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"43824855"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"6e38f40d628db3002f5617342c8872c935de5
30d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b","repoDigests":["registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0","registry.k8s.io/kube-scheduler@sha256:cb9d9404dddf0c6728b99a42d10d8ab1ece2a1c793ef1d7b03eddaeac26864d8"],"repoTags":["registry.k8s.io/kube-scheduler:v1.31.1"],"size":"68420934"},{"id":"12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f","repoDigests":["docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b","docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"],"repoTags":
["docker.io/kindest/kindnetd:v20240813-c6f155d6"],"size":"87190579"},{"id":"39286ab8a5e14aeaf5fdd6e2fac76e0c8d31a0c07224f0ee5e6be502f12e93f3","repoDigests":["docker.io/library/nginx@sha256:04ba374043ccd2fc5c593885c0eacddebabd5ca375f9323666f28dfd5a9710e3","docker.io/library/nginx@sha256:88a0a069d5e9865fcaaf8c1e53ba6bf3d8d987b0fdc5e0135fec8ce8567d673e"],"repoTags":["docker.io/library/nginx:latest"],"size":"191853369"},{"id":"2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4","repoDigests":["registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d","registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"],"repoTags":["registry.k8s.io/etcd:3.5.15-0"],"size":"149009664"},{"id":"175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1","registry.k8s.io/kube-controller-manager@sha256:e6c5253433
f9032cff2bd9b1f41e29b9691a6d6ec97903896c0ca5f069a63748"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.31.1"],"size":"89437508"}]
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-764671 image ls --format json --alsologtostderr:
I0914 17:04:32.481819   27091 out.go:345] Setting OutFile to fd 1 ...
I0914 17:04:32.481955   27091 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0914 17:04:32.481965   27091 out.go:358] Setting ErrFile to fd 2...
I0914 17:04:32.481972   27091 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0914 17:04:32.482174   27091 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19643-8806/.minikube/bin
I0914 17:04:32.482766   27091 config.go:182] Loaded profile config "functional-764671": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0914 17:04:32.482916   27091 config.go:182] Loaded profile config "functional-764671": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0914 17:04:32.483314   27091 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0914 17:04:32.483352   27091 main.go:141] libmachine: Launching plugin server for driver kvm2
I0914 17:04:32.498828   27091 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38691
I0914 17:04:32.499337   27091 main.go:141] libmachine: () Calling .GetVersion
I0914 17:04:32.499940   27091 main.go:141] libmachine: Using API Version  1
I0914 17:04:32.499962   27091 main.go:141] libmachine: () Calling .SetConfigRaw
I0914 17:04:32.500298   27091 main.go:141] libmachine: () Calling .GetMachineName
I0914 17:04:32.500486   27091 main.go:141] libmachine: (functional-764671) Calling .GetState
I0914 17:04:32.502599   27091 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0914 17:04:32.502654   27091 main.go:141] libmachine: Launching plugin server for driver kvm2
I0914 17:04:32.517652   27091 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42929
I0914 17:04:32.518099   27091 main.go:141] libmachine: () Calling .GetVersion
I0914 17:04:32.518628   27091 main.go:141] libmachine: Using API Version  1
I0914 17:04:32.518663   27091 main.go:141] libmachine: () Calling .SetConfigRaw
I0914 17:04:32.518971   27091 main.go:141] libmachine: () Calling .GetMachineName
I0914 17:04:32.519183   27091 main.go:141] libmachine: (functional-764671) Calling .DriverName
I0914 17:04:32.519387   27091 ssh_runner.go:195] Run: systemctl --version
I0914 17:04:32.519421   27091 main.go:141] libmachine: (functional-764671) Calling .GetSSHHostname
I0914 17:04:32.522594   27091 main.go:141] libmachine: (functional-764671) DBG | domain functional-764671 has defined MAC address 52:54:00:b8:5e:d5 in network mk-functional-764671
I0914 17:04:32.522962   27091 main.go:141] libmachine: (functional-764671) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b8:5e:d5", ip: ""} in network mk-functional-764671: {Iface:virbr1 ExpiryTime:2024-09-14 18:01:46 +0000 UTC Type:0 Mac:52:54:00:b8:5e:d5 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:functional-764671 Clientid:01:52:54:00:b8:5e:d5}
I0914 17:04:32.523000   27091 main.go:141] libmachine: (functional-764671) DBG | domain functional-764671 has defined IP address 192.168.39.215 and MAC address 52:54:00:b8:5e:d5 in network mk-functional-764671
I0914 17:04:32.523123   27091 main.go:141] libmachine: (functional-764671) Calling .GetSSHPort
I0914 17:04:32.523277   27091 main.go:141] libmachine: (functional-764671) Calling .GetSSHKeyPath
I0914 17:04:32.523442   27091 main.go:141] libmachine: (functional-764671) Calling .GetSSHUsername
I0914 17:04:32.523630   27091 sshutil.go:53] new ssh client: &{IP:192.168.39.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/functional-764671/id_rsa Username:docker}
I0914 17:04:32.630193   27091 ssh_runner.go:195] Run: sudo crictl images --output json
I0914 17:04:32.878075   27091 main.go:141] libmachine: Making call to close driver server
I0914 17:04:32.878093   27091 main.go:141] libmachine: (functional-764671) Calling .Close
I0914 17:04:32.878402   27091 main.go:141] libmachine: Successfully made call to close driver server
I0914 17:04:32.878418   27091 main.go:141] libmachine: Making call to close connection to plugin binary
I0914 17:04:32.878457   27091 main.go:141] libmachine: Making call to close driver server
I0914 17:04:32.878472   27091 main.go:141] libmachine: (functional-764671) Calling .Close
I0914 17:04:32.878703   27091 main.go:141] libmachine: Successfully made call to close driver server
I0914 17:04:32.878713   27091 main.go:141] libmachine: (functional-764671) DBG | Closing plugin on server side
I0914 17:04:32.878722   27091 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.45s)
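Note: the JSON printed by "image ls --format json" above is a flat array of image objects (id, repoDigests, repoTags, size), so it can be filtered outside the test harness. A minimal sketch, assuming jq is available on the workstation (jq is not part of this test run), that lists only the tagged images:

  out/minikube-linux-amd64 -p functional-764671 image ls --format json \
    | jq -r '.[] | select(.repoTags | length > 0) | .repoTags[]'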

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListYaml (0.46s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-764671 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-764671 image ls --format yaml --alsologtostderr:
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests:
- localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
repoTags:
- localhost/kicbase/echo-server:functional-764671
size: "4943877"
- id: c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e
- registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.3
size: "63273227"
- id: 60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561
repoDigests:
- registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44
- registry.k8s.io/kube-proxy@sha256:bb26bcf4490a4653ecb77ceb883c0fd8dd876f104f776aa0a6cbf9df68b16af2
repoTags:
- registry.k8s.io/kube-proxy:v1.31.1
size: "92733849"
- id: 873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136
repoDigests:
- registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a
- registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a
repoTags:
- registry.k8s.io/pause:3.10
size: "742080"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "43824855"
- id: 39286ab8a5e14aeaf5fdd6e2fac76e0c8d31a0c07224f0ee5e6be502f12e93f3
repoDigests:
- docker.io/library/nginx@sha256:04ba374043ccd2fc5c593885c0eacddebabd5ca375f9323666f28dfd5a9710e3
- docker.io/library/nginx@sha256:88a0a069d5e9865fcaaf8c1e53ba6bf3d8d987b0fdc5e0135fec8ce8567d673e
repoTags:
- docker.io/library/nginx:latest
size: "191853369"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: 175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1
- registry.k8s.io/kube-controller-manager@sha256:e6c5253433f9032cff2bd9b1f41e29b9691a6d6ec97903896c0ca5f069a63748
repoTags:
- registry.k8s.io/kube-controller-manager:v1.31.1
size: "89437508"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "97846543"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:1f30d71692d2ab71ce2c1dd5fab86e0cb00ce888d21de18806f5482021d18771
- registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb
repoTags:
- registry.k8s.io/kube-apiserver:v1.31.1
size: "95237600"
- id: 9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0
- registry.k8s.io/kube-scheduler@sha256:cb9d9404dddf0c6728b99a42d10d8ab1ece2a1c793ef1d7b03eddaeac26864d8
repoTags:
- registry.k8s.io/kube-scheduler:v1.31.1
size: "68420934"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f
repoDigests:
- docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b
- docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166
repoTags:
- docker.io/kindest/kindnetd:v20240813-c6f155d6
size: "87190579"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029
repoTags: []
size: "249229937"
- id: 5203a3b6c67221e23fb95f9f026d90b5fd962439c00e326faebca9c330b779a8
repoDigests:
- localhost/minikube-local-cache-test@sha256:5a2a02a6c7044de1ff26cbb803dd4d0fcb7486cb1c64a7b2455924e56f7f63f0
repoTags:
- localhost/minikube-local-cache-test:functional-764671
size: "3330"
- id: 2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4
repoDigests:
- registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d
- registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a
repoTags:
- registry.k8s.io/etcd:3.5.15-0
size: "149009664"

                                                
                                                
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-764671 image ls --format yaml --alsologtostderr:
I0914 17:04:31.940640   27028 out.go:345] Setting OutFile to fd 1 ...
I0914 17:04:31.940759   27028 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0914 17:04:31.940771   27028 out.go:358] Setting ErrFile to fd 2...
I0914 17:04:31.940776   27028 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0914 17:04:31.941030   27028 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19643-8806/.minikube/bin
I0914 17:04:31.941775   27028 config.go:182] Loaded profile config "functional-764671": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0914 17:04:31.941912   27028 config.go:182] Loaded profile config "functional-764671": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0914 17:04:31.942453   27028 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0914 17:04:31.942501   27028 main.go:141] libmachine: Launching plugin server for driver kvm2
I0914 17:04:31.959081   27028 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42623
I0914 17:04:31.959542   27028 main.go:141] libmachine: () Calling .GetVersion
I0914 17:04:31.960205   27028 main.go:141] libmachine: Using API Version  1
I0914 17:04:31.960244   27028 main.go:141] libmachine: () Calling .SetConfigRaw
I0914 17:04:31.960541   27028 main.go:141] libmachine: () Calling .GetMachineName
I0914 17:04:31.960734   27028 main.go:141] libmachine: (functional-764671) Calling .GetState
I0914 17:04:31.962579   27028 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0914 17:04:31.962627   27028 main.go:141] libmachine: Launching plugin server for driver kvm2
I0914 17:04:31.979122   27028 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40661
I0914 17:04:31.979580   27028 main.go:141] libmachine: () Calling .GetVersion
I0914 17:04:31.980115   27028 main.go:141] libmachine: Using API Version  1
I0914 17:04:31.980132   27028 main.go:141] libmachine: () Calling .SetConfigRaw
I0914 17:04:31.980471   27028 main.go:141] libmachine: () Calling .GetMachineName
I0914 17:04:31.980637   27028 main.go:141] libmachine: (functional-764671) Calling .DriverName
I0914 17:04:31.980843   27028 ssh_runner.go:195] Run: systemctl --version
I0914 17:04:31.980874   27028 main.go:141] libmachine: (functional-764671) Calling .GetSSHHostname
I0914 17:04:31.984526   27028 main.go:141] libmachine: (functional-764671) DBG | domain functional-764671 has defined MAC address 52:54:00:b8:5e:d5 in network mk-functional-764671
I0914 17:04:31.984857   27028 main.go:141] libmachine: (functional-764671) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b8:5e:d5", ip: ""} in network mk-functional-764671: {Iface:virbr1 ExpiryTime:2024-09-14 18:01:46 +0000 UTC Type:0 Mac:52:54:00:b8:5e:d5 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:functional-764671 Clientid:01:52:54:00:b8:5e:d5}
I0914 17:04:31.984899   27028 main.go:141] libmachine: (functional-764671) DBG | domain functional-764671 has defined IP address 192.168.39.215 and MAC address 52:54:00:b8:5e:d5 in network mk-functional-764671
I0914 17:04:31.985107   27028 main.go:141] libmachine: (functional-764671) Calling .GetSSHPort
I0914 17:04:31.985271   27028 main.go:141] libmachine: (functional-764671) Calling .GetSSHKeyPath
I0914 17:04:31.985445   27028 main.go:141] libmachine: (functional-764671) Calling .GetSSHUsername
I0914 17:04:31.985566   27028 sshutil.go:53] new ssh client: &{IP:192.168.39.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/functional-764671/id_rsa Username:docker}
I0914 17:04:32.099857   27028 ssh_runner.go:195] Run: sudo crictl images --output json
I0914 17:04:32.350707   27028 main.go:141] libmachine: Making call to close driver server
I0914 17:04:32.350723   27028 main.go:141] libmachine: (functional-764671) Calling .Close
I0914 17:04:32.351043   27028 main.go:141] libmachine: Successfully made call to close driver server
I0914 17:04:32.351057   27028 main.go:141] libmachine: (functional-764671) DBG | Closing plugin on server side
I0914 17:04:32.351072   27028 main.go:141] libmachine: Making call to close connection to plugin binary
I0914 17:04:32.351082   27028 main.go:141] libmachine: Making call to close driver server
I0914 17:04:32.351091   27028 main.go:141] libmachine: (functional-764671) Calling .Close
I0914 17:04:32.351293   27028 main.go:141] libmachine: Successfully made call to close driver server
I0914 17:04:32.351305   27028 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.46s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageBuild (11.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-linux-amd64 -p functional-764671 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-764671 ssh pgrep buildkitd: exit status 1 (241.853031ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-linux-amd64 -p functional-764671 image build -t localhost/my-image:functional-764671 testdata/build --alsologtostderr
functional_test.go:315: (dbg) Done: out/minikube-linux-amd64 -p functional-764671 image build -t localhost/my-image:functional-764671 testdata/build --alsologtostderr: (10.526917386s)
functional_test.go:320: (dbg) Stdout: out/minikube-linux-amd64 -p functional-764671 image build -t localhost/my-image:functional-764671 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> eb74c5d4765
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-764671
--> c696d5d50c6
Successfully tagged localhost/my-image:functional-764671
c696d5d50c6aa62442ab973df18d936684c0cb8acecbe5fbfb4af8f7d889b7a9
functional_test.go:323: (dbg) Stderr: out/minikube-linux-amd64 -p functional-764671 image build -t localhost/my-image:functional-764671 testdata/build --alsologtostderr:
I0914 17:04:32.642273   27124 out.go:345] Setting OutFile to fd 1 ...
I0914 17:04:32.642421   27124 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0914 17:04:32.642430   27124 out.go:358] Setting ErrFile to fd 2...
I0914 17:04:32.642435   27124 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0914 17:04:32.642663   27124 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19643-8806/.minikube/bin
I0914 17:04:32.643320   27124 config.go:182] Loaded profile config "functional-764671": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0914 17:04:32.643842   27124 config.go:182] Loaded profile config "functional-764671": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0914 17:04:32.644204   27124 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0914 17:04:32.644255   27124 main.go:141] libmachine: Launching plugin server for driver kvm2
I0914 17:04:32.660270   27124 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36853
I0914 17:04:32.660862   27124 main.go:141] libmachine: () Calling .GetVersion
I0914 17:04:32.661454   27124 main.go:141] libmachine: Using API Version  1
I0914 17:04:32.661481   27124 main.go:141] libmachine: () Calling .SetConfigRaw
I0914 17:04:32.661857   27124 main.go:141] libmachine: () Calling .GetMachineName
I0914 17:04:32.662034   27124 main.go:141] libmachine: (functional-764671) Calling .GetState
I0914 17:04:32.664165   27124 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0914 17:04:32.664220   27124 main.go:141] libmachine: Launching plugin server for driver kvm2
I0914 17:04:32.680200   27124 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38341
I0914 17:04:32.680699   27124 main.go:141] libmachine: () Calling .GetVersion
I0914 17:04:32.681177   27124 main.go:141] libmachine: Using API Version  1
I0914 17:04:32.681198   27124 main.go:141] libmachine: () Calling .SetConfigRaw
I0914 17:04:32.681595   27124 main.go:141] libmachine: () Calling .GetMachineName
I0914 17:04:32.681775   27124 main.go:141] libmachine: (functional-764671) Calling .DriverName
I0914 17:04:32.682002   27124 ssh_runner.go:195] Run: systemctl --version
I0914 17:04:32.682030   27124 main.go:141] libmachine: (functional-764671) Calling .GetSSHHostname
I0914 17:04:32.684851   27124 main.go:141] libmachine: (functional-764671) DBG | domain functional-764671 has defined MAC address 52:54:00:b8:5e:d5 in network mk-functional-764671
I0914 17:04:32.685287   27124 main.go:141] libmachine: (functional-764671) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b8:5e:d5", ip: ""} in network mk-functional-764671: {Iface:virbr1 ExpiryTime:2024-09-14 18:01:46 +0000 UTC Type:0 Mac:52:54:00:b8:5e:d5 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:functional-764671 Clientid:01:52:54:00:b8:5e:d5}
I0914 17:04:32.685320   27124 main.go:141] libmachine: (functional-764671) DBG | domain functional-764671 has defined IP address 192.168.39.215 and MAC address 52:54:00:b8:5e:d5 in network mk-functional-764671
I0914 17:04:32.685484   27124 main.go:141] libmachine: (functional-764671) Calling .GetSSHPort
I0914 17:04:32.685644   27124 main.go:141] libmachine: (functional-764671) Calling .GetSSHKeyPath
I0914 17:04:32.685767   27124 main.go:141] libmachine: (functional-764671) Calling .GetSSHUsername
I0914 17:04:32.685876   27124 sshutil.go:53] new ssh client: &{IP:192.168.39.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/functional-764671/id_rsa Username:docker}
I0914 17:04:32.815212   27124 build_images.go:161] Building image from path: /tmp/build.2974454250.tar
I0914 17:04:32.815277   27124 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0914 17:04:32.838679   27124 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.2974454250.tar
I0914 17:04:32.850309   27124 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.2974454250.tar: stat -c "%s %y" /var/lib/minikube/build/build.2974454250.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.2974454250.tar': No such file or directory
I0914 17:04:32.850342   27124 ssh_runner.go:362] scp /tmp/build.2974454250.tar --> /var/lib/minikube/build/build.2974454250.tar (3072 bytes)
I0914 17:04:32.906039   27124 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.2974454250
I0914 17:04:32.916040   27124 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.2974454250 -xf /var/lib/minikube/build/build.2974454250.tar
I0914 17:04:32.942371   27124 crio.go:315] Building image: /var/lib/minikube/build/build.2974454250
I0914 17:04:32.942459   27124 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-764671 /var/lib/minikube/build/build.2974454250 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I0914 17:04:43.097578   27124 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-764671 /var/lib/minikube/build/build.2974454250 --cgroup-manager=cgroupfs: (10.155075323s)
I0914 17:04:43.097667   27124 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.2974454250
I0914 17:04:43.109250   27124 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.2974454250.tar
I0914 17:04:43.120700   27124 build_images.go:217] Built localhost/my-image:functional-764671 from /tmp/build.2974454250.tar
I0914 17:04:43.120737   27124 build_images.go:133] succeeded building to: functional-764671
I0914 17:04:43.120744   27124 build_images.go:134] failed building to: 
I0914 17:04:43.120771   27124 main.go:141] libmachine: Making call to close driver server
I0914 17:04:43.120781   27124 main.go:141] libmachine: (functional-764671) Calling .Close
I0914 17:04:43.121049   27124 main.go:141] libmachine: (functional-764671) DBG | Closing plugin on server side
I0914 17:04:43.121104   27124 main.go:141] libmachine: Successfully made call to close driver server
I0914 17:04:43.121125   27124 main.go:141] libmachine: Making call to close connection to plugin binary
I0914 17:04:43.121139   27124 main.go:141] libmachine: Making call to close driver server
I0914 17:04:43.121146   27124 main.go:141] libmachine: (functional-764671) Calling .Close
I0914 17:04:43.121340   27124 main.go:141] libmachine: (functional-764671) DBG | Closing plugin on server side
I0914 17:04:43.121351   27124 main.go:141] libmachine: Successfully made call to close driver server
I0914 17:04:43.121365   27124 main.go:141] libmachine: Making call to close connection to plugin binary
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-764671 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (11.06s)
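Note: the STEP lines in the stdout above imply that testdata/build is a three-instruction build context (FROM gcr.io/k8s-minikube/busybox, RUN true, ADD content.txt /). A minimal sketch of reproducing the same build by hand follows; the Dockerfile and content.txt contents are inferred from the STEP output, not copied from the repository:

  mkdir build-ctx && cd build-ctx
  echo hello > content.txt        # any small file works; the build only ADDs it
  cat > Dockerfile <<'EOF'
FROM gcr.io/k8s-minikube/busybox
RUN true
ADD content.txt /
EOF
  out/minikube-linux-amd64 -p functional-764671 image build -t localhost/my-image:functional-764671 . --alsologtostderr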

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/Setup (1.76s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:342: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.736143874s)
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-764671
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.76s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.9s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-linux-amd64 -p functional-764671 image load --daemon kicbase/echo-server:functional-764671 --alsologtostderr
2024/09/14 17:04:22 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:355: (dbg) Done: out/minikube-linux-amd64 -p functional-764671 image load --daemon kicbase/echo-server:functional-764671 --alsologtostderr: (1.675598196s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-764671 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.90s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.91s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p functional-764671 image load --daemon kicbase/echo-server:functional-764671 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-764671 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.91s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_changes (0.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-764671 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.10s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-764671 update-context --alsologtostderr -v=2
E0914 17:04:29.487008   16016 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/addons-996992/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.10s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-764671 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.09s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (2s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-764671
functional_test.go:245: (dbg) Run:  out/minikube-linux-amd64 -p functional-764671 image load --daemon kicbase/echo-server:functional-764671 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-764671 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (2.00s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveToFile (3.67s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-764671 image save kicbase/echo-server:functional-764671 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:380: (dbg) Done: out/minikube-linux-amd64 -p functional-764671 image save kicbase/echo-server:functional-764671 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr: (3.665165726s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (3.67s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageRemove (0.52s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-linux-amd64 -p functional-764671 image rm kicbase/echo-server:functional-764671 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-764671 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.52s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.78s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-linux-amd64 -p functional-764671 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-764671 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.78s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.57s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-764671
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-764671 image save --daemon kicbase/echo-server:functional-764671 --alsologtostderr
functional_test.go:432: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-764671
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.57s)
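Note: taken together, ImageSaveToFile, ImageRemove, ImageLoadFromFile and ImageSaveDaemon exercise a save / remove / reload round trip for the echo-server image. A condensed sketch of the same flow, using only commands that appear in the logs above (the tarball path here is arbitrary; the suite used a path under the Jenkins workspace):

  out/minikube-linux-amd64 -p functional-764671 image save kicbase/echo-server:functional-764671 /tmp/echo-server-save.tar
  out/minikube-linux-amd64 -p functional-764671 image rm kicbase/echo-server:functional-764671
  out/minikube-linux-amd64 -p functional-764671 image load /tmp/echo-server-save.tar
  out/minikube-linux-amd64 -p functional-764671 image save --daemon kicbase/echo-server:functional-764671
  docker image inspect localhost/kicbase/echo-server:functional-764671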

                                                
                                    
x
+
TestFunctional/delete_echo-server_images (0.04s)

                                                
                                                
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-764671
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

                                                
                                    
x
+
TestFunctional/delete_my-image_image (0.01s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-764671
--- PASS: TestFunctional/delete_my-image_image (0.01s)

                                                
                                    
x
+
TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-764671
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
x
+
TestMultiControlPlane/serial/StartCluster (193.91s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p ha-929592 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0914 17:06:45.625915   16016 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/addons-996992/client.crt: no such file or directory" logger="UnhandledError"
E0914 17:07:13.329090   16016 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/addons-996992/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p ha-929592 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (3m13.255597686s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-929592 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (193.91s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DeployApp (6.79s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-929592 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-929592 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 kubectl -p ha-929592 -- rollout status deployment/busybox: (4.655522979s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-929592 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-929592 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-929592 -- exec busybox-7dff88458-49mwg -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-929592 -- exec busybox-7dff88458-4gtfl -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-929592 -- exec busybox-7dff88458-kvmx7 -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-929592 -- exec busybox-7dff88458-49mwg -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-929592 -- exec busybox-7dff88458-4gtfl -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-929592 -- exec busybox-7dff88458-kvmx7 -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-929592 -- exec busybox-7dff88458-49mwg -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-929592 -- exec busybox-7dff88458-4gtfl -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-929592 -- exec busybox-7dff88458-kvmx7 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (6.79s)
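Note: DeployApp validates in-cluster DNS by exec-ing nslookup inside each busybox replica. A minimal sketch of running the same check by hand, assuming the ha-929592 kubeconfig context created by the profile and using one pod name from the log above (the hash suffix will differ in a fresh cluster):

  kubectl --context ha-929592 apply -f ./testdata/ha/ha-pod-dns-test.yaml
  kubectl --context ha-929592 rollout status deployment/busybox
  kubectl --context ha-929592 exec busybox-7dff88458-49mwg -- nslookup kubernetes.default.svc.cluster.local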

                                                
                                    
x
+
TestMultiControlPlane/serial/PingHostFromPods (1.2s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-929592 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-929592 -- exec busybox-7dff88458-49mwg -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-929592 -- exec busybox-7dff88458-49mwg -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-929592 -- exec busybox-7dff88458-4gtfl -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-929592 -- exec busybox-7dff88458-4gtfl -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-929592 -- exec busybox-7dff88458-kvmx7 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-929592 -- exec busybox-7dff88458-kvmx7 -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.20s)

                                                
                                    
x
+
TestMultiControlPlane/serial/AddWorkerNode (82.23s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-929592 -v=7 --alsologtostderr
E0914 17:09:04.947466   16016 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/functional-764671/client.crt: no such file or directory" logger="UnhandledError"
E0914 17:09:04.953948   16016 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/functional-764671/client.crt: no such file or directory" logger="UnhandledError"
E0914 17:09:04.965440   16016 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/functional-764671/client.crt: no such file or directory" logger="UnhandledError"
E0914 17:09:04.986931   16016 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/functional-764671/client.crt: no such file or directory" logger="UnhandledError"
E0914 17:09:05.028370   16016 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/functional-764671/client.crt: no such file or directory" logger="UnhandledError"
E0914 17:09:05.109836   16016 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/functional-764671/client.crt: no such file or directory" logger="UnhandledError"
E0914 17:09:05.271407   16016 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/functional-764671/client.crt: no such file or directory" logger="UnhandledError"
E0914 17:09:05.593091   16016 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/functional-764671/client.crt: no such file or directory" logger="UnhandledError"
E0914 17:09:06.234897   16016 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/functional-764671/client.crt: no such file or directory" logger="UnhandledError"
E0914 17:09:07.516925   16016 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/functional-764671/client.crt: no such file or directory" logger="UnhandledError"
E0914 17:09:10.079058   16016 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/functional-764671/client.crt: no such file or directory" logger="UnhandledError"
E0914 17:09:15.200748   16016 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/functional-764671/client.crt: no such file or directory" logger="UnhandledError"
E0914 17:09:25.442608   16016 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/functional-764671/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 node add -p ha-929592 -v=7 --alsologtostderr: (1m21.365110868s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-929592 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (82.23s)

                                                
                                    
x
+
TestMultiControlPlane/serial/NodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-929592 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.06s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.54s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.54s)

                                                
                                    
x
+
TestMultiControlPlane/serial/CopyFile (12.63s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-linux-amd64 -p ha-929592 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-929592 cp testdata/cp-test.txt ha-929592:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-929592 ssh -n ha-929592 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-929592 cp ha-929592:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile183020175/001/cp-test_ha-929592.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-929592 ssh -n ha-929592 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-929592 cp ha-929592:/home/docker/cp-test.txt ha-929592-m02:/home/docker/cp-test_ha-929592_ha-929592-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-929592 ssh -n ha-929592 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-929592 ssh -n ha-929592-m02 "sudo cat /home/docker/cp-test_ha-929592_ha-929592-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-929592 cp ha-929592:/home/docker/cp-test.txt ha-929592-m03:/home/docker/cp-test_ha-929592_ha-929592-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-929592 ssh -n ha-929592 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-929592 ssh -n ha-929592-m03 "sudo cat /home/docker/cp-test_ha-929592_ha-929592-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-929592 cp ha-929592:/home/docker/cp-test.txt ha-929592-m04:/home/docker/cp-test_ha-929592_ha-929592-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-929592 ssh -n ha-929592 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-929592 ssh -n ha-929592-m04 "sudo cat /home/docker/cp-test_ha-929592_ha-929592-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-929592 cp testdata/cp-test.txt ha-929592-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-929592 ssh -n ha-929592-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-929592 cp ha-929592-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile183020175/001/cp-test_ha-929592-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-929592 ssh -n ha-929592-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-929592 cp ha-929592-m02:/home/docker/cp-test.txt ha-929592:/home/docker/cp-test_ha-929592-m02_ha-929592.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-929592 ssh -n ha-929592-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-929592 ssh -n ha-929592 "sudo cat /home/docker/cp-test_ha-929592-m02_ha-929592.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-929592 cp ha-929592-m02:/home/docker/cp-test.txt ha-929592-m03:/home/docker/cp-test_ha-929592-m02_ha-929592-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-929592 ssh -n ha-929592-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-929592 ssh -n ha-929592-m03 "sudo cat /home/docker/cp-test_ha-929592-m02_ha-929592-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-929592 cp ha-929592-m02:/home/docker/cp-test.txt ha-929592-m04:/home/docker/cp-test_ha-929592-m02_ha-929592-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-929592 ssh -n ha-929592-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-929592 ssh -n ha-929592-m04 "sudo cat /home/docker/cp-test_ha-929592-m02_ha-929592-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-929592 cp testdata/cp-test.txt ha-929592-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-929592 ssh -n ha-929592-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-929592 cp ha-929592-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile183020175/001/cp-test_ha-929592-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-929592 ssh -n ha-929592-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-929592 cp ha-929592-m03:/home/docker/cp-test.txt ha-929592:/home/docker/cp-test_ha-929592-m03_ha-929592.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-929592 ssh -n ha-929592-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-929592 ssh -n ha-929592 "sudo cat /home/docker/cp-test_ha-929592-m03_ha-929592.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-929592 cp ha-929592-m03:/home/docker/cp-test.txt ha-929592-m02:/home/docker/cp-test_ha-929592-m03_ha-929592-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-929592 ssh -n ha-929592-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-929592 ssh -n ha-929592-m02 "sudo cat /home/docker/cp-test_ha-929592-m03_ha-929592-m02.txt"
E0914 17:09:45.924280   16016 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/functional-764671/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-929592 cp ha-929592-m03:/home/docker/cp-test.txt ha-929592-m04:/home/docker/cp-test_ha-929592-m03_ha-929592-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-929592 ssh -n ha-929592-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-929592 ssh -n ha-929592-m04 "sudo cat /home/docker/cp-test_ha-929592-m03_ha-929592-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-929592 cp testdata/cp-test.txt ha-929592-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-929592 ssh -n ha-929592-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-929592 cp ha-929592-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile183020175/001/cp-test_ha-929592-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-929592 ssh -n ha-929592-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-929592 cp ha-929592-m04:/home/docker/cp-test.txt ha-929592:/home/docker/cp-test_ha-929592-m04_ha-929592.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-929592 ssh -n ha-929592-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-929592 ssh -n ha-929592 "sudo cat /home/docker/cp-test_ha-929592-m04_ha-929592.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-929592 cp ha-929592-m04:/home/docker/cp-test.txt ha-929592-m02:/home/docker/cp-test_ha-929592-m04_ha-929592-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-929592 ssh -n ha-929592-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-929592 ssh -n ha-929592-m02 "sudo cat /home/docker/cp-test_ha-929592-m04_ha-929592-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-929592 cp ha-929592-m04:/home/docker/cp-test.txt ha-929592-m03:/home/docker/cp-test_ha-929592-m04_ha-929592-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-929592 ssh -n ha-929592-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-929592 ssh -n ha-929592-m03 "sudo cat /home/docker/cp-test_ha-929592-m04_ha-929592-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (12.63s)
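The CopyFile test above exercises `minikube cp` in every direction (host to node, node to host, node to node) and verifies each copy by cat-ing the file over SSH. A minimal sketch of one such round trip, assuming the ha-929592 profile from this run is still up; the payload and temp paths are illustrative:
    # Hedged sketch: one host -> node -> host round trip, verified with grep.
    echo "cp-test payload" > /tmp/cp-test.txt
    out/minikube-linux-amd64 -p ha-929592 cp /tmp/cp-test.txt ha-929592:/home/docker/cp-test.txt
    out/minikube-linux-amd64 -p ha-929592 ssh -n ha-929592 "sudo cat /home/docker/cp-test.txt" > /tmp/cp-test.out
    grep -q "cp-test payload" /tmp/cp-test.out && echo "round-trip OK"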

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (3.48s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
ha_test.go:390: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (3.478872938s)
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (3.48s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.4s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.40s)

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (16.81s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-linux-amd64 -p ha-929592 node delete m03 -v=7 --alsologtostderr
E0914 17:19:04.947453   16016 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/functional-764671/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:487: (dbg) Done: out/minikube-linux-amd64 -p ha-929592 node delete m03 -v=7 --alsologtostderr: (16.056116401s)
ha_test.go:493: (dbg) Run:  out/minikube-linux-amd64 -p ha-929592 status -v=7 --alsologtostderr
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (16.81s)
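After the node is removed, the test lists the remaining nodes and renders only each node's Ready condition through a go-template. A sketch of the same readiness check, with a jsonpath variant added for convenience (the jsonpath form is not what the test itself runs):
    # Render just the Ready condition of every node, as the test does.
    kubectl get nodes -o go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}}{{.status}}{{"\n"}}{{end}}{{end}}{{end}}'
    # Equivalent jsonpath check: print name=ReadyStatus and flag anything that is not True.
    kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}={.status.conditions[?(@.type=="Ready")].status}{"\n"}{end}' \
      | grep -v '=True$' && echo "some node is not Ready" || echo "all nodes Ready"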

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.38s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.38s)

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (316.4s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-linux-amd64 start -p ha-929592 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0914 17:21:45.626320   16016 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/addons-996992/client.crt: no such file or directory" logger="UnhandledError"
E0914 17:24:04.947474   16016 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/functional-764671/client.crt: no such file or directory" logger="UnhandledError"
E0914 17:25:28.011215   16016 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/functional-764671/client.crt: no such file or directory" logger="UnhandledError"
E0914 17:26:45.625612   16016 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/addons-996992/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:560: (dbg) Done: out/minikube-linux-amd64 start -p ha-929592 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (5m15.63480033s)
ha_test.go:566: (dbg) Run:  out/minikube-linux-amd64 -p ha-929592 status -v=7 --alsologtostderr
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (316.40s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.36s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.36s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (76.77s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-929592 --control-plane -v=7 --alsologtostderr
ha_test.go:605: (dbg) Done: out/minikube-linux-amd64 node add -p ha-929592 --control-plane -v=7 --alsologtostderr: (1m15.932313004s)
ha_test.go:611: (dbg) Run:  out/minikube-linux-amd64 -p ha-929592 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (76.77s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.53s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.53s)

                                                
                                    
TestJSONOutput/start/Command (78.19s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-877260 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio
E0914 17:29:04.947026   16016 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/functional-764671/client.crt: no such file or directory" logger="UnhandledError"
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-877260 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio: (1m18.192495865s)
--- PASS: TestJSONOutput/start/Command (78.19s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.67s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-877260 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.67s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.6s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-877260 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.60s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (6.67s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-877260 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-877260 --output=json --user=testUser: (6.672618062s)
--- PASS: TestJSONOutput/stop/Command (6.67s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.2s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-098852 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-098852 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (63.669222ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"25315322-7a7c-4baf-ae45-2bbf6168183a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-098852] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"c7ffeb17-447e-4aa3-b560-2ea8c688116a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19643"}}
	{"specversion":"1.0","id":"e3566a0f-4d65-4305-ac95-e3b781dc1b16","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"3e99fe2b-0ec7-4570-a287-6c26d92051e9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19643-8806/kubeconfig"}}
	{"specversion":"1.0","id":"7e9e58ca-36ea-4002-90e8-a9c4667903f6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19643-8806/.minikube"}}
	{"specversion":"1.0","id":"a32c9bc2-321e-4ce7-90dc-b4a877b2a0a1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"32d54923-5569-4684-9445-8ad04f6fca72","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"6d8a080a-102c-44fa-b4a9-ed9dbb5a1349","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-098852" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-098852
--- PASS: TestErrorJSONOutput (0.20s)
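Every line that minikube emits under --output=json is a CloudEvents-style object (specversion, type, data), as the stdout above shows. A sketch of filtering those events outside of Go, assuming jq is available; the test itself decodes them in json_output_test.go:
    # Print only error events, e.g. DRV_UNSUPPORTED_OS for the unsupported 'fail' driver above.
    out/minikube-linux-amd64 start -p json-output-error-098852 --memory=2200 --output=json --wait=true --driver=fail \
      | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | .data.name + ": " + .data.message'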

                                                
                                    
TestMainNoArgs (0.04s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.04s)

                                                
                                    
TestMinikubeProfile (87.42s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-243812 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-243812 --driver=kvm2  --container-runtime=crio: (41.446201435s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-255604 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-255604 --driver=kvm2  --container-runtime=crio: (43.110483189s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-243812
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-255604
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-255604" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-255604
helpers_test.go:175: Cleaning up "first-243812" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-243812
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p first-243812: (1.005041766s)
--- PASS: TestMinikubeProfile (87.42s)

                                                
                                    
TestMountStart/serial/StartWithMountFirst (24.82s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-502410 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-502410 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (23.824532013s)
--- PASS: TestMountStart/serial/StartWithMountFirst (24.82s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.38s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-502410 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-502410 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.38s)
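VerifyMountFirst checks the 9p mount from inside the guest in two ways: listing the mount point and grepping the mount table. A sketch of the same two checks against the mount-start-1-502410 profile used above:
    # Confirm a 9p filesystem is mounted and the mount point is readable.
    out/minikube-linux-amd64 -p mount-start-1-502410 ssh -- "mount | grep 9p"
    out/minikube-linux-amd64 -p mount-start-1-502410 ssh -- "ls /minikube-host" || echo "mount point not accessible"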

                                                
                                    
TestMountStart/serial/StartWithMountSecond (27.2s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-513338 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
E0914 17:31:45.625259   16016 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/addons-996992/client.crt: no such file or directory" logger="UnhandledError"
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-513338 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (26.20212242s)
--- PASS: TestMountStart/serial/StartWithMountSecond (27.20s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.36s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-513338 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-513338 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.36s)

                                                
                                    
TestMountStart/serial/DeleteFirst (0.7s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-502410 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.70s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.37s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-513338 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-513338 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.37s)

                                                
                                    
TestMountStart/serial/Stop (1.27s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-513338
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-513338: (1.271057944s)
--- PASS: TestMountStart/serial/Stop (1.27s)

                                                
                                    
TestMountStart/serial/RestartStopped (22.02s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-513338
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-513338: (21.021096787s)
--- PASS: TestMountStart/serial/RestartStopped (22.02s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.37s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-513338 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-513338 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.37s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (109.59s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-396884 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0914 17:34:04.947457   16016 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/functional-764671/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-396884 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (1m49.205045764s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-396884 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (109.59s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (6.6s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-396884 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-396884 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-396884 -- rollout status deployment/busybox: (5.166220081s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-396884 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-396884 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-396884 -- exec busybox-7dff88458-ph5h7 -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-396884 -- exec busybox-7dff88458-pzr7k -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-396884 -- exec busybox-7dff88458-ph5h7 -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-396884 -- exec busybox-7dff88458-pzr7k -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-396884 -- exec busybox-7dff88458-ph5h7 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-396884 -- exec busybox-7dff88458-pzr7k -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (6.60s)

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.79s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-396884 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-396884 -- exec busybox-7dff88458-ph5h7 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-396884 -- exec busybox-7dff88458-ph5h7 -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-396884 -- exec busybox-7dff88458-pzr7k -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-396884 -- exec busybox-7dff88458-pzr7k -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.79s)
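The PingHostFrom2Pods check extracts the host's address by resolving host.minikube.internal inside each busybox pod: with busybox's nslookup output, line 5 carries the answer, so awk 'NR==5' plus cut -d' ' -f3 isolates the IP, which is then pinged. A sketch of the same probe against one pod of the busybox deployment that this test applies; the NR==5 offset is specific to busybox's resolver output:
    # Resolve host.minikube.internal inside a busybox pod and ping the result once.
    kubectl exec deploy/busybox -- sh -c \
      'ip=$(nslookup host.minikube.internal | awk "NR==5" | cut -d" " -f3); echo "host ip: $ip"; ping -c 1 "$ip"'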

                                                
                                    
TestMultiNode/serial/AddNode (50.1s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-396884 -v 3 --alsologtostderr
E0914 17:34:48.693427   16016 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/addons-996992/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-396884 -v 3 --alsologtostderr: (49.538886965s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-396884 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (50.10s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.06s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-396884 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.21s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.21s)

                                                
                                    
TestMultiNode/serial/CopyFile (7.1s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-396884 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-396884 cp testdata/cp-test.txt multinode-396884:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-396884 ssh -n multinode-396884 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-396884 cp multinode-396884:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3813016810/001/cp-test_multinode-396884.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-396884 ssh -n multinode-396884 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-396884 cp multinode-396884:/home/docker/cp-test.txt multinode-396884-m02:/home/docker/cp-test_multinode-396884_multinode-396884-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-396884 ssh -n multinode-396884 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-396884 ssh -n multinode-396884-m02 "sudo cat /home/docker/cp-test_multinode-396884_multinode-396884-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-396884 cp multinode-396884:/home/docker/cp-test.txt multinode-396884-m03:/home/docker/cp-test_multinode-396884_multinode-396884-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-396884 ssh -n multinode-396884 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-396884 ssh -n multinode-396884-m03 "sudo cat /home/docker/cp-test_multinode-396884_multinode-396884-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-396884 cp testdata/cp-test.txt multinode-396884-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-396884 ssh -n multinode-396884-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-396884 cp multinode-396884-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3813016810/001/cp-test_multinode-396884-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-396884 ssh -n multinode-396884-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-396884 cp multinode-396884-m02:/home/docker/cp-test.txt multinode-396884:/home/docker/cp-test_multinode-396884-m02_multinode-396884.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-396884 ssh -n multinode-396884-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-396884 ssh -n multinode-396884 "sudo cat /home/docker/cp-test_multinode-396884-m02_multinode-396884.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-396884 cp multinode-396884-m02:/home/docker/cp-test.txt multinode-396884-m03:/home/docker/cp-test_multinode-396884-m02_multinode-396884-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-396884 ssh -n multinode-396884-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-396884 ssh -n multinode-396884-m03 "sudo cat /home/docker/cp-test_multinode-396884-m02_multinode-396884-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-396884 cp testdata/cp-test.txt multinode-396884-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-396884 ssh -n multinode-396884-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-396884 cp multinode-396884-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3813016810/001/cp-test_multinode-396884-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-396884 ssh -n multinode-396884-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-396884 cp multinode-396884-m03:/home/docker/cp-test.txt multinode-396884:/home/docker/cp-test_multinode-396884-m03_multinode-396884.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-396884 ssh -n multinode-396884-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-396884 ssh -n multinode-396884 "sudo cat /home/docker/cp-test_multinode-396884-m03_multinode-396884.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-396884 cp multinode-396884-m03:/home/docker/cp-test.txt multinode-396884-m02:/home/docker/cp-test_multinode-396884-m03_multinode-396884-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-396884 ssh -n multinode-396884-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-396884 ssh -n multinode-396884-m02 "sudo cat /home/docker/cp-test_multinode-396884-m03_multinode-396884-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (7.10s)

                                                
                                    
TestMultiNode/serial/StopNode (2.2s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-396884 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-396884 node stop m03: (1.363914992s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-396884 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-396884 status: exit status 7 (414.089969ms)

                                                
                                                
-- stdout --
	multinode-396884
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-396884-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-396884-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-396884 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-396884 status --alsologtostderr: exit status 7 (417.426332ms)

                                                
                                                
-- stdout --
	multinode-396884
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-396884-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-396884-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0914 17:35:26.598544   44898 out.go:345] Setting OutFile to fd 1 ...
	I0914 17:35:26.598819   44898 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 17:35:26.598829   44898 out.go:358] Setting ErrFile to fd 2...
	I0914 17:35:26.598833   44898 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 17:35:26.599066   44898 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19643-8806/.minikube/bin
	I0914 17:35:26.599280   44898 out.go:352] Setting JSON to false
	I0914 17:35:26.599307   44898 mustload.go:65] Loading cluster: multinode-396884
	I0914 17:35:26.599355   44898 notify.go:220] Checking for updates...
	I0914 17:35:26.599726   44898 config.go:182] Loaded profile config "multinode-396884": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0914 17:35:26.599741   44898 status.go:255] checking status of multinode-396884 ...
	I0914 17:35:26.600165   44898 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 17:35:26.600227   44898 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 17:35:26.618228   44898 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38507
	I0914 17:35:26.618726   44898 main.go:141] libmachine: () Calling .GetVersion
	I0914 17:35:26.619353   44898 main.go:141] libmachine: Using API Version  1
	I0914 17:35:26.619379   44898 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 17:35:26.619731   44898 main.go:141] libmachine: () Calling .GetMachineName
	I0914 17:35:26.619921   44898 main.go:141] libmachine: (multinode-396884) Calling .GetState
	I0914 17:35:26.621805   44898 status.go:330] multinode-396884 host status = "Running" (err=<nil>)
	I0914 17:35:26.621819   44898 host.go:66] Checking if "multinode-396884" exists ...
	I0914 17:35:26.622113   44898 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 17:35:26.622169   44898 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 17:35:26.637532   44898 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37783
	I0914 17:35:26.637920   44898 main.go:141] libmachine: () Calling .GetVersion
	I0914 17:35:26.638489   44898 main.go:141] libmachine: Using API Version  1
	I0914 17:35:26.638518   44898 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 17:35:26.638817   44898 main.go:141] libmachine: () Calling .GetMachineName
	I0914 17:35:26.638997   44898 main.go:141] libmachine: (multinode-396884) Calling .GetIP
	I0914 17:35:26.641621   44898 main.go:141] libmachine: (multinode-396884) DBG | domain multinode-396884 has defined MAC address 52:54:00:6d:2b:08 in network mk-multinode-396884
	I0914 17:35:26.642012   44898 main.go:141] libmachine: (multinode-396884) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:2b:08", ip: ""} in network mk-multinode-396884: {Iface:virbr1 ExpiryTime:2024-09-14 18:32:44 +0000 UTC Type:0 Mac:52:54:00:6d:2b:08 Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:multinode-396884 Clientid:01:52:54:00:6d:2b:08}
	I0914 17:35:26.642030   44898 main.go:141] libmachine: (multinode-396884) DBG | domain multinode-396884 has defined IP address 192.168.39.202 and MAC address 52:54:00:6d:2b:08 in network mk-multinode-396884
	I0914 17:35:26.642269   44898 host.go:66] Checking if "multinode-396884" exists ...
	I0914 17:35:26.642596   44898 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 17:35:26.642646   44898 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 17:35:26.658346   44898 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38029
	I0914 17:35:26.658745   44898 main.go:141] libmachine: () Calling .GetVersion
	I0914 17:35:26.659232   44898 main.go:141] libmachine: Using API Version  1
	I0914 17:35:26.659250   44898 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 17:35:26.659526   44898 main.go:141] libmachine: () Calling .GetMachineName
	I0914 17:35:26.659697   44898 main.go:141] libmachine: (multinode-396884) Calling .DriverName
	I0914 17:35:26.659881   44898 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0914 17:35:26.659906   44898 main.go:141] libmachine: (multinode-396884) Calling .GetSSHHostname
	I0914 17:35:26.662767   44898 main.go:141] libmachine: (multinode-396884) DBG | domain multinode-396884 has defined MAC address 52:54:00:6d:2b:08 in network mk-multinode-396884
	I0914 17:35:26.663198   44898 main.go:141] libmachine: (multinode-396884) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:2b:08", ip: ""} in network mk-multinode-396884: {Iface:virbr1 ExpiryTime:2024-09-14 18:32:44 +0000 UTC Type:0 Mac:52:54:00:6d:2b:08 Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:multinode-396884 Clientid:01:52:54:00:6d:2b:08}
	I0914 17:35:26.663224   44898 main.go:141] libmachine: (multinode-396884) DBG | domain multinode-396884 has defined IP address 192.168.39.202 and MAC address 52:54:00:6d:2b:08 in network mk-multinode-396884
	I0914 17:35:26.663389   44898 main.go:141] libmachine: (multinode-396884) Calling .GetSSHPort
	I0914 17:35:26.663596   44898 main.go:141] libmachine: (multinode-396884) Calling .GetSSHKeyPath
	I0914 17:35:26.663732   44898 main.go:141] libmachine: (multinode-396884) Calling .GetSSHUsername
	I0914 17:35:26.663924   44898 sshutil.go:53] new ssh client: &{IP:192.168.39.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/multinode-396884/id_rsa Username:docker}
	I0914 17:35:26.745973   44898 ssh_runner.go:195] Run: systemctl --version
	I0914 17:35:26.751772   44898 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 17:35:26.766384   44898 kubeconfig.go:125] found "multinode-396884" server: "https://192.168.39.202:8443"
	I0914 17:35:26.766423   44898 api_server.go:166] Checking apiserver status ...
	I0914 17:35:26.766469   44898 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 17:35:26.781770   44898 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1073/cgroup
	W0914 17:35:26.791521   44898 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1073/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0914 17:35:26.791574   44898 ssh_runner.go:195] Run: ls
	I0914 17:35:26.796392   44898 api_server.go:253] Checking apiserver healthz at https://192.168.39.202:8443/healthz ...
	I0914 17:35:26.801240   44898 api_server.go:279] https://192.168.39.202:8443/healthz returned 200:
	ok
	I0914 17:35:26.801270   44898 status.go:422] multinode-396884 apiserver status = Running (err=<nil>)
	I0914 17:35:26.801279   44898 status.go:257] multinode-396884 status: &{Name:multinode-396884 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0914 17:35:26.801295   44898 status.go:255] checking status of multinode-396884-m02 ...
	I0914 17:35:26.801589   44898 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 17:35:26.801625   44898 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 17:35:26.816784   44898 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34105
	I0914 17:35:26.817267   44898 main.go:141] libmachine: () Calling .GetVersion
	I0914 17:35:26.817856   44898 main.go:141] libmachine: Using API Version  1
	I0914 17:35:26.817878   44898 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 17:35:26.818254   44898 main.go:141] libmachine: () Calling .GetMachineName
	I0914 17:35:26.818500   44898 main.go:141] libmachine: (multinode-396884-m02) Calling .GetState
	I0914 17:35:26.820207   44898 status.go:330] multinode-396884-m02 host status = "Running" (err=<nil>)
	I0914 17:35:26.820221   44898 host.go:66] Checking if "multinode-396884-m02" exists ...
	I0914 17:35:26.820530   44898 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 17:35:26.820569   44898 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 17:35:26.835548   44898 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39575
	I0914 17:35:26.835939   44898 main.go:141] libmachine: () Calling .GetVersion
	I0914 17:35:26.836463   44898 main.go:141] libmachine: Using API Version  1
	I0914 17:35:26.836488   44898 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 17:35:26.836775   44898 main.go:141] libmachine: () Calling .GetMachineName
	I0914 17:35:26.836943   44898 main.go:141] libmachine: (multinode-396884-m02) Calling .GetIP
	I0914 17:35:26.840013   44898 main.go:141] libmachine: (multinode-396884-m02) DBG | domain multinode-396884-m02 has defined MAC address 52:54:00:a9:17:df in network mk-multinode-396884
	I0914 17:35:26.840564   44898 main.go:141] libmachine: (multinode-396884-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:17:df", ip: ""} in network mk-multinode-396884: {Iface:virbr1 ExpiryTime:2024-09-14 18:33:43 +0000 UTC Type:0 Mac:52:54:00:a9:17:df Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:multinode-396884-m02 Clientid:01:52:54:00:a9:17:df}
	I0914 17:35:26.840589   44898 main.go:141] libmachine: (multinode-396884-m02) DBG | domain multinode-396884-m02 has defined IP address 192.168.39.97 and MAC address 52:54:00:a9:17:df in network mk-multinode-396884
	I0914 17:35:26.840796   44898 host.go:66] Checking if "multinode-396884-m02" exists ...
	I0914 17:35:26.841081   44898 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 17:35:26.841116   44898 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 17:35:26.856357   44898 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37737
	I0914 17:35:26.856820   44898 main.go:141] libmachine: () Calling .GetVersion
	I0914 17:35:26.857324   44898 main.go:141] libmachine: Using API Version  1
	I0914 17:35:26.857348   44898 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 17:35:26.857667   44898 main.go:141] libmachine: () Calling .GetMachineName
	I0914 17:35:26.857874   44898 main.go:141] libmachine: (multinode-396884-m02) Calling .DriverName
	I0914 17:35:26.858048   44898 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0914 17:35:26.858071   44898 main.go:141] libmachine: (multinode-396884-m02) Calling .GetSSHHostname
	I0914 17:35:26.860815   44898 main.go:141] libmachine: (multinode-396884-m02) DBG | domain multinode-396884-m02 has defined MAC address 52:54:00:a9:17:df in network mk-multinode-396884
	I0914 17:35:26.861174   44898 main.go:141] libmachine: (multinode-396884-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:17:df", ip: ""} in network mk-multinode-396884: {Iface:virbr1 ExpiryTime:2024-09-14 18:33:43 +0000 UTC Type:0 Mac:52:54:00:a9:17:df Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:multinode-396884-m02 Clientid:01:52:54:00:a9:17:df}
	I0914 17:35:26.861215   44898 main.go:141] libmachine: (multinode-396884-m02) DBG | domain multinode-396884-m02 has defined IP address 192.168.39.97 and MAC address 52:54:00:a9:17:df in network mk-multinode-396884
	I0914 17:35:26.861353   44898 main.go:141] libmachine: (multinode-396884-m02) Calling .GetSSHPort
	I0914 17:35:26.861501   44898 main.go:141] libmachine: (multinode-396884-m02) Calling .GetSSHKeyPath
	I0914 17:35:26.861625   44898 main.go:141] libmachine: (multinode-396884-m02) Calling .GetSSHUsername
	I0914 17:35:26.861735   44898 sshutil.go:53] new ssh client: &{IP:192.168.39.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19643-8806/.minikube/machines/multinode-396884-m02/id_rsa Username:docker}
	I0914 17:35:26.940827   44898 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 17:35:26.955177   44898 status.go:257] multinode-396884-m02 status: &{Name:multinode-396884-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0914 17:35:26.955213   44898 status.go:255] checking status of multinode-396884-m03 ...
	I0914 17:35:26.955558   44898 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0914 17:35:26.955609   44898 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 17:35:26.971169   44898 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37197
	I0914 17:35:26.971614   44898 main.go:141] libmachine: () Calling .GetVersion
	I0914 17:35:26.972191   44898 main.go:141] libmachine: Using API Version  1
	I0914 17:35:26.972213   44898 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 17:35:26.972556   44898 main.go:141] libmachine: () Calling .GetMachineName
	I0914 17:35:26.972743   44898 main.go:141] libmachine: (multinode-396884-m03) Calling .GetState
	I0914 17:35:26.974179   44898 status.go:330] multinode-396884-m03 host status = "Stopped" (err=<nil>)
	I0914 17:35:26.974193   44898 status.go:343] host is not running, skipping remaining checks
	I0914 17:35:26.974201   44898 status.go:257] multinode-396884-m03 status: &{Name:multinode-396884-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.20s)

                                                
                                    
x
+
TestMultiNode/serial/StartAfterStop (38.98s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-396884 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-396884 node start m03 -v=7 --alsologtostderr: (38.356694397s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-396884 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (38.98s)

                                                
                                    
x
+
TestMultiNode/serial/DeleteNode (2.23s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-396884 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-396884 node delete m03: (1.700735009s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-396884 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.23s)
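The go-template used at multinode_test.go:444 is how the test confirms every remaining node still reports Ready after the delete. A minimal sketch of running the same check by hand, using the profile from this run as the context name:

	# print one Ready-condition status per node; each line should read "True"
	$ kubectl --context multinode-396884 get nodes \
	    -o go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}}{{.status}}{{"\n"}}{{end}}{{end}}{{end}}'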

                                                
                                    
x
+
TestMultiNode/serial/RestartMultiNode (200.53s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-396884 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0914 17:44:04.947520   16016 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/functional-764671/client.crt: no such file or directory" logger="UnhandledError"
E0914 17:46:45.626011   16016 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/addons-996992/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-396884 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (3m20.012400245s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-396884 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (200.53s)

                                                
                                    
x
+
TestMultiNode/serial/ValidateNameConflict (40.64s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-396884
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-396884-m02 --driver=kvm2  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-396884-m02 --driver=kvm2  --container-runtime=crio: exit status 14 (63.196458ms)

                                                
                                                
-- stdout --
	* [multinode-396884-m02] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19643
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19643-8806/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19643-8806/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-396884-m02' is duplicated with machine name 'multinode-396884-m02' in profile 'multinode-396884'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-396884-m03 --driver=kvm2  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-396884-m03 --driver=kvm2  --container-runtime=crio: (39.521654192s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-396884
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-396884: exit status 80 (215.756665ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-396884 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-396884-m03 already exists in multinode-396884-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-396884-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (40.64s)
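Both non-zero exits above are the expected outcomes: minikube rejects a new profile whose name collides with a machine name owned by an existing profile (exit 14, MK_USAGE), and node add refuses a node name that is already taken (exit 80, GUEST_NODE_ADD). A minimal sketch of checking for collisions before picking a name, with the new profile name purely illustrative:

	# list names already in use, then start under a non-conflicting profile name
	$ out/minikube-linux-amd64 profile list
	$ out/minikube-linux-amd64 start -p standalone-demo --driver=kvm2 --container-runtime=crio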

                                                
                                    
x
+
TestScheduledStopUnix (109.52s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-883374 --memory=2048 --driver=kvm2  --container-runtime=crio
E0914 17:51:45.626310   16016 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/addons-996992/client.crt: no such file or directory" logger="UnhandledError"
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-883374 --memory=2048 --driver=kvm2  --container-runtime=crio: (38.126195822s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-883374 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-883374 -n scheduled-stop-883374
scheduled_stop_test.go:191: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-883374 -n scheduled-stop-883374: exit status 85 (53.585419ms)

                                                
                                                
-- stdout --
	* Profile "scheduled-stop-883374" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p scheduled-stop-883374"

                                                
                                                
-- /stdout --
scheduled_stop_test.go:191: status error: exit status 85 (may be ok)
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-883374 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-883374 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-883374 -n scheduled-stop-883374
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-883374
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-883374 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-883374
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-883374: exit status 7 (62.171943ms)

                                                
                                                
-- stdout --
	scheduled-stop-883374
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-883374 -n scheduled-stop-883374
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-883374 -n scheduled-stop-883374: exit status 7 (61.850024ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-883374" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-883374
--- PASS: TestScheduledStopUnix (109.52s)
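The scheduled-stop flow above can be driven directly with the same flags the test exercises; a minimal sketch, with the profile name illustrative:

	# schedule a stop five minutes out, inspect the countdown, then cancel it
	$ minikube stop -p scheduled-stop-demo --schedule 5m
	$ minikube status --format='{{.TimeToStop}}' -p scheduled-stop-demo
	$ minikube stop -p scheduled-stop-demo --cancel-scheduled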

                                                
                                    
x
+
TestRunningBinaryUpgrade (211.95s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.517373037 start -p running-upgrade-714252 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.517373037 start -p running-upgrade-714252 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (1m56.16920476s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-714252 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-714252 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m32.330302175s)
helpers_test.go:175: Cleaning up "running-upgrade-714252" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-714252
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-714252: (1.163873928s)
--- PASS: TestRunningBinaryUpgrade (211.95s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-710005 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-710005 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio: exit status 14 (77.79457ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-710005] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19643
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19643-8806/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19643-8806/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)
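As the MK_USAGE message states, --kubernetes-version cannot be combined with --no-kubernetes; any globally pinned version has to be cleared first. A minimal sketch of the working sequence, with the profile name illustrative:

	$ minikube config unset kubernetes-version
	$ minikube start -p no-k8s-demo --no-kubernetes --driver=kvm2 --container-runtime=crio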

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithK8s (90.08s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-710005 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-710005 --driver=kvm2  --container-runtime=crio: (1m29.807222421s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-710005 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (90.08s)

                                                
                                    
x
+
TestPause/serial/Start (126.11s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-962663 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio
E0914 17:54:04.947450   16016 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/functional-764671/client.crt: no such file or directory" logger="UnhandledError"
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-962663 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio: (2m6.112922713s)
--- PASS: TestPause/serial/Start (126.11s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithStopK8s (39.22s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-710005 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-710005 --no-kubernetes --driver=kvm2  --container-runtime=crio: (38.020613352s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-710005 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-710005 status -o json: exit status 2 (262.468968ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-710005","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-710005
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (39.22s)

                                                
                                    
x
+
TestNoKubernetes/serial/Start (52.3s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-710005 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-710005 --no-kubernetes --driver=kvm2  --container-runtime=crio: (52.297151481s)
--- PASS: TestNoKubernetes/serial/Start (52.30s)

                                                
                                    
x
+
TestPause/serial/SecondStartNoReconfiguration (48.6s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-962663 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-962663 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (48.574972604s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (48.60s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunning (0.2s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-710005 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-710005 "sudo systemctl is-active --quiet service kubelet": exit status 1 (197.18429ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.20s)
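Exit status 1 is the passing outcome here: the test asserts that the kubelet unit is not active inside the guest, and systemctl is-active reports inactive units with a non-zero code (conventionally 3). A minimal sketch of the same probe, with the profile name illustrative:

	$ minikube ssh -p no-k8s-demo "sudo systemctl is-active --quiet service kubelet"
	$ echo $?   # non-zero while kubelet is not running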

                                                
                                    
x
+
TestNoKubernetes/serial/ProfileList (34.34s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-linux-amd64 profile list: (15.608131098s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
E0914 17:56:45.625993   16016 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/addons-996992/client.crt: no such file or directory" logger="UnhandledError"
no_kubernetes_test.go:179: (dbg) Done: out/minikube-linux-amd64 profile list --output=json: (18.736605053s)
--- PASS: TestNoKubernetes/serial/ProfileList (34.34s)

                                                
                                    
x
+
TestNoKubernetes/serial/Stop (1.3s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-710005
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-710005: (1.294928406s)
--- PASS: TestNoKubernetes/serial/Stop (1.30s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoArgs (39.46s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-710005 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-710005 --driver=kvm2  --container-runtime=crio: (39.456297557s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (39.46s)

                                                
                                    
x
+
TestPause/serial/Pause (0.75s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-962663 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.75s)

                                                
                                    
x
+
TestPause/serial/VerifyStatus (0.24s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-962663 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-962663 --output=json --layout=cluster: exit status 2 (238.952621ms)

                                                
                                                
-- stdout --
	{"Name":"pause-962663","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 6 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-962663","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.24s)
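The clustered status output encodes component health as HTTP-like codes (200 OK, 405 Stopped, 418 Paused) and exits non-zero whenever the cluster is not fully running, which is why the exit status 2 above still passes. A minimal sketch of extracting the per-component states, assuming jq is available (jq is not part of the test run):

	$ minikube status -p pause-962663 --output=json --layout=cluster \
	    | jq '.Nodes[] | {node: .Name, components: (.Components | map_values(.StatusName))}'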

                                                
                                    
x
+
TestPause/serial/Unpause (0.66s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-962663 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.66s)

                                                
                                    
x
+
TestPause/serial/PauseAgain (0.77s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-962663 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.77s)

                                                
                                    
x
+
TestPause/serial/DeletePaused (0.83s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-962663 --alsologtostderr -v=5
--- PASS: TestPause/serial/DeletePaused (0.83s)

                                                
                                    
x
+
TestPause/serial/VerifyDeletedResources (0.29s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestPause/serial/VerifyDeletedResources (0.29s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.21s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-710005 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-710005 "sudo systemctl is-active --quiet service kubelet": exit status 1 (210.39495ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/false (3.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-691590 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-691590 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio: exit status 14 (107.611414ms)

                                                
                                                
-- stdout --
	* [false-691590] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19643
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19643-8806/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19643-8806/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0914 17:57:39.611468   56502 out.go:345] Setting OutFile to fd 1 ...
	I0914 17:57:39.611589   56502 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 17:57:39.611599   56502 out.go:358] Setting ErrFile to fd 2...
	I0914 17:57:39.611604   56502 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 17:57:39.611786   56502 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19643-8806/.minikube/bin
	I0914 17:57:39.612358   56502 out.go:352] Setting JSON to false
	I0914 17:57:39.613348   56502 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":6004,"bootTime":1726330656,"procs":212,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0914 17:57:39.613449   56502 start.go:139] virtualization: kvm guest
	I0914 17:57:39.615805   56502 out.go:177] * [false-691590] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0914 17:57:39.617065   56502 out.go:177]   - MINIKUBE_LOCATION=19643
	I0914 17:57:39.617113   56502 notify.go:220] Checking for updates...
	I0914 17:57:39.619983   56502 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0914 17:57:39.621557   56502 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19643-8806/kubeconfig
	I0914 17:57:39.622979   56502 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19643-8806/.minikube
	I0914 17:57:39.624198   56502 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0914 17:57:39.625391   56502 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0914 17:57:39.627187   56502 config.go:182] Loaded profile config "cert-expiration-724454": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0914 17:57:39.627279   56502 config.go:182] Loaded profile config "cert-options-476980": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0914 17:57:39.627381   56502 config.go:182] Loaded profile config "kubernetes-upgrade-470019": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0914 17:57:39.627475   56502 driver.go:394] Setting default libvirt URI to qemu:///system
	I0914 17:57:39.661695   56502 out.go:177] * Using the kvm2 driver based on user configuration
	I0914 17:57:39.662925   56502 start.go:297] selected driver: kvm2
	I0914 17:57:39.662943   56502 start.go:901] validating driver "kvm2" against <nil>
	I0914 17:57:39.662955   56502 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0914 17:57:39.665003   56502 out.go:201] 
	W0914 17:57:39.666336   56502 out.go:270] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I0914 17:57:39.667607   56502 out.go:201] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-691590 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-691590

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-691590

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-691590

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-691590

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-691590

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-691590

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-691590

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-691590

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-691590

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-691590

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-691590" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-691590"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-691590" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-691590"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-691590" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-691590"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-691590

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-691590" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-691590"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-691590" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-691590"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-691590" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-691590" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-691590" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-691590" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-691590" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-691590" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-691590" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-691590" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-691590" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-691590"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-691590" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-691590"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-691590" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-691590"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-691590" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-691590"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-691590" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-691590"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-691590" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-691590" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-691590" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-691590" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-691590"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-691590" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-691590"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-691590" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-691590"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-691590" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-691590"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-691590" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-691590"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/19643-8806/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sat, 14 Sep 2024 17:56:09 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: cluster_info
    server: https://192.168.50.177:8443
  name: cert-expiration-724454
contexts:
- context:
    cluster: cert-expiration-724454
    extensions:
    - extension:
        last-update: Sat, 14 Sep 2024 17:56:09 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: context_info
    namespace: default
    user: cert-expiration-724454
  name: cert-expiration-724454
current-context: ""
kind: Config
preferences: {}
users:
- name: cert-expiration-724454
  user:
    client-certificate: /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/cert-expiration-724454/client.crt
    client-key: /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/cert-expiration-724454/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-691590

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-691590" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-691590"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-691590" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-691590"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-691590" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-691590"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-691590" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-691590"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-691590" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-691590"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-691590" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-691590"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-691590" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-691590"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-691590" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-691590"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-691590" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-691590"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-691590" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-691590"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-691590" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-691590"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-691590" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-691590"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-691590" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-691590"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-691590" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-691590"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-691590" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-691590"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-691590" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-691590"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-691590" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-691590"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-691590" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-691590"

                                                
                                                
----------------------- debugLogs end: false-691590 [took: 2.758488049s] --------------------------------
helpers_test.go:175: Cleaning up "false-691590" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-691590
--- PASS: TestNetworkPlugins/group/false (3.01s)
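The only assertion in this group is the usage error itself: the crio runtime refuses to start with CNI disabled, so --cni=false exits with MK_USAGE before any VM is created. A minimal sketch of a start line that satisfies the constraint, with the profile name illustrative and the CNI left to minikube's auto-selection:

	# crio needs a CNI; drop --cni=false (or name one explicitly, e.g. --cni=bridge)
	$ minikube start -p crio-demo --memory=2048 --driver=kvm2 --container-runtime=crio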

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Setup (2.26s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (2.26s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Upgrade (131.26s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.1759664837 start -p stopped-upgrade-319416 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.1759664837 start -p stopped-upgrade-319416 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (1m23.928728551s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.1759664837 -p stopped-upgrade-319416 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.1759664837 -p stopped-upgrade-319416 stop: (2.160431363s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-319416 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-319416 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (45.172115184s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (131.26s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/FirstStart (71.25s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-168587 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-168587 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1: (1m11.251573426s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (71.25s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/MinikubeLogs (0.9s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-319416
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.90s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/FirstStart (81.79s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-044534 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-044534 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1: (1m21.787593211s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (81.79s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/DeployApp (11.3s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-168587 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [3e77d502-9026-493c-9742-a7c7577960f2] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [3e77d502-9026-493c-9742-a7c7577960f2] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 11.0043119s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-168587 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (11.30s)
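For reference, the DeployApp flow exercised above is a three-step pattern: apply the busybox manifest, wait for the pod labelled integration-test=busybox to become Ready, then run a sanity command inside it. A minimal standalone sketch of that flow in Go, assuming kubectl is on PATH, reusing the context and manifest names from the log, and substituting `kubectl wait` for the test helper's own poll loop:

package main

import (
	"fmt"
	"os/exec"
)

// kubectlRun shells out to kubectl, mirroring the (dbg) Run lines recorded above.
func kubectlRun(args ...string) (string, error) {
	out, err := exec.Command("kubectl", args...).CombinedOutput()
	return string(out), err
}

func main() {
	ctx := "no-preload-168587" // profile/context name taken from the log

	// 1. Deploy the busybox test pod from the same manifest the test uses.
	if _, err := kubectlRun("--context", ctx, "create", "-f", "testdata/busybox.yaml"); err != nil {
		panic(err)
	}

	// 2. Wait for the pod labelled integration-test=busybox to become Ready
	//    (the test polls with its own helper; kubectl wait is an assumed shortcut).
	if _, err := kubectlRun("--context", ctx, "wait", "--for=condition=Ready",
		"pod", "-l", "integration-test=busybox", "--timeout=8m"); err != nil {
		panic(err)
	}

	// 3. Run the same sanity check the test runs inside the pod.
	out, err := kubectlRun("--context", ctx, "exec", "busybox", "--", "/bin/sh", "-c", "ulimit -n")
	if err != nil {
		panic(err)
	}
	fmt.Print(out)
}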

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.95s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-168587 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-168587 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.95s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/DeployApp (9.28s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-044534 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [5f084fa8-df8a-4ac9-b0cf-75e3e3de2d03] Pending
helpers_test.go:344: "busybox" [5f084fa8-df8a-4ac9-b0cf-75e3e3de2d03] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [5f084fa8-df8a-4ac9-b0cf-75e3e3de2d03] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.003855852s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-044534 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.28s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.98s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-044534 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-044534 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.98s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (56.62s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-243449 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-243449 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1: (56.620439267s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (56.62s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/SecondStart (680.12s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-168587 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-168587 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1: (11m19.879803321s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-168587 -n no-preload-168587
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (680.12s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.27s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-243449 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [fd23d806-4c0f-41fe-b5d0-4f5ee2f395f2] Pending
helpers_test.go:344: "busybox" [fd23d806-4c0f-41fe-b5d0-4f5ee2f395f2] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [fd23d806-4c0f-41fe-b5d0-4f5ee2f395f2] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.003591052s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-243449 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.27s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/SecondStart (572.27s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-044534 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1
E0914 18:04:04.946757   16016 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/functional-764671/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-044534 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1: (9m31.996142941s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-044534 -n embed-certs-044534
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (572.27s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.93s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-243449 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-243449 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.93s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Stop (4.29s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-556121 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-556121 --alsologtostderr -v=3: (4.287448767s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (4.29s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.18s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-556121 -n old-k8s-version-556121
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-556121 -n old-k8s-version-556121: exit status 7 (62.340013ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-556121 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.18s)
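A stopped profile makes `minikube status` exit non-zero, so the step above records the exit status (7 here) as "may be ok" and proceeds to enable the dashboard addon anyway. A small Go sketch of tolerating that exit code before enabling the addon; treating 7 as the only acceptable non-zero status is an assumption drawn from this run's output rather than from documentation:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	profile := "old-k8s-version-556121" // profile name from the log

	// Query only the host state, exactly as the test does.
	status := exec.Command("out/minikube-linux-amd64", "status",
		"--format={{.Host}}", "-p", profile, "-n", profile)
	out, err := status.CombinedOutput()
	fmt.Printf("host status: %s", out)
	if err != nil {
		// On a stopped profile this run exited with status 7; anything else
		// (or a failure to launch the binary at all) is treated as fatal here.
		if exitErr, ok := err.(*exec.ExitError); !ok || exitErr.ExitCode() != 7 {
			panic(err)
		}
	}

	// With the stopped state confirmed, addons can still be toggled.
	enable := exec.Command("out/minikube-linux-amd64", "addons", "enable", "dashboard",
		"-p", profile, "--images=MetricsScraper=registry.k8s.io/echoserver:1.4")
	if err := enable.Run(); err != nil {
		panic(err)
	}
}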

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (423.8s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-243449 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1
E0914 18:06:45.625251   16016 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/addons-996992/client.crt: no such file or directory" logger="UnhandledError"
E0914 18:08:08.696759   16016 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/addons-996992/client.crt: no such file or directory" logger="UnhandledError"
E0914 18:09:04.947185   16016 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/functional-764671/client.crt: no such file or directory" logger="UnhandledError"
E0914 18:11:45.625354   16016 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/addons-996992/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-243449 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1: (7m3.555877556s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-243449 -n default-k8s-diff-port-243449
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (423.80s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/FirstStart (47.47s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-019918 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1
E0914 18:29:04.947231   16016 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/functional-764671/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-019918 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1: (47.467396611s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (47.47s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Start (83.56s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-691590 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-691590 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio: (1m23.558821748s)
--- PASS: TestNetworkPlugins/group/auto/Start (83.56s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.2s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-019918 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-019918 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.197729341s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.20s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Stop (10.48s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-019918 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-019918 --alsologtostderr -v=3: (10.482037814s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (10.48s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.22s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-019918 -n newest-cni-019918
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-019918 -n newest-cni-019918: exit status 7 (81.953931ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-019918 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.22s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/SecondStart (45.09s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-019918 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-019918 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1: (44.732221911s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-019918 -n newest-cni-019918
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (45.09s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Start (76.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-691590 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-691590 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio: (1m16.261658885s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (76.26s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.31s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-019918 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.31s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Pause (2.62s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-019918 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-019918 -n newest-cni-019918
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-019918 -n newest-cni-019918: exit status 2 (242.554624ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-019918 -n newest-cni-019918
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-019918 -n newest-cni-019918: exit status 2 (238.181163ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-019918 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-019918 -n newest-cni-019918
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-019918 -n newest-cni-019918
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.62s)
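The Pause step follows a fixed sequence: pause the profile, confirm via `status` that the API server reports Paused and the kubelet reports Stopped (both queries exit with status 2, which the test tolerates), then unpause and query both again. A condensed Go sketch of that sequence, reusing the binary path, profile name, and status format strings shown in the log:

package main

import (
	"fmt"
	"os/exec"
)

const profile = "newest-cni-019918" // profile name from the log

// minikube runs the local binary and returns its output plus the exit code
// (0 on clean exit); status exiting 2 while paused is expected here.
func minikube(args ...string) (string, int) {
	out, err := exec.Command("out/minikube-linux-amd64", args...).CombinedOutput()
	if err == nil {
		return string(out), 0
	}
	if exitErr, ok := err.(*exec.ExitError); ok {
		return string(out), exitErr.ExitCode()
	}
	panic(err) // the binary could not be run at all
}

func main() {
	minikube("pause", "-p", profile, "--alsologtostderr", "-v=1")

	api, _ := minikube("status", "--format={{.APIServer}}", "-p", profile, "-n", profile)
	kubelet, _ := minikube("status", "--format={{.Kubelet}}", "-p", profile, "-n", profile)
	fmt.Printf("while paused: apiserver=%q kubelet=%q\n", api, kubelet)

	minikube("unpause", "-p", profile, "--alsologtostderr", "-v=1")
}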

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Start (85.68s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-691590 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio
E0914 18:30:43.342317   16016 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/no-preload-168587/client.crt: no such file or directory" logger="UnhandledError"
E0914 18:30:43.348646   16016 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/no-preload-168587/client.crt: no such file or directory" logger="UnhandledError"
E0914 18:30:43.359996   16016 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/no-preload-168587/client.crt: no such file or directory" logger="UnhandledError"
E0914 18:30:43.381352   16016 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/no-preload-168587/client.crt: no such file or directory" logger="UnhandledError"
E0914 18:30:43.423563   16016 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/no-preload-168587/client.crt: no such file or directory" logger="UnhandledError"
E0914 18:30:43.505556   16016 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/no-preload-168587/client.crt: no such file or directory" logger="UnhandledError"
E0914 18:30:43.667826   16016 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/no-preload-168587/client.crt: no such file or directory" logger="UnhandledError"
E0914 18:30:43.989912   16016 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/no-preload-168587/client.crt: no such file or directory" logger="UnhandledError"
E0914 18:30:44.631386   16016 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/no-preload-168587/client.crt: no such file or directory" logger="UnhandledError"
E0914 18:30:45.912779   16016 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/no-preload-168587/client.crt: no such file or directory" logger="UnhandledError"
E0914 18:30:48.474221   16016 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/no-preload-168587/client.crt: no such file or directory" logger="UnhandledError"
E0914 18:30:53.595814   16016 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/no-preload-168587/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-691590 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio: (1m25.683023176s)
--- PASS: TestNetworkPlugins/group/calico/Start (85.68s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/KubeletFlags (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-691590 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/NetCatPod (10.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-691590 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-mng65" [34bc052a-518e-4041-b0d6-69b077202438] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0914 18:31:03.837638   16016 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/no-preload-168587/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-6fc964789b-mng65" [34bc052a-518e-4041-b0d6-69b077202438] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 10.004227996s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (10.25s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/DNS (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-691590 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-691590 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-691590 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.14s)
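Every network plugin profile is validated with the same three probes run inside the netcat deployment: an in-cluster DNS lookup, a localhost port check, and a hairpin check back through the pod's own service name. A compact Go sketch that runs the trio; the probe commands are the kubectl exec invocations recorded above, and only the loop around them is new:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	ctx := "auto-691590" // kube context of the plugin under test, from the log

	probes := map[string][]string{
		// DNS: resolve the kubernetes service from inside the pod.
		"dns": {"nslookup", "kubernetes.default"},
		// Localhost: the pod can reach its own port via 127.0.0.1.
		"localhost": {"/bin/sh", "-c", "nc -w 5 -i 5 -z localhost 8080"},
		// HairPin: the pod can reach itself through its service name.
		"hairpin": {"/bin/sh", "-c", "nc -w 5 -i 5 -z netcat 8080"},
	}

	for name, probe := range probes {
		args := append([]string{"--context", ctx, "exec", "deployment/netcat", "--"}, probe...)
		if out, err := exec.Command("kubectl", args...).CombinedOutput(); err != nil {
			fmt.Printf("%s probe failed: %v\n%s", name, err, out)
		} else {
			fmt.Printf("%s probe ok\n", name)
		}
	}
}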

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-bt8x2" [d1ea85d5-5694-4551-81cb-a1af73dbb6b5] Running
E0914 18:31:24.319439   16016 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/no-preload-168587/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.003714954s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)
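The ControllerPod check simply waits for the plugin's own pod (here label app=kindnet in kube-system) to be Running and healthy before the per-pod probes start. Assuming `kubectl wait` is an acceptable stand-in for the test's internal poll loop, the equivalent one-shot check is:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Context, namespace, and label selector as recorded in the log; the
	// 10m timeout matches the wait budget the test announces.
	cmd := exec.Command("kubectl", "--context", "kindnet-691590",
		"-n", "kube-system", "wait", "--for=condition=Ready",
		"pod", "-l", "app=kindnet", "--timeout=10m")
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out))
	if err != nil {
		panic(err)
	}
}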

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Start (60.07s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-691590 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-691590 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio: (1m0.070722933s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (60.07s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/KubeletFlags (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-691590 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/NetCatPod (11.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-691590 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-psxvg" [dfdeb22a-d0ee-46ba-a147-921ab20a77d0] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-psxvg" [dfdeb22a-d0ee-46ba-a147-921ab20a77d0] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 11.004180316s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (11.28s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/DNS (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-691590 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Localhost (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-691590 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/HairPin (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-691590 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.16s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.27s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-243449 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.27s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Pause (3.3s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-243449 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-243449 -n default-k8s-diff-port-243449
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-243449 -n default-k8s-diff-port-243449: exit status 2 (292.984885ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-243449 -n default-k8s-diff-port-243449
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-243449 -n default-k8s-diff-port-243449: exit status 2 (283.909564ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-243449 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-243449 -n default-k8s-diff-port-243449
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-243449 -n default-k8s-diff-port-243449
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (3.30s)
E0914 18:33:16.597129   16016 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/old-k8s-version-556121/client.crt: no such file or directory" logger="UnhandledError"

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Start (89.79s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-691590 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-691590 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio: (1m29.793054228s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (89.79s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Start (96.03s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-691590 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio
E0914 18:31:59.790554   16016 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/old-k8s-version-556121/client.crt: no such file or directory" logger="UnhandledError"
E0914 18:32:04.912161   16016 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/old-k8s-version-556121/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-691590 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio: (1m36.027942024s)
--- PASS: TestNetworkPlugins/group/flannel/Start (96.03s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-272kr" [d26bd6c8-61f8-46f9-978b-f479a9445f53] Running
E0914 18:32:05.281421   16016 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/no-preload-168587/client.crt: no such file or directory" logger="UnhandledError"
E0914 18:32:08.018778   16016 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/functional-764671/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.005587069s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/KubeletFlags (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-691590 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/NetCatPod (10.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-691590 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-2hnhw" [ae92a49f-1707-4c3c-b814-a69cb51417f7] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0914 18:32:15.153920   16016 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/old-k8s-version-556121/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-6fc964789b-2hnhw" [ae92a49f-1707-4c3c-b814-a69cb51417f7] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 10.00419984s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (10.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/DNS (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-691590 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Localhost (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-691590 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-691590 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-691590 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/NetCatPod (14.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-691590 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-f85ph" [5fc54004-eb0c-4568-babd-d62573274327] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-f85ph" [5fc54004-eb0c-4568-babd-d62573274327] Running
E0914 18:32:35.635855   16016 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/old-k8s-version-556121/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 12.004271864s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (14.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Start (65.95s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-691590 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-691590 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio: (1m5.951828711s)
--- PASS: TestNetworkPlugins/group/bridge/Start (65.95s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/DNS (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-691590 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-691590 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-691590 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-691590 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-691590 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-x9dwb" [5f530fbd-1423-41b9-9b0b-2a9f16e05fd1] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0914 18:33:27.202828   16016 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/no-preload-168587/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-6fc964789b-x9dwb" [5f530fbd-1423-41b9-9b0b-2a9f16e05fd1] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 10.00392635s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-nkrlv" [7cd96291-da1d-4e7c-be0d-fb2df34919e7] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.003981154s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/DNS (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-691590 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-691590 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-691590 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/KubeletFlags (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-691590 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/NetCatPod (11.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-691590 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-22gtr" [681e7708-8ce5-4c78-98b0-345fc74895d3] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-22gtr" [681e7708-8ce5-4c78-98b0-345fc74895d3] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 11.003392309s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (11.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/KubeletFlags (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-691590 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/NetCatPod (11.54s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-691590 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-pkwb7" [f9783d89-152e-4f68-bdc7-5163a756c221] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-pkwb7" [f9783d89-152e-4f68-bdc7-5163a756c221] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 11.003626033s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (11.54s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/DNS (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-691590 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Localhost (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-691590 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/HairPin (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-691590 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/DNS (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-691590 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-691590 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-691590 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.14s)
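
Note: the Localhost and HairPin checks above both reduce to `nc -w 5 -i 5 -z <target> 8080` executed inside the netcat pod. The first dials loopback; the second dials the netcat Service name, so the connection hairpins back through the Service VIP to the same pod. A minimal Go sketch of that probe, assuming it runs inside the pod, follows; it is a plain TCP dial, not the test's actual code.

package main

import (
    "fmt"
    "net"
    "time"
)

// probe performs the same zero-I/O check as `nc -z`: dial and close.
func probe(addr string) error {
    conn, err := net.DialTimeout("tcp", addr, 5*time.Second)
    if err != nil {
        return err
    }
    return conn.Close()
}

func main() {
    // "localhost:8080" exercises loopback; "netcat:8080" exercises hairpin NAT.
    for _, addr := range []string{"localhost:8080", "netcat:8080"} {
        if err := probe(addr); err != nil {
            fmt.Println(addr, "unreachable:", err)
            continue
        }
        fmt.Println(addr, "reachable")
    }
}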

                                                
                                    

Test skip (37/320)

Order skipped test Duration
5 TestDownloadOnly/v1.20.0/cached-images 0
6 TestDownloadOnly/v1.20.0/binaries 0
7 TestDownloadOnly/v1.20.0/kubectl 0
14 TestDownloadOnly/v1.31.1/cached-images 0
15 TestDownloadOnly/v1.31.1/binaries 0
16 TestDownloadOnly/v1.31.1/kubectl 0
20 TestDownloadOnlyKic 0
29 TestAddons/serial/Volcano 0
38 TestAddons/parallel/Olm 0
48 TestDockerFlags 0
51 TestDockerEnvContainerd 0
53 TestHyperKitDriverInstallOrUpdate 0
54 TestHyperkitDriverSkipUpgrade 0
105 TestFunctional/parallel/DockerEnv 0
106 TestFunctional/parallel/PodmanEnv 0
140 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
141 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
142 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
143 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
144 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
145 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
146 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
147 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
154 TestGvisorAddon 0
176 TestImageBuild 0
203 TestKicCustomNetwork 0
204 TestKicExistingNetwork 0
205 TestKicCustomSubnet 0
206 TestKicStaticIP 0
238 TestChangeNoneUser 0
241 TestScheduledStopWindows 0
243 TestSkaffold 0
245 TestInsufficientStorage 0
249 TestMissingContainerUpgrade 0
258 TestStartStop/group/disable-driver-mounts 0.14
278 TestNetworkPlugins/group/kubenet 3.05
286 TestNetworkPlugins/group/cilium 3.39
x
+
TestDownloadOnly/v1.20.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.1/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.1/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.1/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.1/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.1/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.31.1/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnlyKic (0s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
x
+
TestAddons/serial/Volcano (0s)

                                                
                                                
=== RUN   TestAddons/serial/Volcano
addons_test.go:879: skipping: crio not supported
--- SKIP: TestAddons/serial/Volcano (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:500: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
x
+
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:463: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)
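
Note: TestDockerFlags, TestDockerEnvContainerd, DockerEnv and PodmanEnv above all skip for the same reason: this job runs the crio runtime and those tests only apply to docker. A minimal sketch of that gating pattern is below; the helper name and the hard-coded runtime are illustrative, not minikube's actual helpers.

package example

import "testing"

// skipUnlessDocker skips the calling test unless the runtime under test is docker.
func skipUnlessDocker(t *testing.T, runtime string) {
    t.Helper()
    if runtime != "docker" {
        t.Skipf("only runs with docker container runtime, currently testing %s", runtime)
    }
}

func TestDockerOnlyFeature(t *testing.T) {
    skipUnlessDocker(t, "crio") // reported as SKIP, mirroring the entries above
    // docker-specific assertions would follow here
}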

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)
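
Note: all eight TunnelCmd skips above share one precondition: minikube tunnel edits the host routing table, so the test needs to run route without a password prompt. The sketch below shows one way such a precondition can be checked; the exact check in functional_test_tunnel_test.go may differ.

package main

import (
    "fmt"
    "os/exec"
)

// canRunRouteWithoutPassword reports whether sudo can invoke `route`
// non-interactively; -n makes sudo fail instead of prompting for a password.
func canRunRouteWithoutPassword() bool {
    return exec.Command("sudo", "-n", "route").Run() == nil
}

func main() {
    if !canRunRouteWithoutPassword() {
        fmt.Println("password required to execute 'route'; tunnel tests would be skipped")
        return
    }
    fmt.Println("passwordless 'route' available; tunnel tests could run")
}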

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
x
+
TestKicCustomNetwork (0s)

                                                
                                                
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
x
+
TestKicExistingNetwork (0s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
x
+
TestKicCustomSubnet (0s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

                                                
                                    
x
+
TestKicStaticIP (0s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
x
+
TestInsufficientStorage (0s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

                                                
                                    
x
+
TestMissingContainerUpgrade (0s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

                                                
                                    
x
+
TestStartStop/group/disable-driver-mounts (0.14s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-444413" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-444413
--- SKIP: TestStartStop/group/disable-driver-mounts (0.14s)
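
Note: although the test skips immediately (the option only applies to virtualbox), the group still deletes its pre-created profile with out/minikube-linux-amd64 delete -p, which is what the 0.14s accounts for. A sketch of that register-cleanup-first pattern is below; the helper is illustrative, not the harness' code.

package example

import (
    "os/exec"
    "testing"
)

// withProfile registers profile deletion up front, so the profile is removed
// even when the test body skips or fails early.
func withProfile(t *testing.T, name string) {
    t.Helper()
    t.Cleanup(func() {
        _ = exec.Command("out/minikube-linux-amd64", "delete", "-p", name).Run()
    })
}

func TestDisableDriverMounts(t *testing.T) {
    withProfile(t, "disable-driver-mounts-444413")
    t.Skip("only runs on virtualbox")
}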

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet (3.05s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as the crio container runtime requires CNI
panic.go:629: 
----------------------- debugLogs start: kubenet-691590 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-691590

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-691590

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-691590

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-691590

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-691590

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-691590

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-691590

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-691590

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-691590

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-691590

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-691590" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-691590"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-691590" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-691590"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-691590" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-691590"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-691590

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-691590" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-691590"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-691590" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-691590"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-691590" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-691590" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-691590" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-691590" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-691590" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-691590" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-691590" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-691590" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-691590" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-691590"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-691590" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-691590"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-691590" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-691590"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-691590" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-691590"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-691590" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-691590"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-691590" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-691590" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-691590" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-691590" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-691590"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-691590" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-691590"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-691590" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-691590"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-691590" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-691590"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-691590" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-691590"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/19643-8806/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sat, 14 Sep 2024 17:56:09 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: cluster_info
    server: https://192.168.50.177:8443
  name: cert-expiration-724454
contexts:
- context:
    cluster: cert-expiration-724454
    extensions:
    - extension:
        last-update: Sat, 14 Sep 2024 17:56:09 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: context_info
    namespace: default
    user: cert-expiration-724454
  name: cert-expiration-724454
current-context: ""
kind: Config
preferences: {}
users:
- name: cert-expiration-724454
  user:
    client-certificate: /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/cert-expiration-724454/client.crt
    client-key: /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/cert-expiration-724454/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-691590

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-691590" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-691590"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-691590" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-691590"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-691590" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-691590"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-691590" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-691590"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-691590" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-691590"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-691590" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-691590"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-691590" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-691590"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-691590" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-691590"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-691590" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-691590"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-691590" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-691590"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-691590" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-691590"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-691590" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-691590"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-691590" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-691590"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-691590" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-691590"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-691590" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-691590"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-691590" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-691590"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-691590" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-691590"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-691590" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-691590"

                                                
                                                
----------------------- debugLogs end: kubenet-691590 [took: 2.897491003s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-691590" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-691590
--- SKIP: TestNetworkPlugins/group/kubenet (3.05s)
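
Note: every probe in the kubenet-691590 debugLogs above fails the same way because the profile was never started; the kubeconfig captured in the dump only defines cert-expiration-724454, so no kubenet-691590 context exists for kubectl to use. A small client-go sketch of that lookup follows; the kubeconfig path is illustrative.

package main

import (
    "fmt"

    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    // Load a kubeconfig like the one shown in the dump above; path is illustrative.
    cfg, err := clientcmd.LoadFromFile("/home/jenkins/.kube/config")
    if err != nil {
        panic(err)
    }
    if _, ok := cfg.Contexts["kubenet-691590"]; !ok {
        // Matches the kubectl error repeated throughout the debugLogs.
        fmt.Println("context was not found for specified context: kubenet-691590")
    }
}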

                                                
                                    
x
+
TestNetworkPlugins/group/cilium (3.39s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:629: 
----------------------- debugLogs start: cilium-691590 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-691590

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-691590

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-691590

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-691590

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-691590

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-691590

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-691590

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-691590

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-691590

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-691590

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-691590" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-691590"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-691590" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-691590"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-691590" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-691590"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-691590

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-691590" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-691590"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-691590" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-691590"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-691590" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-691590" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-691590" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-691590" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-691590" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-691590" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-691590" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-691590" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-691590" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-691590"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-691590" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-691590"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-691590" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-691590"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-691590" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-691590"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-691590" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-691590"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-691590

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-691590

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-691590" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-691590" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-691590

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-691590

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-691590" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-691590" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-691590" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-691590" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-691590" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-691590" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-691590"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-691590" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-691590"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-691590" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-691590"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-691590" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-691590"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-691590" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-691590"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/19643-8806/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sat, 14 Sep 2024 17:56:09 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: cluster_info
    server: https://192.168.50.177:8443
  name: cert-expiration-724454
contexts:
- context:
    cluster: cert-expiration-724454
    extensions:
    - extension:
        last-update: Sat, 14 Sep 2024 17:56:09 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: context_info
    namespace: default
    user: cert-expiration-724454
  name: cert-expiration-724454
current-context: ""
kind: Config
preferences: {}
users:
- name: cert-expiration-724454
  user:
    client-certificate: /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/cert-expiration-724454/client.crt
    client-key: /home/jenkins/minikube-integration/19643-8806/.minikube/profiles/cert-expiration-724454/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-691590

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-691590" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-691590"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-691590" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-691590"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-691590" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-691590"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-691590" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-691590"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-691590" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-691590"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-691590" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-691590"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-691590" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-691590"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-691590" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-691590"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-691590" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-691590"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-691590" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-691590"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-691590" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-691590"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-691590" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-691590"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-691590" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-691590"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-691590" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-691590"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-691590" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-691590"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-691590" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-691590"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-691590" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-691590"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-691590" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-691590"

                                                
                                                
----------------------- debugLogs end: cilium-691590 [took: 3.225246563s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-691590" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-691590
--- SKIP: TestNetworkPlugins/group/cilium (3.39s)
